Do related queries work? I keep getting 429 #625
Comments
the same |
Same for me. It appears that only a few methods keep working. The following are working:
|
build_payload() doesn't seem to be working for me |
How are you doing it? |
It's the same for me. I've been attempting to make API calls using PyTrends, but I keep receiving HTTP error 429 ("Too Many Requests"). This error typically occurs when the rate limit for API requests is exceeded. However, the strange part is that I encounter it even when I make only a single API call. |
the same here =/ |
Can I ask what payload you are using please? |
Sure! Here it is:
- To get realtime trends
- To get top searches in the US
- To get suggestions based on a word
- To get related categories
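Below is a minimal sketch of those four calls using the stock pytrends methods; the exact parameters are illustrative, not necessarily the original payload:

```python
from pytrends.request import TrendReq

pytrends = TrendReq(hl='en-US', tz=360)

# Realtime trending searches
realtime_df = pytrends.realtime_trending_searches(pn='US')

# Top searches in the US for a given year
top_df = pytrends.top_charts(2023, hl='en-US', tz=300, geo='US')

# Suggestions based on a seed word
suggestions = pytrends.suggestions(keyword='coffee')

# Full category tree (for related categories)
categories = pytrends.categories()
```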
|
@lapiceroazul4 - Thank you very much. |
Are the other methods being fixed currently, or does anyone know of good ways to work around them? I'm trying to use interest_over_time() and it seems to have completely stopped working, probably when Google Trends updated recently. I've been consistently getting 429 errors on my first request, and haven't had luck with changing backoff_factor/retries, using proxies, or even running it on a different computer and network. |
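Those knobs are all constructor arguments on TrendReq. A sketch of the configuration being described (the proxy address is the placeholder from the pytrends README, not a working endpoint):

```python
from pytrends.request import TrendReq

# retries/backoff_factor are passed through to urllib3's Retry;
# proxies is a list of 'https://host:port' strings that pytrends rotates through.
pytrends = TrendReq(
    hl='en-US',
    tz=360,
    timeout=(10, 25),
    retries=3,
    backoff_factor=0.5,
    proxies=['https://34.203.233.13:80'],  # placeholder from the README
    requests_args={'headers': {'User-Agent': 'Mozilla/5.0'}},
)

pytrends.build_payload(['coffee'], timeframe='today 5-y', geo='US')
df = pytrends.interest_over_time()  # per this thread, often still returns 429
```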
Starting to see data coming in now, but slowly. No immediate 429. |
I am also having the "TooManyRequestsError: The request failed: Google returned a response with code 429" error |
How the hell are these guys able to do it? 🤔 https://serpapi.com/playground?engine=google_trends&q=Coffee |
interest_over_time() only returns 429 for me also |
I don't know how they do it, but it looks awesome. Any idea how we can get that data? |
hmm.. |
Got any idea? |
I just know that if I try to replicate the HTTP calls my browser makes in Postman, I keep getting an error saying this isn't a browser. So my guess would be that they are just web scraping it. When you use their API (with only 100 free calls/month), my guess would be that they have headless browsers running on a server. |
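For anyone who wants to experiment with that approach, here is a rough sketch (not a pytrends feature) using Playwright to load the Trends explore page headlessly and capture the widget-data responses the page itself makes; the URL filter and the hard-coded wait are assumptions, and Google may block or change this at any time:

```python
from playwright.sync_api import sync_playwright

# Requires: pip install playwright && playwright install chromium
captured = []

def on_response(response):
    # Interest-over-time / related-queries data come from /trends/api/widgetdata/... endpoints.
    # Their JSON bodies are prefixed with ")]}'," which must be stripped before parsing.
    if "widgetdata" in response.url:
        try:
            captured.append(response.text())
        except Exception:
            pass  # some responses have no readable body

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    page.on("response", on_response)
    page.goto("https://trends.google.com/trends/explore?q=coffee&geo=US")
    page.wait_for_timeout(10_000)  # crude wait for the widgets to finish loading
    browser.close()

print(f"captured {len(captured)} widgetdata responses")
```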
Can this library include a feature where it uses a headless browser to do the scraping, just like that API? Possibly someone can add it.
|
My personal workaround for similar data is to use the Wikipedia API: |
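As an illustration of that workaround, the Wikimedia Pageviews REST API returns daily per-article view counts without authentication; a minimal sketch (the article and date range are just examples):

```python
import requests

# Pageviews REST API: /per-article/{project}/{access}/{agent}/{article}/{granularity}/{start}/{end}
# Wikimedia asks for a descriptive User-Agent; the one below is a placeholder.
url = (
    "https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article/"
    "en.wikipedia/all-access/all-agents/Coffee/daily/20240101/20240131"
)
resp = requests.get(url, headers={"User-Agent": "pageviews-example/0.1"}, timeout=30)
resp.raise_for_status()

for item in resp.json()["items"]:
    print(item["timestamp"], item["views"])
```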
Here is an MRE, using what is in the README.
This throws 429 errors. I tried wrapping it with tenacity and I keep getting 429s no matter how long I wait.
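A reconstruction of what that MRE plus the tenacity wrapper might look like, assuming the README quick-start and pytrends' TooManyRequestsError; the keyword and wait times are illustrative:

```python
from tenacity import retry, retry_if_exception_type, stop_after_attempt, wait_exponential

from pytrends.exceptions import TooManyRequestsError
from pytrends.request import TrendReq


@retry(
    retry=retry_if_exception_type(TooManyRequestsError),
    wait=wait_exponential(multiplier=60, max=900),  # roughly 1, 2, 4, 8 minutes, capped at 15
    stop=stop_after_attempt(5),
)
def fetch_interest_over_time(kw_list):
    pytrends = TrendReq(hl='en-US', tz=360)
    pytrends.build_payload(kw_list, cat=0, timeframe='today 5-y', geo='', gprop='')
    return pytrends.interest_over_time()


df = fetch_interest_over_time(['Blockchain'])  # example keyword from the README
print(df.head())
```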