How to fetch a single 'page' of search results? #22
Comments
You can access the underlying list of model objects via the […]. I will look to writing a proper […].
Thanks for the pointers. I'm digging through […]. It's confusing (to me, anyway) to specify […]. I can't suggest improvements just yet, but at a minimum the current behavior should be called out in the docs: an FAQ or Wiki page describing the pagination model and how it works 'under the hood' (er... bonnet) could help here. (Yes, reading the code answers all questions, but that's not how most developers want to spend their time.) I'll be only too happy to contribute to said docs as time (and my understanding) permits.

Edit: I'd suggest that automatic next-page fetching during iteration could be an explicit option (explicit being better than implicit), but we don't want to go breaking existing code, do we?
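Until such an explicit option exists, one possible workaround is to cap the iteration in user code. This is only a minimal sketch, not library-provided behavior: it assumes the lazy resultset issues another API request only when iteration runs past the items it has already fetched, and that pagesize is accepted as a keyword argument (as the thread suggests); the title attribute on result objects is likewise an assumption.

```python
# Workaround sketch: cap iteration at one page's worth of results so the
# lazy resultset is never asked for a second page. Assumes the resultset
# only issues another API request when iteration runs past the items it
# has already fetched.
from itertools import islice

from stackexchange import Site, StackOverflow

so = Site(StackOverflow)
results = so.search(intitle='python', pagesize=30)

# islice() stops after 30 items, so the loop never spills into page 2.
for q in islice(results, 30):
    print(q.title)
```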
I'm sure this is a dumb question, but I'm trying to figure out how to enumerate only the first 'page' of results from a search -- I want to perform one API query (hitting the server only once) and then enumerate the results.

For example, the search.py demo enumerates the full results of a query, even if I pass in page=1 and pagesize=30. I would have expected a maximum of 30 results. What am I missing?

I'm aware of the 'lazy lists' option, but haven't tried that. Is that the right solution?

Edit: Here's my modified search.py, tweaked to search for tags rather than title text. Invoking it with the following command line:

$ python search.py python

results in over 100,000 results being enumerated, and thus hitting the API endpoint repeatedly. Time to fire up the debugger...
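For reference, here is a hypothetical stand-in for the modified, tag-searching search.py described above (not the author's actual script). The tagged parameter is an illustrative guess based on the underlying /search endpoint, and result objects are assumed to expose a title attribute; page and pagesize are passed exactly as the author describes.

```python
# Hypothetical sketch of a tag-based search.py (not the author's actual
# script). 'tagged' is assumed to be forwarded to the /search endpoint;
# result objects are assumed to expose a .title attribute.
import sys

from stackexchange import Site, StackOverflow

so = Site(StackOverflow)
tag = sys.argv[1] if len(sys.argv) > 1 else 'python'

# Per the thread, page/pagesize shape each request but do not stop
# iteration: looping over the whole resultset keeps fetching new pages
# until every match (100,000+ for a popular tag) has been enumerated.
results = so.search(tagged=tag, page=1, pagesize=30)

for count, q in enumerate(results, 1):
    print('%6d %s' % (count, q.title))
```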