Inference is quite slow for decisions and discussions that must happen sequentially. However, for decisions that can happen in parallel, such as voting, it would be faster to prompt all of the models simultaneously, since the votes are only revealed after all voting is complete. This applies mainly when the models are accessed via API calls rather than run locally.
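A minimal sketch of what parallel voting could look like, using a thread pool to issue the API calls concurrently. `query_model` is a hypothetical stand-in for the real API request; here it just returns a canned vote so the sketch runs on its own.

```python
from concurrent.futures import ThreadPoolExecutor

def query_model(model_name: str, prompt: str) -> str:
    # Hypothetical placeholder for a real API call; a real implementation
    # would send `prompt` to the hosted model and return its response.
    return f"{model_name}: vote"

def collect_votes(models: list[str], prompt: str) -> list[str]:
    # Issue all API calls concurrently. Since votes are only revealed
    # after every model has responded, the calls are independent and
    # the batch completes in roughly the time of the slowest call.
    with ThreadPoolExecutor(max_workers=len(models)) as pool:
        futures = [pool.submit(query_model, m, prompt) for m in models]
        return [f.result() for f in futures]

votes = collect_votes(["model_a", "model_b", "model_c"], "Cast your vote.")
print(votes)
```

Sequential discussion rounds would still have to await each model in turn; only the voting phase benefits from this batching.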