Hi Prithiviraj, thank you for the great work!

Is it possible to run this model on batches of input sentences so that we can make much better use of the GPU? At the moment, setting use_gpu to True yields little performance gain because inference is not parallelized across input phrases. Unless I missed something in the source code, in which case please let me know (and this would be worth emphasizing in the documentation, since anyone paraphrasing phrases at the 1M+ scale will run into the same question). For concreteness, the sketch below shows the kind of batched call I have in mind.
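This is only a minimal sketch of batched generation against the underlying Hugging Face checkpoint, not Parrot's actual API: the `paraphrase_batch` helper, the batch size, and the `"paraphrase: "` input prefix are my assumptions here.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

device = "cuda" if torch.cuda.is_available() else "cpu"
name = "prithivida/parrot_paraphraser_on_T5"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name).to(device)
model.eval()

def paraphrase_batch(sentences, batch_size=32, max_length=64):
    """Hypothetical helper: paraphrase many sentences in GPU-sized batches."""
    results = []
    for i in range(0, len(sentences), batch_size):
        # Assumed prefix; mirrors the usual T5-style task formatting.
        batch = ["paraphrase: " + s for s in sentences[i:i + batch_size]]
        enc = tokenizer(batch, padding=True, truncation=True,
                        max_length=max_length, return_tensors="pt").to(device)
        with torch.no_grad():
            gen = model.generate(**enc, max_length=max_length, num_beams=4)
        results.extend(tokenizer.batch_decode(gen, skip_special_tokens=True))
    return results
```

Something like this keeps the GPU saturated instead of issuing one forward pass per phrase, which is where the current use_gpu=True path seems to leave the speedup on the table.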