Non-sparse x sparse #4

Open · lorenlugosch opened this issue Mar 20, 2020 · 2 comments
@lorenlugosch

Is it possible to multiply a sparse matrix by a non-sparse matrix? What I'd like to do is multiply a (sparse) transition matrix in an HMM with a (non-sparse) vector of forward log probabilities.
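
For concreteness, the computation being described is a single forward-algorithm step in log space. A minimal sketch of the dense version, with illustrative names (`log_T`, `log_alpha`) standing in for the sparse transition matrix and the forward vector in the question:

```python
import torch

# Illustrative shapes only: S hidden states; dense tensors stand in for
# the sparse transition matrix being asked about.
S = 4
log_T = torch.randn(S, S).log_softmax(dim=1)   # log transition matrix, T[i, j] = p(j | i)
log_alpha = torch.randn(S).log_softmax(dim=0)  # forward log probabilities at the current step

# One forward-algorithm step in log space:
# log_alpha'[j] = logsumexp_i( log_alpha[i] + log_T[i, j] )
log_alpha_next = torch.logsumexp(log_alpha.unsqueeze(1) + log_T, dim=0)
```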

@srush (Contributor) commented Mar 20, 2020

Great question. We don't have this yet, but I would like to add it. In the meantime you have two interesting options:

  1. If you read and understand this code, which implements HMMs using a parallel prefix scan (also described in the paper), https://github.com/harvardnlp/pytorch-struct/blob/master/torch_struct/linearchain.py , you will see that all of the operations are sparse matrix x sparse matrix multiplies. Using this method you could have a really fast sparse HMM (see the sketch after this list).

  2. If option 1 is still not fast enough, we have some unreleased code that implements HMMs entirely in CUDA. If you email me, we could help adapt this code for your task.
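
To illustrate option 1: the prefix-scan formulation works because matrix multiplication in the log semiring is associative, so a chain of per-timestep transition matrices can be combined in O(log T) rounds. A minimal sketch of that semiring product; `logmatmul` and the tree reduction below are illustrative, not part of pytorch-struct's API:

```python
import torch

def logmatmul(A, B):
    # Matrix product in the (logsumexp, +) semiring:
    # C[i, j] = logsumexp_k( A[i, k] + B[k, j] )
    return torch.logsumexp(A.unsqueeze(-1) + B.unsqueeze(-3), dim=-2)

# Because logmatmul is associative, the per-timestep transition matrices
# can be combined with a parallel scan rather than a left-to-right recursion.
S, T = 4, 8
log_T = torch.randn(T, S, S).log_softmax(dim=-1)  # one log transition matrix per step

mats = list(log_T)
while len(mats) > 1:  # simple tree reduction standing in for the scan
    pairs = [logmatmul(mats[i], mats[i + 1]) for i in range(0, len(mats) - 1, 2)]
    if len(mats) % 2:
        pairs.append(mats[-1])
    mats = pairs
total = mats[0]  # log of the product of all T transition matrices
```

With a sparse transition matrix, each `logmatmul` only needs to touch the nonzero transitions, which is where the speedup for a sparse HMM would come from.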

@lorenlugosch (Author)

Thanks!
