I noticed that for computing multiple hashes, you use the scheme from *Less Hashing, Same Performance*, which computes g_i(x) = h1(x) + i·h2(x). For that you generate a 256-bit hash partitioned into 4 uint64s. I wonder why you chose a 256-bit hash partitioned into 4 uint64s instead of a 128-bit hash partitioned into 2 uint64s 🤔 Wouldn't it be equivalent in terms of hashing, but with a performance improvement?
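For context, the Kirsch–Mitzenmacher scheme described above can be sketched as follows. This is a minimal illustration, not the library's actual implementation: it assumes FNV-128a as a stand-in hash (the real library uses a different hash function), splits the 128-bit digest into two uint64 halves h1 and h2, and derives k indexes via g_i(x) = h1(x) + i·h2(x):

```go
package main

import (
	"encoding/binary"
	"fmt"
	"hash/fnv"
)

// kmHashes derives k hash values in [0, m) from a single 128-bit hash,
// split into two 64-bit halves h1 and h2, using the Kirsch–Mitzenmacher
// scheme g_i(x) = h1(x) + i*h2(x).
// FNV-128a is only a stand-in here, not the library's real hash function.
func kmHashes(data []byte, k, m uint64) []uint64 {
	h := fnv.New128a()
	h.Write(data)
	d := h.Sum(nil) // 16 bytes

	h1 := binary.BigEndian.Uint64(d[:8])
	h2 := binary.BigEndian.Uint64(d[8:])

	out := make([]uint64, k)
	for i := uint64(0); i < k; i++ {
		// Unsigned overflow wraps around, which is fine for this scheme.
		out[i] = (h1 + i*h2) % m
	}
	return out
}

func main() {
	// Four bit positions in a filter of size 1024 from one 128-bit hash.
	fmt.Println(kmHashes([]byte("hello"), 4, 1024))
}
```

This shows why, in principle, two uint64s suffice: every g_i is a linear combination of h1 and h2 alone, so the extra two words of a 256-bit digest are not needed by the formula itself.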
Would you be willing to produce a pull request? Note that we want to preserve backward compatibility, so the hash function is not allowed to change. However, if you have a more efficient implementation of the same hash function, we would love to have it.