
Optimize qubit hash for Set operations #6908

Open

daxfohl wants to merge 10 commits into main

Conversation

@daxfohl daxfohl (Collaborator) commented Jan 1, 2025

Change the hash function from a tuple hash to manually multiplying each term by 1_000_003, which is also the multiplier Python uses internally for strings and complex numbers. This hashes at the same speed as the tuple, but maintains a linear relationship with each term, which reduces the number of bucket collisions in the hash tables underlying `Set`s and `Dict`s for line and grid qubits. It improves amortized `Set` operation performance, such as in the example below, by around 50%.

```python
s = set()
for q in cirq.GridQubit.square(100):
    s = s.union({q})
```

One caveat: sets containing qudits with different dimensions but the same index will always produce the same hash value (not just the same bucket), forcing `__eq__` checks and degrading performance in that degenerate case. It seems unlikely that anyone would intentionally construct such a set, though.

Fixes #6886
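
Roughly, the combining works like the following sketch (`_GridQidSketch` and the exact combining order are illustrative, not the actual code in this PR):

```python
_PRIME = 1_000_003  # same multiplier as sys.hash_info.imag on CPython


class _GridQidSketch:
    """Illustrative stand-in for a grid qid; not the implementation in this PR."""

    def __init__(self, row: int, col: int, dimension: int = 2):
        self.row = row
        self.col = col
        self.dimension = dimension
        # Combine the terms linearly instead of hashing a tuple, so hash values
        # of neighboring qubits stay spread across buckets, and cache the result.
        self._hash = hash(row * _PRIME * _PRIME + col * _PRIME + dimension)

    def __hash__(self) -> int:
        return self._hash

    def __eq__(self, other) -> bool:
        return (
            isinstance(other, _GridQidSketch)
            and (self.row, self.col, self.dimension)
            == (other.row, other.col, other.dimension)
        )
```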
@daxfohl daxfohl requested review from vtomole and a team as code owners January 1, 2025 19:38
@daxfohl daxfohl requested a review from mhucka January 1, 2025 19:38
@CirqBot CirqBot added the size: S 10< lines changed <50 label Jan 1, 2025

codecov bot commented Jan 2, 2025

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 97.87%. Comparing base (5d317ba) to head (57468b5).
Report is 2 commits behind head on main.

Additional details and impacted files
```
@@           Coverage Diff           @@
##             main    #6908   +/-   ##
=======================================
  Coverage   97.87%   97.87%
=======================================
  Files        1084     1084
  Lines       94406    94408    +2
=======================================
+ Hits        92396    92398    +2
  Misses       2010     2010
```

☔ View full report in Codecov by Sentry.

Comment on lines 41 to 42
```python
# This approach seems to perform better than traditional "random" hash in `Set`
# operations for typical circuits, as it reduces bucket collisions. Caveat: it does not
```
Contributor

How did you evaluate this reduction in bucket collisions? Would be good to show this explicitly before we decide to abandon the standard tuple hash.

Collaborator Author

@daxfohl daxfohl Jan 2, 2025

Test code is up in the description. It's about 50% faster with this implementation.

One note: it only seems to be faster for copy-on-change ops like `s = s.union({q})`; it doesn't seem to have any effect when we operate on sets mutably, like `s |= {q}`. But given most of our stuff is immutable, we see a lot more of the former in our codebase.
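
A rough way to reproduce that comparison (function names here are just for illustration; timings vary by machine):

```python
import timeit

import cirq

qubits = cirq.GridQubit.square(100)


def copy_on_change():
    s = set()
    for q in qubits:
        s = s.union({q})  # builds a new set each iteration, rehashing every element
    return s


def in_place():
    s = set()
    for q in qubits:
        s |= {q}  # mutates the existing set in place
    return s


print("s = s.union({q}):", timeit.timeit(copy_on_change, number=3))
print("s |= {q}:        ", timeit.timeit(in_place, number=3))
```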

Comment on lines 60 to 70
```python
square_index = max(abs_row, abs_col)
inner_square_side_len = square_index * 2 - 1
outer_square_side_len = inner_square_side_len + 2
inner_square_area = inner_square_side_len**2
if abs_row == square_index:
    offset = 0 if row < 0 else outer_square_side_len
    i = inner_square_area + offset + (col + square_index)
else:
    offset = (2 * outer_square_side_len) + (0 if col < 0 else inner_square_side_len)
    i = inner_square_area + offset + (row + (square_index - 1))
self._hash = hash(i)
```
Contributor

It looks like this is almost 3x slower than the current tuple hash, which is quite a big regression, so unless we can really show that this reduces hash collisions, I'm not sure we would want to make this change.

```
In [1]: def tuple_hash(row, col, d):
   ...:     return hash((row, col, d))
   ...:

In [2]: def square_hash(row, col, d):
   ...:     if row == 0 and col == 0:
   ...:         return 0
   ...:     abs_row = abs(row)
   ...:     abs_col = abs(col)
   ...:     square_index = max(abs_row, abs_col)
   ...:     inner_square_side_len = square_index * 2 - 1
   ...:     outer_square_side_len = inner_square_side_len + 2
   ...:     inner_square_area = inner_square_side_len**2
   ...:     if abs_row == square_index:
   ...:         offset = 0 if row < 0 else outer_square_side_len
   ...:         i = inner_square_area + offset + (col + square_index)
   ...:     else:
   ...:         offset = (2 * outer_square_side_len) + (0 if col < 0 else inner_square_side_len)
   ...:         i = inner_square_area + offset + (row + (square_index - 1))
   ...:     return hash(i)
   ...:

In [3]: %timeit [tuple_hash(r, c, d) for r in range(20) for c in range(20) for d in [2, 3, 4]]
151 µs ± 427 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)

In [4]: %timeit [square_hash(r, c, d) for r in range(20) for c in range(20) for d in [2, 3, 4]]
437 µs ± 2.37 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
```

Collaborator Author

I'm not married to it; it was something I noticed when looking into creating very wide circuits and got nerd-sniped. It's a reasonable optimization for copy-on-change operations on large sets, but if we want to stick with the existing approach, I'd say that's completely justifiable.

Collaborator Author

@daxfohl daxfohl Jan 11, 2025

Instead of the fancy plane-covering algorithm, I realized we could just hash the complex number `row + col * 1j`. This ends up being about 2.5x faster than the plane-covering hash, but still 30% slower than the tuple hash, when hashing a million distinct GridQubits, yet still 50% faster than the tuple hash for set unions on a 100x100 GridQubit square.

Then, looking up the actual algorithm for hashing complex numbers, it's just `real_part + imag_part * sys.hash_info.imag`. So, switching the algorithm to that, it's now about 30% faster than the tuple hash at hashing a million distinct GridQubits, and still 50% faster at set unions on a 100x100 GridQubit square. Plus, it looks like... a normal hash function. (I feel kind of silly now for not trying this first.)

So I've vastly simplified the code, and it's faster for all "normal" cases now, but the caveat still applies: it's slow on sets that have multiple qudits of different dimensions at the same grid position.
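
For reference, a minimal sketch of the two variants described above (illustrative only, and neither includes the dimension term; for small coordinates both compute essentially the same value):

```python
import sys


def complex_based_hash(row: int, col: int) -> int:
    # Delegate to CPython's built-in hash for complex numbers.
    return hash(complex(row, col))


def multiplier_based_hash(row: int, col: int) -> int:
    # The same combining rule CPython uses for complex numbers,
    # real_part + imag_part * sys.hash_info.imag, applied directly to ints
    # (sys.hash_info.imag is 1_000_003 on CPython).
    return hash(row + col * sys.hash_info.imag)
```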

Collaborator Author

@daxfohl daxfohl Jan 12, 2025

And, finally coming to my senses, I included the dimension term in the hash, which slows it back down to exactly the tuple hash speed, but is still 50% faster on set unions. But now it is a more standard hash function, including all attributes.

I'm going to mark the PR as ready again; at this point it seems like a pretty straightforward improvement with no downside.

@daxfohl daxfohl marked this pull request as draft January 6, 2025 17:28
@daxfohl daxfohl marked this pull request as ready for review January 12, 2025 06:43
@daxfohl daxfohl requested a review from maffoo January 12, 2025 06:59
Labels
size: S 10< lines changed <50
Projects
None yet
Development

Successfully merging this pull request may close these issues.

Make Line and Grid Qubit hashes faster for common set ops
3 participants