
Implemented lookup table for finite field inversion #7

Open · wants to merge 1 commit into master
Conversation

michiexile

If I understand C++ correctly, this design (with a singleton pattern carrying the actual lookup table) keeps instantiation of the lookup table to once per program and prime.
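A minimal sketch of that kind of once-per-program-and-prime design (illustrative only, with hypothetical names; not the code in this commit):

#include <array>
#include <cstddef>

namespace ctl {
namespace detail {

// Hypothetical sketch: one lazily built table per program-and-prime,
// using a function-local static so each instantiation happens exactly once.
template< std::size_t Prime>
const std::array<std::size_t, Prime>& inversion_table() {
    static const std::array<std::size_t, Prime> table = [] {
        std::array<std::size_t, Prime> t{};
        // Brute-force fill: find y with (x * y) % Prime == 1 for each nonzero x.
        for (std::size_t x = 1; x < Prime; ++x)
            for (std::size_t y = 1; y < Prime; ++y)
                if ((x * y) % Prime == 1) { t[x] = y; break; }
        return t;
    }();
    return table;
}

} // namespace detail
} // namespace ctl

A call like inversion_table<7>()[3] would then return 5, since 3 * 5 == 15 == 1 mod 7.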

@@ -57,6 +58,43 @@ std::size_t inverse( const ctl::Finite_field< N> & x, const std::size_t prime){
return inverse( x.value(), prime);
}

@rhl- (Contributor) · May 2, 2016

CTL uses the snake_case convention, and the use of singletons is unnecessary here.

You can do something like this:

namespace ctl {
namespace detail {
template< std::size_t Prime>
constexpr std::array<std::size_t, Prime> inversion_table = { ctl::detail::inverse<Prime>(0), ctl::detail::inverse<Prime>(1), ..., ctl::detail::inverse<Prime>(Prime-1) };
} //end namespace detail
} //end namespace ctl

The compiler will instantiate just one table per prime (or look up the value) when it is used.
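One way to fill such a table without writing each entry by hand (a sketch that assumes a constexpr inverse taking the prime as a template parameter, as in the snippet above, with a brute-force stand-in for it; not CTL's actual code):

#include <array>
#include <cstddef>
#include <utility>

namespace ctl {
namespace detail {

// Stand-in for the assumed constexpr inverse (brute-force search).
template< std::size_t Prime>
constexpr std::size_t inverse(std::size_t x) {
    for (std::size_t y = 1; y < Prime; ++y)
        if ((x * y) % Prime == 1) return y;
    return 0; // 0 has no inverse
}

// Expand inverse<Prime>(0), inverse<Prime>(1), ..., inverse<Prime>(Prime-1)
// via an index sequence instead of spelling out each entry.
template< std::size_t Prime, std::size_t... Is>
constexpr std::array<std::size_t, Prime>
make_inversion_table(std::index_sequence<Is...>) {
    return {{ inverse<Prime>(Is)... }};
}

template< std::size_t Prime>
constexpr std::array<std::size_t, Prime> inversion_table =
    make_inversion_table<Prime>(std::make_index_sequence<Prime>{});

} // namespace detail
} // namespace ctl

With that in place, the inverse of x mod Prime is just inversion_table<Prime>[x], computed entirely at compile time.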

Contributor

ctl::detail::inverse(_) just needs to be constexpr.
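For instance, a constexpr modular inverse could look like this (a sketch via Fermat's little theorem, not CTL's actual implementation; it keeps the two-argument inverse(x, prime) shape from the diff above):

#include <cstddef>

namespace ctl {
namespace detail {

// Sketch: x^(prime - 2) mod prime by square-and-multiply, which is the
// inverse of x when prime is prime and x != 0 (mod prime). Assumes the
// intermediate products fit in std::size_t.
constexpr std::size_t inverse(std::size_t x, std::size_t prime) {
    std::size_t result = 1;
    std::size_t base = x % prime;
    std::size_t exp = prime - 2;
    while (exp > 0) {
        if (exp & 1) result = (result * base) % prime;
        base = (base * base) % prime;
        exp >>= 1;
    }
    return result;
}

} // namespace detail
} // namespace ctl

// e.g. static_assert(ctl::detail::inverse(3, 7) == 5, "3 * 5 == 15 == 1 mod 7");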

@rhl- (Contributor) commented May 3, 2016

I just merged my massive PR, which changes a bunch of the C++ API and also provides the Python API in the mainline. You will need to merge master into your branch.

This is a welcome change, although there is a trade-off to consider: it requires building a static $O(p)$ array into the binary for each prime $p$. As $p$ grows large, the size of the binary grows with it, and in the same vein this could potentially harm the instruction cache. By contrast, the algorithm itself may run more slowly, but it is quite compact (only a small constant number of lines of assembly).
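For a rough sense of scale (an illustrative estimate, assuming 8-byte std::size_t entries): a table of $p$ entries occupies about $8p$ bytes, so a prime near $10^6$ already contributes roughly 8 MB of static data for that one instantiation, while small fields with, say, $p < 10^4$ stay under 100 KB.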

I think we should consider these tradeoffs.
