Multivariate functions #185
Conversation
Current implementation, compared to the 2D case, which already existed and still uses 100% of the old code:
Most of the time is spent changing the coefficient format from the vectorization of #86 to tensor format. Maybe it would be better to change the vectorization after all.
Unfortunately, CI is probably broken now, as JuliaArrays/FillArrays.jl#182 needs to be merged first.
The PR is not finished yet; I haven't added tests and will probably have to optimize some parts.
Ok, I've cancelled the tests for now to avoid clogging CI resources.
Reduced the number of allocations and increased performance:
The basic functionality of multivariate functions of the same space type is now working.
Mixed types of (polynomial) spaces should be easy to extend, but are beyond this PR in my opinion. The same goes for arithmetic, which also does not work fully in the 2D case, see #187 (I explicitly did not touch the 2D case yet). I think the PR is ready for a review or merge.
Codecov Report — Base: 71.30% // Head: 70.92% // Decreases project coverage by 0.38%.
Coverage Diff:
## master   #185    +/-  ##
==========================================
- Coverage   71.30%  70.92%  -0.38%
==========================================
  Files      78      79      +1
  Lines      8105    8178    +73
==========================================
+ Hits       5779    5800    +21
- Misses     2326    2378    +52
From my side, the PR is ready to be merged @jishnub
How much does including it add to the import time?
src/Multivariate/ProductFun.jl (outdated)
@@ -35,6 +45,15 @@ function ProductFun(cfs::AbstractMatrix{T},sp::AbstractProductSpace{Tuple{S,V},D
    end
end

## TODO: This Product Fun actually does not return a productfun, dirty but was easiest to implement. Probably an abstract type of ProductFuns
# is needed in the future.
function ProductFun(iter::TrivialTensorizer{d},cfs::Vector{T},blk::Block, sp::AbstractProductSpace{NTuple{d, S},DD}) where {S<:UnivariateSpace,T<:Number,d,DD}
Is this necessary? Or could we define a function `productfun` that can choose to return an appropriate type? It'll be good if the constructor of a struct retains its type.
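A minimal sketch of the pattern being suggested (all names below are hypothetical stand-ins, not the actual ApproxFunBase types): each struct's constructor keeps returning its own type, while a lowercase `productfun` factory is free to pick the concrete representation.

```julia
# Hypothetical illustration of "constructor keeps its type, factory picks the type".
abstract type AbstractProductFun end

struct MatrixProductFun{T} <: AbstractProductFun      # stand-in for the existing ProductFun
    coefficients::Matrix{T}
end

struct LazyTensorFun{T} <: AbstractProductFun         # stand-in for a new d > 2 representation
    coefficients::Vector{T}
    ndims::Int
end

# The factory chooses an appropriate concrete type from its arguments, while
# MatrixProductFun(...) and LazyTensorFun(...) always return their own type.
productfun(cfs::AbstractMatrix) = MatrixProductFun(Matrix(cfs))
productfun(cfs::AbstractVector, d::Integer) = LazyTensorFun(collect(cfs), d)
```

This only illustrates the type-stability point raised above; whether it fits ApproxFunBase's design is a separate question.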
Currently, multivariate functions are designed to first build a tensor out of the coefficients and then solve the first dimension followed by the second. `ProductFun` is designed for that, storing a vector of functions.
I first implemented it that way, but this needs too much memory for high-dimensional cases and threw errors for me for dimensions larger than about 80 and 10000 coefficients (~1 million coefficients are possible in the 1D case for me). Therefore, I never build this tensor and don't use the vector of functions that `ProductFun` uses. I think the approach is not compatible with the current `ProductFun`.
I think the best way would actually be to change the order in which the coefficients are vectorized, see #86, because this change of coefficient format is computationally costly (the functions `fromtensor` and `totensor` build the "real" tensor out of the current way the coefficients are stored; I introduced `totensoriterator` for evaluating d > 2 dimensional functions to avoid building the full tensor). However, this change could be breaking for current code using ApproxFun.
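To make the cost described above concrete, here is a small self-contained sketch (not the ApproxFunBase implementation, which lives in `TensorSpace.jl`): the coefficients sit in one vector ordered by total-degree blocks, and rebuilding the dense tensor from that vector is the step that gets expensive in time and memory as d grows.

```julia
# Toy 2D analogue of `totensor`: fill an n×n matrix anti-diagonal by
# anti-diagonal, so block k holds the k coefficients of total degree k - 1.
function toy_totensor(cfs::Vector{T}) where {T<:Number}
    isempty(cfs) && return zeros(T, 0, 0)
    # smallest n with 1 + 2 + ... + n ≥ length(cfs)
    n = findfirst(k -> k * (k + 1) ÷ 2 >= length(cfs), 1:length(cfs))
    M = zeros(T, n, n)
    idx = 1
    for blk in 1:n, i in 1:blk            # walk the blk-th anti-diagonal
        idx > length(cfs) && return M
        M[i, blk - i + 1] = cfs[idx]
        idx += 1
    end
    return M
end

# In d dimensions the dense tensor has O(n^d) entries, while a single degree
# block only contains binomial(blk + d - 2, d - 1) coefficients; iterating the
# blocks lazily (the `totensoriterator` idea) avoids allocating all of it.
```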
How do I measure the import time exactly? With compilation I got the import time of ..., and precompiled in the package ...
You can try ...
I get ...
Could you post the full report? In particular, I want to see if including ...
Ok, seems reasonable, as it's fairly low in the chain. We can include this then.
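For completeness, two standard ways to measure package load time in Julia (not necessarily the method suggested in the truncated comment above):

```julia
# In a fresh session: total load time, including any compilation.
julia> @time using ApproxFunBase

# On Julia ≥ 1.8: per-dependency breakdown of the load time
# (`@time_imports` is provided by InteractiveUtils, loaded by default in the REPL).
julia> @time_imports using ApproxFunBase
```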
Tests seem to fail because of unbound type parameters in:

ApproxFunBase.TensorIteratorFun(space::SS, coefficients::Vector{T}, iterator::ApproxFunBase.Tensorizer{Tuple{Vararg{FillArrays.Ones{Int64, 1, Tuple{InfiniteArrays.OneToInf{Int64}}}, d}}}, orders::BlockArrays.Block) where {S<:(Space{D} where D<:(Domain{<:Number})), d, SS<:(TensorSpace{Tuple{Vararg{S, d}}}), T<:Number} in ApproxFunBase at /Users/runner/work/ApproxFunBase.jl/ApproxFunBase.jl/src/Multivariate/ProductFun.jl:12

totensor(SS::TensorSpace{Tuple{Vararg{S, d}}}, M::AbstractVector) where {d, S<:(Space{D} where D<:(Domain{<:Number}))} in ApproxFunBase at /Users/runner/work/ApproxFunBase.jl/ApproxFunBase.jl/src/Multivariate/TensorSpace.jl:493

ProductFun(iter::ApproxFunBase.Tensorizer{Tuple{Vararg{FillArrays.Ones{Int64, 1, Tuple{InfiniteArrays.OneToInf{Int64}}}, d}}}, cfs::Vector{T}, blk::BlockArrays.Block, sp::ApproxFunBase.AbstractProductSpace{Tuple{Vararg{S, d}}, DD}) where {S<:(Space{D} where D<:(Domain{<:Number})), T<:Number, d, DD} in ApproxFunBase at /Users/runner/work/ApproxFunBase.jl/ApproxFunBase.jl/src/Multivariate/ProductFun.jl:50

Could you fix these?
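For readers unfamiliar with this failure mode, a generic illustration (not the PR's actual signatures): a `where` parameter that is only reachable through a `Vararg` or through another parameter's bound can be left unbound when the tuple is empty, which is what `Test.detect_unbound_args` flags.

```julia
using Test

# S only appears through Vararg{S, d}; for the empty tuple nothing pins S down,
# so this method is reported as having an unbound type parameter.
f(x::Tuple{Vararg{S, d}}) where {S<:Number, d} = d

# A common fix is to guarantee at least one occurrence of S in the signature,
# e.g. by requiring a non-empty tuple:
g(x::Tuple{S, Vararg{S}}) where {S<:Number} = length(x)

Test.detect_unbound_args(@__MODULE__)   # lists the method of `f`, not `g`
```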
Ok, I think I understand. The multivariate methods I added only work for tensor spaces of univariate spaces of the same type. If I understand it correctly, I will delete the tests from here and move them to ApproxFunOrthogonalPolynomials. The problem is that if there is a breaking change in this package, the tests in the orthogonal polynomials package will break, not here. Is it possible to also include those tests in the GitHub CI of this package?
Since the compat bound of ... The problem is that if we want to use ...
Yes, this is correct. There are already some multivariate tests in ApproxFunOrthogonalPolynomials.
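One possible way to exercise the downstream tests against this branch (a sketch using only standard Pkg commands; wiring this into an actual CI job is left to the maintainers):

```julia
using Pkg

Pkg.activate(temp = true)
Pkg.develop(path = ".")                      # this ApproxFunBase checkout (the PR branch)
Pkg.add("ApproxFunOrthogonalPolynomials")    # only resolves if its compat bounds allow this version
Pkg.test("ApproxFunOrthogonalPolynomials")   # run the downstream test suite against the dev'd package
```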
Could you re-run the 2D and other-dimensional benchmarks, but this time interpolating the function? This is what I obtain, for example, on the master branches:

julia> f2 = Fun(Chebyshev()^2, rand(100));

julia> @btime $f2(0.2, 0.4);
  1.502 μs (17 allocations: 4.34 KiB)
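For reference, the analogous higher-dimensional call would look something like the following (a sketch only; it assumes the tensor-power syntax `Chebyshev()^3` and pointwise evaluation of the resulting `Fun`, which is the functionality this PR adds):

```julia
julia> using ApproxFun, BenchmarkTools

julia> f3 = Fun(Chebyshev()^3, rand(100));

julia> @btime $f3(0.2, 0.4, 0.6);
```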
Thanks for catching that! My changes should only apply to higher-dimensional functions (d > 2), but I made a mistake here. Currently the wrong iterator is called, which uses `multiexponents` from Combinatorics.jl, and that is unfortunately slow. I will fix this so that the original code is called for the 2D case, and then run the benchmark.
I fixed the 2D case; it is now the same as before. Unfortunately, the higher-dimensional cases are not very fast because of the slow stars-and-bars implementation I am using. I already implemented my own version of the algorithm, which is faster than the Julia implementation; maybe I will add this later on.
I will open a PR in ApproxFunOrthogonalPolynomials for the multivariate test cases soon.
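For context on the stars-and-bars step mentioned above: the objects being enumerated are simply the nonnegative integer d-tuples with a fixed sum, one per tensor coefficient in a total-degree block. A minimal generator, independent of the actual iterator in this PR:

```julia
# All nonnegative integer d-tuples that sum to n (the multi-indices of one
# total-degree block). Recursive and allocating, purely for illustration;
# the iterator in this PR enumerates them lazily and in the required order.
function multiindices(n::Int, d::Int)
    d == 1 && return [(n,)]
    result = NTuple{d, Int}[]
    for k in 0:n, rest in multiindices(n - k, d - 1)
        push!(result, (k, rest...))
    end
    return result
end

# multiindices(2, 3) yields the binomial(2 + 3 - 1, 3 - 1) == 6 tuples
# (0, 0, 2), (0, 1, 1), (0, 2, 0), (1, 0, 1), (1, 1, 0), (2, 0, 0).
```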
I have rewritten the iterator for better performance.
@jishnub Are you going to accept this PR?
Yes, sorry, I've been busy; this seems good now.
Multivariate functions for Polynomial spaces (and maybe others).
Work in progress