API reference
Types and constants
EnzymeCore.ABI
EnzymeCore.Active
EnzymeCore.Annotation
EnzymeCore.BatchDuplicated
EnzymeCore.BatchDuplicatedNoNeed
EnzymeCore.BatchMixedDuplicated
EnzymeCore.Const
EnzymeCore.Duplicated
EnzymeCore.DuplicatedNoNeed
EnzymeCore.FFIABI
EnzymeCore.ForwardMode
EnzymeCore.InlineABI
EnzymeCore.MixedDuplicated
EnzymeCore.Mode
EnzymeCore.NonGenABI
EnzymeCore.ReverseMode
EnzymeCore.ReverseModeSplit
EnzymeCore.Forward
EnzymeCore.ForwardWithPrimal
EnzymeCore.Reverse
EnzymeCore.ReverseHolomorphic
EnzymeCore.ReverseHolomorphicWithPrimal
EnzymeCore.ReverseSplitNoPrimal
EnzymeCore.ReverseSplitWithPrimal
EnzymeCore.ReverseWithPrimal
EnzymeCore.EnzymeRules.AugmentedReturn
EnzymeCore.EnzymeRules.FwdConfig
EnzymeCore.EnzymeRules.RevConfig
EnzymeTestUtils.ExprAndMsg
Functions and macros
Enzyme.@import_frule
Enzyme.@import_rrule
Enzyme.gradient
Enzyme.gradient
Enzyme.gradient!
Enzyme.guess_activity
Enzyme.hvp
Enzyme.hvp!
Enzyme.hvp_and_gradient!
Enzyme.jacobian
Enzyme.jacobian
Enzyme.typetree
Enzyme.unsafe_to_pointer
EnzymeCore.NoPrimal
EnzymeCore.ReverseSplitModified
EnzymeCore.ReverseSplitWidth
EnzymeCore.WithPrimal
EnzymeCore.autodiff
EnzymeCore.autodiff
EnzymeCore.autodiff
EnzymeCore.autodiff
EnzymeCore.autodiff
EnzymeCore.autodiff_deferred
EnzymeCore.autodiff_deferred
EnzymeCore.autodiff_deferred_thunk
EnzymeCore.autodiff_thunk
EnzymeCore.autodiff_thunk
EnzymeCore.clear_err_if_func_written
EnzymeCore.clear_runtime_activity
EnzymeCore.compiler_job_from_backend
EnzymeCore.make_zero
EnzymeCore.make_zero
EnzymeCore.make_zero!
EnzymeCore.needs_primal
EnzymeCore.set_abi
EnzymeCore.set_err_if_func_written
EnzymeCore.set_runtime_activity
EnzymeCore.within_autodiff
EnzymeCore.within_autodiff
EnzymeCore.EnzymeRules.augmented_primal
EnzymeCore.EnzymeRules.forward
EnzymeCore.EnzymeRules.inactive
EnzymeCore.EnzymeRules.inactive_noinl
EnzymeCore.EnzymeRules.inactive_type
EnzymeCore.EnzymeRules.needs_primal
EnzymeCore.EnzymeRules.needs_shadow
EnzymeCore.EnzymeRules.noalias
EnzymeCore.EnzymeRules.primal_type
EnzymeCore.EnzymeRules.reverse
EnzymeCore.EnzymeRules.shadow_type
EnzymeTestUtils.@test_msg
EnzymeTestUtils.are_activities_compatible
EnzymeTestUtils.test_forward
EnzymeTestUtils.test_reverse
Enzyme.API.fast_math!
Enzyme.API.inlineall!
Enzyme.API.instname!
Enzyme.API.looseTypeAnalysis!
Enzyme.API.maxtypedepth!
Enzyme.API.maxtypeoffset!
Enzyme.API.memmove_warning!
Enzyme.API.printactivity!
Enzyme.API.printall!
Enzyme.API.printdiffuse!
Enzyme.API.printperf!
Enzyme.API.printtype!
Enzyme.API.printunnecessary!
Enzyme.API.strictAliasing!
Enzyme.API.strong_zero!
Enzyme.API.typeWarning!
Documentation
Enzyme.@import_frule
— Macro
import_frule(::fn, tys...)
Automatically import a ChainRulesCore.frule as a custom forward-mode EnzymeRule. When called in batch mode, this will end up calling the primal multiple times, which may result in incorrect behavior if the function mutates, and will always produce slower code. Importing the rule from ChainRules is also likely to be slower than writing your own rule, and may also be slower than not having a rule at all.
Use with caution.
Enzyme.@import_frule(typeof(Base.sort), Any);

x = [1.0, 2.0, 0.0]; dx = [0.1, 0.2, 0.3]; ddx = [0.01, 0.02, 0.03];

Enzyme.autodiff(Forward, sort, Duplicated, BatchDuplicated(x, (dx, ddx)))
Enzyme.autodiff(Forward, sort, DuplicatedNoNeed, BatchDuplicated(x, (dx, ddx)))
Enzyme.autodiff(Forward, sort, DuplicatedNoNeed, BatchDuplicated(x, (dx,)))
Enzyme.autodiff(Forward, sort, Duplicated, BatchDuplicated(x, (dx,)))

# output

(var"1" = [0.0, 1.0, 2.0], var"2" = (var"1" = [0.3, 0.1, 0.2], var"2" = [0.03, 0.01, 0.02]))
(var"1" = (var"1" = [0.3, 0.1, 0.2], var"2" = [0.03, 0.01, 0.02]),)
(var"1" = [0.3, 0.1, 0.2],)
(var"1" = [0.0, 1.0, 2.0], var"2" = [0.3, 0.1, 0.2])
Enzyme.@import_rrule
— Macro
import_rrule(::fn, tys...)
Automatically import a ChainRules.rrule as a custom reverse mode EnzymeRule. When called in batch mode, this will end up calling the primal multiple times which results in slower code. This macro assumes that the underlying function to be imported is read-only, and returns a Duplicated or Const object. This macro also assumes that the inputs permit a .+= operation and that the output has a valid Enzyme.make_zero function defined. It also assumes that overwritten(x) accurately describes if there is any non-preserved data from forward to reverse, not just the outermost data structure being overwritten as provided by the specification.
Finally, this macro falls back to almost always caching all of the inputs, even if it may not be needed for the derivative computation.
As a result, this auto importer is also likely to be slower than writing your own rule, and may also be slower than not having a rule at all.
Use with caution.
Enzyme.@import_rrule(typeof(Base.sort), Any);
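As a hedged sketch of how the imported rule gets used (assuming Enzyme and ChainRules are loaded and the `@import_rrule` call above has been made), reverse mode can then differentiate a scalar function that calls `sort` internally:

```julia
using Enzyme

# Hypothetical loss: since sorting only permutes entries, the gradient
# of sum(abs2, sort(x)) with respect to x is simply 2 .* x.
loss(x) = sum(abs2, sort(x))

grad = Enzyme.gradient(Reverse, loss, [3.0, 1.0, 2.0])
# grad[1] is expected to equal [6.0, 2.0, 4.0]
```

Note the rule is dispatched automatically; no change to the call site is needed beyond the macro invocation.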
Enzyme.gradient!
— Method
gradient!(::ReverseMode, dx, f, x)
Compute the gradient of an array-input function f using reverse mode, storing the derivative result in an existing array dx. Both x and dx must be Arrays of the same type.
Example:
f(x) = x[1]*x[2]

dx = [0.0, 0.0]
gradient!(Reverse, dx, f, [2.0, 3.0])

# output
([3.0, 2.0],)

dx = [0.0, 0.0]
gradient!(ReverseWithPrimal, dx, f, [2.0, 3.0])

# output
(derivs = ([3.0, 2.0],), val = 6.0)
Enzyme.gradient
— Method
gradient(::ReverseMode, f, args...)
Compute the gradient of a real-valued function f using reverse mode. For each differentiable argument, this function will allocate and return a new derivative object, returning a tuple of derivatives for each argument. If an argument is not differentiable, the corresponding element of the returned tuple will be nothing.
In reverse mode (here), the derivatives will be the same type as the original argument.
This is a structure gradient. For a struct x it returns another instance of the same type, whose fields contain the components of the gradient. In the result, grad.a contains ∂f/∂x.a for any differentiable x.a, while grad.c == x.c for other types.
Examples:
f(x) = x[1]*x[2]

grad = gradient(Reverse, f, [2.0, 3.0])

# output
([3.0, 2.0],)

grad = gradient(Reverse, only ∘ f, (a = 2.0, b = [3.0], c = "str"))

# output

((a = 3.0, b = [2.0], c = "str"),)

mul(x, y) = x[1]*y[1]

grad = gradient(Reverse, mul, [2.0], [3.0])

# output
([3.0], [2.0])

grad = gradient(Reverse, mul, [2.0], Const([3.0]))

# output
([3.0], nothing)
If passing a mode that returns the primal (e.g. ReverseWithPrimal), the return type will instead be a tuple where the first element contains the derivatives, and the second element contains the result of the original computation.
grad = gradient(ReverseWithPrimal, f, [2.0, 3.0])

# output
(derivs = ([3.0, 2.0],), val = 6.0)

grad = gradient(ReverseWithPrimal, mul, [2.0], [3.0])

# output
(derivs = ([3.0], [2.0]), val = 6.0)

grad = gradient(ReverseWithPrimal, mul, [2.0], Const([3.0]))

# output
(derivs = ([3.0], nothing), val = 6.0)
Enzyme.gradient
— Method
gradient(::ForwardMode, f, x; shadows=onehot(x), chunk=nothing)
Compute the gradient of an array-input function f using forward mode. The optional keyword argument shadows is a vector of one-hot vectors of type x which are used to forward-propagate into the return. For performance reasons, this should be computed once, outside the call to gradient, rather than within this call.
Example:
f(x) = x[1]*x[2]

gradient(Forward, f, [2.0, 3.0])

# output

([3.0, 2.0],)

gradient(ForwardWithPrimal, f, [2.0, 3.0])

# output
(derivs = ([3.0, 2.0],), val = 6.0)

gradient(Forward, f, [2.0, 3.0]; chunk=Val(1))

# output

([3.0, 2.0],)

gradient(ForwardWithPrimal, f, [2.0, 3.0]; chunk=Val(1))

# output
(derivs = ([3.0, 2.0],), val = 6.0)
For functions which return an AbstractArray or scalar, this function will return an AbstractArray whose shape is (size(output)..., size(input)...). No guarantees are presently made about the type of the AbstractArray returned by this function (which may or may not be the same as the input AbstractArray if provided).
For functions which return other types, this function will return an AbstractArray of shape size(input) of values of the output type.
f(x) = [ x[1] * x[2], x[2] + x[3] ]

grad = gradient(Forward, f, [2.0, 3.0, 4.0])

# output
([3.0 2.0 0.0; 0.0 1.0 1.0],)
This function supports multiple arguments and computes the gradient with respect to each.
mul(x, y) = x[1]*y[2] + x[2]*y[1]

gradient(Forward, mul, [2.0, 3.0], [2.7, 3.1])

# output

([3.1, 2.7], [3.0, 2.0])
This includes the ability to mark some arguments as Const if their derivatives are not needed, returning nothing in the corresponding derivative slot.
gradient(Forward, mul, [2.0, 3.0], Const([2.7, 3.1]))

# output

([3.1, 2.7], nothing)
Enzyme.hvp!
— Method
hvp!(res::X, f::F, x::X, v::X) where {F, X}
Compute an in-place Hessian-vector product of an array-input scalar-output function f, as evaluated at x, times the vector v. The result will be stored into res. The function still allocates and zeros a buffer to store the intermediate gradient, which is not returned to the user.
In other words, compute res .= hessian(f)(x) * v
See hvp_and_gradient! for a function to compute both the hvp and the gradient in a single call.
Example:
f(x) = sin(x[1] * x[2])

res = Vector{Float64}(undef, 2)
hvp!(res, f, [2.0, 3.0], [5.0, 2.7])

res
# output
2-element Vector{Float64}:
 19.6926882637302
 16.201003759768003
Enzyme.hvp
— Method
hvp(f::F, x::X, v::X) where {F, X}
Compute the Hessian-vector product of an array-input scalar-output function f, as evaluated at x, times the vector v.
In other words, compute hessian(f)(x) * v
See hvp! for a version which stores the result in an existing buffer, and hvp_and_gradient! for a function to compute both the hvp and the gradient in a single call.
Example:
f(x) = sin(x[1] * x[2])

hvp(f, [2.0, 3.0], [5.0, 2.7])

# output
2-element Vector{Float64}:
 19.6926882637302
 16.201003759768003
Enzyme.hvp_and_gradient!
— Method
hvp_and_gradient!(res::X, grad::X, f::F, x::X, v::X) where {F, X}
Compute an in-place Hessian-vector product of an array-input scalar-output function f, as evaluated at x, times the vector v, as well as the gradient, storing the gradient into grad. Both the Hessian-vector product and the gradient can be computed together more efficiently than computing them separately.
The result will be stored into res. The gradient will be stored into grad.
In other words, compute res .= hessian(f)(x) * v and grad .= gradient(Reverse, f)(x)
Example:
f(x) = sin(x[1] * x[2])

res = Vector{Float64}(undef, 2)
grad = Vector{Float64}(undef, 2)
hvp_and_gradient!(res, grad, f, [2.0, 3.0], [5.0, 2.7])

res
grad
# output
2-element Vector{Float64}:
 2.880510859951098
 1.920340573300732
Enzyme.jacobian
— Method
jacobian(::ForwardMode, args...; kwargs...)
Equivalent to gradient(::ForwardMode, args...; kwargs...).
Enzyme.jacobian
— Method
jacobian(::ReverseMode, f, x; n_outs=nothing, chunk=nothing)
jacobian(::ReverseMode, f, x)
Compute the Jacobian of an array-output function f using (potentially vector) reverse mode. The chunk argument optionally denotes the chunk size to use, and n_outs optionally denotes the shape of the array returned by f (e.g. size(f(x))).
Example:
f(x) = [ x[1] * x[2], x[2] + x[3] ]

jacobian(Reverse, f, [2.0, 3.0, 4.0])

# output
([3.0 2.0 0.0; 0.0 1.0 1.0],)

f(x) = [ x[1] * x[2], x[2] + x[3] ]

grad = jacobian(ReverseWithPrimal, f, [2.0, 3.0, 4.0])

# output
(derivs = ([3.0 2.0 0.0; 0.0 1.0 1.0],), val = [6.0, 7.0])

f(x) = [ x[1] * x[2], x[2] + x[3] ]

grad = jacobian(Reverse, f, [2.0, 3.0, 4.0], n_outs=Val((2,)))

# output
([3.0 2.0 0.0; 0.0 1.0 1.0],)

f(x) = [ x[1] * x[2], x[2] + x[3] ]

grad = jacobian(ReverseWithPrimal, f, [2.0, 3.0, 4.0], n_outs=Val((2,)))

# output
(derivs = ([3.0 2.0 0.0; 0.0 1.0 1.0],), val = [6.0, 7.0])
This function will return an AbstractArray whose shape is (size(output)..., size(input)...). No guarantees are presently made about the type of the AbstractArray returned by this function (which may or may not be the same as the input AbstractArray if provided).
In the future, when this function is extended to handle non-array return types, it will return an AbstractArray of shape size(output) of values of the input type.
Enzyme.typetree
— Function
function typetree(T, ctx, dl, seen=TypeTreeTable())
Construct an Enzyme typetree from a Julia type.
When using a memoized lookup by providing seen across multiple calls to typetree, the user must call copy on the returned value before mutating it.
Enzyme.unsafe_to_pointer
— Method
unsafe_to_pointer
Assumes that val is globally rooted and that a pointer to it can be leaked. Prefer pointer_from_objref. The only use inside Enzyme.jl should be for Types.
EnzymeCore.autodiff
— Method
autodiff(::ForwardMode, f, Activity, args::Annotation...)
Auto-differentiate function f at arguments args using forward mode.
args may be numbers, arrays, structs of numbers, structs of arrays, and so on. Enzyme will only differentiate with respect to arguments that are wrapped in a Duplicated or similar argument. Unlike reverse-mode autodiff, Active arguments are not allowed here, since all derivative results of immutable objects will be returned; use Duplicated or variants like DuplicatedNoNeed instead.
Activity is the Activity of the return value, it may be:
- Const, if the return is not to be differentiated with respect to
- Duplicated, if the return is being differentiated with respect to
- BatchDuplicated, like Duplicated, but computing multiple derivatives at once. All batch sizes must be the same for all arguments.
Example returning both original return and derivative:
f(x) = x*x
res, ∂f_∂x = autodiff(ForwardWithPrimal, f, Duplicated, Duplicated(3.14, 1.0))

# output

(6.28, 9.8596)

Example returning just the derivative:
f(x) = x*x
∂f_∂x = autodiff(Forward, f, Duplicated, Duplicated(3.14, 1.0))

# output

(6.28,)
EnzymeCore.autodiff
— Method
autodiff(::ReverseMode, f, Activity, args::Annotation...)
Auto-differentiate function f at arguments args using reverse mode.
Limitations:
- f may only return a Real (of a built-in/primitive type) or nothing, not an array, struct, BigFloat, etc. To handle vector-valued return types, use a mutating f! that returns nothing and stores its result in one of the arguments, which must be wrapped in a Duplicated.
args may be numbers, arrays, structs of numbers, structs of arrays, and so on. Enzyme will only differentiate with respect to arguments that are wrapped in an Active (for arguments whose derivative result must be returned rather than mutated in place, such as primitive types and structs thereof) or Duplicated (for mutable arguments like arrays, Refs, and structs thereof).
Activity is the Activity of the return value; it may be Const or Active.
Example:
a = 4.2
b = [2.2, 3.3]; ∂f_∂b = zero(b)
c = 55; d = 9

f(a, b, c, d) = a * √(b[1]^2 + b[2]^2) + c^2 * d^2
∂f_∂a, _, _, ∂f_∂d = autodiff(Reverse, f, Active, Active(a), Duplicated(b, ∂f_∂b), Const(c), Active(d))[1]

# output

(3.966106403010388, nothing, nothing, 54450.0)
Here, autodiff returns a tuple $(\partial f/\partial a, \partial f/\partial d)$, while $\partial f/\partial b$ will be added to ∂f_∂b (but not returned). c will be treated as Const(c).
One can also request the original returned value of the computation.
Example:
Enzyme.autodiff(ReverseWithPrimal, x->x*x, Active(3.0))

# output

((6.0,), 9.0)
Enzyme gradients with respect to integer values are zero. Active
will automatically convert plain integers to floating point values, but cannot do so for integer values in tuples and structs.
EnzymeCore.autodiff
— Method
autodiff(::Function, ::Mode, args...)
Specialization of autodiff to handle do-argument closures.

autodiff(Reverse, Active(3.1)) do x
    return x*x
end

# output
((6.2,),)
EnzymeCore.autodiff
— Method
autodiff(mode::Mode, f, args...)
Like autodiff
but will try to guess the activity of the return value.
EnzymeCore.autodiff
— Method
autodiff(mode::Mode, f, ::Type{A}, args::Annotation...)
Like autodiff
but will try to extend f to an annotation, if needed.
EnzymeCore.autodiff_deferred
— Method
autodiff_deferred(::ReverseMode, f, Activity, args::Annotation...)
Same as autodiff but uses deferred compilation to support usage in GPU code, as well as higher-order differentiation.
EnzymeCore.autodiff_deferred
— Method
autodiff_deferred(::ForwardMode, f, Activity, args::Annotation...)
Same as autodiff(::ForwardMode, f, Activity, args...) but uses deferred compilation to support usage in GPU code, as well as higher-order differentiation.
EnzymeCore.autodiff_deferred_thunk
— Method
autodiff_deferred_thunk(::ReverseModeSplit, TapeType::Type, ftype::Type{<:Annotation}, Activity::Type{<:Annotation}, argtypes::Type{<:Annotation}...)
Provide the split forward and reverse pass functions for annotated function type ftype when called with args of type argtypes when using reverse mode.
Activity is the Activity of the return value; it may be Const, Active, or Duplicated (or its variants DuplicatedNoNeed, BatchDuplicated, and BatchDuplicatedNoNeed).
The forward function will return a tape, the primal (or nothing if not requested), and the shadow (or nothing if not a Duplicated variant), and tapes the corresponding type arguments provided.
The reverse function will return the derivative of Active arguments, updating the Duplicated arguments in place. The same arguments to the forward pass should be provided, followed by the adjoint of the return (if the return is active), and finally the tape from the forward pass.
Example:
A = [2.2]; ∂A = zero(A)
v = 3.3

function f(A, v)
    res = A[1] * v
    A[1] = 0
    res
end

TapeType = tape_type(ReverseSplitWithPrimal, Const{typeof(f)}, Active, Duplicated{typeof(A)}, Active{typeof(v)})
forward, reverse = autodiff_deferred_thunk(ReverseSplitWithPrimal, TapeType, Const{typeof(f)}, Active{Float64}, Duplicated{typeof(A)}, Active{typeof(v)})

tape, result, shadow_result = forward(Const(f), Duplicated(A, ∂A), Active(v))
_, ∂v = reverse(Const(f), Duplicated(A, ∂A), Active(v), 1.0, tape)[1]

result, ∂v, ∂A

# output

(7.26, 2.2, [3.3])
EnzymeCore.autodiff_thunk
— Method
autodiff_thunk(::ForwardMode, ftype, Activity, argtypes::Type{<:Annotation}...)
Provide the thunk forward mode function for annotated function type ftype when called with args of type argtypes.
Activity is the Activity of the return value; it may be Const or Duplicated (or its variants DuplicatedNoNeed, BatchDuplicated, and BatchDuplicatedNoNeed).
The forward function will return the primal (if requested) and the shadow (or nothing if not a Duplicated variant).
Example returning both the return derivative and original return:
f(x) = x*x
forward = autodiff_thunk(ForwardWithPrimal, Const{typeof(f)}, Duplicated, Duplicated{Float64})
res, ∂f_∂x = forward(Const(f), Duplicated(3.14, 1.0))

# output

(6.28, 9.8596)

Example returning just the derivative:
f(x) = x*x
forward = autodiff_thunk(Forward, Const{typeof(f)}, Duplicated, Duplicated{Float64})
∂f_∂x = forward(Const(f), Duplicated(3.14, 1.0))

# output

(6.28,)
EnzymeCore.autodiff_thunk
— Method
autodiff_thunk(::ReverseModeSplit, ftype, Activity, argtypes::Type{<:Annotation}...)
Provide the split forward and reverse pass functions for annotated function type ftype when called with args of type argtypes when using reverse mode.
Activity is the Activity of the return value; it may be Const, Active, or Duplicated (or its variants DuplicatedNoNeed, BatchDuplicated, and BatchDuplicatedNoNeed).
The forward function will return a tape, the primal (or nothing if not requested), and the shadow (or nothing if not a Duplicated variant), and tapes the corresponding type arguments provided.
The reverse function will return the derivative of Active arguments, updating the Duplicated arguments in place. The same arguments to the forward pass should be provided, followed by the adjoint of the return (if the return is active), and finally the tape from the forward pass.
Example:
A = [2.2]; ∂A = zero(A)
v = 3.3

function f(A, v)
    res = A[1] * v
    A[1] = 0
    res
end

forward, reverse = autodiff_thunk(ReverseSplitWithPrimal, Const{typeof(f)}, Active, Duplicated{typeof(A)}, Active{typeof(v)})

tape, result, shadow_result = forward(Const(f), Duplicated(A, ∂A), Active(v))
_, ∂v = reverse(Const(f), Duplicated(A, ∂A), Active(v), 1.0, tape)[1]

result, ∂v, ∂A

# output

(7.26, 2.2, [3.3])
EnzymeCore.within_autodiff
— Method
within_autodiff()
Returns true if within autodiff, otherwise false.
EnzymeCore.ABI
— Type
abstract type ABI
Abstract type for what ABI will be used.
Subtypes
EnzymeCore.Active
— Type
Active(x)
Mark a function argument x of autodiff as active; Enzyme will auto-differentiate with respect to Active arguments.
Enzyme gradients with respect to integer values are zero. Active will automatically convert plain integers to floating point values, but cannot do so for integer values in tuples and structs.
EnzymeCore.Annotation
— Type
abstract type Annotation{T}
Abstract type for autodiff function argument wrappers like Const, Active, and Duplicated.
EnzymeCore.BatchDuplicated
— Type
BatchDuplicated(x, ∂f_∂xs)
Like Duplicated, except contains several shadows to compute derivatives for all at once. Argument ∂f_∂xs should be a tuple of several values of the same type as x.
EnzymeCore.BatchDuplicatedNoNeed
— Type
BatchDuplicatedNoNeed(x, ∂f_∂xs)
Like DuplicatedNoNeed, except contains several shadows to compute derivatives for all at once. Argument ∂f_∂xs should be a tuple of several values of the same type as x.
EnzymeCore.BatchMixedDuplicated
— Type
BatchMixedDuplicated(x, ∂f_∂xs)
Like MixedDuplicated
, except contains several shadows to compute derivatives for all at once. Only used within custom rules.
EnzymeCore.Const
— Type
Const(x)
Mark a function argument x of autodiff as constant; Enzyme will not auto-differentiate with respect to Const arguments.
EnzymeCore.Duplicated
— Type
Duplicated(x, ∂f_∂x)
Mark a function argument x of autodiff as duplicated; Enzyme will auto-differentiate with respect to such arguments, with ∂f_∂x acting as an accumulator for gradients (so $\partial f / \partial x$ will be added to it).
EnzymeCore.DuplicatedNoNeed
— Type
DuplicatedNoNeed(x, ∂f_∂x)
Like Duplicated, except this also specifies that Enzyme may avoid computing the original result and only compute the derivative values. This creates opportunities for improved performance.
function square_byref(out, v)
    out[] = v * v
    nothing
end

out = Ref(0.0)
dout = Ref(1.0)
Enzyme.autodiff(Reverse, square_byref, DuplicatedNoNeed(out, dout), Active(1.0))
dout[]

# output
0.0
For example, marking the out variable as DuplicatedNoNeed
instead of Duplicated
allows Enzyme to avoid computing v * v
(while still computing its derivative).
This should only be used if x
is a write-only variable. Otherwise, if the differentiated function stores values in x
and reads them back in subsequent computations, using DuplicatedNoNeed
may result in incorrect derivatives. In particular, DuplicatedNoNeed
should not be used for preallocated workspace, even if the user might not care about its final value, as marking a variable as NoNeed means that reads from the variable are now undefined.
EnzymeCore.FFIABI
— Type
struct FFIABI <: ABI
Foreign function call ABI. JIT the differentiated function, then inttoptr call the address.
EnzymeCore.ForwardMode
— Type
struct ForwardMode{
    ReturnPrimal,
    ABI,
    ErrIfFuncWritten,
    RuntimeActivity
} <: Mode{ABI,ErrIfFuncWritten,RuntimeActivity}
Subtype of Mode for forward mode differentiation.
Type parameters
- ReturnPrimal: whether to return the primal return value from the augmented-forward.
- other parameters: see Mode
The type parameters of ForwardMode are not part of the public API and can change without notice. Please use one of the following concrete instantiations instead:
You can modify them with the following helper functions:
EnzymeCore.InlineABI
— Type
struct InlineABI <: ABI
Inlining function call ABI.
EnzymeCore.MixedDuplicated
— Type
MixedDuplicated(x, ∂f_∂x)
Like Duplicated
, except x may contain both active [immutable] and duplicated [mutable] data which is differentiable. Only used within custom rules.
EnzymeCore.Mode
— Type
abstract type Mode{ABI,ErrIfFuncWritten,RuntimeActivity}
Abstract type for which differentiation mode will be used.
Subtypes
Type parameters
- ABI: what runtime ABI to use
- ErrIfFuncWritten: whether to error when the function differentiated is a closure and written to.
- RuntimeActivity: whether to enable runtime activity (default off)
The type parameters of Mode are not part of the public API and can change without notice. You can modify them with the following helper functions:
EnzymeCore.NonGenABI
— Type
struct NonGenABI <: ABI
Non-generated function ABI.
EnzymeCore.ReverseMode
— Type
struct ReverseMode{
    ReturnPrimal,
    RuntimeActivity,
    ABI,
    Holomorphic,
    ErrIfFuncWritten
} <: Mode{ABI,ErrIfFuncWritten,RuntimeActivity}
Subtype of Mode for reverse mode differentiation.
Type parameters
- ReturnPrimal: whether to return the primal return value from the augmented-forward pass.
- Holomorphic: whether the complex result function is holomorphic and we should compute d/dz
- other parameters: see Mode
The type parameters of ReverseMode are not part of the public API and can change without notice. Please use one of the following concrete instantiations instead:
You can modify them with the following helper functions:
EnzymeCore.ReverseModeSplit
— Type
struct ReverseModeSplit{
    ReturnPrimal,
    ReturnShadow,
    Width,
    RuntimeActivity,
    ModifiedBetween,
    ABI,
    ErrIfFuncWritten
} <: Mode{ABI,ErrIfFuncWritten,RuntimeActivity}
Subtype of Mode for split reverse mode differentiation, to use in autodiff_thunk and variants.
Type parameters
- ReturnShadow: whether to return the shadow return value from the augmented-forward.
- Width: batch size (pick 0 to derive it automatically)
- ModifiedBetween: Tuple of each argument's "modified between" state (pick true to derive it automatically).
- other parameters: see ReverseMode
The type parameters of ReverseModeSplit are not part of the public API and can change without notice. Please use one of the following concrete instantiations instead:
You can modify them with the following helper functions:
EnzymeCore.Forward
— Constant
const Forward
Default instance of ForwardMode that doesn't return the primal.
EnzymeCore.ForwardWithPrimal
— Constant
const ForwardWithPrimal
Default instance of ForwardMode that also returns the primal.
EnzymeCore.Reverse
— Constant
const Reverse
Default instance of ReverseMode that doesn't return the primal.
EnzymeCore.ReverseHolomorphic
— Constant
const ReverseHolomorphic
Holomorphic instance of ReverseMode that doesn't return the primal.
EnzymeCore.ReverseHolomorphicWithPrimal
— Constant
const ReverseHolomorphicWithPrimal
Holomorphic instance of ReverseMode that also returns the primal.
EnzymeCore.ReverseSplitNoPrimal
— Constant
const ReverseSplitNoPrimal
Default instance of ReverseModeSplit that doesn't return the primal.
EnzymeCore.ReverseSplitWithPrimal
— Constant
const ReverseSplitWithPrimal
Default instance of ReverseModeSplit that also returns the primal.
EnzymeCore.ReverseWithPrimal
— Constant
const ReverseWithPrimal
Default instance of ReverseMode that also returns the primal.
EnzymeCore.NoPrimal
— Method
NoPrimal(::Mode)
Return a new mode which excludes the primal value.
EnzymeCore.ReverseSplitModified
— Method
ReverseSplitModified(::ReverseModeSplit, ::Val{MB})
Return a new instance of ReverseModeSplit mode where ModifiedBetween is set to MB.
EnzymeCore.ReverseSplitWidth
— Method
ReverseSplitWidth(::ReverseModeSplit, ::Val{W})
Return a new instance of ReverseModeSplit mode where Width is set to W.
EnzymeCore.WithPrimal
— Method
WithPrimal(::Mode)
Return a new mode which includes the primal value.
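A brief sketch of how these toggles compose (assuming Enzyme and EnzymeCore are loaded; `WithPrimal(Reverse)` is expected to behave like the `ReverseWithPrimal` constant documented below):

```julia
using Enzyme, EnzymeCore

# WithPrimal/NoPrimal toggle whether a mode also returns the original value.
m = WithPrimal(Reverse)
EnzymeCore.needs_primal(m)           # true: this mode returns the primal

autodiff(m, x -> x * x, Active(3.0)) # ((6.0,), 9.0): derivatives, then primal

m2 = NoPrimal(m)                     # back to derivative-only
EnzymeCore.needs_primal(m2)          # false
```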
EnzymeCore.clear_err_if_func_written
— Function
clear_err_if_func_written(::Mode)
Return a new mode which doesn't throw an error for attempts to write into an unannotated function object.
EnzymeCore.clear_runtime_activity
— Function
clear_runtime_activity(::Mode)
Return a new mode where runtime activity analysis is deactivated.
EnzymeCore.compiler_job_from_backend
— Function
compiler_job_from_backend(::KernelAbstractions.Backend, F::Type, TT::Type)::GPUCompiler.CompilerJob
Returns a GPUCompiler CompilerJob from a backend as specified by the first argument to the function.
For example, in CUDA one would do:
function EnzymeCore.compiler_job_from_backend(::CUDABackend, @nospecialize(F::Type), @nospecialize(TT::Type))
    mi = GPUCompiler.methodinstance(F, TT)
    return GPUCompiler.CompilerJob(mi, CUDA.compiler_config(CUDA.device()))
end
EnzymeCore.make_zero
— Function
make_zero(::Type{T}, seen::IdDict, prev::T, ::Val{copy_if_inactive}=Val(false))::T
Recursively make a zeroed copy of the value prev of type T. The argument copy_if_inactive specifies what to do if the type T is guaranteed to be inactive: use the primal (the default) or still copy the value.
EnzymeCore.make_zero!
— Function
make_zero!(val::T, seen::IdSet{Any}=IdSet())::Nothing
Recursively set a variable's differentiable fields to zero. Only applicable for mutable types T.
EnzymeCore.make_zero
— Method
make_zero(prev::T)
Helper function to recursively make zero.
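A hedged sketch of the make_zero family in use (assuming Enzyme is loaded): differentiable leaves are zeroed recursively, while inactive data such as Strings is carried through from the primal.

```julia
using Enzyme

x = (a = 1.5, b = [2.0, 3.0], c = "label")
z = Enzyme.make_zero(x)
# z.a == 0.0 and z.b == [0.0, 0.0]; the inactive String is preserved.

# make_zero! zeroes differentiable fields in place (mutable containers only):
buf = [4.0, 5.0]
Enzyme.make_zero!(buf)
# buf == [0.0, 0.0]
```

This is the same shape of zeroed shadow that Duplicated-style annotations expect as an accumulator.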
EnzymeCore.needs_primal
— Method
needs_primal(::Mode)
needs_primal(::Type{Mode})
Returns true if the mode needs the primal value, otherwise false.
EnzymeCore.set_abi
— Function
set_abi(::Mode, ::Type{ABI})
Return a new mode with its ABI set to the chosen type.
EnzymeCore.set_err_if_func_written
— Function
set_err_if_func_written(::Mode)
Return a new mode which throws an error for any attempt to write into an unannotated function object.
EnzymeCore.set_runtime_activity
— Function
set_runtime_activity(::Mode)
set_runtime_activity(::Mode, activity::Bool)
set_runtime_activity(::Mode, config::Union{FwdConfig,RevConfig})
Return a new mode where runtime activity analysis is activated / set to the desired value.
EnzymeCore.within_autodiff
— Function
within_autodiff()
Returns true if within autodiff, otherwise false.
EnzymeCore.EnzymeRules.AugmentedReturn
— Type
AugmentedReturn(primal, shadow, tape)
Augment the primal return value of a function with its shadow, as well as any additional information needed to correctly compute the reverse pass, stored in tape.
Unless specified by the config that a variable is not overwritten, rules must assume any arrays/data structures/etc are overwritten between the forward and the reverse pass. Any floats or variables passed by value are always preserved as is (as are the arrays themselves, just not necessarily the values in the array).
See also augmented_primal
.
EnzymeCore.EnzymeRules.FwdConfig
— Type
FwdConfig{NeedsPrimal, NeedsShadow, Width, RuntimeActivity}
FwdConfigWidth{Width} = FwdConfig{<:Any, <:Any, Width}
Configuration type to dispatch on in custom forward rules (see forward).
- NeedsPrimal and NeedsShadow: boolean values specifying whether the primal and shadow (resp.) should be returned.
- Width: an integer that specifies the number of adjoints/shadows simultaneously being propagated.
- RuntimeActivity: whether runtime activity is enabled.
Getters for the type parameters are provided by needs_primal, needs_shadow, width, and runtime_activity.
EnzymeCore.EnzymeRules.RevConfig
— Type
RevConfig{NeedsPrimal, NeedsShadow, Width, Overwritten, RuntimeActivity}
RevConfigWidth{Width} = RevConfig{<:Any, <:Any, Width}
Configuration type to dispatch on in custom reverse rules (see augmented_primal and reverse).
- NeedsPrimal and NeedsShadow: boolean values specifying whether the primal and shadow (resp.) should be returned.
- Width: an integer that specifies the number of adjoints/shadows simultaneously being propagated.
- Overwritten: a tuple of booleans indicating whether each argument (including the function itself) is modified between the forward and reverse pass (true if potentially modified between).
- RuntimeActivity: whether runtime activity is enabled.
Getters for the type parameters are provided by needs_primal, needs_shadow, width, overwritten, and runtime_activity.
EnzymeCore.EnzymeRules.augmented_primal
— Functionaugmented_primal(::RevConfig, func::Annotation{typeof(f)}, RT::Type{<:Annotation}, args::Annotation...)
Must return an AugmentedReturn type.
- The primal must be the same type as the original return if needs_primal(config), otherwise nothing.
- The shadow must be nothing if needs_shadow(config) is false. If the width is 1, the shadow should be the same type as the original return. If the width is greater than 1, the shadow should be an NTuple of the original return type with length equal to the width.
- The tape can be any type (including Nothing) and is preserved for the reverse call.
EnzymeCore.EnzymeRules.forward
— Functionforward(fwdconfig, func::Annotation{typeof(f)}, RT::Type{<:Annotation}, args::Annotation...)
Calculate the forward derivative. The first argument is a FwdConfig object describing the parameters of the differentiation. The second argument func is the callable to which the rule applies, wrapped either in a Const or, if it is a closure, in a Duplicated. The third argument is the return type annotation, and all other arguments are the annotated function arguments.
EnzymeCore.EnzymeRules.inactive
— Functioninactive(func::typeof(f), args...)
Mark a particular function as always being inactive in both its return result and the function call itself.
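For example (using a hypothetical logging helper; the rule itself is a one-line method definition), a side-effecting function can be excluded from differentiation:

```julia
using Enzyme
import EnzymeCore.EnzymeRules

# Hypothetical logging helper whose call should contribute no derivative.
log_value(x) = (println("x = ", x); nothing)

# Mark it inactive: Enzyme treats both the call and its result as constant.
EnzymeRules.inactive(::typeof(log_value), args...) = nothing

g(x) = (log_value(x); x * x)
```

A reverse-mode call like autodiff(Reverse, g, Active(2.0)) can then differentiate g without attempting to differentiate through log_value.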
EnzymeCore.EnzymeRules.inactive_noinl
— Functioninactive_noinl(func::typeof(f), args...)
Mark a particular function as always being inactive in both its return result and the function call itself, but do not prevent inlining of the function.
EnzymeCore.EnzymeRules.inactive_type
— Methodinactive_type(::Type{Ty})
Mark a particular type Ty
as always being inactive.
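As a sketch (with a hypothetical bookkeeping type), marking a whole type inactive is a one-line method on inactive_type:

```julia
import EnzymeCore.EnzymeRules

# Hypothetical bookkeeping type whose contents should never be differentiated.
struct CallCounter
    n::Base.RefValue{Int}
end

# Any value of this type is treated as constant by activity analysis.
EnzymeRules.inactive_type(::Type{CallCounter}) = true
```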
EnzymeCore.EnzymeRules.needs_primal
— Methodneeds_primal(::FwdConfig)
needs_primal(::RevConfig)
Whether a custom rule should return the original result of the function.
EnzymeCore.EnzymeRules.needs_shadow
— Methodneeds_shadow(::FwdConfig)
needs_shadow(::RevConfig)
Whether a custom rule should return the shadow (derivative) of the function result.
EnzymeCore.EnzymeRules.noalias
— Functionnoalias(func::typeof(f), args...)
Mark a particular function as always being a fresh allocation which does not alias any other accessible memory.
EnzymeCore.EnzymeRules.primal_type
— Methodprimal_type(::FwdConfig, ::Type{<:Annotation{RT}})
primal_type(::RevConfig, ::Type{<:Annotation{RT}})
Compute the expected primal return type given a config and return activity.
EnzymeCore.EnzymeRules.reverse
— Functionreverse(::RevConfig, func::Annotation{typeof(f)}, dret::Active, tape, args::Annotation...)
reverse(::RevConfig, func::Annotation{typeof(f)}, ::Type{<:Annotation}, tape, args::Annotation...)
Takes the gradient of the derivative, the activity annotation, and the tape. If there is an active return, dret is passed as Active{T} with the derivative of the active return value. Otherwise dret is passed as Type{Duplicated{T}}, etc.
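A minimal sketch of a matching augmented_primal/reverse pair for a toy cube function (width 1, active scalar argument; a production rule would also handle other activities and widths):

```julia
using Enzyme
import EnzymeCore.EnzymeRules: augmented_primal, reverse, AugmentedReturn,
                               RevConfigWidth, needs_primal

cube(x) = x * x * x

# Augmented forward pass: nothing needs caching for this rule,
# so the shadow and tape are both `nothing`.
function augmented_primal(config::RevConfigWidth{1}, func::Const{typeof(cube)},
                          ::Type{<:Active}, x::Active)
    primal = needs_primal(config) ? func.val(x.val) : nothing
    return AugmentedReturn(primal, nothing, nothing)
end

# Reverse pass: dret.val carries the adjoint of the return value; return
# a tuple with one derivative entry per (non-function) argument.
function reverse(config::RevConfigWidth{1}, func::Const{typeof(cube)},
                 dret::Active, tape, x::Active)
    return (3 * x.val^2 * dret.val,)
end
```

autodiff(Reverse, cube, Active(2.0)) would then yield the rule's derivative, 3 * 2.0^2 = 12.0, as the gradient entry.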
EnzymeCore.EnzymeRules.shadow_type
— Methodshadow_type(::FwdConfig, ::Type{<:Annotation{RT}})
shadow_type(::RevConfig, ::Type{<:Annotation{RT}})
Compute the expected shadow return type given a config and return activity.
EnzymeTestUtils.ExprAndMsg
— TypeA cunning hack to carry an extra message along with the original expression in a test
EnzymeTestUtils.@test_msg
— Macro@test_msg msg condition kws...
This behaves like Test.@test condition kws..., except that if it fails it also prints the msg. If msg == "", then this is just like @test, and nothing extra is printed.
Examples
julia> @test_msg "It is required that the total is under 10" sum(1:1000) < 10;
Test Failed at REPL[1]:1
  Expression: sum(1:1000) < 10
  Problem: It is required that the total is under 10
  Evaluated: 500500 < 10
ERROR: There was an error during testing

julia> @test_msg "It is required that the total is under 10" error("not working at all");
Error During Test at REPL[2]:1
  Test threw exception
  Expression: error("not working at all")
  Problem: It is required that the total is under 10
  "not working at all"
  Stacktrace:

julia> a = "";

julia> @test_msg a sum(1:1000) < 10;
Test Failed at REPL[153]:1
  Expression: sum(1:1000) < 10
  Evaluated: 500500 < 10
ERROR: There was an error during testing
EnzymeTestUtils.are_activities_compatible
— Methodare_activities_compatible(Tret, activities...) -> Bool
Return true if the return activity type Tret and the activity types activities are compatible.
EnzymeTestUtils.test_forward
— Methodtest_forward(f, Activity, args...; kwargs...)
Test Enzyme.autodiff of f in Forward-mode against finite differences.
f has all constraints of the same argument passed to Enzyme.autodiff, with additional constraints:
- If it mutates one of its arguments, it must return that argument.
Arguments
- Activity: the activity of the return value of f.
- args: Each entry is either an argument to f, an activity type accepted by autodiff, or a tuple of the form (arg, Activity), where Activity is the activity type of arg. If the activity type specified requires a tangent, a random tangent will be automatically generated.
Keywords
- rng::AbstractRNG: The random number generator to use for generating random tangents.
- fdm=FiniteDifferences.central_fdm(5, 1): The finite differences method to use.
- fkwargs: Keyword arguments to pass to f.
- rtol: Relative tolerance for isapprox.
- atol: Absolute tolerance for isapprox.
- testset_name: Name to use for a testset in which all tests are evaluated.
Examples
Here we test a rule for a function of scalars. Because we don't provide an activity annotation for y, it is assumed to be Const.
using Enzyme, EnzymeTestUtils

x, y = randn(2)
for Tret in (Const, Duplicated, DuplicatedNoNeed), Tx in (Const, Duplicated)
    test_forward(*, Tret, (x, Tx), y)
end
Here we test a rule for a function of an array in batch forward-mode:
x = randn(3)
y = randn()
for Tret in (Const, BatchDuplicated, BatchDuplicatedNoNeed),
    Tx in (Const, BatchDuplicated),
    Ty in (Const, BatchDuplicated)

    test_forward(*, Tret, (x, Tx), (y, Ty))
end
EnzymeTestUtils.test_reverse
— Methodtest_reverse(f, Activity, args...; kwargs...)
Test Enzyme.autodiff_thunk of f in ReverseSplitWithPrimal-mode against finite differences.
f has all constraints of the same argument passed to Enzyme.autodiff_thunk, with additional constraints:
- If an Array{<:AbstractFloat} appears in the input/output, then a reshaped version of it may not also appear in the input/output.
Arguments
- Activity: the activity of the return value of f.
- args: Each entry is either an argument to f, an activity type accepted by autodiff, or a tuple of the form (arg, Activity), where Activity is the activity type of arg. If the activity type specified requires a shadow, one will be automatically generated.
Keywords
- rng::AbstractRNG: The random number generator to use for generating random tangents.
- fdm=FiniteDifferences.central_fdm(5, 1): The finite differences method to use.
- fkwargs: Keyword arguments to pass to f.
- rtol: Relative tolerance for isapprox.
- atol: Absolute tolerance for isapprox.
- testset_name: Name to use for a testset in which all tests are evaluated.
Examples
Here we test a rule for a function of scalars. Because we don't provide an activity annotation for y, it is assumed to be Const.
using Enzyme, EnzymeTestUtils

x = randn()
y = randn()
for Tret in (Const, Active), Tx in (Const, Active)
    test_reverse(*, Tret, (x, Tx), y)
end
Here we test a rule for a function of an array in batch reverse-mode:
x = randn(3)
for Tret in (Const, Active), Tx in (Const, BatchDuplicated)
    test_reverse(prod, Tret, (x, Tx))
end
Enzyme.API.fast_math!
— Methodfast_math!(val::Bool)
Whether generated derivatives have fast math on or off, default on.
Enzyme.API.inlineall!
— Methodinlineall!(val::Bool)
Whether to inline all (non-recursive) functions generated by Julia within a single compilation unit. This may improve Enzyme's ability to successfully differentiate code and improve performance of the original and generated derivative program. It often, however, comes with an increase in compile time. This is off by default.
Enzyme.API.instname!
— Methodinstname!(val::Bool)
Whether to add a name to all LLVM values. This may be helpful for debugging generated programs, both primal and derivative. Off by default.
Enzyme.API.looseTypeAnalysis!
— MethodlooseTypeAnalysis!(val::Bool)
Enzyme runs a type analysis to deduce the corresponding types of all values being differentiated. This is necessary to compute correct derivatives of various values. For example, a copy of Float32's requires a different derivative than a memcpy of Float64's, Ptr's, etc. In some cases Enzyme may not be able to deduce all the types necessary and will throw an unknown type error. If this is the case, open an issue. One can silence these issues by setting looseTypeAnalysis!(true), which tells Enzyme to make its best guess. This will remove the error and allow differentiation to continue; however, it may produce incorrect results. Alternatively, one can consider increasing the space of the evaluated type lattice, which gives Enzyme more time to run a more thorough analysis, through the use of maxtypeoffset!.
Enzyme.API.maxtypedepth!
— Methodmaxtypedepth!(val::Int)
Enzyme runs a type analysis to deduce the corresponding types of all values being differentiated. This is necessary to compute correct derivatives of various values. To ensure this analysis terminates, it operates on a finite lattice of possible states. This function sets the maximum depth into a type that Enzyme will consider. A smaller value will cause type analysis to run faster, but may result in some necessary types not being found and result in unknown type errors. A larger value may result in unknown type errors being resolved by searching a larger space, but may run longer. The default setting is 6.
Enzyme.API.maxtypeoffset!
— Methodmaxtypeoffset!(val::Int)
Enzyme runs a type analysis to deduce the corresponding types of all values being differentiated. This is necessary to compute correct derivatives of various values. To ensure this analysis terminates, it operates on a finite lattice of possible states. This function sets the maximum offset into a type that Enzyme will consider. A smaller value will cause type analysis to run faster, but may result in some necessary types not being found and result in unknown type errors. A larger value may result in unknown type errors being resolved by searching a larger space, but may run longer. The default setting is 512.
Enzyme.API.memmove_warning!
— Methodmemmove_warning!(val::Bool)
Whether to issue a warning when differentiating memmove. Off by default.
Enzyme.API.printactivity!
— Methodprintactivity!(val::Bool)
A debugging option for developers of Enzyme. If one sets this flag prior to the first differentiation of a function, Enzyme will print (to stderr) a log of all decisions made during Activity Analysis (the analysis which determines what values/instructions are differentiated). This may be useful for debugging MixedActivity errors, correctness errors, and performance errors. Off by default.
Enzyme.API.printall!
— Methodprintall!(val::Bool)
A debugging option for developers of Enzyme. If one sets this flag prior to the first differentiation of a function, Enzyme will print (to stderr) the LLVM function being differentiated, as well as all generated derivatives immediately after running Enzyme (but prior to any other optimizations). Off by default.
Enzyme.API.printdiffuse!
— Methodprintdiffuse!(val::Bool)
A debugging option for developers of Enzyme. If one sets this flag prior to the first differentiation of a function, Enzyme will print (to stderr) information about each LLVM value – specifically whether it and its shadow are required for computing the derivative. In contrast to printunnecessary!, this flag prints a debug log of the analysis which determines, for each value and shadow value, whether it can find a user which would require it to be kept around (rather than being deleted). This log is produced prior to any cache optimizations and reflects Differential Use Analysis. This may be helpful for debugging caching, phi node deletion, performance, and other errors. Off by default.
Enzyme.API.printperf!
— Methodprintperf!(val::Bool)
A debugging option for developers of Enzyme. If one sets this flag prior to the first differentiation of a function, Enzyme will print (to stderr) performance information about generated derivative programs. It will provide debug information that explains why particular values are cached for the reverse pass, and thus require additional computation/storage. This is particularly helpful for debugging derivatives which OOM or otherwise run slowly. Off by default.
Enzyme.API.printtype!
— Methodprinttype!(val::Bool)
A debugging option for developers of Enzyme. If one sets this flag prior to the first differentiation of a function, Enzyme will print (to stderr) a log of all decisions made during Type Analysis (the analysis by which Enzyme determines the type of all values in the program). This may be useful for debugging correctness errors, illegal type analysis errors, insufficient type information errors, and performance errors. Off by default.
Enzyme.API.printunnecessary!
— Methodprintunnecessary!(val::Bool)
A debugging option for developers of Enzyme. If one sets this flag prior to the first differentiation of a function, Enzyme will print (to stderr) information about each LLVM value – specifically whether it and its shadow are required for computing the derivative. In contrast to printdiffuse!, this flag prints the final results after running cache optimizations such as minCut (see "Recompute vs Cache Heuristics" from this paper and slides 31-33 from this presentation for a description of the caching algorithm). This may be helpful for debugging caching, phi node deletion, performance, and other errors. Off by default.
Enzyme.API.strictAliasing!
— MethodstrictAliasing!(val::Bool)
Whether Enzyme's type analysis will assume strict aliasing semantics. When strict aliasing semantics are on (the default), Enzyme can propagate type information up through conditional branches. This may lead to illegal type errors when analyzing code with unions. Disabling strict aliasing will enable these union types to be correctly analyzed. However, it may lead to errors where sufficient type information cannot be deduced. One can turn these insufficient type information errors into warnings by calling looseTypeAnalysis!(true), which tells Enzyme to use its best guess in such scenarios.
Enzyme.API.strong_zero!
— Methodstrong_zero!(val::Bool)
Whether multiplication by zero should enforce a zero result, even when multiplying against a NaN or infinity. Necessary for some programs in which a value has a zero derivative because it is unused, even if it would otherwise have an infinite or NaN derivative.
Enzyme.API.typeWarning!
— MethodtypeWarning!(val::Bool)
Whether to print a warning when Type Analysis learns information about a value's type which cannot be represented in the current size of the lattice. See maxtypeoffset! for more information. Off by default.