# 0.2.0
## New features
Flexbox "gap" and AlignContent::SpaceEvenly
The `gap` property is now supported on flex containers. This can make it much easier to create even spacing or "gutters" between nodes.

Additionally, we have added a `SpaceEvenly` variant to the `AlignContent` enum to support evenly spaced justification in the cross axis (equivalent to `align-content: space-evenly` in CSS).
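As a rough illustration (a sketch only: the exact field names and the `Dimension`/`FlexWrap` types used here are assumptions about the 0.2 API), a container style using both features might look like this:

```rust
use taffy::prelude::*;

// A flex container with 10px gutters between items in both axes, and with
// its lines distributed evenly along the cross axis. Field names and types
// are assumptions and may need adjusting for your version.
let container_style = Style {
    flex_wrap: FlexWrap::Wrap, // align_content only applies to multi-line containers
    gap: Size { width: Dimension::Points(10.0), height: Dimension::Points(10.0) },
    align_content: AlignContent::SpaceEvenly,
    ..Style::DEFAULT
};
```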
### Debug module and cargo feature
Two debugging features have been added:
- `taffy::debug::print_tree(&Taffy, root)` - This will print a debug representation of the computed layout of an entire node tree (starting at `root`), which can be useful for debugging layouts.
- A `debug` cargo feature. This enables debug logging of the layout computation process itself (this is probably mainly useful for those working on Taffy itself).
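As a usage sketch (assuming an existing `Taffy` instance named `taffy` and a root node `root`; whether `print_tree` requires the `debug` feature to be enabled may vary):

```rust
// Compute the layout, then print a debug representation of the whole tree.
taffy.compute_layout(root, Size::MAX_CONTENT).unwrap();
taffy::debug::print_tree(&taffy, root);
```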
## Performance improvements
A number of performance improvements have landed since taffy 0.1:
- Firstly, our custom `taffy::forest` storage implementation was ripped out and replaced with a much simpler implementation using the `slotmap` crate. This led to performance increases of up to 90%.
- Secondly, the caching implementation was improved by upping the number of cache slots from 2 to 4 and tweaking how computed results are allocated to cache slots to better match the actual usage patterns of the flexbox layout algorithm. This had a particularly dramatic effect on deep hierarchies (which often involve recomputing the same results repeatedly), fixing the exponential blowup that was previously exhibited on these trees and improving performance by over 1000x in some cases!
### Benchmarks vs. Taffy 0.1
Benchmark | Taffy 0.1 | Taffy 0.2 | % change (0.1 -> 0.2) |
---|---|---|---|
wide/1_000 nodes (2-level hierarchy) | 699.18 µs | 445.01 µs | -36.279% |
wide/10_000 nodes (2-level hierarchy) | 8.8244 ms | 7.1313 ms | -16.352% |
wide/100_000 nodes (2-level hierarchy) | 204.48 ms | 242.93 ms | +18.803% |
deep/4000 nodes (12-level hierarchy) | 5.2320 s | 2.7363 ms | -99.947% |
deep/10_000 nodes (14-level hierarchy) | 75.207 s | 6.9415 ms | -99.991% |
deep/100_000 nodes (17-level hierarchy) | - | 102.72 ms | - |
deep/1_000_000 nodes (20-level hierarchy) | - | 799.35 ms | - |
(note that the table above contains a mixture of units: microseconds, milliseconds, and seconds)
As you can see, we have actually regressed slightly in the "wide" benchmarks (where all nodes are siblings of a single parent node). It should be noted, though, that our results in these benchmarks are still very fast; on the 10,000 node benchmark, which we consider to be the most realistic size, the result is still measured in single-digit milliseconds.
However, in the "deep" benchmarks we see dramatic improvements. The previous version of Taffy suffered from exponential blowup in the case of deeply nested hierarchies. This has resulted in somewhat silly improvements like the 10,000 node (14-level) hierarchy, where Taffy 0.2 is over 10,000 times faster than Taffy 0.1. We've also included results with larger numbers of nodes (although you're unlikely to need that many) to demonstrate that this scalability continues up to even deeper levels of nesting.
### Benchmarks vs. Yoga
Yoga benchmarks were run via its Node.js bindings (the `yoga-layout-prebuilt` npm package). They were run a few times manually, and it was verified that the variance in the numbers between runs was minimal. It should be noted that this is using an old version of Yoga.
Benchmark | Yoga | Taffy 0.2 |
---|---|---|
yoga/10 nodes (1-level hierarchy) | 45.1670 µs | 33.297 ns |
yoga/100 nodes (2-level hierarchy) | 134.1250 µs | 336.53 ns |
yoga/1_000 nodes (3-level hierarchy) | 1.2221 ms | 3.8928 µs |
yoga/10_000 nodes (4-level hierarchy) | 13.8672 ms | 36.162 µs |
yoga/100_000 nodes (5-level hierarchy) | 141.5307 ms | 1.6404 ms |
(note that the table above contains a mixture of units: nanoseconds, microseconds, and milliseconds)
While we're trying not to get too excited (there could easily be an issue with our benchmarking methodology which makes this an unfair comparison), we are pleased to see that we seem to be anywhere between 100x and 1000x faster depending on the node count!
## Breaking API changes
### Node creation changes
- `taffy::Node` is now unique only to the `Taffy` instance from which it was created.
- Renamed `Taffy.new_node(..)` -> `Taffy.new_with_children(..)`
- Renamed `Taffy.new_leaf()` -> `Taffy.new_leaf_with_measure()`
- Added `taffy::node::Taffy.new_leaf()`, which allows the creation of new leaf nodes without having to supply a measure function
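A rough sketch of the renamed constructors (the exact signatures and the use of `unwrap` here are assumptions based on the list above):

```rust
use taffy::prelude::*;

let mut taffy = Taffy::new();

// Leaf nodes no longer require a measure function:
let child = taffy.new_leaf(Style::DEFAULT).unwrap();

// Container nodes are now created via `new_with_children` (formerly `new_node`):
let parent = taffy.new_with_children(Style::DEFAULT, &[child]).unwrap();
```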
### Error handling/representation improvements
- Renamed `taffy::Error` -> `taffy::error::TaffyError`
- Replaced `taffy::error::InvalidChild` with a new `InvalidChild` variant of `taffy::error::TaffyError`
- Replaced `taffy::error::InvalidNode` with a new `InvalidNode` variant of `taffy::error::TaffyError`
- The following methods now return `Err(TaffyError::ChildIndexOutOfBounds)` instead of panicking:
  - `taffy::Taffy::remove_child_at_index`
  - `taffy::Taffy::replace_child_at_index`
  - `taffy::Taffy::child_at_index`
- `Taffy::remove` now returns a `Result<usize, Error>`, to indicate whether the operation was successful (and if it was, which ID was invalidated).
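For example (a minimal sketch; error payloads are elided), an out-of-bounds child lookup can now be handled rather than causing a panic:

```rust
// Out-of-bounds child access now yields an Err instead of panicking.
match taffy.child_at_index(parent, 99) {
    Ok(node) => println!("child 99: {node:?}"),
    Err(err) => eprintln!("lookup failed: {err:?}"), // e.g. TaffyError::ChildIndexOutOfBounds
}
```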
### Some uses of `Option<f32>` replaced with a new `AvailableSpace` enum
A new enum, `taffy::layout::AvailableSpace`, has been added.
The definition looks like this:
```rust
/// The amount of space available to a node in a given axis
pub enum AvailableSpace {
    /// The amount of space available is the specified number of pixels
    Definite(f32),
    /// The amount of space available is indefinite and the node should be laid out under a min-content constraint
    MinContent,
    /// The amount of space available is indefinite and the node should be laid out under a max-content constraint
    MaxContent,
}
```
This enum is now used instead of `Option<f32>` when calling `Taffy.compute_layout` (if you were previously passing `Size::NONE` to `compute_layout`, you will need to change this to `Size::MAX_CONTENT`).

And a different instance of it is passed as a new second parameter to `MeasureFunc`. `MeasureFunc`s may choose to use this parameter in their computation or ignore it as they see fit. The canonical example of when it makes sense to use it is when laying out text. If `MinContent` has been passed in the axis in which the text is flowing (i.e. the horizontal axis for left-to-right text), then you should line-break at every possible opportunity (e.g. at all word boundaries), whereas if `MaxContent` has been passed then you shouldn't line-break at all.
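Putting the pieces together, a migration sketch might look like the following (the `MeasureFunc::Raw` variant, the exact measure-function signature, and the prelude import are assumptions and may differ slightly in your version):

```rust
use taffy::prelude::*;

let mut taffy = Taffy::new();

// A text-like leaf whose measure function inspects the new AvailableSpace
// parameter (assumed here to arrive as a Size<AvailableSpace>).
let text = taffy.new_leaf_with_measure(
    Style::DEFAULT,
    MeasureFunc::Raw(|_known_dimensions, available_space| {
        let width = match available_space.width {
            AvailableSpace::Definite(width) => width, // fit the text into `width` pixels
            AvailableSpace::MinContent => 50.0,       // hypothetical width of the longest word
            AvailableSpace::MaxContent => 400.0,      // hypothetical width of the unbroken line
        };
        Size { width, height: 20.0 }
    }),
).unwrap();

let root = taffy.new_with_children(Style::DEFAULT, &[text]).unwrap();

// compute_layout now takes available space rather than Option<f32>:
// Size::MAX_CONTENT replaces the old Size::NONE.
taffy.compute_layout(root, Size::MAX_CONTENT).unwrap();
```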
### Builder methods are now `const` where possible
- Several convenience constants have been defined: notably `Style::DEFAULT`
- `Size<f32>.zero()` is now `Size::<f32>::ZERO`
- `Point<f32>.zero()` is now `Point::<f32>::ZERO`
- `Size::undefined()` is now `Size::NONE`
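A small illustration of what this enables (a sketch; import paths are assumptions):

```rust
use taffy::geometry::Point;
use taffy::prelude::*;

// Because these are constants, they can be evaluated at compile time,
// e.g. for shared default styles.
const DEFAULT_STYLE: Style = Style::DEFAULT;
const ORIGIN: Point<f32> = Point::<f32>::ZERO;
```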
### Removals
- Removed `taffy::forest::Forest`. `taffy::node::Taffy` now handles its own storage using a slotmap (which comes with a performance boost of up to 90%).
- Removed `taffy::number::Number`. `Option<f32>` is used instead.
  - The associated public `MinMax` and `OrElse` traits have also been removed; these should never have been public.
- Removed unused dependencies `hashbrown`, `hash32`, and `typenum`. `slotmap` is now the only required dependency (`num_traits` and `arrayvec` are also required if you wish to use taffy in a `no_std` environment).
## Fixes
- Miscellaneous correctness fixes which align our implementation with Chrome:
  - Nodes can only ever have one parent
  - Fixed rounding of fractional values to follow latest Chrome - values are now rounded the same regardless of their position
  - Fixed computing free space when using both `flex-grow` and a minimum size
  - Padding is now only subtracted when determining the available space if the node size is unspecified, following section 9.2.2 of the flexbox spec
- `MeasureFunc` (and hence `NodeData` and hence `Forest` and hence the public `Taffy` type) are now `Send` and `Sync`, enabling their use in async and parallel applications
- Taffy can now be vendored using `cargo-vendor` (README.md is now included in the package).