Large scale inference docs #994
base: main
Conversation
@hyanwong Can I have a read-through here?
Codecov Report: All modified and coverable lines are covered by tests ✅

```
@@           Coverage Diff           @@
##             main     #994   +/-   ##
=======================================
  Coverage   93.17%   93.17%
=======================================
  Files          18       18
  Lines        6374     6374
  Branches     1088     1088
=======================================
  Hits         5939     5939
  Misses        296      296
  Partials      139      139
```
952dbb9 to 37645ef
LGTM
:param int min_work_per_job: The minimum amount of work (as a count of genotypes) to
    allocate to a single parallel job. If the amount of work in a group of ancestors
    exceeds this level it will be broken up into parallel partitions, subject to
    the constriant of `max_num_partitions`.
typo, constriant
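The partitioning rule in the docstring hunk above could be sketched roughly as follows. This is a hypothetical illustration only, not tsinfer's implementation: `partition_group` and its greedy fill strategy are invented for the example; only the two parameter names come from the docstring.

```python
def partition_group(work_per_ancestor, min_work_per_job, max_num_partitions):
    """Split a group of ancestors (given their per-ancestor genotype counts)
    into parallel partitions, so that each partition carries at least
    min_work_per_job genotypes, capped at max_num_partitions partitions."""
    total = sum(work_per_ancestor)
    # How many partitions the total work supports, within the cap.
    n_parts = max(1, min(total // min_work_per_job, max_num_partitions))
    target = total / n_parts
    partitions, current, current_work = [], [], 0
    for index, work in enumerate(work_per_ancestor):
        current.append(index)
        current_work += work
        # Close a partition once it reaches the target, keeping the
        # final partition open for the remaining ancestors.
        if current_work >= target and len(partitions) < n_parts - 1:
            partitions.append(current)
            current, current_work = [], 0
    if current:
        partitions.append(current)
    return partitions


# Ten ancestors of 100 genotypes each, min 250 genotypes per job:
# 1000 // 250 = 4 partitions.
parts = partition_group([100] * 10, min_work_per_job=250, max_num_partitions=8)
assert len(parts) == 4

# A group below the threshold stays as a single job.
assert partition_group([50] * 4, min_work_per_job=1000, max_num_partitions=8) == [[0, 1, 2, 3]]
```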
Great work, thanks @benjeffery
It's all quite complicated, and I'm not sure I have a feel for how all the parts fit together, but the descriptions are detailed enough that I could follow them without problems.
I guess at some future point we might want a schematic, but that can wait for now. I reckon you can merge this and get someone (e.g. Duncan or Savita?) to try it out.
entire genotype array for the contig being inferred needs to fit in RAM.
This is the high-water mark for memory usage in tsinfer.
Note the `genotype_encoding` argument, setting this to
{class}`tsinfer.GenotypeEncoding.ONE_BIT` reduces the memory footprint of
Do we need to say that this can't be used if there is missing data?
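As a rough illustration of both points (the memory saving and why missing data is a problem), a one-bit encoding amounts to packing a biallelic 0/1 genotype matrix at one bit per genotype, leaving no spare state for a third "missing" value. A minimal numpy sketch, not tsinfer's actual encoding code:

```python
import numpy as np


def pack_genotypes(G):
    """Pack a 0/1 genotype matrix into one bit per genotype.

    A single bit can only represent the two allele states, so any
    missing-data marker (e.g. -1) cannot be encoded this way.
    """
    G = np.asarray(G, dtype=np.uint8)
    if not np.isin(G, [0, 1]).all():
        raise ValueError("one-bit packing needs 0/1 genotypes (no missing data)")
    return np.packbits(G, axis=1)


G = np.random.randint(0, 2, size=(100, 800), dtype=np.uint8)
packed = pack_genotypes(G)
assert packed.nbytes * 8 == G.nbytes  # eight-fold reduction over uint8
assert (np.unpackbits(packed, axis=1) == G).all()  # lossless round trip
```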
The plot below shows the number of ancestors matched in each group for a typical
human data set:

```{figure} _static/ancestor_grouping.png |
May be worth indicating that the group number is ordered by time, so that group 0 represents the oldest ancestors?
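That ordering convention can be shown with a small sketch. This is a hypothetical illustration only (the real grouping also accounts for which ancestors can be matched independently of each other); it just assigns group numbers by descending age, so group 0 holds the oldest ancestors:

```python
def group_indices_by_time(ancestor_times):
    """Map each ancestor's time to a group number, oldest time = group 0."""
    distinct = sorted(set(ancestor_times), reverse=True)  # oldest first
    group_of = {t: g for g, t in enumerate(distinct)}
    return [group_of[t] for t in ancestor_times]


# Ancestors at times 3.0 (oldest), 2.0 and 1.0 (youngest):
assert group_indices_by_time([2.0, 3.0, 1.0, 3.0]) == [1, 0, 2, 0]
```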
{meth}`match_ancestors_batch_group_finalise` will then insert the matches and
output the tree sequence to `work_dir`.

At anypoint the process can be resumed from the last successfully completed call to
"anypoint" -> "any point"
At anypoint the process can be resumed from the last successfully completed call to
{meth}`match_ancestors_batch_groups`. As the tree sequences in `work_dir` checkpoint the
progress.
I'm not sure I understand / can parse this last sentence
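The resumption mechanism being described can be sketched generically. Everything here is invented for illustration (in particular the `ancestors_{group}.trees` filename pattern is not tsinfer's); the point is only that a tree sequence written per finished group lets an interrupted run restart at the first group with no output file:

```python
import os


def first_unfinished_group(work_dir, num_groups):
    """Return the index of the first group whose checkpoint tree sequence
    is missing from work_dir; resume matching from that group."""
    for group in range(num_groups):
        path = os.path.join(work_dir, f"ancestors_{group}.trees")
        if not os.path.exists(path):
            return group
    return num_groups  # all groups already matched
```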
Fixes #840