Oh, this sounds interesting! Have you seen jupytergraffiti? Do you think something like that would work, in terms of “recording” comments/questions?
We’ve talked a bit about “live” documentation that can reference values/variables in an actual execution, and also about scripted/macro-recorded interactions that could be used to guide a user through a particular query. If we put these ideas together they point to another important use case/scenario it would be good to start thinking about, perhaps in the context of this mini-article: peer review.
As a reviewer, I want to be able to explore/interact with a pipeline as part of my review process. When I have comments/questions for the author, rather than just writing those down using unstructured text, it would be nice if I could embed those comments/questions directly into the computation – highlighting bits of data/output that are of interest (or problematic), and then perhaps annotating the feedback provided by Fluid to illustrate my concern/question. E.g., why is this data element being discarded here?
By making these interactions/queries persistent, they can be shared with the author, who can then respond in the same context. So: something like a notebook-style view where reviewing takes place, except that review comments are integrated into the computational process itself and can reference variables/values in specific runtime contexts.
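To make the idea a little more concrete, here is a minimal sketch (in Python, with entirely invented names — this is not how Fluid represents anything) of a persistent review annotation anchored to a value in a specific run, where the author's reply shares the reviewer's anchor and so lands "in the same context":

```python
from dataclasses import dataclass, field

@dataclass
class Anchor:
    """A reference into a specific execution: which run, which value."""
    run_id: str
    path: str          # hypothetical selector into the computation
    value_snapshot: object

@dataclass
class Annotation:
    author: str
    text: str
    anchor: Anchor
    replies: list = field(default_factory=list)

    def reply(self, author: str, text: str) -> "Annotation":
        # A reply reuses the original anchor, so the author responds
        # in the same runtime context the reviewer highlighted.
        r = Annotation(author, text, self.anchor)
        self.replies.append(r)
        return r

# Reviewer highlights a discarded data element and asks about it:
note = Annotation(
    author="reviewer",
    text="Why is this data element being discarded here?",
    anchor=Anchor(run_id="run-42", path="clean.dropped[3]", value_snapshot=-1),
)
note.reply("author", "It's a sentinel value; the filter removes negatives.")
```

The interesting design question is what `path` should be — in a system with provenance tracking, it could be a stable selector into the computation rather than a brittle positional index, so the annotation survives re-runs.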