Conversation
🤖 OpenAI High-level Summary

The changes made to the codebase include improvements in logic, security, performance, data races, consistency, error handling, maintainability, modularity, complexity, and optimization. The changeset includes modifications to various files.
Hey @lauraporta excellent work! This looks very clean and thoroughly tested. The demo example runs well and all tests pass locally. Only minor suggestions. At some point it would be nice to talk about the method (e.g. Andermann et al., 2011) as it looks interesting.
```diff
@@ -30,7 +31,7 @@ def __init__(self, data: dict, is_allen: bool = True):
         self.neuropil_coeficient = self._unpack_data(data["r_neu"], data)
         logging.info("Unpacked r_neu")

-        self.stim = self._unpack_data(data["stim"], data)
+        self.stim: np.ndarray = self._unpack_data(data["stim"], data)
```
Nice typing! I am not very sure about the difference between using the `typing` module or the `numpy.typing` module and the standard types (e.g. `typing.Dict` vs. `dict`, or `np.ndarray` vs. `numpy.typing.NDArray`). However, down the line it might be useful to use the typing-module type hints. I think they let you do a bit more (e.g. `NDArray[np.integer]` or `typing.Dict[str, Dict[str, int]]`), though this might also be possible with the Python standard types. But they don't seem to have everything covered yet, like `str` or `int`.
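For reference, here is a quick side-by-side of the two styles (purely illustrative; none of these variables come from the PR):

```python
# Illustrative comparison only; these variables are made up, not from the PR.
from typing import Dict

import numpy as np
from numpy.typing import NDArray

# typing module vs. builtin generics (the latter need Python >= 3.9)
stats_typing: Dict[str, Dict[str, int]] = {"roi_0": {"n_frames": 100}}
stats_builtin: dict[str, dict[str, int]] = {"roi_0": {"n_frames": 100}}

# plain np.ndarray says nothing about dtype; NDArray can be parametrised
stim_plain: np.ndarray = np.zeros(10)
stim_float: NDArray[np.float64] = np.zeros(10)
stim_int: NDArray[np.int_] = np.arange(10)
```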
rich.print("Something went wrong 😱") | ||
logging.exception(e) | ||
|
||
return inner_function |
nice!
```python
# pipeline draft
start_logging()

# TODO: add TUI or GUI functionality to get input from user
```
I have Adam in my brain; this could go into a new issue.
logging.info("Analysis finished") | ||
logging.info(f"Updated photon_data object: {photon_data}") | ||
|
||
with open(f"{folder_name}_data.pickle", "wb") as f: |
If I understand correctly, this saves the results in the same directory as the original data file. If it makes sense to, down the line this could use the SWC-Blueprint setup to make a new `derivatives` folder where all output is saved. At some point it would be cool to discuss how this pipeline can be generalised; it seems like a lot is here for 2p analysis.
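As a rough sketch of that idea (the layout and paths below are assumptions based on the SWC-Blueprint convention, not this repo's actual structure):

```python
# Hedged sketch: write outputs under a "derivatives" tree that mirrors the
# raw data location, instead of next to the original file. Paths are placeholders.
from pathlib import Path

project_root = Path("/path/to/project")
raw_data_path = project_root / "rawdata" / "sub-001" / "ses-001" / "funcimg" / "data.mat"

# mirror the sub-/ses- structure under derivatives/
output_dir = project_root / "derivatives" / raw_data_path.parent.relative_to(
    project_root / "rawdata"
)
output_dir.mkdir(parents=True, exist_ok=True)

output_file = output_dir / f"{raw_data_path.stem}_data.pickle"
```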
```python
    return result


def fit_2D_gaussian_to_data(
```
For readability I would consider ordering the functions in the file in their call order (unless I have missed something): `fit_2D_gaussian_to_data` > `single_fit` > `create_gaussian_matrix` > `elliptical_gaussian_andermann`.
```python
        An array containing the spatial frequencies of the stimuli.
    tfs : np.ndarray
        An array containing the temporal frequencies of the stimuli.
    response_matrix : np.ndarray
```
Could maybe give a tiny bit more detail here, e.g. the dimension sizes in terms of the variables: is `response_matrix` an sfs-by-tfs sized matrix (i.e. the fitted 2D gaussian of this matrix is a joint distribution on the probability of the neuron firing at some given spatial frequency and temporal frequency)?
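For instance, assuming the sfs-by-tfs reading above is correct, the docstring entry could spell the shape out along these lines:

```python
    response_matrix : np.ndarray
        Response of the neuron for each stimulus condition, with shape
        (len(sfs), len(tfs)); entry [i, j] is the response at spatial
        frequency sfs[i] and temporal frequency tfs[j]. This is the
        surface the 2D gaussian is fitted to.
```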
```python
𝜻_power_law_exp = 0.5

# Expected output
expected_output = 0.9999000049998332
```
Is there any way to test over a range of values? I guess not without re-implementing the function. I wonder if a couple of simple / extreme cases could be added for fuller coverage. It's not super important though.
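If it is worth doing, `pytest.mark.parametrize` would be one way to add such cases; in this sketch the function under test and any extra expected values are placeholders, since the real ones are not visible in this diff:

```python
# Placeholder sketch: `function_under_test` stands in for the real function;
# additional expected values would have to be derived independently of it.
import pytest


@pytest.mark.parametrize(
    "zeta_power_law_exp, expected_output",
    [
        (0.5, 0.9999000049998332),  # the case already covered by this test
        # (0.0, ...),  # a trivial/degenerate exponent
        # (5.0, ...),  # an extreme exponent
    ],
)
def test_over_a_range_of_exponents(zeta_power_law_exp, expected_output):
    result = function_under_test(zeta_power_law_exp)  # placeholder call
    assert result == pytest.approx(expected_output)
```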
), "Optimized parameters above upper bounds" | ||
|
||
|
||
def test_fit_2D_gaussian_to_data(parameters_to_fit): |
Would it make sense to also do something like creating a 2D multivariate gaussian with known means and covariances and checking that the fit matches the pre-set means and covariances? This could provide a bit more flexibility on the input ranges to test. But maybe it is not as simple as this due to the particulars of this fitting function, specialised for this use case.
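A self-contained version of that idea, using a generic 2D gaussian and `scipy.optimize.curve_fit` rather than the project's own fitting code (so only the principle carries over, not the exact API):

```python
# Generic illustration: fit a noiseless 2D gaussian built from known
# parameters and check the fit recovers them. Not the project's fitting code.
import numpy as np
from scipy.optimize import curve_fit


def gaussian_2d(xy, amplitude, x0, y0, sigma_x, sigma_y):
    x, y = xy
    return amplitude * np.exp(
        -((x - x0) ** 2 / (2 * sigma_x**2) + (y - y0) ** 2 / (2 * sigma_y**2))
    )


def test_fit_recovers_known_parameters():
    true_params = (1.0, 0.5, 4.0, 0.3, 2.0)
    x, y = np.meshgrid(np.linspace(0, 1, 20), np.linspace(0, 8, 20))
    response = gaussian_2d((x.ravel(), y.ravel()), *true_params)

    fitted, _ = curve_fit(
        gaussian_2d, (x.ravel(), y.ravel()), response, p0=(0.8, 0.4, 3.0, 0.2, 1.5)
    )
    np.testing.assert_allclose(fitted, true_params, rtol=1e-3)
```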
```diff
@@ -0,0 +1,45 @@
+import pathlib
+from functools import lru_cache
```
The `test_data` folder could be moved to a folder called `tests/data`.
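If the folder does move, the cached path helper could point at it with something like this (names here are guesses, not the actual contents of the new file):

```python
# Hypothetical sketch: resolve test data relative to the tests/ package
# so the folder can live at tests/data. Names are placeholders.
import pathlib
from functools import lru_cache


@lru_cache(maxsize=None)
def test_data_dir() -> pathlib.Path:
    return pathlib.Path(__file__).parent / "data"
```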
Joe, thanks for all your comments and suggestions. Right now I will not address them; I will add them to the backlog of possible improvements that I could do in the future.
@openai: ignore
☝🏻 stops future reviews of the bot
👉🏻 @ Future human reviewer
I am experimenting with using a ChatGPT bot to have PRs reviewed, while waiting for Copilot X to arrive.
The bot is not able to process files that are too large, and unfortunately those are the core of the PR.
These files are:
spatial_freq_temporal_freq.py
photon_data.py
So no summary and no comments on the code are provided for these two files. Therefore I have updated the bot's descriptions, filling them in with the missing info.
I also noticed that it applies comments relevant to one file to the wrong file. I'll see if I can move them around. Sorry about that.
N.B.: a previous PR (#25) had many relevant suggestions that are still applicable to this one. They are not addressed in this PR since they require extensive refactoring. This PR is mostly focused on adding new functionalities and writing better tests.
Finally, these changes were actually done while developing the dashboard, which you can find in the commits. Please ignore any changes related to it. I have split out the changes related to the analysis and put them all in this PR to keep things cleaner.
Laura's summary

Broad description of the main changes:

- The demo can be run with `python3 demo_cly.py` in your environment.
- `app.py` substitutes what `main.py` was and now allows us to save the analysis result as an object to be then used by the dashboard.
- The analysis class is `FrequencyResponsiveness`, used with the name `responsiveness` for short when instantiated. Its data live in `responsiveness.data`, which is a `PhotonData` object; anything else in `responsiveness` is a method that takes care of one bit of the analysis. On instantiation, `FrequencyResponsiveness` has the attribute `data` evaluated.
- `PhotonData` has been changed to accommodate these changes. Now all the attributes that will be filled by the analysis are instantiated at the creation of the `PhotonData` object, and the docstring of the class lists all possible attributes. I added some checks to evaluate the internal consistency of the datasets against expected results.
- The analysis is run through `__call__`, by simply calling the instantiated object: `responsiveness()`. The analysis pipeline consists of: 1. calculating statistics (not changed); 2. fitting the response to a gaussian and storing the values in `data`. The methods dealing with this part are what has been added to the class.
- The gaussian fitting equations live in `gaussian_calculations.py`. They are static methods describing the gaussian equation in use and the way to fit the data to it.
- Mock data now lives in `mock_data/`. This new feature required changes in the tests, in `conftest.py`, and the creation of a shared set of helper methods in `generate_mock_data.py`.
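Put together, the workflow described above looks roughly like this (a sketch only: constructor signatures, import paths, the return value of `responsiveness()`, and the `data`/`folder_name` variables are assumptions, not the actual API):

```python
# Rough sketch of the described workflow; constructor signatures, module
# paths and the return value of responsiveness() are assumptions, and
# `data` / `folder_name` are placeholders.
import pickle

from photon_data import PhotonData                       # defined in photon_data.py
from spatial_freq_temporal_freq import FrequencyResponsiveness

photon_data = PhotonData(data)                            # analysis attributes created up front
responsiveness = FrequencyResponsiveness(photon_data)     # responsiveness.data is the PhotonData

photon_data = responsiveness()                            # __call__: statistics + gaussian fitting

with open(f"{folder_name}_data.pickle", "wb") as f:       # result reused by the dashboard
    pickle.dump(photon_data, f)
```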
Summary by OpenAI

Release Notes:

- The `main` function is replaced with a new `app.py` file that allows the application to be started by running `python3 demo_cly.py`.
- New gaussian fitting code added in `gaussians_calculations.py`.