New sliding window strategy #4

Open
AN-AN-369 opened this issue Jul 6, 2023 · 3 comments

@AN-AN-369 commented Jul 6, 2023

Where is your new sliding window strategy used? I can't find it in the code.

@Ziyan-Huang (Owner)

Hi @AN-AN-369,

Thank you for your interest in my new sliding window strategy. The implementation of this strategy can be found in the file at this link: https://github.com/Ziyan-Huang/FLARE22/blob/main/nnunet/network_architecture/neural_network.py

In particular, I would draw your attention to two key functions:

  1. _compute_steps_for_sliding_window
  2. _internal_predict_3D_3Dconv_tiled

These functions contain the core implementation of the new sliding window strategy. Please have a look and let me know if you have any further questions.
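For orientation, here is a minimal sketch of what the baseline nnU-Net version of `_compute_steps_for_sliding_window` does: window start positions are spread evenly along each axis so that the stride never exceeds `step_size * patch_size`, with the first and last windows flush against the image borders. This is a simplified re-implementation for illustration only, not the modified strategy itself; for the actual changes, see the two functions in the linked file.

```python
import numpy as np

def compute_steps_for_sliding_window(patch_size, image_size, step_size=0.5):
    """Sketch of the baseline nnU-Net step computation (illustrative only).

    Returns, per axis, the list of window start coordinates such that
    consecutive windows are at most step_size * patch_size apart and the
    first/last windows touch the image borders.
    """
    assert all(i >= p for i, p in zip(image_size, patch_size)), \
        "image must be at least as large as the patch"
    target_step = [p * step_size for p in patch_size]  # desired stride in voxels
    steps = []
    for dim in range(len(patch_size)):
        covered = image_size[dim] - patch_size[dim]  # range the starts must span
        num_windows = int(np.ceil(covered / target_step[dim])) + 1 if covered > 0 else 1
        if num_windows > 1:
            actual_step = covered / (num_windows - 1)  # spread starts evenly
            steps.append([int(np.round(actual_step * i)) for i in range(num_windows)])
        else:
            steps.append([0])  # a single window covers the whole axis
    return steps

# Example: a 160x160x160 volume tiled with 128^3 patches at 50% overlap
print(compute_steps_for_sliding_window((128, 128, 128), (160, 160, 160)))
# -> [[0, 32], [0, 32], [0, 32]]
```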

@AN-AN-369 (Author) commented Jul 7, 2023

Thank you for your answer about the new sliding window. We know that sliding windows are used in the prediction phase, but I wonder: what is the difference between prediction and testing? I don't see a prediction stage discussed much in other papers. Based on the original nnU-Net article, my understanding is that prediction is the same as testing. Please correct me if my understanding is wrong.

@Ziyan-Huang (Owner)

Hi @AN-AN-369,

Thank you for your question. In many contexts, "prediction" and "testing" are used interchangeably. However, sometimes they are used to denote slightly different stages of the process. Here's how they are typically differentiated:

Prediction: This refers to using the trained model to predict the output given new, unseen data. The aim here is not to evaluate the performance of the model but rather to use the model for its intended purpose, i.e., generating predictions for new data.

Testing: This is the process of evaluating the performance of a trained model. This is done by comparing the model's predictions on a separate testing dataset (for which the true outcomes are known) to the actual outcomes. The testing phase allows us to measure the model's generalization ability.

I hope this clarifies the distinction.
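To make the distinction concrete, here is a small sketch: the commented-out `predict_3D` calls mirror nnU-Net's inference entry point but with simplified arguments, and `dice_score` is a hypothetical helper for illustration, not code from this repository.

```python
import numpy as np

# Prediction: run the trained model on new, unlabeled data. In nnU-Net this
# goes through the sliding-window routines discussed above (call simplified):
# segmentation = trained_model.predict_3D(new_image, ...)[0]

# Testing: compare predictions against known ground truth, e.g. with a
# per-label Dice score (hypothetical helper for illustration):
def dice_score(pred, gt, label=1):
    pred_mask, gt_mask = (pred == label), (gt == label)
    intersection = np.logical_and(pred_mask, gt_mask).sum()
    denom = pred_mask.sum() + gt_mask.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

# Testing = prediction on held-out labeled data + evaluation:
# test_pred = trained_model.predict_3D(test_image, ...)[0]
# print(dice_score(test_pred, test_ground_truth))
```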
