
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn #18

Open
sleepingcat4 opened this issue Aug 19, 2023 · 4 comments


@sleepingcat4

Thanks for sharing such outstanding tutorials on transfer learning and quantum circuits. I was experimenting with the notebook using MobileNet V2; by default it uses ResNet18, but I wanted to try MobileNet V2 for experimentation. Unfortunately, I am receiving the error below. Can you please help me resolve it?

Code I changed:

model_hybrid = torchvision.models.mobilenet_v2(pretrained=True)

Error received:

Training started:
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
[<ipython-input-21-efc42279baeb>](https://localhost:8080/#) in <cell line: 1>()
----> 1 model_hybrid = train_model(model_hybrid, criterion, optimizer_hybrid,exp_lr_scheduler, num_epochs=num_epochs)

2 frames
[/usr/local/lib/python3.10/dist-packages/torch/autograd/__init__.py](https://localhost:8080/#) in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables, inputs)
    198     # some Python versions print out the first line of a multi-line function
    199     # calls in the traceback and some print out the last line
--> 200     Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
    201         tensors, grad_tensors_, retain_graph, create_graph, inputs,
    202         allow_unreachable=True, accumulate_grad=True)  # Calls into the C++ engine to run the backward pass

RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

Notebook used: c2q_transfer_learning_ants_bees.ipynb

@KetpuntoG

Hi! In this case, the issue is here:

model_hybrid = torchvision.models.resnet18(pretrained=True)

for param in model_hybrid.parameters():
    param.requires_grad = False
    
if quantum:
    model_hybrid.fc = Quantumnet()
    
elif classical_model == '512_2':
    model_hybrid.fc = nn.Linear(512, 2)
    
elif classical_model == '512_nq_2':
    model_hybrid.fc = nn.Sequential(nn.Linear(512, n_qubits), torch.nn.ReLU(), nn.Linear(n_qubits, 2)) 

elif classical_model == '551_512_2':
    model_hybrid.fc = nn.Sequential(nn.Linear(512, 512), torch.nn.ReLU(), nn.Linear(512, 2))

# Use CUDA or CPU according to the "device" object.
model_hybrid = model_hybrid.to(device)

This cell was written with the ResNet model in mind. For example, the attribute `fc` does not exist in the model you propose: assigning to `model_hybrid.fc` on MobileNet V2 just creates a new, unused attribute, while the forward pass still goes through the original frozen layers. Since every original parameter has `requires_grad = False`, the loss ends up with no `grad_fn`, which is exactly the error you see. So a more detailed study would have to be done to change to another model.

It would be great to see the results with other models 😄

@CatalinaAlbornoz

Hi @sleepingcat4, to complement @KetpuntoG's answer,

The attributes in ResNet and MobileNet V2 are different. You can check the source code of both models to see this.
Maybe this Stack Overflow post can help you. I haven't tested it myself, but it seems to have worked for some people.

Let us know if this helps!

@sleepingcat4

Thank you for the comments, @CatalinaAlbornoz @KetpuntoG. I was aware of the differences between the architectures, but I was not sure which particular code cell was causing the error: whether it was the Quantumnet circuit itself or the point where I attach Quantumnet to the pretrained model.

I will keep you updated and may be able to share results with you in a few days.

@CatalinaAlbornoz

Yes please let us know how it goes @sleepingcat4 !
