Train on different dataset #69
Comments
Hi,
@guddulrk you must first hand-simulate the output of conv1 for your input of 48x48x3. You will see that the output of conv1 in your case is different: it will be [cfg.batch_size, 40, 40, 256] as per the current code of capsnet.py.
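The hand-simulation above can be sketched with simple arithmetic, assuming conv1 uses a 9x9 kernel with stride 1 and VALID padding (as in the original CapsNet paper); the `kernel_size` and `stride` defaults here are illustrative:

```python
# Output spatial size of a VALID-padded convolution:
# floor((input - kernel) / stride) + 1
def conv_output_size(input_size, kernel_size=9, stride=1):
    return (input_size - kernel_size) // stride + 1

print(conv_output_size(28))  # MNIST 28x28 input -> 20, matching the assert
print(conv_output_size(48))  # 48x48 input -> 40, so the assert must change
```

This is why the hard-coded check against [cfg.batch_size, 20, 20, 256] fails for 48x48 input.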
@SOLOYT You cannot reshape an array of size 651786224 into [_, 32, 32, 3] because 651786224 is not a multiple of 32x32x3. Please look into what you actually have to do, or elaborate here so that we can discuss it further.
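A quick divisibility check, as a sketch, confirms why this reshape cannot succeed; the total size is taken from the error quoted above:

```python
# A flat array can only be reshaped to (_, 32, 32, 3) if its size
# is an exact multiple of 32*32*3 values per image.
total = 651786224
per_image = 32 * 32 * 3  # 3072
print(total % per_image != 0)  # True -> the reshape is impossible
```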
Hi, my error may be occurring at this line, but I'm not sure:
trainX = loaded[16:].reshape((602, 256, 256, 1)).astype(np.float32)
I am getting the same error. Can anyone tell me the reason? I have just replaced the MNIST dataset with a custom dataset of 602 grayscale images of dimension 256x256. Any suggestions are welcome, as I am very new to this programming.
@AnushaMehta the problem here is that you are trying to reshape the "loaded[16:]" array, which has 12,058,624 values in total, into a tensor of shape (602, 256, 256, 1), which requires 39,452,672 values. As you might have observed by now, this is not possible; you need to reshape into a shape of (-1, 256, 256, 1), where -1 is the dimension value calculated automatically. Is your doubt clear by now?
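As a sketch of the suggestion above, NumPy infers the leading dimension when -1 is used; the flat size 12,058,624 is the one reported in this thread, and at 256*256*1 = 65,536 values per image it corresponds to 184 images, not 602:

```python
import numpy as np

# Stand-in for loaded[16:]: a flat array of the size from the thread.
flat = np.zeros(12058624, dtype=np.float32)

# -1 lets NumPy compute the number of images automatically.
imgs = flat.reshape(-1, 256, 256, 1)
print(imgs.shape)  # (184, 256, 256, 1)
```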
@parinaya-007 Thank you so much. It works. |
Hi,
I want to train the model on a character dataset whose images are 48x48x3. When I change the image size, it shows an error at this line indicating that the dimensions must be equal:
assert conv1.get_shape() == [cfg.batch_size, 20, 20, 256]
The total label classes are 68, in my case.