So you reshaped the 5D input tensor into 4D and used a normal conv2d to perform the dimension transformation.
During this reshape you merged the "batch_size" and "input_num_capsule" axes. But doing so means that the same conv weights are used for every input capsule type.
However, in the paper (Section 3.1, contribution 2.(ii)) you state that the transformation matrices are shared within a capsule type. Do correct me if I am wrong.
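To make the concern concrete, here is a minimal NumPy sketch of the reshape trick (all sizes are made-up, and a 1x1 conv is modeled as a matmul over the atom axis):

```python
import numpy as np

# Made-up sizes, only for illustration (not from the repo):
batch, in_caps, h, w, in_atoms = 2, 3, 4, 4, 8
out_caps, out_atoms = 5, 16

x = np.random.randn(batch, in_caps, h, w, in_atoms)   # 5D input tensor

# Merge the batch and input-capsule axes so an ordinary conv2d can run on it:
x4d = x.reshape(batch * in_caps, h, w, in_atoms)

# Model a 1x1 conv as a matmul over the channel (atom) axis. The single
# weight tensor W is applied identically to every (batch, capsule-type)
# slice -- this is the weight sharing across input capsule types.
W = np.random.randn(in_atoms, out_caps * out_atoms)
y4d = x4d @ W

# Split the merged axis back apart and expose the output-capsule axis:
y = y4d.reshape(batch, in_caps, h, w, out_caps, out_atoms)
print(y.shape)  # (2, 3, 4, 4, 5, 16)
```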
Thanks and good work btw.
Chen.
The transformation matrices are shared across child capsule types but are unique to each parent capsule type. Therefore every incoming capsule type has the same set of transformations (one per parent capsule type) applied to it. No such sharing was described in the original paper; from Sabour et al.: "In convolutional capsule layers, each capsule outputs a local grid of vectors to each type of capsule in the layer above using different transformation matrices for each member of the grid as well as for each type of capsule".
I hope this clears things up and thank you for the kind words.
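As a sketch of that sharing scheme (all sizes hypothetical): the conv weight has no child-capsule-type axis at all, so every child type is transformed by the same weights, while each parent type owns its own slice along the output axis.

```python
import numpy as np

# Hypothetical sizes, for illustration only:
in_atoms, num_parent_types, out_atoms = 8, 5, 16

# A 1x1-conv-style weight of the shape discussed above. There is no
# child-capsule-type axis, so the weights are shared across child types.
W = np.random.randn(in_atoms, num_parent_types * out_atoms)

# Splitting the output axis exposes one transformation per parent type:
per_parent = W.reshape(in_atoms, num_parent_types, out_atoms)
print(per_parent.shape)  # (8, 5, 16)
# per_parent[:, j, :] is the transformation dedicated to parent type j,
# and it is applied to every child capsule type alike.
```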
When I compare your implementation of the transformation matrices:

```python
self.W = self.add_weight(shape=[self.kernel_size, self.kernel_size,
                                self.input_num_atoms,
                                self.num_capsule * self.num_atoms],
                         initializer=self.kernel_initializer, name='W')
```

with Sabour's implementation:

```python
kernel = variables.weight_variable(shape=[
    kernel_size, kernel_size, input_atoms, output_dim * output_atoms
])
```

I cannot see any difference in the transformation matrices.
I have a question regarding this part:
SegCaps/capsule_layers.py, lines 128 to 139 at commit c6b3f9e