Hello there,

Could someone provide some code showing how to convert Keras `convolution`, `batchnorm`, `leakyRelu`, and `maxpooling` layers to their PyTorch equivalents? Are there any differences between these layers in Keras and PyTorch? I have written some code to do this, but I'm not sure about its correctness.

I have a `.h5` file containing a pretrained Keras model. What I want to do is create a Python dictionary indexed by layer name, where each entry holds the PyTorch definition of that layer. Here is the code:

```
def loadWeights(self):
    model = load_model('keras_model.h5')
    j = json.loads(model.to_json())
    for layer in j['config']['layers']:
        ln = layer['name']
        l = model.get_layer(name=ln)
        # Keras uses channels-last (NHWC), so index 3 is the channel count.
        if layer['class_name'] != 'Concatenate':
            self.lid[ln] = l.input_shape[3]
        else:
            self.lid[ln] = l.input_shape[0][3]
        self.lod[ln] = l.output_shape[3]
        w = l.get_weights()
        if layer['class_name'] == 'Conv2D':
            filter_size = layer['config']['kernel_size'][0]
            if filter_size == 3:
                self.layers[ln] = nn.Conv2d(self.lid[ln], self.lod[ln],
                                            filter_size, padding=1, stride=1, bias=False)
            elif filter_size == 1:
                self.layers[ln] = nn.Conv2d(self.lid[ln], self.lod[ln],
                                            filter_size, padding=0, stride=1, bias=False)
            # Keras kernels are (H, W, in, out); PyTorch expects (out, in, H, W).
            self.layers[ln].weight.data = torch.from_numpy(w[0].transpose((3, 2, 0, 1)))
        elif layer['class_name'] == 'BatchNormalization':
            self.layers[ln] = nn.BatchNorm2d(self.lid[ln])
            # Keras get_weights() order: [gamma, beta, moving_mean, moving_var].
            self.layers[ln].weight.data = torch.from_numpy(w[0])
            self.layers[ln].bias.data = torch.from_numpy(w[1])
            self.layers[ln].running_mean.data = torch.from_numpy(w[2])
            self.layers[ln].running_var.data = torch.from_numpy(w[3])
        elif layer['class_name'] == 'LeakyReLU':
            self.layers[ln] = nn.LeakyReLU(0.1)
        elif layer['class_name'] == 'MaxPooling2D':
            self.layers[ln] = nn.MaxPool2d(2, 2)
        elif layer['class_name'] == 'Lambda':
            self.layers[ln] = scale_to_depth(2)
```
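For what it's worth, here is a minimal standalone sketch I used to convince myself of the kernel layout conversion (the shapes here are made up for illustration): Keras stores a `Conv2D` kernel as `(H, W, in_channels, out_channels)`, while PyTorch's `nn.Conv2d` expects `(out_channels, in_channels, H, W)`, so `transpose((3, 2, 0, 1))` should be the right permutation:

```python
import numpy as np
import torch
import torch.nn as nn

# Hypothetical shapes: a 3x3 kernel mapping 16 -> 32 channels.
h, w, c_in, c_out = 3, 3, 16, 32
keras_kernel = np.random.randn(h, w, c_in, c_out).astype(np.float32)

conv = nn.Conv2d(c_in, c_out, kernel_size=3, padding=1, stride=1, bias=False)
# Permute (H, W, in, out) -> (out, in, H, W) before loading.
conv.weight.data = torch.from_numpy(keras_kernel.transpose((3, 2, 0, 1)))

x = torch.randn(1, c_in, 8, 8)  # PyTorch uses NCHW input
y = conv(x)
print(tuple(y.shape))  # (1, 32, 8, 8)
```

With `padding=1` the spatial size is preserved, matching Keras `padding='same'` for a 3x3 kernel with stride 1. Does that reasoning look right?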

Could someone verify this code for me?

Thanks in advance!