Training deeper networks using the MAML framework

Currently, PyTorch implementations of networks trained with MAML are written so that the weights have to be passed explicitly as arguments to the forward pass. An example is given below:

```python
import torch.nn.functional as F

# Defined inside the model class; functional_conv_block is a helper
# (conv + batch norm + activation, applied functionally) defined elsewhere.
def functional_forward(self, x, weights):
    """Applies the same forward pass using PyTorch functional operators
    with a specified set of weights."""
    for block in [1, 2, 3, 4]:
        x = functional_conv_block(x,
                                  weights[f'conv{block}.0.weight'],
                                  weights[f'conv{block}.0.bias'],
                                  weights.get(f'conv{block}.1.weight'),
                                  weights.get(f'conv{block}.1.bias'))
    x = x.view(x.size(0), -1)
    x = F.linear(x, weights['logits.weight'], weights['logits.bias'])
    return x
```
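
For context, this is roughly how such a functional forward gets consumed in the MAML inner loop. A minimal sketch, where `model`, the task tensors, `inner_lr`, and `num_inner_steps` are placeholder names:

```python
import torch
import torch.nn.functional as F

# Start from the model's current ("slow") parameters.
weights = dict(model.named_parameters())

for _ in range(num_inner_steps):
    logits = model.functional_forward(x_support, weights)
    loss = F.cross_entropy(logits, y_support)
    # create_graph=True keeps the adaptation step differentiable so the
    # outer loop can backpropagate through it (second-order MAML).
    grads = torch.autograd.grad(loss, list(weights.values()), create_graph=True)
    weights = {name: w - inner_lr * g
               for (name, w), g in zip(weights.items(), grads)}

# Query loss under the adapted ("fast") weights; the meta-optimizer then
# steps on this loss with respect to the original parameters.
query_loss = F.cross_entropy(model.functional_forward(x_query, weights), y_query)
```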

Now this forward implementation is fine if the network has just 4 layers, but with techniques like meta-transfer learning (https://arxiv.org/pdf/1812.02391.pdf), much deeper networks are being trained within the MAML framework, and hand-writing a functional forward for them quickly becomes unwieldy.

So my question is: is it possible to train a model within the MAML framework without having to code up the model in a functional way? Ideally I could keep the model as a plain `nn.Module` and only swap the weights in at call time, as sketched below.
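
This is the kind of thing I am hoping for, using `torch.func.functional_call` (available in recent PyTorch; older releases expose the same idea as `torch.nn.utils.stateless.functional_call`). Here `model`, the data tensors, and `inner_lr` are again placeholder names:

```python
import torch
import torch.nn.functional as F
from torch.func import functional_call

# Run an ordinary nn.Module forward pass with an explicit set of weights,
# so no hand-written functional_forward is needed.
weights = dict(model.named_parameters())
logits = functional_call(model, weights, (x_support,))
loss = F.cross_entropy(logits, y_support)
grads = torch.autograd.grad(loss, list(weights.values()), create_graph=True)

# One gradient-descent adaptation step produces the fast weights.
fast_weights = {name: w - inner_lr * g
                for (name, w), g in zip(weights.items(), grads)}

# The same module now runs with the adapted weights:
query_logits = functional_call(model, fast_weights, (x_query,))
```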

Any help would be appreciated! Thanks.