Help with model parameters

from torchvision import transforms

data_transforms = {
    'train': transforms.Compose([
        transforms.RandomResizedCrop(224),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ]),
    'val': transforms.Compose([
        transforms.Resize(224),
        transforms.CenterCrop(256),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ]),
}

import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 64, kernel_size=6, padding=2)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(64, 128, kernel_size=6, padding=2)
        self.fc1 = nn.Linear(256 * 6 * 6, 4096)  # expects 256*6*6 = 9216 input features
        self.fc2 = nn.Linear(4096, 4096)
        self.fc3 = nn.Linear(4096, 2)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.reshape(x.size(0), -1)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

RuntimeError: size mismatch, m1: [4 x 387200], m2: [9216 x 4096]

How can I set the parameters for this model?

Since the flattened activation passed to self.fc1 has 387200 features, you should set in_features of self.fc1 to this value as well.
Your current model expects 256*6*6 = 9216 input features (the m2 shape in the error message), which doesn’t match the activation shape produced by your current input.
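
For example (a minimal sketch, assuming you keep the 224x224 input and the posted conv/pool layers):

self.fc1 = nn.Linear(387200, 4096)  # 387200 = 128 channels * 55 * 55 after conv2 + pooling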

Where is that 387200 coming from?

It’s coming from the number of features of the activation you are trying to pass to self.fc1, i.e. x would have this shape here:

x = x.reshape(x.size(0), -1)
print(x.shape) # should be [batch_size, 387200]
x = F.relu(self.fc1(x)) 
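
You can check this by passing a dummy batch through the posted layers (a quick sketch, assuming the Net definition and imports from above and a hypothetical batch of four 224x224 RGB images):

model = Net()
x = torch.randn(4, 3, 224, 224)             # hypothetical input batch
out = model.pool(F.relu(model.conv1(x)))    # conv1: 224 -> 223, pool: 223 -> 111
out = model.pool(F.relu(model.conv2(out)))  # conv2: 111 -> 110, pool: 110 -> 55
out = out.reshape(out.size(0), -1)          # 128 channels * 55 * 55 = 387200 features
print(out.shape)                            # torch.Size([4, 387200])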

So how would you write this model?

Either set in_features=387200 in self.fc1, as suggested before.
Alternatively, you could add more conv/pooling operations to reduce the spatial size of the activation, or pass a smaller input tensor to the model; a sketch of the extra-layer option is below.
The current shape is defined by your input shape and the conv/pool layers.
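
As a rough sketch of that second option (the extra layer and the resulting shapes are assumptions based on the posted architecture, not the only choice):

# hypothetical third conv stage added in __init__:
self.conv3 = nn.Conv2d(128, 256, kernel_size=6, padding=2)
# and in forward, after the second pooling step:
# x = self.pool(F.relu(self.conv3(x)))   # 55 -> 54 -> 27
# the flattened size then becomes 256 * 27 * 27 = 186624, so fc1 would be:
self.fc1 = nn.Linear(256 * 27 * 27, 4096)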