Error $ Torch: not enough memory: you tried to allocate 42GB. Buy new RAM!

This is the error:
RuntimeError: $ Torch: not enough memory: you tried to allocate 42GB. Buy new RAM! at /opt/conda/conda-bld/pytorch_1524584710464/work/aten/src/TH/THGeneral.c:218

I imported everything separately, and these are my transforms:

train_transform_list = [transforms.RandomRotation(30),
                        transforms.RandomResizedCrop(224),
                        transforms.RandomHorizontalFlip(),
                        transforms.ToTensor()
                       ]

test_transform_list = [transforms.Resize(255),
                       transforms.CenterCrop(224),
                       transforms.ToTensor()
                      ]

train_transform = transforms.Compose(train_transform_list)
test_transform = transforms.Compose(test_transform_list)


train_set = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=train_transform)
trainloader = torch.utils.data.DataLoader(train_set, batch_size=64)


test_set = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=test_transform)
testloader = torch.utils.data.DataLoader(test_set, batch_size=64)

classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')

And this is my network:

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")


class Network(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(150528, 75264)  # 150528 = 3 * 224 * 224 flattened pixels
        self.fc2 = nn.Linear(75264, 64)
        self.fc3 = nn.Linear(64, 10)

    def forward(self, x):
        x = x.view(x.shape[0], -1)  # flatten [batch, 3, 224, 224] -> [batch, 150528]
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = F.log_softmax(self.fc3(x), dim=1)
        return x


model = Network()  

If I use smaller parameters (e.g. 4096 as the first parameter of fc1), I get another error:

RuntimeError: size mismatch, m1: [64 x 150528], m2: [4096 x 1024] at /opt/conda/conda-bld/pytorch_1524584710464/work/aten/src/THC/generic/THCTensorMathBlas.cu:249

How do I choose the right numbers for nn.Linear exactly?

The in_features of an nn.Linear must equal the number of features in the incoming activation tensor. Your transforms produce 3×224×224 images, so after x.view(x.shape[0], -1) each sample has 3 * 224 * 224 = 150528 features. That is the [64 x 150528] in your size-mismatch error, and it is why nn.Linear(4096, 1024) fails: the incoming 150528 features don't match the expected 4096. The out-of-memory error comes from the first layer's weight matrix: 150528 × 75264 ≈ 1.13 × 10^10 float32 values at 4 bytes each, which is the ~42 GB the allocator is asking for. To reduce in_features you need to reduce the activation shape itself, e.g. by using smaller inputs, a more aggressive pooling, etc.
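
For example, here is a minimal sketch of the same network sized for CIFAR-10's native 32×32 images (i.e. with the 224 crop/resize removed from the transforms); the hidden widths 1024 and 64 are illustrative choices, not the only valid ones:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # in_features must equal the flattened size of one input sample.
    # For 32x32 RGB images that is 3 * 32 * 32 = 3072, so the first
    # weight matrix is 3072 x 1024 (~12 MB in float32), not ~42 GB.
    in_features = 3 * 32 * 32

    class Network(nn.Module):
        def __init__(self, in_features):
            super().__init__()
            self.fc1 = nn.Linear(in_features, 1024)  # illustrative hidden size
            self.fc2 = nn.Linear(1024, 64)
            self.fc3 = nn.Linear(64, 10)

        def forward(self, x):
            x = x.view(x.shape[0], -1)  # flatten to [batch, in_features]
            x = F.relu(self.fc1(x))
            x = F.relu(self.fc2(x))
            return F.log_softmax(self.fc3(x), dim=1)

    model = Network(in_features)

    # Sanity check with a dummy batch: shapes line up end to end.
    x = torch.randn(64, 3, 32, 32)
    print(model(x).shape)  # torch.Size([64, 10])

If you want to keep the 224×224 crops instead, the same sketch works with in_features = 150528; just keep the second dimension of each layer small enough that the weight matrices fit in memory.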