Hey Folks,
My problem is this.
Currently I have an almost arbitrarily large dataset with which I want to train a common network architecture, e.g. ResNet18.
The problem is that this dataset does not consist of images but only of single variables, so I somehow have to get from a 10x1 input to a 3x224x224 tensor.
Should I create a new first layer like this:
```python
import torch.nn as nn
import torch.nn.functional as F

class Resnet_Input(nn.Module):
    def __init__(self):
        super(Resnet_Input, self).__init__()
        # project the 10 input features to 3*224*224 values
        self.fc1 = nn.Linear(10, 224 * 224 * 3)

    def forward(self, data):
        x = F.relu(self.fc1(data))
        x = x.view(-1, 3, 224, 224)  # reshape to an image-like tensor
        return x
```
and stack it with nn.Sequential(Resnet_Input(), resnet18)? Or should I modify the first layer, or add a completely new residual block? What do you think is the best way? Do you have better ideas?
First I want to check some baselines: how well do established networks work? Can I do this with simpler networks? I also plan on trying GAN networks (https://arxiv.org/abs/1905.12868).
The thing is, all my previously trained networks used datasets of between 4,000 and 8,000 images. That way I was able to play with different parameters, because a training run didn't take very long, and after trying a lot of different hyperparameters I figured out a good network structure.
Now I want to test datasets of 300,000 up to 1 million datapoints. Training will take a lot longer, so I want to start with an established network to see where I stand.
The main question would be: how would you like to “reshape” your 10 values to a tensor of [3, 224, 224]?
I.e., you could somehow interpolate the values, just repeat them, or fill the tensor with some other information.
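Both of those options can be sketched in a few lines (assuming 10 input features with a batch dimension; these are parameter-free, so they could live in the Dataset or at the start of forward):

```python
import torch
import torch.nn.functional as F

x = torch.randn(4, 10)  # batch of 4 samples, 10 features each
numel = 3 * 224 * 224   # 150528 target values per sample

# Option 1: tile the 10 values until they fill all 3*224*224 positions
repeated = x.repeat(1, numel // 10 + 1)[:, :numel]
img_repeat = repeated.view(-1, 3, 224, 224)

# Option 2: treat the 10 values as a tiny 1D signal and linearly upsample it
img_interp = F.interpolate(
    x.unsqueeze(1), size=numel, mode="linear", align_corners=False
).view(-1, 3, 224, 224)

print(img_repeat.shape, img_interp.shape)
# torch.Size([4, 3, 224, 224]) torch.Size([4, 3, 224, 224])
```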
Okay, maybe going for ResNet isn’t the best solution.
But I haven't found any good ideas on how to train a network with only two inputs and one output.
Any good ideas here?
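For that few-inputs case, one hypothetical starting point would be a plain MLP instead of a CNN (layer widths here are arbitrary and would need tuning):

```python
import torch
import torch.nn as nn

# small MLP baseline: 2 inputs -> 1 output
model = nn.Sequential(
    nn.Linear(2, 64),
    nn.ReLU(),
    nn.Linear(64, 64),
    nn.ReLU(),
    nn.Linear(64, 1),
)

x = torch.randn(8, 2)  # batch of 8 samples, 2 features each
out = model(x)
print(out.shape)       # torch.Size([8, 1])
```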