Autoencoder and Classification inside the same model

@ptrblck @shivammehta007 I am extremely sorry to both of you; I think I was not able to explain my problem well here.
Let me start again: I am trying to merge two models, one working on raw data and the other on processed data, i.e. the two models use different inputs. From that paper I got the idea of training both models separately, then removing the final classification layers and concatenating the previous FC layers to form the classification layer of the merged model. So my problem is how to save the features up to the first fully connected layers of model1 and model2 while training them individually, and then concatenate them for the final classification. Is there any concept of an autoencoder here or not?
Thanks for your patience.

Or, if this helps: can I use this final merged model when both models have the autoencoder architecture from my initial question above? But then how do I instantiate the models, since both have different inputs? If nothing works, can I share my code with you personally, sir?

I think I should not be confused by autoencoders here; I guess transfer learning is the correct approach. I should train the models individually, save them, remove the final layers, and then merge the outputs for the final classification. So many ideas but no implementation.
Please point out any mistake.
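The "train separately, save, strip the classification head, then reuse the features" idea can be sketched like this. This is only a sketch: the file name, layer sizes, and the `Feat1` class are made up for illustration.

```python
import torch
import torch.nn as nn

# Toy stand-in for one individually trained model; only the feature
# layers (everything before the removed classifier) are kept here.
class Feat1(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(32, 64)

    def forward(self, x):
        return torch.relu(self.fc1(x))

feat1 = Feat1()
# ... train feat1 (with its classification head attached) ...
torch.save(feat1.state_dict(), "model1_features.pt")  # save after training

# Later, inside the merged model, load the trained weights back:
merged_feat1 = Feat1()
merged_feat1.load_state_dict(torch.load("model1_features.pt"))

# Optionally freeze the pretrained features so that only the new
# classification head of the merged model is updated during training.
for p in merged_feat1.parameters():
    p.requires_grad = False
```

The same pattern applies to the second model; the merged model then trains only its new final classifier on top of the frozen features.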

If I understand correctly, this is what you want to do, right?

import torch
import torch.nn as nn
import torch.nn.functional as F

class Model1(nn.Module):
	def __init__(self, *args, **kwargs):
		super(Model1, self).__init__()

		self.fc1 = nn.Linear(model1_input_size, 64)

		# you remove the last layer of this model
		# self.fc2 = nn.Linear(64, nb_classes)

	def forward(self, input_for_first_model):

		## Note the removed FC2

		return F.relu(self.fc1(input_for_first_model))

class Model2(nn.Module):
	def __init__(self, *args, **kwargs):
		super(Model2, self).__init__()

		self.fc1 = nn.Linear(model_2_input_size, 128)
		self.fc2 = nn.Linear(128, 64)
		# you remove the last layer of the second model too
		# self.fc3 = nn.Linear(64, nb_classes)

	def forward(self, input_for_second_model):

		x = F.relu(self.fc1(input_for_second_model))

		## again note the removed last layer fc3

		return F.relu(self.fc2(x))

class MainModel(nn.Module):
	def __init__(self, *args, **kwargs):
		super(MainModel, self).__init__()

		self.model1 = Model1(*args, **kwargs)
		self.model2 = Model2(*args, **kwargs)

		self.fc = nn.Linear(64 + 64, nb_classes) # output from first model + output from second model

	def forward(self, input_for_first_model, input_for_second_model):

		x1 = self.model1(input_for_first_model)
		x2 = self.model2(input_for_second_model)

		x = torch.cat((x1, x2), dim=1)

		return F.softmax(self.fc(x), dim=1)

And regarding your question: you can use any architecture for Model1 and Model2 (including autoencoders), and the two models will have different inputs.
Also note that you need to adjust the high-level API where you pass the data, so that the MainModel call takes two inputs.
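Putting the pieces together, a minimal runnable version of that sketch (the input sizes and class count are hypothetical) looks like this:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sizes for illustration
MODEL1_IN, MODEL2_IN, NB_CLASSES = 32, 48, 10

class Model1(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(MODEL1_IN, 64)

    def forward(self, x):
        return F.relu(self.fc1(x))

class Model2(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(MODEL2_IN, 128)
        self.fc2 = nn.Linear(128, 64)

    def forward(self, x):
        return F.relu(self.fc2(F.relu(self.fc1(x))))

class MainModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.model1 = Model1()
        self.model2 = Model2()
        self.fc = nn.Linear(64 + 64, NB_CLASSES)

    def forward(self, a, b):
        # Concatenate the feature vectors of both submodels
        x = torch.cat((self.model1(a), self.model2(b)), dim=1)
        return self.fc(x)  # raw logits

raw = torch.randn(4, MODEL1_IN)        # batch for Model1
proc = torch.randn(4, MODEL2_IN)       # batch for Model2
out = MainModel()(raw, proc)
print(out.shape)  # torch.Size([4, 10])
```

Here the head returns raw logits, which is what nn.CrossEntropyLoss expects; if you prefer explicit probabilities, apply F.softmax(..., dim=1) only at inference time.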

I hope that helps :slight_smile: or at least gives some ideas for further implementations…


@shivammehta007 this is exactly what I want, sir, thank you! I was just confused about autoencoders. Though I am still confused about how to use autoencoders for classification, since the decoder part is a replication of the input data :(

In the above case, if I pass the MainModel to the training loop, how do I use the DataLoader, since the inputs for the two models are different?

Also, a common use of an autoencoder is to denoise the training data, so the general notion is to preprocess your data inputs with an AE, or to use its learnt representation in some way.
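Regarding the decoder confusion: after training the autoencoder on reconstruction, you typically discard the decoder and keep only the encoder as a feature extractor for classification. A minimal sketch (sizes and layer choices are illustrative, not from the thread):

```python
import torch
import torch.nn as nn

class AE(nn.Module):
    def __init__(self, in_dim=32, latent=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, latent), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(latent, in_dim))

    def forward(self, x):
        # Reconstruction output, trained with e.g. nn.MSELoss against x
        return self.decoder(self.encoder(x))

ae = AE()
# ... train ae to reconstruct its inputs ...

# For classification, use only the learnt representation:
x = torch.randn(4, 32)
features = ae.encoder(x)        # shape: (4, 16)
classifier = nn.Linear(16, 10)  # new head, trained with cross-entropy
logits = classifier(features)
print(logits.shape)  # torch.Size([4, 10])
```

So the decoder only serves the unsupervised pretraining; classification runs on the encoder output.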

Regarding datasets, I suggest you look into combining the two datasets, so that the general DataLoader can fetch both values at the same time.
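One simple pattern for this (the `PairedDataset` wrapper here is my own illustration, not a built-in) is a custom Dataset that returns the matching sample from each underlying dataset, so a single DataLoader yields both inputs per batch:

```python
import torch
from torch.utils.data import Dataset, DataLoader

class PairedDataset(Dataset):
    """Returns matching samples from two datasets of equal length."""
    def __init__(self, ds1, ds2):
        assert len(ds1) == len(ds2)
        self.ds1, self.ds2 = ds1, ds2

    def __len__(self):
        return len(self.ds1)

    def __getitem__(self, idx):
        raw, label = self.ds1[idx]
        processed, _ = self.ds2[idx]  # labels assumed identical
        return raw, processed, label

# Toy (sample, label) pairs standing in for the raw and processed data
raw = [(torch.randn(32), i % 10) for i in range(8)]
proc = [(torch.randn(48), i % 10) for i in range(8)]

loader = DataLoader(PairedDataset(raw, proc), batch_size=4)
a, b, y = next(iter(loader))
print(a.shape, b.shape, y.shape)
```

Each batch then feeds straight into the merged model as `model(a, b)` with `y` as the target.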


Thanks, sir @shivammehta007, I will go through this.