Hi, basically I’m trying to generate an image based on, let’s say, 10 conditions which are wrapped in a tensor.
However, among those 10 conditions, the fifth and sixth are sometimes missing, meaning they do not apply to those observations and should be treated as something like None.
In that case, how would you deal with the situation, or how would you design a neural network based on the description above?
Yes, they are meant to be passed into the model directly.
For example, I’m generating an image of a car, and the conditions I’m using are things like wheel type, car colour, car number plate, etc. However, my dataset has some observations with no car number plate, and in that case I need to pass something like None to the model.
The tricky thing is that the model’s input size is fixed, so I have to pass in some value for that missing car number plate.
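Just to make it concrete, here is a toy sketch of what one observation’s condition tensor looks like (the values and the NaNs are made up purely for illustration; I know I can’t actually feed NaNs to the network):

```python
import torch

# One observation's 10 conditions; the 5th and 6th entries (indices 4 and 5)
# are the ones that can be missing. NaN is only used here to mark the gap --
# it is not a real encoding, since NaN would break training.
conditions = torch.tensor([0.3, 1.0, 0.7, 2.0, float('nan'), float('nan'),
                           0.1, 0.9, 0.5, 1.0])
print(conditions.shape)  # torch.Size([10])
```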
The data handling would depend on your use case; e.g. if you are using categorical inputs, you could try encoding the empty inputs with a placeholder value. However, I’m not sure if this would work for you.
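For a categorical condition, a rough sketch of what I mean (the class count and sizes are made up) is to reserve an extra index in the embedding table as a “missing” token, so the layer learns a dedicated vector for absent values:

```python
import torch
import torch.nn as nn

# Hypothetical categorical condition with 4 real classes;
# index 4 is reserved as the "missing" placeholder.
NUM_CLASSES = 4
MISSING_IDX = NUM_CLASSES

embedding = nn.Embedding(NUM_CLASSES + 1, 8)

# Batch of 3 observations; the second one has the condition missing.
labels = torch.tensor([2, MISSING_IDX, 0])
cond_emb = embedding(labels)  # shape: (3, 8)
```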
But the thing is, the conditions in question contain a categorical value as well as a continuous value. For the continuous one, I’m not sure how the model would manage to interpret the embedding of a placeholder value… I’m thinking of using -100, or some similarly irrelevant value far outside [-1, 1], to make it distinguishable. Would you suggest another value if my choice seems problematic?
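To be concrete, this is roughly what I had in mind, just a sketch of my idea (the flag name is made up):

```python
import torch

SENTINEL = -100.0  # my candidate "missing" value, far outside [-1, 1]

def encode_continuous(value: float, is_missing: bool) -> torch.Tensor:
    # Continuous condition normalised to [-1, 1]; replaced by the sentinel when absent.
    return torch.tensor([SENTINEL if is_missing else value])

print(encode_continuous(0.4, False))  # tensor([0.4000])
print(encode_continuous(0.0, True))   # tensor([-100.])
```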
I’m not sure if your setup translates to a variable-length input model; if so, you could try a structure like a Transformer. If you can’t change your model itself, I think you could still try a Transformer as the condition encoder, a bit like a VAE encoder.
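Very much a sketch of what I mean (all sizes and names are made up): treat each condition as one token and let a Transformer encoder ignore the missing ones via a key padding mask, then pool the result into a fixed-size conditioning vector for your generator:

```python
import torch
import torch.nn as nn

D_MODEL, N_CONDITIONS = 32, 10

# Each condition becomes one "token"; missing conditions are masked out with
# src_key_padding_mask so the encoder simply ignores them.
encoder_layer = nn.TransformerEncoderLayer(d_model=D_MODEL, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)

batch = 3
cond_tokens = torch.randn(batch, N_CONDITIONS, D_MODEL)       # already-embedded conditions
missing = torch.zeros(batch, N_CONDITIONS, dtype=torch.bool)
missing[:, 4:6] = True                                        # 5th and 6th conditions missing

encoded = encoder(cond_tokens, src_key_padding_mask=missing)  # (batch, 10, D_MODEL)

# Mean-pool only the present tokens into one fixed-size conditioning vector.
keep = (~missing).unsqueeze(-1).float()
cond_vector = (encoded * keep).sum(dim=1) / keep.sum(dim=1)   # (batch, D_MODEL)
```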