Hi Team
Requirement
I want to multiply a single image with multiple color tensors (and do the same for other images) using PyTorch's Linear layer:
|Image|Target_Color|
|Image1|color1|
|Image1|color2|
|Image1|color3|
|Image1|color4|
|Image1|color5|
|Image1|color6|
|Image1|color7|
|Image2|color1|
|Image2|color2|
|Image2|color3|
|Image2|color4|
|Image2|color5|
|Image2|color6|
|Image2|color7|
where Image1 and Image2 are two image tensors, each multiplied with 7 color tensors.
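For context, the pairing in the table above can be sketched with plain tensor ops (the NCHW image layout and the tensor sizes here are my assumptions, not from the post):

```python
import torch

# 2 images, NCHW layout (assumed), and 7 RGB target colors
images = torch.randn(2, 256, 16, 16)
colors = torch.rand(7, 3)

# repeat_interleave gives one row per (image, color) pair, like the table:
# (Image1, color1..7), then (Image2, color1..7)
paired_images = images.repeat_interleave(7, dim=0)  # (14, 256, 16, 16)
paired_colors = colors.repeat(2, 1)                 # (14, 3)
```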
Below is the code I am using:
import torch
import torch.nn as nn

# Producing both red and blue colors

# Target color blue
Target_color_blue = torch.zeros(1, 1, 3, dtype=torch.float)
Target_color_blue[:, :, 2] = 255

# Target color red
Target_color_red = torch.zeros(1, 1, 3, dtype=torch.float)
Target_color_red[:, :, 0] = 255

Target_color_list = [Target_color_blue, Target_color_red]

# Stacking both red and blue tensors
Target_colors = torch.stack(Target_color_list)

# Single image
image = torch.randn(1, 16, 16, 256)
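Note that stacking two `(1, 1, 3)` tensors yields shape `(2, 1, 1, 3)`. Since `nn.Linear` acts on the last dimension, flattening the stack to `(n_colors, 3)` matches the `(n_samples, z_dim)` shape the mapping network's docstring expects. A minimal sketch of that reshape:

```python
import torch

target_color_blue = torch.zeros(1, 1, 3)
target_color_blue[:, :, 2] = 255
target_color_red = torch.zeros(1, 1, 3)
target_color_red[:, :, 0] = 255

target_colors = torch.stack([target_color_blue, target_color_red])  # (2, 1, 1, 3)
# flatten the singleton dims so each row is one RGB color vector
target_colors = target_colors.view(-1, 3)  # (2, 3)
```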
This is the model block I am using:

# Main block
class AdaIN(nn.Module):
    '''
    AdaIN class
    Passes the style information w into the AdaIN network from the color embedding.
    '''
    def __init__(self, channels, w_dim):
        super(AdaIN, self).__init__()
        self.instance_norm = nn.InstanceNorm2d(channels)
        self.style_scale_transform = nn.Linear(w_dim, channels)
        self.style_shift_transform = nn.Linear(w_dim, channels)

    def forward(self, image, w):
        x = self.instance_norm(image)
        style_scale = self.style_scale_transform(w).unsqueeze(2).unsqueeze(3)
        style_bias = self.style_shift_transform(w).unsqueeze(2).unsqueeze(3)
        out = style_scale * x + style_bias
        return out
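Because `InstanceNorm2d` and the `unsqueeze(2).unsqueeze(3)` broadcasting both assume NCHW layout, the class works when the channel axis comes second. A runnable sketch (repeating the class from above, with an image shape of `(1, 256, 16, 16)` as my assumption):

```python
import torch
import torch.nn as nn

class AdaIN(nn.Module):
    def __init__(self, channels, w_dim):
        super(AdaIN, self).__init__()
        self.instance_norm = nn.InstanceNorm2d(channels)
        self.style_scale_transform = nn.Linear(w_dim, channels)
        self.style_shift_transform = nn.Linear(w_dim, channels)

    def forward(self, image, w):
        x = self.instance_norm(image)
        # (N, channels) -> (N, channels, 1, 1) so it broadcasts over H and W
        style_scale = self.style_scale_transform(w).unsqueeze(2).unsqueeze(3)
        style_bias = self.style_shift_transform(w).unsqueeze(2).unsqueeze(3)
        return style_scale * x + style_bias

# NCHW: channels come second, so 256 channels with 16x16 spatial size
image = torch.randn(1, 256, 16, 16)
w = torch.randn(1, 256)  # one style vector of w_dim=256
out = AdaIN(channels=256, w_dim=256)(image, w)
```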
class MappingNetwork(nn.Module):
    '''
    Mapping Layers class
    Values:
        z_dim: the dimension of the noise vector, a scalar
        hidden_dim: the inner dimension, a scalar
        w_dim: the dimension of the intermediate noise vector, a scalar
    '''
    def __init__(self, z_dim, w_dim):
        super(MappingNetwork, self).__init__()
        self.mapping = nn.Sequential(
            PixelNorm(),
            nn.Linear(z_dim, w_dim),
            nn.ReLU(),
            nn.Linear(w_dim, w_dim),
            nn.ReLU(),
            nn.Linear(w_dim, w_dim),
            nn.ReLU(),
            nn.Linear(w_dim, w_dim),
            nn.ReLU(),
            nn.Linear(w_dim, w_dim),
            nn.ReLU(),
            nn.Linear(w_dim, w_dim),
            nn.ReLU(),
            nn.Linear(w_dim, w_dim),
            nn.ReLU(),
            nn.Linear(w_dim, w_dim),
        )

    def forward(self, target_color):
        '''
        Function for completing a forward pass of MappingNetwork:
        Given an initial target tensor, returns the intermediate target tensor.
        Parameters:
            target_color: a color tensor with dimensions (n_samples, z_dim)
        '''
        return self.mapping(target_color)
Calling the above blocks:
style = MappingNetwork(3,256)
style_tensor = style(Target_colors)
Adain_bk = AdaIN(3,256)
image = torch.randn(1,16,16,256)
final = Adain_bk(image,Target_colors)
RuntimeError: mat1 and mat2 shapes cannot be multiplied (2x3 and 256x3)
This error occurs while calling the AdaIN block above.
Could you please guide me on how I can change the model according to the requirement above?
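For reference, here is a minimal reproduction of the shape mismatch, using the shapes built above (names are from my code): `AdaIN(3, 256)` creates `nn.Linear(256, 3)` layers, but `Target_colors` has 3 as its last dimension, not 256.

```python
import torch
import torch.nn as nn

layer = nn.Linear(256, 3)    # AdaIN(3, 256) builds Linear(w_dim=256, channels=3)
w = torch.zeros(2, 1, 1, 3)  # Target_colors: two stacked (1, 1, 3) color tensors

err = None
try:
    layer(w)                 # last dim is 3, but the layer expects 256
except RuntimeError as e:
    err = e
print(err)
```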