I want to use the maxout activation in PyTorch, and I tried to implement it with the torch.max() function.
```python
import torch
import torch.nn as nn

class Maxout(nn.Module):
    def __init__(self):
        super(Maxout, self).__init__()

    def forward(self, x, y):
        # Element-wise maximum of the two input tensors
        return x.max(y)
```
Is it right?
I have the same question. In my case, I used the code below. Can anyone help?
```python
import torch
import torch.nn as nn

class Maxout(nn.Module):
    def __init__(self):
        super(Maxout, self).__init__()

    def forward(self, x, y):
        # Element-wise maximum of the two input tensors
        return torch.max(x, y)
```
You can define a layer like that, but it's not necessary: you can call torch.max() directly in forward and it will do the same thing.
Check this: maxout-layer
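Keep in mind that torch.max(x, y) only computes the element-wise max of two tensors. The classic maxout unit (Goodfellow et al., 2013) instead takes the maximum over several learned affine pieces per output feature. Here is a minimal sketch of that idea; the class name MaxoutLinear and the pool_size argument are just illustrative names, not from the linked post:

```python
import torch
import torch.nn as nn

class MaxoutLinear(nn.Module):
    """Maxout unit: max over `pool_size` affine pieces per output feature."""
    def __init__(self, in_features, out_features, pool_size):
        super().__init__()
        self.out_features = out_features
        self.pool_size = pool_size
        # One linear layer produces all pieces for all output features at once
        self.linear = nn.Linear(in_features, out_features * pool_size)

    def forward(self, x):
        # (batch, out_features * pool_size) -> (batch, out_features, pool_size)
        pieces = self.linear(x).view(-1, self.out_features, self.pool_size)
        # Keep the maximum piece for each output feature
        return pieces.max(dim=-1).values
```

For example, MaxoutLinear(128, 64, pool_size=2)(torch.randn(32, 128)) returns a (32, 64) tensor where each of the 64 features is the max of 2 learned linear pieces.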
You can also check out this MaxOut2D implementation, which performs maxout over the channels for an input of shape C x H x W.
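The channel version boils down to splitting the C channels into groups of pool_size and keeping the element-wise maximum within each group. A rough sketch of that idea (my own naming; it assumes C is divisible by pool_size and is not the linked implementation itself):

```python
import torch
import torch.nn as nn

class Maxout2d(nn.Module):
    """Channel-wise maxout for (N, C, H, W) inputs (sketch, not the linked code)."""
    def __init__(self, pool_size):
        super().__init__()
        self.pool_size = pool_size

    def forward(self, x):
        n, c, h, w = x.shape
        assert c % self.pool_size == 0, "C must be divisible by pool_size"
        # Group the channels: (N, C, H, W) -> (N, C // pool_size, pool_size, H, W)
        x = x.view(n, c // self.pool_size, self.pool_size, h, w)
        # Element-wise max within each channel group
        return x.max(dim=2).values
```

So an input of shape (N, 64, H, W) with pool_size=2 comes out as (N, 32, H, W).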