Per-parameter optimizer settings don't work

I have a simple model, and an additional parameter act_max that I want to train with gradient descent:

import torch
import torch.nn as nn

class Net(nn.Module):
	def __init__(self):
		super(Net, self).__init__()

		self.act_max = nn.Parameter(torch.Tensor([0]), requires_grad=True)  # learnable clipping threshold for the activations

		self.conv1 = nn.Conv2d(3, 32, kernel_size=5)
		self.conv2 = nn.Conv2d(32, 64, kernel_size=5)
		self.pool = nn.MaxPool2d(2, 2)
		self.relu = nn.ReLU()
		self.linear = nn.Linear(64 * 5 * 5, 10)

	def forward(self, input):
		conv1 = self.conv1(input)
		pool1 = self.pool(conv1)
		relu1 = self.relu(pool1)

		relu1 = torch.where(relu1 > self.act_max, self.act_max, relu1)

		conv2 = self.conv2(relu1)
		pool2 = self.pool(conv2)
		relu2 = self.relu(pool2)
		relu2 = relu2.view(relu2.size(0), -1)
		return self.linear(relu2)

model = Net()
model.apply(utils.weights_init)
nn.init.constant_(model.act_max, 1.0)
model = model.cuda()
optimizer = torch.optim.SGD([
	{'params': model.conv1.parameters(), 'weight_decay': 0.001},
	{'params': model.conv2.parameters(), 'weight_decay': 0.002},
	{'params': model.linear.parameters(), 'weight_decay': 0.003}], lr=0.01, momentum=0.9, nesterov=True)

for epoch in range(100):
	model.train()
	for i in range(1000):
		# input, label: a training batch (data loading omitted here)
		output = model(input)
		loss = nn.CrossEntropyLoss()(output, label)
		optimizer.zero_grad()
		loss.backward()
		optimizer.step()

However, act_max is not being updated in the code above. If I change the optimizer to
optimizer = torch.optim.SGD(model.parameters(), lr=0.01), it works (act_max is updated every iteration).
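
(A quick way to see this, assuming the training loop above: print the value after each epoch.)

print(model.act_max.item())  # stays at 1.0 with the per-group optimizer, changes once model.parameters() is used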

According to the per-parameter options section of the optimizer docs, this should work. Is this a bug?

You have to pass every parameter you want trained to the optimizer, including the ones that should just use the default options: any parameter that is not in one of the param groups is never touched by optimizer.step().
This should work:

optimizer = torch.optim.SGD([
	{'params': model.act_max},
	{'params': model.conv1.parameters(), 'weight_decay': 0.001},
	{'params': model.conv2.parameters(), 'weight_decay': 0.002},
	{'params': model.linear.parameters(), 'weight_decay': 0.003}], lr=0.01, momentum=0.9, nesterov=True)
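
Groups that don't override an option just inherit the defaults from the constructor, so act_max here ends up trained with lr=0.01, momentum=0.9, nesterov=True and no weight decay. If you want to double-check that nothing was left out of the groups, a minimal sanity check (assuming the model and optimizer defined above) is:

# every tensor the optimizer will actually update
optimized = {id(p) for group in optimizer.param_groups for p in group['params']}
for name, p in model.named_parameters():
	if id(p) not in optimized:
		print(name, 'is not passed to the optimizer and will not be updated')

With your original param groups this prints act_max; with the groups above it prints nothing.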

I see, thanks! It wasn't entirely clear from the docs.