Best practices for multiple network outputs

So I have a task where the net outputs several variables with different ranges: some are bounded to [0, 1], some to [0, +inf), and some are unbounded, (-inf, +inf). I want the network itself to enforce these restrictions. My thought was to do something like:

def forward(self, x):
	x = self.layers(x)  # stand-in for whatever produces the output
	# print(x.shape) -> (bs, 5)

	# idx 0 is the variable with the [0, 1] bound, so
	x[:, 0] = torch.sigmoid(x[:, 0])

	# idx 1-3 are bounded to [0, +inf), so
	x[:, 1:4] = torch.relu(x[:, 1:4])

	# idx 4 is (-inf, +inf), so it is left as is

	return x

Is this a valid approach? The model will be saved with torch.jit.trace and served using the C++ API.

Off topic: what do you think of training with only an MSE loss? Sure, you could combine a cross-entropy loss for the [0, 1] variables with MSE for the others, but I wonder whether it would have a major effect.
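For reference, combining the two losses is only a couple of lines. This is a minimal sketch with random stand-in tensors (the column layout follows the forward pass above, and whether the mix beats plain MSE is an empirical question):

```python
import torch
import torch.nn.functional as F

# stand-in predictions and targets; column 0 is assumed to be in [0, 1]
pred = torch.rand(8, 5)
target = torch.rand(8, 5)

# binary cross-entropy on the [0, 1] column, MSE on the rest
loss = F.binary_cross_entropy(pred[:, 0], target[:, 0]) \
     + F.mse_loss(pred[:, 1:], target[:, 1:])
```

Both terms are differentiable, so the sum backpropagates through the whole network as usual.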

Using different activation functions sounds reasonable. However, I would recommend checking whether your code raises errors for invalid in-place operations before porting it to libtorch.
If so, you could assign the results to temporary tensors and concatenate them afterwards.
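The split-and-concatenate version could look like the sketch below (the `nn.Linear` backbone and the input size are placeholders; only the forward logic matters). It avoids in-place slice assignment entirely, so it traces cleanly:

```python
import torch
import torch.nn as nn

class MultiHeadNet(nn.Module):
    def __init__(self, in_features=10):
        super().__init__()
        # stand-in backbone producing the (bs, 5) raw output
        self.layers = nn.Linear(in_features, 5)

    def forward(self, x):
        x = self.layers(x)                  # (bs, 5)
        bounded = torch.sigmoid(x[:, 0:1])  # [0, 1]
        nonneg = torch.relu(x[:, 1:4])      # [0, +inf)
        free = x[:, 4:5]                    # (-inf, +inf), untouched
        # no in-place ops: build the output from the pieces
        return torch.cat([bounded, nonneg, free], dim=1)

net = MultiHeadNet()
out = net(torch.randn(8, 10))
# tracing works because the forward pass has no in-place slice writes
traced = torch.jit.trace(net, torch.randn(1, 10))
```

Using `0:1` and `4:5` instead of `0` and `4` keeps the slices two-dimensional, so `torch.cat` along `dim=1` reassembles the original `(bs, 5)` shape.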