Correct Loss for Bounded Multiple Regression

Suppose I have a target vector y = (y_1, y_2, ..., y_n) where y_i is in [0, inf) for all i, e.g.

# here n = 6 and n is constant for all data
y = [
  0,
  0,
  2,
  1,
  0,
  1
]

In general the y_i are integers, and in practice they are almost always in [0, 10]. In theory, however, they can take any non-negative real value.

What is the best way to handle this for training, and which loss should I use?
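For concreteness, the plain regression framing would look something like the sketch below (PyTorch, with a hypothetical net and made-up shapes; the 6 outputs correspond to the 6 target positions):

import torch
import torch.nn as nn

# hypothetical model: maps a 16-dim feature vector to 6 unconstrained outputs,
# one per target position (all names and sizes here are illustrative)
net = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 6))

x = torch.randn(4, 16)                                       # batch of 4 inputs
y = torch.tensor([[0., 0., 2., 1., 0., 1.]]).repeat(4, 1)    # targets in [0, inf), shape (4, 6)

loss = nn.MSELoss()(net(x), y)                               # plain regression loss on raw targets
loss.backward()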
Should I treat it like classification and refactor y, so that the above example becomes

# assuming zero indexed
[
  [1, 0, 0, 0, 0, 0, 0, 0, 0, 0], # 0
  [1, 0, 0, 0, 0, 0, 0, 0, 0, 0], # 0
  [0, 0, 1, 0, 0, 0, 0, 0, 0, 0], # 2
  [0, 1, 0, 0, 0, 0, 0, 0, 0, 0], # 1
  [1, 0, 0, 0, 0, 0, 0, 0, 0, 0], # 0
  [0, 1, 0, 0, 0, 0, 0, 0, 0, 0], # 1
]
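
For the classification framing, a minimal sketch (again PyTorch, with a hypothetical model and shapes) would emit 10 logits per target position and use cross-entropy. Note that nn.CrossEntropyLoss takes integer class indices directly, so the one-hot rows above would just be the indices:

import torch
import torch.nn as nn

# hypothetical model: 10 logits for each of the 6 target positions (60 outputs total)
net = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 10 * 6))

x = torch.randn(4, 16)                                  # batch of 4 inputs
y = torch.tensor([0, 0, 2, 1, 0, 1]).repeat(4, 1)       # class indices, shape (4, 6)

logits = net(x).view(-1, 10, 6)                         # (batch, classes, positions)
loss = nn.CrossEntropyLoss()(logits, y)                 # expects indices, not one-hot rows
loss.backward()

(This obviously caps the representable targets at 9, which is part of my concern, since in theory y_i can exceed that range.)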