How to create a tensor based on another one

I’m studying AI using PyTorch and implementing some toy examples.
First, I created a one-dimensional tensor (X) and a second tensor (y), derived from the first one:

X = torch.arange(0, 100, 1.0).unsqueeze(dim=1)
y = X * 2

So I have something like

X = tensor([[0.], [1.], [2.], [3.], [4.], [5.], ...
y = tensor([[ 0.], [ 2.], [ 4.], [ 6.], [ 8.], [10.], ...

Then I trained a model to predict y, and it worked fine.

Now I would like something different: X will be 2D and y will hold one label per row of X. Each y is calculated by an operation on the elements of the corresponding row:
if x[0] + x[1] >= 0, then y = 10; otherwise y = -10

X = tensor([[ 55.5348, -97.7608],
            [ 29.0493, -52.1908],
            [ 47.1722, -43.1151],
            [ 11.1242, -62.8652],
            [ 44.8067,  80.8335],...
y = tensor([[-10.], [-10.], [ 10.], [-10.], [ 10.],...
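For reference, I think this rule can also be written directly on the tensor. A minimal sketch with torch.where, assuming the same >= 0 threshold I use in my NumPy code below:

import torch

X = torch.empty(1000, 2).uniform_(-100, 100)   # random inputs in [-100, 100)
s = X.sum(dim=1, keepdim=True)                 # x[0] + x[1] for each row, kept as a column
y = torch.where(s >= 0, torch.full_like(s, 10.0), torch.full_like(s, -10.0))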

First question: does this make sense in terms of machine learning?

Second question…
I’m generating the tensors using NumPy. Could I do it in a smarter way?

import numpy as np

# Create X input values for testing
X_numpy = np.random.uniform(low=-100, high=100, size=(1000, 2))
print("X", X_numpy)

# My first attempt, as a list comprehension:
# y_numpy = np.array([[10. if (n[0] + n[1]) >= 0 else -10.] for n in X_numpy])
y_numpy = np.empty(shape=[0, 1])
for n in X_numpy:
    # Label each row by the sign of the sum of its two elements
    if n[0] + n[1] >= 0:
        y_numpy = np.append(y_numpy, [[10.]], axis=0)
    else:
        y_numpy = np.append(y_numpy, [[-10.]], axis=0)
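
One vectorized alternative I’m considering (just a sketch with np.where, same >= 0 rule, and torch.from_numpy for the conversion):

import numpy as np
import torch

X_numpy = np.random.uniform(low=-100, high=100, size=(1000, 2))

# 10 where the row sum is >= 0, else -10; keepdims makes y a column of shape (1000, 1)
y_numpy = np.where(X_numpy.sum(axis=1, keepdims=True) >= 0, 10.0, -10.0)

# Convert to float32 tensors for PyTorch
X = torch.from_numpy(X_numpy).float()
y = torch.from_numpy(y_numpy).float()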