Non-differentiable forward pass

I am trying to do some regression (in a somewhat strange way):

import torch
from torch import autograd
import numpy as np
import math

SIZE=100
torch.manual_seed(123)

class DS:
    def __init__(self,size=SIZE):
        self.x=torch.rand(size,2)
        self.y=torch.Tensor(size)
dataset=DS()

for i in range(SIZE):
    dataset.x[i]=torch.rand(2)
    if torch.sum(dataset.x[i]) > 1:
        dataset.y[i]=1
    else:
        dataset.y[i]=0

x = autograd.Variable(dataset.x,requires_grad=False)
y = autograd.Variable(dataset.y,requires_grad=False)

a = autograd.Variable(torch.Tensor([-1]),requires_grad=True)
b = autograd.Variable(torch.Tensor([1]),requires_grad=True)

lr = 1e-2
optimizer = torch.optim.Adam([a, b], lr=lr)
for t in range(1000):
    y_pred = (a * x[:, 0] + b) < x[:, 1]   # comparison gives a non-differentiable (byte) result
    y_pred = y_pred.type(torch.FloatTensor)
    loss=(y_pred-y).pow(2).sum()
    if t % 10 == 0:
        print(t, loss.data[0])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print("Optimized Beta: %.5f"%b.data[0])
print("Optimized Alpha: %.5f"%a.data[0])

But my code is raising a runtime error. Any ideas?

Traceback (most recent call last):
  File "/home/div/PycharmProjects/torch_my_way/main.py", line 37, in <module>
    loss.backward()
  File "/usr/local/lib/python2.7/dist-packages/torch/autograd/variable.py", line 156, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, retain_variables)
  File "/usr/local/lib/python2.7/dist-packages/torch/autograd/__init__.py", line 98, in backward
    variables, grad_variables, retain_graph)
RuntimeError: there are no graph nodes that require computing gradients

I know this is probably because my forward pass is not fully differentiable:

y_pred = (a * x[:, 0] + b) < x[:, 1]

First, I want to make sure that this is the only problem.
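As a quick check of that suspicion (the small snippet below is mine, just for illustration, not part of the training code), the result of a comparison carries no grad_fn, which is presumably why backward() finds nothing to differentiate:

import torch
from torch import autograd

a = autograd.Variable(torch.Tensor([-1]), requires_grad=True)
out = (a * 2) < 1          # elementwise comparison
print(out.requires_grad)   # False: the comparison result is not differentiable
print(out.grad_fn)         # None, so there is no graph to walk back through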
Second, I want to ask: if I (hypothetically) wanted to define a backward pass for torch.ge(), where would I put it so that autograd picks it up?
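From what I understand, you would not patch torch.ge() itself; the usual place for a hand-written backward is a torch.autograd.Function subclass that you call via .apply() instead of the built-in operator. Here is a rough sketch of what I have in mind (the name GreaterEqualSTE and the straight-through surrogate backward are my own guesses, since the true gradient of a step function is zero almost everywhere):

class GreaterEqualSTE(autograd.Function):
    """Sketch: forward does the hard comparison, backward uses a
    hand-picked straight-through surrogate instead of the true
    (almost-everywhere-zero) gradient."""

    @staticmethod
    def forward(ctx, input, threshold):
        return (input >= threshold).type_as(input)

    @staticmethod
    def backward(ctx, grad_output):
        # Pass the incoming gradient straight through to both arguments;
        # the threshold gets the opposite sign because raising it can only
        # lower the output.
        return grad_output, -grad_output

# It would then replace the comparison in the training loop, e.g.:
# y_pred = GreaterEqualSTE.apply(x[:, 1], a * x[:, 0] + b)

Whether such a surrogate actually trains well is a separate question; a smooth relaxation such as torch.sigmoid of the margin might be the more conventional workaround.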