Hello, how can I create a custom activation function, for example the binary step below?

binary = lambda x: np.where(x >= 0, 1, 0)

I tried "activation = lambda x: torch.where(x < 0.5, 1., 0.)" and I ran into this error:

RuntimeError                              Traceback (most recent call last)
/tmp/ipykernel_615/1226976390.py in <module>
     69 x_colloc_tens = torch.tensor(x_colloc, requires_grad=True).float().to(device)
     70 t_colloc_tens = torch.tensor(t_colloc, requires_grad=True).float().to(device)
---> 71 f_out = f(x_colloc_tens, t_colloc_tens)
     72 zeros_tens = torch.tensor(np.zeros((taille_dataset, 1)), requires_grad=True).float().to(device)

/tmp/ipykernel_615/828930035.py in f(x, t)
      8 u = modele(entry_data)
----> 9 u_x = torch.autograd.grad(u, x, torch.ones_like(u), retain_graph=True, create_graph=True)[0]
     10 u_t = torch.autograd.grad(u, t, torch.ones_like(u), retain_graph=True, create_graph=True)[0]

~/.conda/envs/default/lib/python3.9/site-packages/torch/autograd/__init__.py in grad(outputs, inputs, grad_outputs, retain_graph, create_graph, only_inputs, allow_unused)
    232         retain_graph = create_graph
    233
--> 234     return Variable._execution_engine.run_backward(
    235         outputs, grad_outputs_, retain_graph, create_graph,
    236         inputs, allow_unused, accumulate_grad=False)

RuntimeError: One of the differentiated Tensors appears to not have been used in the graph. Set allow_unused=True if this is the desired behavior.
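For context, here is a minimal sketch that seems to reproduce the same error outside my model. The scalar w below is a hypothetical stand-in for the network weights (it is not in my actual code), and I use torch.ones_like/torch.zeros_like in place of the float literals so the lambda runs on older PyTorch too. Since torch.where only selects between two constant branches, its output presumably carries no autograd connection back to x, so asking for the gradient with respect to x fails:

```python
import torch

# Binary-step "activation": both branches are constants, so the result
# is detached from x in the autograd graph.
activation = lambda x: torch.where(x < 0.5, torch.ones_like(x), torch.zeros_like(x))

w = torch.tensor(2.0, requires_grad=True)             # hypothetical stand-in for model weights
x = torch.linspace(-1.0, 1.0, 5, requires_grad=True)
u = w * activation(x)                                 # u requires grad via w, but not via x

try:
    u_x = torch.autograd.grad(u, x, torch.ones_like(u))[0]
except RuntimeError as e:
    print(e)  # "One of the differentiated Tensors appears to not have been used in the graph. ..."
```

If that diagnosis is right, the step function also has zero derivative almost everywhere, so even allow_unused=True would give no useful gradient through it; I suspect a smooth surrogate (e.g. a steep sigmoid) would be needed, but I am not sure.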