Using torch.trapz for integrating from 0 to infinity

Consider the following code

import torch
import torch.optim as optim

# Choose layer sizes
n_input = 1
n_hidden = 10
n_output = 2
n_bias = 1

def NN_s(q, w1, w2s, b):
    # one hidden layer of softplus units, linearly combined by w2s
    sigma = torch.nn.functional.softplus(q * w1 + b)
    outputs = torch.matmul(w2s, sigma)
    return outputs

def target_func_s(q):
    # Gaussian target function (torch.exp keeps everything differentiable)
    return q**0 * torch.exp(-1.5**2 * q**2 / 2)

def K_s(w1, w2, b):
    x = torch.linspace(0, float('inf'), steps=1000)
    # numerator part
    f = x**2 * NN_s(x, w1, w2, b) * target_func_s(x)
    num = torch.trapz(f, x)
    # denominator part 1
    g = x**2 * NN_s(x, w1, w2, b)**2
    denom1 = torch.trapz(g, x)
    # denominator part 2
    h = x**2 * target_func_s(x)**2
    denom2 = torch.trapz(h, x)
    return (num**2) / (denom1 * denom2)

#### cost function
def cost(w1, w2, b):
    w2s = w2[0, :]
    c_s = (K_s(w1, w2s, b) - 1)**2
    return c_s
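
For context, K_s is intended to approximate the normalized overlap between the network output and the target (writing NN for NN_s and f for target_func_s):

K = (\int_0^\infty x^2 NN(x) f(x) dx)^2 / (\int_0^\infty x^2 NN(x)^2 dx \cdot \int_0^\infty x^2 f(x)^2 dx)

By the Cauchy-Schwarz inequality K \le 1, with equality exactly when NN is proportional to f, so minimizing the cost (K_s - 1)^2 pushes the network output towards the target up to normalization.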


When I run the following code,

epochs = 10**5
beta = 0.9
learning_rate = 0.02
costs = []
epsilon = 1e-6

# w1 is not shown in the original snippet; it is assumed to be initialized like w2 and b
w1 = torch.distributions.uniform.Uniform(0, 1).sample([n_hidden, n_input]).requires_grad_(True)
w2 = torch.distributions.uniform.Uniform(0, 1).sample([n_output, n_hidden]).requires_grad_(True)
b = torch.distributions.uniform.Uniform(-1, 1).sample([n_hidden, 1]).requires_grad_(True)

optimizer = optim.RMSprop([w1, w2, b], lr=learning_rate)

for epoch in range(epochs):
    optimizer.zero_grad()   # clear gradients accumulated in the previous step
    loss = cost(w1, w2, b)
    loss.backward()
    optimizer.step()
    costs.append(loss.item())

print(f"costs = {costs}")



My problem is in the function K_s(w1, w2, b), specifically the line

x = torch.linspace(0, float('inf'), steps=1000)

Is there a way to handle \int_0^{\infty} in torch? Any suggestions would help. Thank you!

Hi!
Assuming that your integral converges (which is probably true, since you want to compute it), \int_0^\infty can be approximated by \int_0^L for some sufficiently large number L. So the option for you is to choose a satisfactory finite numerical limit instead of float('inf'); it only needs to be large enough that the integrand has already decayed to (numerical) zero, and with your Gaussian factor exp(-1.5^2 q^2 / 2) something like L = 10 is already more than enough.
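
A minimal sketch of what I mean, applied to your K_s (the cutoff L = 10.0 and the 1000 grid points are just placeholder choices; pick them so the integrand is both fully decayed and well resolved):

L = 10.0  # assumed finite upper limit replacing float('inf')

def K_s(w1, w2, b):
    # trapezoid rule on [0, L] as a stand-in for the integral on [0, inf)
    x = torch.linspace(0.0, L, steps=1000)
    f = x**2 * NN_s(x, w1, w2, b) * target_func_s(x)   # numerator integrand
    g = x**2 * NN_s(x, w1, w2, b)**2                    # denominator integrand 1
    h = x**2 * target_func_s(x)**2                      # denominator integrand 2
    num = torch.trapz(f, x)
    denom1 = torch.trapz(g, x)
    denom2 = torch.trapz(h, x)
    return (num**2) / (denom1 * denom2)

# quick sanity check that L is large enough: the integrands should be ~0 at x = L
print((L**2 * target_func_s(torch.tensor(L))**2).item())

Since everything stays a torch operation, gradients still flow through torch.trapz back to w1, w2 and b.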

Thank you! I have a follow-up question on torch.optim.RMSprop:

When I do:

optimizer = optim.RMSprop([w1, w2, b], lr=learning_rate,  alpha=0.9, eps=1e-08)

for epoch in range(epochs):
    loss = cost(w1, w2, b)
    print(loss)
    print(loss.backward())
    optimizer.step()
    costs.append(loss.item())


I get values for the loss, but loss.backward() prints None:

Example of the printed output:

tensor(0.0413, grad_fn=<AddBackward0>)
None
None
None