More data than neurons with autograd?

Is it possible to use autograd or "manual" backpropagation when the input is larger than the number of neurons, and/or when the input requires processing outside of the network? For example, something like this to find the average of N numbers, which would be output by a single neuron:

import numpy
import torch

samples = 5
data = numpy.random.default_rng().uniform(0, 1, samples)
reference = numpy.mean(data)
pred = torch.randn(1, requires_grad=True)
lr = 0.01
for i in range(10):
    # pred.item() returns a plain Python float, so this new tensor is
    # disconnected from pred and backward() never reaches pred
    distance = torch.tensor(
        numpy.pow(pred.item() - reference, 2),
        requires_grad=True)
    distance.backward(retain_graph=True)
    pred.sub_(lr * pred.grad.data)  # fails here: pred.grad is None
    pred.grad.data.zero_()

The above code doesn't work, but I'm not sure how to fix it, or whether it's even possible with autograd/PyTorch.

I managed to make this work with mygrad:

import numpy
import mygrad

samples = 5
data = numpy.random.default_rng().uniform(0, 1, samples)
reference = mygrad.tensor(numpy.mean(data))
pred = mygrad.tensor(numpy.random.default_rng().uniform(0, 1, 1)[0])
lr = 0.1
for i in range(1):
    # numpy.pow on a mygrad tensor returns a mygrad tensor with the
    # computational graph attached, so backward() can reach pred
    distance = numpy.pow(pred - reference, 2)
    distance.backward()
    pred -= lr * pred.grad
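As far as I can tell, this works because mygrad tensors hook into NumPy's ufunc protocol, so calling a NumPy function on a tensor produces a graph-tracking mygrad tensor rather than a plain array. A minimal check, assuming that interop works as advertised (the choice of numpy.square and the value 2.0 here are just for illustration):

import numpy
import mygrad

x = mygrad.tensor(2.0)
y = numpy.square(x)  # NumPy ufunc called on a mygrad tensor
y.backward()
print(type(y))       # a mygrad tensor, not a plain ndarray
print(x.grad)        # dy/dx = 2*x = 4.0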

The code below works for me:

import numpy
import torch

samples = 5
data = numpy.random.default_rng().uniform(0, 1, samples)
reference = numpy.mean(data)
pred = torch.randn(1, requires_grad=True)
lr = 0.01

for i in range(10):
    # distance is built from pred with torch ops, so autograd tracks it
    distance = torch.pow(pred - reference, 2)
    distance.backward()
    # update through .data so the in-place op isn't recorded by autograd
    pred.data.sub_(lr * pred.grad.data)
    pred.grad.data.zero_()

Am I missing something?
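As an aside, the same update can be written without touching .data, either by wrapping it in torch.no_grad() or by handing pred to a stock optimizer. A sketch of the optimizer variant (torch.optim.SGD is standard PyTorch; the rest mirrors the code above):

import numpy
import torch

samples = 5
data = numpy.random.default_rng().uniform(0, 1, samples)
reference = numpy.mean(data)
pred = torch.randn(1, requires_grad=True)
opt = torch.optim.SGD([pred], lr=0.01)

for i in range(10):
    distance = torch.pow(pred - reference, 2)
    opt.zero_grad()       # clear the gradient from the previous step
    distance.backward()   # populate pred.grad
    opt.step()            # pred -= lr * pred.grad, outside autograd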

Thanks, looks like I was missing .data in pred.data.sub_, plus building distance directly from pred with torch.pow so that autograd can actually track it.