Combining Automatic and Manual Methods

I am testing a two-part architecture composed of a conventional first part, which can be any standard deep learning architecture, and a second part that must be coded manually outside the PyTorch graph declaration (while still using NumPy-like torch functions).

My problem can be reduced to coding a feed-forward neural network with two hidden layers, where the hidden layers are implemented inside the PyTorch graph and the final output layer is implemented manually outside it.

Architecture:

-> Linear(28 * 28, 120) in the PyTorch graph
-> ReLU in the PyTorch graph
-> Linear(120, 84) in the PyTorch graph
-> ReLU in the PyTorch graph
-> Linear(84, 10) outside the PyTorch graph
-> Output

Problem: My implementation below reaches only ~74% accuracy, while a standard fully-PyTorch implementation of the same network reaches ~95%. What is causing this disparity?
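
For reference, the fully-PyTorch baseline I am comparing against looks roughly like this (a sketch, not the exact code I ran; the final 84 -> 10 layer stays inside the graph, and it is trained with the same one-hot MSE loss and SGD(lr=0.01) loop as my code below):

import torch.nn as nn
import torch.nn.functional as F

class FullNet(nn.Module):
    def __init__(self):
        super(FullNet, self).__init__()
        self.fc1 = nn.Linear(28 * 28, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)  # final layer kept inside the graph

    def forward(self, x):
        x = x.view(-1, 28 * 28)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        return self.fc3(x)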

I believe the problem lies in how I manually pass the deltas back, but the math looks right to me, so I am stuck on finding a solution.
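
For reference, these are the backpropagation equations I am trying to implement for the manual layer, with h the output of the in-graph part (after the second ReLU), W_3 the manual 84 x 10 weight matrix, \hat{y} = h W_3 the manual output, and t the one-hot target:

E = \tfrac{1}{2} \lVert \hat{y} - t \rVert^{2}
\delta_{\text{out}} = \hat{y} - t
\frac{\partial E}{\partial W_3} = h^{\top} \delta_{\text{out}}
\frac{\partial E}{\partial h} = \delta_{\text{out}} W_3^{\top}

The last quantity is what I pass back to the in-graph layers as delta_h.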

Implementation of architecture and training on MNIST:

import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(28 * 28, 120)
        self.fc2 = nn.Linear(120, 84)

    def forward(self, x):
        x = x.view(-1, 28 * 28)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        return x

net = Net()

criterion = nn.MSELoss()
optimizer = optim.SGD(net.parameters(), lr=0.01)

# Initialize the manual layer's weights the same way PyTorch does by default
# (uniform in [-1/sqrt(fan_in), 1/sqrt(fan_in)], with fan_in = 84):
m = torch.distributions.uniform.Uniform(torch.tensor([-np.sqrt(1.0/84)]),
                                        torch.tensor([np.sqrt(1.0/84)]))
W = m.sample((84, 10)).reshape((84, 10))

# based on https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html
for epoch in range(2):  # loop over the dataset multiple times

    for i, data in enumerate(trainloader, 0):
        # get the inputs
        inputs, labels = data

        # make one-hot encoding of labels
        targets = oneHot(labels)

        # zero the parameter gradients
        optimizer.zero_grad()

        # forward + backward + optimize
        pytorch_outputs = net(inputs)
        # wrap the in-graph output as a new Variable that requires grad
        pytorch_outputs = torch.autograd.Variable(pytorch_outputs,
                                                  requires_grad=True)

        # manual final layer: (batch, 84) x (84, 10) -> (batch, 10)
        manual_outputs = torch.mm(pytorch_outputs, W)

        delta_out = manual_outputs - targets.view(-1,10)  # = error_out 
        dEdW3 = torch.mm(torch.t(pytorch_outputs), delta_out)
        W -= 0.01 * dEdW3  # gradient descent

        # back-propagate the output delta through W to get the delta
        # for the in-graph part's output
        delta_h = torch.autograd.Variable(
                                  torch.t(torch.mm(W, torch.t(delta_out))))

        # update the in-graph layers, using delta_h as the MSE target
        # for the PyTorch part's outputs
        loss = criterion(pytorch_outputs, delta_h)
        loss.backward()
        optimizer.step()

Full code: https://pastebin.com/EM5q4P6w
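
For completeness, minimal stand-ins for the helpers the snippet assumes (the exact versions are in the pastebin above; these are only sketches):

import torch
import torchvision
import torchvision.transforms as transforms

# MNIST training loader (batch size and normalization are assumptions,
# the real values are in the full code)
transform = transforms.Compose([transforms.ToTensor(),
                                transforms.Normalize((0.1307,), (0.3081,))])
trainset = torchvision.datasets.MNIST(root='./data', train=True,
                                      download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4, shuffle=True)

def oneHot(labels, num_classes=10):
    # one row per label: 1.0 in the label's column, 0.0 elsewhere
    out = torch.zeros(labels.size(0), num_classes)
    out[torch.arange(labels.size(0)), labels] = 1.0
    return out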