Implementing a NumPy Model in PyTorch

Hi,

I’m just starting with PyTorch, so I’m building models from the basics. As a first step, I was porting a NumPy model to PyTorch. The following is the code I tried.

import torch
import numpy as np
import pandas as pd


admissions = pd.read_csv('https://stats.idre.ucla.edu/stat/data/binary.csv')

# Make dummy variables for rank
data = pd.concat([admissions, pd.get_dummies(admissions['rank'], prefix='rank')], axis=1)
data = data.drop('rank', axis=1)

# Standardize features
for field in ['gre', 'gpa']:
    mean, std = data[field].mean(), data[field].std()
    data.loc[:, field] = (data[field] - mean) / std

# Split off random 10% of the data for testing
np.random.seed(21)
sample = np.random.choice(data.index, size=int(len(data) * 0.9), replace=False)
data, test_data = data.loc[sample], data.drop(sample)

# Split into features and targets
features, targets = data.drop('admit', axis=1), data['admit']

features_test, targets_test = test_data.drop('admit', axis=1), test_data['admit']


dtype = torch.FloatTensor


m = torch.nn.Sigmoid()

n_hidden = 2
epochs = 10
learnrate = 0.005
n_records, n_features = features.shape

last_loss = None

weights_input_hidden = torch.randn(n_features, n_hidden).type(dtype)
weights_hidden_output = torch.randn(n_hidden).type(dtype)

for e in range(epochs):

    del_w_input_hidden = torch.from_numpy(np.zeros(weights_input_hidden.size())).type(dtype)
    del_w_hidden_output = torch.from_numpy(np.zeros(weights_hidden_output.size())).type(dtype)

    for x, y in zip(features.values, targets):

        hidden_input = torch.mm(x, weights_input_hidden)
        hidden_output = m(hidden_input)

        output = m(torch.mm(hidden_output, weights_hidden_output))

        error = y - output
        output_error_term = error * output * (1 - output)
        hidden_error = torch.mm(output_error_term, weights_hidden_output)
        hidden_error_term = hidden_error * hidden_output * (1 - hidden_output)
        del_w_hidden_output += output_error_term * hidden_output
        del_w_input_hidden += hidden_error_term * x[:, None]
    weights_input_hidden += learnrate * del_w_input_hidden / n_records
    weights_hidden_output += learnrate * del_w_hidden_output / n_records

    if e % (epochs / 10) == 0:
        hidden_output = m(torch.mm(x, weights_input_hidden))
        out = m(np.dot(hidden_output,
                       weights_hidden_output))
        loss = np.mean((out - targets) ** 2)
        if last_loss and last_loss < loss:
            print("Train loss: ", loss, "  WARNING - Loss Increasing")
        else:
            print("Train loss: ", loss)
        last_loss = loss
hidden = m(torch.mm(features_test, weights_input_hidden))
out = m(torch.mm(hidden, weights_hidden_output))
predictions = out > 0.5
accuracy = np.mean(predictions == targets_test)
print("Prediction accuracy: {:.3f}".format(accuracy))

The error I’m getting is the following:

Traceback (most recent call last):
  File "pytorch_tutorial.py", line 50, in <module>
    hidden_input = torch.mm(x, weights_input_hidden)
TypeError: torch.mm received an invalid combination of arguments - got (numpy.ndarray, torch.FloatTensor), but expected one of:

  • (torch.SparseFloatTensor mat1, torch.FloatTensor mat2)
    didn't match because some of the arguments have invalid types: (!numpy.ndarray!, torch.FloatTensor)
  • (torch.FloatTensor source, torch.FloatTensor mat2)
    didn't match because some of the arguments have invalid types: (!numpy.ndarray!, torch.FloatTensor)

I don’t understand how to convert x into a torch.FloatTensor.

Could someone please guide me on how to resolve this issue?

Edit:

For comparison, I’m including the NumPy code as well.

def sigmoid(x):
    return 1 / (1 + np.exp(-x))


n_hidden = 2
epochs = 10
learnrate = 0.005
n_records, n_features = features.shape

last_loss = None
weights_input_hidden = np.random.normal(scale=1 / n_features ** .5,
                                        size=(n_features, n_hidden))
weights_hidden_output = np.random.normal(scale=1 / n_features ** .5,
                                         size=n_hidden)
for e in range(epochs):

    del_w_input_hidden = np.zeros(weights_input_hidden.shape)
    del_w_hidden_output = np.zeros(weights_hidden_output.shape)

    for x, y in zip(features.values, targets):

        hidden_input = np.dot(x, weights_input_hidden)
        hidden_output = sigmoid(hidden_input)

        output = sigmoid(np.dot(hidden_output, weights_hidden_output))

        error = y - output
        output_error_term = error * output * (1 - output)
        hidden_error = np.dot(output_error_term, weights_hidden_output)
        hidden_error_term = hidden_error * hidden_output * (1 - hidden_output)
        del_w_hidden_output += output_error_term * hidden_output
        del_w_input_hidden += hidden_error_term * x[:, None]
    weights_input_hidden += learnrate * del_w_input_hidden / n_records
    weights_hidden_output += learnrate * del_w_hidden_output / n_records

    if e % (epochs / 10) == 0:
        hidden_output = sigmoid(np.dot(x, weights_input_hidden))
        out = sigmoid(np.dot(hidden_output,
                             weights_hidden_output))
        loss = np.mean((out - targets) ** 2)
        if last_loss and last_loss < loss:
            print("Train loss: ", loss, "  WARNING - Loss Increasing")
        else:
            print("Train loss: ", loss)
        last_loss = loss
hidden = sigmoid(np.dot(features_test, weights_input_hidden))
out = sigmoid(np.dot(hidden, weights_hidden_output))
predictions = out > 0.5
accuracy = np.mean(predictions == targets_test)
print("Prediction accuracy: {:.3f}".format(accuracy))

Thank you!

Put the following on line 49, just before the torch.mm call.

 x = torch.from_numpy(x).float()
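For context, a self-contained sketch of that conversion in isolation (the random stand-in data is an assumption, not the admissions dataset):

import numpy as np
import torch

features_values = np.random.randn(4, 6)   # stand-in for features.values
for x in features_values:
    x = torch.from_numpy(x).float()       # numpy.ndarray -> torch.FloatTensor
    print(x.type())                       # 'torch.FloatTensor'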

Going from numpy to pytorch tensors and back is very simple.

Pytorch tensor to Numpy array: 
numpy_array = pytorch_tensor.numpy()

Numpy array to Pytorch tensor:
pytorch_tensor = torch.from_numpy(numpy_array)

More info here.
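A quick round-trip sketch of those two conversions; note that torch.from_numpy shares memory with the source array, so in-place changes are visible on both sides:

import numpy as np
import torch

numpy_array = np.arange(6, dtype=np.float32).reshape(2, 3)
pytorch_tensor = torch.from_numpy(numpy_array)   # NumPy -> PyTorch (shares memory)
back_to_numpy = pytorch_tensor.numpy()           # PyTorch -> NumPy (also a view)

numpy_array[0, 0] = 42.0
print(pytorch_tensor[0, 0])                      # 42.0 -- the tensor sees the change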


@dhpollack: Thanks for your reply. After applying your suggestion, though, I’m getting the following error:

Traceback (most recent call last):
  File "pytorch_tutorial.py", line 50, in <module>
    hidden_input = torch.mm(x, weights_input_hidden)
RuntimeError: matrices expected, got 1D, 2D tensors at d:\downloads\pytorch-master-1\torch\lib\th\generic/THTensorMath.c:1233

This code works perfectly when I run it with NumPy, so I must be making a mistake somewhere in the conversion to PyTorch.

Thanks Bixqu for your answer!


What is happening is that NumPy is more lenient than PyTorch with regard to vector/matrix multiplication, so you need to make one or both of the tensors 2-dimensional rather than 1D. Look at the size of each tensor (x.size(), weights_input_hidden.size()); you’ll find one or both have just one dimension. To add dummy dimensions, use any (but not all!) of the following:

x = x.unsqueeze(0)
x.unsqueeze_(0)
x = x.view(1, -1).contiguous()
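For illustration, a minimal sketch of how the dummy dimension fixes the torch.mm call (the specific shapes here are assumptions, not taken from the post):

import torch

x = torch.randn(7)                        # 1-D vector, size (7,)
weights_input_hidden = torch.randn(7, 2)  # 2-D matrix, size (7, 2)

# torch.mm(x, weights_input_hidden)      # RuntimeError: matrices expected, got 1D, 2D tensors
row = x.unsqueeze(0)                      # size (1, 7): now a proper 1 x 7 matrix
print(torch.mm(row, weights_input_hidden).size())    # (1, 2)

# Alternatively, torch.mv handles the matrix-vector case directly:
print(torch.mv(weights_input_hidden.t(), x).size())  # (2,)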

Thanks @dhpollack, I’ll try it and let you know.

Full example of going from Numpy to PyTorch for binary classification:
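As a rough sketch of the same idea (not the linked example; the synthetic data and the modern autograd API are assumptions here):

import numpy as np
import torch
import torch.nn as nn

# Synthetic stand-in data, not the admissions dataset
X = np.random.randn(100, 6).astype(np.float32)
y = (X[:, 0] > 0).astype(np.float32)

inputs = torch.from_numpy(X)               # NumPy -> tensor
labels = torch.from_numpy(y).unsqueeze(1)  # shape (100, 1)

model = nn.Sequential(nn.Linear(6, 2), nn.Sigmoid(), nn.Linear(2, 1), nn.Sigmoid())
criterion = nn.BCELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.5)

for epoch in range(100):
    optimizer.zero_grad()
    loss = criterion(model(inputs), labels)
    loss.backward()
    optimizer.step()

predictions = (model(inputs) > 0.5).float()
print("Accuracy:", (predictions == labels).float().mean().item())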


@QuantScientist: Thanks for sharing the link. I had already checked it out, and it’s a wonderful presentation. I wanted a simpler conversion, which I’ve already done, but yours is the next level of complexity, and I’ll give it a try.