I made a simple net and very simple data (I just want to learn PyTorch).

The data:
import random

def make_data(nr_of_data):
    x_list = []
    y_list = []
    for i in range(nr_of_data):
        x = [random.randint(0, 1), random.randint(0, 1)]
        y = x[0]
        x_list.append(x)
        y_list.append(y)
    return x_list, y_list
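For reference, the label is simply a copy of the first feature, so a linear model should be able to overfit this data trivially. A quick self-contained check (repeating the function so the snippet runs on its own):

```python
import random

def make_data(nr_of_data):
    x_list = []
    y_list = []
    for i in range(nr_of_data):
        x = [random.randint(0, 1), random.randint(0, 1)]
        y = x[0]  # the label is just the first feature
        x_list.append(x)
        y_list.append(y)
    return x_list, y_list

X, y = make_data(10)
# every label equals the first element of its feature vector
assert all(label == features[0] for features, label in zip(X, y))
```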
How are you measuring the accuracy and is the test data the same as the training data? (I wouldn’t expect a model to be able to generalize from one random dataset to another).
X, y = make_data(50)
X = torch.FloatTensor(X)
y = torch.tensor(y)
for i in range(100):
    linear_classifier.learn(X, y)
prediction = linear_classifier(X)
c = 0
good = 0
for a in zip(prediction, y):
    c += 1
    if np.argmax([0]) == a[1]:
        good += 1
print("acc==", good/c)
And I use only training data (I just want to learn the torch "API", so I just want to overfit very simple data, but I can't XD).
I believe there is just a small typo in the code, please try this loop instead:
for i in range(100):
    linear_classifier.learn(X, y)

with torch.no_grad():
    prediction = linear_classifier(X)

c = 0
good = 0
for a in zip(prediction, y):
    c += 1
    if np.argmax(a[0]) == a[1]:
        good += 1
print("acc==", good/c)
(The original called np.argmax([0]), i.e. the argmax of the literal list [0], instead of np.argmax(a[0]), the argmax of the prediction row.)
torch.no_grad() is added so the prediction tensor is not attached to the autograd graph, which makes the numpy conversion happy.
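To illustrate that point with a minimal sketch independent of the classifier above: NumPy refuses to convert a tensor that is still tracked by autograd, while a tensor produced under no_grad (or detached) converts fine.

```python
import numpy as np
import torch

w = torch.ones(3, requires_grad=True)
out = w * 2  # tracked by autograd; np.argmax(out) would raise a RuntimeError

with torch.no_grad():
    out_ng = w * 2  # not tracked, so NumPy can convert it

print(np.argmax(out_ng))        # index of the (first) largest value
print(np.argmax(out.detach()))  # detach() is the other common fix
```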
Note that you can also use torch.argmax with dim=1 instead of numpy.
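With that, the counting loop collapses to a couple of tensor ops. A sketch with hypothetical logits for 4 samples and 2 classes:

```python
import torch

# Hypothetical predictions: 4 samples, 2 classes (one row of logits per sample).
prediction = torch.tensor([[2.0, 1.0],
                           [0.5, 3.0],
                           [4.0, 0.0],
                           [1.0, 1.5]])
y = torch.tensor([0, 1, 0, 0])

# Row-wise argmax picks the predicted class without leaving torch.
pred_labels = torch.argmax(prediction, dim=1)
acc = (pred_labels == y).float().mean().item()
print("acc==", acc)  # acc== 0.75 (3 of 4 rows match)
```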