Hello everyone,
I’m trying to create a simple example with PyTorch that detects whether an input number is odd or even.
I don’t know if this is actually possible; if my understanding is correct, PyTorch uses a linear function whose coefficients are adjusted during training, followed by a sigmoid function that fires (or not) to produce the classification.
I started coding something that creates the dataset and then performs the training.
Note that I use the EarlyStopping algorithm to stop training when the validation error starts to increase.
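For reference, here is a minimal sketch of the early-stopping logic I rely on (I assume an EarlyStopping class with the common pytorchtools-style API; the real helper may differ in details, and the names below are illustrative):

```python
import torch

class EarlyStopping:
    """Minimal sketch of a pytorchtools-style early stopper (illustrative)."""

    def __init__(self, patience=3, verbose=False, path='checkpoint.pt',
                 trace_func=print):
        self.patience = patience      # epochs to wait after the last improvement
        self.verbose = verbose
        self.path = path              # where the best weights are checkpointed
        self.trace_func = trace_func  # can be swapped for a no-op to silence it
        self.counter = 0
        self.best_loss = None
        self.early_stop = False

    def __call__(self, val_loss, model):
        if self.best_loss is None or val_loss < self.best_loss:
            # validation loss improved: save the weights and reset the counter
            self.best_loss = val_loss
            self.counter = 0
            torch.save(model.state_dict(), self.path)
        else:
            # no improvement: count up and stop once patience is exhausted
            self.counter += 1
            if self.verbose:
                self.trace_func(
                    f'EarlyStopping counter: {self.counter} out of {self.patience}')
            if self.counter >= self.patience:
                self.early_stop = True
```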
```python
import numpy as np
import torch
from torch import nn
from pytorchtools import EarlyStopping  # early-stopping helper script

# split_sequences and noprint are small helpers of mine (not shown here)
X, y = split_sequences(dataset, 1)

model = nn.Sequential(
    nn.Linear(1, 1),
    nn.Sigmoid())

patience = 3
early_stopping = EarlyStopping(patience=patience, verbose=True)
train_losses = []
valid_losses = []
avg_train_losses = []
avg_valid_losses = []

criterion = nn.BCELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.003)
epochs = 1000
for e in range(epochs):
    running_loss = 0
    model.train()
    batchi1 = 1
    for batchi in range(5000, len(X), 5000):
        # training pass over the current slice
        for x in range(batchi1, batchi):
            line = torch.tensor([X[x]], dtype=torch.float32)
            out = torch.tensor([y[x]], dtype=torch.float32)
            optimizer.zero_grad()
            output = model(line)
            loss = criterion(output, out)
            train_losses.append(loss.item())
            loss.backward()
            optimizer.step()
            running_loss += loss.item()
        # validation pass: no gradients needed
        model.eval()
        with torch.no_grad():
            for x in range(batchi1, batchi):
                line = torch.tensor([X[x]], dtype=torch.float32)
                out = torch.tensor([y[x]], dtype=torch.float32)
                output = model(line)
                loss = criterion(output, out)
                valid_losses.append(loss.item())
        train_loss = np.average(train_losses)
        valid_loss = np.average(valid_losses)
        avg_train_losses.append(train_loss)
        avg_valid_losses.append(valid_loss)
        epoch_len = len(str(epochs))
        print_msg = (f'[{e:>{epoch_len}}/{epochs:>{epoch_len}}] ' +
                     f'train_loss: {train_loss:.5f} ' +
                     f'valid_loss: {valid_loss:.5f}')
        train_losses = []
        valid_losses = []
        early_stopping.trace_func = noprint
        early_stopping.path = 'NEURALNN.pt'
        early_stopping(valid_loss, model)
        print(f"Training loss: {running_loss/len(X)}")
        if early_stopping.early_stop:
            print("Early stopping")
            break
        batchi1 = batchi  # was batchi1 + batchi, which skipped past the data
    if early_stopping.early_stop:
        break
model.load_state_dict(torch.load('NEURALNN.pt'))
```
My example doesn’t really work: the training loss stays very high:
```
Training loss: 24.53215186495483
Training loss: 24.53096993828714
Training loss: 24.530958538565038
Training loss: 24.537694978424906
Training loss: 24.537682025301457
Training loss: 24.53767285807431
Training loss: 24.53766483396888
Training loss: 24.537656717956065
Training loss: 24.53767231979668
Training loss: 24.537667768600585
Training loss: 24.537658959439398
Training loss: 24.537649419358374
Early stopping
<All keys matched successfully>
```
And with a test:
```python
print(model(torch.tensor([[1]], dtype=torch.float32)))
print(model(torch.tensor([[2]], dtype=torch.float32)))
print(model(torch.tensor([[3]], dtype=torch.float32)))
print(model(torch.tensor([[4]], dtype=torch.float32)))
print(model(torch.tensor([[5]], dtype=torch.float32)))
```
I get:
```
tensor([[0.4762]], grad_fn=<SigmoidBackward>)
tensor([[0.5165]], grad_fn=<SigmoidBackward>)
tensor([[0.5567]], grad_fn=<SigmoidBackward>)
tensor([[0.5961]], grad_fn=<SigmoidBackward>)
tensor([[0.6343]], grad_fn=<SigmoidBackward>)
```
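I notice these outputs simply increase with the input. If I understand correctly, a single sigmoid(w*x + b) is monotonic in x, so it can never alternate between below and above 0.5 for consecutive integers. A quick check with made-up values of w and b close to what my model appears to have learned (hypothetical, for illustration only):

```python
import torch

# hypothetical weight and bias, roughly matching the outputs above
w, b = 0.16, -0.25
outs = [torch.sigmoid(torch.tensor(w * x + b)).item() for x in range(1, 6)]
# outs is strictly increasing, so it cannot flip between odd and even labels
print(outs)
```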
Can you give me some advice on how to approach this problem?
Of course, I am a beginner looking to learn about PyTorch, so thank you for your indulgence.
Thank you in advance.
Note that the code to build my dataset is:
```python
dataset = []
for A in range(10000):
    B = 2
    C = 0
    if A % B == 0:
        C = 1
    dataset.append([A, C])
dataset = np.array(dataset)
```
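For completeness, the same dataset can also be built in a vectorized way with NumPy (an equivalent rewrite of the loop above: column 0 is the number, column 1 is 1 for even and 0 for odd):

```python
import numpy as np

A = np.arange(10000)
# label is 1 when the number is even, 0 when it is odd
dataset = np.stack([A, (A % 2 == 0).astype(int)], axis=1)
```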