Problems implementing the 'FizzBuzz' game with a 2-layer NN

I’m a beginner learning PyTorch on Python 3.7. While trying to implement the ‘FizzBuzz’ game (details about this little game are in the code) with an NN, I found the loss decreased quite slowly. I also noticed that the dimensions of y_pre and batchY are different. Is this the reason why the model seems to be inefficient? How can I change my code?

### FizzBuzz game: if the given number is divisible by 15, print 'fizzbuzz'; by 5, print 'buzz'; by 3, print 'fizz'; otherwise print the number itself

def fizz_buzz_encode(i):
    if i % 15 == 0: return 0 
    elif i % 5 == 0: return 1 
    elif i % 3 == 0: return 2 
    else: return 3 

def fizz_buzz_decode(i,prediction):
    return ['fizzbuzz','buzz','fizz',i][prediction]

for i in range(50):
    print(fizz_buzz_decode(i,fizz_buzz_encode(i)))


### implement using nn

import numpy as np
import torch
NUM_DIGITS = 10 
def binary_encode(i,num_digits):
    # binary representation of i as a num_digits-bit vector, most significant bit first
    return np.array([i>>d&1 for d in range(num_digits)][::-1])

trX = torch.Tensor([binary_encode(i,NUM_DIGITS) for i in range(101,2**NUM_DIGITS)])
trY = torch.LongTensor([fizz_buzz_encode(i) for i in range(101,2**NUM_DIGITS)])


NUM_HIDDEN = 100 
model = torch.nn.Sequential(torch.nn.Linear(NUM_DIGITS,NUM_HIDDEN),
                           torch.nn.ReLU(),
                           torch.nn.Linear(NUM_HIDDEN,4))

device=torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model=model.to(device)

loss_fn = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(),lr = 0.01)

BATCH_SIZE = 256 
for epoch in range(10000):    # run 10000 epochs
    for start in range(0,len(trX),BATCH_SIZE):
        end = start + BATCH_SIZE    # take the next BATCH_SIZE numbers as one batch
        batchX =trX[start:end].to(device)
        batchY =trY[start:end].to(device)
        
        y_pre = model(batchX)
        loss = loss_fn(y_pre,batchY)
        print('Epoch',epoch,loss.item())
        
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

The output is as follows:
Epoch 0 1.33250093460083
Epoch 0 1.3155981302261353
Epoch 0 1.3044551610946655
Epoch 0 1.2906633615493774
Epoch 1 1.3085103034973145
Epoch 1 1.290393352508545
Epoch 1 1.2784980535507202
Epoch 1 1.2634493112564087
Epoch 2 1.2881089448928833
Epoch 2 1.2691847085952759
Epoch 2 1.256813645362854
Epoch 2 1.2410774230957031

Epoch 9998 0.3821529746055603
Epoch 9998 0.38773271441459656
Epoch 9998 0.44764772057533264
Epoch 9998 0.35775092244148254
Epoch 9999 0.38201063871383667
Epoch 9999 0.3875814378261566
Epoch 9999 0.4476431608200073
Epoch 9999 0.357889324426651

I am also confused that my CPU utilization is 100% while my GPU’s is 0, even though I already transferred the whole process to CUDA. What’s wrong with my code? Thanks a lot.

If you look at the nn.CrossEntropyLoss documentation, you can see that y_pre (shape B x C) and batchY (shape B) have the correct shapes, so there is no problem here (B = batch_size and C = number_classes).
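
For reference, here is a minimal sketch of the shapes nn.CrossEntropyLoss expects (the values are made up purely for illustration):

import torch

loss_fn = torch.nn.CrossEntropyLoss()
B, C = 4, 4                            # batch size and number of classes
logits = torch.randn(B, C)             # model output: raw scores of shape (B, C), no softmax needed
targets = torch.tensor([0, 3, 2, 1])   # class indices of shape (B,)
print(loss_fn(logits, targets))        # a scalar loss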

You might want to make sure that torch.cuda.is_available() returns True. I tried your code and it ran on my GPU with no problem, so you may not be using yours.
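
For what it’s worth, here is a quick sketch (using your variable names) to double-check that the model and the batches actually end up on the GPU:

print(next(model.parameters()).device)   # should print cuda:0 after model.to(device)
print(batchX.device, batchY.device)      # should also print cuda:0 inside the training loop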

That is one possibility; however, I also think you cannot achieve high utilization on the GPU with such a tiny model, so I would expect to see only a few spikes for this workload.

Thanks for your reply. Actually, I’ve already checked torch.cuda.is_available() and it returns True, so I omitted that check from my code.
What I do not understand is that my CPU sits at 100% while running the code and I can hardly do other things on my computer :( yet my GPU stays idle.
Also, somebody told me this code runs quite efficiently on his computer, with the loss dropping fast, while mine seems useless. Changing the learning rate does not help.

Ok, this seems weird to me. What I can tell you is that running your code takes 53.99 seconds (for the 10,000 epochs). It also uses around 511 MiB of memory on my GPU. You can get information about your GPU by running this command in the terminal:

watch -n 0.5 nvidia-smi

Maybe you could check whether you get the same thing as me on your GPU?
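
If nvidia-smi is inconvenient, you could also query the caching allocator from inside PyTorch; here is a rough sketch (these numbers only count tensor allocations, so they will be lower than what nvidia-smi reports):

import torch
device = torch.device("cuda:0")
print(torch.cuda.memory_allocated(device) / 1024**2, "MiB currently allocated")
print(torch.cuda.max_memory_allocated(device) / 1024**2, "MiB peak allocation")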

I ran into the same problem. Have you solved it? :smiley: