Doubts about creating an LSTM that predicts NBA player positions

Hi, I’m really new to PyTorch, but this summer I wanted to do something cool with it. I want to predict the position of a particular player based on the historical positions of all the players over the course of the game, using data that comes from real NBA games (GitHub - linouk23/NBA-Player-Movements: 🏀 Visualization of NBA games from raw SportVU data logs).

To do this I think the best option is an LSTM, because the previous position information is important for predicting the next one. The thing is, I looked at the LSTM example that PyTorch provides (the one about recreating a sine wave), and there are some things that are not clear to me.

First, you should know that as input data for training I was thinking of tensors of (team_id, player_id, x, y) for each moment, and of using the (x, y) values as the training target. Then, whenever I wanted to test the model, I would replace a player's position with (-1, -1) and see if it gives back a sensible value.
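
To make that a bit more concrete, this is roughly how I imagined one training sequence and the masking for testing (only a sketch with fake data, just to show the shapes I have in mind):

import torch

seq_len = 28
# stand-in for one player's trajectory: at every moment a (team_id, player_id, x, y) row
seq = torch.randn(seq_len, 4)

inputs = seq              # what I would feed to the network, shape (seq_len, 4)
targets = seq[:, 2:4]     # the (x, y) values I want it to predict, shape (seq_len, 2)

# for testing I would hide the position like this and check what the model gives back
masked = seq.clone()
masked[:, 2:4] = -1.0     # replace (x, y) by (-1, -1)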

I still do not know if this is the best model for what I want to do, so I am open to suggestions.

import torch
import torch.nn as nn

# device used for the tensors created inside forward()
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

class RNN(nn.Module):
    def __init__(self, input_size, hidden_size, num_layers, output_size):
        super(RNN, self).__init__()
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        self.input_size = input_size
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers, batch_first=True)
        # nn.Linear needs both in_features and out_features
        self.fc = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        # Set initial hidden and cell states
        h0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size).to(device)
        c0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size).to(device)

        # Forward propagate LSTM
        out, hidden = self.lstm(x, (h0, c0))  # out: tensor of shape (batch_size, seq_length, hidden_size)

        # Decode the hidden state of the last time step
        out = self.fc(out[:, -1, :])
        return out

#HYPER-PARAMETERS
sequence_length = 28
input_size = 4      # (team_id, player_id, x, y)
hidden_size = 128
num_layers = 1
output_size = 2     # the (x, y) I want to predict
batch_size = 1      # number of sequences I want to process in parallel
num_epochs = 1      # train on the data 1 time
learning_rate = 0.01

model = RNN(input_size, hidden_size, num_layers, output_size).to(device)

#Loss, optimizer
criterion = nn.CrossEntropyLoss()  # not sure this is the right loss for positions, see my first question below
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

Here I posted a snippet of my code because some of the questions are related to issues that can be seen in the code.
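
For reference, this is the shape flow I am expecting when I call the model (just to double check my own understanding; a fake batch using the numbers from the hyper-parameters above):

x = torch.randn(batch_size, sequence_length, input_size).to(device)  # fake batch, shape (1, 28, 4)
out = model(x)      # should come out as (1, 2): the predicted (x, y)
print(out.shape)    # torch.Size([1, 2])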

  • Do I have to specify a number of classes for the output? As I said before, I am talking about positions, so I don't think classes really make sense here.

  • Should I create an init_hidden_layer? If so, why?

  • Which would be the best parameters to initialize my model with? Some of them, like sequence_length or the number of layers, seem a bit arbitrary.

  • I do have a function that returns Variables with the input/output for training and testing, but how am I supposed to use them in my model? (I put a rough sketch of how I was planning to call it at the very end of this post.)

    import pickle
    import numpy as np
    import torch
    from torch.autograd import Variable

    def grouplen(sequence, chunk_size):
        return list(zip(*[iter(sequence)] * chunk_size))

    def load_data_sets():
        filename = "train.p"
        train_list = pickle.load(open(filename, "rb"))
        train_data = []
        train_position = []
        for event in train_list:
            for moment in event.moments:
                for player in moment.players:
                    train_data.append(player.get_info())
                    train_position.append(player.x)
                    train_position.append(player.y)

        # train_data has (team, id, x, y) for every player for every moment
        train_data = grouplen(train_data, 4)
        torch.set_printoptions(precision=8)
        train_data = torch.tensor(np.array(train_data))
        train_data = Variable(train_data)
        # train_position has every player's position for every event
        train_position = grouplen(train_position, 2)
        train_position = torch.tensor(np.array(train_position))
        train_position = Variable(train_position)

        data = pickle.load(open("test_data.p", "rb"))
        # collect the test data across all events (the lists have to be created
        # outside the loop, otherwise only the last event is kept)
        test_data = []
        test_position = []
        for j in range(151):
            for i in range(6):
                for obj in vars(data[j][i])["players"]:
                    test_data.append(obj.get_info())
                    test_position.append(obj.x)
                    test_position.append(obj.y)

        # test_data has (team, id, x, y) for every player for every moment, with (-1, -1) for the hidden player
        test_data = grouplen(test_data, 4)
        test_data = torch.tensor(np.array(test_data))
        test_data = Variable(test_data)

        # test_position has (x, y) for every player for every moment, with (-1, -1) for the hidden player
        test_position = grouplen(test_position, 2)
        test_position = torch.tensor(np.array(test_position))
        test_position = Variable(test_position)

        return train_data, train_position, test_data, test_position
    
  • Is there something you would change? Thanks in advance; as you can see, I am really new to this topic, but I really want to learn a lot. :smile:
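
In case it helps, this is roughly how I was planning to use the tensors returned by load_data_sets() in a training loop. It is only a sketch: I am not sure how to cut the data into sequences, so here I just pretend train_data already has shape (num_sequences, seq_len, 4) and train_position has shape (num_sequences, 2), and the MSELoss is only my guess for a regression loss (that is part of my first question).

train_data, train_position, test_data, test_position = load_data_sets()

regression_loss = nn.MSELoss()  # my guess instead of CrossEntropyLoss, since the targets are coordinates

for epoch in range(num_epochs):
    for i in range(0, train_data.size(0), batch_size):
        x = train_data[i:i + batch_size].float().to(device)      # (batch, seq_len, 4), assuming pre-cut sequences
        y = train_position[i:i + batch_size].float().to(device)  # (batch, 2)

        out = model(x)                  # (batch, 2) predicted position
        loss = regression_loss(out, y)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()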