Using an LSTM for sine wave prediction

Hello,
I am pretty new to deep learning and I am hoping to predict sine waves using LSTMs in PyTorch.
My data consists of x values increasing with a constant interval in the first column and the corresponding sin(x) values in the second (target) column.
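
For reference, the data is generated roughly like this (the step size and range here are just placeholders, my real values differ):

import numpy as np

x = np.arange(0, 100, 0.1)               # evenly spaced x values (step and range are placeholders)
data = np.stack([x, np.sin(x)], axis=1)  # column 0: x, column 1: sin(x) target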

Key parts in my code are:
Config

input_dim  = 1           # only one x value
output_dim = 1           # only one sin(x) value
hidden_dim = 50
learning_rate = 0.001
criterion     = torch.nn.MSELoss()

Data Set Adjustment

def data_adjustment(data, seq_length, input_dim, output_dim):
    x = []
    y = []
    i = 0
    Data_input  = data[:, 0:input_dim]
    Data_output = data[:, input_dim].reshape(-1, 1)   # adjusting dimensions
    while i < (len(data) - seq_length - 1):
        _x = Data_input[i:(i + seq_length), :].reshape(seq_length, input_dim)
        _y = np.reshape(Data_output[i + seq_length, :], (output_dim, -1))  # adjusting dimensions
        x.append(_x)
        y.append(_y)
        i = i + 1
    return np.array(x), np.array(y)
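
For example, with the placeholder data above and a sequence length of 10 (my real seq_dim may differ), the shapes come out like this:

seq_dim = 10  # placeholder sequence length
features, targets = data_adjustment(data, seq_dim, input_dim, output_dim)
print(features.shape)  # (989, 10, 1) -> (num_samples, seq_dim, input_dim)
print(targets.shape)   # (989, 1, 1)  -> (num_samples, output_dim, 1)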

Preparing Data Set for DataLoader

class FeatureDataset(Dataset):
    def __init__(self,Data1,Data2):
        self.X_train = torch.tensor(Data1,dtype = torch.float32)
        self.Y_train = torch.tensor(Data2,dtype = torch.float32)        
    def __len__(self):
        return len(self.Y_train)   
    def __getitem__(self,idx):
        return self.X_train[idx],self.Y_train[idx]

Creating Dataloader

Training_feature, Training_output = data_adjustment(Training_Set, seq_dim, input_dim, output_dim)
Training_Set     = FeatureDataset(Training_feature, Training_output)
train_dataloader = torch.utils.data.DataLoader(Training_Set, batch_size=batch_size, shuffle=True)
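
When I grab one batch from the loader, the shapes look like this (assuming batch_size = 32 and seq_dim = 10 as examples):

xb, yb = next(iter(train_dataloader))
print(xb.shape)  # torch.Size([32, 10, 1]) -> (batch_size, seq_dim, input_dim)
print(yb.shape)  # torch.Size([32, 1, 1])  -> (batch_size, output_dim, 1)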

Model

class LSTMsine(nn.Module):
    def __init__(self, hidden_dim, input_dim, output_dim, seq_dim):
        super(LSTMsine, self).__init__()
        self.hidden_dim = hidden_dim
        self.input_dim  = input_dim
        self.output_dim = output_dim
        self.seq_dim    = seq_dim

        self.lstm = nn.LSTM(self.input_dim, self.hidden_dim, batch_first=True)
        self.fc1  = nn.Linear(self.hidden_dim, 1)

    def forward(self, x):
        # I initialized the hidden state explicitly here at first as well.
        h_t, (_, _) = self.lstm(x)
        output = self.fc1(h_t)
        # Take only the last output of the sequence (sequence-to-one), reshaped to match the target.
        output = output[:, -1, :].reshape(-1, self.output_dim, self.output_dim)
        return output
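
A quick sanity check of the forward pass gives the expected target shape (batch size 32 is just an example):

model = LSTMsine(hidden_dim, input_dim, output_dim, seq_dim).to(device)
dummy = torch.randn(32, seq_dim, input_dim, device=device)  # (batch, seq, features)
print(model(dummy).shape)  # torch.Size([32, 1, 1]), same shape as the targets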

Training Function

def train_one_epoch(Model, dataloader, optimizer, criterion):
    Losses = []

    for parameters, potential in dataloader:
        parameters, potential = parameters.to(device), potential.to(device)
        output = Model(parameters)
        optimizer.zero_grad()
        loss = criterion(output, potential)
        loss.backward()
        optimizer.step()
        Losses.append(loss.item())             # keep every batch loss
    loss_forepoch = sum(Losses) / len(Losses)  # average loss over the epoch
    return loss_forepoch                       # return after the whole epoch, not inside the loop
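
I call this roughly as follows; the optimizer choice and the number of epochs shown here are only examples, not necessarily what I ended up with:

model     = LSTMsine(hidden_dim, input_dim, output_dim, seq_dim).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

for epoch in range(200):  # placeholder number of epochs
    epoch_loss = train_one_epoch(model, train_dataloader, optimizer, criterion)
    print(f"epoch {epoch}: training loss {epoch_loss:.6f}")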

In the end, the model is not learning anything: both the training loss and the validation loss just fluctuate up and down. I have also tried adjusting every hyperparameter.

  1. Additionally, in my example the batch size only determines how many sequences are moved to the GPU at a time, but I couldn't grasp its effect on performance. Does the system backpropagate with the average loss over a batch?
  2. Is it logical to standardize the test set with just the mean and std, i.e. (test - mean) / std, and to save these values for later? The saved mean and std would then be reused repeatedly to scale the data and, at the end, to scale the outputs back (something like the sketch below).
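
What I have in mind is something like this, where `values` and `model_outputs` are placeholder names and the statistics are computed once and saved:

mean = values.mean()   # statistics computed once and saved for later
std  = values.std()

scaled = (values - mean) / std                  # scale the data before it goes into the model
rescaled_outputs = model_outputs * std + mean   # scale the model outputs back at the end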

If anything important is missing, I can of course update the post.
Thanks.