When I load pretrained vectors using nn.Embedding.from_pretrained(), the model's training accuracy doesn't change from epoch to epoch.
But when I initialize the embeddings randomly, training accuracy does change.
Here is my code (the attached file was truncated by the forum):
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split, StratifiedKFold
from sklearn.metrics import accuracy_score, f1_score
Here is the data:
Can anyone help me solve this problem? Any help is very much appreciated.
Did you try to set freeze=False in from_pretrained()? It defaults to True, which freezes the embedding weights so they receive no gradient updates.
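A minimal sketch of the difference (the 5×3 weight matrix here is made up for illustration):

```python
import torch
import torch.nn as nn

# Hypothetical pretrained matrix: 5 words, 3-dimensional vectors.
weights = torch.randn(5, 3)

frozen = nn.Embedding.from_pretrained(weights)               # freeze=True by default
trainable = nn.Embedding.from_pretrained(weights, freeze=False)

print(frozen.weight.requires_grad)     # False: no gradients flow to these weights
print(trainable.weight.requires_grad)  # True: the embeddings are updated in training
```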
In case you would like to keep the embedding frozen, could you try to overfit a small data sample (e.g. just 10 samples)? If your model is not able to learn even this small data sample, something else might be wrong with your code.
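A sketch of that overfitting sanity check; the data, model, and hyperparameters below are made up stand-ins, not the original code:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Made-up tiny sample: 10 examples, 8 features, 2 classes.
X = torch.randn(10, 8)
y = torch.randint(0, 2, (10,))

# Small MLP as a stand-in for the real model; any model with
# enough capacity should be able to memorize 10 samples.
model = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for step in range(300):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

# If the loss does not drop close to zero on 10 samples,
# something else in the training code is likely broken.
print(loss.item())
```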
The problem occurs when I use pretrained word embeddings. If I initialize the embeddings randomly using nn.Embedding(vocab_size, embedding_dim), the LSTM trains properly.
I found the issue: I didn't shuffle the data, which is why it was not training. Thank you
@ptrblck and @SimonW for your valuable time.
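For anyone landing here later: shuffling is a one-flag change when using DataLoader (the dataset below is a toy stand-in):

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

# Toy stand-in dataset: 10 samples, 1 feature each.
X = torch.arange(10, dtype=torch.float32).unsqueeze(1)
y = torch.arange(10)

# shuffle=True re-shuffles the sample order at the start of
# every epoch, so consecutive epochs see different orderings.
loader = DataLoader(TensorDataset(X, y), batch_size=4, shuffle=True)

for xb, yb in loader:
    print(yb)
```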