| Topic | Replies | Views | Activity |
| --- | --- | --- | --- |
| Flex_attention returning logits | 0 | 113 | December 4, 2024 |
| Unable to import torchtext (from torchtext.datasets import IMDB from torchtext.vocab import vocab) | 4 | 3436 | December 1, 2024 |
| How does one set the pad token correctly (not to eos) during fine-tuning to avoid model not predicting EOS? | 0 | 1182 | November 29, 2024 |
| How to compute the Validation loss | 2 | 79 | November 24, 2024 |
| Computation of nn.Linear and nn.Embedding | 1 | 261 | November 22, 2024 |
| Log softmax probabilities all equal in rnn decoder because pointer network scores are all < -90.0 | 0 | 159 | November 19, 2024 |
| How to correct TypeError: zip argument #1 must support iteration training in multiple GPU | 6 | 1173 | November 13, 2024 |
| Training starting again in sampling code | 3 | 77 | November 9, 2024 |
| AutoModelForCausalLM dataset process | 1 | 359 | November 9, 2024 |
| Can someone explain the benefits of Batches? | 2 | 359 | November 8, 2024 |
| Teacher forcing ratio | 0 | 316 | November 8, 2024 |
| Search in documents | 2 | 203 | November 7, 2024 |
| Torch using two GPUs with NVLink | 8 | 872 | November 5, 2024 |
| Could not get the file at http://www.quest.dcs.shef.ac.uk/wmt16_files_mmt/training.tar.gz. [RequestException] None | 6 | 2178 | October 29, 2024 |
| Regarding Scaled Dot Product Attention | 4 | 308 | October 25, 2024 |
| Memory Leak with a simple code | 3 | 110 | October 22, 2024 |
| Build Auto Tagging System | 7 | 367 | October 22, 2024 |
| Why transformer model is predicting only one random word repetitively in every iteration | 1 | 126 | October 19, 2024 |
| LogSoftmax vs Softmax | 26 | 57128 | October 15, 2024 |
| Why transformer model is behaving like this? | 1 | 86 | October 14, 2024 |
| Variable length time series data | 1 | 185 | October 12, 2024 |
| I want to eliminate the accumulation of memory usage during the learning loop | 0 | 43 | October 7, 2024 |
| The forward function of a multi-layer Elman RNN from tutorial has two errors | 0 | 33 | October 1, 2024 |
| Hi everyone, I'm new in nlp, I'm trying to build a machine translation model using BERT and I'm having trouble training the model, my predicted tokens all return the id of the token `<eos>` ( 3) in the first epoch. How do I handle this. Note: I used label s | 0 | 43 | September 29, 2024 |
| Transformer example: Position encoding function works only for even d_model? | 4 | 2808 | September 25, 2024 |
| Is the nn.Transformer package missing nn.Generate | 0 | 126 | September 23, 2024 |
| Flex Attention Extremely Slow | 1 | 537 | September 20, 2024 |
| How tokens per second calculated for LLM training | 0 | 58 | September 18, 2024 |
| Drop row from tensor in cuda | 3 | 241 | September 14, 2024 |
| Unhashable list while training sbert | 0 | 124 | September 14, 2024 |