Topic | Replies | Views | Activity
About the nlp category | 2 | 3119 | November 30, 2022
Slow attention when using kvCache | 1 | 6 | February 21, 2025
Why facing "CUDA error: device-side assert triggered" while training LSTM model? | 5 | 12 | February 14, 2025
LSTM for classification (fraud detection) over several lines of text | 0 | 75 | February 7, 2025
Importing torchtext | 1 | 87 | February 3, 2025
Left / right side padding | 0 | 10 | February 1, 2025
Feed a model with cumulative sum of sampled classified sequences | 0 | 16 | January 30, 2025
TransformerDecoder masks shape error using model.eval() | 3 | 117 | January 27, 2025
What is the right way to structure `input` and `label` while fine-tuning decoder only model | 0 | 13 | January 27, 2025
combining TEXT.build_vocab with BERT Embedding | 0 | 29 | January 27, 2025
Multi-node, Multi-gpu training | 0 | 51 | January 24, 2025
Why my Traing accuracy remains constant | 2 | 90 | January 20, 2025
My Accuracy remains constant | 1 | 23 | January 18, 2025
Getting NaN training and validation loss when training BERT model on pytorch | 2 | 87 | January 17, 2025
How to properly apply causal mask for next char prediction in MLP | 1 | 38 | January 10, 2025
Documents as parametric memory | 0 | 45 | January 11, 2025
Need help with Recurrent lstms | 0 | 14 | January 10, 2025
How to Implement Flash Attention in a Pre-Trained BERT Model on custom dataset? | 0 | 63 | January 8, 2025
Embedding a float into a vector for transformer models | 1 | 60 | January 7, 2025
Building a Model for Multi-Output Embedding Generation: Seeking Advice and Insights | 0 | 26 | January 4, 2025
Is the code correct for character level generation in lstm? | 12 | 1510 | December 27, 2024
What's a good replacement for torchtext? | 0 | 173 | December 18, 2024
Correct way to batch custom masks in SDPA | 0 | 35 | December 12, 2024
Weight Decay for tied weights (embedding and linear layers) | 1 | 917 | December 10, 2024
RuntimeError: CUDA error: device-side assert triggered CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect | 10 | 108167 | December 7, 2024
Model performance decrease to nearly 1/4 when loading a checkpoint, but works fine for "simpler" data and in-script | 5 | 1606 | December 6, 2024
Help Needed: Transformer Model Repeating Last Token During Inference | 3 | 197 | December 5, 2024
Understanding logits in GPT2 | 0 | 85 | December 5, 2024
Flex_attention returning logits | 0 | 55 | December 4, 2024
Unable to import torchtext (from torchtext.datasets import IMDB from torchtext.vocab import vocab) | 4 | 1648 | December 1, 2024