PyTorch Forecasting: IndexError from the validation dataloader during lr_find

Hi,

I am quite new to PyTorch Forecasting and I am trying to build a TimeSeriesDataSet of the number of infections per country. These are the dtypes of my dataframe:

dateRep                            object
day                              category
month                            category
year                             category
cases                               int64
deaths                              int64
countriesAndTerritories            object
geoId                              object
countryterritoryCode               object
popData2020                         int64
continentExp                       object
NEWTIME                    datetime64[ns]
TIME_IDX                            int64
weekday                          category
dtype: object
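
For context, the helper columns above were derived from the raw ECDC-style export roughly as follows. This is only a sketch from memory (the dataframe name df and the exact date parsing are assumptions):

import pandas as pd

# dateRep in the export is a day-first date string
df["NEWTIME"] = pd.to_datetime(df["dateRep"], dayfirst=True)

# integer time index counting days from the first date in the data
df["TIME_IDX"] = (df["NEWTIME"] - df["NEWTIME"].min()).dt.days

# calendar features used as known categoricals
df["weekday"] = df["NEWTIME"].dt.day_name().astype("category")
for col in ["day", "month", "year"]:
    df[col] = df[col].astype(str).astype("category")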

I made a TimeSeriesDataSet out of it; these are its parameters (a rough construction sketch follows the listing):

{'time_idx': 'TIME_IDX',
 'target': ['cases'],
 'group_ids': ['countriesAndTerritories', 'geoId', 'countryterritoryCode'],
 'weight': None,
 'max_encoder_length': 14,
 'min_encoder_length': 7,
 'min_prediction_idx': 0,
 'min_prediction_length': 1,
 'max_prediction_length': 7,
 'static_categoricals': ['countriesAndTerritories',
  'geoId',
  'countryterritoryCode'],
 'static_reals': ['popData2020', 'encoder_length'],
 'time_varying_known_categoricals': ['day', 'month', 'year', 'weekday'],
 'time_varying_known_reals': ['TIME_IDX'],
 'time_varying_unknown_categoricals': [],
 'time_varying_unknown_reals': ['cases'],
 'variable_groups': {},
 'constant_fill_strategy': {},
 'allow_missing_timesteps': True,
 'lags': {},
 'add_relative_time_idx': False,
 'add_target_scales': False,
 'add_encoder_length': True,
 'target_normalizer': MultiNormalizer(normalizers=[NaNLabelEncoder()]),
 'categorical_encoders': {'data': NaNLabelEncoder(add_nan=True),
  '__group_id__countriesAndTerritories': NaNLabelEncoder(),
  '__group_id__geoId': NaNLabelEncoder(),
  '__group_id__countryterritoryCode': NaNLabelEncoder(),
  'countriesAndTerritories': NaNLabelEncoder(),
  'geoId': NaNLabelEncoder(),
  'countryterritoryCode': NaNLabelEncoder(),
  'day': NaNLabelEncoder(),
  'month': NaNLabelEncoder(),
  'year': NaNLabelEncoder(),
  'weekday': NaNLabelEncoder()},
 'scalers': {'popData2020': StandardScaler(),
  'encoder_length': StandardScaler(),
  'TIME_IDX': StandardScaler()},
 'randomize_length': None,
 'predict_mode': False}
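
For reference, a constructor call matching these parameters would look roughly like the one below. This is a minimal sketch reconstructed from the listing (the dataframe name df and the batch size are assumptions, and target_normalizer is left at its default), not my exact code:

from pytorch_forecasting import TimeSeriesDataSet

training = TimeSeriesDataSet(
    df,
    time_idx="TIME_IDX",
    target=["cases"],
    group_ids=["countriesAndTerritories", "geoId", "countryterritoryCode"],
    min_encoder_length=7,
    max_encoder_length=14,
    min_prediction_length=1,
    max_prediction_length=7,
    static_categoricals=["countriesAndTerritories", "geoId", "countryterritoryCode"],
    static_reals=["popData2020"],
    time_varying_known_categoricals=["day", "month", "year", "weekday"],
    time_varying_known_reals=["TIME_IDX"],
    time_varying_unknown_reals=["cases"],
    allow_missing_timesteps=True,
    add_encoder_length=True,
)

train_dataloader = training.to_dataloader(train=True, batch_size=64, num_workers=0)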

Everything works fine until I try to find the optimal learning rate. At that point I get an IndexError from the validation dataloader.

# find optimal learning rate
res = trainer.tuner.lr_find(
    tft,
    train_dataloader=train_dataloader,
    val_dataloaders=val_dataloader,
    max_lr=10.0,
    min_lr=1e-6,
)

print(f"suggested learning rate: {res.suggestion()}")
fig = res.plot(show=True, suggest=True)
fig.show()
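
As a side note, the deprecation warning in the output below complains about the singular keyword, so presumably the call should use the plural form. As far as I can tell this is unrelated to the IndexError:

# same call, but with the plural keyword that Lightning >= 1.4 expects
res = trainer.tuner.lr_find(
    tft,
    train_dataloaders=train_dataloader,
    val_dataloaders=val_dataloader,
    max_lr=10.0,
    min_lr=1e-6,
)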

Error:

/Users/quirly/opt/anaconda3/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py:817: LightningDeprecationWarning: `trainer.tune(train_dataloader)` is deprecated in v1.4 and will be removed in v1.6. Use `trainer.tune(train_dataloaders)` instead. HINT: added 's'
  rank_zero_deprecation(

   | Name                               | Type                            | Params
----------------------------------------------------------------------------------------
0  | loss                               | QuantileLoss                    | 0     
1  | logging_metrics                    | ModuleList                      | 0     
2  | input_embeddings                   | MultiEmbedding                  | 1.4 K 
3  | prescalers                         | ModuleDict                      | 64    
4  | static_variable_selection          | VariableSelectionNetwork        | 1.6 K 
5  | encoder_variable_selection         | VariableSelectionNetwork        | 1.7 K 
6  | decoder_variable_selection         | VariableSelectionNetwork        | 1.1 K 
7  | static_context_variable_selection  | GatedResidualNetwork            | 1.1 K 
8  | static_context_initial_hidden_lstm | GatedResidualNetwork            | 1.1 K 
9  | static_context_initial_cell_lstm   | GatedResidualNetwork            | 1.1 K 
10 | static_context_enrichment          | GatedResidualNetwork            | 1.1 K 
11 | lstm_encoder                       | LSTM                            | 2.2 K 
12 | lstm_decoder                       | LSTM                            | 2.2 K 
13 | post_lstm_gate_encoder             | GatedLinearUnit                 | 544   
14 | post_lstm_add_norm_encoder         | AddNorm                         | 32    
15 | static_enrichment                  | GatedResidualNetwork            | 1.4 K 
16 | multihead_attn                     | InterpretableMultiHeadAttention | 1.1 K 
17 | post_attn_gate_norm                | GateAddNorm                     | 576   
18 | pos_wise_ff                        | GatedResidualNetwork            | 1.1 K 
19 | pre_output_gate_norm               | GateAddNorm                     | 576   
20 | output_layer                       | Linear                          | 119   
----------------------------------------------------------------------------------------
20.1 K    Trainable params
0         Non-trainable params
20.1 K    Total params
0.080     Total estimated model params size (MB)
/Users/quirly/opt/anaconda3/lib/python3.8/site-packages/pytorch_lightning/trainer/data_loading.py:105: UserWarning: The dataloader, val dataloader 0, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` (try 16 which is the number of cpus on this machine) in the `DataLoader` init to improve performance.
  rank_zero_warn(
---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
/var/folders/vd/h0c811n10fq4tk4gw79gg5jw0000gn/T/ipykernel_2661/1514959711.py in <module>
      1 # find optimal learning rate
----> 2 res = trainer.tuner.lr_find(
      3     tft,
      4     train_dataloader=train_dataloader,
      5     val_dataloaders=val_dataloader,

~/opt/anaconda3/lib/python3.8/site-packages/pytorch_lightning/tuner/tuning.py in lr_find(self, model, train_dataloaders, val_dataloaders, datamodule, min_lr, max_lr, num_training, mode, early_stop_threshold, update_attr, train_dataloader)
    186         """
    187         self.trainer.auto_lr_find = True
--> 188         result = self.trainer.tune(
    189             model,
    190             train_dataloaders=train_dataloaders,

~/opt/anaconda3/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py in tune(self, model, train_dataloaders, val_dataloaders, datamodule, scale_batch_size_kwargs, lr_find_kwargs, train_dataloader)
    835         )
    836 
--> 837         result = self.tuner._tune(model, scale_batch_size_kwargs=scale_batch_size_kwargs, lr_find_kwargs=lr_find_kwargs)
    838 
    839         assert self.state.stopped

~/opt/anaconda3/lib/python3.8/site-packages/pytorch_lightning/tuner/tuning.py in _tune(self, model, scale_batch_size_kwargs, lr_find_kwargs)
     51         if self.trainer.auto_lr_find:
     52             lr_find_kwargs.setdefault("update_attr", True)
---> 53             result["lr_find"] = lr_find(self.trainer, model, **lr_find_kwargs)
     54 
     55         self.trainer.state.status = TrainerStatus.FINISHED

~/opt/anaconda3/lib/python3.8/site-packages/pytorch_lightning/tuner/lr_finder.py in lr_find(trainer, model, min_lr, max_lr, num_training, mode, early_stop_threshold, update_attr)
    246 
    247     # Fit, lr & loss logged in callback
--> 248     trainer.tuner._run(model)
    249 
    250     # Prompt if we stopped early

~/opt/anaconda3/lib/python3.8/site-packages/pytorch_lightning/tuner/tuning.py in _run(self, *args, **kwargs)
     61         self.trainer.state.status = TrainerStatus.RUNNING  # last `_run` call might have set it to `FINISHED`
     62         self.trainer.training = True
---> 63         self.trainer._run(*args, **kwargs)
     64         self.trainer.tuning = True
     65 

~/opt/anaconda3/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py in _run(self, model)
    920 
    921         # dispatch `start_training` or `start_evaluating` or `start_predicting`
--> 922         self._dispatch()
    923 
    924         # plugin will finalized fitting (e.g. ddp_spawn will load trained model)

~/opt/anaconda3/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py in _dispatch(self)
    988             self.accelerator.start_predicting(self)
    989         else:
--> 990             self.accelerator.start_training(self)
    991 
    992     def run_stage(self):

~/opt/anaconda3/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py in start_training(self, trainer)
     90 
     91     def start_training(self, trainer: "pl.Trainer") -> None:
---> 92         self.training_type_plugin.start_training(trainer)
     93 
     94     def start_evaluating(self, trainer: "pl.Trainer") -> None:

~/opt/anaconda3/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py in start_training(self, trainer)
    159     def start_training(self, trainer: "pl.Trainer") -> None:
    160         # double dispatch to initiate the training loop
--> 161         self._results = trainer.run_stage()
    162 
    163     def start_evaluating(self, trainer: "pl.Trainer") -> None:

~/opt/anaconda3/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py in run_stage(self)
    998         if self.predicting:
    999             return self._run_predict()
-> 1000         return self._run_train()
   1001 
   1002     def _pre_training_routine(self):

~/opt/anaconda3/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py in _run_train(self)
   1033             self.progress_bar_callback.disable()
   1034 
-> 1035         self._run_sanity_check(self.lightning_module)
   1036 
   1037         # enable train mode

~/opt/anaconda3/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py in _run_sanity_check(self, ref_model)
   1120             # run eval step
   1121             with torch.no_grad():
-> 1122                 self._evaluation_loop.run()
   1123 
   1124             self.on_sanity_check_end()

~/opt/anaconda3/lib/python3.8/site-packages/pytorch_lightning/loops/base.py in run(self, *args, **kwargs)
    109             try:
    110                 self.on_advance_start(*args, **kwargs)
--> 111                 self.advance(*args, **kwargs)
    112                 self.on_advance_end()
    113                 self.iteration_count += 1

~/opt/anaconda3/lib/python3.8/site-packages/pytorch_lightning/loops/dataloader/evaluation_loop.py in advance(self, *args, **kwargs)
    108         dl_max_batches = self._max_batches[self.current_dataloader_idx]
    109 
--> 110         dl_outputs = self.epoch_loop.run(
    111             dataloader_iter, self.current_dataloader_idx, dl_max_batches, self.num_dataloaders
    112         )

~/opt/anaconda3/lib/python3.8/site-packages/pytorch_lightning/loops/base.py in run(self, *args, **kwargs)
    109             try:
    110                 self.on_advance_start(*args, **kwargs)
--> 111                 self.advance(*args, **kwargs)
    112                 self.on_advance_end()
    113                 self.iteration_count += 1

~/opt/anaconda3/lib/python3.8/site-packages/pytorch_lightning/loops/epoch/evaluation_epoch_loop.py in advance(self, dataloader_iter, dataloader_idx, dl_max_batches, num_dataloaders)
    109         # lightning module methods
    110         with self.trainer.profiler.profile("evaluation_step_and_end"):
--> 111             output = self.evaluation_step(batch, batch_idx, dataloader_idx)
    112             output = self.evaluation_step_end(output)
    113 

~/opt/anaconda3/lib/python3.8/site-packages/pytorch_lightning/loops/epoch/evaluation_epoch_loop.py in evaluation_step(self, batch, batch_idx, dataloader_idx)
    156             self.trainer.lightning_module._current_fx_name = "validation_step"
    157             with self.trainer.profiler.profile("validation_step"):
--> 158                 output = self.trainer.accelerator.validation_step(step_kwargs)
    159 
    160         return output

~/opt/anaconda3/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py in validation_step(self, step_kwargs)
    209         """
    210         with self.precision_plugin.val_step_context(), self.training_type_plugin.val_step_context():
--> 211             return self.training_type_plugin.validation_step(*step_kwargs.values())
    212 
    213     def test_step(self, step_kwargs: Dict[str, Union[Any, int]]) -> Optional[STEP_OUTPUT]:

~/opt/anaconda3/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py in validation_step(self, *args, **kwargs)
    176 
    177     def validation_step(self, *args, **kwargs):
--> 178         return self.model.validation_step(*args, **kwargs)
    179 
    180     def test_step(self, *args, **kwargs):

~/opt/anaconda3/lib/python3.8/site-packages/pytorch_forecasting/models/base_model.py in validation_step(self, batch, batch_idx)
    368     def validation_step(self, batch, batch_idx):
    369         x, y = batch
--> 370         log, out = self.step(x, y, batch_idx)
    371         log.update(self.create_log(x, y, out, batch_idx))
    372         return log

~/opt/anaconda3/lib/python3.8/site-packages/pytorch_forecasting/models/base_model.py in step(self, x, y, batch_idx, **kwargs)
    490             loss = loss * (1 + monotinicity_loss)
    491         else:
--> 492             out = self(x, **kwargs)
    493 
    494             # calculate loss

~/opt/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1100         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1101                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1102             return forward_call(*input, **kwargs)
   1103         # Do not call functions when jit is used
   1104         full_backward_hooks, non_full_backward_hooks = [], []

~/opt/anaconda3/lib/python3.8/site-packages/pytorch_forecasting/models/temporal_fusion_transformer/__init__.py in forward(self, x)
    400         timesteps = x_cont.size(1)  # encode + decode length
    401         max_encoder_length = int(encoder_lengths.max())
--> 402         input_vectors = self.input_embeddings(x_cat)
    403         input_vectors.update(
    404             {

~/opt/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1100         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1101                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1102             return forward_call(*input, **kwargs)
   1103         # Do not call functions when jit is used
   1104         full_backward_hooks, non_full_backward_hooks = [], []

~/opt/anaconda3/lib/python3.8/site-packages/pytorch_forecasting/models/nn/embeddings.py in forward(self, x)
     95                 )
     96             else:
---> 97                 input_vectors[name] = emb(x[..., self.x_categoricals.index(name)])
     98         return input_vectors

~/opt/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1100         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1101                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1102             return forward_call(*input, **kwargs)
   1103         # Do not call functions when jit is used
   1104         full_backward_hooks, non_full_backward_hooks = [], []

~/opt/anaconda3/lib/python3.8/site-packages/torch/nn/modules/sparse.py in forward(self, input)
    156 
    157     def forward(self, input: Tensor) -> Tensor:
--> 158         return F.embedding(
    159             input, self.weight, self.padding_idx, self.max_norm,
    160             self.norm_type, self.scale_grad_by_freq, self.sparse)

~/opt/anaconda3/lib/python3.8/site-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
   2042         # remove once script supports set_grad_enabled
   2043         _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 2044     return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
   2045 
   2046 

IndexError: index out of range in self

Something seems to be wrong with the val_dataloader, perhaps with its size or with the categorical indices it produces. Can you help?
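
One thing I am not sure about is whether my validation set reuses the categorical encoders fitted on the training set; if it does not, an unseen category could map to an embedding index that is out of range, which would match the IndexError above. Is something like the following (the from_dataset pattern from the PyTorch Forecasting tutorials, with training and df referring to the sketch above) the right way to build the validation dataloader?

# derive the validation set from the training set so that categorical encoders,
# scalers and the target normalizer are reused instead of being refitted
validation = TimeSeriesDataSet.from_dataset(
    training,                 # training TimeSeriesDataSet (parameters listed above)
    df,                       # full dataframe including the validation period
    predict=True,             # predict the last max_prediction_length points of each series
    stop_randomization=True,  # no randomized encoder lengths during validation
)
val_dataloader = validation.to_dataloader(train=False, batch_size=64, num_workers=0)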

Thank you very much in advance!!