Title: RTX 5080 (Blackwell) Training Stagnation vs RTX A4000 Normal Learning - Same Code, Different Outcomes

Identical FastReID training code produces drastically different learning behaviors on different GPU architectures:

  • RTX A4000 (Ampere): Normal training progression - Loss decreases from ~317 to ~17, metrics improve from 6% to 79% rank-1 accuracy over 40 epochs.

  • RTX 5080 (Blackwell): Training appears “stuck” - Loss remains nearly constant (~317-337), metrics show minimal improvement (~53-54% rank-1).

RTX A4000 (Working):

  • Loss trajectory: 442.6 → 17.36

  • Metrics: Rank-1 from 7.18% → 86.79%, mAP from 5.36% → 70.37%

  • Clear learning progression with consistent loss reduction

RTX 5080 (Stuck):

  • Loss trajectory: ~317 → ~338 (no meaningful decrease)

  • Metrics: Rank-1 stays at ~53-54%, mAP stays at ~26-27%

  • Training appears frozen despite correct optimizer steps
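A quick way to tell "no gradient flow" apart from "bad gradients" is to snapshot the parameters before one optimizer step and compare them afterwards. This is a generic sketch (a toy linear model standing in for the real backbone), not FastReID code:

```python
# Hypothetical diagnostic: confirm that the optimizer actually changes the
# weights when the loss curve looks flat.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(8, 4)                      # stand-in for the real backbone
opt = torch.optim.SGD(model.parameters(), lr=0.008)

x = torch.randn(16, 8)
y = torch.randint(0, 4, (16,))

# Snapshot every parameter before the step.
before = {n: p.detach().clone() for n, p in model.named_parameters()}

loss = nn.functional.cross_entropy(model(x), y)
opt.zero_grad()
loss.backward()

# NaN/inf gradients or exact zeros would point at a kernel-level problem.
for n, p in model.named_parameters():
    print(n, "grad norm:", p.grad.norm().item(),
          "finite:", torch.isfinite(p.grad).all().item())

opt.step()

# A non-zero update norm means the optimizer step is actually applied.
for n, p in model.named_parameters():
    print(n, "update norm:", (p.detach() - before[n]).norm().item())
```

Running the same check inside the real training loop (on a few named parameters of the ViT backbone) would show whether the 5080 run is truly frozen or merely not converging.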

System Configuration Differences

Component            RTX A4000 (Working)    RTX 5080 (Problematic)
GPU Architecture     Ampere                 Blackwell
PyTorch Version      1.7.1+cu110            2.6.0a0+df5bbc09d1.nv24.12
CUDA Version         11.0                   12.6

Why is the 5080 giving no results compared to the A4000 with the same FastReID setup?
Only the PyTorch version has changed.
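One architecture- and version-dependent setting worth ruling out (an assumption to test, not a confirmed diagnosis): the reduced-precision TF32 matmul/cuDNN defaults have changed across PyTorch 2.x releases, and newer architectures may select different kernels. Forcing full-precision FP32 matmuls before building the trainer is a cheap A/B test:

```python
# Cheap A/B experiment: disable TF32 and rerun training. If the loss then
# decreases on the 5080, a reduced-precision kernel path is the suspect.
import torch

# Inspect the current defaults first.
print("matmul TF32:", torch.backends.cuda.matmul.allow_tf32)
print("cudnn TF32: ", torch.backends.cudnn.allow_tf32)

# Force highest-precision float32 matmuls for the experiment.
torch.backends.cuda.matmul.allow_tf32 = False
torch.backends.cudnn.allow_tf32 = False
torch.set_float32_matmul_precision("highest")
```

These flags would need to be set at the top of tools/train_net.py, before the model and trainer are constructed.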

Update your PyTorch binaries to the latest stable or nightly release and rerun your code.

Thanks @ptrblck for the reply.

I have tried this on the latest docker image which supports the Blackwell architecture, but I see the same issue there.

The docker image is nvcr.io/nvidia/pytorch:26.01-py3.

Could you post a minimal and executable code snippet reproducing the issue, please?

I just cloned GitHub - JDAI-CV/fast-reid: SOTA Re-identification Methods and Toolbox

and trained the model on a custom dataset and the Market1501 dataset.

On the A4000 and 3070 Ti the training works fine, but with the 5060 and 5080 the training loss is nearly constant.

If you want, I can share the detailed training logs via a Google Drive link.

Hi @ptrblck

Any suggestion?

Sorry, but I would need a smaller reproduction code snippet than “And train the model with custom and marketnet dataset”. I.e., if PyTorch itself is causing the convergence issue, I would assume trying to overfit random data might also trigger the error, but that would depend on your inputs here.
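The overfitting check suggested above can be sketched in a few lines: a tiny classifier trained on one fixed random batch should drive the loss toward zero. If this converges on the A4000 but not on the 5080 (moving the tensors to CUDA), the problem is in the stack rather than in the FastReID data pipeline. All names here are illustrative, not FastReID code:

```python
# Minimal "overfit random data" repro: loss should drop sharply on a
# single fixed batch if the GPU/PyTorch stack is computing correctly.
import torch
import torch.nn as nn

torch.manual_seed(0)
device = "cuda" if torch.cuda.is_available() else "cpu"

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10)).to(device)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(64, 32, device=device)       # one fixed random batch
y = torch.randint(0, 10, (64,), device=device)

first = None
for step in range(200):
    loss = nn.functional.cross_entropy(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    if first is None:
        first = loss.item()

# The final loss should be far below the initial ~2.3 (ln 10).
print(f"loss: {first:.3f} -> {loss.item():.3f}")
```

Running this once per GPU (CUDA_VISIBLE_DEVICES=0 vs 1) isolates the hardware/software stack from the dataset and augmentation code.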

Okay, to reproduce the issue at your end I am providing the complete steps. Instead of the custom dataset, I reproduced this on open-source data as well.

Steps for reproducing the issue.

Pull the container image: sudo docker pull nvcr.io/nvidia/pytorch:26.01-py3
Then run the container: sudo docker run -v /home/smarg/Documents/ContainerData/OPT_DATA/fast-reid-master-5080/fast-reid-master:/home/appuser/data --name fastreid --net=host --ipc=host --gpus all -it nvcr.io/nvidia/pytorch:26.01-py3 bash

cd /home/appuser/data
git clone https://github.com/JDAI-CV/fast-reid.git

Run the training command: python3 tools/train_net.py --config-file ./configs/Market1501/bagtricks_vit.yml --num-gpus 1

Now you will see logs like the following:

root@smarg:/home/appuser/data# python3 tools/train_net.py --config-file ./configs/Market1501/bagtricks_vit.yml --num-gpus 1
Command Line Args: Namespace(config_file='./configs/Market1501/bagtricks_vit.yml', resume=False, eval_only=False, num_gpus=1, num_machines=1, machine_rank=0, dist_url='tcp://127.0.0.1:49152', opts=)
[02/03 13:53:16 fastreid]: Rank of current process: 0. World size: 1
[02/03 13:53:16 fastreid]: Environment info:


sys.platform linux
Python 3.12.3 (main, Nov 6 2025, 13:44:16) [GCC 13.3.0]
numpy 2.1.0
fastreid 1.3 @/home/appuser/data/fastreid
FASTREID_ENV_MODULE
PyTorch 2.10.0a0+a36e1d39eb.nv26.01.42222806 @/usr/local/lib/python3.12/dist-packages/torch
PyTorch debug build False
GPU available True
GPU 0 NVIDIA GeForce RTX 5080
GPU 1 NVIDIA GeForce RTX 3070 Ti
CUDA_HOME /usr/local/cuda
TORCH_CUDA_ARCH_LIST 7.5 8.0 8.6 9.0 10.0 12.0+PTX
Pillow 12.1.0
torchvision 0.25.0a0+6b56de1c.nv26.01.42222806 @/usr/local/lib/python3.12/dist-packages/torchvision
torchvision arch flags sm_100, sm_120, sm_75, sm_80, sm_86, sm_90


PyTorch built with:

  • GCC 11.2
  • C++ Version: 201703
  • Intel(R) oneAPI Math Kernel Library Version 2021.1-Product Build 20201104 for Intel(R) 64 architecture applications
  • Intel(R) MKL-DNN v3.7.1 (Git Hash N/A)
  • OpenMP 201511 (a.k.a. OpenMP 4.5)
  • LAPACK is enabled (usually provided by MKL)
  • NNPACK is enabled
  • CPU capability usage: AVX2
  • CUDA Runtime 13.1
  • NVCC architecture flags: -gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_90,code=sm_90;-gencode;arch=compute_100,code=sm_100;-gencode;arch=compute_120,code=sm_120;-gencode;arch=compute_120,code=compute_120
  • CuDNN 9.17.1
  • Magma 2.6.2
  • Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, COMMIT_SHA=Unknown, CUDA_VERSION=13.1, CUDNN_VERSION=9.17.1, CXX_COMPILER=/opt/rh/gcc-toolset-11/root/usr/bin/c++, CXX_FLAGS=-fno-gnu-unique -Werror=deprecated-declarations -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DLIBKINETO_NOXPUPTI=ON -DUSE_FBGEMM -DUSE_FBGEMM_GENAI -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -DC10_NODEPRECATED -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=range-loop-construct -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-unknown-pragmas -Wno-unused-parameter -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wsuggest-override -Wno-psabi -Wno-error=old-style-cast -faligned-new -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, TORCH_VERSION=2.10.0, USE_CUDA=ON, USE_CUDNN=ON, USE_CUSPARSELT=ON, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_GLOO=ON, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=ON, USE_NCCL=1, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, USE_ROCM_KERNEL_ASSERT=OFF, USE_XCCL=OFF, USE_XPU=OFF,

[02/03 13:53:16 fastreid]: Command line arguments: Namespace(config_file='./configs/Market1501/bagtricks_vit.yml', resume=False, eval_only=False, num_gpus=1, num_machines=1, machine_rank=0, dist_url='tcp://127.0.0.1:49152', opts=)
[02/03 13:53:16 fastreid]: Contents of args.config_file=./configs/Market1501/bagtricks_vit.yml:

MODEL:
META_ARCHITECTURE: Baseline
PIXEL_MEAN: [127.5, 127.5, 127.5]
PIXEL_STD: [127.5, 127.5, 127.5]

BACKBONE:
NAME: build_vit_backbone
DEPTH: base
FEAT_DIM: 768
PRETRAIN: False
PRETRAIN_PATH: /export/home/lxy/.cache/torch/checkpoints/jx_vit_base_p16_224-80ecf9dd.pth
STRIDE_SIZE: (16, 16)
DROP_PATH_RATIO: 0.1
DROP_RATIO: 0.0
ATT_DROP_RATE: 0.0

HEADS:
NAME: EmbeddingHead
NORM: BN
WITH_BNNECK: True
POOL_LAYER: Identity
NECK_FEAT: before
CLS_LAYER: Linear

LOSSES:
NAME: ("CrossEntropyLoss", "TripletLoss",)

CE:
  EPSILON: 0. # no smooth
  SCALE: 1.

TRI:
  MARGIN: 0.0
  HARD_MINING: True
  NORM_FEAT: False
  SCALE: 1.

INPUT:
SIZE_TRAIN: [ 256, 128 ]
SIZE_TEST: [ 256, 128 ]

REA:
ENABLED: True
PROB: 0.5

FLIP:
ENABLED: True

PADDING:
ENABLED: True

DATALOADER:
SAMPLER_TRAIN: NaiveIdentitySampler
NUM_INSTANCE: 4
NUM_WORKERS: 8

SOLVER:
AMP:
ENABLED: False
OPT: SGD
MAX_EPOCH: 120
BASE_LR: 0.008
WEIGHT_DECAY: 0.0001
IMS_PER_BATCH: 64

SCHED: CosineAnnealingLR
ETA_MIN_LR: 0.000016

WARMUP_FACTOR: 0.01
WARMUP_ITERS: 1000

CLIP_GRADIENTS:
ENABLED: True

CHECKPOINT_PERIOD: 30

TEST:
EVAL_PERIOD: 5
IMS_PER_BATCH: 128

CUDNN_BENCHMARK: True

DATASETS:
NAMES: ("Market1501",)
TESTS: ("Market1501",)

OUTPUT_DIR: logs/market1501/sbs_vit_base

[02/03 13:53:16 fastreid]: Running with full config:
CUDNN_BENCHMARK: True
DATALOADER:
NUM_INSTANCE: 4
NUM_WORKERS: 8
SAMPLER_TRAIN: NaiveIdentitySampler
SET_WEIGHT:
DATASETS:
COMBINEALL: False
NAMES: ('Market1501',)
TESTS: ('Market1501',)
INPUT:
AFFINE:
ENABLED: False
AUGMIX:
ENABLED: False
PROB: 0.0
AUTOAUG:
ENABLED: False
PROB: 0.0
CJ:
BRIGHTNESS: 0.15
CONTRAST: 0.15
ENABLED: False
HUE: 0.1
PROB: 0.5
SATURATION: 0.1
CROP:
ENABLED: False
RATIO: [0.75, 1.3333333333333333]
SCALE: [0.16, 1]
SIZE: [224, 224]
FLIP:
ENABLED: True
PROB: 0.5
PADDING:
ENABLED: True
MODE: constant
SIZE: 10
REA:
ENABLED: True
PROB: 0.5
VALUE: [123.675, 116.28, 103.53]
RPT:
ENABLED: False
PROB: 0.5
SIZE_TEST: [256, 128]
SIZE_TRAIN: [256, 128]
KD:
EMA:
ENABLED: False
MOMENTUM: 0.999
MODEL_CONFIG:
MODEL_WEIGHTS:
MODEL:
BACKBONE:
ATT_DROP_RATE: 0.0
DEPTH: base
DROP_PATH_RATIO: 0.1
DROP_RATIO: 0.0
FEAT_DIM: 768
LAST_STRIDE: 1
NAME: build_vit_backbone
NORM: BN
PRETRAIN: False
PRETRAIN_PATH: /export/home/lxy/.cache/torch/checkpoints/jx_vit_base_p16_224-80ecf9dd.pth
SIE_COE: 3.0
STRIDE_SIZE: (16, 16)
WITH_IBN: False
WITH_NL: False
WITH_SE: False
DEVICE: cuda
FREEZE_LAYERS:
HEADS:
CLS_LAYER: Linear
EMBEDDING_DIM: 0
MARGIN: 0.0
NAME: EmbeddingHead
NECK_FEAT: before
NORM: BN
NUM_CLASSES: 0
POOL_LAYER: Identity
SCALE: 1
WITH_BNNECK: True
LOSSES:
CE:
ALPHA: 0.2
EPSILON: 0.0
SCALE: 1.0
CIRCLE:
GAMMA: 128
MARGIN: 0.25
SCALE: 1.0
COSFACE:
GAMMA: 128
MARGIN: 0.25
SCALE: 1.0
FL:
ALPHA: 0.25
GAMMA: 2
SCALE: 1.0
NAME: ('CrossEntropyLoss', 'TripletLoss')
TRI:
HARD_MINING: True
MARGIN: 0.0
NORM_FEAT: False
SCALE: 1.0
META_ARCHITECTURE: Baseline
PIXEL_MEAN: [127.5, 127.5, 127.5]
PIXEL_STD: [127.5, 127.5, 127.5]
QUEUE_SIZE: 8192
WEIGHTS:
OUTPUT_DIR: logs/market1501/sbs_vit_base
SOLVER:
AMP:
ENABLED: False
BASE_LR: 0.008
BIAS_LR_FACTOR: 1.0
CHECKPOINT_PERIOD: 30
CLIP_GRADIENTS:
CLIP_TYPE: norm
CLIP_VALUE: 5.0
ENABLED: True
NORM_TYPE: 2.0
DELAY_EPOCHS: 0
ETA_MIN_LR: 1.6e-05
FREEZE_ITERS: 0
GAMMA: 0.1
HEADS_LR_FACTOR: 1.0
IMS_PER_BATCH: 64
MAX_EPOCH: 120
MOMENTUM: 0.9
NESTEROV: False
OPT: SGD
SCHED: CosineAnnealingLR
STEPS: [30, 55]
WARMUP_FACTOR: 0.01
WARMUP_ITERS: 1000
WARMUP_METHOD: linear
WEIGHT_DECAY: 0.0001
WEIGHT_DECAY_BIAS: 0.0005
WEIGHT_DECAY_NORM: 0.0005
TEST:
AQE:
ALPHA: 3.0
ENABLED: False
QE_K: 5
QE_TIME: 1
EVAL_PERIOD: 5
FLIP:
ENABLED: False
IMS_PER_BATCH: 128
METRIC: cosine
PRECISE_BN:
DATASET: Market1501
ENABLED: False
NUM_ITER: 300
RERANK:
ENABLED: False
K1: 20
K2: 6
LAMBDA: 0.3
ROC:
ENABLED: False
[02/03 13:53:16 fastreid]: Full config saved to /home/appuser/data/logs/market1501/sbs_vit_base/config.yaml
[02/03 13:53:16 fastreid.utils.env]: Using a generated random seed 16525539
[02/03 13:53:16 fastreid.engine.defaults]: Prepare training set
[02/03 13:53:16 fastreid.data.datasets.bases]: => Loaded Market1501 in csv format:

subset # ids # images # cameras
train 751 12936 6
[02/03 13:53:16 fastreid.data.build]: Using training sampler NaiveIdentitySampler
[02/03 13:53:16 fastreid.engine.defaults]: Auto-scaling the num_classes=751
[02/03 13:53:19 fastreid.engine.defaults]: Model:
Baseline(
(backbone): VisionTransformer(
(patch_embed): PatchEmbed_overlap(
  (proj): Conv2d(3, 768, kernel_size=(16, 16), stride=(16, 16))
)
(pos_drop): Dropout(p=0.0, inplace=False)
(blocks): ModuleList(
  (0): Block(
    (norm1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
    (attn): Attention(
      (qkv): Linear(in_features=768, out_features=2304, bias=True)
      (attn_drop): Dropout(p=0.0, inplace=False)
      (proj): Linear(in_features=768, out_features=768, bias=True)
      (proj_drop): Dropout(p=0.0, inplace=False)
    )
    (drop_path): Identity()
    (norm2): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
    (mlp): Mlp(
      (fc1): Linear(in_features=768, out_features=3072, bias=True)
      (act): GELU(approximate='none')
      (fc2): Linear(in_features=3072, out_features=768, bias=True)
      (drop): Dropout(p=0.0, inplace=False)
    )
  )
  (1-11): 11 x Block(
    (norm1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
    (attn): Attention(
      (qkv): Linear(in_features=768, out_features=2304, bias=True)
      (attn_drop): Dropout(p=0.0, inplace=False)
      (proj): Linear(in_features=768, out_features=768, bias=True)
      (proj_drop): Dropout(p=0.0, inplace=False)
    )
    (drop_path): DropPath()
    (norm2): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
    (mlp): Mlp(
      (fc1): Linear(in_features=768, out_features=3072, bias=True)
      (act): GELU(approximate='none')
      (fc2): Linear(in_features=3072, out_features=768, bias=True)
      (drop): Dropout(p=0.0, inplace=False)
    )
  )
)
(norm): LayerNorm((768,), eps=1e-06, elementwise_affine=True)

)
(heads): EmbeddingHead(
(pool_layer): Identity()
(bottleneck): Sequential(
(0): BatchNorm(768, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(cls_layer): Linear(num_classes=751, scale=1, margin=0.0)
)
)
/home/appuser/data/fastreid/data/transforms/functional.py:46: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
img = torch.ByteTensor(torch.ByteStorage.from_buffer(pic.tobytes()))
[02/03 13:53:19 fastreid.utils.checkpoint]: No checkpoint found. Training model from scratch
[02/03 13:53:19 fastreid.engine.train_loop]: Starting training from epoch 0
[02/03 13:53:49 fastreid.utils.events]: eta: 0:55:42 epoch/iter: 0/199 total_loss: 15.98 loss_cls: 6.66 loss_triplet: 9.319 time: 0.1402 data_time: 0.0008 lr: 1.66e-03 max_mem: 6157M
[02/03 13:53:49 fastreid.utils.events]: eta: 0:55:41 epoch/iter: 0/201 total_loss: 15.95 loss_cls: 6.66 loss_triplet: 9.3 time: 0.1401 data_time: 0.0009 lr: 1.67e-03 max_mem: 6157M
[02/03 13:54:17 fastreid.utils.events]: eta: 0:55:13 epoch/iter: 1/399 total_loss: 15.88 loss_cls: 6.662 loss_triplet: 9.207 time: 0.1397 data_time: 0.0008 lr: 3.24e-03 max_mem: 6157M
[02/03 13:54:18 fastreid.utils.events]: eta: 0:55:12 epoch/iter: 1/403 total_loss: 15.87 loss_cls: 6.663 loss_triplet: 9.189 time: 0.1395 data_time: 0.0009 lr: 3.27e-03 max_mem: 6157M
[02/03 13:54:45 fastreid.utils.events]: eta: 0:54:39 epoch/iter: 2/599 total_loss: 15.98 loss_cls: 6.665 loss_triplet: 9.34 time: 0.1391 data_time: 0.0008 lr: 4.82e-03 max_mem: 6157M
[02/03 13:54:46 fastreid.utils.events]: eta: 0:54:38 epoch/iter: 2/605 total_loss: 16 loss_cls: 6.666 loss_triplet: 9.364 time: 0.1391 data_time: 0.0008 lr: 4.87e-03 max_mem: 6157M
[02/03 13:55:13 fastreid.utils.events]: eta: 0:54:07 epoch/iter: 3/799 total_loss: 15.94 loss_cls: 6.667 loss_triplet: 9.269 time: 0.1389 data_time: 0.0007 lr: 6.41e-03 max_mem: 6157M
[02/03 13:55:14 fastreid.utils.events]: eta: 0:54:05 epoch/iter: 3/807 total_loss: 15.94 loss_cls: 6.668 loss_triplet: 9.265 time: 0.1388 data_time: 0.0007 lr: 6.47e-03 max_mem: 6157M
[02/03 13:55:41 fastreid.utils.events]: eta: 0:53:41 epoch/iter: 4/999 total_loss: 15.9 loss_cls: 6.656 loss_triplet: 9.27 time: 0.1390 data_time: 0.0008 lr: 7.99e-03 max_mem: 6157M
[02/03 13:55:42 fastreid.engine.defaults]: Prepare testing set
[02/03 13:55:42 fastreid.data.datasets.bases]: => Loaded Market1501 in csv format:

subset # ids # images # cameras
query 750 3368 6
gallery 751 15913 6
[02/03 13:55:42 fastreid.evaluation.evaluator]: Start inference on 19281 images
[02/03 13:55:44 fastreid.evaluation.evaluator]: Inference done 11/151. 0.0901 s / batch. ETA=0:00:12
[02/03 13:55:57 fastreid.evaluation.evaluator]: Total inference time: 0:00:13.639541 (0.093422 s / batch per device, on 1 devices)
[02/03 13:55:57 fastreid.evaluation.evaluator]: Total inference pure compute time: 0:00:13 (0.090480 s / batch per device, on 1 devices)
[02/03 13:56:03 fastreid.engine.defaults]: Evaluation results for Market1501 in csv format:
[02/03 13:56:03 fastreid.evaluation.testing]: Evaluation results in csv format:
Dataset Rank-1 Rank-5 Rank-10
:----------- :--------- :--------- :----------
Market1501 0.89 4.07 6.68
[02/03 13:56:03 fastreid.utils.events]: eta: 0:53:39 epoch/iter: 4/1009 total_loss: 15.92 loss_cls: 6.656 loss_triplet: 9.292 time: 0.1389 data_time: 0.0007 lr: 8.00e-03 max_mem: 6157M
[02/03 13:56:30 fastreid.utils.events]: eta: 0:53:11 epoch/iter: 5/1199 total_loss: 16 loss_cls: 6.665 loss_triplet: 9.319 time: 0.1388 data_time: 0.0007 lr: 8.00e-03 max_mem: 6157M
[02/03 13:56:31 fastreid.utils.events]: eta: 0:53:09 epoch/iter: 5/1211 total_loss: 15.98 loss_cls: 6.664 loss_triplet: 9.298 time: 0.1388 data_time: 0.0007 lr: 8.00e-03 max_mem: 6157M
[02/03 13:56:58 fastreid.utils.events]: eta: 0:52:44 epoch/iter: 6/1399 total_loss: 16.02 loss_cls: 6.664 loss_triplet: 9.367 time: 0.1390 data_time: 0.0008 lr: 7.99e-03 max_mem: 6157M
[02/03 13:57:00 fastreid.utils.events]: eta: 0:52:43 epoch/iter: 6/1413 total_loss: 16.01 loss_cls: 6.664 loss_triplet: 9.357 time: 0.1390 data_time: 0.0008 lr: 7.99e-03 max_mem: 6157M
[02/03 13:57:25 fastreid.utils.events]: eta: 0:52:17 epoch/iter: 7/1599 total_loss: 15.89 loss_cls: 6.658 loss_triplet: 9.233 time: 0.1389 data_time: 0.0007 lr: 7.99e-03 max_mem: 6157M
[02/03 13:57:27 fastreid.utils.events]: eta: 0:52:15 epoch/iter: 7/1615 total_loss: 15.89 loss_cls: 6.657 loss_triplet: 9.226 time: 0.1389 data_time: 0.0006 lr: 7.99e-03 max_mem: 6157M
[02/03 13:57:53 fastreid.utils.events]: eta: 0:51:50 epoch/iter: 8/1799 total_loss: 15.87 loss_cls: 6.66 loss_triplet: 9.201 time: 0.1389 data_time: 0.0006 lr: 7.98e-03 max_mem: 6157M
[02/03 13:57:55 fastreid.utils.events]: eta: 0:51:47 epoch/iter: 8/1817 total_loss: 15.9 loss_cls: 6.662 loss_triplet: 9.238 time: 0.1389 data_time: 0.0008 lr: 7.98e-03 max_mem: 6157M
[02/03 13:58:21 fastreid.utils.events]: eta: 0:51:20 epoch/iter: 9/1999 total_loss: 15.86 loss_cls: 6.667 loss_triplet: 9.201 time: 0.1388 data_time: 0.0007 lr: 7.96e-03 max_mem: 6157M
[02/03 13:58:23 fastreid.engine.defaults]: Prepare testing set
[02/03 13:58:24 fastreid.data.datasets.bases]: => Loaded Market1501 in csv format:
subset # ids # images # cameras
:--------- :-------- :----------- :------------
query 750 3368 6
gallery 751 15913 6
[02/03 13:58:24 fastreid.evaluation.evaluator]: Start inference on 19281 images
[02/03 13:58:25 fastreid.evaluation.evaluator]: Inference done 11/151. 0.0870 s / batch. ETA=0:00:14
[02/03 13:58:38 fastreid.evaluation.evaluator]: Total inference time: 0:00:13.211242 (0.090488 s / batch per device, on 1 devices)
[02/03 13:58:38 fastreid.evaluation.evaluator]: Total inference pure compute time: 0:00:12 (0.087885 s / batch per device, on 1 devices)
[02/03 13:58:43 fastreid.engine.defaults]: Evaluation results for Market1501 in csv format:
[02/03 13:58:43 fastreid.evaluation.testing]: Evaluation results in csv format:
Dataset Rank-1 Rank-5 Rank-10
:----------- :--------- :--------- :----------
Market1501 0.92 3.80 6.71
[02/03 13:58:43 fastreid.utils.events]: eta: 0:51:18 epoch/iter: 9/2019 total_loss: 15.9 loss_cls: 6.667 loss_triplet: 9.237 time: 0.1388 data_time: 0.0009 lr: 7.96e-03 max_mem: 6157M
[02/03 13:59:08 fastreid.utils.events]: eta: 0:50:53 epoch/iter: 10/2199 total_loss: 16.07 loss_cls: 6.657 loss_triplet: 9.41 time: 0.1388 data_time: 0.0010 lr: 7.95e-03 max_mem: 6157M
[02/03 13:59:11 fastreid.utils.events]: eta: 0:50:50 epoch/iter: 10/2221 total_loss: 16.02 loss_cls: 6.657 loss_triplet: 9.371 time: 0.1388 data_time: 0.0008 lr: 7.95e-03 max_mem: 6157M
[02/03 13:59:35 fastreid.utils.events]: eta: 0:50:23 epoch/iter: 11/2399 total_loss: 15.91 loss_cls: 6.661 loss_triplet: 9.245 time: 0.1388 data_time: 0.0009 lr: 7.93e-03 max_mem: 6157M
[02/03 13:59:39 fastreid.utils.events]: eta: 0:50:20 epoch/iter: 11/2423 total_loss: 15.88 loss_cls: 6.657 loss_triplet: 9.227 time: 0.1388 data_time: 0.0008 lr: 7.93e-03 max_mem: 6157M
[02/03 14:00:03 fastreid.utils.events]: eta: 0:49:55 epoch/iter: 12/2599 total_loss: 15.91 loss_cls: 6.656 loss_triplet: 9.251 time: 0.1388 data_time: 0.0009 lr: 7.91e-03 max_mem: 6157M
[02/03 14:00:07 fastreid.utils.events]: eta: 0:49:52 epoch/iter: 12/2625 total_loss: 15.94 loss_cls: 6.662 loss_triplet: 9.305 time: 0.1388 data_time: 0.0009 lr: 7.91e-03 max_mem: 6157M
[02/03 14:00:31 fastreid.utils.events]: eta: 0:49:30 epoch/iter: 13/2799 total_loss: 15.87 loss_cls: 6.659 loss_triplet: 9.204 time: 0.1388 data_time: 0.0010 lr: 7.88e-03 max_mem: 6157M
[02/03 14:00:35 fastreid.utils.events]: eta: 0:49:26 epoch/iter: 13/2827 total_loss: 15.92 loss_cls: 6.658 loss_triplet: 9.235 time: 0.1388 data_time: 0.0008 lr: 7.88e-03 max_mem: 6157M
[02/03 14:00:59 fastreid.utils.events]: eta: 0:49:06 epoch/iter: 14/2999 total_loss: 15.94 loss_cls: 6.661 loss_triplet: 9.262 time: 0.1389 data_time: 0.0008 lr: 7.85e-03 max_mem: 6157M
[02/03 14:01:03 fastreid.engine.defaults]: Prepare testing set
[02/03 14:01:03 fastreid.data.datasets.bases]: => Loaded Market1501 in csv format:
subset # ids # images # cameras
:--------- :-------- :----------- :------------
query 750 3368 6
gallery 751 15913 6
[02/03 14:01:03 fastreid.evaluation.evaluator]: Start inference on 19281 images
[02/03 14:01:05 fastreid.evaluation.evaluator]: Inference done 11/151. 0.0912 s / batch. ETA=0:00:12
[02/03 14:01:18 fastreid.evaluation.evaluator]: Total inference time: 0:00:13.822130 (0.094672 s / batch per device, on 1 devices)
[02/03 14:01:18 fastreid.evaluation.evaluator]: Total inference pure compute time: 0:00:13 (0.092548 s / batch per device, on 1 devices)
[02/03 14:01:24 fastreid.engine.defaults]: Evaluation results for Market1501 in csv format:
[02/03 14:01:24 fastreid.evaluation.testing]: Evaluation results in csv format:
Dataset Rank-1 Rank-5 Rank-10
:----------- :--------- :--------- :----------
Market1501 0.89 4.01 6.77
[02/03 14:01:24 fastreid.utils.events]: eta: 0:49:02 epoch/iter: 14/3029 total_loss: 15.91 loss_cls: 6.658 loss_triplet: 9.244 time: 0.1389 data_time: 0.0008 lr: 7.85e-03 max_mem: 6157M
[02/03 14:01:48 fastreid.utils.events]: eta: 0:48:36 epoch/iter: 15/3199 total_loss: 15.83 loss_cls: 6.657 loss_triplet: 9.189 time: 0.1389 data_time: 0.0006 lr: 7.82e-03 max_mem: 6157M
[02/03 14:01:52 fastreid.utils.events]: eta: 0:48:32 epoch/iter: 15/3231 total_loss: 15.84 loss_cls: 6.657 loss_triplet: 9.192 time: 0.1389 data_time: 0.0007 lr: 7.82e-03 max_mem: 6157M
[02/03 14:02:16 fastreid.utils.events]: eta: 0:48:12 epoch/iter: 16/3399 total_loss: 15.8 loss_cls: 6.661 loss_triplet: 9.101 time: 0.1390 data_time: 0.0007 lr: 7.79e-03 max_mem: 6157M
[02/03 14:02:20 fastreid.utils.events]: eta: 0:48:08 epoch/iter: 16/3433 total_loss: 15.86 loss_cls: 6.662 loss_triplet: 9.185 time: 0.1390 data_time: 0.0007 lr: 7.79e-03 max_mem: 6157M
[02/03 14:02:43 fastreid.utils.events]: eta: 0:47:45 epoch/iter: 17/3599 total_loss: 15.91 loss_cls: 6.665 loss_triplet: 9.251 time: 0.1389 data_time: 0.0007 lr: 7.75e-03 max_mem: 6157M
[02/03 14:02:48 fastreid.utils.events]: eta: 0:47:40 epoch/iter: 17/3635 total_loss: 15.91 loss_cls: 6.665 loss_triplet: 9.248 time: 0.1389 data_time: 0.0006 lr: 7.75e-03 max_mem: 6157M
[02/03 14:03:11 fastreid.utils.events]: eta: 0:47:14 epoch/iter: 18/3799 total_loss: 16.04 loss_cls: 6.656 loss_triplet: 9.397 time: 0.1389 data_time: 0.0007 lr: 7.71e-03 max_mem: 6157M
[02/03 14:03:16 fastreid.utils.events]: eta: 0:47:07 epoch/iter: 18/3837 total_loss: 16.1 loss_cls: 6.656 loss_triplet: 9.443 time: 0.1389 data_time: 0.0008 lr: 7.71e-03 max_mem: 6157M
[02/03 14:03:39 fastreid.utils.events]: eta: 0:46:42 epoch/iter: 19/3999 total_loss: 15.99 loss_cls: 6.666 loss_triplet: 9.328 time: 0.1390 data_time: 0.0006 lr: 7.67e-03 max_mem: 6157M
[02/03 14:03:45 fastreid.engine.defaults]: Prepare testing set
[02/03 14:03:46 fastreid.data.datasets.bases]: => Loaded

I added one change in the code:

replaced "from collections import Mapping" with "from collections.abc import Mapping" (collections.Mapping was removed in Python 3.10).
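For reference, a backwards-compatible variant of that one-line fix is possible if the code must still run on very old interpreters (a sketch, not part of FastReID):

```python
# collections.Mapping was a deprecated alias removed in Python 3.10;
# collections.abc.Mapping has existed since Python 3.3, so try it first.
try:
    from collections.abc import Mapping   # Python 3.3+
except ImportError:                       # only very old interpreters
    from collections import Mapping

# Sanity check: a dict is a Mapping, a list is not.
cfg = {"MAX_EPOCH": 120}
print(isinstance(cfg, Mapping), isinstance([1, 2], Mapping))
```

On the Python 3.12 used in the container only the first import ever runs, so the plain one-line replacement above is equally fine.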