PyTorch 128 and 129

I am using Google Translate:

I have a problem with the next-newest PyTorch 128 (2.8.0). When I train in RVC I see that Gloo is not a supported device. Also, the old PyTorch 118 (2.0) could allocate my RAM automatically. I use a program called RVC.
What does this mean, and does it affect the training?

train.py:429: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
  with autocast(enabled=hps.train.fp16_run):
train.py:457: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
  with autocast(enabled=False):
train.py:476: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
  with autocast(enabled=False):
train.py:486: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
  with autocast(enabled=hps.train.fp16_run):
train.py:489: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
  with autocast(enabled=False):

The warnings point out that the script is calling a deprecated API and should switch to the suggested newer one. Your training won't be affected; the old spelling still works for now, it just prints these warnings.
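If you (or the RVC maintainers) want to silence the warnings, the fix is a one-line change at each site listed above. A minimal sketch of the migration, assuming `hps.train.fp16_run` is RVC's fp16 flag as shown in the warnings (this is a patch fragment for train.py, not a standalone script):

```python
# Deprecated spelling (emits the FutureWarning):
#   from torch.cuda.amp import autocast
#   with autocast(enabled=hps.train.fp16_run):
#       ...

# Current spelling -- same behavior, device type passed explicitly:
from torch.amp import autocast

with autocast('cuda', enabled=hps.train.fp16_run):
    ...
```

The only difference is that `torch.amp.autocast` takes the device type (`'cuda'`) as its first argument instead of being hard-wired to CUDA.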

And what about the Gloo device? I can train on PyTorch 127 (2.7.1), but not on 128/129 (2.8.0/2.9.0).

And I get this in PyTorch 2.8.0:

Process Process-1:
Traceback (most recent call last):
File "multiprocessing\process.py", line 315, in _bootstrap
File "multiprocessing\process.py", line 108, in run
File "C:\run\RVC20240604Nvidia50x0\infer\modules\train\train.py", line 129, in run
dist.init_process_group(
File "C:\run\RVC20240604Nvidia50x0\runtime\lib\site-packages\torch\distributed\c10d_logger.py", line 81, in wrapper
return func(*args, **kwargs)
File "C:\run\RVC20240604Nvidia50x0\runtime\lib\site-packages\torch\distributed\c10d_logger.py", line 95, in wrapper
func_return = func(*args, **kwargs)
File "C:\run\RVC20240604Nvidia50x0\runtime\lib\site-packages\torch\distributed\distributed_c10d.py", line 1764, in init_process_group
default_pg, _ = _new_process_group_helper(
File "C:\run\RVC20240604Nvidia50x0\runtime\lib\site-packages\torch\distributed\distributed_c10d.py", line 1991, in _new_process_group_helper
backend_class = ProcessGroupGloo(
RuntimeError: makeDeviceForHostname(): unsupported gloo device

I'm not deeply familiar with Gloo, and would normally recommend the NCCL backend for GPU training. Note, though, that your paths show a Windows install, and NCCL is only shipped for Linux; on Windows, Gloo is the supported backend for torch.distributed, so the "unsupported gloo device" error is worth reporting to the RVC project against PyTorch 2.8+.
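For illustration, here is a small sketch of choosing a backend before `dist.init_process_group`. The helper name `pick_backend` is mine, not RVC's; it assumes you want NCCL for CUDA training on Linux and must fall back to Gloo on Windows, as in the traceback above:

```python
import platform

def pick_backend(cuda_available: bool) -> str:
    """Choose a torch.distributed backend.

    NCCL is the usual choice for multi-GPU training, but it is not
    shipped on Windows, where Gloo is the supported backend.
    """
    if cuda_available and platform.system() == "Linux":
        return "nccl"
    return "gloo"

# The call in RVC's train.py (line 129 in the traceback) would then
# look roughly like:
#   dist.init_process_group(
#       backend=pick_backend(torch.cuda.is_available()),
#       ...)
```

This doesn't by itself fix the `makeDeviceForHostname(): unsupported gloo device` error, which looks like a Gloo device-resolution problem in the newer PyTorch builds rather than a wrong backend choice, but it keeps the NCCL recommendation from breaking Windows users.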

I am using a screen reader. The problem is in the train.py file in RVC.