Disk Quota Exceeded error

Hi, I’m not even sure if this is actually related to PyTorch. Anyway, it is my first time writing a query, so please ask any questions that will help you better understand the problem.

This is the final error that I am getting:

torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
OSError: [Errno 122] Disk quota exceeded: <directory/.triton/dump/long_alphanumeric>

I have run the same code previously and did not encounter this error; this is the first time. Please provide any guidance if this has happened to you too.

You might need to clear the default cache folder on your system and/or set the Triton cache to another dir via TRITON_CACHE_DIR.
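If it helps, here is a minimal sketch of both options (the paths are assumptions, adjust for your system): it measures how big the default cache is and then redirects future compilations to a fresh directory instead of deleting the shared one.

```python
import os

# Default Triton cache location (an assumption; Triton falls back to
# ~/.triton when TRITON_CACHE_DIR is not set).
default_cache = os.path.expanduser("~/.triton")

# Measure how much space the cache currently uses before deciding to clear it.
total_bytes = 0
for root, _dirs, files in os.walk(default_cache):
    for name in files:
        try:
            total_bytes += os.path.getsize(os.path.join(root, name))
        except OSError:
            pass  # file removed by another process mid-walk
print(f"Triton cache at {default_cache}: {total_bytes / 1e6:.1f} MB")

# Redirect future compilations to a hypothetical writable location
# rather than clearing the shared default:
new_cache = os.path.expanduser("~/scratch/triton_cache")
os.makedirs(new_cache, exist_ok=True)
os.environ["TRITON_CACHE_DIR"] = new_cache
```

On a shared machine, redirecting is the safer of the two options, since it leaves other users' cached kernels untouched.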

Hi @ptrblck, it is a shared system and several other users are also using the same GPU machine. Won’t clearing the default cache cause issues for them?

That’s a valid concern unless each user is working in e.g. an isolated docker container and is using local folders for their cache.

So, for anyone facing this issue later:
This is the code to find out the directory where the Triton cache is currently stored:
import os

current_cache_dir = os.getenv('TRITON_CACHE_DIR', os.path.expanduser('~/.triton/'))
print(f"Current Triton cache directory: {current_cache_dir}")

and then this is the code to change the directory:
os.environ['TRITON_CACHE_DIR'] = '/path/to/new/cache/dir'
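One caveat: as far as I understand, Triton reads this variable when it compiles its first kernel, so it should be set before any `torch.compile` call runs in the process. A minimal sketch with a hypothetical per-user path:

```python
import os

# Hypothetical per-user directory on a filesystem with free quota.
cache_dir = os.path.expanduser("~/triton_cache")
os.makedirs(cache_dir, exist_ok=True)

# Set this before importing/compiling anything that triggers Triton;
# changing it after the first kernel is built won't move that cache.
os.environ["TRITON_CACHE_DIR"] = cache_dir

print(f"New Triton cache directory: {os.environ['TRITON_CACHE_DIR']}")
```

Alternatively, exporting the variable in the shell before launching Python avoids the ordering question entirely.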

Now, moving on to my issue. It somehow resolved itself, but if it happens again, I guess I can use this directory-change method.
I am keeping this open so that if the problem arises next time and I’m unable to fix it, I’ll continue here.