| Topic | Replies | Views | Activity |
| --- | --- | --- | --- |
| About the Memory Format category | 0 | 1546 | March 18, 2020 |
| Libtorch CPP Api for Memory Format Channels Last | 1 | 5 | September 7, 2024 |
| Replacing torch.zeros internals with cudaMemset instead of fill kernel | 2 | 28 | September 5, 2024 |
| Fancy indexing memory footprint | 0 | 10 | August 19, 2024 |
| Unable to free all GPU memory | 3 | 49 | August 12, 2024 |
| Why does tf.tile not make use of strided layout? And what about "inverse" strides? | 0 | 10 | August 3, 2024 |
| Advanced Slicing | 3 | 47 | July 28, 2024 |
| Why aren't inputs to conv1d channels last? | 3 | 1899 | July 22, 2024 |
| Frombuffer() → "The given buffer is not writable" | 1 | 249 | June 19, 2024 |
| TensorDataset with lazy loading? | 4 | 219 | June 7, 2024 |
| Understanding GPU memory visualization result | 1 | 153 | May 11, 2024 |
| Do operations between tensors and scalars move the tensor to CPU? | 5 | 863 | April 20, 2024 |
| Understanding error msg "view size is not compatible with input tensor's size and stride" | 5 | 2821 | April 19, 2024 |
| Torch + pytest leads to memory fragmentation: How to do proper integration testing of a lot of torch models? | 0 | 263 | April 19, 2024 |
| Using 128 bit floating point datatype with Pytorch (not a complex number) | 4 | 3304 | March 31, 2024 |
| What does the term "peak memory" should be referring to, max allocated vs. reserved? | 0 | 210 | March 16, 2024 |
| Computing to a sub-tensor portion of the output tensor? | 0 | 185 | February 28, 2024 |
| What is the most optimal shape of a tensor for storage and computational efficiency? | 0 | 257 | February 14, 2024 |
| `torch.cuda.is_available()` allocates unwanted memory? | 1 | 314 | February 8, 2024 |
| Large disk usage for some torch tensors (200MB vs 4MB) with same shape and dtype | 2 | 339 | January 22, 2024 |
| Help with CUDA memory allocation during forward Linear | 5 | 626 | January 9, 2024 |
| Discrepancy Between Expected and Actual GPU Memory Usage for Large Tensors | 0 | 293 | January 8, 2024 |
| Making a slice contiguous | 2 | 612 | January 3, 2024 |
| How to share data among DataLoader processes to save memory | 5 | 12207 | January 2, 2024 |
| After calling torch.nn.Module.cuda(), model doesn't seem to be freed from RAM | 2 | 315 | December 27, 2023 |
| Rewriting the CUDA cache memory allocator | 0 | 393 | November 28, 2023 |
| What determines the stride of the output of einsum? | 1 | 440 | November 23, 2023 |
| Question about Tensor storage lifespan | 0 | 392 | November 20, 2023 |
| Question about tensor assign time | 0 | 362 | November 10, 2023 |
| MAX GPU memory allocated during training for different torch version | 0 | 424 | November 10, 2023 |