| Topic | Replies | Views | Activity |
| --- | ---: | ---: | --- |
| About the Memory Format category | 0 | 1566 | March 18, 2020 |
| How can I decrease the pytorch confidence to hold too much reserved memory | 3 | 31 | January 8, 2025 |
| `torch.cuda.is_available()` allocates unwanted memory? | 2 | 418 | December 11, 2024 |
| Guarantee traversal order for optimiser states | 0 | 42 | October 12, 2024 |
| How to share data among DataLoader processes to save memory | 6 | 13205 | October 10, 2024 |
| Libtorch CPP Api for Memory Format Channels Last | 1 | 14 | September 7, 2024 |
| Replacing torch.zeros internals with cudaMemset instead of fill kernel | 2 | 92 | September 5, 2024 |
| Fancy indexing memory footprint | 0 | 45 | August 19, 2024 |
| Unable to free all GPU memory | 3 | 163 | August 12, 2024 |
| Why does tf.tile not make use of strided layout? And what about "inverse" strides? | 0 | 33 | August 3, 2024 |
| Advanced Slicing | 3 | 114 | July 28, 2024 |
| Why aren't inputs to conv1d channels last? | 3 | 2004 | July 22, 2024 |
| Frombuffer() → "The given buffer is not writable" | 1 | 540 | June 19, 2024 |
| TensorDataset with lazy loading? | 4 | 641 | June 7, 2024 |
| Understanding GPU memory visualization result | 1 | 186 | May 11, 2024 |
| Do operations between tensors and scalars move the tensor to CPU? | 5 | 1030 | April 20, 2024 |
| Understanding error msg "view size is not compatible with input tensor's size and stride" | 5 | 3212 | April 19, 2024 |
| Torch + pytest leads to memory fragmentation: How to do proper integration testing of a lot of torch models? | 0 | 319 | April 19, 2024 |
| Using 128 bit floating point datatype with Pytorch (not a complex number) | 4 | 3568 | March 31, 2024 |
| What should the term "peak memory" refer to: max allocated vs. reserved? | 0 | 228 | March 16, 2024 |
| Computing to a sub-tensor portion of the output tensor? | 0 | 191 | February 28, 2024 |
| What is the most optimal shape of a tensor for storage and computational efficiency? | 0 | 261 | February 14, 2024 |
| Large disk usage for some torch tensors (200MB vs 4MB) with same shape and dtype | 2 | 371 | January 22, 2024 |
| Help with CUDA memory allocation during forward Linear | 5 | 686 | January 9, 2024 |
| Discrepancy Between Expected and Actual GPU Memory Usage for Large Tensors | 0 | 311 | January 8, 2024 |
| Making a slice contiguous | 2 | 711 | January 3, 2024 |
| After calling torch.nn.Module.cuda(), model doesn't seem to be freed from RAM | 2 | 329 | December 27, 2023 |
| Rewriting the CUDA cache memory allocator | 0 | 404 | November 28, 2023 |
| What determines the stride of the output of einsum? | 1 | 468 | November 23, 2023 |
| Question about Tensor storage lifespan | 0 | 414 | November 20, 2023 |