I called `embedded_cat = torch.cat([embedded, embedded_dirs], -1)` and found that GPU memory usage increased by about 3 GB. It seems that
torch.cat() allocates new storage for the result. Is there any way to reuse the original memory (i.e. that of embedded and embedded_dirs) instead?
No, I don’t think this is possible, as both tensors could be (and most likely are) allocated in different memory regions, so the concatenated result cannot be a view of either input.
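A minimal sketch of the behavior and a common workaround: `torch.cat` always copies its inputs into a newly allocated contiguous buffer, but if you know the final shape up front, you can allocate the concatenated tensor once and treat slices of it as `embedded` and `embedded_dirs`, writing each result directly into its slice. The tensor names and shapes below are illustrative assumptions, not taken from the original code.

```python
import torch

# torch.cat allocates a new buffer and copies both inputs into it:
a = torch.randn(4, 8)
b = torch.randn(4, 8)
c = torch.cat([a, b], dim=-1)
assert c.data_ptr() != a.data_ptr()  # c is new storage, not a view of a

# Workaround: preallocate the concatenated buffer and expose the two
# halves as views, so no extra allocation or copy happens at "cat" time.
out = torch.empty(4, 16)
embedded = out[:, :8]        # view into the first half of out
embedded_dirs = out[:, 8:]   # view into the second half of out
embedded.copy_(a)            # write results directly into the shared buffer
embedded_dirs.copy_(b)

assert torch.equal(out, c)   # same values as the torch.cat result
```

This only saves memory if the embedding computations can write into those views (e.g. via `out=` arguments or in-place copies); if they first produce standalone tensors, the copies still cost a transient allocation.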