RNNs use a loop, unrolling the time dimension, so time-major RNNs process contiguous sequential memory blocks. How important that is depends on the implementation, the device, and the tensor size (e.g. whether a timestep slice fits in cache).
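A minimal NumPy sketch of the layout difference, using made-up sizes: in a time-major array of shape (time, batch, features), the slice for one timestep is a single contiguous block, while in a batch-major (batch, time, features) array it is strided.

```python
import numpy as np

T, B, F = 50, 32, 128  # hypothetical sequence length, batch size, feature dim

# Batch-major layout: (batch, time, features), C-contiguous
batch_major = np.zeros((B, T, F), dtype=np.float32)
# Time-major layout: (time, batch, features), made C-contiguous after transpose
time_major = np.ascontiguousarray(batch_major.transpose(1, 0, 2))

# An RNN loop over t reads one timestep slice per step; only the
# time-major slice is a single contiguous block of memory.
print(time_major[0].flags["C_CONTIGUOUS"])      # True
print(batch_major[:, 0].flags["C_CONTIGUOUS"])  # False
```

This is why frameworks that unroll the loop step by step sometimes prefer time-major inputs: each step's read is sequential rather than strided.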