I don't think you can produce batches of random samples without needing to copy the data. However, there are some things you can do to improve the stochasticity of the training.

The code you posted takes batches in order, starting from the first sample available and maybe leaving a few samples unused at the end when there aren't enough left to fill a batch.

```
for i in range(100):
    num_batches = n_examples // batch_size
    for k in range(num_batches):
        start, end = k * batch_size, (k + 1) * batch_size
```

You could improve this by randomly skipping a few samples at the beginning of the list instead of discarding samples only at the end of the list…

```
for i in range(100):
    num_batches = n_examples // batch_size
    unused_examples = n_examples % batch_size
    # shift the whole epoch by a random offset in [0, unused_examples],
    # so the skipped samples move between the start and the end of the list
    random_start = np.random.randint(unused_examples + 1)
    for k in range(num_batches):
        start, end = random_start + k * batch_size, random_start + (k + 1) * batch_size
```

Another thing you can do is to randomise the order of the batches…

```
for i in range(100):
    num_batches = n_examples // batch_size
    unused_examples = n_examples % batch_size
    random_start = np.random.randint(unused_examples + 1)
    # visit the batches in a random order each epoch
    for k in np.random.permutation(num_batches):
        start, end = random_start + k * batch_size, random_start + (k + 1) * batch_size
```

Yet another idea would be to randomise the size of the batches within a reasonable range…

```
for i in range(100):
    # draw this epoch's batch size from a range around the nominal batch_size
    # (np.random.randint excludes the upper bound)
    epoch_batch_size = np.random.randint(int(batch_size * .9), int(batch_size * 1.1))
    num_batches = n_examples // epoch_batch_size
    unused_examples = n_examples % epoch_batch_size
    random_start = np.random.randint(unused_examples + 1)
    for k in np.random.permutation(num_batches):
        start, end = random_start + k * epoch_batch_size, random_start + (k + 1) * epoch_batch_size
```

If you really need random batches, `tensor.index_select(0, indices)` might be more efficient, though it still copies the data.
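
For reference, here is a minimal sketch of that approach, assuming `data` is a tensor with the samples along dimension 0 (the shape and names below are just placeholders):

```
import torch

data = torch.randn(1000, 16)  # placeholder dataset: 1000 samples, 16 features
batch_size = 32
num_batches = data.size(0) // batch_size

# draw one random permutation of the sample indices per epoch,
# then carve it into contiguous index chunks, one per batch
perm = torch.randperm(data.size(0))
for k in range(num_batches):
    indices = perm[k * batch_size:(k + 1) * batch_size]
    batch = data.index_select(0, indices)  # still copies the selected rows
```

Unlike the sliced batches above, every batch here is a genuinely random subset of the samples, at the cost of one row copy per batch.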