In C++, when using LibTorch (the C++ version of PyTorch), what should you store a batch of tensors in? I’m running into the problem of not being able to reset the batch on the next step, because C++ doesn’t let me declare a new variable with the same name as an existing one.

In my attempt, my batch of tensors is one single 385x42 tensor, where 385 is the batch size. In a for loop I use `torch::cat` to concatenate 385 smaller 1D tensors, each 42 numbers long. (Maybe ‘stack’ or ‘append’ are better terms for what I’m doing, since they are stacked together picket-fence style more than ‘concatenated’, but that’s what I’m using.) Anyway, there is no problem with this shape. It seems to work fine for one forward and backward pass, but then on the next pass the tensor becomes 770x42 instead of a fresh 385x42 tensor of the next 385 arrays of 42 numbers. I hope I am painting a picture and not being too verbose.

The code:

Near the bottom I have the line `all_step_obs = torch::tensor({});` to try to wipe out the contents of the tensor (i.e., the batch), but this gives me a `Segmentation fault (core dumped)`. I guess from trying to access the tensor outside of the loop(?)

If I don’t have this line, I get a 770x42 tensor after the next `step`.

```
int max_steps = 385;
int steps = 2000;
auto l1_loss = torch::smooth_l1_loss;
auto optimizer = torch::optim::Adam(actor.parameters(), 3e-4);

torch::Tensor train() {
    torch::Tensor all_step_obs;
    for (int i = 0; i < steps; ++i)
    {
        // collect one batch of observations
        for (int j = 0; j < max_steps; ++j)
        {
            all_step_obs = torch::cat({Gym().step().unsqueeze(0), all_step_obs});
        }
        auto mean = actor.forward(all_step_obs);
        auto loss = l1_loss(mean, torch::rand({385, 42}), 1, 0);
        optimizer.zero_grad();
        loss.backward();
        optimizer.step();
        all_step_obs = torch::tensor({}); // attempt to reset the batch -> segfault
        if (i == steps - 1) {
            return loss;
        }
    }
}
```