Libtorch tutorial: Fixing the resize warning / reshape 101

While completing the first C++ tutorial with a recent libtorch build, I hit the resize warning described here: DCGAN C++ warning after PyTorch update · Issue #819 · pytorch/examples · GitHub

One of the comments suggests reshaping with fake_labels.sizes() (instead of batch_size), which I promptly tried. However, my training results seem underwhelming compared to the expected results posted in the guide (after training for 60 epochs). Here are some beginner questions related to this.

  • It’s unclear to me how the batch size differs from fake_labels.sizes(). Are we talking only portability / code robustness here, or would the two ever actually differ? (See the first sketch after this list for what I tried.)
  • Assuming I got it right and I can simply think of a tensor’s data as a multi-dimensional array (to start), how does one reshape its dimensions without data loss? (Second sketch below.)
  • Can I investigate the loaded dataset schema? This tutorial uses MNIST (handwritten digits), but the memory layout of the loaded data is a black box, which makes it really difficult to understand how the batching works. Can you print / reflect on / debug dataset schemas? (The first sketch below pokes at this too.)
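
For the first and third questions together, here is the quick inspection I bolted onto the tutorial code to look at what the loader actually yields (a minimal sketch, assuming the tutorial's MNIST dataset under ./mnist, kBatchSize = 64, and the default loader options, i.e. the last short batch is not dropped):

	// Rebuild the tutorial's dataset and print what the loader yields.
	auto dataset = torch::data::datasets::MNIST("./mnist")
			.map(torch::data::transforms::Normalize<>(0.5, 0.5))
			.map(torch::data::transforms::Stack<>());
	std::cout << "dataset size: " << dataset.size().value() << std::endl;

	auto data_loader = torch::data::make_data_loader(
			std::move(dataset),
			torch::data::DataLoaderOptions().batch_size(64));
	for (torch::data::Example<>& batch : *data_loader) {
		// For MNIST, batch.data is [B, 1, 28, 28] and batch.target is [B].
		std::cout << "data: " << batch.data.sizes()
				  << " target: " << batch.target.sizes() << std::endl;
	}
	// MNIST has 60,000 training images and 60000 % 64 == 32, so every
	// batch prints [64, 1, 28, 28] except the last, which prints
	// [32, 1, 28, 28]. If I read this right, batch_size and
	// batch.data.size(0) really do differ on the final batch.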
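
For the second question, a tiny standalone experiment (my own mental-model check, happy to be corrected) suggests reshape never drops or reorders values as long as the total element count is unchanged; it just regroups the same flat sequence into new dimensions:

	// Six values regrouped as 2 x 3, then flattened back: nothing is lost.
	torch::Tensor t = torch::arange(6);    // {0, 1, 2, 3, 4, 5}, shape [6]
	torch::Tensor m = t.reshape({ 2, 3 }); // {{0, 1, 2}, {3, 4, 5}}
	std::cout << m << std::endl;
	std::cout << m.reshape({ 6 }).equal(t) << std::endl;  // prints 1
	// A mismatched element count is refused outright:
	// t.reshape({ 4 }) throws instead of silently truncating.

Is that “regrouping the same flat data” intuition correct here?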

Any links to other documentation appreciated, thank you!

The tutorial: Using the PyTorch C++ Frontend — PyTorch Tutorials 2.0.0+cu117 documentation

My reshape changes:

	// Train discriminator with real images.
	discriminator->zero_grad();
	torch::Tensor real_images = batch.data.to(device);
	torch::Tensor real_labels =
			torch::empty(batch.data.size(0), device).uniform_(0.8, 1.0);
	// Reshape to the labels' sizes rather than the fixed batch_size, so
	// the last (possibly smaller) batch still lines up.
	torch::Tensor real_output =
			discriminator->forward(real_images).reshape(real_labels.sizes());
	torch::Tensor d_loss_real =
			torch::binary_cross_entropy(real_output, real_labels);
	d_loss_real.backward();

	// Train discriminator with fake images.
	torch::Tensor noise =
			torch::randn({ batch.data.size(0), k_noise_size, 1, 1 }, device);
	torch::Tensor fake_images = generator->forward(noise);
	torch::Tensor fake_labels = torch::zeros(batch.data.size(0), device);
	// detach() keeps this backward pass from reaching into the generator.
	torch::Tensor fake_output =
			discriminator->forward(fake_images.detach())
					.reshape(fake_labels.sizes());
	torch::Tensor d_loss_fake =
			torch::binary_cross_entropy(fake_output, fake_labels);
	d_loss_fake.backward();

	// Both backward() calls above already accumulated gradients;
	// d_loss is just the combined value for reporting.
	torch::Tensor d_loss = d_loss_real + d_loss_fake;
	d_optimizer.step();

	// Train generator.
	generator->zero_grad();
	fake_labels.fill_(1);  // the generator wants its fakes scored as real
	fake_output =
			discriminator->forward(fake_images).reshape(fake_labels.sizes());
	torch::Tensor g_loss =
			torch::binary_cross_entropy(fake_output, fake_labels);
	g_loss.backward();
	g_optimizer.step();
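
For completeness, here is the sanity check I'm thinking of logging right after each forward/reshape pair, to confirm outputs and labels agree even on the final short batch (my own addition, not part of the tutorial):

	// Fail loudly if a reshape ever leaves a size mismatch behind.
	TORCH_CHECK(real_output.sizes() == real_labels.sizes(),
			"real branch shape mismatch: ", real_output.sizes(),
			" vs ", real_labels.sizes());
	TORCH_CHECK(fake_output.sizes() == fake_labels.sizes(),
			"fake branch shape mismatch: ", fake_output.sizes(),
			" vs ", fake_labels.sizes());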