You are concatenating the tensor embedding[-1, :, :] of shape [32, 768] (you select only the last element of the first dimension) with batch.feat.unsqueeze(1) of shape [32, 1] (I assume batch.feat has shape [32]) along the second dimension (dim=1).

Alternatively, keep your embedding tensor as a 3D tensor and reshape batch.feat into a 3D tensor of shape [1, 32, 1]. Since your embedding tensor has shape [4, 32, 768], you need to repeat the feature tensor along the first dimension so the two tensors match in all dimensions except the one you concatenate over. Then you can concatenate them along the third dimension (dim=2, your embedding dimension), which yields a tensor of shape [4, 32, 769].
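Both approaches can be sketched as follows. The shapes and the name `feat` are assumptions based on your description; adjust them to your actual model:

```python
import torch

# Assumed shapes from the question (hypothetical example data)
embedding = torch.randn(4, 32, 768)  # e.g. [num_layers, batch_size, hidden_dim]
feat = torch.randn(32)               # stand-in for batch.feat, shape [32]

# Option 1: use only the last slice and concatenate in 2D
last = embedding[-1, :, :]                            # [32, 768]
out_2d = torch.cat([last, feat.unsqueeze(1)], dim=1)  # [32, 769]

# Option 2: keep the full 3D embedding and broadcast the feature
feat_3d = feat.reshape(1, 32, 1).repeat(4, 1, 1)  # [1, 32, 1] -> [4, 32, 1]
out_3d = torch.cat([embedding, feat_3d], dim=2)   # [4, 32, 769]

print(out_2d.shape)  # torch.Size([32, 769])
print(out_3d.shape)  # torch.Size([4, 32, 769])
```

Instead of `repeat`, you could also use `expand(4, -1, -1)`, which creates a view without copying memory; `torch.cat` accepts it either way.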