Thanks for the code. Using the model definition you provided and trying a few input shapes, the code runs fine and produces the desired output batch size:
import torch

num_channels = 52
depth_1 = 128
kernel_size_1 = 7
stride_size = 3
depth_2 = 128
kernel_size_2 = 3
num_hidden = 512

model = CharCNN()
x = torch.randn(64, 52, 300)  # (batch, channels, sequence length)
out = model(x)
print(out.shape)
> torch.Size([64, 11])
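For anyone reading along without the class definition from earlier in the thread, here is a minimal sketch of a CharCNN that is consistent with the hyperparameters above and reproduces the (64, 11) output. The exact layer stack (pooling sizes, number of classes, ReLU placement) is an assumption on my part, not necessarily the original architecture:

```python
import torch
import torch.nn as nn

num_channels = 52   # input character-embedding channels
depth_1 = 128
kernel_size_1 = 7
stride_size = 3
depth_2 = 128
kernel_size_2 = 3
num_hidden = 512
num_classes = 11    # assumed from the printed output shape

class CharCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            # (N, 52, 300) -> (N, 128, 98): floor((300 - 7) / 3) + 1 = 98
            nn.Conv1d(num_channels, depth_1, kernel_size_1, stride=stride_size),
            nn.ReLU(),
            # (N, 128, 98) -> (N, 128, 32)
            nn.MaxPool1d(3),
            # (N, 128, 32) -> (N, 128, 10)
            nn.Conv1d(depth_1, depth_2, kernel_size_2, stride=stride_size),
            nn.ReLU(),
            # (N, 128, 10) -> (N, 128, 3)
            nn.MaxPool1d(3),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),                        # (N, 128 * 3) = (N, 384)
            nn.Linear(depth_2 * 3, num_hidden),  # 384 -> 512
            nn.ReLU(),
            nn.Linear(num_hidden, num_classes),  # 512 -> 11
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = CharCNN()
out = model(torch.randn(64, 52, 300))
print(out.shape)  # torch.Size([64, 11])
```

The shape comments follow the usual Conv1d/MaxPool1d length formula, floor((L_in - kernel) / stride) + 1 with no padding; if your actual pooling or linear-layer sizes differ, the flattened dimension (here 128 * 3 = 384) is the one to recompute.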