What does model-compiler do and when is it used?
Do you know how kdim is defined and what it stands for?
It seems to be another placeholder for the kernel dimensions, used to compare its shape against the weight tensor in the transposed convolution?
static void assertConvTransposeDims(NodeValue input, NodeValue filter,
                                    NodeValue bias,
                                    llvm::ArrayRef<unsigned_t> kernels,
                                    llvm::ArrayRef<unsigned_t> strides,
                                    llvm::ArrayRef<unsigned_t> pads,
                                    unsigned_t group) {
  ShapeNHWC idim = ShapeNHWC(input.dims());
  (void)idim;
  ShapeHW kdim(kernels);
  (void)kdim;
  assert(idim.c % group == 0 && "channels number must be divisible by groups");
  // NOTE: here the N in NHWC is abnormal because it is the number of filters
  // (and therefore the number of output channels of the conv) and not the
  // batch size. The rest of the dimensions are representative of the input
  // dimensions to the convolution.
  ShapeNHWC filterDims(filter.dims());
  (void)filterDims;
  assert(filterDims.n % group == 0 && filterDims.h == kdim.height &&
         filterDims.w == kdim.width && filterDims.c == idim.c / group &&
         "Invalid filter dims");
  assert(bias.getType()->size() == filterDims.n && "Invalid bias size");
}
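To make the checks easier to reason about, here is the same logic mirrored in plain Python (a sketch; the function name and tuple-based argument layout are mine, not Glow's API):

```python
# Sketch of Glow's assertConvTransposeDims in plain Python (illustrative only).
def assert_conv_transpose_dims(input_nhwc, filter_nhwc, bias_len, kernels, group):
    """input_nhwc: (N, H, W, C); filter_nhwc: (num_filters, kH, kW, C_per_group)."""
    n, h, w, c = input_nhwc
    fn, fh, fw, fc = filter_nhwc
    kh, kw = kernels
    assert c % group == 0, "channels number must be divisible by groups"
    # NOTE: the filter's "N" is the number of filters (output channels),
    # not the batch size.
    assert (fn % group == 0 and fh == kh and fw == kw and fc == c // group), \
        "Invalid filter dims"
    assert bias_len == fn, "Invalid bias size"

# Example: 32 input channels, 16 filters, 3x3 kernel, groups=2 passes the check.
assert_conv_transpose_dims((1, 8, 8, 32), (16, 3, 3, 16), 16, (3, 3), 2)
```

So for the filter to be accepted, its channel dimension must equal the input channels divided by `group`, and its filter count must be divisible by `group`.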
It looks like ConvTranspose2d must have out_channels=in_channels*group to pass the assertion.
in_channels and out_channels should both be divisible by groups.
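For comparison, PyTorch stores nn.ConvTranspose2d weights as (in_channels, out_channels // groups, kH, kW), which is why both channel counts must be divisible by groups. A small plain-Python helper (illustrative, not a torch API) makes the constraint explicit:

```python
# Illustrative helper: compute the weight shape PyTorch's nn.ConvTranspose2d
# uses, enforcing the divisibility constraints on both channel counts.
def conv_transpose2d_weight_shape(in_channels, out_channels, kernel_size, groups=1):
    assert in_channels % groups == 0, "in_channels must be divisible by groups"
    assert out_channels % groups == 0, "out_channels must be divisible by groups"
    kh, kw = kernel_size
    return (in_channels, out_channels // groups, kh, kw)

print(conv_transpose2d_weight_shape(32, 64, (3, 3), groups=2))  # (32, 32, 3, 3)
```

Note the layout difference from Glow's NHWC filter convention above, which is one place an exporter mismatch could trip the assertion.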
If your code is working fine in PyTorch, I’m a bit confused as to whether Glow adds some other checks to transposed convolutions.
Did you narrow it down to an nn.ConvTranspose2d layer that creates the issue in the export?
That’s what I’m wondering. If your model works fine in your PyTorch code, I would assume that it should also pass all Glow checks.
Are you seeing this error for all settings other than groups=1?
I’ve tested with group=2 and group=32 – both fail on the same assertion in assertConvTransposeDims.
If I comment out the assertion, it fails somewhere later.