While analyzing inferMixedNet unit test with OpenCL backend I noticed something strange/unexpected and I’d appreciate some explanation.
The generated low-level IR looks as follows (code part only):
code {
0 %tr_res = allocactivation { Ty: float<2 x 16 x 16 x 3>} // size: 6144 // Users: @in 2, @out 5, @out 1
1 %tr = transpose @out %tr_res, @in %var { Shuffle: [0, 2, 3, 1]}
2 %tr_res2 = tensorview @in %tr_res { Ty: float<2 x 768>, Offsets: [0, 0, 0, 0]} // Users: @in 4
3 %fc_add_bias_res = allocactivation { Ty: float<2 x 16>} // size: 128 // Users: @in 6, @out 4, @in 10, @out 9, @out 11, @in 9, @in 8, @out 6
4 %fc_dot = matmul @out %fc_add_bias_res, @in %tr_res2, @in %weights
5 %dealloc = deallocactivation @out %tr_res // size: 6144
6 %fc_add_bias = batchedadd @out %fc_add_bias_res, @in %fc_add_bias_res, @in %bias
7 %tanh_res = allocactivation { Ty: float<2 x 16>} // size: 128 // Users: @in 13, @out 10, @out 14, @in 10, @out 8
8 %tanh = tanh @out %tanh_res, @in %fc_add_bias_res
9 %sig = sigmoid @out %fc_add_bias_res, @in %fc_add_bias_res
10 %add = elementadd @out %tanh_res, @in %tanh_res, @in %fc_add_bias_res
11 %dealloc3 = deallocactivation @out %fc_add_bias_res // size: 128
12 %fc_dot1_res = allocactivation { Ty: float<2 x 16>} // size: 128 // Users: @in 16, @out 15, @out 17, @in 15, @out 13
13 %fc_dot1 = matmul @out %fc_dot1_res, @in %tanh_res, @in %weights1
14 %dealloc4 = deallocactivation @out %tanh_res // size: 128
15 %fc_add_bias1 = batchedadd @out %fc_dot1_res, @in %fc_dot1_res, @in %bias1
16 %SM = softmax @out %ret, @in %fc_dot1_res
17 %dealloc7 = deallocactivation @out %fc_dot1_res // size: 128
}
I have trouble understanding what the TensorView operation (at line #2 of the IR above) is doing.
In the OpenCL backend sources it is only handled in allocateMemory, because during the execute() loop it is a no-op. Also, to be honest, the code in allocateMemory is not clear to me when it comes to this operation.
From what I see in the IR, this operation behaves like a reshape, only changing the size of its input. However, this raised even more questions. Why not just use a Reshape instruction? What is even more confusing is that the OCL backend's execute() does not handle Reshape at all. Actually, there does not even seem to be a ReshapeInst (at least I couldn't spot one in the ClassGen files, only a node). To add to this, the dot file for this network has a Reshape block in the place where the TensorView should be.
I noticed the same behavior even in cases where createReshape is written explicitly, e.g. in inferComplexNet1: instead of a Reshape instruction, I see TensorView instructions in the IR.
To summarize, here is what I'd like to understand:

What does the TensorView instruction do?

What is the difference between TensorView and Reshape?

How can I get the output size of a TensorView? In other words, how can I tell to what dimensions I should reshape my tensor? Should it be something like TV->getTy()->getDims()?

The TensorView operation has only one operand, @in, and no @out, yet subsequent instructions (line #4 in the example above) use it as an input. How can I get the name of its output? Normally I'd do something like TV->getDest()->getName(), but here that's not possible. Any suggestions?
Thanks for your support!