I'm trying multi-class classification without one-hot encoding (just an experiment).
At the output I do the following:
    def BinRepresent(bins):
        b_size, _, n_bits = np.shape(bins)
        out = np.zeros(b_size)
        for n in range(b_size):                                  # loop over the batch
            for i in range(int(np.ceil(np.log2(NUM_CLASSES)))):  # log2, not log: 4 bits for 10 classes
                out[n] = out[n] + 2**i * bins[n, 0, i]           # weight each bit by its power of two
        return out
The input `bins` is 4-bit (NUM_CLASSES == 10), produced by a fully-connected layer with a sigmoid, which aims to push every bit towards zero or one. The powers of two assemble the bits into the binary code of the class number. The resulting `out` is fed into the loss function. When I run a test training, learning is very slow, of course depending on the hyper-parameter settings.
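To illustrate the encoding I mean (a minimal numpy sketch; `class_to_bits` and `bits_to_class` are hypothetical helper names, assuming NUM_CLASSES == 10):

```python
import numpy as np

NUM_CLASSES = 10
N_BITS = int(np.ceil(np.log2(NUM_CLASSES)))  # 4 bits for 10 classes

def class_to_bits(labels):
    """Encode integer class labels as binary bit vectors (LSB first)."""
    labels = np.asarray(labels)
    return (labels[:, None] >> np.arange(N_BITS)) & 1

def bits_to_class(bits):
    """Decode bit vectors back to class numbers via powers of two."""
    return bits @ (2 ** np.arange(N_BITS))

labels = np.array([0, 3, 9])
bits = class_to_bits(labels)   # e.g. 3 -> [1, 1, 0, 0]
print(bits_to_class(bits))     # -> [0 3 9]
```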
So I reconsidered the output layer: because this approach feeds back into all 4 bits through a single loss value, there should instead be four losses, one per bit. But now I've hit the issue of how to construct such multiple losses.
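What I have in mind (a minimal numpy sketch, not actual training code; `per_bit_bce` and all numbers are hypothetical) is to treat each bit as its own binary classification with a separate sigmoid cross-entropy:

```python
import numpy as np

N_BITS = 4  # assumption: 4 bits for NUM_CLASSES == 10

def per_bit_bce(sigmoid_out, target_bits, eps=1e-7):
    """One binary cross-entropy per bit; inputs have shape (batch, N_BITS).
    Returns N_BITS separate loss values, each averaged over the batch."""
    p = np.clip(sigmoid_out, eps, 1 - eps)
    losses = -(target_bits * np.log(p) + (1 - target_bits) * np.log(1 - p))
    return losses.mean(axis=0)  # one loss per bit

# hypothetical batch of 2, target bit patterns for classes 3 and 9 (LSB first)
preds = np.array([[0.9, 0.8, 0.1, 0.2],
                  [0.7, 0.3, 0.2, 0.6]])
targets = np.array([[1, 1, 0, 0],
                    [1, 0, 0, 1]])
bit_losses = per_bit_bce(preds, targets)  # four values, one per bit
total = bit_losses.sum()  # scalar for backprop; each bit keeps its own gradient path
```

Summing the four per-bit losses still gives one scalar to optimize, but each bit now receives its own gradient instead of all bits sharing a single loss value.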
Can anyone advise me on this?