How do I handle many CNN outputs and loss functions?

I am trying to make a sudoku solver using CNNs, so I need to output 81 values (between 1 and 9; 0 is empty) and calculate 81 losses. How would I handle that? I know you can easily do this for two or three outputs, but is there a more convenient way than the one outlined here: A model with multiple outputs? Also, what would be the best cost function to use? I am using a ResNet-based architecture.

Let’s first begin with representation. Our board is 9x9, so it makes sense to use that as our height and width. But then we need to consider how to represent the value of each tile. We could just use the raw value (1.0, 2.0, …, 9.0), but this is problematic: it suggests that numerically close values are related, yet 8.0 and 9.0 are conceptually no closer to each other than 1.0 and 9.0 in the game of sudoku. So we want to avoid that sort of representation, as it makes the problem harder for the network.

So instead, we probably want a one-hot encoding: for each tile we’ll have a 9x1 vector with a single 1.0 somewhere. This representation is easier for the model to work with. We can use it for the output as well and treat the problem much like image classification. The model takes a 9x9x9 (values x height x width) input and produces a 9x9x9 output, where, in the context of a convolution, the one-hot values are our channels. I would suggest treating the model’s outputs as probabilistic logits; that is, we’ll follow up with a softmax operation to determine which value each tile should hold.
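To make that concrete, here is a minimal PyTorch sketch (the shapes and random tensors are just placeholders for your actual ResNet and data). The nice part is that `nn.CrossEntropyLoss` already handles the `(N, C, H, W)` case, so you get all 81 per-tile losses in one call instead of 81 output heads:

```python
import torch
import torch.nn as nn

# Treat each of the 81 tiles as a 9-way classification problem.
# Any CNN (e.g. a ResNet trunk) that maps a (N, 9, 9, 9) one-hot board
# to (N, 9, 9, 9) logits works here; random tensors stand in for it.
batch, classes, h, w = 4, 9, 9, 9
logits = torch.randn(batch, classes, h, w)          # model output (raw logits)
targets = torch.randint(0, classes, (batch, h, w))  # solution as digit-1 indices, 0..8

# CrossEntropyLoss accepts (N, C, H, W) logits with (N, H, W) class-index
# targets and averages over every spatial position -- all 81 tiles at once.
criterion = nn.CrossEntropyLoss()
loss = criterion(logits, targets)   # single scalar covering all 81 tile losses

probs = logits.softmax(dim=1)       # per-tile distribution over the 9 digits
pred = probs.argmax(dim=1)          # predicted digit index per tile, shape (N, 9, 9)
```

Note that `CrossEntropyLoss` applies the softmax internally, so the explicit `softmax` here is only for reading off predictions at inference time.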

Now for training: this is a genuinely hard problem in the case of sudoku. My first guess would be a loss that pushes the values in each column, row, and 3x3 square to sum to 45. But that is a heavily constrained problem, so I don’t really know how the model will behave in that case. This problem may be better suited to reinforcement learning, where the model is simply tasked with winning the game. I would suggest reading more about that line of work.
