I don't know why my network's accuracy improved… (proposed method)

I've come up with a method that increases the classification accuracy of a convolutional neural network.

The idea was intuitive, and when I tested it, the network's accuracy did improve.

The proposed method is essentially a new loss function.

I wrote a conference paper about it, but it doesn't explain why the accuracy improved: the paper only contains the method, the experiments, and the results, with no analysis of the reason.

Now I want to write an SCI journal paper, but as you all know, if I want to propose a method in an SCI paper, I need to explain why it works.

I'm new to deep learning, so which papers could help me find the reason for the accuracy improvement? Most famous papers propose either new operations (e.g., the receptive field of a convolution layer) or new structures (block structures like ResNet, SENet, etc.).

So my question is,

Can you recommend papers that study how a network converges to better-quality weight parameters, so I can use them to figure out why my method improves accuracy?

Well, if you have no real idea why it works, the best you can do is try to plot the loss surface. There are papers showing, for example, that DenseNet's loss surface is close to convex with a single local minimum, and related work shows that residual connections make loss surfaces smoother and more convex. You can also plot the manifold of the learned features to show that, with your loss, the classes are better clustered than with other losses.
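To make the loss-surface idea concrete, here is a minimal sketch of the simplest version of that technique: evaluating the loss along the straight line between two weight vectors (e.g., a random initialization and a trained solution) and plotting loss vs. interpolation coefficient. The toy dataset and the logistic-regression "model" below are stand-ins I made up for illustration; for a real CNN you would interpolate between two checkpoints of your network's parameters and evaluate your own loss.

```python
import numpy as np

# Toy 1-D loss-landscape slice: evaluate the loss along the line
#   theta(alpha) = (1 - alpha) * theta_a + alpha * theta_b
# between two weight vectors. Here theta_a plays the role of an
# initialization and theta_b the role of a trained solution.

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))            # toy inputs (stand-in for real data)
true_w = rng.normal(size=5)
y = (X @ true_w > 0).astype(float)       # toy binary labels

def loss(w):
    """Binary cross-entropy of a logistic model with weights w."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    p = np.clip(p, 1e-7, 1 - 1e-7)       # avoid log(0)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

theta_a = np.zeros(5)                    # "initial" weights
theta_b = true_w                         # "trained" weights

alphas = np.linspace(0.0, 1.0, 11)
losses = [loss((1 - a) * theta_a + a * theta_b) for a in alphas]
for a, l in zip(alphas, losses):
    print(f"alpha={a:.1f}  loss={l:.4f}")
```

If the resulting curve is smooth and monotonically decreasing, that is (weak) evidence of a well-behaved landscape between the two points; comparing the curves obtained with your loss vs. a baseline loss is one simple way to start building the "why" story.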

Anyway, I'd guess that when you try a new loss you usually have a reason in mind. I don't think you'd create a new loss just by applying random operators, like "hey, I'm going to take log(l1^2 / norm(output))". You probably had some insight into why it should work better, right?
