Due to hardware computing requirements, the bias terms may be difficult to fit into my model on hardware. Can anyone tell me how to train a deep learning model without biases, or just force them to be 0 at each epoch? From my own searching it seems the bias may have little effect if it is left out of training, but I'm not sure. When I tried using only the weights from the training result, the prediction accuracy was pretty bad, so does the bias really have such a big impact on the final prediction result? I am using the LeNet-5 model, and my inputs are one-dimensional and thousands of samples long.
The bias usually helps your model train. In some cases it can be dropped (e.g. in DCGAN, where the batch-norm layers make the conv biases redundant).
Usually the biases take very little memory compared to your weights. Are you sure you'll save that much by removing them?
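To put numbers on that point, here is a quick back-of-the-envelope count for the classic 2-D LeNet-5 (illustration only; the 1-D variant in the question will have different counts, but the ratio is similarly tiny):

```python
# Weight vs. bias parameter counts for the classic 2-D LeNet-5.
layers = [
    # (name, weight_count, bias_count)
    ("conv1", 6 * 1 * 5 * 5,  6),    # 6 filters, 5x5, 1 input channel
    ("conv2", 16 * 6 * 5 * 5, 16),   # 16 filters, 5x5, 6 input channels
    ("fc1",   400 * 120,      120),
    ("fc2",   120 * 84,       84),
    ("fc3",   84 * 10,        10),
]
weights = sum(w for _, w, _ in layers)
biases = sum(b for _, _, b in layers)
print(weights, biases)                              # 61470 236
print(f"{100 * biases / (weights + biases):.2f}%")  # 0.38%
```

So the biases are well under 1% of the parameters here; the savings on most hardware would be negligible.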
Thanks for your reply.
I don't mean dropping the bias after the model has been trained with it. I mean having no bias at all during training, so that backpropagation only updates the weights rather than the weights together with the biases. I know the bias takes little memory, but sometimes the circuit layout or other implementation constraints are just simpler without it. I'm not sure about this, but I do have one example in which only the weights are updated, there is no bias, and it works well; I think just removing the bias from an already-trained model is what causes problems. There is one paper saying that deep learning may not need biases because the weights are already enough, but I just can't find that paper now. I will keep looking for it.
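If you are on PyTorch (an assumption; other frameworks have a similar switch), you can disable the bias at construction time with `bias=False`, so no bias parameter exists and backprop only ever touches the weights. A minimal sketch of a hypothetical 1-D LeNet-5-style model, with the input length of 1000 chosen arbitrarily:

```python
import torch
import torch.nn as nn

class LeNet5NoBias(nn.Module):
    """1-D LeNet-5-style network with every bias disabled at construction."""
    def __init__(self, in_len=1000, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 6, kernel_size=5, bias=False),
            nn.Tanh(),
            nn.AvgPool1d(2),
            nn.Conv1d(6, 16, kernel_size=5, bias=False),
            nn.Tanh(),
            nn.AvgPool1d(2),
        )
        # Length after two (conv k=5, pool /2) stages.
        feat_len = ((in_len - 4) // 2 - 4) // 2
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * feat_len, 120, bias=False),
            nn.Tanh(),
            nn.Linear(120, 84, bias=False),
            nn.Tanh(),
            nn.Linear(84, n_classes, bias=False),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = LeNet5NoBias(in_len=1000)
# No parameter is a bias, so gradients only flow to weights.
print(all("bias" not in n for n, _ in model.named_parameters()))  # True
```

This is cleaner than zeroing the biases at each epoch, since the parameters simply never exist and the optimizer never sees them.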