Is there any problem with an entirely binary neural network, that is, one whose input is binary and all of whose parameters are binary? What would the substitute for gradient descent be in such a case, to keep everything in binary?

Hello vainaijr!

There is no conceptual problem with such a network, although I don’t see what advantage one would have (other than, perhaps, economy of storage).

I suppose one might view a binary network as being a better approximation to some particular biological or physical system.

There’s the rub. This is a binary (special case of an integer) optimization problem. (See *integer programming.*) This is known to be hard. There are various (inexact) optimization algorithms you could use – branch-and-bound, simulated annealing – but training will be expensive, and likely prohibitively so for networks of any significant size.
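For concreteness, here is a rough sketch of what simulated annealing over binary weights might look like for a toy model – a single "neuron" that scores an input by counting matching bits (the XNOR-popcount trick used in binary-network papers) with a 0/1 misclassification loss. All names and hyperparameters here are illustrative, not from any particular library:

```python
import math
import random

def xnor_popcount(w, x):
    # Binary "dot product": count bit positions where w and x agree (XNOR),
    # then map the count into a signed score in [-n, n].
    matches = sum(1 for wi, xi in zip(w, x) if wi == xi)
    return 2 * matches - len(w)

def predict(w, x):
    # A single binary neuron: threshold the XNOR-popcount score.
    return 1 if xnor_popcount(w, x) >= 0 else 0

def loss(w, data):
    # 0/1 loss: number of misclassified examples.
    return sum(1 for x, y in data if predict(w, x) != y)

def anneal(data, n_bits, steps=2000, t0=2.0, seed=0):
    # Simulated annealing: propose single-bit flips, always accept
    # improvements, accept worsening moves with a temperature-dependent
    # probability that decays to zero over the run.
    rng = random.Random(seed)
    w = [rng.randint(0, 1) for _ in range(n_bits)]
    best, best_loss = w[:], loss(w, data)
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9  # linear cooling schedule
        cand = w[:]
        cand[rng.randrange(n_bits)] ^= 1    # flip one randomly chosen bit
        d = loss(cand, data) - loss(w, data)
        if d <= 0 or rng.random() < math.exp(-d / t):
            w = cand
            if loss(w, data) < best_loss:
                best, best_loss = w[:], loss(w, data)
    return best, best_loss
```

Even on this 4-bit toy problem each proposal requires a full pass over the data, and the number of candidate weight vectors grows as 2^n – which is the cost blow-up described above.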

Differentiability and gradient descent are your friend.

Good luck.

K. Frank

hello, thanks for your reply. But is there a substitute for the convolutional neural network in the case of a binary neural network? And if I do not want filters as parameters in my neural network that keep updating for every image, has some work been done on this?

I think the whole notion of gradient descent would be simplified by the use of a logic gate such as NAND, XOR, or XNOR, because in the case of a binary neural network one only needs to take care of flipping bits from 0 to 1 or 1 to 0, or letting them stay as they are, that is, 1 to 1 or 0 to 0.
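For example, a single XOR can encode that flip-or-keep choice: XOR-ing a weight with a mask bit of 1 flips it, and with 0 leaves it unchanged. A minimal sketch (the function and names are just illustrative):

```python
def apply_update(weights, flip_mask):
    """Apply a binary 'update step' expressed as a bit mask.

    A 1 in flip_mask flips the corresponding weight (0 -> 1 or 1 -> 0,
    via XOR); a 0 leaves it as it was (1 -> 1 or 0 -> 0).
    """
    return [w ^ m for w, m in zip(weights, flip_mask)]
```

The hard part, of course, is deciding which bits the mask should flip at each step – that choice is what gradient descent provides in the continuous case and what has no direct binary analogue.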