Network using Dropout giving different results with eval() than with train()

I am seeing unexpected changes in my network's results when I switch between model.train() and model.eval(). The model gives good results in train() mode (~88% accuracy on test data), but when I switch to model.eval() the accuracy drops to ~55% on the same test data.
The network predicts one of two outcomes, A or B. The architecture is: 278 inputs, 2 hidden layers of 90 neurons each, and 2 outputs. Optimiser: SGD. Activation: Tanh. Dropout is used with default arguments (p=0.5).
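For reference, here is a minimal sketch of the model as described (class and layer names are my own, not taken from my actual code):

```python
import torch
import torch.nn as nn

# Sketch of the architecture described above:
# 278 inputs -> 90 -> 90 -> 2, Tanh activations, nn.Dropout with defaults (p=0.5).
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.model = nn.Sequential(
            nn.Linear(278, 90),
            nn.Tanh(),
            nn.Dropout(),        # p=0.5 by default
            nn.Linear(90, 90),
            nn.Tanh(),
            nn.Dropout(),
            nn.Linear(90, 2),
        )

    def forward(self, x):
        return self.model(x)

net = Net()
out = net(torch.randn(4, 278))
print(out.shape)  # torch.Size([4, 2])
```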

Here is the breakdown by outcome, A and B:

with model.train():
percentage correct predicting A: 91%
percentage correct predicting B: 79%

with model.eval():
percentage correct predicting A: 42%
percentage correct predicting B: 98%

There is a large swing towards predicting B rather than A when eval() mode is turned on.

34 of the 278 inputs are binary (0 or 1), and they behave like a one-hot encoding: exactly one of them is 1 at a time, alongside the other inputs of varying value. Could inputs that are 0 most of the time interact badly with dropout and cause these changes in predictions? When dropout is not used, this does not occur: the results in eval mode are the same as in training mode.
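My understanding of the train/eval difference, which may be relevant here, is that nn.Dropout zeroes units and rescales the survivors by 1/(1-p) in training mode, and is a pure identity in eval mode. A small sketch illustrating this (with default p=0.5):

```python
import torch
import torch.nn as nn

drop = nn.Dropout()          # p=0.5 by default
x = torch.ones(1, 8)

drop.train()
# Surviving units are scaled by 1/(1-p) = 2; the rest are zeroed,
# so each element of the output is either 0.0 or 2.0.
print(drop(x))

drop.eval()
# Identity in eval mode: output equals the input, no masking or scaling.
print(drop(x))
```

So at eval time every unit is active but unscaled, which changes the effective activation statistics the later layers see compared to training, and I wonder if the sparse one-hot inputs amplify that mismatch.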

Any help would be appreciated.