UserWarning: tensor1/other is not broadcastable to self, but they have the same number of elements. Falling back to deprecated pointwise behavior

I am getting these warnings when using the torch.legacy.optim L-BFGS and Adam optimizers:

Running optimization with ADAM:

/usr/local/lib/python2.7/dist-packages/torch/legacy/optim/adam.py:65: UserWarning: tensor1 is not broadcastable to self, but they have the same number of elements.  Falling back to deprecated pointwise behavior.
  x.addcdiv_(-stepSize, state['m'], state['denom'])

Running optimization with L-BFGS:

('<optim.lbfgs>', 'creating recyclable direction/step/history buffers')
/usr/local/lib/python2.7/dist-packages/torch/legacy/optim/lbfgs.py:197: UserWarning: other is not broadcastable to self, but they have the same number of elements.  Falling back to deprecated pointwise behavior.
  x.add_(t, d)
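
If I understand the message correctly, the warning fires whenever an in-place op like add_ gets a tensor whose shape does not broadcast against self but happens to hold the same number of elements. A minimal sketch of what I mean (standalone, not my actual code, assuming the 0.2-era add_(value, tensor) signature):

import torch

x = torch.zeros(3, 4)  # self has shape (3, 4)
d = torch.zeros(12)    # same 12 elements, but shape (12,) does not broadcast to (3, 4)
x.add_(1.0, d)         # emits the same UserWarning, then adds pointwise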

Are these warnings indicative of a problem with either optimizer? A problem with the inputs I am feeding them? Or are these warnings safe to ignore?

I am trying to diagnose a problem with my code, and would like to know whether either of these warning messages could be causing my issues, or is at least related to them.

I was using the latest pip version of PyTorch when I got these warnings.

Edit:

This is my feval function:

num_calls = [0]
def feval(x):
  num_calls[0] += 1
  # Forward pass populates the loss modules; backward pass gives the gradient.
  net.updateOutput(x.cuda())
  grad = net.updateGradInput(x.cuda(), dy.cuda())
  loss = 0
  for mod in content_losses:
    loss = loss + mod.loss
  for mod in style_losses:
    loss = loss + mod.loss
  # The gradient is flattened to 1-D; x itself keeps its original shape.
  return loss, grad.view(grad.nelement())
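
Since the warning talks about shapes, here is a quick sanity check I can run (a sketch; img is the tensor I pass to the optimizer below):

# Compare the shape the optimizer updates in place (img) with the
# shape of the gradient that feval returns.
loss, grad = feval(img)
print(img.size(), grad.size())  # e.g. a 3-D image shape vs. a 1-D gradient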

And this is how I run it:

optim_state = None
if params.optimizer == 'lbfgs':
  optim_state = {
    "maxIter": params.num_iterations,
    "verbose": True,
    "tolX": -1,
    "tolFun": -1,
  }
  if params.lbfgs_num_correction > 0:
    optim_state["nCorrection"] = params.lbfgs_num_correction
elif params.optimizer == 'adam':
  optim_state = {
    "learningRate": params.learning_rate,
  }

# Run optimization.
if params.optimizer == 'lbfgs':
  print("Running optimization with L-BFGS")
  # lbfgs runs all maxIter iterations internally in a single call.
  x, losses = optim.lbfgs(feval, img, optim_state)
elif params.optimizer == 'adam':
  print("Running optimization with ADAM")
  # adam takes one step per call; optim_state carries its buffers between calls.
  for t in xrange(params.num_iterations):
    x, losses = optim.adam(feval, img, optim_state)

Using the latest GitHub version of PyTorch, I get this error instead:

Traceback (most recent call last):
  File "test2.py", line 124, in <module>
    net.updateOutput(content_image_caffe)
  File "/usr/local/lib/python2.7/dist-packages/torch/legacy/nn/Sequential.py", line 36, in updateOutput
    currentOutput = module.updateOutput(currentOutput)
  File "/usr/local/lib/python2.7/dist-packages/torch/legacy/nn/SpatialConvolution.py", line 84, in updateOutput
    self._viewWeight()
  File "/usr/local/lib/python2.7/dist-packages/torch/legacy/nn/SpatialConvolution.py", line 75, in _viewWeight
    self.gradWeight = self.gradWeight.view(self.nOutputPlane, self.nInputPlane * self.kH * self.kW)
RuntimeError: invalid argument 2: size '[64 x 27]' is invalid for input with 0 elements at /home/ubuntu/pytorch/aten/src/TH/THStorage.c:41
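
If it helps, the failing call seems to boil down to reshaping an empty tensor; a minimal sketch (not my code) that reproduces the same message:

import torch

w = torch.Tensor()  # an empty tensor with 0 elements, like gradWeight here
w.view(64, 27)      # RuntimeError: size '[64 x 27]' is invalid for input with 0 elements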