RuntimeError: expected scalar type Double but found Float

Hi,
I got an error like in the title and it sounds easy (I assume I just need to change the type of the loaded data), but I'm stuck… I changed the types, but the error still appears.

In my main function:

inputs = load_images(glob.glob(args.input))
outputs = predict(model, inputs)


def load_images(image_files):
    loaded_images = []
    for file in image_files:
        img = Image.open(file)
        new_size = (640, 480)
        img = img.resize(new_size)
        x = np.clip(np.asarray(img) / 255, 0, 1).astype(np.float64)
        loaded_images.append(torch.DoubleTensor(torch.from_numpy(x)))
    return np.stack(loaded_images, axis=0)

def predict(model, images, minDepth=10, maxDepth=1000):
    # Support multiple RGBs, one RGB image, even grayscale
    if len(images.shape) < 3:
        images = np.stack((images, images, images), axis=2)
    if len(images.shape) < 4:
        images = images.reshape((1, images.shape[0], images.shape[1], images.shape[2]))
    # Compute predictions
    images_tensor = []
    for i in images:
        i = torch.DoubleTensor(i)
        i = i.permute(2, 0, 1)
        images_tensor.append(i)
    images_tensor = torch.stack(images_tensor)
    predictions = model(images_tensor.double())
    return (
        np.clip(DepthNorm(predictions, maxDepth=maxDepth), minDepth, maxDepth)
        / maxDepth
    )

Ideas?

This error might be raised if your model parameters are in float32 while the input is in float64.
Based on your code snippet, you are indeed passing the input as DoubleTensors (float64), so you would need to make sure the model parameters have the same dtype via model.double().
Alternatively, you could also cast the input to FloatTensors via input = input.float().
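As a minimal sketch (assuming model and images_tensor as defined in your predict function), either of these should resolve the mismatch:

# Option 1: convert the model parameters to float64 to match the double input
model = model.double()
predictions = model(images_tensor.double())

# Option 2: keep the model in float32 and cast the input down instead
predictions = model(images_tensor.float())

Note that float32 is the default dtype in PyTorch, so casting the input to float is usually the simpler (and faster) option.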
