Shape '[1, 255, 3025]' is invalid for input of size 689520

Here is my code, in which I am getting the error:

import torch
import torch.nn as nn

class Darknet(nn.Module):
    def __init__(self, cfgfile):
        super(Darknet, self).__init__()
        self.blocks = parse_cfg(cfgfile)
        self.net_info, self.module_list = create_modules(self.blocks)
        
    def forward(self, x, CUDA):
        modules = self.blocks[1:]
        outputs = {}   #We cache the outputs for the route layer
        
        write = 0
        for i, module in enumerate(modules):        
            module_type = (module["type"])
            
            if module_type == "convolutional" or module_type == "upsample":
                x = self.module_list[i](x)
    
            elif module_type == "route":
                layers = module["layers"]
                layers = [int(a) for a in layers]
    
                if (layers[0]) > 0:
                    layers[0] = layers[0] - i
    
                if len(layers) == 1:
                    x = outputs[i + (layers[0])]
    
                else:
                    if (layers[1]) > 0:
                        layers[1] = layers[1] - i
    
                    map1 = outputs[i + layers[0]]
                    map2 = outputs[i + layers[1]]
                    x = torch.cat((map1, map2), 1)
                
    
            elif  module_type == "shortcut":
                from_ = int(module["from"])
                x = outputs[i-1] + outputs[i+from_]
    
            elif module_type == 'yolo':        
                anchors = self.module_list[i][0].anchors
                #Get the input dimensions
                inp_dim = int (self.net_info["height"])
        
                #Get the number of classes
                num_classes = int (module["classes"])
        
                #Transform 
                x = x.data
                x = predict_transform(x, inp_dim, anchors, num_classes, CUDA)
                if not write:              #if no collector has been initialised.
                    detections = x
                    write = 1
        
                else:       
                    detections = torch.cat((detections, x), 1)
        
            outputs[i] = x
        
        return detections

model = Darknet("cfg/yolov3.cfg")
inp = get_test_input()
pred = model(inp, torch.cuda.is_available())
print (pred)

My script for predict_transform, where the error occurs:

import numpy as np
import torch

def predict_transform(prediction, inp_dim, anchors, num_classes, CUDA = True):

    
    batch_size = prediction.size(0)
    stride =  inp_dim // prediction.size(2)
    grid_size = inp_dim // stride
    bbox_attrs = 5 + num_classes
    num_anchors = len(anchors)
    
    prediction = prediction.view(batch_size, bbox_attrs*num_anchors, grid_size*grid_size)
    prediction = prediction.transpose(1,2).contiguous()
    prediction = prediction.view(batch_size, grid_size*grid_size*num_anchors, bbox_attrs)
    anchors = [(a[0]/stride, a[1]/stride) for a in anchors]

    #Sigmoid the centre_X, centre_Y, and object confidence
    prediction[:,:,0] = torch.sigmoid(prediction[:,:,0])
    prediction[:,:,1] = torch.sigmoid(prediction[:,:,1])
    prediction[:,:,4] = torch.sigmoid(prediction[:,:,4])
    
    #Add the center offsets
    grid = np.arange(grid_size)
    a,b = np.meshgrid(grid, grid)

    x_offset = torch.FloatTensor(a).view(-1,1)
    y_offset = torch.FloatTensor(b).view(-1,1)

    if CUDA:
        x_offset = x_offset.cuda()
        y_offset = y_offset.cuda()

    x_y_offset = torch.cat((x_offset, y_offset), 1).repeat(1,num_anchors).view(-1,2).unsqueeze(0)

    prediction[:,:,:2] += x_y_offset

    #log space transform height and the width
    anchors = torch.FloatTensor(anchors)

    if CUDA:
        anchors = anchors.cuda()

    anchors = anchors.repeat(grid_size*grid_size, 1).unsqueeze(0)
    prediction[:,:,2:4] = torch.exp(prediction[:,:,2:4])*anchors
    
    prediction[:,:,5: 5 + num_classes] = torch.sigmoid((prediction[:,:, 5 : 5 + num_classes]))

    prediction[:,:,:4] *= stride
    
    return prediction
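
As a quick sanity check, predict_transform can be exercised in isolation with a dummy tensor whose spatial size is consistent with inp_dim (a sketch with made-up anchor values; 255 channels = 3 anchors x (5 + 80 classes)):

    dummy = torch.randn(1, 255, 52, 52)           # mimics the finest YOLO layer of a 416 network
    anchors_416 = [(10, 13), (16, 30), (33, 23)]  # example anchor pairs for that scale
    out = predict_transform(dummy, 416, anchors_416, 80, CUDA=False)
    print(out.shape)                              # torch.Size([1, 8112, 85]) = 52*52*3 boxes

When inp_dim matches the feature map (416 // 52 = 8, 416 // 8 = 52) the view succeeds; passing the same 52x52 map with inp_dim=608 reproduces the error below.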

Error message:

 result = self.forward(*input, **kwargs)

  File "F:/Detection/darknet.py", line 219, in forward
    x = predict_transform(x, inp_dim, anchors, num_classes, CUDA)

  File "F:\Detection\util.py", line 56, in predict_transform
    prediction = prediction.view(batch_size, bbox_attrs*num_anchors, grid_size*grid_size)

RuntimeError: shape '[1, 255, 3025]' is invalid for input of size 689520

I'm not familiar with the Darknet model, but your reshape does not work for the passed shapes.
Based on the error message, it looks like prediction contains 689520 values, which could be reshaped to e.g. [1, 255, 2704]. Could you check if your input size, the grid_size, etc. are set to their expected values?
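
One way to do that check is to print the relevant numbers just before the failing view (a debugging sketch, to be pasted into the predict_transform from the question):

    # just before the failing view in predict_transform:
    print(prediction.shape)            # actual tensor, e.g. torch.Size([1, 255, 52, 52])
    print(stride, grid_size)           # values derived from inp_dim
    print(batch_size * bbox_attrs * num_anchors * grid_size * grid_size)  # elements the view requests
    print(prediction.numel())          # elements the tensor actually holds

The view can only succeed when the last two numbers are equal.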

I am trying to check the values of the input parameters. From the data, I have been able to find out the following values for the function predict_transform:
inp_dim = 608
anchors = 9
num_classes = 80

But here lies the problem: I am not able to determine prediction.size(0) for

x = x.data

so I need size(0) of this x… I tried printing it in the function, and for batch_size = prediction.size(0) I get 1.
stride is 11
grid_size is around 55
bbox_attrs is 85
num_anchors = 9

Now, what might the error be?

Edit: I got the value of batch_size by adding print(batch_size) in predict_transform before the line that produces the error; same for stride…
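
The reported numbers already reveal the mismatch. Reconstructing the arithmetic in predict_transform with the values above (assuming the incoming feature map is 52x52, which a 416x416 input produces at the finest YOLO layer):

    inp_dim = 608                       # read from the cfg
    feat = 52                           # prediction.size(2) for a 416x416 input
    stride = inp_dim // feat            # 608 // 52 = 11
    grid_size = inp_dim // stride       # 608 // 11 = 55
    print(255 * grid_size * grid_size)  # 771375 elements requested by the view
    print(255 * feat * feat)            # 689520 elements actually available

So the view asks for a 55x55 grid while the tensor only holds a 52x52 one.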

Just a guess, but if your grid_size were 52, prediction would have the shape [1, 255, 2704], which would perfectly match the input size of 689520.
Could this be the problem?
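
The guess checks out numerically (a quick verification, not part of the thread's code):

    total = 689520              # element count from the error message
    cells = total // 255        # 2704 grid cells (255 = 85 bbox attrs * 3 anchors)
    grid = int(cells ** 0.5)    # 52
    print(grid * 8)             # 416 -> consistent with a 416x416 input at stride 8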

I am sorry I did not get back to you: my code started working after I made some changes in the cfg file (after converting it to text format).
Now it is working. The final shape of my prediction is 1x10647x85.

What did you change in the cfg file? I am facing the same problem.

Can you post your code? Then I might be able to suggest a change.

I am working on a similar project. The problem is, PyTorch keeps giving me an error at

    prediction[:,:,:2] += x_y_offset

RuntimeError: expected type torch.FloatTensor but got torch.cuda.FloatTensor

Edit: I already solved it by adding this line at the top of predict_transform:

    prediction = prediction.to(torch.device("cuda"))

But right now, I am facing the exact same problem as you…
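
A device-agnostic alternative to moving prediction to the GPU at the top is to create the helper tensors on whatever device prediction already lives on (a sketch of the relevant lines inside predict_transform, not the original tutorial code):

    device = prediction.device                       # cpu or cuda, whichever came in
    x_offset = torch.FloatTensor(a).view(-1,1).to(device)
    y_offset = torch.FloatTensor(b).view(-1,1).to(device)
    anchors = torch.FloatTensor(anchors).to(device)  # after the per-stride scaling

This removes the need for the CUDA flag and the explicit .cuda() calls.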

I am stuck with the same problem. Can you please mention the changes that you made in the cfg file?

I changed the height and width in the cfg file and the problem is solved.
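
For anyone looking for the exact spot: width and height live in the [net] section at the top of yolov3.cfg (an illustrative excerpt; the surrounding keys may differ in your copy):

    [net]
    # was width=608, height=608
    batch=1
    subdivisions=1
    width=416
    height=416
    channels=3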


You can see that in
x = predict_transform(x, inp_dim, anchors, num_classes, CUDA)
the second parameter represents the input image dimension. This value is 608 in yolov3.cfg, but in the get_test_input() function the image is resized to 416:
img = cv2.resize(img, (416,416))
so the two conflict. You can change them to be the same; you have two choices: resize the test input to the cfg size, or change the cfg to match the input.

import cv2
import numpy as np
import torch
from torch.autograd import Variable

def get_test_input():
    img = cv2.imread("dog-cycle-car.png")
    img = cv2.resize(img, (416,416))          #Resize to the input dimension
    img_ =  img[:,:,::-1].transpose((2,0,1))  # BGR -> RGB | H x W x C -> C x H x W
    img_ = img_[np.newaxis,:,:,:]/255.0       #Add a batch dimension | Normalise
    img_ = torch.from_numpy(img_).float()     #Convert to a float tensor
    img_ = Variable(img_)                     # Convert to Variable
    return img_

The size of the resized input image in get_test_input() must be the same as the width and height in the cfg file. In this case, it has to be resized to (608, 608) instead of (416, 416). Otherwise, self.net_info["height"] in the forward function has to be 416.
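
To keep the two in sync automatically, the test image can be resized using the dimension read from the parsed cfg instead of a hard-coded number (a sketch reusing the imports above; net_info is the dict created in Darknet.__init__):

    def get_test_input(inp_dim):
        img = cv2.imread("dog-cycle-car.png")
        img = cv2.resize(img, (inp_dim, inp_dim))   # match the cfg's width/height
        img_ = img[:,:,::-1].transpose((2,0,1))     # BGR -> RGB | HWC -> CHW
        img_ = img_[np.newaxis,:,:,:]/255.0         # add batch dim, normalise
        return torch.from_numpy(img_).float()

    model = Darknet("cfg/yolov3.cfg")
    inp = get_test_input(int(model.net_info["height"]))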


Thanks, I solved it the same way.
I changed both the 'width' and 'height' values to 416.


Thanks. It works for me too!