How to convert coordinates to tensors?

Hi, I am trying to develop a lane detector using PyTorch. Basically, I’m reading the video frame by frame with cv2, finding edges with the Canny edge detector, and then computing the lane coordinates with an average-lane algorithm.

These coordinates look like [[x1, y1, x2, y2], [x3, y3, x4, y4]], and my .csv dataset looks like:

image-0.jpg,"[[216, 312, 1458, 615], [315, 314, 465, 165]]"
image-1.jpg,"[[165, 951, 468, 1654], [465, 654, 416, 654]]"
...

*(I’m saving the video frame by frame as images so I can label the road lanes. That’s why my dataset has a file name for every frame: image-0.jpg, image-1.jpg, image-2.jpg, etc.)

*([x1, y1, x2, y2] is the left road lane, [x3, y3, x4, y4] is the right road lane)

But in the __getitem__ method of my dataset class, I’m getting this error:

    y_label = torch.tensor(int(self.annotations.iloc[index, 1]))
ValueError: invalid literal for int() with base 10: '[[449, 576, 353, 600], [696, 576, 722, 600]]'

How can I fix this code? I want these coordinates to be readable by PyTorch, so I can train my model on them and the road images I save. Should I convert this list of coordinates to a list of tensors?

My full dataset code:

import os

import pandas as pd
import torch
from torch.utils.data import Dataset
from torchvision import transforms
from skimage import io


class RoadLanesDataset(Dataset):
    def __init__(self, csv_file, root_dir, transform=None):
        self.annotations = pd.read_csv(csv_file)
        self.root_dir = root_dir
        self.transform = transform

    def __len__(self):
        return len(self.annotations)

    def __getitem__(self, index):
        img_path = os.path.join(self.root_dir, self.annotations.iloc[index, 0])
        image = io.imread(img_path)
        y_label = torch.tensor(int(self.annotations.iloc[index, 1]))

        if self.transform:
            image = self.transform(image)

        return (image, y_label)

The issue is caused by the nested list being stored as a string, which cannot be directly parsed to an int.
Unwrapping the nested list manually also won’t be easy, since it’s not the individual integers that are stored as strings, but the entire nested list.
I would thus recommend using e.g. pandas for the parsing:

import io

import pandas as pd
import torch

a = "[[216, 312, 1458, 615], [315, 314, 465, 165]]"
int(a)
# ValueError: invalid literal for int() with base 10: '[[216, 312, 1458, 615], [315, 314, 465, 165]]'

# read_json parses the string into a DataFrame
# (newer pandas versions expect a file-like object, hence the StringIO wrapper)
b = pd.read_json(io.StringIO(a))
arr = b.values
print(arr)
# [[ 216  312 1458  615]
#  [ 315  314  465  165]]

x = torch.from_numpy(arr)
print(x)
# tensor([[ 216,  312, 1458,  615],
#         [ 315,  314,  465,  165]])
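Applied to the dataset class above, the failing line in `__getitem__` could then be replaced with this parsing step. Here is a minimal sketch using the standard library’s `ast.literal_eval` as an alternative to the pandas approach, since each CSV cell is a plain Python list literal (`parse_lanes` is just a hypothetical helper name):

```python
import ast

import torch


def parse_lanes(cell):
    # The CSV cell holds the nested list as a string, e.g.
    # "[[216, 312, 1458, 615], [315, 314, 465, 165]]".
    # ast.literal_eval safely turns it back into a Python list,
    # which torch.tensor can consume directly.
    coords = ast.literal_eval(cell)
    return torch.tensor(coords, dtype=torch.float32)


# In __getitem__, the line
#     y_label = torch.tensor(int(self.annotations.iloc[index, 1]))
# would become:
#     y_label = parse_lanes(self.annotations.iloc[index, 1])
```

This yields a `(2, 4)` float tensor per frame (left lane in row 0, right lane in row 1), which you can regress against directly.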