Using the VIA annotation tool

I have a fairly large dataset annotated manually for instance segmentation using the VGG Image Annotator (VIA) from the Visual Geometry Group at the University of Oxford. Is it possible to read the VIA format into torchvision? If not, are there any tools to convert VIA JSON annotations into a format that torchvision can consume? And if not (again :grinning:), which annotation format is best for doing instance segmentation with torchvision?

In my work, we use the VIA annotation tool for almost every DL task. If you would like to code everything (models, training loops, data loaders) from scratch, you can certainly use VIA's default JSON format. If you would rather use an existing instance segmentation package like Facebook's Detectron2 or YOLACT, you can convert your annotations to the COCO format, which is the de facto standard for semantic/instance segmentation tasks; VIA has built-in export functionality for that conversion. But remember that you may still need to write a custom Python script to adapt the COCO-style JSON into whatever the package actually accepts.
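If you go the from-scratch route, reading VIA's default JSON is mostly a matter of walking the per-image `regions` list. Below is a minimal sketch, assuming a VIA 2.x project where regions are polygons (`all_points_x`/`all_points_y`) and each region carries a `class` key in its `region_attributes` (your attribute name may differ). It extracts polygons, derives bounding boxes, and maps class names to integer labels:

```python
import json

def via_to_targets(via_json_path, class_to_id):
    """Parse a VIA 2.x project JSON into per-image targets.

    Assumptions (adjust for your own project):
      * regions are polygons with 'all_points_x' / 'all_points_y',
      * each region has a 'class' key in region_attributes.

    Returns a dict: filename -> {'boxes', 'labels', 'polygons'},
    with boxes as [x_min, y_min, x_max, y_max].
    """
    with open(via_json_path) as f:
        via = json.load(f)

    targets = {}
    for entry in via.values():
        boxes, labels, polygons = [], [], []
        for region in entry["regions"]:
            shape = region["shape_attributes"]
            if shape["name"] != "polygon":
                continue  # this sketch only handles polygon regions
            xs, ys = shape["all_points_x"], shape["all_points_y"]
            boxes.append([min(xs), min(ys), max(xs), max(ys)])
            labels.append(class_to_id[region["region_attributes"]["class"]])
            polygons.append(list(zip(xs, ys)))
        targets[entry["filename"]] = {
            "boxes": boxes,
            "labels": labels,
            "polygons": polygons,
        }
    return targets
```

Inside a `torch.utils.data.Dataset.__getitem__` you would then rasterize each polygon to a binary mask (e.g. with `PIL.ImageDraw.Draw.polygon`) and wrap boxes, labels, and masks in tensors, which is the target format torchvision's Mask R-CNN expects.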