How to load large image data without memory issues

I am using libtorch.

I tried to train a classification model with my own image data.
I'm concerned that if I load a large number of images, it will cause a memory issue.

This is my custom dataset:

class ClassificationDataset : public torch::data::Dataset<ClassificationDataset> {
private:
	std::vector<torch::Tensor> images, labels;
	std::vector<std::string> image_list;
	int num_classes;

public:
	ClassificationDataset(std::string path, int num_cls) {
		for (int i = 0; i < num_cls; i++)
		{
			auto image_path = path + "\\" + std::to_string(i) + "\\";
			auto files = get_files_inDirectory(image_path, "*.jpg");
			for (const auto& img : files)
			{
				image_list.push_back(image_path + img);
				labels.push_back(torch::tensor(i));
			}
		}
		images = process_images(image_list);

		num_classes = num_cls;
	};

	torch::data::Example<torch::Tensor, torch::Tensor> get(size_t index) override {
		torch::Tensor sample_img = images.at(index);
		torch::Tensor sample_label = labels.at(index);
		torch::data::Example<torch::Tensor, torch::Tensor> value{ sample_img, sample_label };
		return value;
	};

	torch::optional<size_t> size() const override {
		return labels.size();
	};

	std::vector<torch::Tensor> process_images(std::vector<std::string> list_images)
	{
		std::vector<torch::Tensor> tensor_images;
		for (auto image : list_images)
		{
			cv::Mat img = cv::imread(image);
			cv::resize(img, img, cv::Size(224, 224));

			torch::Tensor tensor_image = torch::from_blob(img.data, { img.rows, img.cols,3 }, at::kByte);
			tensor_image = tensor_image.toType(at::kFloat);
			tensor_image = tensor_image.div_(255);
			tensor_image = tensor_image.permute({ 2, 0, 1 });
			
			tensor_images.push_back(tensor_image);
		}

		return tensor_images;
	}
};

Then I create data loaders from this dataset:

auto dataset_train = ClassificationDataset("D:\\train", num_classes).map(torch::data::transforms::Stack<>());
auto data_loader_train = torch::data::make_data_loader(std::move(dataset_train), torch::data::DataLoaderOptions().batch_size(5).workers(4));

auto dataset_val = ClassificationDataset("D:\\val", num_classes).map(torch::data::transforms::Stack<>());
auto data_loader_val = torch::data::make_data_loader(std::move(dataset_val), torch::data::DataLoaderOptions().batch_size(2).workers(4));

If I use this code with 10,000 training images, will it cause a memory issue?
If it does, how can I fix it?

Can't you just load each image into memory only when get() is called (instead of loading all of them in the constructor)?
Something like this:

torch::data::Example<torch::Tensor, torch::Tensor> get(size_t index) override {
	torch::Tensor sample_img = process_images({ image_list[index] })[0];
	torch::Tensor sample_label = labels.at(index);
	torch::data::Example<torch::Tensor, torch::Tensor> value{ sample_img, sample_label };
	return value;
};
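
One thing to keep in mind with this approach: get() now runs inside the data loader's worker threads, so any failure there (for example cv::imread returning an empty Mat for a missing or corrupt file) will surface as a WorkerException when you iterate the loader, rather than at the call site. A minimal defensive sketch, using the same image_list and labels members as your dataset (the error message text is just illustrative):

torch::data::Example<torch::Tensor, torch::Tensor> get(size_t index) override {
	// Fail with an explicit message if the file cannot be read; otherwise
	// from_blob would run on an empty Mat inside a worker thread.
	cv::Mat img = cv::imread(image_list[index]);
	if (img.empty()) {
		throw std::runtime_error("Failed to read image: " + image_list[index]);
	}
	cv::resize(img, img, cv::Size(224, 224));

	// toType(kFloat) copies the pixel data, so the resulting tensor does not
	// depend on the cv::Mat buffer after this point.
	torch::Tensor sample_img =
		torch::from_blob(img.data, { img.rows, img.cols, 3 }, at::kByte)
			.toType(at::kFloat)
			.div_(255)
			.permute({ 2, 0, 1 });

	return { sample_img, labels.at(index) };
};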

Thanks for your reply!

I tried loading each image at get() time, but an exception occurred.
The debugger pointed at this line in dataloader/base.h:

 optional<BatchType> next() {
    if (options_.workers > 0) {
      while (optional<Result> result = this->pop_result()) {
        if (result->exception) {
          throw WorkerException(result->exception); // <-- the exception is thrown here
        } else if (result->batch) {
          prefetch(1);
          return std::move(result->batch);
        }
      }
    } else if (auto batch_request = get_batch_request()) {
      return this->main_thread_dataset_->get_batch(std::move(*batch_request));
    }
    return nullopt;
  }

I am stuck…

I tried the following code (I simplified it a bit) and it seems to work fine. Can you try it?

#include <torch/torch.h>
#include <opencv2/opencv.hpp>
#include <filesystem>
#include <iostream>
#include <tuple>

class ClassificationDataset : public torch::data::Dataset<ClassificationDataset>
{
private:
	std::vector<std::tuple<std::string, int>> m_data;

	torch::Tensor LoadImageToTensor(const std::string& path)
	{
		cv::Mat img = cv::imread(path);
		cv::resize(img, img, cv::Size(224, 224));

		torch::Tensor tensor_image = torch::from_blob(img.data, { img.rows, img.cols, 3 }, at::kByte);
		tensor_image = tensor_image.toType(at::kFloat);
		tensor_image = tensor_image.div_(255);
		tensor_image = tensor_image.permute({ 2, 0, 1 });

		return tensor_image;
	}

public:
	ClassificationDataset(const std::string& path) 
	{
		for (const auto& dir : std::filesystem::directory_iterator(path))
		{
			int label = std::atoi(dir.path().stem().string().c_str());

			for (const auto& file : std::filesystem::directory_iterator(dir))
			{
				m_data.push_back({ file.path().string(), label });
			}
		}
	};

	torch::data::Example<torch::Tensor, torch::Tensor> get(size_t index) override 
	{
		auto [path, label] = m_data[index];

		torch::Tensor sample_img = LoadImageToTensor(path);
		torch::Tensor sample_label = torch::tensor(label);

		return { sample_img, sample_label };
	};

	torch::optional<size_t> size() const override 
	{
		return m_data.size();
	};
};

int main(int argc, char** argv)
{
	try
	{
		auto dataset_train = ClassificationDataset("D:\\train").map(torch::data::transforms::Stack<>());
		auto data_loader_train = torch::data::make_data_loader(std::move(dataset_train), torch::data::DataLoaderOptions().batch_size(5).workers(4));

		auto dataset_val = ClassificationDataset("D:\\val").map(torch::data::transforms::Stack<>());
		auto data_loader_val = torch::data::make_data_loader(std::move(dataset_val), torch::data::DataLoaderOptions().batch_size(2).workers(4));

		for (auto& batch : *data_loader_train)
		{
			auto data = batch.data;
			auto labels = batch.target;

			std::cout << data.sizes() << std::endl;
			std::cout << labels.sizes() << std::endl;
		}

		for (auto& batch : *data_loader_val)
		{
			auto data = batch.data;
			auto labels = batch.target;

			std::cout << data.sizes() << std::endl;
			std::cout << labels.sizes() << std::endl;
		}
	}
	catch (std::runtime_error& e)
	{
		std::cout << e.what() << std::endl;
	}
	catch (const c10::Error& e)
	{
		std::cout << e.msg() << std::endl;
	}

	system("PAUSE");
}

I really appreciate your help!
My issue is solved now.

I understand that this code aims to avoid memory issues with large data.
However, could it cause a speed issue compared with loading all the data in the constructor?
And I'd like to know whether it reads and processes the images over and over at every epoch.

I really want to avoid memory issues and still get high processing speed.
Do you know a way?

whether it reads and processes images over and over …

Yes, there is no way around it unless you can fit all the data in memory.

could it cause a speed issue …

Yes, it will be somewhat slower, depending on how fast your hard drive is.

Do you know a way?

Not really; most of us probably deal with datasets that will never fit in memory. You could add an optional parameter to your dataset that tells it to load everything in the constructor, for situations where you're sure your RAM can handle all of it (measure the time difference beforehand to make sure you're not spending effort on something that brings only a marginal improvement).
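
For example, here is a minimal sketch of that optional parameter, reusing the ClassificationDataset from above and assuming the same includes; cache_in_ram and m_cache are just illustrative names:

class ClassificationDataset : public torch::data::Dataset<ClassificationDataset>
{
private:
	std::vector<std::tuple<std::string, int>> m_data;
	std::vector<torch::Tensor> m_cache;   // only filled when caching is enabled
	bool m_cache_in_ram = false;

	torch::Tensor LoadImageToTensor(const std::string& path)
	{
		cv::Mat img = cv::imread(path);
		cv::resize(img, img, cv::Size(224, 224));
		return torch::from_blob(img.data, { img.rows, img.cols, 3 }, at::kByte)
			.toType(at::kFloat).div_(255).permute({ 2, 0, 1 });
	}

public:
	ClassificationDataset(const std::string& path, bool cache_in_ram = false)
		: m_cache_in_ram(cache_in_ram)
	{
		for (const auto& dir : std::filesystem::directory_iterator(path))
		{
			int label = std::atoi(dir.path().stem().string().c_str());
			for (const auto& file : std::filesystem::directory_iterator(dir))
				m_data.push_back({ file.path().string(), label });
		}

		if (m_cache_in_ram)
		{
			// Pay the decode cost once, up front, when RAM can hold everything.
			m_cache.reserve(m_data.size());
			for (const auto& entry : m_data)
				m_cache.push_back(LoadImageToTensor(std::get<0>(entry)));
		}
	}

	torch::data::Example<torch::Tensor, torch::Tensor> get(size_t index) override
	{
		auto [path, label] = m_data[index];
		torch::Tensor sample_img =
			m_cache_in_ram ? m_cache[index] : LoadImageToTensor(path);
		return { sample_img, torch::tensor(label) };
	}

	torch::optional<size_t> size() const override
	{
		return m_data.size();
	}
};

With that, ClassificationDataset("D:\\train", true) behaves like the original load-everything-up-front version, while the default (false) keeps the lazy, memory-friendly behaviour.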