Deallocating memory of a Tensor created in a C++ extension

I have written a C++ extension and I am loading it via JIT. Within the extension I allocate a C++ array and return it as a Tensor. It looks roughly like this:

   float *data = new float[sizeX*(N-n)*sizeY];
   // ... some stuff that fills data ...
   auto f = torch::CPU(kFloat).tensorFromBlob(data, {sizeX, N-n, sizeY});
   return f;

This creates a memory leak, because I assume “data” is never freed. What’s the proper way of doing this?

Create the tensor using the torch API and then fill the data using `tensor.data<float>()` (`tensor.data_ptr<float>()` in current versions). That way the tensor owns its storage and frees it automatically when the last reference to it is dropped.