I’ve been trying to learn TorchScript and have run into some issues applying it to a notebook I’m working on in Google Colab. Currently, I have a Gaussian transformation function, shown below:
def gaussian(x, mu, sigma):
    return 1 - (sigma / (((x - mu) ** 2) + sigma ** 2) / np.pi)
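Since TorchScript can’t compile numpy calls, my understanding is that a scripted version of this function would need to use torch ops instead of `np.pi` (here I substituted `math.pi`, which I believe TorchScript supports) — something like:

```python
import math

import torch

@torch.jit.script
def gaussian(x: torch.Tensor, mu: float, sigma: float) -> torch.Tensor:
    # Same formula as the numpy version above, written with torch ops
    # so TorchScript can compile it
    return 1 - (sigma / (((x - mu) ** 2) + sigma ** 2) / math.pi)
```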
Ideally, I want to run this function on the GPU in parallel a large number of times (e.g. 100–1000 times). Here is my current TorchScript code, mostly adapted from https://pytorch.org/tutorials/advanced/torch-script-parallelism.html:
import numpy as np
import torch
from torch import nn
from torchvision.transforms import ToTensor
from typing import List
from timeit import default_timer as timer

@torch.jit.script
def example(x):
    start = timer()
    futures: List[torch.jit.Future[torch.Tensor]] = []
    for _ in range(1000):
        futures.append(torch.jit.fork(gaussian, x, 0, 1))
    results = []
    for future in futures:
        results.append(torch.jit.wait(future))
    end = timer()
    print(f'elapsed time: {end - start}')
    return torch.sum(torch.cat(results))
x_ax = np.linspace(-100, 100, 101)
cen = 2
sigma = 10
y_ax = gaussian(x_ax, cen, sigma)
However, when I try to run this, I get the following error:
RuntimeError:
Python builtin <built-in function perf_counter> is currently not supported in Torchscript:
  File "<ipython-input-5-280f74a36289>", line 13
@torch.jit.script
def example(x):
    start = timer()
            ~~~~~ <--- HERE
    futures: List[torch.jit.Future[torch.Tensor]] = []
    for _ in range(1000):
Is there a way to fix this? Is TorchScript even compatible with this function definition at all (if not, are there any alternatives)? Any help would be appreciated.