FWIW - it also occurs if the return type of the inference method in the custom handler is as simple as a string.
I have a custom_handler with a basic inference method like:
def inference(self, ds, *args, **kwargs):
    # do stuff to data
    with torch.inference_mode():
        self.model.eval()
        pred = self.model.forward(inp).squeeze()
    return pred
I actually updated the code. I know that pred is of type Tensor now.
How should I return this so that it works?
If I just do: return pred then I see: Invalid return type: <class 'torch.Tensor'>.
If I do return [pred] instead, I get:
Traceback (most recent call last):
  File ".../site-packages/requests/models.py", line 971, in json
    return complexjson.loads(self.text, **kwargs)
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/json/__init__.py", line 346, in loads
    return _default_decoder.decode(s)
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/json/decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "rest_inf.py", line 18, in <module>
    print("Inference result: ", response.json())
  File ".../site-packages/requests/models.py", line 975, in json
    raise RequestsJSONDecodeError(e.msg, e.doc, e.pos)
requests.exceptions.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
I'm not sure what is expected here. Do I have to convert the tensor to JSON, or to some array first?