I'm seeing the following error when I make an inference request:

xyz-stdout MODEL_LOG - ----- INITIALIZE -----
xyz org.pytorch.serve.wlm.WorkerThread - sent a reply, jobdone: true
xyz org.pytorch.serve.wlm.WorkerThread - sent a reply, jobdone: true
xyz org.pytorch.serve.wlm.WorkerThread - Backend response time: 1980
xyz org.pytorch.serve.wlm.WorkerThread - Backend response time: 1980
xyz org.pytorch.serve.wlm.WorkerThread - xyz State change WORKER_STARTED -> WORKER_MODEL_LOADED
xyz org.pytorch.serve.wlm.WorkerThread - xyz State change WORKER_STARTED -> WORKER_MODEL_LOADED
xyz TS_METRICS - WorkerLoadTime.Milliseconds:2962.0|#WorkerName:xyz,Level:Host|#hostname:a9cb56cd2a24,timestamp:1688661575
xyz TS_METRICS - WorkerThreadTime.Milliseconds:60.0|#Level:Host|#hostname:a9cb56cd2a24,timestamp:1688661575
pool-2-thread-2 ACCESS_LOG - /172.17.0.1:44710 "GET /ping HTTP/1.1" 200 32
pool-2-thread-2 TS_METRICS - Requests2XX.Count:1.0|#Level:Host|#hostname:a9cb56cd2a24,timestamp:1688661594
epollEventLoopGroup-3-2 TS_METRICS - ts_inference_requests_total.Count:1.0|#model_name:model_xyz,model_version:default|#hostname:a9cb56cd2a24,timestamp:1688661607
xyz org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd PREDICT to backend at: 1688661607935
xyz org.pytorch.serve.wlm.WorkerThread - Flushing req.cmd PREDICT to backend at: 1688661607935
xyz-stdout MODEL_LOG - 
xyz-stdout MODEL_LOG - Backend received inference at: 1688661607
xyz-stdout MODEL_LOG - Invoking custom service failed.
xyz-stdout MODEL_LOG - Traceback (most recent call last):
xyz-stdout MODEL_LOG -   File "/home/venv/lib/python3.10/site-packages/ts/service.py", line 134, in predict
xyz-stdout MODEL_LOG -     ret = self._entry_point(input_batch, self.context)
xyz-stdout MODEL_LOG -   File "/home/venv/lib/python3.10/site-packages/ts/torch_handler/request_envelope/base.py", line 26, in handle
xyz-stdout MODEL_LOG -     data = self.parse_input(data)
xyz-stdout MODEL_LOG -   File "/home/venv/lib/python3.10/site-packages/ts/torch_handler/request_envelope/json.py", line 21, in parse_input
xyz-stdout MODEL_LOG -     lengths, batch = self._batch_from_json(data)
xyz-stdout MODEL_LOG -   File "/home/venv/lib/python3.10/site-packages/ts/torch_handler/request_envelope/json.py", line 32, in _batch_from_json
xyz-stdout MODEL_LOG -     mini_batches = [self._from_json(data_row) for data_row in data_rows]
xyz-stdout MODEL_LOG -   File "/home/venv/lib/python3.10/site-packages/ts/torch_handler/request_envelope/json.py", line 32, in <listcomp>
xyz-stdout MODEL_LOG -     mini_batches = [self._from_json(data_row) for data_row in data_rows]
xyz-stdout MODEL_LOG -   File "/home/venv/lib/python3.10/site-packages/ts/torch_handler/request_envelope/json.py", line 41, in _from_json
xyz-stdout MODEL_LOG -     rows = (data.get("data") or data.get("body") or data)["instances"]
xyz-stdout MODEL_LOG - TypeError: list indices must be integers or slices, not str

What does your handler code look like? Usually I can fix issues like this by wrapping the return values of the handle, pre/post-processing, and inference functions in a list with []
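A minimal sketch of what I mean (hypothetical standalone functions, not the actual TorchServe BaseHandler API) — TorchServe hands the handler a batch as a list and expects a list back, one entry per request, so each stage should return a list rather than a bare value:

```python
def preprocess(data):
    # TorchServe passes a list of request dicts; keep it a list,
    # pulling the payload out of "data" or "body" for each request.
    return [row.get("data") or row.get("body") for row in data]

def inference(batch):
    # Stand-in for the real model call; produce one output per input.
    return [str(item).upper() for item in batch]

def postprocess(outputs):
    # The final return value must be a list with one entry per request
    # in the batch — wrap scalars/strings with [] instead of returning
    # them bare, or the frontend can't map results back to requests.
    return list(outputs)

def handle(data, context):
    return postprocess(inference(preprocess(data)))
```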

I get a similar error when I try to run the TorchServe Quickstart inside a Docker container. My error is:

2023-07-30T21:08:51,028 [INFO ] W-9000-densenet161_1.0-stdout MODEL_LOG -   File "/home/venv/lib/python3.9/site-packages/ts/torch_handler/request_envelope/base.py", line 26, in handle
2023-07-30T21:08:51.031782300Z 2023-07-30T21:08:51,030 [INFO ] W-9000-densenet161_1.0-stdout MODEL_LOG -     data = self.parse_input(data)
2023-07-30T21:08:51.033794200Z 2023-07-30T21:08:51,031 [INFO ] W-9000-densenet161_1.0-stdout MODEL_LOG -   File "/home/venv/lib/python3.9/site-packages/ts/torch_handler/request_envelope/json.py", line 21, in parse_input
2023-07-30T21:08:51.033906600Z 2023-07-30T21:08:51,031 [INFO ] W-9000-densenet161_1.0-stdout MODEL_LOG -     lengths, batch = self._batch_from_json(data)
2023-07-30T21:08:51.034106000Z 2023-07-30T21:08:51,032 [INFO ] W-9000-densenet161_1.0-stdout MODEL_LOG -   File "/home/venv/lib/python3.9/site-packages/ts/torch_handler/request_envelope/json.py", line 32, in _batch_from_json
2023-07-30T21:08:51.034317200Z 2023-07-30T21:08:51,033 [INFO ] W-9000-densenet161_1.0-stdout MODEL_LOG -     mini_batches = [self._from_json(data_row) for data_row in data_rows]
2023-07-30T21:08:51.034457600Z 2023-07-30T21:08:51,033 [INFO ] W-9000-densenet161_1.0-stdout MODEL_LOG -   File "/home/venv/lib/python3.9/site-packages/ts/torch_handler/request_envelope/json.py", line 32, in <listcomp>
2023-07-30T21:08:51.036864200Z 2023-07-30T21:08:51,035 [INFO ] W-9000-densenet161_1.0-stdout MODEL_LOG -     mini_batches = [self._from_json(data_row) for data_row in data_rows]
2023-07-30T21:08:51.037792600Z 2023-07-30T21:08:51,036 [INFO ] W-9000-densenet161_1.0-stdout MODEL_LOG -   File "/home/venv/lib/python3.9/site-packages/ts/torch_handler/request_envelope/json.py", line 41, in _from_json
2023-07-30T21:08:51.037982700Z 2023-07-30T21:08:51,036 [INFO ] W-9000-densenet161_1.0-stdout MODEL_LOG -     rows = (data.get("data") or data.get("body") or data)["instances"]
2023-07-30T21:08:51.038058400Z 2023-07-30T21:08:51,037 [INFO ] W-9000-densenet161_1.0-stdout MODEL_LOG - TypeError: bytearray indices must be integers or slices, not str
2023-07-30T21:08:59.066590000Z

I'm using the default image_classifier handler and running the curl command provided in the docs: curl http://127.0.0.1:8080/predictions/densenet161 -T kitten_small.jpg
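If I'm reading the traceback right, the JSON request envelope ends up keying into the raw request body: curl -T uploads the image as raw bytes, so data arrives as a bytearray rather than a parsed dict, and indexing it with the string key "instances" fails. A minimal reproduction of that exact TypeError:

```python
# Raw bytes, as curl -T would upload them (two JPEG magic bytes as a stand-in).
body = bytearray(b"\xff\xd8")

try:
    # What json.py line 41 effectively does when the body isn't parsed JSON.
    body["instances"]
except TypeError as e:
    print(e)  # bytearray indices must be integers or slices, not str
```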

Any advice would be hugely appreciated.