Torchserve stopped. Failed to bind. Address already in use

Hi,
I am trying to get TorchServe running locally on my Ubuntu machine, but I run into this error after the model loads:

2021-03-18 18:31:38,285 [INFO ] main org.pytorch.serve.ModelServer - Torchserve stopped.
java.io.IOException: Failed to bind
        at io.grpc.netty.shaded.io.grpc.netty.NettyServer.start(NettyServer.java:264)
        at io.grpc.internal.ServerImpl.start(ServerImpl.java:183)
        at io.grpc.internal.ServerImpl.start(ServerImpl.java:90)
        at org.pytorch.serve.ModelServer.startGRPCServer(ModelServer.java:396)
        at org.pytorch.serve.ModelServer.startGRPCServers(ModelServer.java:377)
        at org.pytorch.serve.ModelServer.startAndWait(ModelServer.java:116)
        at org.pytorch.serve.ModelServer.main(ModelServer.java:95)
Caused by: io.grpc.netty.shaded.io.netty.channel.unix.Errors$NativeIoException: bind(..) failed: Address already in use
Exception in thread "Thread-0" java.util.concurrent.RejectedExecutionException: event executor terminated
        at io.netty.util.concurrent.SingleThreadEventExecutor.reject(SingleThreadEventExecutor.java:926)
        at io.netty.util.concurrent.SingleThreadEventExecutor.offerTask(SingleThreadEventExecutor.java:353)
        at io.netty.util.concurrent.SingleThreadEventExecutor.addTask(SingleThreadEventExecutor.java:346)
        at io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:828)
        at io.netty.util.concurrent.SingleThreadEventExecutor.execute(SingleThreadEventExecutor.java:818)
        at io.netty.channel.AbstractChannelHandlerContext.safeExecute(AbstractChannelHandlerContext.java:989)
        at io.netty.channel.AbstractChannelHandlerContext.close(AbstractChannelHandlerContext.java:608)
        at io.netty.channel.AbstractChannelHandlerContext.close(AbstractChannelHandlerContext.java:472)
        at io.netty.channel.DefaultChannelPipeline.close(DefaultChannelPipeline.java:957)
        at io.netty.channel.AbstractChannel.close(AbstractChannel.java:232)
        at org.pytorch.serve.ModelServer.stop(ModelServer.java:473)
        at org.pytorch.serve.ModelServer$1.run(ModelServer.java:91)

I don't have anything running on 9000, 9001, 8081, 8082, or any other port.
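
For anyone trying to reproduce this check, here is a minimal sketch that probes which of the candidate ports can still be bound locally. The port list is an assumption about what TorchServe tries to claim (REST 8080-8082, what I believe are the gRPC defaults 7070/7071, and worker ports 9000+); adjust it to your setup.

import socket

# Assumed candidate ports; edit to match your config.properties
PORTS_TO_CHECK = [7070, 7071, 8080, 8081, 8082, 9000, 9001]

def port_is_free(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if we can bind the port on the local interface."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            sock.bind((host, port))
            return True
        except OSError:
            return False

if __name__ == "__main__":
    for port in PORTS_TO_CHECK:
        status = "free" if port_is_free(port) else "IN USE"
        print(f"port {port}: {status}")

Any port reported as IN USE is held by another process and will trigger the "Failed to bind" error when TorchServe starts.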

Following is my config:

Torchserve version: 0.3.0
TS Home: /home/pc/anaconda3/envs/torch/lib/python3.7/site-packages
Current directory: /home/pc/lezwon
Temp directory: /tmp
Number of GPUs: 2
Number of CPUs: 20
Max heap size: 16008 M
Python executable: /home/pc/anaconda3/envs/torch/bin/python
Config file: config.properties
Inference address: http://127.0.0.1:8080
Management address: http://127.0.0.1:8081
Metrics address: http://127.0.0.1:8082
Model Store: /home/pc/lezwon/model_store
Initial Models: car_roi=car_roi.mar
Log dir: /home/pc/lezwon/logs
Metrics dir: /home/pc/lezwon/logs
Netty threads: 0
Netty client threads: 0
Default workers per model: 2
Blacklist Regex: N/A
Maximum Response Size: 6553500
Maximum Request Size: 6553500
Prefer direct buffer: false
Allowed Urls: [file://.*|http(s)?://.*]
Custom python dependency for model allowed: false
Metrics report format: prometheus
Enable metrics API: true

TorchServe runs fine through a Docker container; it only fails locally. Any help here would be appreciated. Thanks.

Fixed it. AnyDesk was running on port 7070. I closed it and TorchServe now starts fine. :slight_smile:
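
In case someone hits the same collision: as far as I understand, TorchServe's gRPC endpoints default to port 7070 (inference) and 7071 (management), which is why AnyDesk on 7070 broke the bind even though none of the REST ports were taken. If closing the other application isn't an option, the gRPC ports can be moved in config.properties, roughly like this (key names assume the gRPC options introduced around 0.3.0, and 7870/7871 are just arbitrary free ports; double-check against the TorchServe configuration docs for your version):

# config.properties - move the gRPC endpoints off the conflicting port
grpc_inference_port=7870
grpc_management_port=7871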
