Hi, I’m building an application that receives images over a socket connection, processes them, and returns the results over another socket connection. Each of the three steps runs in its own independent process via multiprocessing. Since the data flow is constant, the processes run in parallel in while loops. To let them communicate I use queues: I pass the queues as arguments to the processes, and some processes call queue.put() while others call queue.get() (the usual producer-consumer pattern).
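For context, here is a minimal sketch of the pipeline structure I described. Function and variable names are placeholders, and plain lists stand in for the CPU tensors so the snippet runs without PyTorch:

```python
# Minimal sketch of the three-stage pipeline: receiver -> model -> sender.
# Plain lists stand in for CPU tensor batches; names are placeholders.
import multiprocessing as mp

def receiver(out_q, n_batches):
    # Stage 1: stand-in for reading image batches from the first socket.
    for i in range(n_batches):
        batch = [i] * 4              # stand-in for a CPU tensor batch
        out_q.put(batch)
    out_q.put(None)                  # sentinel: no more data

def worker(in_q, out_q):
    # Stage 2: stand-in for running the PyTorch model on each batch.
    while True:
        batch = in_q.get()
        if batch is None:
            out_q.put(None)
            break
        out_q.put([x * 10 for x in batch])   # stand-in for inference

def sender(in_q, results):
    # Stage 3: stand-in for sending results back over the second socket.
    while True:
        result = in_q.get()
        if result is None:
            break
        results.append(result)

def run_pipeline(n_batches=3):
    q1, q2 = mp.Queue(), mp.Queue()
    manager = mp.Manager()
    results = manager.list()         # collect output so we can inspect it
    procs = [
        mp.Process(target=receiver, args=(q1, n_batches)),
        mp.Process(target=worker, args=(q1, q2)),
        mp.Process(target=sender, args=(q2, results)),
    ]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return list(results)

if __name__ == "__main__":
    print(run_pipeline())
```

With these stand-in lists each batch comes out distinct, as expected; the problem I describe below only appears in the real app with tensors.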
I’m seeing a strange error. Say the process that puts the tensors from the socket into the queue is faster and puts 15 batches of images into the queue before the process with the PyTorch model has consumed anything. You would expect these 15 objects (CPU tensors) in the queue to be different; however, they are all identical and correspond to the last object. It’s as if every position in the queue were filled with the last object it received.
If I send the images through the socket very slowly (stepping with breakpoints, for instance), this does not happen and everything is computed correctly.
Is this a known bug? Is there a memory size limit on the queues that I’m filling with my CPU tensors?