Do you get a different error after removing the lambda? Because the error is explicitly about a lambda which cannot be serialized.
@albanD No, it’s the same error. I have also made sure that no lambda functions remain in the code.
If the error still points at a lambda function, then you have one left. If it points to something else, you can share it here so that we can take a look.
@albanD I have reproduced the error here: https://colab.research.google.com/drive/10Zxe40Tl14fWAaCAYg1fgQA13hHIiBo_?usp=sharing
The error is:
AttributeError                            Traceback (most recent call last)
<ipython-input-6-adca9abddcf0> in <module>()
    168 if __name__ == '__main__':
    169     t1 = time.time()
--> 170     main()
    171     print("Time taken :", time.time() - t1)

5 frames
<ipython-input-6-adca9abddcf0> in main(ways, shots, meta_lr, fast_lr, meta_batch_size, adaptation_steps, num_iterations, cuda, seed)
    126     args = [maml, tasksets]
    127     with Pool(4) as pool:
--> 128         values = pool.map(partial(ParallelTasks, args), list(range(meta_batch_size)))
    129
    130     #for i in range(values):

/usr/lib/python3.6/multiprocessing/pool.py in map(self, func, iterable, chunksize)
    264         in a list that is returned.
    265         '''
--> 266         return self._map_async(func, iterable, mapstar, chunksize).get()
    267
    268     def starmap(self, func, iterable, chunksize=None):

/usr/lib/python3.6/multiprocessing/pool.py in get(self, timeout)
    642             return self._value
    643         else:
--> 644             raise self._value
    645
    646     def _set(self, i, obj):

/usr/lib/python3.6/multiprocessing/pool.py in _handle_tasks(taskqueue, put, outqueue, pool, cache)
    422                     break
    423                 try:
--> 424                     put(task)
    425                 except Exception as e:
    426                     job, idx = task[:2]

/usr/lib/python3.6/multiprocessing/connection.py in send(self, obj)
    204         self._check_closed()
    205         self._check_writable()
--> 206         self._send_bytes(_ForkingPickler.dumps(obj))
    207
    208     def recv_bytes(self, maxlength=None):

/usr/lib/python3.6/multiprocessing/reduction.py in dumps(cls, obj, protocol)
     49     def dumps(cls, obj, protocol=None):
     50         buf = io.BytesIO()
---> 51         cls(buf, protocol).dump(obj)
     52         return buf.getbuffer()
     53

AttributeError: Can't pickle local object 'omniglot_tasksets.<locals>.<lambda>'
@albanD The same error is seen in a Linux environment too.
Given the names, I guess the problem is that the taskset you’re using cannot be serialized, and so you cannot use it in the process Pool.
You can either unpack the data into a different object that you can serialize, or change the library to make the taskset serializable.
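As an illustrative sketch of the first suggestion (the class and names here are hypothetical, not learn2learn’s API): moving a lambda’s behavior into a module-level callable class makes it picklable, because pickle serializes only the class name and the instance attributes, never code objects:

```python
import pickle

# Hypothetical replacement for a closure like `lambda x: x + offset`:
# a module-level class with __call__ pickles cleanly, so Pool.map can
# ship instances of it to worker processes.
class AddOffset:
    def __init__(self, offset):
        self.offset = offset

    def __call__(self, x):
        return x + self.offset

task = AddOffset(10)
restored = pickle.loads(pickle.dumps(task))
print(restored(5))  # 15
```

The same trick applies to the taskset: wrap whatever the lambdas capture (paths, parameters) as plain attributes of a small top-level class.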
Ah ok, I will try that approach. Thanks for the insights @albanD.
That might be the case, it seems.
I will post my solution here in case someone else faces the same error. As @albanD said, the error occurred because the dataset couldn’t be serialized. I added
import dill and the error was solved. In case someone is still facing the issue, they can also try
from pathos.multiprocessing import ProcessingPool as Pool.
Dear @Asura, in which file(s) specifically did you include the
import dill statement?
Hi @lillepeder, I was using Colab and added it in the top cell.
But I suppose it will work as long as you add it to the file where the serialization is taking place.
Otherwise, try the latter approach:
pathos already has
dill, so you won’t need to figure out where to import it specifically.
I found an alternative solution: pass
num_workers=0 into the DataLoader, in case anyone else gets a similar problem! It’s a little slower, but it overcomes the pickling issue, since no worker processes are spawned.
Thank you for the quick reply, but I seem to have solved it for now with
I also had this error on my Mac with this transform:
self.train_transform = transforms.Compose([
    lambda x: Image.fromarray(x),
    transforms.RandomCrop(84, padding=8),
    transforms.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4),
    transforms.RandomHorizontalFlip(),
    lambda x: np.asarray(x),
    transforms.ToTensor(),
    self.normalize
])
I don’t understand why there is pickling going on… it seems weird. But I will try to change the lambdas above to non-lambdas. I will first try:
lambda x: Image.fromarray(x) ---> Image.fromarray
Hopefully that works.
Yes, it did work for me.
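For anyone wondering why that swap helps: a module-level (or built-in) function is pickled by reference to its name, while a lambda defined inside a function or method is a "local object" that pickle cannot locate by name. A small illustration, using the built-in abs in place of Image.fromarray:

```python
import pickle

def make_transforms():
    # A lambda defined here is a local object; pickling it fails with
    # "AttributeError: Can't pickle local object ...<locals>.<lambda>",
    # the same error as in the traceback above.
    wrapped = lambda x: abs(x)
    try:
        pickle.dumps(wrapped)
        picklable = True
    except (pickle.PicklingError, AttributeError):
        picklable = False
    print(picklable)  # False

    # The function itself pickles by name, so it survives the trip
    # to a worker process.
    restored = pickle.loads(pickle.dumps(abs))
    print(restored(-3))  # 3

make_transforms()
```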