Thanks for the implementation; it's a great project.
I ran a performance test and the results confuse me. Could you help me understand them?
```python
while True:
    # poll the server for the next job, sending back the previous result
    socket.send(byte_message(myid, CODE_POLL, (jobid, data)), copy=False)
    msg = pickle.loads(socket.recv())
    jobid = msg['message']  # list serialized as a string, e.g. "[id1, id2, ...]"
    if jobid is None:
        jobid = data = None
        time.sleep(1)
    else:
        data = fetcher.fetch(ast.literal_eval(jobid))
```
In the code above, these two lines
```python
msg = pickle.loads(socket.recv())
data = fetcher.fetch(ast.literal_eval(jobid))
```
account for about 90% of the loop time, which makes them the bottleneck of training, and I'd like to know how to fix that. Pickle does not seem to be the fastest serialization tool, although it supports most Python object types. Also, `fetcher.fetch` seems to cost much more time than a local dataloader at the same batch size; does that make sense?
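For reference, here is roughly how I isolated the `pickle.loads` cost. This is a standalone sketch, not the project code: the payload below is a hypothetical stand-in for a training batch (a blob of bytes plus a label list), sized to imitate what travels over the socket.

```python
import pickle
import time

# Hypothetical batch-sized payload: ~50 MB of raw bytes stands in for an
# image batch; pickle must copy and parse all of it on every recv.
batch = {'images': b'\x00' * (50 * 1024 * 1024), 'labels': list(range(256))}
payload = pickle.dumps(batch, protocol=pickle.HIGHEST_PROTOCOL)

# Average deserialization time over a few repetitions.
start = time.perf_counter()
for _ in range(10):
    pickle.loads(payload)
elapsed = (time.perf_counter() - start) / 10
print(f"pickle.loads: {elapsed * 1000:.2f} ms for a {len(payload) / 1e6:.0f} MB payload")
```

On my machine the per-call time for `pickle.loads` alone, multiplied by the polling rate, matches the proportion I reported above.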
@ildoonet Thanks for your reply in advance.