I have access to a large CPU cluster that does not have GPUs. Is it possible to speed up YOLO training by parallelizing between multiple CPU nodes?
The docs say that the device parameter specifies the computational device(s) for training: a single GPU (device=0), multiple GPUs (device=0,1), the CPU (device=cpu), or MPS for Apple silicon (device=mps).
What about multiple CPUs?
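For reference, this is roughly how those documented device options are passed in code. A minimal sketch, assuming the ultralytics package; the checkpoint and dataset names are placeholders:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # placeholder checkpoint

# Exactly one of the documented options is passed per run:
model.train(data="coco128.yaml", device=0)       # single GPU
# model.train(data="coco128.yaml", device=[0, 1])  # multiple GPUs
# model.train(data="coco128.yaml", device="cpu")   # CPU
# model.train(data="coco128.yaml", device="mps")   # Apple silicon
```

None of these options mention more than one CPU node.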
1 Answer
You can use torch.set_num_threads(int)
(docs) to control how many CPU threads PyTorch uses to execute operations.
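A minimal sketch of putting this together, assuming the ultralytics package is installed; the checkpoint, dataset, and thread counts are placeholders. Note that torch.set_num_threads only controls intra-op thread parallelism within a single process, so it can use the cores of one node but does not distribute training across multiple nodes of a cluster:

```python
import torch
from ultralytics import YOLO

# Set thread counts before any parallel work starts.
torch.set_num_threads(32)          # e.g. match the node's physical core count
torch.set_num_interop_threads(4)   # optional: threads for independent operations

model = YOLO("yolov8n.pt")         # placeholder checkpoint
model.train(data="coco128.yaml", epochs=10, device="cpu")
```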