machine learning - How to utilize multiple CPUs for training of YOLO? - Stack Overflow


I have access to a large CPU cluster that does not have GPUs. Is it possible to speed up YOLO training by parallelizing between multiple CPU nodes?
The docs say the device parameter specifies the computational device(s) for training: a single GPU (device=0), multiple GPUs (device=0,1), the CPU (device=cpu), or MPS for Apple silicon (device=mps). What about multiple CPUs?


Asked Jan 17 at 16:05 by Artem Lebedev; edited Jan 18 at 0:55 by Christoph Rackwitz.

1 Answer


You can use torch.set_num_threads(int) (docs) to control how many CPU threads PyTorch uses for intra-op parallelism. Note that this parallelizes operations across the cores of a single node; it does not by itself distribute training across multiple nodes of a cluster.
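A minimal sketch of what that looks like in practice; the thread count (32) is illustrative and should match the cores your scheduler allocates, and the commented ultralytics-style training call assumes that package is what you're using:

```python
import torch

# Set PyTorch's intra-op thread pool size before training starts.
# This spreads individual ops (convolutions, matmuls, ...) across
# the cores of THIS node only.
torch.set_num_threads(32)
print(torch.get_num_threads())  # → 32

# Hypothetical ultralytics-style CPU training call:
# from ultralytics import YOLO
# YOLO("yolov8n.pt").train(data="coco128.yaml", device="cpu")
```

Calling it once at the top of your training script, before any tensor work, is the safest pattern, since the thread pool is created lazily on first use.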
