I am trying to understand how worker threads work in Node.js. My understanding is that every time we spawn a worker, it creates a new thread with its own Node/V8 instance.
So will the below code spawn 50 threads?
How is it distributed over the CPU cores?
This is the index.js file:

const { Worker } = require("worker_threads");

var count = 0;
console.log("Start Program");

const runService = () => {
  return new Promise((resolve, reject) => {
    const worker = new Worker("./service.js", {});
    worker.on("message", resolve);
    worker.on("error", reject);
    worker.on("exit", code => {
      if (code != 0) {
        reject(new Error("Worker has stopped"));
      }
    });
  });
};

const run = async () => {
  const result = await runService();
  console.log(count++);
  console.log(result);
};

for (let i = 0; i < 50; i++) {
  run().catch(error => console.log(error));
}

setTimeout(() => console.log("End Program"), 2000);
and this is the service.js file:

const { workerData, parentPort } = require("worker_threads");

// You can do any heavy stuff here, in a synchronous way,
// without blocking the "main thread"
const sleep = () => {
  // helper: resolves after 500 ms (not used in this example)
  return new Promise(resolve => setTimeout(resolve, 500));
};

let cnt = 0;
for (let i = 0; i < 10e8; i += 1) {
  cnt += 1;
}

parentPort.postMessage({ data: cnt });
asked Mar 2, 2020 at 6:04 by Sourav Chatterjee
- Might vary a little in implementation on different operating systems. Did you just try it and look at system metrics to see how many OS level threads are being used? – jfriend00 Commented Mar 2, 2020 at 6:07
- Distribution of OS level threads across the CPU cores is up to the operating system. It will generally time slice all the threads across all the cores leading to lots of thread context switching (at the OS level) if all the threads are actually active at the same time. – jfriend00 Commented Mar 2, 2020 at 6:09
- Some relevant reading: Deep dive into worker threads in node.js. – jfriend00 Commented Mar 2, 2020 at 6:17
- Very nice explanation provided in the blog. – Sourav Chatterjee Commented Mar 3, 2020 at 4:10
- Also, individual V8 and Event Loop per thread, and message passing via ports. I may be wrong, but it looks conceptually similar to how Erlang/Elixir processes run. – Sourav Chatterjee Commented Mar 3, 2020 at 4:17
2 Answers
So will the below code spawn 50 threads?

....
for (let i = 0; i < 50; i++) {
  run().catch(error => console.log(error));
}
Yes.
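You can check this yourself while the program runs. A minimal sketch (assuming Linux, where ps can report a process's thread count via the nlwp field; note that Node keeps some internal threads of its own, so the number will be a bit higher than just main + workers):

// Spawn a few workers, then ask the OS how many threads this
// node process has (Linux-only check via `ps -o nlwp`).
const { Worker } = require("worker_threads");
const { execSync } = require("child_process");

const osThreadCount = () =>
  parseInt(execSync(`ps -o nlwp= -p ${process.pid}`).toString().trim(), 10);

console.log("threads before:", osThreadCount());

for (let i = 0; i < 5; i++) {
  new Worker("./service.js", {});
}

// Give the workers a moment to start, then check again.
setTimeout(() => console.log("threads after:", osThreadCount()), 500);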
How is it distributed over the CPU cores?
The OS will handle this.
Depending on the OS, there is a feature called processor affinity that allows you to manually set the "affinity", or preference, a task has for a CPU core. On many OSes this is just a hint and the OS will override your preference if it needs to. Some real-time OSes treat this as mandatory, allowing you more control over the hardware (when writing algorithms for self-driving cars or factory robots you sometimes don't want the OS to take control of your carefully crafted software at random times).
Some OSes, like Linux, allow you to set processor affinity with command-line commands, so you can easily write a shell script or use child_process to fine-tune your threads. At the moment there is no built-in way to manage processor affinity for worker threads. There is a third-party module that I'm aware of that does this on Windows and Linux: nodeaffinity, but it doesn't work on macOS (and other OSes like BSD, Solaris/illumos, etc.).
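For example, a rough sketch of the child_process approach (assuming Linux and the taskset utility from util-linux; not a cross-platform solution):

// Restrict this process, and any workers it spawns afterwards,
// to CPU cores 0 and 1 using the Linux `taskset` utility.
const { execSync } = require("child_process");
const { Worker } = require("worker_threads");

// Set the affinity mask of the running process's main thread.
execSync(`taskset -cp 0,1 ${process.pid}`);

// Threads created after this point inherit that mask, so these
// workers will only be scheduled on cores 0 and 1.
for (let i = 0; i < 4; i++) {
  new Worker("./service.js", {});
}

Alternatively, you can start the whole program under taskset from the shell, e.g. taskset -c 0,1 node index.js.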
Try to understand this: Node.js itself is single-threaded, and the number of workers that can actually run in parallel depends on how many cores/hardware threads your system provides. Forking or spawning more workers than that won't help; it will just slow the whole thing down through extra context switching and result in less throughput.
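A sketch of that idea, sizing the worker count to the machine instead of hard-coding 50 (reusing the service.js from the question):

// Spawn at most one worker per available CPU core.
const os = require("os");
const { Worker } = require("worker_threads");

const coreCount = os.cpus().length;

for (let i = 0; i < coreCount; i++) {
  const worker = new Worker("./service.js", {});
  worker.on("message", msg => console.log(`worker ${i}:`, msg));
  worker.on("error", err => console.error(`worker ${i}:`, err));
}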