I am trying to build a facial recognition module to use in my project. This module will later be used with Electron.js to build a cross-platform application.
The basic idea is:
The user is presented with a webpage showing his/her webcam feed. S/he can click a capture button, which saves the image on the server side. This is repeated a number of times to gather training data for the facial recognition model. I implemented the image capture part using a third-party npm module called 'node-webcam':
const nodeWebCam = require('node-webcam');
const fs = require('fs');
const app = require('express')();
const path = require('path');
// specifying parameters for the pictures to be taken
var options = {
    width: 1280,
    height: 720,
    quality: 100,
    delay: 1,
    saveShots: true,
    output: "jpeg",
    device: false,
    callbackReturn: "location"
};
// create instance using the above options
var webcam = nodeWebCam.create(options);
// capture function that snaps <amount> images and saves them with the given name in a folder of the same name
var captureShot = (amount, i, name) => {
    var path = `./images/${name}`;
    // create folder if and only if it does not exist
    if (!fs.existsSync(path)) {
        fs.mkdirSync(path);
    }
    // capture the image
    webcam.capture(`./images/${name}/${name}${i}.${options.output}`, (err, data) => {
        if (!err) {
            console.log('Image created');
        }
        console.log(err);
        i++;
        if (i <= amount) {
            captureShot(amount, i, name);
        }
    });
};
// call the capture function
captureShot(30, 1, 'robin');
app.get('/', (req, res) => {
    res.sendFile(path.join(__dirname, 'index.html'));
});
app.listen(3000, () => {
    console.log("Listening at port 3000....");
});
However, I am lost after this part. I don't know how to get the live webcam feed displayed on the webpage the user sees. I also realized later that this is server-side code, and there is no way to call the captureShot() function from the client side. Any help would be appreciated.
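For reference, this is roughly what I imagine the client side would need to look like. The /capture route here is hypothetical (nothing like it exists in my server code yet); the live feed part uses the browser's getUserMedia API:
<!-- index.html (sketch) -->
<video id="preview" autoplay playsinline></video>
<button id="capture">Capture</button>
<script>
    const video = document.getElementById('preview');
    // show the user's webcam feed in the page
    navigator.mediaDevices.getUserMedia({ video: true })
        .then((stream) => { video.srcObject = stream; })
        .catch((err) => console.error('Could not access webcam:', err));
    // ask the server to run its capture logic (hypothetical /capture route)
    document.getElementById('capture').addEventListener('click', () => {
        fetch('/capture', { method: 'POST' })
            .then((res) => res.text())
            .then((msg) => console.log(msg));
    });
</script>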
2 Answers
Turn the capture shot into a promise, then render the result in the route. We are going to set up a route that runs the function and returns the path to the image inside an HTML string. I do not know exactly what your function returns when it creates the image, but assuming it returns the exact path, you resolve that path in the callback.
You also need to serve a static directory with Express, so that the saved files are reachable at URLs like http://localhost:3000/myimage.jpg
const express = require('express');
const nodeWebCam = require('node-webcam');
const fs = require('fs');
const path = require('path');
const app = express();
app.use(express.static('images')); // images folder to be served
// Now we can just say localhost:3000/image.jpg
// specifying parameters for the pictures to be taken
var options = {
    width: 1280,
    height: 720,
    quality: 100,
    delay: 1,
    saveShots: true,
    output: "jpeg",
    device: false,
    callbackReturn: "location"
};
// create instance using the above options
var webcam = nodeWebCam.create(options);
// capture function that snaps <amount> images and saves them with the given name in a folder of the same name
var captureShot = (amount, i, name) => {
    // Make sure this resolves with a real URL to an image.
    return new Promise((resolve) => {
        var dir = `./images/${name}`;
        // create folder if and only if it does not exist
        if (!fs.existsSync(dir)) {
            fs.mkdirSync(dir);
        }
        // capture the image
        webcam.capture(`./images/${name}/${name}${i}.${options.output}`, (err, data) => {
            if (!err) {
                console.log('Image created');
            } else {
                console.log(err);
            }
            i++;
            if (i <= amount) {
                captureShot(amount, i, name);
            }
            resolve('/path/to/image.jpg');
        });
    });
};
// call the capture function from the route
app.get('/', (req, res) => {
    captureShot(30, 1, 'robin')
        .then((response) => {
            // Whatever we resolve in captureShot is what response will contain
            res.send(`<img src="${response}"/>`);
        });
});
app.listen(3000, () => {
    console.log("Listening at port 3000....");
});
If you are trying to design a page with specific dynamic content, use a templating engine with Express such as EJS (http://ejs.co). Then you can render the page with dynamic values and set an <img src="<%= image %>"/> dynamically for the user after taking the picture.
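For example, a minimal sketch of the EJS approach (assuming the ejs package is installed; the views/index.ejs filename and the image variable are placeholders):
// server side (sketch): render a template instead of sending a raw HTML string
app.set('view engine', 'ejs'); // tells Express to use EJS for res.render()
app.get('/', (req, res) => {
    captureShot(30, 1, 'robin')
        .then((image) => {
            // looks for views/index.ejs and injects the image URL into it
            res.render('index', { image: image });
        });
});
// views/index.ejs (sketch)
// <h1>Latest capture</h1>
// <img src="<%= image %>"/>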
Here is a stripped-down example of a promise resolved with .then(), of the kind you would combine with the static directory served by Express; it should give you the idea of what I am saying.
function create() {
    return new Promise((resolve, reject) => {
        if (true) {
            resolve('https://example.com/image.jpg');
        } else {
            reject('Error');
        }
    });
}
create()
    .then((response) => {
        console.log(`<img src="${response}"/>`);
    })
    .catch((error) => {
        // Error
        console.log(error);
    });
I'm working on a solution based on puppeteer (headless Google Chrome) which is super portable and streams video acceptably fast (800x600 frames at 40fps). It is super easy to install and use, and I'm already using it to capture video, audio and the desktop in desktop apps based on GTK, Cairo, OpenGL, and Qt without problems.
https://www.npmjs.com/package/camera-capture
I'm familiar with OpenCV, but libraries based on it are sincerely hard for new users to install. My project doesn't require any native dependency or client-server communication, although it may not be lightweight enough for embedding in small devices (puppeteer's size is around 80 MB). Feedback is welcome! Thanks
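To give an idea of the approach, here is a rough sketch of the general puppeteer technique, not the camera-capture package's actual API; the launch flags, the headed mode, and the localhost page are assumptions:
// Sketch: Chromium opens a page, calls getUserMedia inside it, draws one frame to
// a canvas, and returns it to Node as a JPEG data URL.
const puppeteer = require('puppeteer');
const fs = require('fs');

(async () => {
    const browser = await puppeteer.launch({
        headless: false, // real-camera access is most reliable in a headed Chromium
        args: ['--use-fake-ui-for-media-stream'] // auto-grant the camera permission prompt
    });
    const page = await browser.newPage();
    // getUserMedia needs a secure context; localhost (e.g. the Express app from the question) qualifies
    await page.goto('http://localhost:3000');
    const dataUrl = await page.evaluate(async () => {
        const stream = await navigator.mediaDevices.getUserMedia({ video: true });
        const video = document.createElement('video');
        video.srcObject = stream;
        await video.play(); // resolves once frames are flowing and dimensions are known
        const canvas = document.createElement('canvas');
        canvas.width = video.videoWidth;
        canvas.height = video.videoHeight;
        canvas.getContext('2d').drawImage(video, 0, 0);
        stream.getTracks().forEach((track) => track.stop()); // release the camera
        return canvas.toDataURL('image/jpeg');
    });
    fs.writeFileSync('frame.jpg', Buffer.from(dataUrl.split(',')[1], 'base64'));
    await browser.close();
})();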