javascript - How can I play PCM audio I receive from a websocket stream? - Stack Overflow


Question: I am making an application with NodeJS where a user loads a page and the microphone streams the data to the NodeJS server (I am using Socket.IO for the websocket part). I have got the streaming working fine, but now I am wondering how I can play the audio I receive.

Here is a picture of the message I receive from the stream that I am trying to play in the browser; I am guessing it's PCM audio, but I'm no expert: https://i.sstatic.net/bZzfs.png. This object is 1023 elements long.

The code I am using in the browser is as follows (too long to put directly here): https://gist.github.com/ZeroByter/f5690fa9a7c20e2b24cccaa5a8cf3b86

Problem: I ripped the socket.on("mic") handler from here, but I am not sure how to make it efficiently play the audio data it is receiving.

This is not my first time using a WebSocket, and I am pretty well aware of the basics of how WebSockets work, but it is my first time using the Web Audio API, so I need some help with this.
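For context, the server side is essentially just a relay. A simplified sketch of it (the "mic" event name matches my browser code; the rest is boiled down and not my exact code):

// server.js - simplified sketch of the Socket.IO relay
const io = require("socket.io")(3000); // listen on port 3000

io.on("connection", (socket) => {
    // each "mic" event carries one chunk of raw audio from the sender
    socket.on("mic", (data) => {
        // rebroadcast the chunk to every other connected client
        socket.broadcast.emit("mic", data);
    });
});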

asked Apr 7, 2016 at 16:41 by ZeroByter

2 Answers


Yes, your image clip does look like PCM audio, which is Web Audio API friendly.

I wrote such a websocket-based browser client to render PCM audio received from my nodejs server using the Web Audio API ... getting the audio to render is straightforward, however having to babysit the websocket in the single-threaded javascript environment, to receive the next audio buffer, which is inherently preemptive, will cause audible pops/glitches without the tricks outlined below.

The solution which finally worked is to put all the websocket logic into a Web Worker which populates a WW-side circular queue. The browser side then plucks the next audio buffer's worth of data from that WW queue and populates the Web Audio API memory buffer, driven from inside the Web Audio API event loop. It all comes down to avoiding, at all costs, doing any real work on the browser side which could cause the audio event loop to starve or fail to finish in time to service its own next event. A rough sketch of that split is below.
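The Web Worker side of that split looks roughly like this (the queue size, message handling and url here are made up for illustration, assuming the server sends raw float32 samples - this is not the repo code verbatim):

// audio_worker.js - the Web Worker owns the socket and a circular queue
const QUEUE_SLOTS = 16; // illustrative capacity
const queue = new Array(QUEUE_SLOTS).fill(null);
let writeIndex = 0; // producer position (websocket)
let readIndex = 0;  // consumer position (audio event loop)

const ws = new WebSocket("ws://localhost:8080/audio"); // placeholder url
ws.binaryType = "arraybuffer";

// producer: stash each incoming PCM chunk into the next queue slot
ws.onmessage = (event) => {
    queue[writeIndex % QUEUE_SLOTS] = new Float32Array(event.data);
    writeIndex++;
};

// consumer: the browser side asks for the next chunk from inside the
// Web Audio event loop, so the audio callback never touches the socket
onmessage = () => {
    if (readIndex < writeIndex) {
        postMessage(queue[readIndex % QUEUE_SLOTS]);
        readIndex++;
    }
};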

I wrote this as my first foray into javascript, so ... also, you must do a browser F5 to reload the screen to play back a different stream (there are 4 different audio source files to pick from) ...

https://github.com/scottstensland/websockets-streaming-audio

I would like to simplify the usage to become API driven and not baked into the same codebase (separating the low-level logic from userspace calls).

hope this helps

UPDATE - this git repo renders mic audio using the Web Audio API - it is a self-contained example which shows how to access the audio memory buffer ... the repo also has a minimal inline html example which just plays the mic audio, shown here:

<html><head><meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
<title>capture microphone then show time & frequency domain output</title>

<script type="text/javascript">

var webaudio_tooling_obj = function () {

    // see this code at the repo:
    //    https://github.com/scottstensland/webaudioapi-microphone

    // if you want to see the logic to access the audio memory buffer
    // (to record it or send it downstream for processing) look at the
    // other file:  microphone_inline.html
    // this file contains just the minimum logic to render mic audio

    var audioContext = new AudioContext(); // entry point of Web Audio API

    console.log("audio is starting up ...");

    var audioInput = null,
    microphone_stream = null,
    gain_node = null,
    script_processor_node = null,
    script_processor_analysis_node = null,
    analyser_node = null;

    // get browser media handle
    if (!navigator.getUserMedia)
        navigator.getUserMedia = navigator.getUserMedia ||
                                 navigator.webkitGetUserMedia ||
                                 navigator.mozGetUserMedia ||
                                 navigator.msGetUserMedia;

    if (navigator.getUserMedia) { //register microphone as source of audio

        navigator.getUserMedia({audio:true}, 
            function(stream) {
                start_microphone(stream);
            },
            function(e) {
                alert('Error capturing audio.');
            }
            );

    } else { alert('getUserMedia not supported in this browser.'); }

    // ---

    function start_microphone(stream) {

        // create a streaming audio context where source is microphone
        microphone_stream = audioContext.createMediaStreamSource(stream);

        // define as output of microphone the default output speakers
        microphone_stream.connect( audioContext.destination ); 

    } // start_microphone

}(); //  webaudio_tooling_obj = function()

</script>

</head>
<body></body>
</html>

I give a how-to for setting up this file in the above git repo ... the code above shows the minimal logic to render mic audio using the Web Audio API in a browser.
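As a hint of what that other file (microphone_inline.html) does, getting at the raw samples amounts to routing the mic through a ScriptProcessorNode instead of wiring it straight to the speakers - roughly this (my paraphrase, not that file verbatim):

    // inside start_microphone(stream), after creating microphone_stream:
    script_processor_node = audioContext.createScriptProcessor(2048, 1, 1);

    script_processor_node.onaudioprocess = function (event) {
        // raw PCM for this 2048-sample window, Float32 values in -1 .. 1
        var samples = event.inputBuffer.getChannelData(0);
        // ... record the samples or ship them over a websocket here ...
    };

    microphone_stream.connect(script_processor_node);
    script_processor_node.connect(audioContext.destination);

Nothing is written to the output buffer here, so this node itself outputs silence - it is just a tap point for the samples.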

In my experimentation, the right way to stream audio from an arbitrary source (such as a websocket) with low latency and no clicks is by using an AudioWorkletProcessor.

Minimal code example, assuming 16-bit 48 kHz PCM data coming from the socket. This connects to a websocket at ws://localhost/ws. Call the start() function to start streaming.

const sample_rate = 48000; // Hz

// Websocket url
const ws_url = "ws://localhost/ws";

let audio_context = null;
let ws = null;

async function start() {
    if (ws != null) {
        return;
    }

    // Create an AudioContext that plays audio from the AudioWorkletNode  
    audio_context = new AudioContext();
    await audio_context.audioWorklet.addModule('audioProcessor.js');
    const audioNode = new AudioWorkletNode(audio_context, 'audio-processor');
    audioNode.connect(audio_context.destination);

    // Setup the websocket 
    ws = new WebSocket(ws_url);
    ws.binaryType = 'arraybuffer';

    // Process incoming messages
    ws.onmessage = (event) => {
        // Convert 16-bit PCM to Float32 LPCM, which is what the
        // AudioWorkletNode expects
        const int16Array = new Int16Array(event.data);
        let float32Array = new Float32Array(int16Array.length);
        for (let i = 0; i < int16Array.length; i++) {
            float32Array[i] = int16Array[i] / 32768.0;
        }

        // Send the audio data to the AudioWorkletNode
        audioNode.port.postMessage({ message: 'audioData', audioData: float32Array });
    }
    
    ws.onopen = () => {
        console.log('WebSocket connection opened.');
    };

    ws.onclose = () => {
        console.log('WebSocket connection closed.');
    };
    
    ws.onerror = error => {
        console.error('WebSocket error:', error);
    };
}

async function stop() {
    console.log('Stopping audio');
    if (audio_context) {
        await audio_context.close();
        audio_context = null;
        ws.close();
        ws = null;
    }
}

This also needs the following worklet module:

audioProcessor.js

class AudioProcessor extends AudioWorkletProcessor {

    constructor() {
        super();
        this.buffer = new Float32Array();

        // Receive audio data from the main thread, and add it to the buffer
        this.port.onmessage = (event) => {
            let newFetchedData = new Float32Array(this.buffer.length + event.data.audioData.length);
            newFetchedData.set(this.buffer, 0);
            newFetchedData.set(event.data.audioData, this.buffer.length); 
            this.buffer = newFetchedData;
        };
    }

    // Take a chunk from the buffer and send it to the output to be played
    process(inputs, outputs, parameters) {
        const output = outputs[0];
        const channel = output[0];
        const bufferLength = this.buffer.length;
        for (let i = 0; i < channel.length; i++) {
            channel[i] = (i < bufferLength) ? this.buffer[i] : 0;
        }
        this.buffer = this.buffer.slice(channel.length);
        return true;
    }
}

registerProcessor('audio-processor', AudioProcessor);
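One usage note: most browsers keep a new AudioContext suspended until a user gesture, so start() should be wired to something like a click. A minimal hookup (the button labels are just for illustration):

<!-- assumes the start() and stop() functions above are loaded on the page -->
<button onclick="start()">Start streaming</button>
<button onclick="stop()">Stop</button>

Also note that the processor zero-fills whenever its buffer runs dry, and the buffer can grow without bound if the network delivers faster than real time - fine for a demo, but worth capping in production.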
