javascript - Web Audio: Karplus Strong String Synthesis - Stack Overflow

Edit: Cleaned up the code and the player (on Github) a little so it's easier to set the frequency

I'm trying to synthesize strings using the Karplus Strong string synthesis algorithm, but I can't get the string to tune properly. Does anyone have any idea?

As linked above, the code is on Github: https://github.com/achalddave/Audio-API-Frequency-Generator (the relevant bits are in strings.js).

Wiki has the following diagram:

So essentially, I generate the noise, which then gets output and sent to a delay filter simultaneously. The delay filter is connected to a low-pass filter, which is then mixed with the output. According to Wikipedia, the delay should be N samples, where N is the sampling frequency divided by the fundamental frequency (N = f_s/f_0).

Excerpts from my code:

Generating the noise (bufferSize is 2048, but that shouldn't matter too much)

var buffer = context.createBuffer(1, bufferSize, context.sampleRate);
var bufferSource = context.createBufferSource();
bufferSource.buffer = buffer;

var bufferData = buffer.getChannelData(0);
// delaySamples is the loop length in samples, i.e. context.sampleRate / frequency (see below)
for (var i = 0; i < delaySamples+1; i++) {
    bufferData[i] = 2*(Math.random()-0.5); // random noise from -1 to 1
}

Create a delay node

var delayNode = context.createDelayNode();

We need to delay by f_s/f_0 samples. However, the delay node takes the delay in seconds, so we need to divide that by the samples per second, and we get (f_s/f_0) / f_s, which is just 1/f_0.

var delaySeconds = 1/(frequency);
delayNode.delayTime.value = delaySeconds;
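For example, at a 44100 Hz sample rate and frequency = 440, that gives delaySeconds = 1/440 ≈ 0.00227 s, which corresponds to about 100.2 samples of delay.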

Create the lowpass filter (the frequency cutoff, as far as I can tell, shouldn't affect the frequency, and is more a matter of whether the string "sounds" natural):

var lowpassFilter = context.createBiquadFilter();
lowpassFilter.type = lowpassFilter.LOWPASS; // explicitly set type
lowpassFilter.frequency.value = 20000; // make things sound better

Connect the noise to the output and the delay node (destination = context.destination and was defined earlier):

bufferSource.connect(destination);
bufferSource.connect(delayNode);

Connect the delay to the lowpass filter:

delayNode.connect(lowpassFilter);

Connect the lowpass to the output and back to the delay*:

lowpassFilter.connect(destination);
lowpassFilter.connect(delayNode);
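For reference, here is everything above assembled into one snippet (just a sketch: frequency is assumed to be 440 here, and bufferSource.noteOn(0) — the prefixed API's equivalent of start(0) — is what actually starts playback; it isn't shown in the excerpts):

var context = new webkitAudioContext();
var destination = context.destination;
var frequency = 440;
var bufferSize = 2048;
var delaySamples = Math.round(context.sampleRate / frequency);

// Noise burst
var buffer = context.createBuffer(1, bufferSize, context.sampleRate);
var bufferData = buffer.getChannelData(0);
for (var i = 0; i < delaySamples + 1; i++) {
    bufferData[i] = 2 * (Math.random() - 0.5);
}
var bufferSource = context.createBufferSource();
bufferSource.buffer = buffer;

// Feedback loop: delay -> lowpass -> back into the delay
var delayNode = context.createDelayNode();
delayNode.delayTime.value = 1 / frequency;

var lowpassFilter = context.createBiquadFilter();
lowpassFilter.type = lowpassFilter.LOWPASS;
lowpassFilter.frequency.value = 20000;

bufferSource.connect(destination);
bufferSource.connect(delayNode);
delayNode.connect(lowpassFilter);
lowpassFilter.connect(destination);
lowpassFilter.connect(delayNode);

bufferSource.noteOn(0);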

Does anyone have any ideas? I can't figure out whether the issue is my code, my interpretation of the algorithm, my understanding of the API, or (though this is least likely) an issue with the API itself.


*Note that on Github, there's actually a Gain Node between the lowpass and the output, but this doesn't really make a big difference in the output.

asked Oct 31, 2012 at 7:21 by Achal Dave · edited Feb 27, 2013 at 0:30
  • I'm just fiddling with this, and I really don't know what I'm doing. But try setting the frequency to 241. On my Mac that creates some weird noise. Maybe that tells you something? You seem a lot more proficient with the maths and theoretics. :) – Oskar Eriksson Commented Oct 31, 2012 at 11:21
  • Hm, that's interesting. To be honest, apart from one EE course I'm not too familiar with the theory either, so much of this is piecing things together and asking around. Thanks for the help, though, this might give some insight if I poke around more. – Achal Dave Commented Oct 31, 2012 at 18:55
  • This probably isn't the issue since I think Lowpass is the default, but you should probably set your filter type explicitly in the code... something like lowpassFilter.type = lowpassFilter.LOWPASS. – Matt Diamond Commented Oct 31, 2012 at 19:33
  • That would probably make the code more explicit. I'll update that. (it unfortunately doesn't fix the issue..., thanks though) – Achal Dave Commented Oct 31, 2012 at 19:36
  • This is really puzzling. At first I thought that maybe Web Audio delay nodes couldn't handle such low delay times, but if that was the case there should be an upper level where the pitch doesn't increase anymore, which doesn't seem to be the case. Another strange thing is that the pitch isn't consistent over the octaves either. Pitches of 220, 440 and 880 don't generate the same note in different octaves. This leads me to think that there might be an error in the calculations somewhere, but I can't see where. – Oskar Eriksson Commented Nov 1, 2012 at 9:59

1 Answer


Here's what I think is the problem. I don't think the DelayNode implementation is designed to handle such tight feedback loops. For a 441 Hz tone, for example, that's only 100 samples of delay, and the DelayNode implementation probably processes its input in blocks of 128 or more. (The delayTime attribute is "k-rate", meaning changes to it are only processed in blocks of 128 samples. That doesn't prove my point, but it hints at it.) So the feedback comes in too late, or only partially, or something.

EDIT/UPDATE: As I state in a comment below, the actual problem is that a DelayNode in a cycle adds 128 sample frames between output and input, so that the observed delay is 128 / sampleRate seconds longer than specified.
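If that figure is right, one quick way to test it (just a sketch, and only workable while the requested delay is longer than 128 samples, i.e. while sampleRate / frequency > 128) is to shorten the requested delayTime by exactly the amount the cycle adds:

// Compensate for the 128 extra sample frames a DelayNode adds inside a feedback cycle
var compensated = 1 / frequency - 128 / context.sampleRate;
delayNode.delayTime.value = Math.max(compensated, 0);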

My advice (and what I've begun to do) is to implement the whole Karplus-Strong including your own delay line in a JavaScriptNode (now known as a ScriptProcessorNode). It's not hard and I'll post my code once I get rid of an annoying bug that can't possibly exist but somehow does.

Incidentally, the tone you (and I) get with a delayTime of 1/440 (which is supposed to be an A) seems to be a G, two semitones below where it should be. Doubling the frequency raises it to a B, four semitones higher. (I could be off by an octave or two - kind of hard to tell.) Probably one could figure out what's going on (mathematically) from a couple more data points like this, but I won't bother.
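The 128-frame figure from the edit above actually lines up with these observations: at a 44100 Hz sample rate, a delayTime of 1/440 is about 100 samples, and a loop of 100 + 128 = 228 samples repeats at 44100/228 ≈ 193 Hz, which is close to G3 (196 Hz); with 1/880 the loop is about 50 + 128 = 178 samples, i.e. ≈ 248 Hz, close to B3 (247 Hz). That matches the G and B noted above.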

EDIT: Here's my code, certified bug-free.

var context = new webkitAudioContext();

var frequency = 440;
var impulse = 0.001 * context.sampleRate; // length of the noise burst, in samples (1 ms)

var node = context.createJavaScriptNode(4096, 0, 1); // no inputs, one output
var N = Math.round(context.sampleRate / frequency);  // delay-line length in samples
var y = new Float32Array(N); // the delay line itself
var n = 0;                   // current read/write position in the delay line
node.onaudioprocess = function (e) {
  var output = e.outputBuffer.getChannelData(0);
  for (var i = 0; i < e.outputBuffer.length; ++i) {
    // excitation: random noise for the first `impulse` samples, silence afterwards
    var xn = (--impulse >= 0) ? Math.random()-0.5 : 0;
    // Karplus-Strong recurrence: new sample = excitation + average of the two
    // oldest samples in the delay line (the averaging is the lowpass filter)
    output[i] = y[n] = xn + (y[n] + y[(n + 1) % N]) / 2;
    if (++n >= N) n = 0;
  }
}

node.connect(context.destination);
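A note on newer API revisions: the same code should work with the unprefixed names — new AudioContext() instead of new webkitAudioContext(), and context.createScriptProcessor(4096, 0, 1) instead of context.createJavaScriptNode(4096, 0, 1) — with the onaudioprocess callback unchanged (assuming the browser still supports ScriptProcessorNode, which has since been deprecated in favour of AudioWorklet).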
