Manipulating PCM data and recording question
Hi guys,
I'm working on a project that implements the LMS algorithm with DirectSound.
LMS basically creates an adaptive filter using data from a playing wav file and data coming from a mic recording. I've created a full-duplex object that plays (streams) a wav file while recording what is being played through a mic.
My problem is as follows:
I need to calculate an error (= original sound - norm_factor * recorded sound).
The need for norm_factor comes from the difference in volume between the original sound and the recorded sound (the mic is placed about 1.5 m from the speaker). When I try to calculate that factor I always get a value close to 1, which doesn't look normal to me.
I calculate this factor by going over one buffer of the played sound and the equivalent buffer of the recorded sound:
for (i = 0; i < length_of_buffer; i++)
{
    sum_of_played   += *(play_buffer + i);
    sum_of_recorded += *(capture_buffer + i);
}
norm_factor = (sum_of_played / length_of_buffer) / (sum_of_recorded / length_of_buffer);
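(For reference, a minimal self-contained sketch of that calculation, assuming the buffers already hold samples as doubles. It averages absolute values so that positive and negative samples don't cancel each other out, and the two 1/length terms cancel in the ratio. estimate_norm_factor is just a hypothetical helper name:)

    #include <math.h>
    #include <stddef.h>

    /* Hypothetical helper: estimate a volume-normalization factor from two
       buffers of doubles.  Uses mean absolute amplitude so that positive and
       negative samples do not cancel each other out. */
    double estimate_norm_factor(const double *play_buffer,
                                const double *capture_buffer,
                                size_t length_of_buffer)
    {
        double sum_of_played   = 0.0;
        double sum_of_recorded = 0.0;
        size_t i;

        for (i = 0; i < length_of_buffer; i++)
        {
            sum_of_played   += fabs(play_buffer[i]);
            sum_of_recorded += fabs(capture_buffer[i]);
        }

        if (sum_of_recorded == 0.0)   /* avoid dividing by zero on silence */
            return 1.0;

        return sum_of_played / sum_of_recorded;   /* the 1/length terms cancel */
    }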
When I tried to debug the process and watch the values of *(play_buffer+i) and *(capture_buffer+i), I saw that they are the same, and the value (which is supposed to be of type double) looks something like this: 128 'ε'.
What are these values, and how does the capture buffer store its data? (I understand that it's PCM data, but does it go through some kind of normalization?)
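(Side note: if the capture format turns out to be 8-bit PCM, the samples are unsigned bytes with 128 as the silence level, which would explain values hovering around 128; 16-bit PCM uses signed 16-bit integers instead. A minimal sketch of converting 8-bit PCM to doubles, assuming that format, might look like this; pcm8_to_double is just a hypothetical helper name:)

    #include <stddef.h>

    /* Hypothetical helper: convert raw 8-bit unsigned PCM (silence = 128)
       into doubles in the range [-1.0, 1.0).  This assumes the capture
       buffer really is 8-bit mono PCM -- check your WAVEFORMATEX. */
    void pcm8_to_double(const unsigned char *pcm, double *out, size_t count)
    {
        size_t i;
        for (i = 0; i < count; i++)
            out[i] = ((double)pcm[i] - 128.0) / 128.0;
    }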
Sorry for the long post
Thanks
Gena
I have read about the LMS algorithm as described here (http://cnx.rice.edu/content/m10481/latest/), and I think that I understand your problem.
I think I understand what you are trying to do here. My first guess is that, since you are using a real-world system, you will see a time delay between when you send the signal to the speaker and when you receive the digitized signal from the microphone. I would suggest that you first (a) figure out exactly what this delay is, or better, (b) write something to figure out this delay for you, and then compensate for the delay before applying your algorithm. Please note that this delay slows the adaptive filter's reaction to changes in the system's transfer function (in this case the distortion caused by the microphone/speaker system), but it will probably not affect your results.
Here's what I would try in order to figure out the delay time:
I'll call the signal from the microphone your input signal and the output to the speaker your output signal. Examine the last 1/10 of a second of samples. I would begin by normalizing your input signal to the same average level as the output (use the absolute values of the samples and multiply all of the input samples by avg_output / avg_input). Then, using a loop, shift the input signal with respect to the output signal by one or a few samples at a time, and for each shift measure the correlation between the two signals. The shift at which the correlation is highest is probably the point at which the distance is compensated for. That gives you the delay as a number of samples (e.g. the input is the output delayed by X samples); a sketch of this search is below.
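A minimal sketch of that search, assuming both signals are already level-normalized arrays of doubles of the same length; find_delay_in_samples and max_shift are placeholder names, not part of any API:

    #include <stddef.h>

    /* Hypothetical helper: find the shift (in samples) that maximizes the
       correlation between output[] and a delayed input[].  Both arrays are
       assumed to hold 'length' samples; returns the best shift found. */
    size_t find_delay_in_samples(const double *output, const double *input,
                                 size_t length, size_t max_shift)
    {
        size_t best_shift = 0;
        double best_corr  = 0.0;
        size_t shift, i;

        for (shift = 0; shift < max_shift && shift < length; shift++)
        {
            double corr = 0.0;

            /* correlate output[i] with input[i + shift] */
            for (i = 0; i + shift < length; i++)
                corr += output[i] * input[i + shift];

            if (shift == 0 || corr > best_corr)
            {
                best_corr  = corr;
                best_shift = shift;
            }
        }
        return best_shift;
    }

The returned shift could then be used to discard (or skip over) that many input samples before running the adaptive filter, so that the played and recorded buffers line up.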
After compensating the input, your algorithm should work (as I understand it) with low error (I'm gonna guess less than 0.1).
Since sound travels at approximately 330 m/s and your microphone is about 1.5 m from the speaker, that alone yields a delay of roughly 4.5 ms. Additionally, you will have delays from the sound passing through your sound card (bus transfer delays, delays from the stream buffers into and out of the card, etc.) that probably amount to another 1-30 ms. This gives a total delay of anywhere from about 5-35 ms; see the snippet below for what that means in samples.
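For a rough feel of how many samples that is, assuming a 44.1 kHz sample rate (your real rate is whatever the wav file uses):

    #include <stdio.h>

    int main(void)
    {
        const double sample_rate = 44100.0;   /* assumed; use your wav's rate */
        const double delays_ms[] = { 4.5, 5.0, 35.0 };
        size_t i;

        for (i = 0; i < sizeof(delays_ms) / sizeof(delays_ms[0]); i++)
            printf("%.1f ms -> about %.0f samples\n",
                   delays_ms[i], delays_ms[i] * 1e-3 * sample_rate);
        return 0;
    }

So even the acoustic path alone is on the order of a couple of hundred samples, which is far too much to ignore.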
I'll be interested in seeing if this works for you. This is kind of an interesting problem.
foreignkid