%0 Conference Paper
%B IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP)
%D 2016
%T Equalization matching of speech recordings in real-world environments
%A Germain, François G.
%A Mysore, Gautham J.
%A Fujioka, Takako
%C Shanghai, China
%I IEEE
%M 16021757
%R 10.1109/ICASSP.2016.7471747
%U http://ieeexplore.ieee.org/document/7471747/
%X When different parts of speech content such as voice-overs and narration are recorded in real-world environments with different acoustic properties and background noise, the difference in sound quality between the recordings is typically quite audible and therefore undesirable. We propose an algorithm to equalize multiple such speech recordings so that they sound like they were recorded in the same environment. As the timbral content of the speech and background noise typically differ considerably, simple equalization matching results in a noticeable mismatch in the output signals: a single equalization filter affects both timbres equally and thus cannot disambiguate the competing matching equations of each source. We propose leveraging speech enhancement methods to separate speech and background noise, independently apply equalization filtering to each source, and recombine the outputs. By independently equalizing the separated sources, our method better disambiguates the matching equations associated with each source, so the resulting matched signals are perceptually very similar. Additionally, by retaining the background noise in the final output signals, most artifacts from speech enhancement methods are considerably reduced and in general perceptually masked. Subjective listening tests show that our approach significantly outperforms simple equalization matching.
%Z Audio examples
%8 03/2016
%@ 978-1-4799-9988-0
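The abstract describes a separate-equalize-recombine pipeline: estimate speech and background noise with a speech enhancement method, equalize each component toward the corresponding component of the reference recording, and sum the results. The sketch below illustrates that idea only; it is not the authors' implementation. The separation step is passed in as a placeholder callable (any enhancement routine returning speech and noise estimates), and the equalization shown is a simple long-term-average-spectrum matching filter, which is one plausible choice rather than the filter design used in the paper.

```python
# Minimal sketch of per-source equalization matching, assuming a user-supplied
# speech/noise separation routine. Illustrative only; not the paper's method.
import numpy as np
from scipy.signal import stft, istft


def lta_spectrum(x, fs, nperseg=1024):
    """Long-term average magnitude spectrum of a signal."""
    _, _, X = stft(x, fs=fs, nperseg=nperseg)
    return np.mean(np.abs(X), axis=1)


def match_eq(src, ref_spec, fs, nperseg=1024, eps=1e-8):
    """Filter `src` so its long-term average spectrum matches `ref_spec`."""
    _, _, S = stft(src, fs=fs, nperseg=nperseg)
    src_spec = np.mean(np.abs(S), axis=1)
    gain = ref_spec / (src_spec + eps)            # per-band matching gains
    _, y = istft(S * gain[:, None], fs=fs, nperseg=nperseg)
    return y[: len(src)]


def match_recording(target, reference, fs, separate):
    """Equalize `target` toward `reference`, treating speech and noise separately.

    `separate` is a placeholder for any speech-enhancement method that returns
    (speech_estimate, noise_estimate) for a mono signal.
    """
    s_t, n_t = separate(target)
    s_r, n_r = separate(reference)
    # Equalize each separated source toward its counterpart in the reference,
    # then recombine; keeping the (equalized) noise in the output helps mask
    # residual enhancement artifacts, as the abstract notes.
    s_eq = match_eq(s_t, lta_spectrum(s_r, fs), fs)
    n_eq = match_eq(n_t, lta_spectrum(n_r, fs), fs)
    n = min(len(s_eq), len(n_eq))
    return s_eq[:n] + n_eq[:n]
```

Applying a single matching filter to the unseparated mixture would correspond to the "simple equalization matching" baseline the paper compares against.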