Acoustic sensors have been spotlighted as one of the most intuitive two-way communication interfaces between humans and machines. However, conventional acoustic sensors rely on a condenser-type device that measures the capacitance between two conducting layers, which results in low sensitivity, short recognition distance, and low speaker recognition rates.
The team fabricated a flexible piezoelectric membrane that mimics the basilar membrane in the human cochlea. Different frequency components of incoming sound resonate with corresponding regions of the trapezoidal piezoelectric membrane, which, the team says, converts voice into electrical signals as a highly sensitive, self-powered acoustic sensor.
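The article does not describe the sensor's signal chain, but the multi-channel, resonance-based behaviour it describes can be approximated in software as a band-pass filterbank, with each channel standing in for one resonant region of the membrane. The sketch below is an illustration only; the channel count and band edges are assumptions, not figures from the study.

```python
# Illustrative sketch only: models the multi-channel, resonance-based idea
# as a software filterbank. Band edges and channel count are assumptions.
import numpy as np
from scipy.signal import butter, lfilter

def membrane_channels(signal, fs, bands):
    """Split a voice signal into per-channel outputs, one per assumed resonant band."""
    outputs = []
    for low, high in bands:
        # 4th-order Butterworth band-pass as a stand-in for one resonant region
        b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
        outputs.append(lfilter(b, a, signal))
    return np.stack(outputs)

# Example: 8 hypothetical channels spanning the voice band
fs = 16000
bands = [(100, 300), (300, 600), (600, 1000), (1000, 1500),
         (1500, 2200), (2200, 3000), (3000, 4500), (4500, 7000)]
voice = np.random.randn(fs)                     # placeholder for a 1-second recording
channels = membrane_channels(voice, fs, bands)  # shape: (8, fs)
```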
According to KAIST, this multi-channel piezoelectric acoustic sensor is more than twice as sensitive as conventional acoustic sensors, captures richer voice information, and can detect faint sounds from farther away. Paired with a machine learning algorithm, the sensor achieves a 97.5% speaker recognition rate, cutting the error rate by 75% relative to a reference microphone.
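KAIST does not disclose which machine learning algorithm was used, so the following is only a minimal, hypothetical sketch of speaker identification on multi-channel sensor output: per-channel log-energy features fed to an off-the-shelf classifier. The feature definition, frame sizes, and classifier choice are all assumptions for illustration.

```python
# Hypothetical sketch of speaker identification from multi-channel sensor frames.
# The actual algorithm in the study is not described in the article.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def frame_features(channels, frame_len=400, hop=160):
    """Log-energy per channel for each analysis frame (channels: [n_ch, n_samples])."""
    n_ch, n = channels.shape
    feats = []
    for start in range(0, n - frame_len + 1, hop):
        frame = channels[:, start:start + frame_len]
        feats.append(np.log(np.mean(frame ** 2, axis=1) + 1e-10))
    return np.array(feats)                  # shape: [n_frames, n_ch]

# Placeholder enrolment data: stacked frame features and speaker labels
X = np.random.randn(200, 8)                 # 200 frames, 8 channels (assumed)
y = np.random.randint(0, 4, size=200)       # 4 hypothetical enrolled speakers

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
predicted_speaker = clf.predict(X[:1])      # identify the speaker of a new frame
```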
AI speaker recognition is expected to be a key enabler of future individualised, customised services. However, conventional approaches that try to improve recognition rates through software upgrades alone have delivered only limited gains. The team says it was able to enhance the speaker recognition system by replacing the existing hardware with the flexible piezoelectric acoustic sensor.
Further software improvement of the piezoelectric acoustic sensor is expected to significantly increase speaker and voice recognition rates in diverse environments.