Electrolarynx Voice Recognition Utilizing Pulse Coupled Neural Network

Fatchul Arifin, Tri Arief Sardjono, Mauridhy Hery Purnomo

Abstract


Laryngectomy patients cannot speak normally because their vocal cords have been removed. The easiest way for such patients to speak again is with an electrolarynx, a device placed against the lower chin that turns vibration of the neck during speech into sound. Meanwhile, voice recognition technology has been developing very rapidly, and it is expected that it can also serve laryngectomy patients who use an electrolarynx. This paper describes a system for electrolarynx speech recognition. The system has two main parts: feature extraction and pattern recognition. A Pulse Coupled Neural Network (PCNN) is used to extract the features and characteristics of electrolarynx speech, and the effect of varying β (one of the PCNN parameters) was also investigated. A multilayer perceptron is used to recognize the resulting sound patterns. Two kinds of recognition are carried out in this paper: speech recognition, which identifies a specific utterance regardless of who speaks it, and speaker recognition, which identifies a specific utterance from a specific person. The system performed well. Electrolarynx speech recognition was tested by distinguishing the "A" sound from "not A" sounds, and the system achieved 94.4% validation accuracy. Electrolarynx speaker recognition was tested by recognizing the word "saya" (Indonesian for "I") spoken by several different speakers, and the system achieved 92.2% validation accuracy. The best PCNN β parameter for electrolarynx recognition was found to be 3.
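The abstract outlines the pipeline (PCNN feature extraction followed by MLP classification) without implementation details, so the following is only a minimal sketch of the idea, not the authors' code: a simplified PCNN is run over a spectrogram of the recorded electrolarynx speech, its per-iteration firing counts are collected as a feature vector, and a multilayer perceptron classifies those vectors. The spectrogram front end, the decay and amplitude constants, and the helper names (pcnn_time_signature, extract_features) are illustrative assumptions; only the role of β, which in the standard PCNN formulation is the linking strength, and the reported best value β = 3 come from the abstract.

import numpy as np
from scipy.signal import spectrogram            # assumed front end; the paper does not specify one
from sklearn.neural_network import MLPClassifier

def pcnn_time_signature(stimulus, beta=3.0, n_iter=40,
                        a_f=0.1, a_l=0.3, a_e=0.3,
                        v_f=0.5, v_l=0.2, v_e=20.0):
    # Run a basic PCNN over a 2-D stimulus (e.g. a log-spectrogram) and return
    # the number of neurons that fire at each iteration (a "time signature").
    S = stimulus / (stimulus.max() + 1e-12)     # normalize input to [0, 1]
    F = np.zeros_like(S)                        # feeding compartment
    L = np.zeros_like(S)                        # linking compartment
    E = np.ones_like(S)                         # dynamic threshold
    Y = np.zeros_like(S)                        # pulse output
    kernel = np.array([[0.5, 1.0, 0.5],
                       [1.0, 0.0, 1.0],
                       [0.5, 1.0, 0.5]])        # local linking weights
    h, w = S.shape
    signature = []
    for _ in range(n_iter):
        pad = np.pad(Y, 1)                      # feedback from neurons that fired last step
        link = sum(kernel[i, j] * pad[i:i + h, j:j + w]
                   for i in range(3) for j in range(3))
        F = np.exp(-a_f) * F + v_f * link + S   # feeding input
        L = np.exp(-a_l) * L + v_l * link       # linking input
        U = F * (1.0 + beta * L)                # internal activity; beta is the linking strength
        Y = (U > E).astype(float)               # neurons pulse when activity exceeds threshold
        E = np.exp(-a_e) * E + v_e * Y          # raise threshold of fired neurons, decay others
        signature.append(Y.sum())
    return np.asarray(signature)

def extract_features(wave, fs=8000, beta=3.0):
    # Hypothetical feature extractor: log-spectrogram -> PCNN time signature.
    _, _, spec = spectrogram(wave, fs=fs, nperseg=256)
    return pcnn_time_signature(np.log1p(spec), beta=beta)

# Pattern-recognition stage: a multilayer perceptron on the PCNN signatures.
# X holds one time signature per recording; y holds the labels ("A" vs "not A",
# or speaker identities for the speaker-recognition experiment).
# mlp = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000).fit(X, y)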

Keywords


Electrolarynx speech recognition; Pulse Coupled Neural Network (PCNN); Multi-Layer Perceptron (MLP)


DOI: http://dx.doi.org/10.12962/j20882033.v21i3.45




IPTEK Journal of Science and Technology by Lembaga Penelitian dan Pengabdian kepada Masyarakat, ITS is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Based on a work at https://iptek.its.ac.id/index.php/jts.