Dr. P. Vijayalakshmi


Professor

Dr. P. Vijayalakshmi, B.E., M.E., Ph.D.

Email: vijayalakshmip@ssn.edu.in

Ph. Extn: 327

 

Dr. P. Vijayalakshmi (IEEE Member 2008–2015, Senior Member since 2016; Member, IEEE Signal Processing Society; Fellow, IETE), Professor in the Department of Electronics and Communication Engineering, has 24 years of teaching and research experience, including 4 years of exclusive research experience in the fields of speech signal processing and speech pathology.

Education

She received her B.E. (ECE) degree with first class with distinction from Bharathidasan University, completed her M.E. (Communication Systems) at the Regional Engineering College, Trichy (now NIT Trichy), and earned her Ph.D. degree from IIT Madras.

During her Ph.D., she developed various speech recognition systems and a novel approach for the detection and assessment of disordered speech, such as hypernasal and dysarthric speech, in addition to analyzing normal speech. She also worked with Prof. Douglas O’Shaughnessy at the National Institute of Scientific Research (INRS – EMT), Montreal, Canada, as a doctoral trainee for one year on a project titled “Speech recognition and analysis”.

Research

She has published over 100 research papers in refereed international journals and in the proceedings of international conferences. As a principal investigator, she is currently involved in a DST-TIDE-funded project. As a co-investigator, she has completed projects funded by DeitY, MCIT, New Delhi, and the Tamil Virtual Academy, a Government of Tamil Nadu organization; as a principal investigator, she has completed one AICTE-funded project and two projects funded by the SSN Trust. She is a recognized supervisor of Anna University and is currently guiding three full-time and one part-time Ph.D. scholars in the field of speech technology.

Her areas of research include speech enhancement, voice conversion, polyglot speech synthesis, speech recognition, statistical parametric speech synthesis, and speech-enabled assistive technology.

Publications

Patents filed

  1. P. Vijayalakshmi, N. Naren Raju and V. Aishwarya filed a patent titled “Hidden Markov model-based sign language-to-speech conversion system in Tamil” in the Indian Patent Office on 11.10.2018. (Reference number: 201841038594)
  2. P. Vijayalakshmi, T. Nagarajan and T. A. Mariya Celin filed a patent titled “A Speech-input Speech-output Communication Aid for Speakers with Cerebral Palsy” in the Indian Patent Office on 02.08.2019. (Reference number: 201941031287)

Book chapters

  1. P. Vijayalakshmi, T. Nagarajan, “Assessment and intelligibility modification for dysarthric speakers”, Chapter 3 – Voice Technologies for Reconstruction and Enhancement, De Gruyter Series in Speech Technology and Text Mining in Medicine and Healthcare, pp. 67 – 94, Feb. 2020.
  2. P. Vijayalakshmi, T. A. Mariya Celin, T. Nagarajan, “Selective pole modification-based technique for the analysis and detection of hypernasality”, Chapter 2 – Signal and Acoustic Modeling for Speech and Communication Disorders, De Gruyter Series in Speech Technology and Text Mining in Medicine and Healthcare, pp. 33 – 68, Dec. 2018.
 

Journal Publications

  1. T. Lavanya, T. Nagarajan and P. Vijayalakshmi, “Multi-level single-channel speech enhancement using a unified framework for estimating magnitude and phase spectra”, IEEE Transactions on Audio, Speech, and Language Processing, Apr. 2020 (accepted for publication).
  2. P. Vijayalakshmi, M. R. Reddy and Douglas O’Shaughnessy, “Acoustic analysis and detection of hypernasality using group delay function”, IEEE Transactions on Biomedical Engineering, Vol. 54, No. 4, pp. 621–629, Apr. 2007.
  3. T. A. Mariya Celin, T. Nagarajan and P. Vijayalakshmi, “Data augmentation using virtual microphone array synthesis and multi-resolution feature extraction for isolated word dysarthric speech recognition”, IEEE Journal of Selected Topics in Signal Processing, Feb. 2020, DOI: 10.1109/JSTSP.2020.2972161.
  4. T. A. Mariya Celin, G. Anushiya Rachel, T. Nagarajan, P. Vijayalakshmi, “A Weighted Speaker-Specific Confusion Transducer-Based Augmentative and Alternative Speech Communication Aid for Dysarthric Speakers”, IEEE Transactions on Neural Systems and Rehabilitation Engineering, Vol. 27, Issue 2, pp. 187–197, Feb. 2019.
  5. M. P. Actlin Jeeva, T. Nagarajan, P. Vijayalakshmi, “Adaptive Multi-Band Filter Structure-based Far-End Speech Enhancement”, IET Signal Processing, Mar. 2020, DOI: 10.1049/iet-spr.2019.0226.
  6. K. Mrinalini, T. Nagarajan, P. Vijayalakshmi, “Pause-Based Phrase Extraction and Effective OOV Handling for Low-Resource Machine Translation Systems”, ACM Transactions on Asian and Low-Resource Language Information Processing, Vol. 18, Issue 2, pp. 12:1–12:22, Feb. 2019.
  7. G. Anushiya Rachel, P. Vijayalakshmi, T. Nagarajan, “Estimation of Glottal Closure Instants from Degraded Speech using a Phase-Difference-Based Algorithm”, Computer Speech and Language (Elsevier), Vol. 46, pp. 136–153, Nov. 2017.
  8. V. Sherlin Solomi, P. Vijayalakshmi, T. Nagarajan, “Exploiting Acoustic Similarities between Tamil and Indian English in the Development of an HMM-based Bilingual Synthesizer”, IET Signal Processing, Vol. 11, Issue 3, pp. 332–340, May 2017.
  9. M. P. Actlin Jeeva, T. Nagarajan, P. Vijayalakshmi, “DCT derived spectrum-based speech enhancement algorithm using temporal-domain multiband filtering”, IET Signal Processing, Vol. 10, Issue 8, pp. 965–980, Oct. 2016.
  10. M. Dhanalakshmi, T. A. Mariya Celin, T. Nagarajan, P. Vijayalakshmi, “Speech-Input Speech-Output Communication for Dysarthric Speakers Using HMM-Based Speech Recognition and Adaptive Synthesis System”, Circuits, Systems, and Signal Processing, May 2017, DOI: 10.1007/s00034-017-0567-9.
  11. B. Ramani, M. P. Actlin Jeeva, P. Vijayalakshmi, T. Nagarajan, “A Multi-level GMM-Based Cross-Lingual Voice Conversion Using Language-Specific Mixture Weights for Polyglot Synthesis”, Circuits, Systems, and Signal Processing, Vol. 35, pp. 1283–1311, Apr. 2016.

https://scholar.google.com/citations?hl=en&user=CRBglkoAAAAJ

Funded Research Projects:

1.   The project titled “Speech-Input Speech-Output Communication Aid (SISOCA) for Speakers with Cerebral Palsy”, funded by DST-TIDE, is being carried out over a period of 3 years, from May 2017 to March 2020, with a sanctioned fund of Rs. 13.72 lakh. The project aims at developing a communication aid for dysarthric speakers: SISOCA takes dysarthric speech as input and produces error-corrected synthesized speech in the dysarthric speaker’s own voice as output. Our endeavor is to make the device accessible to the wider public in the Indian context, especially in Tamil.
Project Investigators: Dr. P. Vijayalakshmi (PI), Dr. T. Nagarajan (co-PI)

2.   The project titled “HMM-based Text-to-Speech Synthesis System for Malaysian Tamil”, funded by Murasu Systems Sdn Bhd, Malaysia, was carried out for one year, from Nov. 2016 to Oct. 2017, with a sanctioned fund of Rs. 4 lakh. The project aimed at developing a small-footprint text-to-speech synthesis system for Malaysian Tamil, to be ported to iPhone and Android devices.
Project Investigators: Dr. T. Nagarajan (PI), Dr. P. Vijayalakshmi (Co-PI)

3.   The project titled “Development of Text-to-Speech Synthesis Systems for Indian Languages – high quality TTS and small footprint TTS integrated with disability aids” is a joint venture taken up by a consortium of 12 organizations, with IIT Madras as the head. It is funded by the Department of Electronics and Information Technology (DeitY), Ministry of Communication and Information Technology (MCIT), Government of India; its net worth is Rs. 12.66 crores, of which SSNCE has received Rs. 77 lakh. SSNCE has been assigned the task of developing small-footprint Tamil and bilingual (Tamil and Indian English) TTS systems. To date, the team has developed monolingual and bilingual unit-selection and HMM-based speech synthesis systems. Further, polyglot HMM-based synthesizers, capable of synthesizing Tamil, Hindi, Malayalam, Telugu, and English speech, have been developed using voice conversion and speaker adaptation techniques.
Project Investigators at SSNCE: Dr. T. Nagarajan (PI), Dr. P. Vijayalakshmi (co-PI), Dr. A. Shahina (co-PI).

4.   The project titled “Speech enabled interactive enquiry system in Tamil”, funded by the Tamil Virtual Academy, a Tamil Nadu Government organization, was carried out for one year starting from March 2016, with a sanctioned fund of Rs. 9.52 lakh. The project developed a speech-enabled enquiry system in Tamil for use in tourism/agriculture. It consists primarily of a speech recognition system (which yields the text corresponding to the given speech input), a database, and a text-to-speech synthesis system.
Project Investigators: Dr. T. Nagarajan (PI), Dr. P. Vijayalakshmi (Co-PI), Dr. B. Bharathi (co-PI), Ms. B. Sasirekha (co-PI).

5.   We carried out a project titled “Assessment and intelligibility modification of dysarthric speakers”, funded by the All India Council for Technical Education (AICTE). This three-year project (Dec. 2010 – Dec. 2013), with Rs. 9 lakh funding, aimed at developing a detection and assessment system by analyzing problems related to the laryngeal, velopharyngeal, and articulatory subsystems of dysarthric speakers, using a speech recognition system and relevant signal-processing-based techniques.
Project Investigators: Dr. P. Vijayalakshmi (PI), Dr. T. Nagarajan (co-PI)

6.   We have completed two research projects funded by the SSN Trust, worth Rs. 2 lakh, titled “Design of a lab model of an improved speech processor for cochlear-implants” and “Anatomical vibration sensor speech corpus for speech applications in noisy environments”, during the period Jun. 2010 – Jun. 2012. The objective of the first project was to design a lab model of the speech processor for a cochlear implant based on vocoders, so that the effect of system-specific parameters, such as filter order and bandwidth, on speech intelligibility could be analysed. The objective of the second project was to build a speech corpus of throat-microphone speech and to develop a speaker identification system using that corpus.
Project Investigators: Dr. P. Vijayalakshmi, Dr. T. Nagarajan and Dr. A. Shahina

Students Associated
Ph.D. Scholars

  1. B. Ramani (June 2010), “Multilingual to polyglot speech synthesis system for Indian languages by sharing common attributes”, Part time. – Completed
  2. M. Dhanalakshmi (June 2013), “An Assessment and Intelligibility modification system for Dysarthric speakers”, Part time. (thesis under preparation)
  3. M. P. Actlin Jeeva (January 2014), “Dynamic Multi-Band Filter Structures for Simultaneous Improvement of Speech Quality and Intelligibility”, Full time – Completed
  4. K. Mrinalini (Jan. 2016), “A hybrid approach for speech to speech translation system”, Full time
  5. T. A. Mariya Celin (Jan. 2016), “Development of an Augmentative and Alternative Communication for Severe Dysarthric Speakers in Indian Languages”, Full time.
  6. T. Lavanya (Jan. 2017), “Maintaining Speech Intelligibility in Challenging Conditions”, Full time. 

Guided around 35 M.E. theses and 30 undergraduate projects.
