Dr. P. Vijayalakshmi

B.E., M.E., PhD.

Professor and Head of the Department

Department of Electronics and Communication Engineering


Dr. P. Vijayalakshmi is a distinguished academician and a researcher with over 30 years of experience in teaching and research, specializing in speech signal processing and assistive speech technologies. She is a Senior Member of IEEE (since 2016) and a member of the IEEE Signal Processing Society. Additionally, she is a Fellow of the Institution of Electronics and Telecommunication Engineers (IETE). 

Through her research, she has consistently aimed to enhance human–human and human–machine interaction by bridging communication barriers arising from language, disability, and literacy.

She was recently awarded the IEEE MAS Best Researcher Award (2024) for her research contributions.

Dr. P. Vijayalakshmi earned her Bachelor of Engineering (B.E.) degree in Electronics and Communication Engineering with First Class and Distinction from Bharathidasan University. She then pursued her Master of Engineering (M.E.) in Communication Systems at the Regional Engineering College, Trichy, now known as the National Institute of Technology (NIT), Trichy. She earned her Ph.D. from the prestigious Indian Institute of Technology Madras (IITM). Under the guidance of Prof. Douglas O’Shaughnessy, INRS-EMT, Montreal, Canada, she contributed to a project titled “Speech Recognition and Analysis,” further expanding her expertise in the domain.

Her research focuses on developing innovative solutions to enhance human–human and human–machine communication, particularly for individuals with speech and language disorders. She has authored over 150 publications in refereed journals, including the IEEE/ACM Transactions on Audio, Speech, and Language Processing (TASLP), IET journals, Speech Communication, and Computer Speech and Language, as well as in IEEE and ISCA flagship conferences. She holds two granted patents in the areas of sign-language-to-speech conversion and speech aids for people with cerebral palsy. She has published two book chapters in the De Gruyter Series in Speech Technology and Text Mining in Medicine and Healthcare. She serves as an Area Chair for Speech and Language Processing for Health at INTERSPEECH, ISCA’s flagship conference.

Dr. Vijayalakshmi has successfully completed 15 Government of India-funded projects with total grants exceeding ₹4.6 crores, supported by agencies such as the Ministry of Electronics and Information Technology (MeitY), DST-TIDE, AICTE, and Tamil Virtual Academy, among others. Her projects emphasize socially relevant technologies, including screen readers for the visually impaired, communication aids for differently abled speakers, assessment tools for dysarthric speakers, assistive speech technology for people with articulatory disorders, and multilingual speech translation systems for tourism.

She also spearheaded the release of India’s first Tamil dysarthric speech database through the Linguistic Data Consortium (LDC2021S04), University of Pennsylvania, developed in collaboration with NIEPMD. Dr. Vijayalakshmi’s work continues to bridge communication barriers arising from language, disability, and literacy, making impactful contributions to assistive and accessible technology research.

To carry out research in speech technology, Dr. P. Vijayalakshmi established the Centre for Speech Technology in 2021. The centre houses several PhD scholars, junior research fellows, interns, and UG and PG students, and is equipped with the state-of-the-art systems (GPU-based systems, speakers, microphones, audio mixers, etc.) required for speech technology research, along with an acoustic anechoic chamber for recording speech data. Around 10 faculty members across various departments (ECE, CSE, IT, BME, Mechanical) contribute to the speech research in the lab.

  • Significantly reduce communication barriers, between human beings and between humans and machines, that are predominantly caused by language, disability, or illiteracy.
  • Develop a multidisciplinary environment to promote fundamental and applied research in speech/audio.
  • Develop state-of-the-art, socially beneficial applications/products that enhance human-computer and human-human interactions, by combining the skills of electrical and computer science engineers and linguists.
  • Promote collaborative research and development by establishing ties with industries to bridge the gap between advancements in research and presently available technology.
  • Establish a forum to interact with other researchers in the field and provide training and consultancy services to those who need them.

She has two granted patents to her credit.

  • P. Vijayalakshmi, N. Narenraju, and V. Aiswarya, 530321 – Hidden Markov Model-based sign-language-to-speech conversion system in Tamil. The journey of developing this system made the team realize the need for a bidirectional device, so that deaf and hard-of-hearing users as well as normal-hearing users can communicate without any barrier.
  • P. Vijayalakshmi, T. Nagarajan, and T. A. Mariya Celin, 542440 – Speech-Input Speech-Output Communication Aid for speakers with cerebral palsy. The Speech-Input Speech-Output Communication Aid (SISOCA) for people with cerebral palsy was developed with the intent of aiding their employability. The aid converts the speech produced by its users into more intelligible speech, thereby enhancing communication with those around them.
Dr. Vijayalakshmi has successfully executed 15 Government of India-funded projects with total grants exceeding ₹4.6 crores, supported by agencies such as the Ministry of Electronics and Information Technology (MeitY), DST-TIDE, AICTE, and Tamil Virtual Academy, among others. Furthermore, she leads the Centre for Speech Technology at SSN. Apart from external funding, the centre is supported by the SSN Trust, and its team of faculty members and students has executed various speech research projects worth ₹25 lakh.
  • K. Mrinalini, P. Vijayalakshmi, and T. Nagarajan, “SBSim: A sentence BERT Similarity based evaluation metric for Indian language neural machine translation systems,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, Vol. 30, pp. 1396–1406, Mar. 2022.
  • T. Lavanya, T. Nagarajan, and P. Vijayalakshmi, “Multi-level single-channel speech enhancement using a unified framework for estimating magnitude and phase spectra,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, Vol. 28, pp. 1315–1327, Apr. 2020.
  • T. Lavanya, P. Vijayalakshmi, K. Mrinalini, and T. Nagarajan, “Higher order statistics-driven magnitude and phase spectrum estimation for speech enhancement,” Computer Speech and Language, Vol. 87, pp. 1–23, Mar. 2024.
  • Nethra Sivakumar, Pooja Srinivasan, K. Mrinalini, P. Vijayalakshmi, and T. Nagarajan, “PooRaa-Agri KG: An agricultural knowledge graph-based simplified multilingual query system,” Expert Systems, Wiley, pp. 1–27, Aug. 2023.
  • K. Mrinalini, T. Nagarajan, and P. Vijayalakshmi, “Pause-Based Phrase Extraction and Effective OOV Handling for Low-Resource Machine Translation Systems,” ACM Transactions on Asian and Low-Resource Language Information Processing, Vol. 18, Issue 2, pp. 12:1–12:22, Feb. 2019.
  • P. Vijayalakshmi, M. R. Reddy, and Douglas O’Shaughnessy, “Acoustic Analysis and detection of hypernasality using group delay function,” IEEE Transactions on Biomedical Engineering, Vol. 54, No. 4, pp. 621–629, Apr. 2007.
  • T. A. Mariya Celin, T. Nagarajan, and P. Vijayalakshmi, “Data Augmentation using virtual microphone array synthesis and multi-resolution feature extraction for isolated word dysarthric speech recognition,” IEEE Journal of Selected Topics in Signal Processing, Vol. 14, No. 2, pp. 346–354, Feb. 2020.
  • T. A. Mariya Celin, G. Anushiya Rachel, T. Nagarajan, and P. Vijayalakshmi, “A Weighted Speaker-Specific Confusion Transducer Based Augmentative and Alternative Speech Communication Aid for Dysarthric Speakers,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, Vol. 27, Issue 2, pp. 187–197, Feb. 2019.
  • K. Mrinalini, P. Vijayalakshmi, and T. Nagarajan, “Feature-weighted AdaBoost classifier for punctuation prediction in Tamil and Hindi NLP systems,” Expert Systems, Wiley, pp. 1–19, Dec. 2021.
  • P. Vijayalakshmi, T. Nagarajan, R. Jayapriya, S. Brathindara, K. Krithika, N. Nikhilesh, N. Naren Raju, S. Johanan Joysingh, V. Aiswarya, and K. Mrinalini, “Development of a low-resource wearable continuous gesture-to-speech conversion system,” Disability and Rehabilitation: Assistive Technology, pp. 1–13, Jan. 2022.
  • M. Dhanalakshmi, T. Nagarajan, and P. Vijayalakshmi, “Significant sensors and parameters in assessment of dysarthric speech,” Sensor Review, Vol. 41, No. 3, pp. 271–286, July 2021.
  • P. Actlin Jeeva, T. Nagarajan, and P. Vijayalakshmi, “Adaptive Multi-Band Filter Structure-based Far-End Speech Enhancement,” IET Signal Processing, Vol. 14, Issue 5, pp. 288–299, Jun. 2020.
  • G. Anushiya Rachel, P. Vijayalakshmi, and T. Nagarajan, “Estimation of Glottal Closure Instants from Degraded Speech using a Phase-Difference-Based Algorithm,” Computer Speech and Language, Vol. 46, pp. 136–153, Nov. 2017.
  • V. Sherlin Solomi, P. Vijayalakshmi, and T. Nagarajan, “Exploiting Acoustic Similarities Between Tamil and Indian English in the Development of an HMM-based Bilingual Synthesizer,” IET Signal Processing, Vol. 11, Issue 3, pp. 332–340, May 2017.
  • P. Actlin Jeeva, T. Nagarajan, and P. Vijayalakshmi, “DCT derived spectrum-based speech enhancement algorithm using temporal-domain multiband filtering,” IET Signal Processing, Vol. 10, Issue 8, pp. 965–980, Oct. 2016.
  • G. Anushiya Rachel, Sreenidhi, P. Vijayalakshmi, and T. Nagarajan, “Incorporation of happiness in neutral speech by modifying time-domain parameters of emotive keywords,” Circuits, Systems, and Signal Processing, Vol. 41, pp. 2061–2087, Mar. 2022.
  • M. Nanmalar, P. Vijayalakshmi, and T. Nagarajan, “Literary and colloquial Tamil dialect identification,” Circuits, Systems, and Signal Processing, pp. 1–24, 2022, DOI: 10.1007/s00034-022-01971-2.
  • T. A. Mariya Celin, P. Vijayalakshmi, and T. Nagarajan, “Data Augmentation Techniques for Transfer Learning-based continuous dysarthric speech recognition,” Circuits, Systems, and Signal Processing, pp. 1–22, Aug. 2022.
  • S. Johanan Joysingh, P. Vijayalakshmi, and T. Nagarajan, “Chirp Group Delay based Onset Detection in Instruments with Fast Attack,” Circuits, Systems, and Signal Processing, pp. 1–24, Sep. 2022.

She has completed two consultancy projects with a total funding of ₹13 lakh, funded by HCL Technologies and 4S Medical Research, New Delhi. The HCL project was part of the Shiksha initiative to educate rural students in their mother tongue, given that all the study material was in Hindi. The 4S Medical Research project concentrated on helping deaf and mute people learn to speak through visual inputs.

  • Title: Development of Hindi-Tamil hybrid machine translation system
    Company: HCL Technologies
    Consultant: Dr. P. Vijayalakshmi
    Funding: ₹10.28 lakh
    Duration: Nov. 2022 – June 2024
  • Title: Improving speech visualization for the deaf and mute through signal processing techniques
    Company: 4S Medical Research, New Delhi
    Consultant: Dr. P. Vijayalakshmi
    Funding: ₹2.74 lakh
    Duration: Mar. 2020 – June 2022
  • Title: Assistive Speech Technologies
    Funding Agency: MeitY, Government of India (Consortium leader: IITM)
    Investigators: Dr. P. Vijayalakshmi, Dr. T. Nagarajan
    Duration: Apr. 2022 – Mar. 2026 (4 years)
    Sanctioned Amount: ₹85.26 lakh
    Status: Ongoing
  • Title: Prosody Modelling
    Funding Agency: MeitY, Government of India (Consortium leader: IITM)
    Investigators: Dr. T. Nagarajan, Dr. P. Vijayalakshmi
    Duration: Apr. 2022 – Mar. 2026 (4 years)
    Sanctioned Amount: ₹99 lakh
    Status: Ongoing
  • Title: Standalone speech-to-speech translation systems for Hindi, Tamil and English
    Funding Agency: DST-SEED IMPRINT – IIC (Consortium leader: IIT Dharwad)
    Investigators: Dr. T. Nagarajan, Dr. P. Vijayalakshmi
    Duration: Apr. 2023 – Mar. 2026 (3 years)
    Sanctioned Amount: ₹60 lakh
    Status: Ongoing
  • Title: A multi-sensor-based bidirectional sign-language-to-speech (BiSLATS) translation system in Tamil
    Funding Agency: DST-TIDE
    Investigators: Dr. P. Vijayalakshmi, Dr. T. Nagarajan, Dr. B. Ramani
    Duration: May 2024 – Apr. 2026 (2 years)
    Sanctioned Amount: ₹18.92 lakh
    Status: Ongoing
  • Title: A stand-alone assistive device for the elderly to read vital information from packed consumer products
    Funding Agency: DST-TIDE
    Investigators: Dr. Jino Hans, Dr. B. Ramani, and Dr. P. Vijayalakshmi
    Duration: May 2025 – Apr. 2027 (2 years)
    Sanctioned Amount: ₹15.92 lakh
    Status: Ongoing
  • Title: Real-time multi-dialect automatic speech recognition system for Tamil
    Funding Agency: Tamil Virtual Academy
    Investigators: Dr. B. Bharathi (PI), Dr. P. Vijayalakshmi (Co-PI), Dr. T. Nagarajan (Co-PI)
    Duration: Apr. 2023 – June 2024 (1 year)
    Sanctioned Amount: ₹13.72 lakh
    Status: Completed
  • Title: A Powered EMG-based embedded system controlled transfemoral prosthesis
    Funding Agency: DST-SERB CRG
    Investigators: Dr. G. Satheeshkumar, Dr. M. Dhanalakshmi, Dr. P. Vijayalakshmi
    Duration: Apr. 2023 – Mar. 2026 (3 years)
    Sanctioned Amount: ₹35 lakh
    Status: Ongoing
  • Title: Speech-Input Speech-Output Communication Aid (SISOCA) for speakers with cerebral palsy
    Funding Agency: DST-TIDE
    Investigators: Dr. P. Vijayalakshmi (PI), Dr. T. Nagarajan (Co-PI)
    Duration: Mar. 2017 – Mar. 2020 (3 years)
    Sanctioned Amount: ₹13.72 lakh
    Status: Completed
  • Title: HMM-based Text-to-Speech Synthesis System for Malaysian Tamil
    Funding Agency: Murasu Systems Sdn Bhd, Malaysia
    Investigators: Dr. T. Nagarajan (PI), Dr. P. Vijayalakshmi (Co-PI)
    Duration: Nov. 2016 – Oct. 2017 (1 year)
    Sanctioned Amount: ₹4 lakh
    Status: Completed
  • Title: Development of Text-to-Speech Synthesis Systems for Indian Languages – high quality TTS and small footprint TTS integrated with disability aids (Consortium Project with IITM as leader)
    Funding Agency: Department of Electronics and Information Technology (DeitY)
    Investigators: Dr. T. Nagarajan (PI), Dr. P. Vijayalakshmi (Co-PI)
    Duration: Mar. 2012 – June 2018
    Funding: ₹77 lakh
    Status: Completed
  • Title: Speech enabled interactive inquiry system in Tamil
    Funding Agency: Tamil Virtual Academy, Chennai
    Investigators: Dr. T. Nagarajan (PI), Dr. P. Vijayalakshmi (Co-PI), Dr. B. Bharathi, Sasirekha (Co-PI)
    Duration: Mar. 2016 – Mar. 2017
    Sanctioned Amount: ₹9.52 lakh
    Status: Completed
  • Title: Tamil Pronunciation error detection for children – Prototype
    Funding Agency: Murasu Systems Sdn Bhd, Malaysia
    Investigators: Dr. T. Nagarajan, Dr. P. Vijayalakshmi
    Duration: Aug. 2019 – Sep. 2019
    Sanctioned Amount: ₹0.5 lakh
    Status: Completed
  • Title: Speech Assistive Aids for Visually-challenged people
    Funding Agency: Tamil Virtual Academy (TVA), Chennai
    Investigators: Dr. T. Nagarajan, Dr. P. Vijayalakshmi
    Duration: Aug. 2018 – Aug. 2019
    Sanctioned Amount: ₹25 lakh
    Status: Completed
  • Title: An assessment and intelligibility modification system for dysarthric speakers
    Funding Agency: AICTE-RPS (A), New Delhi
    Investigators: Dr. P. Vijayalakshmi, Dr. T. Nagarajan
    Duration: 2010 – 2012
    Sanctioned Amount: ₹9 lakh
    Status: Completed
  • Title: Design of lab model of speech processor for cochlear implants
    Funding Agency: SSN Trust
    Investigators: Dr. P. Vijayalakshmi, Dr. T. Nagarajan, Dr. A. Shahina
    Duration: 2010 – 2012
    Sanctioned Amount: ₹0.8 lakh
    Status: Completed

Under her guidance, 10 PhD scholars have graduated, and four research scholars are currently working in various sub-domains of speech technology. She has guided more than 30 postgraduate and 40 undergraduate student projects.

  • B. Ramani, “Multilingual to polyglot speech synthesis system for Indian languages by sharing common attributes”.
  • M. Dhanalakshmi, “An Assessment and Intelligibility modification system for Dysarthric speakers”.
  • M. P. Actlin Jeeva, “Dynamic Multi-Band Filter Structures for Simultaneous Improvement of Speech Quality and Intelligibility”.
  • K. Mrinalini, “Development and evaluation of hybrid machine translation systems for English-to-Indian Language under low resource conditions”.
  • T. A. Mariya Celin, “Development of an Augmentative and Alternative speech Communication aid for Dysarthric Speakers”.
  • G. Anushiya Rachel, “A robust phase-difference-based approach to the estimation of glottal closure instants and its applications”.
  • V. Sherlin Solomi, “Development of an HMM-based bilingual synthesizer for Tamil and Indian English by merging acoustically similar phonemes”.
  • M. Nanmalar, “Literary and Colloquial Tamil speech identification for inclusive human computer interaction”.
  • T. Lavanya, “Multilevel single channel speech enhancement using unified framework for magnitude and phase spectrum estimation”.
  • S. Johanan Joysingh, “Analyses and applications of Chirp spectrum”.
  • M. Malarvizhi, “Chirp group delay-based feature extraction”.
  • T. Preethi, “Prosody modelling”.
  • S. Sooriya, “Voice conversion”.