Dr. P. Vijayalakshmi


Professor

Dr. P. Vijayalakshmi, B.E., M.E., Ph.D.

Email: vijayalakshmip@ssn.edu.in

Ph. Extn: 327

 

Dr. P. Vijayalakshmi (IEEE Member 2008–15, Senior Member since 2016; Member, IEEE Signal Processing Society; Fellow, IETE), Professor in the Department of Electronics and Communication Engineering, has 21 years of teaching and research experience, including 4 years of exclusive research experience in the fields of speech signal processing and speech pathology.

 Education

 She received the B.E. (ECE) degree with first class with distinction from Bharathidasan University, completed the M.E. (Communication Systems) at Regional Engineering College, Trichy (now NIT Trichy), and earned her Ph.D. from IIT Madras.

 During her Ph.D., she developed various speech recognition systems and a novel approach for the detection and assessment of disordered speech, such as hypernasal and dysarthric speech, in addition to analyzing normal speech. She also worked with Prof. Douglas O'Shaughnessy at the National Institute of Scientific Research (INRS-EMT), Montreal, Canada, as a doctoral trainee for a period of one year on a project titled "Speech recognition and analysis".

 Research

 She has published over 70 research papers in refereed international journals and in proceedings of international conferences. As a principal investigator, she is currently involved in a DST-TIDE-funded project. As a co-investigator, she is currently involved in projects funded by DeitY, MCIT, New Delhi, and the Tamil Virtual Academy, a Government of Tamil Nadu organization. As a principal investigator, she has also completed one AICTE-funded project and two projects funded by the SSN Trust. She is a recognized supervisor of Anna University and is currently guiding six full-time and one part-time Ph.D. scholars in the field of speech technology.

 Her areas of research include speech enhancement, voice conversion, polyglot speech synthesis, speech recognition, statistical parametric speech synthesis, and speech technology for healthcare applications.

Publications

 Book chapters

  1. P. Vijayalakshmi, T. Nagarajan, "Assessment and intelligibility modification for dysarthric speakers", Voice Technologies for Reconstruction and Enhancement, De Gruyter Series in Speech Technology and Text Mining in Medicine and Healthcare (submitted).
  2. P. Vijayalakshmi, T. A. Mariya Celin, T. Nagarajan, "Selective pole modification-based technique for the analysis and detection of hypernasality", Signal and Acoustic Modeling for Speech and Communication Disorders, De Gruyter Series in Speech Technology and Text Mining in Medicine and Healthcare (submitted).

 Journal Publications

  1. P. Vijayalakshmi, M. R. Reddy, and Douglas O'Shaughnessy, "Acoustic Analysis and detection of hypernasality using group delay function", IEEE Trans. on Biomedical Engineering, Vol. 54, No. 4, pp. 621–629, April 2007.
  2. G. Anushiya Rachel, P. Vijayalakshmi, T. Nagarajan, "Estimation of Glottal Closure Instants from Degraded Speech using a Phase-Difference-Based Algorithm", Computer Speech and Language (Elsevier), DOI: 10.1016/j.csl.2017.05.008, Vol. 46, pp. 136–153, Nov. 2017.
  3. V. Sherlin Solomi, P. Vijayalakshmi, T. Nagarajan, "Exploiting Acoustic Similarities between Tamil and Indian English in the Development of an HMM-based Bilingual Synthesizer", IET Signal Processing, Vol. 11, Issue 3, pp. 332–340, May 2017.
  4. M. P. Actlin Jeeva, T. Nagarajan, P. Vijayalakshmi, "DCT derived spectrum-based speech enhancement algorithm using temporal-domain multiband filtering", IET Signal Processing, Vol. 10, Issue 8, pp. 965–980, Oct. 2016.
  5. M. Dhanalakshmi, T. A. Mariya Celin, T. Nagarajan, P. Vijayalakshmi, "Speech-Input Speech-Output Communication for Dysarthric Speakers Using HMM-Based Speech Recognition and Adaptive Synthesis System", Circuits, Systems and Signal Processing, DOI: 10.1007/s00034-017-0567-9, May 2017.
  6. B. Ramani, M. P. Actlin Jeeva, P. Vijayalakshmi, T. Nagarajan, "A Multi-level GMM-Based Cross-Lingual Voice Conversion Using Language-Specific Mixture Weights for Polyglot Synthesis", Circuits, Systems and Signal Processing, Vol. 35, pp. 1283–1311, Apr. 2016.
  7. P. Vijayalakshmi, T. Nagarajan, and M. R. Reddy, "Assessment of articulatory and velopharyngeal sub-systems of dysarthric speech", Intl. Jl. of BSCHS, special issue on Biosensors: Data Acquisition, Processing and Control, Vol. 14, No. 2, pp. 87–94, June 2009.
  8. G. Anushiya Rachel, V. Sherlin Solomi, K. Naveenkumar, P. Vijayalakshmi, and T. Nagarajan, "A small footprint context-independent HMM-based speech synthesizer for Tamil", International Journal of Speech Technology, Vol. 18, Issue 3, pp. 405–418, Sep. 2015.
  9. P. Vijayalakshmi, T. Nagarajan, and M. Preethi, "Improving speech intelligibility in cochlear implants using acoustic models", WSEAS Transactions on Signal Processing, Vol. 7, Issue 4, pp. 103–116, Oct. 2011.

Selected Conference Publications

1. G. Anushiya Rachel, P. Vijayalakshmi, and T. Nagarajan, "Estimation of Glottal Closure Instants from Telephone Speech using a Group Delay-Based Approach that Considers Speech Signal as a Spectrum", INTERSPEECH 2015, Germany, pp. 1181–1185.

2. B. Ramani, M. P. Actlin Jeeva, P. Vijayalakshmi, T. Nagarajan, "Cross-Lingual Voice Conversion-Based Polyglot Speech Synthesizer for Indian Languages", INTERSPEECH 2014, Singapore, 2014, pp. 775–779.

3. Ramani B, S. Lilly Christina, G. Anushiya Rachel, Sherlin Solomi V, Mahesh Kumar Nandwana, Anusha Prakash, Aswin Shanmugam, Raghava Krishnan, S. Kishore Prahalad (IIITH), K. Samudravijaya (TIFR), P. Vijayalakshmi, T. Nagarajan, and Hema Murthy (IITM), "A Common Attribute based Unified HTS framework for Speech Synthesis in Indian Languages", ISCA SSW8, pp. 291–296, 2013.

4. T. Nagarajan, P. Vijayalakshmi, and Douglas O'Shaughnessy, "Combining multiple-sized sub-word units in a speech recognition system using baseform selection", in Proc. Int. Conf. on Spoken Language Processing (ICSLP), INTERSPEECH, Pittsburgh, Sep. 2006, pp. 1595–1597.

5. P. Vijayalakshmi, M. R. Reddy, and Douglas O'Shaughnessy, "Assessment of articulatory sub-systems of dysarthric speech using an isolated-style speech recognition system", in Proc. Int. Conf. on Spoken Language Processing (ICSLP), INTERSPEECH, Pittsburgh, Sep. 2006, pp. 981–984.

6. P. Vijayalakshmi and M. R. Reddy, "Detection of hypernasality using statistical pattern classifiers", in INTERSPEECH, Eurospeech, Lisbon, Portugal, Sep. 2005, pp. 701–704.

7. P. Vijayalakshmi and M. R. Reddy, "The analysis of band-limited hypernasal speech using group delay based formant extraction technique", in INTERSPEECH, Eurospeech, Lisbon, Portugal, Sep. 2005, pp. 665–668.

8. P. Vijayalakshmi and M. R. Reddy, "Analysis of hypernasality by synthesis", in Proc. Int. Conf. on Spoken Language Processing (ICSLP), INTERSPEECH, Jeju, South Korea, Oct. 2004, pp. 525–528.

9. V. Sherlin Solomi, S. Lilly Christina, Anushiya Rachel Gladston, Ramani B, P. Vijayalakshmi, T. Nagarajan, "Analysis on Acoustic Similarities between Tamil and English Phonemes using Product of Likelihood-Gaussians for an HMM-Based Mixed-Language Synthesizer", in Proc. Intl. Oriental COCOSDA 2013 Conference, KIIT, Gurgaon, Nov. 25–27, 2013.

10. Anushiya Rachel Gladston, S. Lilly Christina, V. Sherlin Solomi, Ramani B, P. Vijayalakshmi, T. Nagarajan, "Development and Analysis of Various Phone-Sized Unit-Based Speech Synthesizers", in Proc. Intl. Oriental COCOSDA 2013 Conference, KIIT, Gurgaon, Nov. 25–27, 2013.

11. T. A. Mariya Celin, T. Nagarajan, P. Vijayalakshmi, "Dysarthric Speech Corpus in Tamil for Rehabilitation Research", IEEE TENCON 2016, Singapore, pp. 2612–2615, Nov. 2016.

12. K. Mrinalini, G. Sangavi, P. Vijayalakshmi, "Performance Improvement of Machine Translation System using LID and Post-editing", IEEE TENCON 2016, Singapore, pp. 2136–2139, Nov. 2016.

13. V. Sherlin Solomi, M. S. Saranya, G. Anushiya Rachel, P. Vijayalakshmi, T. Nagarajan, "Performance Comparison of KLD and PoG Metrics for Finding the Acoustic Similarity Between Phonemes for the Development of a Polyglot Synthesizer", IEEE TENCON, Bangkok, Thailand, 2014, pp. 1–4.

14. G. Anushiya Rachel, S. Sreenidhi, P. Vijayalakshmi, T. Nagarajan, "Incorporation of Happiness into Neutral Speech by Modifying Emotive-Keywords", IEEE TENCON, Bangkok, Thailand, 2014, pp. 1–6.

15. Ramani B., Actlin Jeeva M. P., P. Vijayalakshmi, T. Nagarajan, "Voice conversion based multilingual to polyglot speech synthesizer for Indian languages", in Proc. IEEE TENCON 2013, China, pp. 1–4.

16. Preethi Mahadevan, T. Nagarajan, B. Pavithra, S. Shri Ranjani, P. Vijayalakshmi, "Design of a lab model of a Digital Speech Processor for cochlear implant", IEEE TENCON 2011, Indonesia, pp. 307–311.

17. P. Vijayalakshmi, P. Mukesh Kumar, Ra. V. Jayanthan, and T. Nagarajan, "Cochlear implant models based on critical band filters", IEEE TENCON 2009, Singapore, Nov. 23–26, 2009, pp. 1–5.

18. P. Vijayalakshmi, T. Nagarajan, and Ra. V. Jayanthan, "Selective pole modification-based technique for the analysis and detection of hypernasality", IEEE TENCON 2009, Nov. 23–26, 2009, pp. 1–5.

19. Sripriya, P. Vijayalakshmi, C. Arun Kumar, and T. Nagarajan, "Estimation of instants of significant excitation from speech signal using temporal phase periodicity", IEEE TENCON 2009, Nov. 23–26, 2009, pp. 1–4.

 Google Scholar: https://scholar.google.com/citations?hl=en&user=CRBglkoAAAAJ

 Funded Research Projects:

1.   The project titled "Speech-Input Speech-Output Communication Aid (SISOCA) for Speakers with Cerebral Palsy", funded by DST-TIDE, is being carried out over a period of 2 years, from May 2017 to April 2019, with a sanctioned fund of Rs. 13.72 lakh. The project aims at developing a communication aid for dysarthric speakers. Dysarthria, a neurological speech disorder caused by cerebral palsy, impairs a patient's ability to communicate with the outside world and may isolate them from it, irrespective of their potential in education and employment. An augmentative and alternative communication (AAC) device that is portable and less tiring is therefore an urgent requirement. The project is developing SISOCA as an application on Android-based handheld devices: it takes dysarthric speech as input and produces error-corrected synthesized speech, in the dysarthric speaker's own voice, as output. Our endeavor is to make the device accessible to the general public in the Indian scenario, especially in the language Tamil.

Project Investigators: Dr. P. Vijayalakshmi (PI), Dr. T. Nagarajan (co-PI)
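The SISOCA flow described above (dysarthric speech in, error-corrected speech in the speaker's own voice out) can be sketched as follows; every function name here is a hypothetical placeholder for the project's actual recognition, correction, and adaptive-synthesis components, not its real API:

```python
# Illustrative sketch only: the components below are stubs standing in for
# the HMM-based recognizer, the text error-correction step, and the
# adaptive synthesizer mentioned in the project description.

def recognize_dysarthric(audio):
    # Placeholder: speaker-dependent recognition of the dysarthric utterance,
    # which may contain errors to be corrected downstream.
    return "recognized text with errors"

def correct_errors(text):
    # Placeholder: text-level error correction before resynthesis.
    return text.replace(" with errors", "")

def synthesize_own_voice(text, speaker_model):
    # Placeholder: adaptive synthesis in the dysarthric speaker's own voice.
    # Returns a tagged tuple in place of a real waveform.
    return ("speech", text, speaker_model)

def sisoca(audio, speaker_model):
    """One pass through the aid: recognize, correct, and resynthesize."""
    text = recognize_dysarthric(audio)
    return synthesize_own_voice(correct_errors(text), speaker_model)
```

The design point is the pipeline order: correction happens on the recognized text, so the synthesizer always receives clean input while the speaker model preserves the user's voice identity.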

2.   The project titled "HMM-based Text-to-Speech Synthesis System for Malaysian Tamil", funded by Murasu Systems Sdn Bhd, Malaysia, is being carried out over 9 months, from Nov. 2016 to Jul. 2017, with a sanctioned sum of Rs. 4 lakh. The project aims at developing a small-footprint text-to-speech synthesis system for Malaysian Tamil. In this regard, a hidden Markov model-based synthesizer capable of producing highly intelligible speech has been developed, with Tamil data recorded from a native Malaysian speaker. The system will finally be ported to iPhone and Android devices.

Project Investigators: Dr. T. Nagarajan (PI), Dr. P. Vijayalakshmi (Co-PI)

3.   The project titled "Development of Text-to-Speech Synthesis Systems for Indian Languages – high-quality TTS and small-footprint TTS integrated with disability aids" is a joint venture taken up by a consortium of 12 organizations, with IIT Madras as the head. It is funded by the Department of Electronics and Information Technology (DeitY), Ministry of Communication and Information Technology (MCIT), Government of India, and its net worth is Rs. 12.66 crore, of which SSNCE has received Rs. 77 lakh. The project primarily aims at developing small-footprint text-to-speech (TTS) systems for 13 languages, namely, Hindi, Tamil, Malayalam, Telugu, Marathi, Odia, Manipuri, Assamese, Bengali, Kannada, Gujarati, Rajasthani, and Bodo. Other goals of the project include incorporating intonation and duration models to improve synthesis quality, developing an emotional speech synthesizer, and integrating the TTS systems with OCR for reading stories online and with aids for disabilities. Specifically, SSNCE has been assigned the task of developing small-footprint Tamil and bilingual (Tamil and Indian English) TTS systems. To date, the team has developed monolingual and bilingual unit selection synthesis and HMM-based speech synthesis systems. Further, polyglot HMM-based synthesizers, capable of synthesizing Tamil, Hindi, Malayalam, Telugu, and English speech, have been developed using voice conversion and speaker adaptation techniques.

Project Investigators at SSNCE: Dr. T. Nagarajan (PI), Dr. P. Vijayalakshmi (co-PI), Dr. A. Shahina (co-PI).

4.   The project titled "Speech enabled interactive enquiry system in Tamil", funded by the Tamil Virtual Academy, a Government of Tamil Nadu organization, is being carried out over 6 months starting from March 2016, with a sanctioned sum of Rs. 9.52 lakh. A speech-enabled inquiry system in Tamil is proposed for use in tourism and agriculture. It consists primarily of a speech recognition system (which yields the text corresponding to the given speech input), a database, and a text-to-speech synthesis system. Initially, the system prompts the user to pose a question. The user may request information regarding tourist places (such as general information about the place, or distance/directions from a place of origin to the tourist spot) or regarding agriculture (such as the weather conditions, or the price of a crop in the market). The question from the user (in the form of speech) is then given to the speech recognition system, which generates the corresponding text. Once the text is obtained, the text-to-speech synthesis system synthesizes the corresponding speech utterance and plays it back to the user for confirmation. On confirmation, the information requested by the user is fetched from a database containing details on tourist places and agriculture. This information is then converted to speech using the text-to-speech synthesis system and played to the user.

Project Investigators: Dr. T. Nagarajan (PI), Dr. P. Vijayalakshmi (Co-PI), Dr. B. Bharathi (co-PI), Ms. B. Sasirekha (co-PI).
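The interaction turn described above (recognize, confirm with the user, fetch from the database, synthesize the answer) can be sketched in Python; the function names below are illustrative placeholders, not the actual components of the funded system:

```python
# Illustrative control-flow sketch of the inquiry system; recognize() and
# synthesize() are stubs standing in for the Tamil ASR and TTS components.

def recognize(audio):
    # Placeholder: maps the spoken query to its text hypothesis.
    return "query text"

def synthesize(text):
    # Placeholder: returns a tagged tuple in place of a real waveform.
    return ("tts", text)

def lookup(query, database):
    # Fetch the requested tourism/agriculture information, with a fallback.
    return database.get(query, "No information found")

def answer_query(audio, database, confirm):
    """One interaction turn: recognize, confirm, look up, and speak."""
    query = recognize(audio)
    # Play back the recognized text so the user can confirm it before
    # the database is consulted.
    if not confirm(synthesize(query)):
        return synthesize("Please repeat the question")
    return synthesize(lookup(query, database))
```

The confirmation step is the notable design choice: synthesizing the recognized text back to the user guards against recognition errors before any database lookup is made.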

5.   We carried out a project titled "Assessment and intelligibility modification of dysarthric speakers", funded by the All India Council for Technical Education (AICTE). This three-year project (Dec. 2010 – Dec. 2013), with Rs. 9 lakh funding, aimed at developing a detection and assessment system by analyzing the problems related to the laryngeal, velopharyngeal, and articulatory subsystems of dysarthric speakers, using a speech recognition system and relevant signal processing-based techniques. Using the evidence derived from the assessment system, dysarthric speech is corrected and resynthesized while conserving the speaker's identity, thereby improving intelligibility. The acoustic analysis is validated using instruments such as the Nasometer and the Electroglottograph. The complete system, which detects the multisystem dysregulation due to dysarthria and then performs correction and resynthesis, will improve the lifestyle of dysarthric speakers, as they will be able to communicate easily with society without human assistance.

Project Investigators: Dr. P. Vijayalakshmi (PI), Dr. T. Nagarajan (co-PI)

6.   We have completed two research projects funded by the SSN Trust, worth Rs. 2 lakh, titled "Design of a lab model of an improved speech processor for cochlear-implants" and "Anatomical vibration sensor speech corpus for speech applications in noisy environments", during the period Jun. 2010 – Jun. 2012. The objective of the first project was to design a lab model of the speech processor for a cochlear implant based on vocoders, so that the effect of system-specific parameters, such as filter order and bandwidth, on speech intelligibility could be analysed. The second project had the objective of building a speech corpus of throat-microphone speech and developing a speaker identification system using that corpus.

Project Investigators: Dr. P. Vijayalakshmi, Dr. T. Nagarajan and Dr. A. Shahina 

Workshops organized:

  1. Title: Winter School on Speech and Audio Processing (WiSSAP 2016)
     Organizers: Dr. Hema A. Murthy (IITM), Dr. T. Nagarajan, Dr. P. Vijayalakshmi, Dr. A. Shahina
     Venue: SSN College of Engineering
     Date: Jan. 8–11, 2016
  2. Title: Two-day Workshop on Technologies for Speaker and Language Recognition
     Coordinators: Dr. P. Vijayalakshmi, Dr. T. Nagarajan, and Ms. B. Ramani
     Venue: SSN College of Engineering
     Date: April 29–30, 2015
  3. Title: Workshop on HMM-based Speech Synthesis
     Coordinators: Dr. T. Nagarajan and Dr. P. Vijayalakshmi
     Venue: SSN College of Engineering
     Date: Nov. 26–30, 2012
     Participants: TTS consortium members
  4. Title: Workshop on Automatic Speech Recognition
     Coordinators: Dr. T. Nagarajan, Dr. P. Vijayalakshmi, and Dr. A. Shahina
     Venue: SSN College of Engineering, Chennai
     Date: Dec. 26–29, 2010
  5. Title: Workshop on Speech Processing and its Applications
     Coordinators: Dr. T. Nagarajan and Dr. P. Vijayalakshmi
     Venue: SSN College of Engineering, Chennai
     Date: Feb. 21–22, 2008

Students Associated

Ph.D. Scholars

 As Supervisor

  1. B. Ramani (June 2010), "Multilingual to polyglot speech synthesis system for Indian languages by sharing common attributes", Part-time. – Completed
  2. M. Dhanalakshmi (June 2013), "An Assessment and Intelligibility Modification System for Dysarthric Speakers", Part-time.
  3. M. P. Actlin Jeeva (January 2014), "Dynamic Multi-Band Filter Structures for Simultaneous Improvement of Speech Quality and Intelligibility", Full-time. (Thesis submitted)
  4. K. Mrinalini (Jan. 2016), "A Hybrid Approach for Speech-to-Speech Translation System", Full-time.
  5. T. A. Mariya Celin (Jan. 2016), "Development of an Augmentative and Alternative Communication for Severe Dysarthric Speakers in Indian Languages", Full-time.
  6. T. Lavanya (Jan. 2017), "Maintaining Speech Intelligibility in Challenging Conditions", Full-time.

As Joint-Supervisor 

  1. V. Sherlin Solomi (January 2013), "Development of an HMM-based bilingual synthesizer for Tamil and Indian English by merging acoustically similar phonemes", Full-time. (Synopsis submitted)
  2. G. Anushiya Rachel (January 2013), "Incorporation of Emotions in Neutral Speech", Full-time.

She has guided around 25 M.E. theses and 20 undergraduate projects.