Dr. P. Vijayalakshmi

Professor
Dr. P. Vijayalakshmi, B.E., M.E., Ph.D.
Email: vijayalakshmip@ssn.edu.in

Ph. Extn.: 327

Dr. P. Vijayalakshmi (IEEE M’08, Fellow IETE), Professor in the Department of Electronics and Communication Engineering, has 20 years of teaching and research experience, including 4 years of exclusive research experience in the fields of speech signal processing and speech pathology.

Education:

She received her B.E. (ECE) degree with first class and distinction from Bharathidasan University, completed her M.E. (Communication Systems) at Regional Engineering College, Trichy (now NIT, Trichy), and earned her Ph.D. from IIT Madras.

During her Ph.D. she developed various speech recognition systems and a novel approach for the detection and assessment of disordered speech, such as hypernasal and dysarthric speech, in addition to analyzing normal speech. She also had the opportunity to work with Prof. Douglas O’Shaughnessy at the National Institute of Scientific Research (INRS-EMT), Montreal, Canada, as a doctoral trainee for one year on a project titled “Speech recognition and analysis”.

Research:

She has over 50 research publications in refereed international journals and in the proceedings of international conferences. As a co-investigator she is currently involved in projects funded by DeitY, MCIT, New Delhi, and by the Tamil Virtual Academy, a Government of Tamil Nadu organization; as a principal investigator she has completed one AICTE-funded project and two projects funded by the SSN Trust. She is a recognized supervisor of Anna University and is currently guiding three full-time and two part-time Ph.D. scholars in the field of speech technology.

Her areas of research include voice conversion, polyglot speech synthesis, speech recognition, statistical parametric speech synthesis, speech technology for healthcare applications, and speech enhancement.

Funded Research Projects:

  1. The project titled “Development of Text-to-Speech Synthesis Systems for Indian Languages – high quality TTS and small footprint TTS integrated with disability aids” is a joint venture taken up by a consortium of 12 organizations, headed by IIT Madras. It is funded by the Department of Electronics and Information Technology (DeitY), Ministry of Communication and Information Technology (MCIT), Government of India; its net worth is Rs. 12.66 crores, of which SSNCE has received Rs. 77 lakhs. The project primarily aims at developing small-footprint text-to-speech (TTS) systems for 13 languages, namely Hindi, Tamil, Malayalam, Telugu, Marathi, Odia, Manipuri, Assamese, Bengali, Kannada, Gujarati, Rajasthani, and Bodo. Other goals include incorporating intonation and duration models to improve synthesis quality, developing an emotional speech synthesizer, and integrating the TTS systems with OCR for reading stories online and with aids for disabilities. Specifically, SSNCE has been assigned the task of developing small-footprint Tamil and bilingual (Tamil and Indian English) TTS systems. To date, the team has developed monolingual and bilingual unit-selection and HMM-based speech synthesis systems. Further, polyglot HMM-based synthesizers capable of synthesizing Tamil, Hindi, Malayalam, Telugu, and English speech have been developed using voice conversion and speaker adaptation techniques.
    Project Investigators at SSNCE: Dr. T. Nagarajan (PI), Dr. P. Vijayalakshmi (co-PI), Dr. A. Shahina (co-PI).
  2. The project titled “Speech enabled interactive enquiry system in Tamil”, funded by the Tamil Virtual Academy, a Government of Tamil Nadu organization, is to be carried out over 6 months starting from March 2016; a sum of Rs. 9.52 lakhs has been sanctioned for it. The project proposes a speech-enabled enquiry system in Tamil for use in tourism/agriculture, consisting primarily of a speech recognition system (which yields the text corresponding to a given speech input), a database, and a text-to-speech synthesis system. Initially, the system prompts the user to pose a question. The user may request information regarding tourist places (such as general information about a place, or distance/directions from a place of origin to a tourist spot) or regarding agriculture (such as weather conditions or the market price of a crop). The user’s spoken question is passed to the speech recognition system, which generates the corresponding text. This text is then synthesized back into speech and played to the user for confirmation. On confirmation, the requested information is fetched from a database of tourism/agriculture details, converted to speech by the text-to-speech synthesis system, and played to the user. A minimal sketch of this pipeline appears after this project list.
    Project Investigators: Dr. T. Nagarajan (PI), Dr. P. Vijayalakshmi (co-PI), Dr. B. Bharathi (co-PI), Ms. B. Sasirekha (co-PI).
  3. We carried out a project titled “Assessment and intelligibility modification of dysarthric speakers”, funded by the All India Council for Technical Education (AICTE). This three-year project (Dec. 2010 – Dec. 2013), with Rs. 9 lakhs of funding, aimed at developing a detection and assessment system that analyzes problems related to the laryngeal, velopharyngeal, and articulatory subsystems of dysarthric speakers, using a speech recognition system and relevant signal processing-based techniques. Using the evidence derived from the assessment system, dysarthric speech is corrected and resynthesized while preserving the speaker’s identity, thereby improving intelligibility. The acoustic analysis is validated using instruments such as the Nasometer and the Electroglottograph. A complete system that detects the multisystem dysregulation due to dysarthria, followed by correction and resynthesis, will improve the quality of life of dysarthric speakers, as they will be able to communicate easily without human assistance.
    Project Investigators: Dr. P. Vijayalakshmi (PI), Dr. T. Nagarajan (co-PI).
  4. We have completed two research projects funded by the SSN Trust, worth Rs. 2 lakhs, titled “Design of a lab model of an improved speech processor for cochlear-implants” and “Anatomical vibration sensor speech corpus for speech applications in noisy environments”, during the period Jun. 2010 – Jun. 2012. The objective of the first project was to design a lab model of the speech processor for a cochlear implant based on vocoders, so that the effect of system-specific parameters, such as filter order and bandwidth, on speech intelligibility could be analyzed (a sketch of such a vocoder simulation follows this list). The second project’s objective was to build a corpus of throat-microphone speech and to develop a speaker identification system using it.
    Project Investigators: Dr. P. Vijayalakshmi, Dr. T. Nagarajan and Dr. A. Shahina.
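The enquiry system described in project 2 reduces to a recognize, confirm, fetch, and synthesize loop. The Python sketch below illustrates only that control flow; the function names and the toy database are hypothetical placeholders for the project's actual Tamil speech recognizer, Tamil TTS synthesizer, and tourism/agriculture database, none of which are specified here.

```python
# Minimal sketch of the enquiry-system control flow (assumptions noted).
# Every component below is a hypothetical stand-in for the project's real
# Tamil ASR, Tamil TTS, and tourism/agriculture database.

FAQ_DB = {  # placeholder for the tourism/agriculture database
    "distance from chennai to mahabalipuram": "About 55 km via the East Coast Road.",
    "market price of paddy": "Rs. 1550 per quintal (illustrative value).",
}

def recognize_speech(audio: bytes) -> str:
    """Stand-in for the Tamil speech recognition system (speech to text)."""
    return "distance from chennai to mahabalipuram"

def synthesize_speech(text: str) -> bytes:
    """Stand-in for the Tamil text-to-speech synthesis system (text to speech)."""
    return text.encode("utf-8")

def handle_enquiry(audio: bytes, user_confirms) -> bytes:
    query = recognize_speech(audio)          # 1. spoken question -> text
    readback = synthesize_speech(query)      # 2. synthesize the query for confirmation
    if not user_confirms(readback):          # 3. user rejects the transcript
        return synthesize_speech("Please repeat your question.")
    answer = FAQ_DB.get(query, "No information available.")  # 4. fetch from database
    return synthesize_speech(answer)         # 5. speak the retrieved information

if __name__ == "__main__":
    reply = handle_enquiry(b"<spoken question>", user_confirms=lambda audio: True)
    print(reply.decode("utf-8"))
```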
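The cochlear-implant lab model in project 4 is vocoder-based, and the parameters it varies (filter order, bandwidth, channel count) are exactly the knobs of a standard noise-vocoder simulation of cochlear-implant processing. The NumPy/SciPy sketch below is offered as an illustration under assumed defaults; the band edges, envelope cutoff, and channel count are assumptions, not values taken from the project.

```python
import numpy as np
from scipy import signal

def noise_vocode(x, fs, n_channels=8, order=4, env_cutoff=240.0):
    """Noise-vocoder simulation of a cochlear-implant speech processor.

    The input is split into n_channels band-pass channels, each channel's
    amplitude envelope is extracted, and the envelopes modulate
    band-limited noise carriers. Varying n_channels, order, and the band
    edges is how such a lab model studies their effect on intelligibility.
    """
    # Log-spaced band edges across the speech band (assumed values).
    edges = np.logspace(np.log10(100.0), np.log10(min(7000.0, 0.45 * fs)),
                        n_channels + 1)
    out = np.zeros(len(x))
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = signal.butter(order, [lo, hi], btype="band", fs=fs, output="sos")
        band = signal.sosfiltfilt(sos, x)
        # Envelope: full-wave rectification followed by low-pass filtering.
        env_sos = signal.butter(2, env_cutoff, btype="low", fs=fs, output="sos")
        env = np.clip(signal.sosfiltfilt(env_sos, np.abs(band)), 0.0, None)
        # Band-limited noise carrier, modulated by the channel envelope.
        carrier = signal.sosfiltfilt(sos, np.random.randn(len(x)))
        out += env * carrier
    return out / (np.max(np.abs(out)) + 1e-12)  # normalize to avoid clipping

if __name__ == "__main__":
    fs = 16000
    t = np.arange(fs) / fs  # one second of a crude speech-like test signal
    test = np.sin(2 * np.pi * 220 * t) * (1.0 + np.sin(2 * np.pi * 3 * t))
    print(noise_vocode(test, fs, n_channels=8).shape)
```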

Workshops organized:

  1. Title: Winter School on Speech and Audio Processing (WiSSAP 2016)
    Organizers: Dr. Hema A. Murthy (IITM), Dr. T. Nagarajan, Dr. P. Vijayalakshmi, Dr. A. Shahina
    Venue: SSN College of Engineering
    Date: Jan. 8th – 11th 2016

  2. Title: Two-day Workshop on Technologies for Speaker and Language Recognition
    Coordinators: Dr. P. Vijayalakshmi, Dr. T. Nagarajan and Ms. B. Ramani
    Venue: SSN College of Engineering
    Date: April 29th – 30th 2015

  3. Title: Workshop on HMM-based Speech Synthesis
    Coordinators: Dr. T. Nagarajan and Dr. P. Vijayalakshmi
    Venue: SSN College of Engineering
    Date: Nov. 26th – 30th 2012
    Participants: TTS consortium members.

  4. Title: Workshop on Automatic Speech Recognition
    Coordinators: Dr. T. Nagarajan, Dr. P. Vijayalakshmi and Dr. A. Shahina
    Venue: SSN College of Engineering, Chennai.
    Date: Dec. 26th – 29th 2010

  5. Title: Workshop on Speech Processing and its Applications
    Coordinators: Dr. T. Nagarajan and Dr. P. Vijayalakshmi
    Venue: SSN College of Engineering, Chennai.
    Date: Feb. 21st – 22nd 2008

Publications: Google Scholar citations

https://scholar.google.com/citations?hl=en&user=CRBglkoAAAAJ

Students associated with her:

Ph.D. scholars:

As a supervisor:

  1. B. Ramani (June 2010), “Multilingual to polyglot speech synthesis system for Indian languages by sharing common attributes”, part-time.
  2. M. Dhanalakshmi (June 2013), “An assessment and intelligibility modification system for dysarthric speakers”, part-time.
  3. M.P. Actlin Jeeva (January 2014), “Enhancement of speech exploiting speech production characteristics”, full-time.
  4. Mrinalini (January 2016), “A hybrid approach for machine translation”, full-time.
  5. Mariya Celin (January 2016), “Speech input speech output communication aid for dysarthric speakers”, full-time.

As a co-supervisor:

  1. V. Sherlin Solomi (January 2013), “Multiresolution feature extraction for hidden Markov model-based speech synthesizer”, full-time.
  2. G. Anushiya Rachel (January 2013), “Incorporation of emotions in neutral speech”, full-time.

She has guided around 25 M.E. theses and 20 undergraduate projects.