I hold a PhD in data science from IRIT (Toulouse), where I worked on corpora containing only a limited amount of impaired speech. I also keep a blog about the visualizations I make, the research I do, and tips around Linux. Check it out.
I focus on machine learning algorithms that model knowledge from either scarce or abundant data. More specifically, I like to use self-supervised learning to extract knowledge from data.
In my thesis, I adapted deep neural network techniques to a few-shot setting for speech signals. For more details, see my PhD review article.
I contribute to several open-source projects (bug reports, patch proposals, discussions, etc.).
People with ENT cancers often have speech difficulties after surgery or radiation therapy, so it is important for practitioners to have a measure that reflects the severity of the speech impairment. I propose two approaches to build such an automatic measure despite having little data (about one hour of audio recordings across 128 speakers). The first is based on few-shot methods; the second on an entropic measurement of speech features (learned with a self-supervised model on an auxiliary corpus). Our results with the latter were promising enough to pursue a medical application: I obtained a grant to supervise an engineer who built an application delivered to the Toulouse University Hospital. For more information, see the article on my PhD report.
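To illustrate the general idea behind an entropy-based severity measure (this is a minimal sketch, not my actual pipeline: the function names, the posterior shape, and the averaging scheme are all illustrative assumptions), one can compute the Shannon entropy of frame-level posterior distributions produced by a speech model and average it over an utterance. The intuition is that more impaired speech tends to yield less confident, higher-entropy predictions:

```python
import numpy as np

def frame_entropy(posteriors, eps=1e-12):
    """Shannon entropy (in nats) of each frame's posterior distribution.

    posteriors: array of shape (n_frames, n_classes), rows summing to 1.
    """
    p = np.clip(posteriors, eps, 1.0)  # avoid log(0)
    return -(p * np.log(p)).sum(axis=-1)

def utterance_severity(posteriors):
    """Illustrative severity proxy: mean frame entropy over the utterance."""
    return float(frame_entropy(posteriors).mean())

# Toy comparison: confident (peaked) posteriors vs uncertain (near-uniform) ones.
confident = np.array([[0.97, 0.01, 0.01, 0.01]] * 50)
uncertain = np.full((50, 4), 0.25)
print(utterance_severity(confident) < utterance_severity(uncertain))  # True
```

In practice the posteriors would come from a self-supervised model trained on an auxiliary corpus, and the raw entropy score would still need calibration against clinical severity ratings.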