UK Speech 2019 - Invited Speakers

Monday 24th June, 1:15pm, Room 124 Chemical Engineering Building

Exploring core technologies for automated language teaching and assessment

Paula Buttery and Helen Yannakoudakis
ALTA Institute, Cambridge


Paula Buttery and Helen Yannakoudakis are members of the Automated Language Teaching and Assessment Institute (ALTA), an Artificial Intelligence institute that uses techniques from Machine Learning and Natural Language Processing to improve the experience of learning a language online. ALTA carries out research that underpins tools for developing Reading, Writing, Speaking and Listening skills in learners of English. In this talk, we will focus on core technologies for 1) automated assessment of learner language across these skills, and 2) automated generation of content for rapid expansion and diversification of (personalised) teaching and assessment materials. We will discuss how we can overcome some of the challenges we face in emulating human behaviour, and how we can visualise and inspect the internal 'marking criteria' and characteristics of automated models.

Tuesday 25th June, 10am, Room 124 Chemical Engineering Building

Automated processing of pathological speech

Heidi Christensen
Department of Computer Science, University of Sheffield


As speech technology becomes increasingly pervasive in our lives, people with atypical speech and language face ever larger barriers to taking full advantage of this new technology. At the same time, recent advances in mainstream speech science and processing allow for increasingly sophisticated ways of addressing some of the specific needs of this population. This talk will outline the major challenges faced by researchers in porting mainstream speech technology to the domain of healthcare applications; in particular, the need for personalised systems and the challenge of working in an inherently sparse data domain. Three areas in automatic processing of pathological speech will be covered: i) detection, ii) therapy/treatment and iii) facilitating communication. The talk will give an overview of recent state-of-the-art results and specific experiences from current projects in Sheffield's Speech and Hearing Group (SPandH).

Tuesday 25th June, 1:30pm, Room 124 Chemical Engineering Building

The prospect of using accent recognition technology for forensic applications

Dr Georgina Brown, University of Lancaster


Forensic speech science is the forensic discipline concerned with speech recordings when they arise as pieces of evidence in a legal case or investigation. The most common task a forensic speech analyst is asked to conduct is forensic speaker comparison. This involves comparing multiple recordings in order to provide a view on whether or not the same speaker features in these speech samples. In the UK, the most common way of approaching this task is to apply a comprehensive acoustic-phonetic analysis to the recordings. Given the impressively low error rates produced by automatic speaker recognition systems, automatic speaker recognition is increasingly becoming an option for forensic speaker comparison cases. There is support for integrating such technologies into casework from the UK Forensic Science Regulator, in order to boost the data-driven, repeatable and testable properties of forensic analyses (Tully, 2018). For numerous reasons, the integration of automatic speaker recognition into the UK forensic domain has been slow, and work towards this is still ongoing. Forensic speaker comparison cases are not the only type of case encountered in practice. Rather than offering views on speaker identity, analysts may be asked to assess the characteristics of a speaker, such as geographical background. Identifying a speaker's accent could assist investigators in targeting their search for potential suspects (Watt, 2010). In view of the directions given by the UK Forensic Science Regulator, the present work has considered applying automatic accent recognition systems to these types of speaker profiling tasks (Brown, 2016, 2018). This talk will discuss this research and will uncover the issues that arise.
Brown, G. (2016). Automatic accent recognition systems and the effects of data on performance. In Proceedings of Odyssey: The Speaker and Language Recognition Workshop, Bilbao, Spain, pp. 94-100.
Brown, G. (2018). Segmental content effects on text-dependent automatic accent recognition. In Proceedings of Odyssey: The Speaker and Language Recognition Workshop, Les Sables d'Olonne, France, pp. 9-15.
Tully, G. (2018). Forensic Science Regulator Annual Report. Technical report, The UK Government.
Watt, D. (2010). The identification of the individual through speech. In C. Llamas & D. Watt (Eds.), Language and Identities, Edinburgh University Press, Edinburgh, pp. 76-85.