References

[1] Ardila, Rosana, et al. "Common Voice: A massively-multilingual speech corpus." arXiv preprint arXiv:1912.06670 (2019).
[2] Liu, Chunxi, et al. "Towards measuring fairness in speech recognition: Casual Conversations dataset transcriptions." ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2022.
[3] Kendall, Tyler, and Charlie Farrington. The Corpus of Regional African American Language. Version 2021.07. Eugene, OR: The Online Resources for African American Language Project, 2021.
[4] Martin, Joshua L., and Kelly Elizabeth Wright. "Bias in Automatic Speech Recognition: The Case of African American Language." Applied Linguistics (2022).
[5] Tatman, Rachael. "Gender and dialect bias in YouTube's automatic captions." Proceedings of the First ACL Workshop on Ethics in Natural Language Processing. 2017.
[6] Koenecke, Allison, et al. "Racial disparities in automated speech recognition." Proceedings of the National Academy of Sciences 117.14 (2020): 7684-7689.
[7] Garnerin, Mahault, Solange Rossato, and Laurent Besacier. "Investigating the impact of gender representation in speech-to-text training data: A case study on LibriSpeech." 3rd Workshop on Gender Bias in Natural Language Processing. Association for Computational Linguistics, 2021.
[8] Feng, Siyuan, et al. "Quantifying bias in automatic speech recognition." arXiv preprint arXiv:2103.15122 (2021).
[9] Buolamwini, Joy, and Timnit Gebru. "Gender Shades: Intersectional accuracy disparities in commercial gender classification." Conference on Fairness, Accountability and Transparency. PMLR, 2018.
[10] Cho, Won Ik, et al. "Towards cross-lingual generalization of translation gender bias." Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. 2021.
[11] Radford, Alec, et al. "Robust speech recognition via large-scale weak supervision." arXiv preprint arXiv:2212.04356 (2022).
[12] Ware, Olivia R., et al. "Racial limitations of Fitzpatrick skin type." Cutis 105.2 (2020): 77-80.
[13] Juhn, Young J., et al. "Assessing socioeconomic bias in machine learning algorithms in health care: a case study of the HOUSES index." Journal of the American Medical Informatics Association 29.7 (2022): 1142-1151.
Author

Benedetta Cevoli
Acknowledgements

Ana Olssen, Ben Leaman, Emma Davidson, Georgina Robertson, Harish Kumar, John Hughes, Liam Steadman, Markus Hennerbichler, Tom Young