Microsoft is a leader in AI, which has clearly benefited Teams from an accessibility standpoint. If you are familiar with Microsoft Teams, you are probably already aware of the native in-meeting captions, translation, and recording transcription services.
I recently created this video for some of my partner engagements so I could quickly and easily demonstrate some of the existing end user experiences.
The good news is that additional AI-driven accessibility enhancements are in the pipeline. The two that I can talk about publicly are:
- Live Captions with speaker attribution. Teams already provides live captions as a way to follow along with what is being said in a meeting. We are also adding speaker attribution, so captions will specify who is speaking.
- Live Transcription with speaker attribution. Live transcripts provide another way to follow along with what has been said and who said it. After a meeting, the transcript file is automatically saved to the chat tab for that meeting.
If you are reading this post before these new features reach General Availability, you can check their release status here.