I’ve just started to look at Viva so I can help Microsoft Partners build practices and solutions. As you are probably aware, Syntex is one of its underlying components. I put together this demonstration to run through the Form Processing model creation process, and I also triggered a Power Automate flow to perform additional processing using some of the outputs from my model.
To keep things real I used my own quote form, built from a standard Microsoft Word template. It has a straightforward format, but as this was my first model I decided not to overcomplicate things.
Who likes scheduling meetings? Does it burn time? Would you like someone to do it for you?
If the answer to those questions is yes, then maybe it’s time to revisit Cortana. It has been around for a while, but in case you aren’t aware, Cortana provides an AI scheduling service. If you haven’t used it before it’s worth a look, because it works! All you need to do is register and configure your profile settings. You should be up and running in minutes, and the link you need is:
I’ve also provided a 2-minute overview below that should help you get started.
Tip: Using the service is really intuitive, but one small piece of education is needed for some attendees: make sure they know to reply to Cortana, and not directly back to you, during the scheduling experience. I’ve previously used something like the example below in the email body:
“I’m going to ask Cortana to schedule our meeting; be sure to reply back to her (and not me) during the scheduling process.”
Microsoft is a leader in AI, which has definitely benefited Teams from an Accessibility standpoint. If you are familiar with Microsoft Teams, you are probably already aware of the native in-meeting captions, translation and recording transcription services.
I recently created this video for some of my partner engagements so I could quickly and easily demonstrate some of the existing end user experiences.
The good news is that some additional AI-driven Accessibility enhancements are in the pipeline. The two that I can talk about publicly are:
Live Captions with speaker attribution. Teams already provides live captions as a way to follow along with what is being said in a meeting. We’re also adding speaker attribution, so captions will specify who is speaking.
Live Transcription with speaker attribution. Live transcripts provide another way to follow along with what has been said and who said it. After a meeting, the transcript file is automatically saved in the chat tab for that meeting.
If you are reading this post before these new features reach General Availability you can check their release status here.