I mentioned in my last post that I submitted my application to be a Microsoft Innovative Educator Expert (MIE-Expert) for the coming year. So as emails go, this was a nice one to see yesterday!
In case you haven’t come across the MIE-Expert programme before, it’s a professional learning community that focuses on the use of Microsoft EdTech tools (and other connected apps) at school, college and university level. Applicants must submit a self-nomination each year (you can see mine on my MIE-E page) and if successful you keep MIE-Expert status for the entire year, even if you don’t do anything with it…but I really think it’s one of those “the more you put in, the more you get out” things.
I spent a lot of my 2019/20 MIE-Expert stint observing the way other academics were using the various Microsoft tools. It was a great learning experience but this year I want to go a bit further. One of the areas I want to look into more is accessibility.
When I was a student, accessibility wasn’t something that was considered at all. When I started working in HE, it seemed to me that accessibility was considered when a student’s support plan indicated it but not otherwise. However in the past few years, I’ve noticed an increased emphasis on incorporating accessibility into our regular operations. Microsoft has introduced a number of features to help with this, and I’m starting to learn more about them. So here are a few that I’ve come across already.
Can you remember the ‘Microsoft Sam’ text-to-speech function? It was so mechanical and artificial that I was too busy laughing to actually listen to what it was reading. By contrast, the Immersive Reader sounds incredibly natural. And not only does it read out the text, it helps the user focus on it (by removing background images and using accessible fonts) and allows them to choose how they’d like the text to be read (line by line, translated into another language etc.).
You can test it out via the Microsoft Learning Tools website.
Whilst looking around the site while researching this blog post, I discovered that Immersive Reader can read mathematical notation as well! I’m yet to see how it’ll cope with all the differential equations in my 3rd year Reactors module, but what I have seen so far is pretty good.
The benefit of this feature for students with learning differences is obvious, but I’m also wondering if it could work as a proofreading tool for final year project students. All too often they write a sentence that goes on for a paragraph, and by the time you get to the end, you’re none the wiser. Hearing the text read aloud via the Immersive Reader may be an easy way to help students spot those sentences and improve the readability of their reports.
The Dictate feature in Word and OneNote is the Immersive Reader in reverse, i.e. speech-to-text. It’s great for students who struggle with homophones (e.g. they’re/there/their) or students who lack confidence in their general spelling (perhaps if English is not their native language). And my experience has been pretty positive so far…
Update: Dictate has learnt ‘HAZOP’ on third attempt. 😱 pic.twitter.com/j8mtLxpRzi
— Samantha Gooneratne (@dr_samg) May 22, 2020
The unforeseen advantage of this feature (and the reason I tried it out in the first place) is that it’s also a good way to stave off the ol’ RSI!
Now there are a couple of tricky things about this – for one, it’s only available in Office 365 (so OneNote for Windows 10 ✅ but OneNote 2016 ❎). And I think it’s only available in the desktop apps and the browser version. Mobiles and tablets/iPads have their own dictation functions, but I believe those send the data to whoever made the operating system (so mostly Google/Apple) rather than Microsoft, and I’m not sure how that works in terms of GDPR compliance. I found a Windows 10 & Privacy Compliance document from Microsoft, but if you’re going to use an Android/iOS-based mobile device for speech-to-text, it might be worth checking its GDPR compliance first.
My last pick (for now) is the real-time subtitle feature in PowerPoint. I first came across it during the year-end MIE-Expert celebration event, when I noticed how naturally (and accurately) the speaker’s presentation captions appeared. I haven’t done a live run of it myself yet, but I’ve played around with it on the presentations I’ve created and I love it! I can speak naturally and it picks up and interprets pretty much everything in real time. I don’t think I have a particularly strong accent (a side effect of growing up all over the place), so I’m yet to see how well it copes with a wider range of voices, but I’m hopeful.
The only thing I worry about is that half of the audience may want subtitles on screen while the other half might not. I’m sure there are ways around this (I have a feeling the ‘live presentations’ feature, which puts the captions on each viewer’s own device, is the way to go) and that’s going to be something I look into.
So that’s a taster of what I’ll be looking at on the accessibility front. I’ll come back to this topic in the future to look at more of the features in depth but hopefully that’s whetted your appetite!
I’m currently obsessed with the new Taylor Swift album so I’ll leave you with one of its lesser known gems. Adios!