It is incredibly exciting to be living at a time when new assistive technology is being created regularly, removing barriers for users to perform daily tasks such as watching a lecture or a video.
The need for such technology has increased massively since the pandemic, with online learning and working becoming the norm.
Understanding the content of these sessions is a challenge for some people with additional learning needs or hearing impairments – the removal of visual cues, such as body language, makes comprehension difficult, or, in the case of hearing impairment, nearly impossible.
Assistive technology is more commonly used than ever before, and dictation and captioning software have become the norm, with companies such as Netflix reporting that 80 per cent of people watching their platform use subtitles.
You’ll find assistive tech beginning to feature in lots of software – Zoom, Teams, Netflix and YouTube all have their own take on what’s needed to help users understand their content.
This is fantastic; however, I’m noticing a potential problem with this mass creation of new types of assistive tech.
You see, assistive technology is about sweating the small stuff. It’s about being specific to an individual’s needs and challenges. That’s what makes it so incredibly effective.
The challenge is that as different forms of assistive tech are built to work with specific platforms like Zoom or Teams, each version has its own nuances that make the learning curve steeper than it needs to be. Every time you pick up a different piece of software, you have to remember how to make it usable for you so that you can get the relevant information from that session.
The information from those sessions is then either ephemeral, never to be seen again, or spread across so many pieces of software that, even if you were able to get the captions in the first place, they can be hard to find again later – what software was that lecture on again?
Reducing the learning curve
Take captioning as an example: imagine having to learn how to use the captioning software within Microsoft Teams, then again within Zoom, and then again to caption a YouTube video.
The subtle changes to the way the software functions are enough to cause frustration, particularly for those who may already have additional challenges and needs.
Assistive technology should always be built with the aim of being ‘only learn once.’
By this, I mean that a user should only have to learn how the software works once and be able to apply that knowledge across platforms.
This way, the tech can remain as accessible as possible to the people relying on it to level the playing field for them.
The aim should be to remove as many barriers to accessing the software as possible; after all, its whole purpose is to provide that very access.
Learning to use a new piece of software or equipment can be difficult, and while there are a number of ways to access user information and instructions – reading manuals, requesting a different format, watching videos, seeking training and reaching out to customer support – the creators of assistive technology should aim to keep things as simple as possible.
This is something we kept in mind when creating Caption.Ed – we wanted it to sit on top of all of these different services, so you only have to learn our software once.
Furthermore, the transcripts and notes all live in a central location, so you don’t have to try to remember where that lecture’s transcript and notes are – was it in Teams, Zoom, or somewhere else?
It’s all there waiting for you in one place.