Imagine listening to your favourite podcast. You rewind it to go over something you missed, but each time you replay it, it’s somehow different.
Sounds frustrating, right? But it’s likely what would happen if we simply stuffed large language models into screen readers in a lazy attempt to avoid publishing accessible content.
I’ve had this debate a few times on LinkedIn, but it came up again recently after the awesome Access: Given conference, where Helen Dutson and Holly Tuke shared an example of emoji misuse: an RNIB post with clapping hands inserted between every word, highlighting a common problem.