
New Assistive Tech Coming 2022

Robotics and Chrome Browser Extension Come to IMAGE

New Year’s Day marks the start of the final quarter of funding for the IMAGE Project by Innovation, Science and Economic Development Canada. The researchers at the Shared Reality Lab at McGill University are in high gear to complete the initial version of IMAGE and its supporting devices by the project’s end date of March 31, 2022.

So far, user testing has been limited to the use of spatial audio to assist blind users in exploring internet graphics. Haptic (touch) user testing will be the focus over January, February, and March, a period that will also include the launch of the beta version of the IMAGE Chrome browser extension.

Earlier in December I had the opportunity to assist with alpha testing of a robotic device compatible with IMAGE. The device has a passive mode, in which artificial intelligence / machine learning technology moves a handheld pointer over the main elements of the graphic, after which the user can actively explore the graphic on their own. Gaming opportunities were an immediate thought as I explored the graphic. Moreover, the device costs substantially less than other haptic devices on the market.

But I am getting ahead of the current phase of the project. The usefulness of the technologies being developed is ultimately up to the users with whom they are being co-designed.

Look for our updates over the next few months as we go from testing to market. Although the hardware devices will have costs attached to them, the basic IMAGE Chrome browser extension will be free for users to download and use.

Here are more details on IMAGE:

Welcome to IMAGE (Internet Multimodal Access to Graphical Exploration). This project is carried out by McGill University's Shared Reality Lab (SRL), in strategic partnership with Gateway Navigation CCC Ltd and the Canadian Council of the Blind (CCB). The project is funded by Innovation, Science and Economic Development Canada through the Assistive Technology Program. The motivation for this project is to improve access to internet graphics for people who are blind or partially sighted.

The Challenge

On the internet, graphical material such as maps, photographs, and charts representing numerical information is clear and straightforward to those who can see it. For people who are blind or have low vision, this is not the case. Rendering of graphical information is often limited to manually generated alt-text HTML labels, which are frequently abridged and lacking in richness. This is a better-than-nothing solution but remains woefully inadequate. Artificial intelligence (AI) technology can improve the situation, but existing solutions are non-interactive and provide a minimal summary at best, without offering a cognitive understanding of the content, such as points of interest within a map or the relationships between elements of a schematic diagram. So the essential information described by the graphic frequently remains inaccessible.

Website Picture: A woman sitting at a computer that is displaying a web page with six images. She has a cup of coffee to her left and a phone to her right.

Website Picture: A man who is blind or low sighted wearing a sweater and headphones, sitting in front of a computer in a library, reading a braille book.

Our Approach

We use rich audio (sonification) together with the sense of touch (haptics) to provide a faster and more nuanced experience of graphics on the web. For example, by using spatial audio, where the user experiences the sound moving around them through their headphones, information about the spatial relationships between various objects in the scene can be quickly conveyed without reading long descriptions. In addition, rather than offering only the passive experience of listening to audio, we allow the user to actively explore a photograph, either by pointing to different portions and hearing about their content or nuance, or by using a custom haptic device to literally feel aspects like texture or regions. This permits interpretation of maps, drawings, diagrams, and photographs in which the visual experience is replaced with multimodal sensory feedback, rendered in a manner that helps overcome access barriers for users who are blind, deaf-blind, or partially sighted.
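For readers curious about the mechanics, here is a minimal sketch of how a browser can place a sound cue at an object's position in an image using the standard Web Audio API. This illustrates the general spatial-audio technique only; the coordinate mapping and placeholder tone are our assumptions, not the IMAGE project's actual code.

```typescript
// Minimal sketch (not IMAGE's actual code): play a brief tone that appears
// to come from an object's location in a photo, using the Web Audio API.
// Assumes object coordinates (x, y) are normalized to [0, 1] in the image.

function playObjectTone(ctx: AudioContext, x: number, y: number): void {
  const osc = ctx.createOscillator();     // simple placeholder sound source
  const panner = ctx.createPanner();      // positions the sound in 3D space
  panner.panningModel = "HRTF";           // head-related rendering for headphones

  // Map image coordinates to a plane in front of the listener:
  // left/right from x, up/down from y, at a fixed distance ahead.
  panner.positionX.value = (x - 0.5) * 2; // -1 (left) .. +1 (right)
  panner.positionY.value = (0.5 - y) * 2; // +1 (top)  .. -1 (bottom)
  panner.positionZ.value = -1;            // one unit in front of the listener

  osc.connect(panner).connect(ctx.destination);
  osc.start();
  osc.stop(ctx.currentTime + 0.3);        // brief 300 ms cue
}
```

In a real system the tone would be replaced by speech or a richer sonification, but the same panning principle lets a listener hear where each object sits in the scene.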

Try it out.

Engaging the Community

Collaborating with the community is key when creating accessible technology. Our team is partnering with Gateway Navigation CCC Ltd and the Canadian Council of the Blind (CCB), a consumer organization of Canadians who are blind, to ensure that our system is in line with the needs of the community. As part of our co-design approach, we are in regular contact with community members who are helping guide the development process, but there is always room for more voices. If you'd like to contribute to the project, we invite you to fill out our community survey.

Participate in our community survey.

Website Picture: Two sets of hands going over a Braille book. An overhead view of a software engineer's desk featuring two monitors displaying code and a laptop.

Our Technology

Our project is designed to be as freely available as possible, as well as extensible so that artists, technologists, or even companies can produce new experiences for specific graphical content that they know how to render. If someone has a special way of rendering cat photos, they do not have to reinvent the wheel, but can create a module that focuses on their specific audio and haptic rendering and plug it into our overall system.
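To make the plug-in idea concrete, here is a hypothetical sketch of how such a module might register itself with the overall system. Every name and type below is an illustrative assumption, not the IMAGE project's actual API.

```typescript
// Hypothetical plug-in registry: illustrative only, not IMAGE's real API.

interface Graphic {
  contentType: string;          // e.g. "photo/cat" in this made-up taxonomy
  data: ArrayBuffer;            // the raw image bytes
}

interface Rendering {
  audioUrl?: string;            // e.g. a generated sonification clip
  hapticRegions?: number[][];   // e.g. per-region texture intensities
}

interface GraphicRenderer {
  // Does this module know how to render the given graphic?
  canHandle(graphic: Graphic): boolean;
  // Produce the audio/haptic rendering for it.
  render(graphic: Graphic): Promise<Rendering>;
}

const renderers: GraphicRenderer[] = [];

// A specialized module (say, one tuned for cat photos) registers itself once...
function registerRenderer(renderer: GraphicRenderer): void {
  renderers.push(renderer);
}

// ...and the core system dispatches each graphic to the first module that
// claims it, falling back to a generic rendering otherwise.
async function renderGraphic(graphic: Graphic): Promise<Rendering> {
  const match = renderers.find((r) => r.canHandle(graphic));
  return match ? match.render(graphic) : {}; // empty stub stands in for the generic fallback
}
```

The point of a design like this is that a contributor only writes the rendering logic for their niche; discovery, dispatch, and fallback stay in the core system.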

Learn more about how our system works.

Contact Us

For more information, email: image@gnc3.com

 

