
Evaluating technologies with and for visually impaired people

30 years of evaluating innovative accessible or assistive technology in Human-Computer Interaction research and how we could do better

Quantitative, and where possible experimental, studies and evaluations are central to Human-Computer Interaction (HCI) research and to motivating policy initiatives. Yet few technologies for children with visual impairments have been evaluated this way in education research, and the same issue arises in research on technology for locomotion and mobility. Even when technologies have been evaluated, the current replication crisis calls for revisiting our practices, and standards for quantitative evaluations have changed over time. We set out to investigate how quantitative empirical evaluations of technology for visually impaired people are conducted in papers published at the top HCI venues in this area (CHI, TOCHI, ASSETS, TACCESS), and to identify areas of improvement. We suggest that single-subject experimental designs might be better suited to this group. We also outline practical steps authors and reviewers can take to improve their evaluation practices.
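
To make the single-subject suggestion concrete, here is a minimal sketch (in Python, with invented numbers) of how one participant's data from an ABAB reversal design could be analyzed using Non-overlap of All Pairs (NAP), a common single-case effect size. The phase labels, timings, and helper function are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch: analyzing ONE participant's data from an ABAB
# (reversal) single-subject design, using Non-overlap of All Pairs (NAP)
# as the effect size. All numbers below are made up for illustration.

from itertools import product

# Task-completion times (seconds) per session for one participant.
# A = baseline (existing tool), B = intervention (prototype).
phases = {
    "A1": [41, 39, 44, 40],   # first baseline
    "B1": [31, 28, 27, 29],   # first intervention
    "A2": [38, 42, 40],       # withdrawal (back to baseline)
    "B2": [26, 27, 25],       # reintroduction
}

def nap(baseline, intervention, lower_is_better=True):
    """Share of (baseline, intervention) pairs in which the intervention
    value improves on the baseline value; ties count as half a win."""
    pairs = list(product(baseline, intervention))
    wins = sum(1.0 if (b > i if lower_is_better else b < i)
               else 0.5 if b == i else 0.0
               for b, i in pairs)
    return wins / len(pairs)

baseline = phases["A1"] + phases["A2"]
intervention = phases["B1"] + phases["B2"]
print(f"NAP = {nap(baseline, intervention):.2f}")  # 1.0 = complete non-overlap
```

The appeal of such designs here is that each participant serves as their own control, so a handful of participants with very different impairments can still yield interpretable evidence.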

Documented issues

There are well-documented issues in conducting these evaluations with visually impaired people: typical usability scales appear not to adequately capture experience and performance when compared with non-visually impaired participants; the participants involved are not always visually impaired themselves; and given the diversity of this population, it is difficult to achieve representativeness and validity.
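
For readers unfamiliar with the scales at issue, here is a minimal sketch of how the System Usability Scale (SUS), one typical example, is scored. The scoring formula is the standard one; the example responses are invented.

```python
# Illustrative sketch: standard scoring for the System Usability Scale
# (SUS). Responses are on a 1-5 scale; item wording alternates between
# positively phrased (odd items) and negatively phrased (even items).

def sus_score(responses):
    """Standard SUS scoring: odd items contribute (r - 1), even items
    contribute (5 - r); the sum is scaled by 2.5 to a 0-100 range."""
    assert len(responses) == 10
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

# Example responses from one (hypothetical) participant:
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # -> 85.0
```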

Our findings

An evaluation standard? Only 44.9% of papers in our corpus are what we could call standard artifact contributions (a new prototype followed by a quantitative evaluation). This makes it harder to build a strong body of evidence about the best technology options in a given application area. On the other hand, it reflects a wealth of evaluation approaches, which both helps the field keep innovating on evaluation methods and suggests an evolution towards more user-centered and iterative work.

A lack of consistency. There is little consistency in the measures or usability scales used. Moreover, participants were often not described in enough detail or in a consistent way, impeding validity and our ability to reproduce evaluations. They were also not representative of the broader visually impaired population in terms of age and type of impairment: participants are much younger; blindness is over-represented; and few participants have additional disabilities, even though estimates suggest that up to half of this population does.

Researchers’ difficulties (spoiler: you’re not alone): 17.3% of the papers in our corpus openly disclose difficulties. Because the pool of potential participants is small, the number of participants researchers can realistically involve in evaluations is often considered too small; this is even worse when the evaluation requires participants with specific expertise. The papers also confirm that diversity between participants is difficult to handle, which can lead to excluding participants from a study, or to including sighted people for the sake of statistical validity, which itself becomes a limitation. These issues are compounded by heightened practical obstacles such as scheduling, experiment lengths that are not adapted to participants, and the lack of shared measures within the same application area.

Our recommendations

You can read the paper for the full list, but here are my two personal highlights. For evaluations where quantitative approaches are adequate (especially evaluations of performance gains), we should adopt a standard way of describing visual impairments. We suggest using the World Health Organization typology.
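
As a hedged sketch of what such standardized reporting could look like, the snippet below maps presenting visual acuity to the WHO (ICD-11) distance-vision impairment categories. The thresholds follow the WHO definitions, but the data structure and helper names are ours for illustration, not from the paper.

```python
# Sketch: describing participants with the WHO (ICD-11) typology for
# distance vision impairment, based on presenting visual acuity in the
# better eye. Field names and the Participant record are illustrative.

from dataclasses import dataclass, field

def who_category(snellen_denominator: float) -> str:
    """Classify presenting acuity 6/x by WHO category:
    mild = worse than 6/12, moderate = worse than 6/18,
    severe = worse than 6/60, blindness = worse than 3/60 (= 6/120)."""
    x = snellen_denominator
    if x > 120:
        return "blindness"
    if x > 60:
        return "severe"
    if x > 18:
        return "moderate"
    if x > 12:
        return "mild"
    return "none"

@dataclass
class Participant:
    age: int
    acuity_better_eye: float                 # denominator of 6/x
    onset: str                               # e.g. "congenital", "acquired"
    additional_disabilities: list[str] = field(default_factory=list)

p = Participant(age=34, acuity_better_eye=90, onset="acquired")
print(who_category(p.acuity_better_eye))     # -> "severe"
```

Reporting age, onset, WHO category, and additional disabilities in a structured way like this would make samples comparable across papers and evaluations easier to reproduce.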

It appears that because researchers and reviewers have expected evaluations to be conducted similarly across HCI (e.g., a minimum of 12 participants), researchers in accessibility have resorted to practices such as blindfolding sighted participants, which are not appropriate if the aim is to evaluate usability for people with visual impairments. Let’s be careful to consider how evaluation standards need to differ across application areas while advocating for more rigor!
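
A quick power calculation shows why a blanket participant minimum is a poor proxy for rigor: with n = 12, only large effects are reliably detectable at conventional thresholds. This is an illustrative back-of-the-envelope check using statsmodels, not an analysis from the paper.

```python
# Back-of-the-envelope power check for a within-subjects (paired /
# one-sample t-test) design. Numbers are illustrative.

import math
from statsmodels.stats.power import TTestPower

analysis = TTestPower()

# Smallest standardized effect (Cohen's d) that n = 12 can detect
# at alpha = .05 (two-sided) with 80% power:
d = analysis.solve_power(nobs=12, alpha=0.05, power=0.8)
print(f"n = 12 detects only d >= {d:.2f}")   # roughly d ~ 0.9, a large effect

# Sample size needed to detect a medium effect (d = 0.5):
n = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"d = 0.5 needs n >= {math.ceil(n)}")  # roughly 34 participants
```

In other words, a 12-participant study is only sensibly powered for large effects; rigid sample-size rules borrowed from other HCI subfields can push accessibility researchers toward ill-suited workarounds instead of better-matched designs.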

Finally, here’s one of my favorite HCI quantitative evaluations: Interactivity Improves Usability of Geographic Maps for Visually Impaired People, by Anke Brock et al.


The pre-print of this article is available on the hal-archives repository. The data on which the review is based is available in the ACM Digital Library. Do use it for your own research questions, and perhaps as a starting point for your related work.

We had a small Q&A on Twitter.



