An accessible, ASL-navigated consent app: ASL recognition technology researched, designed, and developed for Gallaudet University's REU for Accessible Information and Communication Technology
During my internship, I designed and developed the third iteration of the ASL Consent App, a web-based tool using sign language recognition technology to navigate consent forms in American Sign Language (ASL). The project aimed to overcome communication barriers and promote inclusion for the Deaf and Hard of Hearing community in academic and medical research. I conducted a thorough review of literature on Deaf healthcare experiences and accessible design practices, followed by creating an interactive wireframe in Figma. I then developed the website using JavaScript, HTML, and CSS. Through user testing with members of the Deaf community, I gathered critical feedback that enhanced the app’s usability and design.
The Deaf and Hard of Hearing (DHH) community faces significant health disparities due to communication accessibility barriers. Approximately 500,000 individuals in the United States primarily use American Sign Language (ASL), yet they are underrepresented in research and clinical trials. This underrepresentation stems from a lack of information in ASL, denied requests for ASL interpreters, mistrust of hearing researchers, and the absence of culturally and linguistically relevant informed consent methods (Kushalnagar et al., 2017; Kushalnagar et al., 2023).
REU 2022
Is the ASL navigation equivalent to the traditional informed consent process in terms of (1) comprehension of what is being consented to and (2) user friction in the consent process?
REU 2023
How can we re-evaluate the usability of sign language interaction in the ASL informed consent process through an improved user interface and incorporation of Sign Language Recognition?
(1) What set of signs are necessary and sufficient to navigate the app?
(2) How can a consent app incorporate a signature feature that legitimizes the consent process and aligns with the linguistic and cultural values of ASL?
(3) What visual cues added to the user interface can demonstrate the readiness of the recognition technology?
I redesigned the ASL consent app in Figma to include an onboarding phase covering four signs ('YES,' 'NO,' 'AGAIN,' and 'CONSENT'), a light and dark mode to increase accessibility, and an opening eye icon and skeleton overlay that show users the sign language recognition technology is active. Additionally, I designed a signature page to serve as the conceptual design for future iterations of the app that incorporate a fingerspelling recognition feature.
HTML
CSS
JAVASCRIPT
TYPESCRIPT
This version of the ASL Consent App, built with HTML, CSS, and JavaScript, improves upon the original iOS app by allowing seamless integration of advanced sign language recognition models. Powered by MediaPipe's gesture recognition system, the app identifies static handshapes and detects key ASL signs. My colleague, Nora Goodman, worked on integrating the sign language recognition model into the website.
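As an illustration of how such a model can run in the browser, the sketch below loads MediaPipe's Tasks Vision gesture recognizer; the model file name is a placeholder and the snippet is an assumption about the setup, not the app's actual code.

```javascript
// Minimal sketch of loading a custom MediaPipe gesture recognizer in the browser.
// The model file name ("asl_consent_gestures.task") is an illustrative placeholder.
import { FilesetResolver, GestureRecognizer } from "@mediapipe/tasks-vision";

async function createRecognizer() {
  // Load the WASM assets that back the MediaPipe Tasks Vision runtime.
  const vision = await FilesetResolver.forVisionTasks(
    "https://cdn.jsdelivr.net/npm/@mediapipe/tasks-vision/wasm"
  );
  // Build a recognizer from a custom .task model exported from Model Maker.
  return GestureRecognizer.createFromOptions(vision, {
    baseOptions: { modelAssetPath: "asl_consent_gestures.task" },
    runningMode: "VIDEO",
    numHands: 2,
  });
}
```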
Using the ASL Citizen Dataset of 84,000 videos, the gesture recognition model was trained to detect the signs “YES,” “NO,” “AGAIN,” and “CONSENT.” Screenshots of handshapes were manually curated, mirrored for both hands, and processed using MediaPipe's Model Maker tool.
The gesture recognition model runs directly in the browser, allowing real-time responses: signing 'YES' continues the process, 'AGAIN' restarts the video, 'CONSENT' moves to the signature page, and 'NO' exits the form.
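A minimal sketch of this gesture-to-action mapping appears below; the handler names (continueForm, replayVideo, goToSignaturePage, exitForm) are hypothetical stand-ins for the app's real navigation functions.

```javascript
// Sketch: query the recognizer on each video frame and map the top gesture to an action.
// Handler functions are hypothetical placeholders for the app's navigation logic.
const ACTIONS = {
  YES: () => continueForm(),          // continue to the next section
  AGAIN: () => replayVideo(),         // restart the current ASL video
  CONSENT: () => goToSignaturePage(), // proceed to the signature page
  NO: () => exitForm(),               // exit the consent form
};

function onVideoFrame(recognizer, videoElement) {
  const result = recognizer.recognizeForVideo(videoElement, performance.now());
  const top = result.gestures?.[0]?.[0]; // highest-scoring gesture for the first detected hand
  if (top && ACTIONS[top.categoryName]) {
    ACTIONS[top.categoryName]();
  }
  requestAnimationFrame(() => onVideoFrame(recognizer, videoElement));
}
```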
Ten participants who self-identified as Deaf/deaf or Hard of Hearing were recruited to engage with the ASL Consent App by simulating the completion and signing of a consent form; they then responded to a survey containing an ASL-translated System Usability Scale (SUS) and open-ended questions about their experience.
The average SUS score was 78.75, with younger participants scoring slightly higher on usability. Six users rated the app “Excellent,” three rated it “Good,” and one called it the “Best Imaginable.”
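For context, SUS scores follow the standard formula: odd-numbered items contribute (response − 1), even-numbered items contribute (5 − response), and the total is multiplied by 2.5 to yield a 0-100 score. The sketch below shows that arithmetic; the sample responses are made up, not participant data.

```javascript
// Standard SUS scoring: odd items contribute (response - 1), even items (5 - response);
// the total is multiplied by 2.5 to yield a 0-100 score.
function susScore(responses) { // responses: ten values, each from 1 (strongly disagree) to 5 (strongly agree)
  const sum = responses.reduce((acc, r, i) => {
    const contribution = i % 2 === 0 ? r - 1 : 5 - r; // index 0 is item 1, an odd-numbered item
    return acc + contribution;
  }, 0);
  return sum * 2.5;
}

// Example with made-up responses (not participant data):
console.log(susScore([4, 2, 5, 1, 4, 2, 5, 2, 4, 1])); // -> 85
```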
Screen recordings revealed diverse user behaviors. Most participants skipped video content when possible, and some encountered errors when signing ‘STOP’ instead of ‘CONSENT.’ Participants intuitively repeated signs when the app failed to recognize them, demonstrating strong user understanding of the system.
To analyze the accuracy of interactions with the sign language recognition technology, I created a confusion matrix to visualize performed signs against the actions the model recognized. The most accurate interaction was signing ‘YES,’ which the model recognized 84% of the time. The least accurate interactions were the ‘CONSENT’ and ‘AGAIN’ signs, which were recognized 65% and 80% of the time, respectively.
Confusion matrix of performed signs (rows) vs. actions recognized by the model (columns):

            YES   NO   AGAIN   CONSENT   NONE
YES          92    0       0         0     18
NO            0    0       0         0      0
AGAIN         0    0       4         0      1
CONSENT       0    0       0         8      6
NONE          2    2       0         0     NA
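As an illustration of how the per-sign accuracies above can be derived, the sketch below tallies (performed, recognized) pairs from the screen recordings and divides correct recognitions by total attempts; the code is an assumption about the analysis, not the script actually used.

```javascript
// Sketch: compute per-sign recognition accuracy from (performed, recognized) interaction pairs.
// Accuracy = times a performed sign was recognized as itself / total times it was performed.
function perSignAccuracy(interactions) {
  const totals = {};  // how often each sign was performed
  const correct = {}; // how often it was recognized correctly
  for (const { performed, recognized } of interactions) {
    totals[performed] = (totals[performed] || 0) + 1;
    if (performed === recognized) {
      correct[performed] = (correct[performed] || 0) + 1;
    }
  }
  const accuracy = {};
  for (const sign of Object.keys(totals)) {
    accuracy[sign] = (correct[sign] || 0) / totals[sign];
  }
  return accuracy;
}

// From the matrix, 'YES' was recognized correctly 92 times out of 110 attempts, roughly 84%.
```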
Participants recorded video feedback in ASL, which was transcribed and analyzed for key themes. Recurring themes included Bilingual Language Access, Model Sensitivity, Timing, and User Choice. These insights highlighted the importance of providing users with customizable experiences, ensuring model accuracy, and optimizing timing for a smoother user interaction. This feedback will guide improvements in future iterations of the ASL Consent App.
Ensure that information provided in ASL matches information provided in English to avoid confusion, frustration, and distractions during the consent process. Information should be evenly matched in terms of concept, duration, and complexity.
The model used for ASL navigation of the consent form must not be so sensitive that it reacts to a user's every movement, yet it must be sensitive enough to detect signs promptly and reliably. Users should be made aware of the model's sensitivity threshold (a sketch of one possible thresholding approach follows these recommendations).
To fully provide informed consent, a user must interact with and acknowledge all information present in a consent form. Users should not be able to skip through the videos and should intentionally interact with the form in a meaningful way. The visual cues that demonstrate the model's readiness to recognize signs should be adjusted to prevent users from passively interacting with the consent form.
Users should be able to customize elements of the app to improve personal access. Users may prefer to watch both the ASL videos and read the transcript, watch only the ASL videos, or read only the transcript. The current configuration of the app reduces the user's autonomy by forcing them to interact with the ASL videos to navigate to the next portion.
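A minimal sketch of one possible sensitivity threshold, assuming a confidence cutoff plus a requirement that the same sign appear on several consecutive frames before it triggers an action; the threshold values are illustrative, not the app's actual settings.

```javascript
// Sketch: only act on a sign once it is detected with sufficient confidence
// on several consecutive frames. Threshold values are illustrative assumptions.
const MIN_SCORE = 0.7;      // minimum recognition confidence to count a detection
const REQUIRED_FRAMES = 10; // consecutive frames required before the sign triggers an action

let candidate = null;
let streak = 0;

function filterDetection(gesture) { // gesture: { categoryName, score } or null
  const confident = gesture && gesture.score >= MIN_SCORE;
  if (confident && gesture.categoryName === candidate) {
    streak += 1;
  } else {
    candidate = confident ? gesture.categoryName : null;
    streak = candidate ? 1 : 0;
  }
  if (streak >= REQUIRED_FRAMES) {
    streak = 0;       // reset so the same sign is not fired repeatedly
    return candidate; // report the sign to the navigation logic
  }
  return null;        // not confident enough yet
}
```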