Abstract
People who are Deaf or Hard of Hearing (DHH) experience significant health disparities, largely due to barriers in communication accessibility. Approximately 500,000 individuals in the United States primarily use American Sign Language (ASL) and frequently encounter informed consent or intake forms presented exclusively in written English rather than in their native language. Recent advances in machine learning for Isolated Sign Language Recognition (ISLR), the recognition of individual ASL signs, provide an opportunity to develop fully bilingual forms in both ASL and written English. Such bilingual forms have significant potential in healthcare and employment contexts, ensuring comprehensive accessibility both for Deaf signers fluent in ASL and for hearing intake personnel fluent in English. We propose the concept of "joint bilingual navigation," which allows users to interact with digital forms either by signing into a camera-equipped device or through touchscreen interaction with written English. In this paper, we evaluate a bilingual informed consent and intake form that simultaneously supports navigation via natural ASL signs and traditional touchscreen input. Our user studies assessed interest and usability among Deaf signers and hearing non-signing intake staff, with particular focus on obtaining informed consent and signatures from Deaf participants. The results indicate that such bilingual approaches are effective and positively received. These findings have broader implications, demonstrating that similar bilingual navigation techniques can be applied effectively in other domains requiring inclusive, accessible interactions.