Stony Brook University

04/13/2026 | News release | Distributed by Public on 04/13/2026 10:31

Stony Brook CS Researchers to Present Seven Papers at CHI 2026, Showcasing Advances in AI, Accessibility, and Interaction Design

From accessibility to AI systems, Stony Brook researchers explore how people interact with technology in real-world contexts.

Stony Brook University's Department of Computer Science will be prominently represented at the ACM CHI 2026 Conference on Human Factors in Computing Systems, with seven accepted papers spanning accessibility, human-AI interaction, and novel interaction techniques.

From improving how blind users access information and complete hands-on tasks to examining how AI systems shape trust and authorship, these projects reflect a growing focus on designing technologies that are not only more capable but also more usable, reliable, and human-centered.

Advancing Accessibility and Human-AI Interaction

Several of the accepted papers focus on how blind and low-vision users interact with digital systems and emerging AI tools, surfacing both the promise and current limitations of these technologies in real-world use.

In "Finding the Signal in the Noise: An Exploratory Study on Assessing the Effectiveness of AI and Accessibility Forums for Blind Users' Support Needs," first author Satwik Ram Kodandaram, alongside Jiawei Zhou, Xiaojun Bi, I.V. Ramakrishnan, and Vikas Ashok, examines how blind users seek help through accessibility forums and generative AI tools. Through interviews with blind participants, the study finds that forums, while essential, are often fragmented and cognitively demanding to navigate, while AI tools can produce verbose, inconsistent, or unreliable guidance. The paper outlines design opportunities to better surface trustworthy, high-quality support.

In "Lost in Instructions: Study of Blind Users' Experiences with DIY Manuals and AI-Rewritten Instructions for Assembly, Operation, and Troubleshooting of Tangible Products," first author Monalika Padma Reddy and collaborators investigate how blind users follow instructions for assembling and troubleshooting physical devices. Their findings show that traditional manuals often rely on visual assumptions, while AI-generated instructions frequently introduce errors or ambiguity. The work highlights the need for clearer, linear, and nonvisual instruction design to support safe and independent task completion.

Expanding access to visual data, "Making Charts Speak: LLM-Based Conversational Chart Question Answering for Blind and Low-Vision Users," authored by Amit Kumar Das, Mohammad Tarun, and Klaus Mueller, introduces GraphWhisper, a system that allows users to explore charts through natural language without requiring pre-structured data. By guiding large language models to interpret chart images directly, the system enables accurate, conversational access to visual information, achieving strong performance in both benchmark evaluations and user studies.

Trust, Authorship, and Human-AI Behavior

Beyond accessibility, several papers examine how people interpret, trust, and are influenced by AI systems, raising important questions about reliability, ethics, and creative ownership.

In "Be Friendly, Not Friends: How LLM Sycophancy Shapes User Trust," Ting Wang and collaborators explore how conversational AI systems that agree with users, sometimes excessively, affect perceived trustworthiness. Their study finds that overly agreeable systems can reduce perceived authenticity or, in some cases, lead users to over-trust AI beyond its actual capabilities, underscoring the need for more careful design of AI behavior.

In "Can Good Writing Be Generative? Examining the Impact of Training on Copyrighted Works," Tuhin Chakrabarty and collaborators examine how large language models generate high-quality writing when trained on existing texts, raising questions about authorship and originality. The research points to a central tension in generative AI: while models can produce fluent, expert-level writing, they may rely on patterns learned from copyrighted material, with implications for writers, creativity, and the future of content creation.

Rethinking Everyday Interaction

Other accepted papers explore new ways of interacting with technology, from text input to communication design.

"KeySense," led by Tony Li and advised by Professor Xiaojun Bi, reimagines typing on flat touchscreens by treating input as a decoding problem. By combining models of human motor behavior and temporal typing patterns with a compact language model, the system can infer intended text even when finger input is imprecise, enabling faster and more accurate ten-finger typing on glass surfaces.

In "Every Persona Has Their Palette: Persona-Based Color Highlighting for Emotional Expression in Text Chat," Amit Kumar Das, Md. Ataur Rahman Bhuiyan, and Klaus Mueller explore how color can enhance emotional expression in digital communication. Their system uses AI to suggest color highlighting for words within messages, allowing users to convey tone without altering the text itself. A longitudinal user study shows that people adopt these features in nuanced ways depending on context and relationships, pointing toward more flexible, expressive messaging systems.

Together, these seven CHI 2026 papers highlight Stony Brook Computer Science's expanding impact in human-computer interaction, particularly at the intersection of accessibility, artificial intelligence, and interactive systems design. By addressing both technical challenges and human experience, from trust and usability to creativity and communication, the department's researchers are helping shape technologies that work more effectively for a broader range of users.

CHI 2026 will be held April 13-17, 2026, in Barcelona, Spain.

By Yuganshu Jain
