10/03/2025 | News release | Distributed by Public on 10/03/2025 12:25
For blind and low-vision (BLV) individuals, visual assistant systems like BeMyAI and SeeingAI provide instant answers to visual questions, like "Can you tell me what this form says?" or "How much is my bill?" These applications advise users against capturing personally identifiable information, but blind users often cannot avoid unintentionally including private objects in their photos. Some users also need to ask about personal items, raising further privacy concerns.
A new study led by Stony Brook University, in collaboration with the University of Texas at Austin and the University of Maryland, introduces FiG-Priv, a framework that selectively conceals only high-risk personal information. Details such as account numbers or the digits of a Social Security number are hidden, while safe context, like a form's type or a customer service number, remains visible.
Paola Cascante-Bonilla, co-author of the study and assistant professor in the Department of Computer Science at Stony Brook University, explained how it works: "Traditional masking techniques to protect sensitive information often blur or black-out entire objects. For blind and low-vision users, this is impractical. Masking too much destroys the utility of the content, while masking too little leaks sensitive data. FiG-Priv aims to allow BLV users to interact with AI systems without exposing personal information. It focuses only on the sensitive content."
The system detects and segments private objects within an image, such as a credit card or financial statement. The result is a redacted image in which risky content is clearly obscured with black squares, while the rest of the scene remains intact and interpretable by the visual assistant.
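The fine-grained redaction step described above can be sketched in a few lines. This is a minimal illustration, not the study's implementation: the region labels, coordinates, and risk categories below are hypothetical stand-ins for the output of a real detection and segmentation model.

```python
# Minimal sketch of fine-grained redaction: black out only high-risk
# regions while leaving safe context (e.g., a form's title) visible.
# Labels and coordinates are hypothetical examples.

HIGH_RISK = {"account_number", "ssn", "card_number"}

def redact(image, regions):
    """Black out pixels inside each high-risk region's bounding box.

    image   -- 2D list of grayscale pixel values (0-255)
    regions -- list of (label, top, left, bottom, right) tuples
    """
    for label, top, left, bottom, right in regions:
        if label not in HIGH_RISK:
            continue  # low-risk context stays visible
        for r in range(top, bottom):
            for c in range(left, right):
                image[r][c] = 0  # black square over sensitive pixels
    return image

# Toy 4x6 "image": every pixel starts fully visible (value 255).
img = [[255] * 6 for _ in range(4)]
regions = [
    ("form_title", 0, 0, 1, 6),      # safe context, left untouched
    ("account_number", 2, 1, 4, 5),  # sensitive, gets blacked out
]
redact(img, regions)
```

After running, only the pixels under the account-number box are zeroed; the form title row is unchanged, so a visual assistant can still describe the document's type.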
PhD student Jeffri Murrugarra-Llerena, the lead author, said, "Blind and low-vision users should be able to support both their independence and their privacy. In previous approaches, they were forced to choose one over the other. With our approach, users can ask questions more confidently, without worrying about what these systems might reveal."
Read the full story by Ankita Nagpal at the AI Innovation Institute website.