California State University, Long Beach

01/14/2026 | Press release

CSULB's Mohamed Abdelhamid places people at the center of AI

Artificial intelligence can prompt, depending on the individual, feelings of excitement, anxiety or confusion. As more businesses deploy the technology, CSULB's Mohamed Abdelhamid wants students to know how the tool can be used responsibly.

A technology expert in the College of Business, Abdelhamid serves as director of the Master of Science in Information Systems program. The program's responsible AI course, developed about a year after the 2022 debut of ChatGPT, rests on the reality that AI technologies are created by and for people. The upshot is that developers and users need to be careful that models are not compromised by biases or other errors.

"When people were focusing on utilizing AI, we were focusing on the consequences of AI," he said.

Q: What inspired you to research business technologies, and how did that lead to your interest in AI?

A: My background is in computer engineering, and I worked as a programmer and data analyst and ultimately did my PhD in an information systems context. That always inspired me to improve technology.

We started developing the responsible AI course in early 2023, and that was the ChatGPT moment. We were already ahead, and then we started seeing universities talking about ethical AI, responsible AI.

Q: How do you define responsible AI use and how has your research influenced your view?

A: Responsible AI is basically the deployment of AI systems that are fair, unbiased, transparent, secure, safe and accountable. I've already been working in privacy and security. Now I look at more model-related dimensions like fairness, bias, accountability and transparency.

The average person, when they hear "responsible AI," thinks about what users can do with AI, like using it unethically. For example, cheating or creating work that is not theirs. But the main concept of responsible AI is the design and deployment of AI systems before they reach users.

Q: What are the most important concepts you want Master of Science in Information Systems students to know about responsible AI use?

A: Fairness and transparency. And then, privacy and security. That's another thing. We have a track for security, and some people want to go into AI. They actually cross paths because you can't have AI systems without security, without safety, without privacy. But mainly, the thing that people don't realize is the amount of bias in AI systems.

Q: If I'm a bank, what happens to me if there's some kind of bias built into a system evaluating creditworthiness? Isn't that creating liability if somebody says, 'Well, you're discriminatory'?

A: What you described now is accountability. If there's a mistake, who is held accountable? In general, humans are not unbiased. You can't create a perfect system from an imperfect foundation, and that's where the problem is.

In my research, I focus on healthcare. What people don't realize is that you could actually lead an AI system to give you biased results without giving it direct unethical commands. My work focuses on benchmarking AI systems, specifically in healthcare.

Q: What do you see as the most promising opportunities for AI in business?

A: I see a lot of promise in healthcare. That's one area I think will benefit the most in the next few years.

Q: Are these benefits in terms of diagnoses? Or record keeping?

A: Both. Improving diagnoses, which could save lives, but also helping medical professionals. They don't have to spend time writing reports, and with that they could serve more patients or learn about new research.

Q: What are the pitfalls?

A: There is a lot of risk in terms of privacy and security because everyone is using AI more now. There is more about you and what you do in more places, and that makes your data and everyone's data very valuable for cybercriminals. And the same thing for companies, because the more they utilize AI systems, the more they are subject to attacks. Not only their data, but their users' data becomes subject to attacks.

Q: How would you like to see students' understanding grow after completing the Responsible Artificial Intelligence course?

A: We want them to understand that AI does not understand context, does not understand policy, unless it's been embedded by humans. We want them to understand the importance of humans, especially developers of AI systems, and be responsible when they're embedding those models.

Ultimately, if something goes wrong, the responsibility is on the developers. If a system makes a mistake or is biased or unfair, the institution or organization that is using the AI system is not the one held responsible. It goes back to the developers, and that's what we want them to take from this.
