
Having a "Secure Network" or "Secure Devices" Isn't Enough Anymore. So, What Is?

By Tom Bianculli | November 4, 2024

Having a "Secure Network" or "Secure Devices" Isn't Enough Anymore. So, What Is?

Security-minded solution architects from Google Cloud and Qualcomm explain how far you're going to have to go - and the lengths to which they're going - to help protect your organization's data.

There's this notion that a secure network of devices is not good enough…that what you need is a network of secure devices. However, at Zebra, we believe the only thing that's acceptable these days is a secure network of secure devices.

That's why we're working with Google Cloud and Qualcomm Technologies, Inc. to look deep into on-prem and cloud architectures to implement the best security features at every potential access point. Checking and updating settings at the network and edge device level is no longer sufficient. The only way to protect your intellectual property (IP) and reputation is to identify potential vulnerabilities and secure access to the network, device, silicon, software and architecture layers.

If it seems like overkill, it is. Because it has to be.

Geopolitical trends will only heighten the need for ever higher levels of security. We've said it many times here at Zebra: any device that's connected to a network is a potential point of vulnerability if not properly secured. So is any software component…any part of the software supply chain. And there are more people than ever targeting your organization, whether you realize it or not.

This takes me to the point I want to make today.

As you start bringing AI models into your environment, even if it's just a single AI model that's running at the edge, you must recognize the potential vulnerabilities it introduces into your organization and diligently assess, monitor, and manage its provenance and behavior.

Don't wait for a regulation to be put into place, or for a security incident to occur, before adopting this practice. And don't do it just for the sake of practicing responsible AI. Do it because there are risks you must mitigate so you aren't burdening your organization with liabilities or potentially putting your employees, customers, or constituents in harm's way. It's not just tech companies that must worry about these things. It's anyone and everyone who integrates AI into on-prem, cloud-based, or on-device workflows.

Fortunately, there are developers, engineers, solution architects, and business leaders at tech companies who are thinking about how best to support you from an AI security perspective.

It Takes a Village - and Diligence - to Secure AI Models

On July 18, at the Aspen Security Forum, a group of leading technology companies including Google, Amazon, Intel, IBM, Microsoft and NVIDIA launched the Coalition for Secure AI (CoSAI). This is just another signal that AI security is more important than ever, especially given the rise in hackers using AI to make their phishing emails, text messages, deep fake videos, and audio attacks more sophisticated.

I asked Srikrishna (Sri) Shankavaram, the Principal Cybersecurity Architect on our AI & Advanced Development team here at Zebra, what his thoughts were about this coalition. He told me:

"The launch of CoSAI is important and timely. One of the key workstreams it will focus on is the software supply chain security for AI systems, which spans the entire lifecycle of AI systems from data collection and model training to deployment and maintenance. Due to the complexity and interconnectedness of this ecosystem, vulnerabilities at any stage can affect the entire system."

We started talking about this a bit more, and he noted a few things I want to pass along to you:

  • AI systems often depend on third-party libraries, frameworks, and components that speed up development but can also introduce vulnerabilities. It's crucial to use automated tools to regularly check for and address security issues in these dependencies.
  • The widespread availability of open-source large language models (LLMs) necessitates robust provenance tracking to verify the origin and integrity of models and datasets (see the sketch after this list). Automated security tools should also be used to scan these models and datasets for vulnerabilities and malware, and to help ensure alignment with the OWASP Top 10 for LLMs, which addresses security concerns specific to LLMs. Those guidelines cover a comprehensive range of best practices, including data validation, secure model training, adversarial robustness, and privacy protection. Of course, if you run LLMs on the device rather than in the cloud, you'll enjoy some enhanced data security protection. That's one less access point to worry about securing.
  • When looking at closed-source AI models, the proprietary nature of the model may provide some security through obscurity, making it harder for malicious actors to find vulnerabilities to exploit. However, it also means that identifying and addressing security issues can be a prolonged process. With open source, on the other hand, you gain security from the collaborative efforts of the community: many eyes on the code help detect and resolve vulnerabilities quickly. Nevertheless, that same public exposure may reveal potential weaknesses to attackers. So, keep this trade-off in mind when evaluating AI models. Do your due diligence. Know the source and anyone else who has contributed to the model's training or oversight, and be sure you can trust those people.
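
To make the provenance point more concrete, here's a minimal sketch in Python of the kind of integrity check described above: verifying a downloaded model artifact against a pinned SHA-256 digest before loading it. The manifest and model file names are hypothetical, and in practice you'd pair a check like this with artifact signing (e.g., Sigstore) rather than rely on checksums alone.

```python
import hashlib
import json
from pathlib import Path

# Hypothetical manifest mapping vetted model artifacts to their expected
# SHA-256 digests, recorded when each model was first reviewed.
MANIFEST = json.loads(Path("model_manifest.json").read_text())
# e.g., {"llm-8b-q4.gguf": "3f2a...c9"}

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file so large model weights never need to fit in RAM."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model(path: Path) -> None:
    """Refuse to proceed unless the artifact matches its vetted digest."""
    expected = MANIFEST.get(path.name)
    if expected is None:
        raise RuntimeError(f"{path.name} is not in the vetted manifest")
    if sha256_of(path) != expected:
        raise RuntimeError(f"Checksum mismatch for {path.name}")

# Verify before loading; only then hand the file to your inference runtime.
verify_model(Path("models/llm-8b-q4.gguf"))
```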

Does On-Device AI (or Edge AI) Require a Different Security Approach?

Edge AI is often touted as a more secure approach than cloud-based AI models because you only have to worry about securing the device running the LLM. There's very little, if any, data flowing between that device and the cloud to run the application. That's one of the reasons we're seeing edge AI garner so much interest from retailers, manufacturers, healthcare providers, government leaders, and others who want to use generative AI (GenAI) to support frontline workers.

However, it's critical to understand what will be required to ensure data security and privacy when using on-device AI because it still requires diligent effort.

Some key points were made about this in a recent conversation I had with Art Miller, VP of Business Development at Qualcomm Technologies, Inc., and Rouzbeh Aminpour, Global Technical Solution and Engineering Manager from Google Cloud:

  • On-device AI is extremely valuable in terms of security and privacy. Processing data on the device itself, rather than transmitting it to the cloud, reduces the need for data transmission and cloud storage, which minimizes exposure to potential security threats and the risk of data breaches. Plus, by keeping sensitive data on the device, organizations can better control access and reduce the risk of unauthorized access or data leakage.
  • When using on-device AI (or any AI), ensuring data security at every layer - from the device to the cloud - is paramount. You must implement strong encryption, use security-rich access controls, and conduct regular security audits. Encrypt data both in transit and at rest to help protect it from unauthorized access, using the Advanced Encryption Standard (AES) for stored data and transport layer security (TLS, the successor to SSL) for data in motion (see the sketch after this list). Implement strict access controls so that only authorized personnel can access sensitive data, using multi-factor authentication (MFA) and role-based access control (RBAC) to enhance security. Regularly conduct vulnerability testing, code reviews, and compliance checks to identify and address weaknesses. Stay vigilant.
  • Clearly defining data ownership policies within your organization is essential, but having a policy alone won't prevent data from being shared without authorization. Establish a data governance framework that outlines the policies and procedures for data management, including data classification, ownership, and access rights. Define data retention policies to determine how long data should be kept and when it should be securely deleted. Implement protocols to ensure that data sharing is authorized and tracked. This includes maintaining logs of data access and sharing activities. Most importantly, ensure that employees are aware of data policies and understand their roles and responsibilities in maintaining data security.
  • It's imperative to stay up to date with regulatory requirements related to data privacy and security. If you operate in the European Union (EU), you know compliance with the General Data Protection Regulation (GDPR) is mandatory, and those considerations will have to extend to on-device AI use. In the U.S., the California Consumer Privacy Act (CCPA) requires businesses to provide consumers with transparency and control over their personal data, including the right to access and delete it and to opt out of its collection and sale. So, if these or other data privacy and security regulations apply to you, develop and implement frameworks to help manage compliance with all relevant data protection regulations, and make sure that compliance extends to your AI models. Regularly train employees on compliance requirements and update policies as regulations evolve. Also remember that other regulations, such as the EU AI Act, have security and privacy elements to them. If you're not sure what that means for you, consult with your legal team.
  • Along those lines, it's important to implement best-practice governance frameworks that help manage compliance with data privacy regulations and maintain the security of edge AI systems. Develop governance frameworks that outline the policies, procedures, and responsibilities for data management and security. Identify and manage risks associated with data processing and AI implementation, which includes conducting risk assessments and implementing mitigation strategies. Continuously monitor data processing activities to support compliance with regulations and help identify potential security threats.
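
To ground the encryption guidance above, here's a minimal sketch using the Python cryptography library's AES-256-GCM primitive to protect a record at rest. The key handling is deliberately simplified: on a real device the key should come from a hardware-backed keystore, and TLS would protect the same data in transit.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Assumption: in production this key lives in a hardware-backed keystore
# (e.g., a secure element on the device), never on disk in the clear.
key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)

def encrypt_record(plaintext: bytes, associated_data: bytes) -> bytes:
    """Encrypt one record with AES-256-GCM; prepend the random nonce."""
    nonce = os.urandom(12)  # 96-bit nonce, unique per message
    return nonce + aesgcm.encrypt(nonce, plaintext, associated_data)

def decrypt_record(blob: bytes, associated_data: bytes) -> bytes:
    """Split off the nonce and decrypt; raises InvalidTag on tampering."""
    nonce, ciphertext = blob[:12], blob[12:]
    return aesgcm.decrypt(nonce, ciphertext, associated_data)

blob = encrypt_record(b"patient record 1234", b"record-id:1234")
assert decrypt_record(blob, b"record-id:1234") == b"patient record 1234"
```

The associated data binds each ciphertext to its record ID, so encrypted records can't be silently swapped without triggering an InvalidTag error.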

Fortunately, AI security governance is going to be a key focus area for CoSAI, since governing AI security requires specialized resources to address AI's unique challenges and risks. Developing a standard library for risk and control mapping would help achieve consistent AI security practices across the industry. So, Sri told me he feels the CoSAI guidance could serve as a good template for you.

He also feels that creating an AI security maturity assessment checklist and a standardized scoring mechanism would enable organizations like yours to conduct self-assessments of AI security measures (a toy illustration follows below). It could provide you and your customers with assurance about the security of AI products. This approach also parallels the secure software development lifecycle (SDLC) practices already employed by organizations like Zebra through software assurance maturity model (SAMM) assessments. Therefore, it could help you extend our standard practices into your environment if you're using Zebra on-device AI tools. So, keep an eye on the work CoSAI is doing with regard to governance.
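
To show what such a self-assessment might look like mechanically, here is a deliberately toy Python sketch. CoSAI has not published a checklist or scoring mechanism yet, so every category and weight below is a hypothetical placeholder, not a standard.

```python
# Hypothetical controls and weights; a real checklist would come from
# CoSAI guidance or your own governance framework.
CHECKLIST = {
    "Model provenance is tracked and verified": 3,
    "Dependencies are scanned automatically in CI": 3,
    "Data is encrypted in transit and at rest": 2,
    "Access is gated by MFA and RBAC": 2,
    "Data retention and deletion policies exist": 1,
    "Employees are trained on data policies": 1,
}

def maturity_score(answers: dict[str, bool]) -> float:
    """Return a 0-100 score weighting each satisfied control."""
    total = sum(CHECKLIST.values())
    earned = sum(w for item, w in CHECKLIST.items() if answers.get(item))
    return 100.0 * earned / total

print(maturity_score({
    "Model provenance is tracked and verified": True,
    "Data is encrypted in transit and at rest": True,
}))  # -> ~41.7
```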

In the meantime, reach out if you have questions or need more clarity on anything I just mentioned.

You may want to listen to my conversation with Art and Rouzbeh if you haven't already.


I'd be happy to put you in touch with Sri, Art, or Rouzbeh as well to talk more about the current security climate and the mechanisms you'll need throughout your entire tech stack to defend against threats and reduce vulnerabilities.

They can also explain more about what Zebra, Google Cloud, and Qualcomm Technologies are doing to make it as easy as possible for you to keep your defenses strong at the edge and core of your business. We know security isn't a set-and-forget configuration.

###

What to Read or Watch Next:

Ask the Experts: Is On-Device AI Going to Prove to Be Hype or Helpful?

What do you think about on-device AI? Is it all hype or will it prove helpful to frontline workers? Zebra CTO Tom Bianculli sat down with Art Miller from Qualcomm and Rouzbeh Aminpour from Google Cloud in this new podcast episode to break down the benefits of this new approach to AI so you can decide for yourself.

Ask the Expert: "How Can I Put Responsible AI Into Practice?"

Responsible AI development, training, testing, and use require ongoing engagement and foundational practices to ensure long-term success and ethical implementation. Here is what you need to do to get started, whether you're an AI model developer, tester, or user.

Setting the Record Straight: AI Does Not 'Exist to Harm or Take Over Things' (Like Some People Have Been Led to Believe)

Zebra's Senior Director of AI and Advanced Technologies, Stuart Hubbard, sat down with aspiring software engineer Saliha Demir to find out what she has learned - and once misunderstood - about AI. Her early AI experiences are certainly a wake-up call to us all. Listen to this…

Topics
Podcast, Security, Partner Insight, Interview, Handheld Mobile Computers, Printing Solutions, Software Tools, Tablets, Wearables, AI, Field Operations, Public Sector, Healthcare, Manufacturing, Retail, Transportation and Logistics, Warehouse and Distribution, Hospitality, Banking, Energy and Utilities

Tom Bianculli

Tom Bianculli serves as the Chief Technology Officer of Zebra Technologies. In this role, he is responsible for exploring emerging opportunities and coordinating with product teams on advanced product development and Internet of Things (IoT) initiatives. The Chief Technology Office comprises engineering, business, customer research, and design functions.

Tom began his career in the tech industry at Symbol Technologies, Inc. (later acquired by Motorola) in 1994 as part of the data capture solutions business. In the years that followed, he held positions of increasing responsibility, including architectural and director of engineering roles.

Tom has been granted over 20 U.S. patents and is a Zebra Distinguished Innovator and Science Advisory Board associate. He was named one of Technology Magazine's Top 100 Leaders in Technology in 2021.

Tom holds bachelor of science and master of science degrees in electrical engineering from Polytechnic University, NYU and serves on the board of directors for the School of Engineering at the New York Institute of Technology.

Zebra Developer Blog

Are you a Zebra Developer? Find more technical discussions on our Developer Portal blog.

Zebra Story Hub

Looking for more expert insights? Visit the Zebra Story Hub for more interviews, news, and industry trend analysis.
