Generative AI (GenAI) is a dominant topic of conversation and strategy in every enterprise conference room. It can deliver extraordinary efficiency across countless tasks and perform fast analysis without human intervention. However, as enterprise use of GenAI has proliferated, CIOs and CISOs have quickly identified a security downside in the commonly used public models: they can leak sensitive data to competitors and undermine compliance posture by divulging personally identifiable customer information.

Security-conscious enterprises have responded by insisting on private models for most GenAI use cases. With a private model, an organization can customize the model for specific business uses and govern how its data is handled during training. However, adopting a private model does not guarantee a bulletproof security posture. To use GenAI effectively and securely, CIOs and CISOs need to consider several other data security factors specific to GenAI.

Internal Security Risks: Prevent Data Leaks to Employees

From a data security perspective, enterprises should carefully plan applications, interfaces, and workflows to avoid intentional and unintentional misuse of the datasets used by GenAI models. The problem can be complex. For example, sensitive data from one business unit may not be shareable with other business units, even within the same division. Some of the data to be analyzed may not be public information and may be available only to a handful of need-to-know employees. The goal should be to expose teams only to the data required for their roles and tasks. CISOs should include all relevant stakeholders (legal, security, HR, and other essential business units) in GenAI strategy deliberations to identify the sensitive data and the access rights appropriate to each business unit.
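To make the pattern concrete, here is a minimal sketch (illustrative only; the roles, sensitivity labels, and policy table are hypothetical, not drawn from any specific product) of filtering documents by role and business unit before they ever reach a GenAI retrieval or prompting pipeline:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Document:
        doc_id: str
        business_unit: str   # owning unit, e.g., "finance"
        sensitivity: str     # e.g., "public", "internal", "restricted"
        text: str

    # Assumed access policy: which sensitivity levels each role may see,
    # and whether the role is confined to its own business unit's data.
    ROLE_POLICY = {
        "analyst":   {"levels": {"public", "internal"}, "own_unit_only": True},
        "executive": {"levels": {"public", "internal", "restricted"}, "own_unit_only": False},
    }

    def visible_documents(docs, role, unit):
        """Yield only the documents this role/unit pairing may see."""
        policy = ROLE_POLICY[role]
        for doc in docs:
            if doc.sensitivity not in policy["levels"]:
                continue
            if policy["own_unit_only"] and doc.business_unit != unit:
                continue
            yield doc

The essential design choice is that entitlement checks run before retrieval and prompt assembly, so the model never sees data outside a user's need-to-know scope.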

Third-Party Cybersecurity Risks: Contractors and Partners Using Public GenAI with Your Data

A corporate partner can undercut an enterprise’s security efforts by bringing sloppy GenAI practices into collaborative projects. Can CIOs and CISOs reap the benefits of collaborative GenAI projects and simultaneously sidestep the damage? The answer is yes, but only if GenAI is implemented strategically and carefully. Corporate leaders should evaluate any project-related use of GenAI by partners and contractors. This means extending the enterprise’s acceptable GenAI practices into contractor and partner contracts to ensure conformity with corporate privacy policies, including assurances that data will not leak into public GenAI models.
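Contractual assurances can also be backed by technical enforcement. As one hedged illustration (the hostnames below are placeholders, not real services), a simple egress check can refuse to send project data to any model endpoint that is not on an approved list:

    from urllib.parse import urlparse

    # Hypothetical allowlist of approved private model endpoints.
    APPROVED_MODEL_HOSTS = {
        "genai.internal.example.com",
        "partner-approved-model.example.net",
    }

    def is_approved_endpoint(url):
        """Reject calls to any model endpoint not on the approved list,
        such as a public GenAI API a contractor might reach for by habit."""
        host = urlparse(url).hostname or ""
        return host in APPROVED_MODEL_HOSTS

    assert is_approved_endpoint("https://genai.internal.example.com/v1/chat")
    assert not is_approved_endpoint("https://public-llm.example.org/api")

In practice, the same allowlist would typically live in a proxy or firewall rather than in application code; the sketch simply shows the check itself.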

Model Training Risks: Data Leakage to the Model and Beyond

One of the balancing acts of GenAI is model training. GenAI models train on data and generally improve with more of it. However, ingesting more data also raises the risk of inaccurate outputs, false narratives, and exposure of sensitive information. Organizations using a private model will add their own data for specific use cases, insights, and recommendations. Companies need to prevent the model from leaking that data externally by walling off company data and defining permitted uses, such as “analysis only,” to protect data and trade secrets. In addition, they can take steps to prevent platform vendors from ingesting company data to train and improve their platforms, which could inadvertently benefit competitors. Professionals involved in the GenAI implementation strategy must fully understand how model training works and how to protect the company’s intellectual property and data when it is used for training. Boundaries must be strict and well understood. Wipro’s State of Cybersecurity Report 2023 explains how organizations can establish an AI risk management framework with rules and controls to leverage GenAI safely.
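One common safeguard before company data is used for training or prompting is to scrub obvious personally identifiable information. The sketch below is a deliberately minimal illustration (the regex patterns are simplistic; production systems generally rely on dedicated PII-detection tooling rather than hand-rolled expressions):

    import re

    # Illustrative patterns only; real deployments need far broader coverage.
    PII_PATTERNS = {
        "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
        "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    }

    def redact(text):
        """Mask common PII before text is used for model training or prompts."""
        for label, pattern in PII_PATTERNS.items():
            text = pattern.sub(f"[{label}]", text)
        return text

    print(redact("Reach Jane at jane.doe@example.com or 555-867-5309."))
    # -> Reach Jane at [EMAIL] or [PHONE].

Redaction of this kind complements, but does not replace, the contractual and platform-level controls described above.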

Where to Go from Here?

GenAI holds the promise of significant efficiency gains and enriched experiences. However, it can harm a business if an adequate data security plan is not in place. The technology will continue to evolve, and strategies for governing the data fed into GenAI models will change correspondingly. Governments globally have already begun to regulate AI or issue guidance (NIST AI 100-1, the EU AI Act, the U.S. White House’s Blueprint for an AI Bill of Rights, etc.). However, AI regulation is challenging because few businesses, let alone governments, fully understand AI or the best strategy for regulating it. Inevitably, CISOs must monitor the evolving AI regulatory landscape to keep data secure while staying compliant.

There is no doubt that CISOs have their plates full. However, this is not the first time a new technology has challenged how companies work. GenAI has the potential to make employees more productive and offer customers enriched experiences. Companies should launch their GenAI strategies with security front and center, and keep it there throughout use case design, to safeguard business interests as they implement AI solutions.

About the Author

Saugat Sindhu
Global Head Strategy and Risk Practice, Wipro Cybersecurity and Risk Services

Saugat’s team provides business advisory, technology enablement, intelligent automation-based transformation, co-sourcing, and managed services. His areas of focus include cybersecurity risk strategy; governance, risk, and compliance (GRC); third-party risk management (TPRM); data privacy; enterprise security architecture; enterprise resource planning (ERP) platform security; and AI governance.