The most serious internal AI governance risk is not the model itself but the unmonitored prompts employees feed it. Samsung's experience in March 2023 is a cautionary example: engineers pasted proprietary semiconductor source code and internal meeting notes into a third-party generative AI service. This exposure involved no cyberattack; it was the consequence of well-intentioned employees optimizing for speed, at the expense of enterprise data security and corporate trade secrets.
The Rising Exposure from Shadow AI
Empirical data underscores the growing threat. IBM's 2025 Cost of a Data Breach Report found that roughly 13% of organizations surveyed reported breaches involving their sanctioned AI models or applications, while about 20% reported breaches tied to shadow AI. Strikingly, 97% of these AI-related breaches occurred in environments lacking proper AI access controls.
Wide use of "shadow AI"-unauthorized, unvetted AI tools-can increase financial liability significantly, adding an average of about $670,000 to the cost of a breach for organizations with high levels of shadow-AI use.
Furthermore, Gartner forecasts that by 2027, over 40% of AI-related data breaches will stem from improper cross-border use of generative AI.
Prompts as Legal Liabilities
Employees across departments use AI for everything from drafting proposals to analyzing contracts. Every such prompt risks embedding confidential client information, proprietary pricing, or intellectual property, potentially putting the company in breach of the DPDP Act, the GDPR, or contractually signed NDAs.
An unmanaged prompt library, the central repository of an organization's reusable AI inputs, becomes a critical node of systemic risk.
The Legal function is uniquely positioned to govern this library (a minimal sketch of such a screening gate follows the list) by virtue of its capacity to:
● Set boundaries for regulatory and contractual compliance.
● Identify and flag high-risk language or sensitive terminology.
● Ensure that prompts do not contain Personally Identifiable Information (PII) or confidential data.
● Maintain necessary version controls and defensible audit trails.
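To make these capacities concrete, here is a minimal sketch of what an automated screening-and-logging gate in front of a prompt library might look like. It is illustrative only: the regex patterns, the RESTRICTED_TERMS list, the function names, and the audit-entry format are all hypothetical assumptions, not any specific vendor's tooling; a real deployment would use vetted PII-detection software and a Legal-maintained term list.

```python
import hashlib
import re
from datetime import datetime, timezone

# Illustrative patterns and terms only; a real deployment would rely on
# vetted PII-detection tooling and a Legal-maintained restricted-term list.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\+?\d[\d\s().-]{8,}\d\b"),
}
RESTRICTED_TERMS = {"confidential", "client pricing", "term sheet"}  # hypothetical

def screen_prompt(text: str) -> list[str]:
    """Return flags that Legal must clear before the prompt enters the library."""
    flags = [f"pii:{name}" for name, pattern in PII_PATTERNS.items()
             if pattern.search(text)]
    lowered = text.lower()
    flags.extend(f"term:{term}" for term in RESTRICTED_TERMS if term in lowered)
    return flags

def record_version(text: str, author: str, audit_log: list[dict]) -> dict:
    """Version the prompt by content hash and append a defensible audit entry."""
    entry = {
        "version": hashlib.sha256(text.encode("utf-8")).hexdigest()[:12],
        "author": author,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "flags": screen_prompt(text),
    }
    audit_log.append(entry)
    return entry
```

For example, recording the prompt "Summarise the confidential term sheet" would produce an entry flagged term:confidential and term:term sheet, routing it to Legal review instead of the shared library, while the content hash and timestamp give each version a defensible audit trail.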
Without clear legal oversight, even routine prompts can inadvertently reveal sensitive pricing information or produce legally unbalanced content. This risk is reflected in how quickly major financial institutions such as JPMorgan and Bank of America moved to limit employee access to public generative AI tools over data protection and compliance concerns.
The Business Mandate for Legal-Led Governance
The cost of a weak AI governance framework is high. While overall breach costs fluctuate from year to year, organizations with heavy shadow AI use suffer significantly greater losses, and Varonis research finds that 99% of organizations have sensitive information unnecessarily exposed to AI systems.
It follows that establishing modern AI governance is a business imperative. A robust framework, sketched in code after this list, should include:
● Prompt classification (e.g., restricted, high-risk, low-risk).
● Formal Legal sign-off protocols for sensitive use cases.
● Redaction standards that keep sensitive data out of inputs.
● Pre-vetted prompt templates that ensure compliance from the start.
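The sketch below models how classification tiers, sign-off protocols, and pre-vetted templates could fit together. The tier names mirror the examples above, but everything else, the PromptTemplate structure, the CONTRACT_SUMMARY template text, the alias placeholders, and the sign-off rule, is a hypothetical assumption offered for illustration, not a prescribed implementation.

```python
from dataclasses import dataclass
from enum import Enum

class Tier(Enum):
    RESTRICTED = "restricted"  # blocked until formal Legal sign-off
    HIGH_RISK = "high-risk"    # usable only via a redacted, pre-vetted template
    LOW_RISK = "low-risk"      # freely reusable

@dataclass(frozen=True)
class PromptTemplate:
    name: str
    tier: Tier
    body: str  # pre-vetted wording; {placeholders} stand in for live client data

# Hypothetical template: counterparties appear only as redacted aliases,
# so real client names and pricing never enter the input.
CONTRACT_SUMMARY = PromptTemplate(
    name="contract-summary",
    tier=Tier.HIGH_RISK,
    body=(
        "Summarise the key obligations in the agreement between "
        "{party_a_alias} and {party_b_alias}, without quoting clause text verbatim."
    ),
)

def requires_legal_signoff(template: PromptTemplate) -> bool:
    """Formal sign-off protocols apply to the restricted and high-risk tiers."""
    return template.tier in (Tier.RESTRICTED, Tier.HIGH_RISK)

def render(template: PromptTemplate, **aliases: str) -> str:
    """Fill a pre-vetted template with redacted aliases, never raw client data."""
    return template.body.format(**aliases)
```

Calling render(CONTRACT_SUMMARY, party_a_alias="Party A", party_b_alias="Party B") yields a compliant input by construction, which is the point of pre-vetted templates: compliance is enforced before the prompt is ever typed.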
The Samsung incident confirms that warnings alone are not enough. The essential lesson is that the most significant threat to enterprise AI security will come from unmonitored inputs. Legal control over the prompt library is the foundational protection needed to convert potential AI liabilities into safeguarded competitive advantages.
Sources (media coverage):
● Samsung incident, the subsequent ban, and banks (JPMorgan, Bank of America, Citigroup, Deutsche Bank, Wells Fargo, Goldman Sachs) restricting ChatGPT: https://mashable.com/article/samsung-chatgpt-ban
● Summary of IBM 2025 AI-related breach statistics: https://www.kiteworks.com/cybersecurity-risk-management/ibm-2025-data-breach-report-ai-risks/
● Breakdown of the shadow AI cost premium: https://vorlon.io/saas-security-blog/ibm-cost-of-data-breach-ai-governance-gap
● Commentary on the IBM report (PDF): https://www.bakerdonelson.com/webfiles/Publications/20250822_Cost-of-a-Data-Breach-Report-2025.pdf
● Varonis "99% exposed" and "98% unverified apps" findings: https://www.secureworld.io/industry-news/orgs-expose-sensitive-data-ai-tools



