AI in Deals: Unlocking value without losing control
The appetite for AI-powered dealmaking in the M&A community is high. But adoption continues to lag some way behind the ambition.
A common explanation for this is the significant hurdle posed by data security and data privacy concerns – a theme we frequently address at twisted loop. And in the high-stakes world of M&A, where confidentiality is paramount and the use of AI is constantly evolving, this caution is not just valid, it's absolutely essential.
That said, appropriate caution doesn't mean the immense potential of AI is off-limits. Addressing these concerns head-on allows you to navigate the complexities sooner and realise benefits quicker. Secure AI is not only possible but truly vital to maintaining your competitive edge in today's dynamic deal landscape.
So, here’s our quick Dealmaker's guide to secure AI adoption.
The key lies in two interconnected elements: understanding the crucial differences between the types of AI tools available and implementing the right, practical safeguards.
Element 1: Know Your Tools
The first step is recognising that not all AI tools are created equal: there's a critical distinction between Consumer and Enterprise AI models.
Consumer AI Models: These are your publicly accessible tools, like the standard versions of Claude, ChatGPT or Gemini that many of us use in everyday life. Whilst really powerful for general tasks, their underlying terms may allow user inputs and data to be used in future model training.
The warning here is stark: never input confidential client information, non-public deal data, or privileged communications into these public, consumer-grade AI models. Doing so means losing control over that data, with no security guarantees and open-ended risk.
Enterprise-Grade AI Solutions: Fortunately, solutions like dedicated private LLM deployments on your cloud, Microsoft Copilot for M365, or Google Workspace with Gemini Enterprise are built differently. They are designed specifically for business use with security and privacy at their very core.
The right enterprise platform brings key security features, often as standard:
Data Isolation: Critically, your prompts and data are not used to train the provider's public models. Your information remains your information.
Security Controls: Expect robust encryption (both in transit and at rest), seamless integration with your existing company access management systems, and often options to disable data sharing or training inputs entirely.
Compliance: These platforms are typically designed to meet stringent standards like UK GDPR, ISO 27001, and SOC 2, offering assurances needed for regulated industries. Many also offer private cloud or even on-premise deployment options for maximum control.
Confidentiality: Strong contractual commitments from the provider will help to keep your data private, including clear, transparent policies on data retention and deletion.
Some vendors even offer AI environments that operate entirely inside your own firewall, keeping all data internal and under your strict control. These aren’t just better for compliance; they give your team the confidence to experiment without compromise.
Think of it like this: using Consumer AI for sensitive deal work is like logging onto the free public Wi-Fi at the airport. Using Enterprise AI is like connecting securely over your company's VPN, where your traffic is encrypted end to end. Would you risk transmitting critical M&A analysis over an unsecured public network?
Element 2: Put up the Guardrails
Understanding the tools is the first step; implementing appropriate safeguards around their usage is the next.
Making AI an integral, secure part of your dealmaking framework requires thoughtful navigation. Guardrails that are well-crafted and right-sized for your organisation are essential – not to stifle innovation, but to guide it effectively.
Keep it simple, specific, and focused on enablement:
Clear policy: Establish clear AI usage, governance, and ethics policies. Define what's acceptable, what's prohibited, and who is responsible.
Mandate enterprise solutions: Explicitly mandate the use of approved Enterprise-Grade solutions for any task involving sensitive or confidential information. No exceptions for client or deal data.
Data handling: Train teams on data minimisation (using only the necessary data for the task) and anonymisation techniques where appropriate for less sensitive exploration.
Access & monitoring: Implement role-based access controls for AI tools, set usage limits based on roles or projects where needed, and consider appropriate usage monitoring (always respecting employee privacy regulations) to ensure policy adherence.
Controlled testing: Create controlled 'sandbox' environments for testing new AI tools or specific use cases before wider deployment, allowing you to assess risks in isolation.
Decentralised education: Ensure everyone using AI tools understands the policies, the risks, and the firm's expectations – this responsibility shouldn't sit solely with IT or compliance.
Supplier due diligence: Apply rigorous due diligence to all AI vendors, especially those integrated with critical systems like your virtual data room. Scrutinise their security posture, data handling practices, certifications, and contractual commitments.
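To make the data handling point above concrete, here is a minimal sketch of what data minimisation can look like in practice: a simple pre-processing step that redacts obviously sensitive tokens before a prompt ever leaves your environment. The patterns and placeholder labels are hypothetical examples for illustration only – real anonymisation for deal work needs far more rigorous, professionally reviewed tooling.

```python
import re

# Illustrative redaction patterns (hypothetical, not exhaustive):
# email addresses, monetary deal values, and "Project <Name>" codenames.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"[£$€]\s?\d[\d,]*(?:\.\d+)?(?:\s?(?:m|bn|k))?", re.I), "[AMOUNT]"),
    (re.compile(r"\bProject\s+[A-Z]\w+\b"), "[CODENAME]"),
]

def minimise(prompt: str) -> str:
    """Strip sensitive details from a prompt before it is sent to an AI tool."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

raw = "Project Falcon: target valued at £45m, contact jane.doe@example.com"
print(minimise(raw))
# → [CODENAME]: target valued at [AMOUNT], contact [EMAIL]
```

Even a lightweight step like this reinforces the habit of asking "does the tool actually need this detail?" before anything is shared – the cultural point behind data minimisation.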
Think of it like this: Consider a highly advanced diagnostic machine in a hospital; you wouldn't let untrained staff use it on patients without clear protocols, training certifications, approved procedures for specific tests (policy & mandates), regular calibration (supplier DD & monitoring), and controlled training scenarios. These rules don't stop the diagnosis; they ensure the powerful tool is used accurately and safely for the benefit of the patient (your deal).
Hopefully this quick guide helps you get started with more confidence on your journey towards secure AI adoption. Our recommended approach remains straightforward: understand the tools, implement clear policies, prioritise enterprise-grade solutions for sensitive data, and start experimenting with low-risk use cases first.
This structured approach doesn't just mitigate risk; it builds crucial trust. Your teams are far more likely to adopt AI effectively and responsibly if they know what 'good' looks like and feel secure using the tools.
Confidentiality will always be a cornerstone of dealmaking. And as AI becomes more embedded in the process, those who master data security alongside innovation will be the ones who move fastest and safest. By addressing security concerns proactively, you pave the way for AI to become a powerful, trusted ally in the dealmaking arena.
How twisted loop can help your business
We’re a consultancy that helps ambitious businesses unlock new value, by turning vision into capability and capability into growth.
We work at the intersection of strategic definition & execution and Data & AI delivery, helping our clients to:
Scale operations with intelligence and precision
Embed the data & AI capabilities required for future growth
Unlock new sources of enterprise value
Build internal confidence to lead the next phase of growth
Whether you’re scaling a business or evolving a portfolio – seeking greater efficiency, expanding operations, or heading in a new direction – we help you progress with clarity, confidence and capability.