Your Firm's ChatGPT Problem: How Consumer AI Is Creating Compliance Landmines
Published: March 4, 2026 · 7 min read
Someone at your firm pasted client data into ChatGPT today.
Maybe it was a client's portfolio summary to draft a meeting recap. Maybe it was a retirement projection to help write an email. Maybe it was a note from a prospect call to help craft a follow-up. It happened at firms run by careful, compliant, well-intentioned people. It's happening at yours.
The question isn't whether your staff is using AI for client work. They are — and most of the time, it's actually making them better at their jobs. The question is which AI, under which terms, and whether it just created a Gramm-Leach-Bliley violation.
The Gap Nobody Is Explaining
There are three meaningfully different ways to interact with ChatGPT, and they have very different implications for your client data. Most advisors don't know the difference.
Consumer ChatGPT (chat.openai.com — free and Plus tiers)
When your staff opens ChatGPT in a browser and types a message, they're using the consumer product. OpenAI's default settings — particularly on free accounts — have historically allowed conversations to be used for model training. Even where training can be disabled, conversations are retained on OpenAI's servers. Your client data is being transmitted to and processed by a third-party system with terms of service written for individual consumers, not regulated financial services firms.
ChatGPT Team and Enterprise
OpenAI's paid business tiers have stronger data commitments — training is off by default, data residency options exist, and formal data processing agreements are available (BAAs are offered too, though those are a HIPAA construct, not a GLBA one). This is meaningfully different from the consumer product. It's not perfect, but it's defensible if configured correctly.
The OpenAI API
This is what developers use to build applications on top of OpenAI's models. By default, API inputs are not used for training. The data handling is contractually controlled. This is the version that sophisticated AI deployments use — and it's architecturally different from the chatbot your staff has bookmarked in their browser.
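To make the distinction concrete, here is what API access looks like at the code level, using OpenAI's official Python SDK. This is a minimal sketch, assuming the openai package is installed; the model name and prompts are illustrative, not a recommendation:

```python
# Minimal sketch of API-based access via OpenAI's official Python SDK.
# Unlike the consumer chatbot, requests made this way fall under the API's
# contractual data-handling terms, and inputs are not used for training by default.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system", "content": "You draft client meeting recaps."},
        {"role": "user", "content": "Draft a recap of a portfolio rebalancing discussion."},
    ],
)
print(response.choices[0].message.content)
```

The point isn't the code itself; it's the architecture. An application built this way sits between your staff and the model, which means your firm controls what goes in, under what agreement, and with what logging.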
The problem: to a non-technical employee, all three feel like "ChatGPT." They all involve typing a question and getting an answer. The compliance implications are radically different.
What the Regulations Actually Say
Gramm-Leach-Bliley (GLBA) and Regulation S-P establish your firm's obligations around nonpublic personal information (NPI) — which includes essentially everything meaningful about your clients: their account values, retirement projections, income, family situation, and financial goals.
The core requirement isn't complicated: you must protect NPI from unauthorized disclosure and maintain reasonable safeguards to ensure it doesn't end up somewhere it shouldn't be.
When a staff member pastes client NPI into a consumer AI tool, they've disclosed that information to a third party — one that may retain it, may use it for training, and has no contractual relationship with your firm that establishes data handling obligations. That's a plausible GLBA violation. Whether a regulator would characterize it that way depends on facts and circumstances, but "we didn't know the tool worked that way" is not a defense that holds up in an audit.
The SEC and FINRA have both issued risk alerts about AI tool usage at broker-dealers and investment advisers in the last two years. Reg S-P obligations — protecting customer records and information — apply to this scenario directly. The regulatory framework isn't waiting for AI; it already covers it.
How to Audit Your Firm in 30 Minutes
You don't need a consultant or a technology audit firm for the initial exposure check. Three questions will tell you whether you have a problem:
1. Ask your staff. Directly. "Do you use ChatGPT, Gemini, Copilot, or any other AI tool when working on client-related tasks?" Most will say yes. The ones who say no — follow up. Ask what they use when they want to "quickly draft an email" or "clean up meeting notes." You'll find it.
2. Check what accounts they're using. Staff logged into consumer ChatGPT with a personal account (a personal Gmail login, a free-tier signup) are not covered by any business agreement. Free ChatGPT accounts are a live exposure. If anyone is using personal accounts for business AI tasks, that's the first thing to address.
3. Look at your technology policy. Most RIA compliance manuals were written before ChatGPT existed. If your technology or data handling policy doesn't explicitly address AI tools, you have a policy gap that a regulator can point to.
What Compliant AI Deployment Looks Like
The fix isn't banning AI. Firms that ban AI tools will watch their staff use them anyway — through personal devices, on lunch breaks, through the back door. The tools are too useful and the habit is already formed. A prohibition policy with no enforcement mechanism is worse than no policy, because it creates documentation that you knew and failed to act.
The fix is a controlled deployment with clear guardrails.
- API-based access only for any AI that touches client data. This means staff interacts with AI through an application that uses the API under contractual data handling controls — not through the consumer chatbot. (A sketch of what that guardrail layer can look like follows this list.)
- Explicit policy covering which AI tools are approved, what data can and cannot be used with each, and what the consequences are for non-compliant usage.
- Training for staff — not technology training, but compliance training. "Here is what ChatGPT does with your input. Here is what our approved tools do. Here is why the difference matters."
- A vendor agreement if you're using any AI product for client-related work. A reputable provider should be willing to sign one that establishes data handling obligations compatible with GLBA.
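To make "controlled deployment" concrete, here is one way an application layer can enforce a guardrail: screening outgoing prompts before they ever leave the firm's environment. This is a hypothetical sketch; the function names and regex patterns are illustrative, and a production deployment would use a vetted data loss prevention (DLP) tool rather than hand-rolled patterns:

```python
import re

# Illustrative patterns only. A real deployment would use a vetted
# DLP library or service, not two hand-written regexes.
NPI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "account_number": re.compile(r"\b\d{8,12}\b"),
}

def screen_for_npi(text: str) -> list[str]:
    """Return the names of any NPI patterns detected in the text."""
    return [name for name, pattern in NPI_PATTERNS.items() if pattern.search(text)]

def submit_prompt(text: str) -> str:
    """Refuse to forward a prompt that appears to contain NPI."""
    hits = screen_for_npi(text)
    if hits:
        raise ValueError(f"Prompt blocked, possible NPI detected: {', '.join(hits)}")
    # Forward to the API under the firm's contractual terms (omitted here).
    return text
```

Even a simple screen like this changes your compliance posture: the check runs inside your environment, before any client data reaches a third party, and a blocked prompt becomes a teachable moment instead of a silent disclosure.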
Microsoft Copilot for Microsoft 365 — deployed through enterprise licensing — is one option that many RIAs are already moving toward, because it operates within your Microsoft 365 tenant with data residency guarantees. It's not the only option, but it illustrates what "compliant AI" looks like structurally: your data stays in your environment, under your agreements, not in a third party's training pipeline.
The Bigger Picture
The compliance problem is the immediate issue. The bigger picture is that AI is not going away from financial services — it's accelerating. Firms that get ahead of it by establishing a compliant, controlled deployment will be in a fundamentally better position than firms that react after a client complaint or a regulatory inquiry.
The advisors who win the next decade won't be the ones who avoided AI. They'll be the ones who deployed it carefully, early, and in a way that their clients can trust.
That starts with knowing what your staff is already using and fixing the exposure that exists right now.
Ridley Research helps financial advisors and RIA firms understand and deploy AI tools in compliance with applicable regulatory requirements. If you'd like help running an AI audit at your firm, reach out directly.