
You're Pasting Client Data Into ChatGPT and You Probably Have No Idea Where It Goes

If you're using the free version of ChatGPT to draft client emails or summarize contracts, your data is probably training OpenAI's models. Not hypothetically. By default.

Tags: data privacy, ChatGPT, opinion

If you're a small business owner using the free version of ChatGPT, or even the Plus plan, to draft client emails, summarize contracts, or brainstorm strategy, your data is probably being used to train OpenAI's models. Not hypothetically. By default. And most people have no idea because they never read the terms of service.

The short version: consumer AI tools like ChatGPT aren't private by default. The data you type in can be used to improve their models, which means your client's information becomes training material for a system serving millions of other users. If you're doing client work, that should make you uncomfortable.

The problem nobody's talking about at the 10-person company

Big companies have entire legal teams that vet AI tools before anyone touches them. Samsung banned ChatGPT internally after engineers accidentally leaked proprietary source code through it. Apple did the same. According to a 2024 Cisco survey, 48% of employees admitted to entering confidential company data into public AI tools.

But Samsung and Apple have the resources to catch that and respond. You probably don't.

If you're a bookkeeper, a marketing consultant, a therapist with a side practice, or a contractor managing client projects, you're making these decisions alone. Nobody's reviewing your workflow. Nobody's flagging that the client proposal you pasted into ChatGPT at 11pm contained revenue numbers, personal details, or proprietary strategy.

And nobody's gonna tell you when that becomes a problem... until it does.

What actually happens to your data

When you use ChatGPT through chat.openai.com on a Free or Plus plan, OpenAI's terms of service (updated March 2024) state that they can use your inputs and outputs to train their models unless you opt out. You can turn this off in Settings > Data Controls > "Improve the model for everyone." But it's on by default. Most people never touch it.

When you use the ChatGPT API, meaning you're building something or using a tool that connects to OpenAI programmatically, OpenAI says they do not use your data for training. That's a meaningful distinction, but it only matters if you know the difference and are deliberately choosing one over the other.
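To make that distinction concrete, here's a minimal sketch of what an API call looks like, using only Python's standard library. The endpoint and payload shape follow OpenAI's public chat completions API; the helper names (`build_request`, `ask`) and the model choice are my own illustrative assumptions, not a prescribed setup.

```python
import json
import os
import urllib.request

# OpenAI's standard chat completions endpoint. Traffic sent here falls under
# the API data policy, which (unlike the consumer ChatGPT tiers) is not used
# for model training by default.
API_URL = "https://api.openai.com/v1/chat/completions"

def build_request(prompt: str, model: str = "gpt-4o-mini") -> urllib.request.Request:
    """Build (but don't send) a chat completion request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            # Key is read from the environment; never hard-code it.
            "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}",
        },
        method="POST",
    )

def ask(prompt: str) -> str:
    """Send the request. Requires a real OPENAI_API_KEY to be set."""
    with urllib.request.urlopen(build_request(prompt)) as resp:
        data = json.loads(resp.read().decode("utf-8"))
    return data["choices"][0]["message"]["content"]
```

The point isn't that you should write this yourself; it's that a tool built this way routes your data under a different, stricter policy than the chat window does.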

OpenAI's ChatGPT Enterprise and Team plans also don't train on your data. But the Team plan starts at $25/user/month (billed annually), and Enterprise pricing isn't public. For a solo operator or a five-person shop, those costs add up fast.

So the realistic picture for most small businesses is this: you're on the free or Plus tier, the training toggle is on, and every client detail you type in is fair game.

What most people get wrong

The biggest misconception I hear is, "Well, nobody's actually reading my data." And that's technically true: there isn't a person at OpenAI scrolling through your client contracts. But that misses the point entirely.

The risk isn't a nosy employee. The risk is bigger than that, and it comes from a few different directions.

Your data gets baked into a model that generates responses for other people. Could fragments of your client's information surface elsewhere? OpenAI says it's unlikely due to how training works. "Unlikely" is not "impossible," and it's definitely not a promise you'd want to make to a client.

You may be violating agreements you already signed. If you have an NDA, a confidentiality clause, or you operate in a regulated industry (healthcare, legal, financial services), pasting client data into a consumer AI tool could put you in breach. According to the American Bar Association's 2023 survey, 15% of lawyers reported using AI tools at work, but most bar associations have issued guidance warning that confidentiality obligations still apply. So yeah, the tools are running ahead of the rules and you're the one left holding the bag.

You have no recovery path. Once data is submitted and potentially used in training, you can't claw it back. There's no "undo" for model training. OpenAI introduced a data export and deletion request process, but deletion from a trained model isn't the same as deleting a file from a folder. That data is out there, diffused into the weights of a neural network, and you just... have to live with that.

What actually works

You don't have to stop using AI. That would be silly; the productivity gains are real, and for a small operation they can be transformative. A 2023 Harvard Business School study found that consultants using GPT-4 completed tasks 25% faster and with 40% higher quality. The point isn't avoidance. It's intention.

Here's what I tell every client:

Turn off model training immediately. If you're on ChatGPT Free or Plus, go to Settings > Data Controls and disable "Improve the model for everyone." This takes ten seconds and costs you nothing. Seriously, go do it right now; I'll wait.

Strip client details before pasting. Change names, swap industries, remove dollar figures. You can still get 90% of the value from AI without giving it the 10% that's actually sensitive. Replace "Acme Corp's $340K Q3 revenue shortfall" with "a mid-size company's significant quarterly revenue decline." The AI doesn't care about the specifics. Your client does.
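If you do this a lot, it's worth scripting. Here's a rough sketch of that kind of scrubbing in Python; the patterns are illustrative assumptions (a name list, dollar figures, emails, phone numbers), not a complete anonymizer, and you should still eyeball the output before pasting.

```python
import re

# Illustrative patterns only. Real redaction needs a human review pass;
# regexes will miss context-dependent identifiers.
PATTERNS = [
    (re.compile(r"\$\s?\d[\d,.]*\s?(?:K|M|k|million|billion)?"), "[AMOUNT]"),
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b(?:\+?1[ .-]?)?\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"), "[PHONE]"),
]

def scrub(text: str, client_names: list[str]) -> str:
    """Replace known client names and obvious identifiers with placeholders."""
    for name in client_names:
        text = re.sub(re.escape(name), "[CLIENT]", text, flags=re.IGNORECASE)
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text
```

Run your draft through something like this before it ever touches a chat window, and the prompt that reaches the model describes the shape of the problem without naming the client.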

Use the API if you're technical enough (or hire someone to set it up; hi, that's literally what I do). OpenAI's API has clearer data policies. Alternatives like running local models through Ollama or LM Studio mean your data never leaves your machine. These options are free and surprisingly capable now; Meta's Llama 3 runs well on a decent laptop. I've been messing around with local models for months, and honestly the gap between these and the cloud versions keeps shrinking.
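For the curious, talking to a local model is barely any code. This sketch assumes Ollama's default setup (a local server on port 11434 with a `/api/generate` endpoint and a model such as `llama3` already pulled); nothing here touches the internet.

```python
import json
import urllib.request

# Ollama's default local endpoint. The request never leaves your machine.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(prompt: str, model: str = "llama3") -> dict:
    """Request body for Ollama's /api/generate; stream=False returns one JSON blob."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local(prompt: str) -> str:
    """Query a locally running Ollama server."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))["response"]
```

Swap the URL for a cloud endpoint and this same pattern becomes an API integration; keep it pointed at localhost and you've opted out of the data question entirely.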

Read the terms of service for every tool you use. I know. Nobody wants to. But you're running a business, and "I didn't know" isn't a defense your clients will accept. It takes 15 minutes. Set a calendar reminder to re-check quarterly, because these policies change constantly.

Document your AI policy, even if it's just a one-page internal doc. Write down what tools you use, what data you will and won't put into them, and what safeguards you've set up. If a client ever asks, and increasingly they will, you want an answer ready. Not a panic. An answer.

So where does that leave you?

This isn't about fear. It's about being the kind of business that takes client trust seriously when nobody's watching. The bar is incredibly low right now; most of your competitors aren't thinking about this at all. That means a small amount of diligence puts you way ahead.

The question isn't whether AI is safe. It's whether your specific use of it is safe enough for the promises you've made to the people who pay you.

Frequently asked questions

Is ChatGPT Plus safe to use for client work?

Not by default. The Plus plan still allows OpenAI to use your conversations for model training unless you manually disable it in Settings > Data Controls. Turning off that toggle makes it significantly safer, but you're still sending data to OpenAI's servers.

Does OpenAI sell my data to third parties?

OpenAI's privacy policy states they do not sell personal data. However, your inputs can still be used to train and improve their models on the consumer tiers, which means your data influences a product used by over 200 million weekly active users.

What's the safest way to use AI for sensitive business work?

The safest option is running a local model: tools like Ollama or LM Studio let you run models directly on your computer, with zero data leaving your machine. If you need GPT-4-level performance, using OpenAI's API, which doesn't use your inputs for training, is your next best option.

Can I get my data deleted from ChatGPT?

You can request data deletion through OpenAI's privacy portal. However, if your data was already used in a training cycle before your request, removing its influence from the model isn't straightforward. Prevention is significantly more effective than cleanup.


Jesse Reynolds

Founder of Helios AI Services, helping small businesses build AI tools they own and control. More about Jesse
