OpenAI published a plain-language guide on May 6 explaining how ChatGPT learns from information while trying to protect privacy.

For a Luxembourg SME, the useful part is not the reassurance. The useful part is the operational question it creates: if your team is already using ChatGPT, have you actually decided what they are allowed to put into it?

Because this is where a lot of businesses get the topic wrong. They ask, "Is ChatGPT safe?" as if there is one answer. There is not. The better question is: what kind of information are your people using, which privacy controls are switched on, and what rule would you expect them to follow tomorrow morning when they are under pressure?

That is the business decision.

What OpenAI clarified

OpenAI says the models behind ChatGPT are trained from a mix of sources: publicly available information, information accessed through partnerships, and information provided or generated by users, contractors, and researchers.

The company also says it applies safeguards to reduce personal information in training data, including its Privacy Filter, and that ChatGPT users have controls over whether their conversations help improve future models.

The practical controls matter. Users can turn off model improvement in ChatGPT settings. Temporary Chat does not appear in chat history, does not create memories, and is not used to improve models, although OpenAI says those conversations are retained for up to 30 days for safety purposes. Memory is optional and can be reviewed, edited, deleted, or turned off.

Those are useful product controls.

They are not a company policy.
[Image: decision matrix separating provider controls from company policy. Product controls answer what the tool can do; company policy answers what employees can do.]

The SME problem is not only technical

In a business of 20 to 150 people, AI use usually spreads before governance does.

One person uses ChatGPT to rewrite a proposal. Someone else uses it to summarize a client email. A manager uses it to prepare interview questions. A salesperson uses it to clean up CRM notes. None of this feels like a major IT rollout because nobody bought a new enterprise system. It feels like normal office work.

That is exactly why it needs a simple operating rule.

The risk is not just that a provider might train on something. The risk is that every employee invents their own rule in the moment. One person disables training. Another does not. One uses Temporary Chat. Another has Memory enabled. One pastes a public product description. Another pastes a contract clause, a client complaint, or an internal salary note.

You do not need a 40-page AI policy to fix this. But you do need a clear default.

Controls are like seatbelts

Provider privacy controls are a bit like seatbelts in a car. They matter. You want them. You should check that they exist.

But the seatbelt does not decide who is allowed to drive, where they are going, whether they can take passengers, or whether they should be driving in the first place.

That part is management.

For AI tools, the management question is simple: what information can enter the tool, under which settings, and for which tasks?

A practical policy for this quarter

A small business can start with three decisions.

First, define data categories in plain language.

[Image: green, yellow, and red data categories for an SME AI policy. A simple data-category rule is easier to operationalize than a vague instruction to be careful.]

Green data can go into approved AI tools: public website copy, public product descriptions, generic brainstorming, internal drafts with no personal or confidential information.

Yellow data needs judgment or manager approval: client-specific context, internal process documents, sales notes, or operational data that is not public but is not highly sensitive.

Red data does not go into general AI tools: passwords, IDs, health information, payroll, HR cases, legal disputes, confidential client material, unpublished financials, regulated data, or anything you would be uncomfortable seeing outside the company.
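
If it helps to make the rule concrete, the categories can even be written down as a shared reference your team can check. Here is a minimal sketch in Python; the DATA_POLICY structure and the category_of helper are invented for illustration, and the example entries simply mirror the plain-language rule above:

```python
# policy.py - illustrative sketch of the green/yellow/red data rule.
# Category contents mirror the plain-language policy above; adapt to your business.

DATA_POLICY = {
    "green": {  # allowed in approved AI tools
        "public website copy",
        "public product descriptions",
        "generic brainstorming",
        "internal drafts with no personal or confidential information",
    },
    "yellow": {  # needs judgment or manager approval
        "client-specific context",
        "internal process documents",
        "sales notes",
        "non-public operational data",
    },
    "red": {  # never goes into general-purpose AI tools
        "passwords", "IDs", "health information", "payroll",
        "HR cases", "legal disputes", "confidential client material",
        "unpublished financials", "regulated data",
    },
}

def category_of(data_type: str) -> str:
    """Return 'green', 'yellow', or 'red' for a known data type.

    Unknown types default to 'yellow': when in doubt, ask a manager.
    """
    for category, examples in DATA_POLICY.items():
        if data_type in examples:
            return category
    return "yellow"
```

The design choice that matters is the default: anything not explicitly green is treated as yellow, so the safe path under deadline pressure is to ask, not to paste.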

Second, define default settings.

If your team uses ChatGPT, decide whether model improvement should be turned off by default. Decide when Temporary Chat is required. Decide whether Memory is allowed for work use. Decide whether people can use personal accounts for company work, or whether approved work accounts are required.
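
A default only works if it is written down somewhere it can be checked. A minimal sketch of that idea, with the same caveat: the keys and values below are hypothetical labels for the ChatGPT controls discussed above, not an official OpenAI configuration format or API.

```python
# baseline.py - illustrative default-settings checklist for work use of ChatGPT.
# Keys are invented labels for the product controls discussed in this article.

WORK_DEFAULTS = {
    "model_improvement": "off",            # conversations not used for training
    "temporary_chat": "required_for_yellow_data",
    "memory": "off_for_work_use",
    "account": "company_workspace_only",   # no personal accounts for company work
}

def audit(actual: dict) -> list[str]:
    """List every setting that deviates from the agreed company baseline."""
    return [
        f"{key}: expected {expected!r}, found {actual.get(key)!r}"
        for key, expected in WORK_DEFAULTS.items()
        if actual.get(key) != expected
    ]
```

In practice the audit is a quarterly conversation rather than a script, but encoding the baseline forces each default to have one explicit, agreed value.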

Third, define the review habit.

Once per quarter, check the tools people are actually using, the settings they rely on, the type of work they are doing, and whether your rule still matches reality. AI governance fails when the document stays neat and the team quietly builds a different workflow underneath it.

My personal take, based on past experience

Privacy settings are useful, but they do not decide what your employees are allowed to paste into a tool.

I think a lot of SME leaders are waiting for the perfect legal answer before they write any rule at all. I understand why. Nobody wants to overstep on data protection, confidentiality, employment data, or client obligations.

But doing nothing is also a decision. It just pushes the decision down to every employee, every prompt, every deadline, and every moment where someone is trying to save 20 minutes.

The better move is to separate two things.

Provider controls tell you what the tool can do.

Your company policy tells your people what they are allowed to do.

You need both.

What to do now

This week, do not start by asking whether ChatGPT is safe in general.

Ask these four questions instead:

  1. What work are people already doing in ChatGPT or similar tools?
  2. What types of company or client information are they putting in?
  3. Which privacy settings are required for work use?
  4. What data is never allowed in a general-purpose AI tool?

If you can answer those four questions clearly, you are already ahead of most companies that are still treating AI use like an individual productivity preference.

And if you cannot answer them yet, that is the real story from OpenAI's privacy guide.

The tools are giving users more controls. Your business still needs to decide how those controls are used.