ChatGPT Team: Upgrade now to protect your data
How I learned to stop worrying and trust OpenAI's data security
“But what about data security?”
This is one of the most common questions I get from organizations as they consider adopting ChatGPT. Thanks to the rollout of ChatGPT Team in January, it’s now also one of the most easily answered.
In addition to other features, ChatGPT Team offers the following critical security advantages over regular ChatGPT:
OpenAI guarantees that they will not train new models on the conversations you have with ChatGPT under a Team account, which means your information cannot leak via GPT 4.5 or 5.0 repeating it to other users.
ChatGPT Team has centralized enterprise account management, which means your IT team can shut down people’s accounts if/when they leave the organization, hence removing their access to any sensitive data.
Moreover, ChatGPT Team costs only $5-10 more per month per user than individual accounts.
Three action steps you should take right now
If you read no further in this newsletter, I strongly recommend that most social impact organizations[1] take the following action steps immediately:
Create a ChatGPT Team account;
Pay for seats for anyone on your team who wants one;
Require that staff use their ChatGPT Team account for work projects, not a personal ChatGPT account.
I go into more detail below on how secure ChatGPT Team is and why you should trust OpenAI. But first, two other points:
Keep in mind that many knowledge workers report using ChatGPT for work without informing their employers (not surprising for a tool that both saves workers lots of time and improves the quality of their work). So even if you still have some lingering concerns about security, offering ChatGPT Team accounts is arguably likely to make your data safer rather than less safe.
Also, along with improved security, ChatGPT Team has a bunch of other useful features, like a ~25,000-word context window and easy shared access to GPTs (structured reusable prompts) amongst your team.
Ok, but how safe is ChatGPT Team, really?
My rule of thumb is: Putting sensitive data or strategy information into ChatGPT Team is about as secure as putting it into any other medium-sized enterprise productivity tool like Canva, Asana, Airtable, or Notion.
The three main data security risks you have to worry about from using generative AI tools are:
Your data being used for training new foundational models, which then reproduce some of that information when queried by other users.
A general security breach, in which a bug or a hack exposes swathes of your data to malicious actors, against the intentions of the company.
Malicious or careless ex-employees retaining sensitive data.
Note that #2 and #3 are not specific to generative AI tools!
Working backwards:
On #3 (ex-employees), ChatGPT Team reduces risk the same way every other enterprise productivity tool does — with centralized account management.
On #2 (general data breaches): My estimated likelihood of a data breach for a particular productivity tool depends on a whole bunch of factors, including the company’s track record, stability, and maturity, the certification processes it has cleared, etc. But roughly speaking, I’d put the likelihood of a data breach at OpenAI in the same order of magnitude as at many other mid-sized productivity companies. OpenAI can afford very good security experts and demonstrably has competent product and engineering leadership.
Most organizations I work with wouldn’t think twice about making a campaign strategy slide deck or report in Canva, or hosting a table of donor contact information in Airtable. Your data stored at those companies could be breached by a bug or a hacker, but the risks seem low. If the risk of a security breach at those companies does not keep you up at night, then I don’t think it should for OpenAI either.
(On the flip side, if you work with data so sensitive that you wouldn’t want it put into Asana or Notion, then I wouldn’t make an exception for ChatGPT Team. None of these mid-sized companies have security teams or track records on par with Google or Microsoft, which might not be impregnable, but are about as safe as anywhere you can store data.)
So that leaves #1: Training models on your data. OpenAI guarantees they won’t do it, and you should believe them.
Why you can believe that OpenAI won’t train on your ChatGPT Team data
Let’s rewind a bit to the dark ages of generative AI, lo those many 14 months ago.
When ChatGPT first rolled out, OpenAI had no idea how big of a hit it would be. They built it as an experiment to collect data on usage patterns, not because they knew it would become the fastest growing product rollout in the history of the world.
Hence, they failed to build even a modicum of data security measures into the tool, and were explicit that they would use your conversations with the tool as data to train future iterations. There are two reasons for this:
High-quality text-based conversational data is extremely valuable for training generative AI models.
Because large language models require so much “compute time” to run, every conversation you have with ChatGPT costs, on average, a few cents.
So if you want to offer a free chatbot to a lot of people, there’s a neat solution: Use the conversations they’re having with the chatbot as training data, and you can offset a significant chunk of the costs of the tool, or potentially even see it as a winning investment in building your training dataset.
As the old adage goes, if you’re not paying for the product, you are the product.
However, this model quickly brought ChatGPT a lot of negative press about how unsafe the tool was to use.
Meanwhile, competitors to ChatGPT and the OpenAI API tools, such as Writer.com and Cohere, were guaranteeing that they wouldn’t train on your conversations with their tools and positioning their data security as a competitive advantage.
Hence, data security was always a problem that OpenAI was going to have to fix. It’s critical to their business model: They need big corporations not only not to ban their staff from using the tool, but in fact to buy accounts![2]
The completely unsurprising solution is: If you don’t want to be the product, you need to pay for the product. I assume this is why ChatGPT Team costs a bit more per seat than individual ChatGPT accounts.
There are plenty of arenas in which you might not trust OpenAI — like how seriously their leadership really takes existential AI risks. But I do think that when they make guarantees about data security in public, you should believe them, because their business model depends on following through.
What about ChatGPT Enterprise? Should we have that instead?
Some of you might be in the weeds enough to ask me about ChatGPT Enterprise. In August 2023, OpenAI announced ChatGPT Enterprise, which solved a lot of security problems for enterprise customers:
Data security: No training on your data and conversations with the tool, and SOC-2 compliance. (SOC-2 is a cybersecurity framework that ensures that “third-party service providers store and process client data in a secure manner.” ChatGPT Team is not yet SOC-2 compliant, although OpenAI says it soon will be.)
Centralized account management: A normal ChatGPT+ account requires each individual employee to set up and pay for their own account, and to shut it down if necessary. (Also a security risk in the case of departing employees, as well as a significant logistical complication.) Just like with any cloud-based enterprise tool, like Google Workspace, ChatGPT Enterprise would allow your operations team to centrally set up, pay for, monitor usage of, and shut down ChatGPT seats.
Other useful features, like longer context windows and increased usage caps.
However, you can’t purchase ChatGPT Enterprise online, you have to talk to a salesperson. The sales team was so understaffed that even though I had several clients submit requests, none of them ever got a reply — so I still don’t even know the pricing model, just that it was expensive! I assume the sales team was prioritizing the largest potential customers first, which left many of the nonprofits and small companies I work with high and dry.
ChatGPT Team appears to simply be ChatGPT Enterprise-lite, which you can sign up for immediately with a credit card.
I think it’s still worth it for larger orgs to contact the ChatGPT Enterprise salespeople; eventually, when they get back in touch with you, you can ask whether you should consider upgrading from ChatGPT Team. But that is obviously no reason not to upgrade to ChatGPT Team right now.
[1] If you deal with health care data or other highly sensitive PII, then this might not apply to you and you should listen to your own security experts.
In my experience, activists often default to assuming that their security and data privacy needs are unique and special and very high. They are generally not, at least not for organizations that operate in the Global North and don’t engage in civil disobedience. Most of the world’s large corporations deal with very, very sensitive data in one form or another. Think of it this way: if an enterprise tool is secure enough for an evil company that is being sued by activists and has the $$ to pay for whatever security it wants, it’s probably also secure enough for the activists themselves.
[2] I was fortunate enough to have an email conversation about ChatGPT Enterprise with an OpenAI sales rep: pricing starts around $100K/year.