OpenAI has emerged as one of the government’s leading providers of artificial intelligence. According to the company, 37 federal agencies now have access to its tech, and about 80,000 government employees use it regularly.
This makes OpenAI a frontrunner in the race between the top AI companies to get their tech in front of government users. These workers are just a small fraction of these frontier labs’ total customer bases, but they’re symbolically valuable. Wooing the U.S. government is important enough to these companies that they’re offering their technology at a steep discount. And, in another bid to speed up the administration’s use of the tech, several of those labs—OpenAI, Perplexity, and Google—have now earned a fast-track to offer their AI on a government-approved cloud.
Of course, working with the U.S. government brings a host of logistical challenges. Between arduous cybersecurity requirements and arcane procurement rules, getting technology to federal agencies can be a real chore. Federal agencies also operate on far tighter budgets than the commercial sector, and are slow to adapt to new tech, which is why OpenAI, like other companies, is offering them access to ChatGPT for basically nothing.
Government contracting can also put tech companies under a microscope. Working for government agencies, particularly more polarizing ones (like the Department of Homeland Security), has become politically toxic—not just to the broader public, but also to tech workers. And as Anthropic is learning in real time, the government can be a troublesome customer. The Pentagon, which has grown highly reliant on Claude, is now threatening to deem Anthropic a “supply chain risk” should the company not accede to its demands for essentially unlimited usage terms.
Felipe Millon, who leads government sales at OpenAI, spoke with Fast Company about why the AI giant wants to work with the U.S. government, and its progress in getting federal employees to use its tech. This interview has been edited for length and clarity.
I can’t imagine that government sales are determinative for the success of OpenAI’s business model. Why do this? Why work with the government, if it’s so hard and there are all these extra complications involved?
I joined two years ago as our first government hire, before we had anything here. It is absolutely very hard. It is also—I won’t say not material—but we don’t ever expect government sales to be a very large percentage of OpenAI’s revenue. If you want to think of it purely from a financial perspective, the reason is very mission-aligned, right? OpenAI’s mission, as a public benefit corporation now, is to ensure that this technology called AGI, artificial general intelligence, benefits all of humanity.
And what we have discussed internally with our leadership team is that . . . creating a technology, AGI, that is better than humans at most economically valuable tasks and deploying that to the world will not happen without the U.S. government being involved. They can’t understand it unless they’re users of the technology, right?
The best way to understand what’s happening in AI is to be a user and to see it for yourself, whether that’s a chatbot, coding, or other tools. We’re ready to start seeing where it can add value. And so part of our mission is really to ensure that the U.S. government understands what is coming by being able to unlock that for government use cases. If our mission is to ensure AGI benefits all humanity, one of the ways that [humanity] is benefited is by the delivery of citizen services—whether it be someone who is reliant on food stamps or someone who is getting housing support from Housing and Urban Development, or whether they are paying their taxes in an effective way with the IRS.
So you’re now able to host your own AI as a cloud service. Why does that matter, and how does it impact government users?
With the advent of cloud computing, a lot of government tools have moved to the cloud and off government-hosted computers. Previously, government [agencies] would host their own mainframes, their servers, and their own personal data in their own data centers. . . . Business models emerged with cloud computing, where large hyperscalers—mainly Amazon, Microsoft, Google, and Oracle—said, “Hey, we can run this at scale, and you can just use this capacity from us on demand as a service.” So rather than owning your server, you get compute and storage and things like that . . . and you pay for it.
We use cloud-based services to host our tools, whether that be the models we operate and provide in an API service to developers, or ChatGPT Enterprise. We would like to use that enterprise version of ChatGPT at, for example, the Treasury or at HHS or at the State Department. But in order to do so, we need to be compliant with these cybersecurity rules. This accreditation means that government agencies are now allowed to use our tools with real data and are able to really start getting value.
I understand that you don’t work on the defense side of OpenAI’s government business. Obviously, we’ve seen in the news that there can be tensions between what the government wants to do and what an AI company—or any software company selling to the government—might be interested in or comfortable with. Can you talk a little bit about weighing that when you’re thinking about selling to the civilian side of the government?
I’m not going to cover a lot of the national security side that is outside of my specific purview. I focus on the civilian and state and local side. On the civilian side, we rarely encounter these things; it’s rare that they come up at places like the Treasury. If they do come up, really, I think it’s just a good faith discussion and negotiation with the government.
I’m wondering about the penetration of OpenAI technology in the government right now, particularly after the OneGov deal, which saw you offer ChatGPT to the government at a major discount.
We have a commercial tool that is available . . . and anyone can download it on their phone. We saw that over 100,000 people had a government email address in ChatGPT, before we even launched an enterprise product. We also have a relationship with Microsoft. It’s a very complicated relationship, but they . . . deploy their own product called Azure OpenAI, which is our model hosted and run by Microsoft. But that’s a Microsoft product, and that product has been used in government for some time, because Microsoft has a very large and established government business. We want to work directly with the government as well. There are two main barriers that have blocked government adoption of AI: authorization, which we’re just getting with FedRAMP, and then the other one is procurement and budgeting.
HHS, for example, is a very large user of ChatGPT Enterprise. They have tens of thousands of users. The U.S. Treasury also has tens of thousands of users through ChatGPT Enterprise. I would say around 50 or so federal agencies have taken advantage of our OneGov deal and have used it. It has been painful because they have to provide agency-level authorization. So their authorizing officials and their security teams have to do their own cybersecurity review—either that or they don’t use the tool. We actually have our only on-premises deployment with Los Alamos, which was kind of a separate piece of work that we had done. The majority of the national labs are enterprise customers.