On Thursday, OpenAI announced a plugin system for its ChatGPT AI assistant. The plugins give ChatGPT the ability to interact with the wider world through the Internet, including booking flights, ordering groceries, browsing the web, and more. Plugins are bits of code that tell ChatGPT how to use an external resource on the Internet.
Basically, if a developer wants to give ChatGPT the ability to access any network service (for example: “looking up current stock prices”) or perform any task controlled by a network service (for example: “ordering pizza through the Internet”), it is now possible, provided it doesn’t go against OpenAI’s rules.
Conventionally, most large language models (LLMs) like ChatGPT have been constrained in a bubble, so to speak, only able to interact with the world through text conversations with a user. As OpenAI writes in its introductory blog post on ChatGPT plugins, “The only thing language models can do out-of-the-box is emit text.”
Microsoft's Bing Chat has pushed this paradigm further by letting its underlying AI model search the web for more recent information, but so far ChatGPT has remained isolated from the wider world. While closed off in this way, ChatGPT can only draw on data from its training set (limited to 2021 and earlier) and any information a user provides during the conversation. ChatGPT is also prone to making factual errors (what AI researchers call “hallucinations”).
To get around these limitations, OpenAI has popped the bubble and created a ChatGPT plugin interface (what OpenAI calls ChatGPT’s “eyes and ears”) that allows developers to create new components that “plug in” to ChatGPT and allow the AI model to interact with other services on the Internet. These services can perform calculations and reference factual information to reduce hallucinations, and they can also potentially interact with any other software service on the Internet—if developers create a plugin for that task.
What kind of plugins are we talking about?
In the case of ChatGPT, OpenAI will allow users to select from a list of plugins before starting a ChatGPT session. They present themselves almost like apps in an app store, each plugin having its own icon and description.
OpenAI says that a first round of plugins has been created by the following companies:
- Expedia (for trip planning)
- FiscalNote (for real-time legal, political, and regulatory data)
- Instacart (for grocery ordering)
- Kayak (searching for flights and rental cars)
- Klarna (for price-comparison shopping)
- Milo (an AI-powered parent assistant)
- OpenTable (for restaurant recommendations and reservations)
- Shopify (for shopping on that site)
- Slack (for communications)
- Speak (for AI-powered language tutoring)
- Wolfram (for computation and real-time data)
- Zapier (an automation platform)
In particular, the Zapier plugin seems especially powerful since it grants ChatGPT access to an existing software automation system, or as Zapier puts it: “You can ask ChatGPT to execute any of Zapier’s 50,000 actions (including search, update, and write) with Zapier’s 5,000+ supported apps, turning chat into action. It can write an email, then send it for you. Or find contacts in a CRM, then update them directly. Or add rows to a spreadsheet, then send them as a Slack message. The possibilities are endless.”
OpenAI is also hosting three plugins itself, a web browser (that can grab info from the web in a manner similar to Bing Chat), a code interpreter for executing Python programs (in a sandbox), and a retrieval tool that allows access to “personal or organizational” information sources hosted elsewhere (basically, fetching information from documents).
While OpenAI calls the plugin selection process a “plugin store,” the company has not announced plans to sell individual plugins. Still, the “store” label suggests that paid plugins could arrive at some point.
Already, developers with early access have been rapidly prototyping plugins for ChatGPT. Compared to other approaches to plugin development, the way ChatGPT plugins work is notable. Instead of writing arcane “glue code” to interface an API with ChatGPT, a developer essentially “tells” ChatGPT how to use their service in natural language, and ChatGPT handles the rest.
Beyond that, developers have been using ChatGPT and GPT-4 to write ChatGPT plugin manifests (a manifest is “a machine-readable description of the plugin’s capabilities and how to invoke them,” according to OpenAI), further simplifying the plugin development process.
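To make the manifest idea concrete, here is a minimal sketch of what one might look like, built as a Python dictionary. The field names follow those OpenAI has documented for its manifest schema, but the service itself (a “TODO list” API), its URLs, and its descriptions are invented for illustration:

```python
import json

# A minimal plugin manifest for a hypothetical "TODO list" service.
# Field names follow OpenAI's published manifest schema; the URLs,
# names, and descriptions here are invented for illustration.
manifest = {
    "schema_version": "v1",
    "name_for_human": "TODO List",
    "name_for_model": "todo",
    "description_for_human": "Manage your TODO list from chat.",
    # The model-facing description is plain English -- this is the
    # natural-language "telling" described above, which the model
    # reads to decide when and how to call the service.
    "description_for_model": (
        "Plugin for managing a TODO list. Use it to add, remove, "
        "and view the user's TODOs."
    ),
    "auth": {"type": "none"},
    "api": {
        "type": "openapi",
        # Hypothetical endpoint serving an OpenAPI spec of the service
        "url": "https://example.com/openapi.yaml",
    },
    "logo_url": "https://example.com/logo.png",
    "contact_email": "support@example.com",
    "legal_info_url": "https://example.com/legal",
}

print(json.dumps(manifest, indent=2))
```

Notably, the heavy lifting lives in the two plain-English description fields and the linked API spec, which is part of why a language model can plausibly draft such a manifest itself.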
This kind of self-compounding development capability feels like uncharted territory for some programmers. In one case, a Twitter user named Rohit worried aloud, “Guys. Existential crisis. Did OpenAI just finish software? What’s there left to do but clean-up and sweep?”
Sam Altman, the CEO of OpenAI, replied, “No.”
Is it safe?
Given that OpenAI has previously tested its AI models (such as GPT-4) to see if they have the agency to modify, improve, and spread themselves among the world’s computer systems, it’s unsurprising that OpenAI spends almost half of its ChatGPT plugins blog post talking about safety and impacts. “Plugins will likely have wide-ranging societal implications,” the company casually mentions in one section about potential impacts on jobs.
Beyond jobs, a recurring fear among some AI researchers involves granting an advanced AI model access to other systems, where it can potentially do harm. The AI system need not be “conscious” or “sentient,” just driven to complete a certain task it deems necessary. In this case with plugins, it seems like OpenAI is doing exactly that.
OpenAI appears to be aware of the risks, frequently referencing its GPT-4 system card that describes the kind of worst-case-scenario testing we described in a previous article. Beyond hypothetical doomsday scenarios, AI-powered harms could come in the form of accelerated versions of current online dangers, such as automated phishing rings, disinformation campaigns, astroturfing, or personal attacks.
“There’s a risk that plugins could increase safety challenges by taking harmful or unintended actions, increasing the capabilities of bad actors who would defraud, mislead, or abuse others,” writes OpenAI. “By increasing the range of possible applications, plugins may raise the risk of negative consequences from mistaken or misaligned actions taken by the model in new domains. From day one, these factors have guided the development of our plugin platform, and we have implemented several safeguards.”
One of these safeguards appears to be a gradual rollout of plugin access. And while ChatGPT plugin use falls under OpenAI's blanket usage policy, which forbids generating misinformation and other disallowed content, the policy also specifies rules just for plugins, such as a ban on automating conversations with real people. Plugins that deliver content generated by ChatGPT (such as emails) must also disclose that the content was AI-generated.
Individual OpenAI plugins come with their own safety provisions, including the ability to opt out of ChatGPT web crawling with a robots.txt file and the fact that the Python code interpreter runs in a “firewalled” sandbox. But will similar restrictions apply to third-party plugins that can execute code? These are questions that OpenAI and developers will need to work out together in the weeks and months ahead.
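The robots.txt opt-out works like any other crawler exclusion. As a sketch, the snippet below blocks a user-agent token for ChatGPT's browsing plugin while leaving other crawlers alone, then verifies the rules with Python's standard-library robots.txt parser. The “ChatGPT-User” token is an assumption here, based on the agent name OpenAI has mentioned for the browsing plugin:

```python
from urllib.robotparser import RobotFileParser

# A robots.txt that opts the whole site out of ChatGPT's browsing
# plugin while allowing everything else. "ChatGPT-User" is assumed
# to be the plugin's user-agent token.
robots_txt = """\
User-agent: ChatGPT-User
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# The ChatGPT agent is blocked; a generic crawler is not.
print(parser.can_fetch("ChatGPT-User", "https://example.com/article"))
print(parser.can_fetch("Googlebot", "https://example.com/article"))
```

Of course, this mechanism is only advisory: it relies on OpenAI's crawler voluntarily honoring the file, just as conventional search crawlers do.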
At the moment, ChatGPT plugins are available only as an alpha to select developers and those approved from a waitlist. “While we will initially prioritize a small number of developers and ChatGPT Plus users, we plan to roll out larger-scale access over time,” OpenAI writes.