OpenAI CEO Sam Altman delivering the keynote at the company's first OpenAI DevDay
News

OpenAI Lets Everyone Build Their Own ChatGPT and Turbo-Charges GPT-4

By David Barry
OpenAI announced a number of updates including the ability for anyone to build their own customized GPT at its inaugural DevDay conference.

OpenAI released the first version of GPT-4 in March of this year and made it generally available to developers in July. At the time, the company was clear that this was only a stepping-stone.

At its first developer conference, held yesterday in San Francisco, we saw exactly what that means.

OpenAI Developer Conference

At OpenAI DevDay, the company unveiled a number of updates including the ability to create customized versions of ChatGPT, the introduction of a cheaper GPT-4 Turbo, a new "copyright shield" to protect customers from legal action and the imminent launch of a digital store.

"We will be able to do more, to create more, and to have more," said OpenAI chief executive Sam Altman at the conference. "As intelligence is integrated everywhere, we will all have superpowers on demand."

According to Altman, roughly two million developers are currently building on the OpenAI platform, with 100 million people using it weekly.

 

Related Article: Where Does a Databricks-Microsoft AI Alliance Leave OpenAI?

GPTs for Everyone 

The first and potentially biggest announcement out of the conference is the new ability to create custom "agents, called GPTs, capable of handling specific tasks such as business negotiation advice, tips on removing stains from laundry, help with homework, and tech support," according to a company blog post.

The GPTs can connect to databases that feed them the data needed to complete tasks. What’s more, the company stated, anyone will be able to build a GPT, as no coding is required.

The post continues: “Anyone can easily build their own GPT — no coding is required. You can make them for yourself, just for your company’s internal use, or for everyone. Creating one is as easy as starting a conversation, giving it instructions and extra knowledge, and picking what it can do, like searching the web, making images or analyzing data." 

While this might appear to be aimed at a wider consumer-tech audience, especially given some of the suggested use cases such as "Laundry Buddy" or "Tech Advisor," the real value will be in the enterprise.

The company noted it saw high demand for a way to create custom ChatGPTs for specific use cases, departments or proprietary datasets following the release of its Enterprise plan in late August.

Enterprise users have access to the new functionality starting Nov. 8, with the ability to design workplace GPTs that can be securely deployed within their own workspaces. The company was careful to point out that any conversations generated by these GPTs would not be used in developing or improving other models.

“As always, you are in control of your data with ChatGPT. Your chats with GPTs are not shared with builders. If a GPT uses third-party APIs, you choose whether data can be sent to that API,” the blog about the release explained.

"When builders customize their own GPT with actions or knowledge, the builder can choose if user chats with that GPT can be used to improve and train our models.”

OpenAI also said it has created new systems to review GPTs against its existing usage policies.

Given that OpenAI is also offering "actions," which work like plugins and expose one or more external APIs to a GPT so it can pull in external data, query databases or even interact with the real world, it would be wise to conduct an enhanced privacy or data-use review before any internal launch.
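To make the actions mechanism concrete: in the builder, an action is described by an OpenAPI schema telling the GPT which endpoints it may call. The fragment below is purely illustrative; the API name, URL and operation are hypothetical, not from OpenAI's documentation.

```python
# Hypothetical OpenAPI fragment a builder might supply as a GPT "action"
# to expose one read-only endpoint. All names and URLs are invented.
action_schema = {
    "openapi": "3.1.0",
    "info": {"title": "Stain Lookup API", "version": "1.0.0"},
    "servers": [{"url": "https://api.example.com"}],
    "paths": {
        "/stains/{fabric}": {
            "get": {
                # operationId is how the model refers to this call
                "operationId": "getStainAdvice",
                "parameters": [
                    {"name": "fabric", "in": "path", "required": True,
                     "schema": {"type": "string"}},
                ],
            }
        }
    },
}
```

Because each action like this can ship user data to a third party, it is exactly the surface an enterprise privacy review should examine.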

Related Article: Microsoft 365 Copilot Is Now on General Release. Are Your Permissions in Order?

'Turbo'-Charging GPT-4

The other notable announcement was the unveiling of GPT-4 Turbo. Turbo, Altman explained in his keynote, is better than GPT-4 and has knowledge of the world up to April 2023.

"It's a smarter model," Altman said. "We decided to prioritize price first, but we're going to work on speed next."

Turbo comes with a 128k context window, which means it can fit the equivalent of more than 300 pages of text in a single prompt.

To put this in perspective: an LLM's context window is the number of tokens the model can take as input when generating a response. GPT-3's window was 2k tokens, and GPT-4's tops out at 32k. Clearly, Turbo is a major advance.
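The "300 pages" figure checks out with rough, assumed conversion ratios (about 0.75 English words per token and roughly 320 words per printed page; actual ratios vary with the text):

```python
# Back-of-the-envelope check of the "more than 300 pages" claim.
context_tokens = 128_000           # GPT-4 Turbo context window
words = context_tokens * 0.75      # assumed ~0.75 words per token
pages = words / 320                # assumed ~320 words per page
print(round(pages))                # prints 300
```

By the same arithmetic, GPT-4's 32k window holds roughly 75 such pages.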


Along with the larger context window, Turbo comes at a lower price: input tokens will be three times cheaper and output tokens two times cheaper than GPT-4's.
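In dollar terms, the launch prices reported around DevDay (per one million tokens) line up with those multiples. The figures below are the publicly reported list prices at the time, quoted here for the arithmetic:

```python
# Reported launch prices in USD per 1M tokens.
gpt4  = {"input": 30, "output": 60}   # GPT-4 (8k)
turbo = {"input": 10, "output": 30}   # GPT-4 Turbo
print(gpt4["input"]  // turbo["input"])   # prints 3: input is 3x cheaper
print(gpt4["output"] // turbo["output"])  # prints 2: output is 2x cheaper
```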

“GPT-4 Turbo performs better than our previous models on tasks that require the careful following of instructions, such as generating specific formats (e.g., 'always respond in XML'),” according to the blog.
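The instruction-following claim above is about prompts that pin the output format. A sketch of what such a request might look like, using the chat-completions payload shape current at the time (the payload is built but not sent; "gpt-4-1106-preview" was the model identifier announced at DevDay):

```python
# Illustrative chat-completions payload (no network call is made).
# A system message pins the response format, the behavior the blog
# says Turbo follows more reliably than earlier models.
payload = {
    "model": "gpt-4-1106-preview",
    "messages": [
        {"role": "system", "content": "Always respond in XML."},
        {"role": "user", "content": "List two DevDay announcements."},
    ],
}
```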

GPT-3.5 users will also get an upgrade in the form of GPT-3.5 Turbo, which supports a 16k context window by default.

At the end of this month, developers will be able to build and share GPTs publicly through a GPT Store, featuring creations by verified builders. Once in the store, GPTs become searchable and may climb the leaderboards.

Getting Legal With Copyright Shield

Finally, OpenAI is joining the ranks of companies that are prepared to pay the legal costs of customers accused of copyright infringement through the use of its AI models.

This commitment follows similar guarantees from the likes of Microsoft and Google, which announced such moves earlier this year to encourage adoption of their LLM and generative AI offerings.

“OpenAI is committed to protecting our customers with built-in copyright safeguards in our systems,” the company stated in a blog post on Monday.

“Today, we’re going one step further and introducing Copyright Shield — we will now step in and defend our customers, and pay the costs incurred, if you face legal claims around copyright infringement.”

The dust is still settling on the developer conference, so we'll be following the fallout and responses to the announcements in the days and weeks to come.

Have a tip to share with our editorial team? Drop us a line:

About the Author
David Barry

David is a European-based journalist of 35 years who has spent the last 15 following the development of workplace technologies, from the early days of document management, enterprise content management and content services. Now, with the development of new remote and hybrid work models, he covers the evolution of technologies that enable collaboration, communications and work, and has recently spent a great deal of time exploring the far reaches of AI, generative AI and general AI.

Main image: OpenAI