OpenAI introduced artificial intelligence (AI) to the general public, and its latest keynote emphasized its influence on the development of this new technology. Since ChatGPT was unveiled in November 2022, every major tech company has announced its own AI initiative, but OpenAI remains the trendsetter, for now. In this blog, we break down some of the announcements made at the first DevDay conference and how they may influence your business' AI strategy.
OpenAI Conference Takeaways: GenAI's Potential is Expanding
The OpenAI Conference Takeaways reflect not only the current state of ChatGPT and other products, but also a roadmap for AI's future and potential. The keynote opened with OpenAI CEO Sam Altman addressing some limitations of the company's models. Since ChatGPT launched, there has been excitement about generative artificial intelligence (GenAI) and the potential of large language models (LLMs), but there have also been concerns about their limitations. Some of these limitations are general problems with AI, while others are specific to OpenAI's approach. This blog post delves into the top OpenAI Conference Takeaways, offering insights into how these developments will shape the tech landscape and your AI strategy.
Takeaway #1: You Can Customize GenAI for Your Business
One of the most significant announcements during OpenAI’s keynote was a new offering from the company called “custom models.”
In short, this is the functionality everyone wants for their business' AI strategy. Custom models allow individual businesses to modify the model training process and tailor it to specific domains. This means any business can take the base infrastructure of ChatGPT's GenAI capabilities and map it to their proprietary database of knowledge. Successful implementation of this functionality would mitigate problems like hallucinations. It would also ensure that the responses from an enterprise AI are consistent with knowledge developed internally at your business.
While this sounds exciting, Altman acknowledged that OpenAI's capacity to offer this service is limited.
“We won’t be able to do this with many companies to start, it will take a lot of work, and […] initially it won’t be cheap,” said Altman.
This is consistent with Shelf's analysis that your own data and content are the biggest hurdle to an effective AI implementation for your business. Resolving challenges such as duplicates, outdated information, and missing metadata is vital to avoiding risk and preparing for the future of AI, and it takes expertise to do in a timely, efficient manner. Shelf specializes in empowering your organization with the knowledge management needed to take advantage of AI's potential, a crucial step in integrating AI into your business. If this is a problem you've run into in your own AI strategy, chat with our team about how we can help.
Takeaway #2: More Context Means More Potential for GenAI
Altman began the keynote's focus on improvements by announcing a new model called GPT-4 Turbo. Its most significant advancement is expanding the context limit from 8,000 tokens to 128,000 tokens. If you're a casual user of GPT products, these terms may not be familiar to you.
"Token count" is sometimes conflated with "word count," but the comparison is imprecise. A token is a chunk of text the model treats as a unit of meaning; it can be as short as a single character or punctuation mark, or as long as a full word. That's why your token count is typically higher than your word count. The Oxford comma is a good example of how a single character can change meaning dramatically: "Emily likes cooking, her family, and her dog" tells a completely different story from "Emily likes cooking her family and her dog."
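To make the token-versus-word distinction concrete, here is a minimal sketch using the common rule of thumb of roughly four characters per token. This heuristic is an illustration only; OpenAI's real tokenizers (available via its tiktoken library) split text into subword units, so actual counts vary by model and content.

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate using the ~4-characters-per-token heuristic.

    Real GPT tokenizers split text into subword units, so this is only
    an approximation for budgeting purposes.
    """
    return max(1, round(len(text) / 4))

sentence = "Emily likes cooking, her family, and her dog."
word_count = len(sentence.split())
token_count = estimate_tokens(sentence)
print(f"{word_count} words, ~{token_count} tokens")  # 8 words, ~11 tokens
```

Even for this short sentence, the estimated token count exceeds the word count, because punctuation and word fragments consume tokens of their own.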
Token counts grow quickly, which has limited some potential use cases of GPT. The "context" given to GPT provides guidelines for how to approach your prompt, yielding more efficient outputs without repeated back-and-forth dialogue. For example, you could provide the context "you're a college-level professor providing feedback on essays" before pasting in an essay as a prompt. This is a short, simple context, but other use cases require far more tokens. Context could be used to apply a style guide for written materials, but style guides are often very long; if your style guide has more than 20 rules, you likely hit the previous 8,000-token limit. The new 128,000-token limit makes GPT practical for complex contexts and spares users the confusing "maximum context length" errors they used to run into.
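A quick sketch shows why the expanded window matters for a use case like the style guide above. The four-characters-per-token estimate is a rough heuristic, not OpenAI's actual tokenizer, and the example prompt lengths are invented for illustration:

```python
CONTEXT_LIMIT_OLD = 8_000      # previous GPT-4 context window (tokens)
CONTEXT_LIMIT_TURBO = 128_000  # GPT-4 Turbo context window (tokens)

def estimate_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters per token."""
    return max(1, round(len(text) / 4))

def fits_in_context(context: str, prompt: str, limit: int) -> bool:
    """Check whether the context plus the user's prompt fits the window."""
    return estimate_tokens(context) + estimate_tokens(prompt) <= limit

# A long style guide: ~48,000 characters, or roughly 12,000 tokens.
style_guide = "Rule: always use active voice in blog copy. " * 1_100
essay = "Draft blog post text to be reviewed against the style guide."

print(fits_in_context(style_guide, essay, CONTEXT_LIMIT_OLD))    # False
print(fits_in_context(style_guide, essay, CONTEXT_LIMIT_TURBO))  # True
```

The same style guide that overflows the old 8,000-token window fits comfortably within 128,000 tokens, which is exactly the kind of context-heavy workflow the new limit unlocks.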
Takeaway #3: AI Continues to Gain New Features and Modalities
Among the key OpenAI Conference Takeaways was the announcement of a number of improvements to ChatGPT, including an updated knowledge cutoff, image-based GenAI functionality, and text-to-speech generation.
The most significant knowledge update is GPT's expanded access to the world's knowledge. We've previously written about how LLMs are highly reliant on their inputs for their outputs. Previously, all GPT models had a knowledge cutoff of September 2021: if you asked a GPT model who won an election held in 2022, it simply could not tell you. The new cutoff date is April 2023. Altman said the company would work to make the cutoff more recent in the future, but didn't provide a clear timeline for when that would happen.
Users can also upload documents to GPT models to inform their knowledge. This gives users without an enterprise subscription a way to feed in their business' knowledge and influence responses. Of course, this functionality has its limits. You can't simply dump thousands of documents into ChatGPT and expect it to remember that information in perpetuity; you would need an AI-empowered knowledge management tool to achieve that end.
In addition to more avenues for knowledge input, GPT models gained new modalities, including image generation through DALL-E 3, a new text-to-speech API, and Whisper v3, OpenAI's open-source speech-recognition model. DALL-E 3 has been available since October 2023, but the text-to-speech generation is a new feature. Generated audio can use any of six preset voices, one of which Altman demonstrated on stage.
Many of these features and improvements address the concerns raised when GenAI was first gaining prominence almost a year ago. It was true that AI was limited by its access to knowledge about the world; it couldn't accept documents, generate images, or generate audio. These limitations closed the door on some potential innovations, but the environment has changed. We've always believed the functionality of AI would develop rapidly, which is why we encourage businesses to formulate an AI strategy if they haven't already. By the time your business is ready for AI, its functionality will likely have expanded again.
The Business of AI is Expanding
Many of the OpenAI Conference Takeaways focused on developers, the professionals utilizing OpenAI's API to build their own solutions. While some of these advancements are fairly "in the weeds" relative to most businesses' interest in AI, they are worth mentioning for their influence on the market for AI solutions.
Takeaway #4: OpenAI is Making AI Cheaper to Use
One of the big questions about the future of AI is how much it will cost to use and how much companies anticipate making from it. We've written before about the potential market for AI exceeding $1 trillion within the next 10 years. Business reporting fixated on Microsoft's pricing for its AI solutions specifically because it seemed significantly higher than the market anticipated.
Altman announced that GPT-4 Turbo would be cheaper than previous GPT models: 1 cent per 1,000 input tokens and 3 cents per 1,000 output tokens, a blended rate Altman said is 2.75 times cheaper than before.
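At those rates, estimating what a request costs is simple arithmetic. A minimal sketch using the prices quoted in the keynote (the example token counts are hypothetical):

```python
INPUT_RATE = 0.01 / 1_000   # $0.01 per 1,000 input tokens
OUTPUT_RATE = 0.03 / 1_000  # $0.03 per 1,000 output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated GPT-4 Turbo cost in dollars for a single API request."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# e.g. summarizing a 10,000-token document into a 500-token answer
print(f"${request_cost(10_000, 500):.3f}")  # $0.115
```

Note the asymmetry: output tokens cost three times as much as input tokens, so verbose responses drive cost more than long prompts do.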
Altman added that OpenAI had focused on "price" to bring down the cost of using GPT products, and would shift its focus to "speed" for future iterations. This approach mirrors other disruptive technologies, where the market leader first makes the product accessible to quickly dominate the emerging market, then develops a superior product to maintain its position.
Takeaway #5: Microsoft Maintains Its Partnership with OpenAI
One of the guest speakers at OpenAI’s keynote was Microsoft CEO Satya Nadella. Altman and Nadella answered two questions. What does Microsoft think about its partnership with OpenAI? And how does Nadella view the future of AI? These questions prompted Nadella to affirm Microsoft’s commitment to its partnership with OpenAI.
"Ultimately it's about being able to get the benefits of AI broadly disseminated to everyone. I think that's going to be the goal for us," said Nadella. His comments focused on Microsoft's capacity to build the infrastructure required to support the computational demands of artificial intelligence.
Microsoft originally invested $1 billion in OpenAI back in 2019, before the company had a product available to the public. After the successful launch of ChatGPT in November 2022, Microsoft announced an additional $10 billion investment earlier this year. OpenAI describes itself as a "capped-profit" company, meaning returns to investors are capped and profits beyond that cap flow back into the organization and its products. It has been speculated that Microsoft receives a share of OpenAI's profits, but this has not been confirmed. Given these investments, it was unlikely Microsoft planned to launch its own competitor to OpenAI, and Nadella's appearance and comments at the event put that speculation to rest. His appearance counts as one of the more impactful OpenAI Conference Takeaways.
Takeaway #6: GPT Marketplace Expands Business Opportunities
Altman announced OpenAI would launch a GPT Marketplace in tandem with a new feature called "GPTs." Currently, the available GPT products are all-encompassing: they can do effectively anything, but that vast breadth can be a detriment at times. "GPTs" allow users to create custom versions of ChatGPT with their own instructions, expanded knowledge, and actions.
For example, one custom GPT called "Game Time GPT" is specialized to concisely explain the rules of various board games. While some custom GPTs already exist, Altman said new custom GPTs could be created through the conversational inputs of GPT's interface. This means businesses or entrepreneurial individuals can create custom GPTs without knowing how to code. Altman performed a live demo in which he created a custom GPT focused on providing advice for start-up founders, all through the conversational prompts users of ChatGPT are already familiar with.
Custom GPTs can be sold on OpenAI's GPT Marketplace, but the details of profit sharing were not shared during the keynote. Altman said creators can keep their custom GPTs private, share them publicly, or restrict access to users within their enterprise account. Custom GPTs could become something like Apple's App Store for new GenAI use cases, or an extension of businesses' desire for custom solutions.
OpenAI’s Influence on AI Strategy
This year's event was a treasure trove of information, and we're excited by the OpenAI Conference Takeaways showing promising growth of the AI market. New advancements in GenAI continue to roll out, giving businesses more opportunities to take advantage of its potential or to benefit from creating AI solutions. Fully customized solutions for businesses remain a focus for AI companies, but the viability of these custom solutions relies largely on knowledge management. OpenAI is clearly committed to expanding the potential of its product offering, and its partnership with Microsoft, along with its strategic pricing decisions, suggests the company will continue to lead the artificial intelligence market.