The unveiling of OpenAI’s ground-breaking GPT-4 foundation model in March 2023 was a milestone in the history of generative artificial intelligence (AI). Yet it was not the only San Francisco event that month to grab the tech world’s attention. Just two weeks later, downtown San Francisco hosted another event, affectionately known as the “Woodstock of AI.”
The vibrant gathering served as a celebration of the rapid growth of a particular type of generative AI—the open-source kind—and of the community that has sprung up around it. In the months since, there has been an explosion of new players, models, and use cases in the open-source ecosystem. It is likely that we will look back on this period as a defining moment, when the competition between two approaches to AI—proprietary and open-source—broke into the open.
In the six months since the GPT-4 launch and the "Woodstock of AI" gathering, the dynamic between the rivals has come into ever sharper focus. To add some definitions: generative AI is "closed source" where a proprietary foundation model, typically owned by a big tech company, charges users per API call. The open-source ecosystem, by contrast, promotes the free sharing and adaptation of AI model parameters (the companies involved make money indirectly, for example by sharing in cloud providers' revenue from offering their models).
We are currently witnessing a showdown between the two approaches. Proponents of open source claim their movement is powerful and unstoppable. The closed-source camp, meanwhile, shows no sign of slowing: in October OpenAI introduced GPT-4V (GPT-4 with vision), another potent closed model that combines visuals with text. And in a new book, "The Coming Wave", DeepMind co-founder Mustafa Suleyman argues that open-sourcing powerful AI models should be restricted for the sake of safety.
Whether the world's businesses and consumers adopt mostly closed-source generative AI, mostly open-source generative AI, or a balance of the two will be crucial, and not only from the point of view of ensuring AI develops in a way that is good for humanity. The outcome will also shape the most transformative AI use cases in business and society, and it will determine who reaps the rewards of generative AI.
But first things first: what exactly was this "Woodstock of AI" festival, and who was there? The "Open-Source AI Meetup" was held in late March at the Exploratorium in San Francisco, with over 5,000 attendees. Like the rock festival after which it was named, it had a party atmosphere, reinforced by the collaborative spirit and innovative energy of the open-source movement.
Among the crowd, Clement Delangue, the CEO of AI firm Hugging Face, who organised the event, was dressed as the company's aptly named mascot, a cheery yellow emoji resembling a "hugging face" 🤗. Real llamas sauntered around the venue, an amusing nod to Meta's large language model, LLaMA. "Free the Llama" signs fluttered in the air as various AI luminaries, such as Andrew Ng and the leaders of large language model (LLM) startup Anthropic, circulated. Several of the (human) attendees were included in Time magazine's recently published list of the AI field's 100 most influential people.
Though the scene was unlike that of any typical tech conference, game-changing ideas were being shared, with clear and genuine excitement at the colossal potential of generative AI, a potential recently estimated by McKinsey to be worth an additional US$2.6 trillion to US$4.4 trillion annually across 63 use cases.
Tech leaders everywhere share in the excitement. For instance, Tencent’s founder and CEO Pony Ma, speaking at the company’s 2023 Shareholders’ Meeting in May, observed: “We initially thought that AI was a once-in-a-decade opportunity for the internet industry, but the more we think about it, the more we realize that this is a rare opportunity that only comes along every few hundred years, similar to the industrial revolution and the harnessing of electricity”.
Which of the two types of generative AI model is leading the new industrial revolution? Right now, the proprietary type is ahead. Two reasons for this are clear: closed models lead in terms of capability, and they are perceived, for now, to be safer.
Start with performance. According to leading benchmarks such as Massive Multitask Language Understanding (MMLU), OpenAI's GPT-4 currently stands out as the most powerful and capable LLM by a significant margin. Although the quality of open-source models is rapidly improving, they remain behind the leading closed-source alternatives.
The reason for this is the stark commercial reality of training leading foundation models. The upfront costs are immense, ranging from the acquisition of specialized hardware such as Nvidia's cutting-edge H100 GPU chips, at around US$30,000 apiece, to substantial cloud computing expenses. Additionally, the deployment of advanced training techniques, such as Reinforcement Learning from Human Feedback (RLHF), requires specialized expertise. Startups like Cohere, Anthropic, Adept, Mistral, Aleph Alpha, AI21 Labs, and Imbue, known for allocating a significant portion of their budgets to chips alone, illustrate the point.
By and large, it is closed models that have had the most resources ploughed into them. In the case of OpenAI, the sheer scale of the costs involved appears to have prompted a switch from open to closed. Founded in 2015 by Sam Altman, now its CEO, along with notable figures such as Elon Musk, OpenAI initially pledged allegiance to the open-source movement. However, upon releasing GPT-4, its most powerful large language model to date, the organization dropped its original open-source commitment. This shift can partly be attributed to OpenAI's need to protect its hefty investment.
Safety is seen (for now) as another closed-source advantage. OpenAI says another reason it opted for a closed approach is the ethical risk associated with LLMs. These models have the potential for misuse by bad actors, and as they become increasingly potent, the risks of making them openly accessible increase. OpenAI's Chief Scientist, Ilya Sutskever, says: "If you believe, as we do, that at some point, AI—or AGI—will become extremely potent, then open-sourcing it simply doesn't make sense. It's a bad idea."
Why, then, given arguments like Sutskever’s, and proprietary models’ strong performance lead, is there so much buzz about the open-source generative AI movement? The world’s biggest tech companies as well as startups and legions of developers are piling in.
One reason is that open source has slowly but surely succeeded in the tech world over time. Modern cloud infrastructure largely runs on Linux; machine learning is powered by open-source languages and frameworks such as Python; and open source permeates many other aspects of the technology landscape.
The excitement at the "Woodstock of AI" was about open-source innovation. Open-source LLMs make their weights and parameters publicly available, enabling a global community of developers to fine-tune and enhance them, inspiring innovation at a pace that even the latest closed models struggle to match.
The ability to easily fine-tune open-source models is also hugely appealing for enterprises looking to adopt generative AI: it allows them to tailor a model to their own company-specific data, enabling use cases that depend on that proprietary knowledge.
Hugging Face, organiser of the "Woodstock of AI", is one of the early pioneers of the open-source AI movement. Founded in 2016, the company offers, among other open-source tools, its Transformers library, which gives customers access to an open repository of LLMs that they can either adapt further themselves or call through APIs for typical LLM functions such as sentence completion, classification, and text generation. This "Model-as-a-Service" platform enables businesses of all sizes to move from experimentation to deployment without the need for excessive in-house resources. Users can convert any model into their own API using managed infrastructure, demonstrating the open-source ethos of democratizing AI.
Giants such as Microsoft, Google, Meta, Intel, and eBay are among the more than 10,000 customers of Hugging Face. Its "Model-as-a-Service" concept has evolved to host over one million models, datasets, and apps. This diverse ecosystem underscores the broad applicability of its open-source tools, ranging from data-security upgrades at pharmaceutical giants like Pfizer and Roche to specialized AI applications, such as Bloomberg's finance-focused language model, BloombergGPT.
As the AI landscape continues to evolve, leading figures and key players are increasingly advocating for generative AI to be open source. Turing Award winner and chief AI scientist at Meta, Yann LeCun, captures why he thinks the world needs open-source LLMs: “Since AI base models are going to become a basic infrastructure, people (and the industry) will demand that it be open source. Just like the software infrastructure of the internet”.
Meta CEO Mark Zuckerberg has a different reason for championing open source. "It gets more efficient every day," he comments. "I just think that we'll also learn a lot by seeing what the whole community of students and hackers and startups and different folks build with this".
In line with this ethos, Meta's July release of Llama 2 represents arguably the most robust and capable open-source LLM available to the public so far, featuring pretrained and fine-tuned versions with 7, 13, and 70 billion parameters.
In addition to mainstream initiatives like Llama 2, other noteworthy projects are contributing to the open-source AI ecosystem. Runway, for example, started in 2018 with a focus on AI tools for filmmakers but has since shifted towards generative AI. Its flagship product, Gen-2, is pioneering in its ability to create videos from text prompts, and the company has also launched Runway Studios and an AI Film Festival to expand its reach.
LangChain, on the other hand, is a Python library designed to enhance the usability, accessibility, and versatility of LLMs, making it easier for developers to integrate these powerful tools into various applications. Each of these projects demonstrates the growing diversity and applicability of open-source AI models in different sectors.
Open-source models are also challenging the notion that bigger is always better in terms of model parameters. Smaller models can offer cost-effectiveness, greater agility, and may even outperform larger models when fine-tuned for specific applications.
There are good arguments on the open-source side, too, when it comes to the crucial question of making AI safe and responsible. Advocates of the proprietary approach say that making models accessible to all and sundry is dangerous. Advocates of open-source AI counter that open-source LLMs offer transparency and invite scrutiny from a diverse community, which can help identify and reduce biases, making the models more equitable. Additionally, open-source models provide clarity about how user data is utilized, unlike some closed-source models.
What does the future hold and which model will win? To sum up, each approach has its virtues. Proprietary models, such as GPT-4, bring unique advantages, including specialized customization, dedicated support, and robust security features. On the other hand, attributes such as efficiency, transparency and fairness make a strong case for open-source AI.
A rational strategy, of course, is for companies to offer and harness the best of both worlds. We at Tencent are thus adopting a dual approach. We have rolled out our proprietary foundation AI model, Hunyuan, for diverse applications, while also offering a "Model-as-a-Service" solution on Tencent Cloud, designed to enable efficient deployment of open-source models across multiple industries. We anticipate a future landscape in which a few closed foundation models dominate, while open-source, specialized models for specific sectors and enterprise applications also flourish. Personal AI assistants based on very small models, capable of running inside instant messengers on smartphones and laptops, will become our companions.
Meta's Llama 2 is hosted by U.S. cloud providers such as Microsoft Azure and Amazon's AWS, underscoring that these tech giants likewise see value in supporting open-source models alongside closed ones.
A healthy rivalry between open-source and proprietary models is to be welcomed. Fortunately, there seems to be little prospect for now of one approach coming to dominate. The gap in quality between the two types of models has decreased over the past six months. The potential of open-source models to spur innovation, democratize AI, and promote responsibility and security is becoming clearer.
Michael Wooldridge, professor of computer science at Oxford University and Director of Foundational AI Research at the Alan Turing Institute, is an AI pioneer who will deliver the Royal Institution's 2023 Christmas Lectures on "The Truth About AI". He wishes to see both approaches thrive. "In this pivotal year where mass-market, general-purpose AI tools like ChatGPT have emerged," he says, "we're at a critical juncture. Open-source and proprietary models each have their merits and limitations. As we proceed, it's vital that we strike a balance to ensure that AI remains a tool that benefits broader society." Just as Woodstock won its place in music history in 1969, the San Francisco spring of 2023 has won a place in the AI history books.