Yesterday, Elon Musk escalated his ongoing dispute with OpenAI: his AI venture, xAI, released the computer code for its flagship software, Grok, for anyone to freely download and use. The large language model is Musk’s counter to OpenAI’s GPT-4, which powers the most sophisticated version of ChatGPT.

Musk’s decision to share Grok’s code is a clear challenge to OpenAI. Musk, one of OpenAI’s initial supporters, departed in 2018 and recently filed a lawsuit alleging breach of contract. He contends that the startup and its CEO, Sam Altman, have abandoned the organization’s original ideals in their quest for profit, transforming a utopian vision of technology that “benefits all of humanity” into just another opaque corporation. Musk has spent recent weeks referring to the secretive company as “ClosedAI.”

While his jab may be lackluster, Musk raises a valid point. OpenAI is not very transparent about its operations. It established a “capped-profit” subsidiary in 2019 that broadened the company’s scope beyond public interest, and the company is now valued at $80 billion or more. Meanwhile, an increasing number of AI competitors are openly sharing their products’ code. Meta, Google, Amazon, Microsoft, and Apple—all companies whose fortunes were built on proprietary software and devices—have either released the code for AI models of their own or collaborated with startups that have done so. These “open source” releases theoretically allow academics, regulators, the public, and startups to download, test, and modify AI models for their own uses. The release of Grok, therefore, signifies not only a critical moment in a corporate rivalry but also a potential industry-wide turning point. OpenAI’s commitment to secrecy is beginning to appear outdated.
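
To make concrete what “download, test, and modify” means here, the sketch below uses the open-source Hugging Face transformers library to fetch an openly published model and prompt it. The repository name is a placeholder rather than a real release, and the snippet assumes hardware with enough storage and memory to hold whatever model it points at.

```python
# A minimal sketch of downloading and testing an open-weight model with the
# Hugging Face transformers library. The model ID is a placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "some-org/some-open-model"  # placeholder: any open-weight model on the Hub

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

# "Testing" here means probing the model with a prompt and inspecting the output
# for quality, bias, or failure modes.
inputs = tokenizer("Open-source AI releases let researchers", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```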

The debate around generative AI has been largely driven by the tension between secrecy and transparency since the arrival of ChatGPT in late 2022. If the technology truly poses an existential threat to humanity, as some suggest, does the risk increase or decrease based on how many people can access the relevant code? Setting aside apocalyptic scenarios, if AI agents and assistants become as ubiquitous as Google Search or Siri, who should have the authority to guide and scrutinize that transformation? Advocates of open-sourcing, a group that now apparently includes Musk, argue that the public should have the ability to thoroughly test AI for both potentially civilization-ending threats and the less dramatic biases and flaws that currently plague the technology. This is preferable to leaving all decision-making to Big Tech.

OpenAI has offered a consistent justification for raising enormous amounts of money and ceasing to share its code: the cost of building AI skyrocketed, and releasing the underlying programming came to look far too risky. The company has stated that releasing complete products, such as ChatGPT, or even just demos, like the one for the video-generating Sora program, is sufficient to ensure that future AI will be safer and more useful. In response to Musk’s lawsuit, OpenAI released excerpts from old emails suggesting that Musk explicitly agreed with these justifications, even proposing a merger with Tesla in early 2018 to cover the technology’s future costs.

These costs present another argument for open-sourcing: making code publicly available can foster competition by allowing smaller companies or independent developers to create AI products without having to design their own models from scratch, a task that can be prohibitively expensive for anyone except a few ultra-wealthy companies and billionaires. However, both strategies—securing investments from tech companies, as OpenAI has done, or having tech companies open up their baseline AI models—are essentially two sides of the same coin: they are methods to overcome the technology’s enormous capital requirements that will not, by themselves, redistribute that capital.
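
One way smaller developers build on an open model rather than training their own is parameter-efficient fine-tuning, for instance with LoRA adapters from the peft library. The sketch below is illustrative only: the model identifier is a placeholder, and the module names passed to target_modules vary by architecture.

```python
# A sketch of adapting an existing open-weight model instead of training one
# from scratch, using LoRA adapters from the peft library.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("some-org/some-open-model")  # placeholder model ID

# Attach small trainable adapter matrices; the base model's weights stay frozen.
config = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, config)

# Typically well under 1 percent of the parameters end up trainable, which is
# what keeps the cost within reach of smaller companies and developers.
model.print_trainable_parameters()
```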

For the most part, when companies release AI code, they withhold certain critical aspects; for instance, xAI has not shared Grok’s training data. Without training data, it’s difficult to investigate why an AI model exhibits certain biases or limitations, and it’s impossible to determine if its creator violated copyright law. And without insight into a model’s production—technical details about how the final code was created—it’s much harder to learn anything about the underlying science. Even with publicly available training data, AI systems are simply too large and computationally intensive for most nonprofits and universities, let alone individuals, to download and run. (A standard laptop doesn’t even have enough storage to download Grok.) xAI, Google, Amazon, and all the rest are not telling you how to build an industry-leading chatbot, much less giving you the resources to do so. Openness is as much about branding as it is about values. Indeed, in a recent earnings call, Mark Zuckerberg was blunt about why openness is good for business: it encourages researchers and developers to use and enhance Meta products.
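
The storage point lends itself to back-of-the-envelope arithmetic. The figures below are approximate: Grok-1’s parameter count (roughly 314 billion) comes from xAI’s release, and the bytes per parameter depend on how a published checkpoint is quantized.

```python
# Rough estimate of how much disk space Grok-1's weights alone require.
params = 314e9  # Grok-1 parameter count, per xAI's release (approximate)

for label, bytes_per_param in [("16-bit weights", 2), ("8-bit weights", 1)]:
    size_gb = params * bytes_per_param / 1e9
    print(f"{label}: ~{size_gb:.0f} GB")

# 16-bit: ~628 GB; 8-bit: ~314 GB. Either way, more than the storage on a
# typical laptop, before counting the memory needed to actually run inference.
```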

Numerous startups and academic collaborations are releasing open code, training data, and comprehensive documentation alongside their AI products. But Big Tech companies tend to keep a tighter lid on theirs. Meta’s flagship model, Llama 2, is free to download and use—but its policies prohibit using it to enhance another AI language model or to develop an application with more than 700 million monthly users. Such uses would, of course, represent actual competition with Meta. Google’s most advanced AI offerings are still proprietary; Microsoft has supported open-source projects, but OpenAI’s GPT-4 remains central to its offerings.

Regardless of the philosophical debate over safety, the fundamental reason for OpenAI’s closed approach, compared with the growing openness of the tech giants, might simply be its size. Trillion-dollar companies can afford to put AI code into the world, knowing that their existing products, and the integration of AI into those products—bringing AI to Gmail or Microsoft Outlook—are where the profits lie. xAI has the direct backing of one of the world’s wealthiest individuals, and its software could be integrated into X (formerly Twitter) features and Tesla cars. Other startups, meanwhile, have to keep their competitive advantage under wraps. Only when openness and profit come into conflict will we get a glimpse of these companies’ true motivations.

5 COMMENTS

  1. Musk’s move to open-source Grok’s code is a game-changer. It’s a direct challenge to OpenAI’s business model and a call for transparency in the AI industry. The question is, will this lead to a more democratized AI landscape, or will it simply shift the power dynamics among tech giants?

  2. While Musk’s initiative is commendable, it’s important to note that open-sourcing code isn’t the same as democratizing AI. Without access to training data and the computational resources to run these models, the average user is still left in the dark.

  3. OpenAI’s decision to stop sharing its code was a controversial one, but it’s worth considering their reasoning. The risks associated with misuse of powerful AI models are real. However, their lack of transparency raises questions about accountability.

  4. It’s interesting to see how the tech giants are navigating this space. While some are embracing open-source, others are holding back, protecting their competitive edge. It’s clear that the industry is at a crossroads, and the decisions made now will shape the future of AI.

  5. The debate between secrecy and transparency in AI is complex. On one hand, open-sourcing can lead to innovation and scrutiny, which is crucial for addressing biases and flaws in AI. On the other hand, it can also pose security risks and ethical concerns.