What is the Google 'No Moat' Memo?
The "no moat" memo is a leaked document from a Google researcher arguing that neither Google nor OpenAI has a durable competitive advantage, or "moat," in the AI industry. It contends that open-source AI models are outpacing both companies: they are faster, more customizable, more private, and more capable for their size.
The memo's author believes the real competition comes from open-source communities, which have been solving major AI problems faster and more cheaply, and that Google's large proprietary models are no longer the advantage they once were.
However, Demis Hassabis, CEO of Google DeepMind, disagrees with the memo's conclusions. He believes the competitive nature of Google's researchers will push the company forward, and that the newly merged Google Brain and Google DeepMind teams will likely produce further breakthroughs.
The memo has sparked discussion about the future of AI and the role of open-source models. Some see it as a call for Google to reassess its approach to AI development and to learn from the open-source community. Others argue that the real moat will lie in building user-friendly tools for fine-tuning models for domain-specific applications.
The memo also raises questions about the business model for AI firms. Some argue that when a solution's performance is a function of scale, the moat is owning the compute, a lesson Google learned in search long ago.
The following document is a verified leak from a Google researcher, shared anonymously via Discord. It reflects the individual's views and not Google's stance. While we have reservations about the content, which we'll address separately for our subscribers, we present it here—stripped of internal links and reformatted—for its thought-provoking insights.
We Have No Moat
And neither does OpenAI
We've been closely monitoring OpenAI, wondering who would achieve the next breakthrough or make the next significant move.
However, the stark reality is that neither we nor OpenAI are in a position to win this race. While we've been preoccupied with each other, a third group has been steadily advancing: the open-source community.
To put it bluntly, open-source initiatives are outpacing us. They are solving problems we consider major challenges and are already delivering solutions to users. Here are a few examples:
- **LLMs on a Phone** — Foundation models are running on Pixel 6 devices at a rate of 5 tokens per second.
- **Scalable Personal AI** — Personalized AI models can be fine-tuned on a laptop in just one evening.
- **Responsible Release** — The issue of responsible release has been sidestepped rather than solved. Art models with no restrictions are widely available, and text-based models are following suit.
- **Multimodality** — The current state-of-the-art for multimodal ScienceQA was trained in just an hour.
While our models still have a slight edge in quality, the gap is closing at an astonishing rate. Open-source models are faster, more customizable, more private, and more capable for their size. They are achieving with $100 and 13 billion parameters what we struggle to do with $10 million and 540 billion parameters—and they're doing it in weeks, not months. This has significant implications for us:
- We lack a "secret sauce." Our best strategy is to learn from and collaborate with the open-source community. Prioritizing third-party integrations is essential.
- Users are unlikely to pay for restricted models when free, unrestricted alternatives of comparable quality are available. We need to reassess where our true value lies.
- Large models are hindering our progress. In the long term, the most effective models are those that can be quickly iterated upon. We should focus more on smaller variants, especially now that we understand the potential within the sub-20 billion parameter range.
For a detailed account of recent developments, visit: [https://lmsys.org/blog/2023-03-30-vicuna/](https://lmsys.org/blog/2023-03-30-vicuna/)
### What Happened
In early March, the open-source community gained access to their first highly capable foundation model when Meta's LLaMA was leaked. It lacked instruction or conversation tuning and RLHF, but its potential was immediately recognized.
This sparked a wave of innovation, with major developments occurring within days. Now, barely a month later, there are variants with instruction tuning, quantization, quality improvements, human evaluations, multimodality, RLHF, and more—many building upon each other's work.
Most notably, the barrier to entry for training and experimentation has plummeted from requiring the resources of a major research organization to just one person, an evening, and a powerful laptop.
### Why We Could Have Seen It Coming
This surge in open-source LLMs follows a similar explosion in image generation, which the community has dubbed the "Stable Diffusion moment" for LLMs.
In both instances, a cheaper fine-tuning method called LoRA, combined with significant scale advancements (latent diffusion for images, Chinchilla for LLMs), enabled widespread public participation. Access to high-quality models ignited a flurry of ideas and rapid iteration from individuals and institutions globally, quickly surpassing the pace of larger organizations.
These community contributions were pivotal, propelling Stable Diffusion to a different trajectory than Dall-E and leading to rapid cultural impact. Whether LLMs will follow the same path is yet to be seen, but the structural elements are similar.
### What We Missed
The innovations driving open-source success directly address challenges we're still facing. Paying closer attention to their work could prevent us from duplicating efforts.
- **LoRA** — This technique, which represents model updates as low-rank factorizations, allows for cost-effective and rapid model fine-tuning. It's a game-changer for personalizing language models on consumer hardware, yet it's underutilized within Google.
- **Retraining Models** — Unlike full retraining, fine-tuning with LoRA is stackable, allowing for iterative improvements. As new datasets and tasks emerge, models can be updated cheaply without the need for full retraining, which can be prohibitively expensive.
- **Large vs. Small Models** — LoRA updates are inexpensive and quick, enabling rapid iteration and distribution. This democratizes model development, allowing the cumulative effect of fine-tuning to rival larger models in a fraction of the time.
- **Data Quality vs. Size** — Training on small, curated datasets suggests that data quality may be more important than size. These high-quality datasets, often open-source, are becoming the standard outside of Google.
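To make the LoRA points above concrete, here is a minimal numpy sketch of the idea, with all names and sizes chosen for illustration (this is not Google's or any library's actual implementation): a frozen weight matrix is adapted by adding a trainable low-rank product, and because each adapter is a cheap additive update, independently trained adapters can be stacked on one base model.

```python
import numpy as np

# Hypothetical LoRA sketch: a frozen weight matrix W (d_out x d_in) is
# adapted by adding a trainable low-rank product B @ A, with rank r much
# smaller than the matrix dimensions.

rng = np.random.default_rng(0)
d_out, d_in, r = 512, 512, 8

W = rng.standard_normal((d_out, d_in))       # frozen pretrained weights
A = rng.standard_normal((r, d_in)) * 0.01    # trainable rank-r factor
B = np.zeros((d_out, r))                     # trainable; zero at init

def forward(x, adapters):
    # Adapters are (B, A) pairs. Each contributes an additive low-rank
    # correction, so separately trained adapters stack without retraining.
    y = W @ x
    for B_i, A_i in adapters:
        y = y + B_i @ (A_i @ x)
    return y

x = rng.standard_normal(d_in)
# With B initialized to zero, the adapted model matches the base model.
assert np.allclose(forward(x, [(B, A)]), W @ x)

# Parameter cost: a full fine-tune updates d_out * d_in values; a LoRA
# adapter updates only r * (d_out + d_in).
full_params = d_out * d_in           # 262,144
lora_params = r * (d_out + d_in)     # 8,192
```

The parameter count at the end is why the memo calls this a game-changer for consumer hardware: the trainable adapter here is roughly 3% the size of the full matrix, and shrinks further as the base model grows.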
### Directly Competing With Open Source Is a Losing Proposition
The advancements in open-source LLMs have immediate implications for our business strategy. With high-quality, unrestricted alternatives available for free, it's challenging to justify a paid, restricted Google product.
Moreover, the nature of open-source work, with its collaborative and iterative approach, is something we cannot replicate. We must recognize that we need the open-source community more than they need us.
### Owning the Ecosystem: Letting Open Source Work for Us
Meta, despite the leak of their model, has benefited from the global community's free labor. By owning the platform where innovation occurs, they can directly incorporate community-driven advancements into their products.
Google has seen success with this approach through Chrome and Android. By leading in the open-source space, we can shape the narrative and drive innovation.
### Epilogue: What about OpenAI?
OpenAI's closed policy may seem to put us at a disadvantage, but in reality, we're already sharing knowledge through the movement of researchers. OpenAI's relevance will diminish unless they adapt to the open-source trend.
### The Timeline
A series of events from the launch of LLaMA to the latest open-source achievements illustrates the rapid pace of innovation in the open-source community. Key milestones include the adaptation of LLaMA for various platforms, the introduction of instruction tuning, and the creation of models that rival Google Bard—all achieved with minimal resources.
What are the implications of the "no moat" memo for Google's AI strategy?
The "no moat" memo has significant implications for Google's AI strategy. The memo, written by a Google researcher and leaked publicly, suggests that neither Google nor OpenAI has a competitive advantage or "moat" in the AI industry. Instead, it highlights the rise of open-source AI models, which are faster, more customizable, more private, and more capable for their size, even with far fewer resources.
The memo suggests that Google's large AI models are no longer seen as an advantage, with open-source models being faster, more customizable, and more efficient. It also emphasizes the need for Google to adapt its strategy to incorporate more open and collaborative approaches, and to act upon the challenges it faces more promptly.
The memo's author argues that the value of owning the ecosystem cannot be overstated, and that most open-source innovation is happening on top of their architecture, allowing them to directly incorporate it into their products. This suggests a shift in Google's AI strategy towards a more open, collaborative, and ecosystem-based approach.
However, it's important to note that not everyone at Google agrees with the memo. Demis Hassabis, CEO of Google DeepMind, has publicly disputed its conclusions, expressing confidence in the competitive nature of Google's researchers and in the potential for further breakthroughs from the newly merged Google Brain and Google DeepMind teams.
The memo also raises broader questions about the future of AI. It suggests that the future of AI could be more open-source, with more collaboration and less emphasis on proprietary models. This could lead to a more democratized AI landscape, with more players able to contribute and compete.