In the corridors of power and the boardrooms of Silicon Valley, a quiet, high-stakes battle is brewing over the architecture of human knowledge. Campbell Brown, a woman whose career has been defined by the pursuit of accuracy—first as a celebrated television journalist and later as the inaugural news chief at Meta—has turned her attention to the next great information frontier: artificial intelligence.
Watching the rapid deployment of foundation models, Brown recognized an existential déjà vu. The same patterns of misinformation, algorithmic bias, and platform-driven distortion that plagued the social media era are now being baked into the very engines that will answer the world’s questions. Unlike her previous roles, where she sought to influence internal policy, Brown is now working from the outside in. Her startup, Forum AI, aims to build the "quality control" infrastructure that the tech industry has conspicuously failed to prioritize.
The Genesis: From Facebook to the Frontier
To understand the mission of Forum AI, one must look back 17 months to a specific, sobering moment in New York City. When ChatGPT was released to the public, the implications for the future of human cognition hit Brown with visceral force.
“I was at Meta when ChatGPT was first released publicly,” Brown recalled during a recent discussion with TechCrunch’s Tim Fernholz at a StrictlyVC event in San Francisco. “I remember realizing this is going to be the funnel through which all information flows. And it’s not very good.”
For Brown, the stakes were personal. As a mother, she watched the rapid proliferation of generative AI and saw a potential degradation of intellectual rigor for the next generation. “My kids are going to be really dumb if we don’t figure out how to fix this,” she admitted, capturing the existential dread that has driven her to pivot from content strategy to technical auditing.
Brown observed that the primary focus of AI developers remained locked on technical benchmarks—coding proficiency and mathematical accuracy—while the "soft" but essential domains of news, geopolitics, and nuance were left to flounder. In her view, the industry was optimizing for speed and scale while ignoring the "high-stakes topics" where answers are rarely black and white.
The Methodology: Human Expertise at Scale
Forum AI operates on a premise that seems almost counterintuitive in an era of pure automation: to make AI smarter, you must first defer to the wisest humans. The startup’s core offering is a rigorous evaluation platform that pits foundation models against "high-stakes" scenarios involving geopolitics, mental health, finance, and hiring.
The process is architectural in nature. Brown and her team recruit world-class subject matter experts to design intricate benchmarks. For their work in geopolitics, the roster reads like a "who's who" of international affairs: historian Niall Ferguson, commentator Fareed Zakaria, former Secretary of State Antony Blinken, former House Speaker Kevin McCarthy, and former White House cybersecurity official Anne Neuberger.
The goal is to move beyond the shallow, checkbox-style audits currently favored by the industry. By having these experts define the parameters of truth and nuance, Forum AI trains "AI judges" to evaluate how well a large language model (LLM) handles complex queries. According to Brown, Forum AI's AI judges already agree with its human domain experts 90% of the time—a significant milestone in the quest for automated, high-fidelity accuracy.
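Forum AI has not published its methodology, but the agreement figure Brown cites can be made concrete with a minimal sketch: given the same set of model answers labeled by an AI judge and by a human expert, compute the raw agreement rate (and, as a common refinement, Cohen's kappa, which corrects for chance agreement). All function names and data below are illustrative assumptions, not Forum AI's actual pipeline.

```python
# Hypothetical sketch: comparing "AI judge" verdicts against human expert
# verdicts on the same answers. Labels and data are purely illustrative.
from collections import Counter

def agreement_rate(judge_labels, expert_labels):
    """Fraction of items where the AI judge and the human expert agree."""
    if len(judge_labels) != len(expert_labels):
        raise ValueError("label lists must be the same length")
    matches = sum(j == e for j, e in zip(judge_labels, expert_labels))
    return matches / len(judge_labels)

def cohens_kappa(judge_labels, expert_labels):
    """Chance-corrected agreement (Cohen's kappa) for the same labels."""
    n = len(judge_labels)
    p_observed = agreement_rate(judge_labels, expert_labels)
    judge_counts = Counter(judge_labels)
    expert_counts = Counter(expert_labels)
    # Probability that both raters pick the same label by chance,
    # summed over every label either rater used.
    p_chance = sum(
        (judge_counts[k] / n) * (expert_counts[k] / n)
        for k in set(judge_labels) | set(expert_labels)
    )
    return (p_observed - p_chance) / (1 - p_chance)

# Example: ten answers scored "pass"/"fail" by judge and expert,
# disagreeing on exactly one item.
judge  = ["pass", "pass", "fail", "pass", "fail",
          "pass", "pass", "fail", "pass", "pass"]
expert = ["pass", "pass", "fail", "pass", "pass",
          "pass", "pass", "fail", "pass", "pass"]
print(agreement_rate(judge, expert))  # 0.9
```

Raw agreement is the headline number, but chance-corrected measures like kappa matter in practice: when most answers pass, a judge that always says "pass" scores high agreement without evaluating anything.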
The Failure of "Slop": Identifying the Model Gap
The necessity of Forum AI’s work is underscored by the current state of the market. When Brown’s team began evaluating the leading models, the results were, in her words, "not exactly encouraging."
Her assessments revealed alarming systemic failures. She noted instances where Google’s Gemini model pulled information from Chinese Communist Party websites to answer queries unrelated to China, suggesting a lack of contextual discernment. Furthermore, she pointed to a pervasive left-leaning political bias across almost all major models.
Beyond overt political leanings, the models often suffer from "subtler failures": missing context, failing to present competing perspectives, and engaging in "straw-manning"—a process where the model misrepresents an opponent’s argument to make it easier to refute.
"There’s a long way to go," Brown noted. "But I also think that there are some very easy fixes that would vastly improve the outcomes."
The "Compliance Joke" and the Case for Enterprise
Brown’s cynicism regarding current industry self-regulation is rooted in her tenure at Facebook, where she witnessed firsthand the limitations of corporate internal fact-checking programs. She argues that the industry’s current compliance landscape is, frankly, "a joke."
She cites the implementation of New York City's landmark hiring bias law as a case study. When the city comptroller audited AI tools used for recruitment, more than half were found to contain undetected violations. This revealed a critical disconnect: the tech industry believes it is building for the future, but it is failing to account for the immediate legal and ethical consequences of its output.
Brown is betting that the solution will come from an unlikely source: the enterprise sector. While consumer-facing chatbots may continue to serve up "slop," businesses dealing with credit decisions, insurance, and hiring are inherently risk-averse.
"They’re going to want you to optimize for getting it right," Brown argued. For these companies, a hallucinating AI is not just a nuisance—it is a massive liability. By positioning Forum AI as a bridge between the chaotic potential of LLMs and the rigid requirements of enterprise compliance, Brown believes she can force the market to care about accuracy.
The Great Disconnect: Silicon Valley vs. The Public
Perhaps the most striking theme in Brown’s critique is the widening chasm between the self-perception of AI leaders and the reality for the average user.
"You hear from the leaders of the big tech companies, ‘This technology is going to change the world,’ ‘it’s going to put you out of work,’ ‘it’s going to cure cancer,’" Brown said. "But then to a normal person who’s just using a chatbot to ask basic questions, they’re still getting a lot of slop and wrong answers."
This disconnect has led to an all-time low in public trust—a skepticism that Brown believes is entirely justified. The industry is trapped in a feedback loop of hyperbole, while the actual utility of these tools remains hampered by fundamental flaws in data processing and bias mitigation.
Looking Forward: The Crossroads of Information
The $3 million in seed funding raised by Forum AI, led by Lerer Hippeau, is a vote of confidence in the idea that the "truth" can be a product. However, turning that vision into a sustainable business model remains an uphill battle. The market is currently saturated with "checkbox" auditors—companies that provide superficial compliance reports that satisfy regulators but fail to address the deeper, more dangerous issues of algorithmic bias and factual integrity.
Brown is fighting for a more substantive, labor-intensive approach. She remains adamant that "smart generalists aren’t going to cut it" when it comes to auditing AI. The complexity of modern information requires domain-specific expertise capable of navigating edge cases that most developers haven’t even considered.
As the industry stands at this critical juncture, Brown sees a binary path forward. AI companies can continue to prioritize engagement and speed, effectively "giving users what they want," or they can commit to the harder, more expensive path of "giving people what’s real and what’s honest and what’s truthful."
For a woman who has spent her life in the pursuit of the latter, the stakes have never been higher. Whether the enterprise sector will embrace her vision remains to be seen, but one thing is clear: Campbell Brown is no longer waiting for the tech giants to police themselves. She is building the jury, the judge, and the standards by which the next era of human information will be measured.