Why We Built Forum AI

How do we ensure AI systems are accurate and trustworthy when they're becoming the primary way millions of people get information? We co-founded Forum AI to answer that question.

As AI becomes the default way we get answers to our most important questions, we're facing an enormous challenge: the systems we're building lack the judgment to handle complex, subjective scenarios reliably. Consider the role of chatbots, auto-generated content, and future technologies in explaining the news, educating our children, or offering mental health support.

Our careers have taught us how information systems shape belief at scale. At Meta, we watched algorithms and AI systems determine what billions see—Campbell leading teams on news and media partnerships, Robbie working on AI trust and safety. We saw how even well-intentioned systems can fail to deliver necessary context, reinforce hidden bias, and erode trust. AI systems could make these problems exponentially worse, not because they're inherently biased, but because they're opaque.

That's why we built Forum AI.

We've assembled a network of world-leading domain experts across economics, foreign policy, politics, healthcare, and education. Our network includes former Treasury Secretary Larry Summers, former Speaker of the House Kevin McCarthy, CNN's Fareed Zakaria, historian Niall Ferguson, political strategists Scott Jennings and Van Jones, author Salena Zito, former CIA Assistant Director Rob Kee, and many more. We've partnered with top names in healthcare like Mount Sinai, along with leading think tanks and academic institutions.

The network is politically diverse and global, offering a broad range of perspectives. These are people who have spent decades building deep knowledge in their fields—through government service, academic work, and making consequential decisions under pressure. Former cabinet secretaries who have shaped economic policy. Intelligence officials who have analyzed geopolitical threats. Healthcare professionals who have treated patients through crises. Their expertise comes from the combination of rigorous training, peer recognition, and impact in the real world.

Our mission is to make AI systems trustworthy on questions where getting it wrong has real consequences, and we’ll do it by ensuring expert judgment shapes how these systems learn and what they produce.

We evaluate how major AI models—ChatGPT, Claude, Gemini, and others—handle subjective, high-stakes questions. When these systems address inflation, geopolitical conflicts, or mental health crises, our experts assess whether responses demonstrate appropriate tone, balance, and context, while identifying missing perspectives and hidden bias.

When we find gaps across multiple AI systems, our experts fill them with original content. When major news breaks, they provide real-time analysis—often within minutes—capturing critical context: the historical patterns an economist sees in market movements or the regional dynamics a foreign policy specialist sees in diplomatic developments.

As AI systems become the primary source for information, we need infrastructure that ensures they can handle questions where judgment matters as much as facts. That's what Forum AI provides.
