Despite the warm early-May weather, Vinod Khosla wore a black turtleneck under a suit jacket and a serious expression. Surveying the packed auditorium inside the U.S. Capitol, he laid out the stakes of the debate to come: "Winning the AI race means economic power, which can influence social policies or ideologies."
Khosla went on to argue that China's growing AI capabilities could threaten the upcoming U.S. elections. At the Hill and Valley Forum, a one-day conference on AI and defense held on Capitol Hill, hawkish congressional staffers and policy experts agreed: what mattered most was AI's impact on national security, particularly in the hands of U.S. adversaries. But Khosla paired his warning with a call to action, namely a ban on open-sourcing America's leading AI models, and that demand placed him at the center of a broader, increasingly bitter fight in Silicon Valley.
The former Sun Microsystems CEO and founder of Khosla Ventures, like many investors and entrepreneurs, believes AI heralds a technological shift as revolutionary as the mainframe or the personal computer (in the words of billionaire Greylock partner Reid Hoffman, as revolutionary as the automobile or the steam engine). Every smartphone could carry an affordable virtual doctor; every child could have a free tutor. AI could be a great equalizer, a deflationary cheat code that saves lives and reduces poverty. "We can free people from drudgery, from grinding away eight hours a day on an assembly line for 40 years," said Khosla, 69.
But such dreams may come at a painful price, with unintended consequences far more severe than those of earlier technological inflection points: a dystopian AI arms race with China, say. If social media gave us culture wars and the weaponization of "truth," what collateral damage might AI bring?
For Khosla, Hoffman, and a cadre of powerful tech leaders, there is a clear way to head off regrettable unintended consequences: control AI's development and regulate its use. Giants like Google and Microsoft have joined those ranks, as has OpenAI, the maker of ChatGPT, in which Khosla and Hoffman were early investors. They believe guardrails are essential to realizing AI's utopian potential, a view shared by President Biden, to whom both venture capitalists are donors. French President Emmanuel Macron agrees too; last fall he invited Hoffman to breakfast to discuss what Hoffman calls the new "steam engine of the mind."
"How can we help as many good people as possible, such as doctors, while helping as few bad guys as possible, such as criminals?" Hoffman, the 56-year-old co-founder of LinkedIn, said when talking about this challenge. "My view is to find the fastest way to accelerate development while taking sensible risks, while also acknowledging these risks."
An increasingly fierce faction, however, is doing everything it can to thwart Khosla, Hoffman, and everything they represent. It is led by Marc Andreessen, 52, the cofounder of Netscape and of the venture capital firm a16z. To Andreessen, his partners, and the open-source absolutists in their ranks (an amorphous group that includes the CEOs of open-source AI startups Hugging Face and Mistral, Meta chief AI scientist Yann LeCun, and, sometimes, Tesla CEO and X owner Elon Musk), such talk of doom and national-security risk is a shameless ploy by AI's early power holders to entrench their dominance.
"There are no safety issues. The current technology does not pose existential risks," LeCun said. "If you are in a leading position, you will say that you need regulation because it is dangerous, so it needs to be closed," Martin Casado, an AI investor and colleague of Andreessen, agreed. "This is a typical regulatory capture. This is the rhetoric people use to shut things down."
Andreessen and his allies envision a future in which AI prevents disease and premature death, and every artist and businessperson has an AI assistant to boost their productivity. Warfare would be spared bloody human error, cutting casualties. AI-enhanced art and film would be everywhere. Andreessen declined an interview request for this article; he laid out his position in a manifesto last year, dreaming of an open-source paradise with no regulatory barriers to slow AI's development and no bureaucratic moats protecting big companies at the expense of startups.
All three billionaires appear on this year's list of the world's best tech investors (Hoffman at No. 8, Khosla at No. 9, Andreessen at No. 36), on the strength of portfolios that reach well beyond AI, but it is in this emerging field that their influence is most pronounced. Standout leaders of the last technological revolution, they are now weighing in on the defining questions of the next one.
"Bombs affect a region. AI will affect all regions at the same time."
Safe innovation or anticompetitive conspiracy? Techno-utopia or lawless Wild West? Talk to the self-appointed spokesmen of the two camps and you will find their views diametrically opposed. The sides can't even agree on who is who: everyone is an optimist, except to each other. To "accelerationists" like Andreessen, anyone who, like Hoffman, wants to slow down for the curves is a "decelerationist"; academics and executives who call AI a threat to humanity's survival are "doomers." Hoffman, meanwhile, says he was calling himself a techno-optimist long before Andreessen turned the term into a creed. "I appreciate Marc's advocacy," he said. "My view on open source is more nuanced than his."
On one thing they do agree: whichever view prevails will shape what Andreessen calls "probably the most important, and best thing that our civilization has ever created." And either way, there are enormous profits at stake.
In May 2023, OpenAI CEO Sam Altman appeared on Capitol Hill to testify before a Senate subcommittee on AI. The essence of his message: regulate us. For his opponents, it was the mask-off moment they had been waiting for. Three months earlier, Musk, who had cofounded and funded OpenAI back when it was an open-source nonprofit, had condemned on X its recent multibillion-dollar infusion of capital from Microsoft. OpenAI, Musk said, had morphed from its nonprofit roots into a "closed-source, profit-maximizing company effectively controlled by Microsoft."
Hoffman says his many podcast appearances, LinkedIn posts, and even an AI-assisted book on the subject show that his position has been consistent. Just as important, he accepts that many citizens, from artists and academics to businesspeople and scientists, may not see AI's advance as a good thing. Steeped in science fiction, many imagine that an AI gone wrong means killer robots or a superhuman intelligence that decides to eliminate humanity. "I very much understand people's concerns about AI," Hoffman said. "But it's like saying, 'I don't want the Wright brothers to fly until we know how to prevent plane crashes.' That's just not how things work."
Khosla says he and Hoffman are "very similar" on policy. "I think a balanced approach is better for society, one that reduces the risks while preserving the benefits," he said. He co-hosted a Silicon Valley fundraiser for Biden this election cycle, and in October he submitted a comment to the U.S. Copyright Office defending the training of AI models on copyrighted material (with an opt-out option).
Lately, though, Khosla's tone has turned more ominous; he has compared OpenAI's work to the Manhattan Project that built the atomic bomb. (On X, he put the question to Andreessen directly: would you open-source the Manhattan Project?) Left uncontrolled, Khosla believes, AI poses far graver security risks. "A nuclear bomb affects a region. AI will affect all regions at the same time," he said.
Hoffman is not worried about bombs. He is worried about a freely available AI model that, once trained, could help generate a biological weapon capable of wiping out 100 million people, and that could spread far and wide once released. "Once you open-source one of these things, you can't take it back," he said. "My position is: let's handle the really acute problems that could be hugely impactful to millions of people. Everything else? Look, you can put the genie back in the bottle."
The appropriate response, they believe, is "fairly loose" regulation along the lines of Biden's October executive order, which calls for greater oversight of model makers, including the sharing of safety-test results, and for work toward new pre-release safety standards. The Andreessen camp is unappeased. Andreessen claims that Big Tech companies (such as Google and Microsoft) and "new incumbents" (OpenAI and Anthropic, both heavily backed by Big Tech) share a goal: to form a government-protected cartel that locks in their "common agenda." As Andreessen wrote on X: "The only viable option is Elon, startups, and open source - they are all under coordinated attack... and there are very few defenders."
Casado, who sold his networking startup Nicira to VMware for more than $1 billion in 2012, sees a story he and Andreessen have watched play out before: lawmakers, frustrated by the enormous power amassed by social media companies like Meta, are still fighting the last regulatory war.