Meta Made Its AI Tech Open-Source. Rivals Say It’s a Risky Decision.

In February, Meta took an unusual step in the rapidly evolving world of artificial intelligence: the company decided to give away its AI crown jewels.

The Silicon Valley giant, which owns Facebook, Instagram and WhatsApp, had developed an AI technology called LLaMA that can power online chatbots. But instead of keeping the technology to itself, Meta released the system’s underlying computer code. Academics, government researchers and others who gave Meta their email address could download the code once the company had vetted them.

Essentially, Meta gave away its AI technology as open-source software — computer code that can be freely copied, modified, and reused — giving outsiders everything they needed to quickly create their own chatbots.

“The platform that will win will be the open one,” Yann LeCun, Meta’s chief AI scientist, said in an interview.

As the race for AI leadership heats up across Silicon Valley, Meta is setting itself apart from its competitors with a different approach to the technology. Driven by its founder and CEO, Mark Zuckerberg, Meta believes the smartest thing to do is share its underlying AI engines in order to spread its influence and, ultimately, move faster into the future.

Its actions contrast with those of Google and OpenAI, the two companies spearheading the new AI arms race. Concerned that AI tools like chatbots could be used to spread disinformation, hate speech, and other toxic content, these companies are increasingly hiding the methods and software behind their AI products.

Google, OpenAI, and others have criticized Meta, saying a fully open-source approach is dangerous. The rapid rise of AI in recent months has raised alarm bells about the technology’s risks, including that it could turn the job market upside down if not used properly. And just days after LLaMA’s release, the system was leaked to 4chan, the online message board notorious for spreading false and misleading information.

“We want to think more carefully about giving away details or open-sourcing code” of AI technology, said Zoubin Ghahramani, a Google vice president of research who helps oversee AI work. “Where can this lead to abuse?”

But Meta said it sees no reason to keep its code to itself. The growing secrecy at Google and OpenAI is a “huge mistake,” Dr. LeCun said, and a “really bad view of what’s happening.” He argues that consumers and governments will refuse to embrace AI unless it is outside the control of companies like Google and Meta.

“Do you want every AI system to be under the control of some powerful American corporations?” he asked.

OpenAI declined to comment.

Meta’s open-source approach to AI isn’t new. The history of technology is littered with battles between open-source and proprietary, or closed, systems. Some companies hoard the core tools used to build tomorrow’s computing platforms, while others give those tools away. Most notably, Google released the Android mobile operating system as open source to challenge Apple’s dominance in smartphones.

In the past, many companies openly shared their AI technologies at the urging of researchers. But their tactics are changing because of the race around AI. The shift began last year when OpenAI released ChatGPT. The chatbot’s overwhelming success wowed consumers and intensified competition in the AI space. Google quickly began incorporating more AI into its products, and Microsoft invested $13 billion in OpenAI.

While Google, Microsoft and OpenAI have received most of the attention in the AI space since then, Meta has also been investing in the technology for nearly a decade. The company has spent billions of dollars building the software and hardware needed to power chatbots and other “generative AI” systems that produce text, images and other media on their own.

For the past few months, Meta has been working hard behind the scenes to incorporate its years of AI research and development into new products. Mr. Zuckerberg is focused on building the company into a leading AI company and holds weekly meetings on this topic with his executive team and product leads.

As a sign of its commitment to AI, Meta announced on Thursday that it had developed a new computer chip and built a new supercomputer specifically for developing AI technologies. It is also designing a new computer data center with AI in mind.

“We’ve been building advanced infrastructure for AI for years, and this work reflects long-term efforts that will enable even more advances and better uses of this technology in everything we do,” Zuckerberg said.

Meta’s biggest AI move in recent months was the release of LLaMA, a so-called large language model, or LLM (LLaMA stands for “Large Language Model Meta AI”). LLMs are systems that learn skills by analyzing enormous amounts of text, including books, Wikipedia articles and chat logs. ChatGPT and Google’s Bard chatbot are also built on such systems.

LLMs locate patterns in the text they analyze and learn to generate writing of their own, including term papers, blog posts, poems and computer code. They can even carry on complex conversations.

In February, Meta released LLaMA openly, allowing academics, government researchers and others who provided their email address to download the code and use it to create their own chatbot.

But the company went further than many other open-source AI projects. It allowed people to download a version of LLaMA that had already been trained on vast amounts of digital text pulled from the internet. Researchers call this “releasing the weights,” referring to the particular mathematical values the system learns as it analyzes the data.

This was significant because analyzing all that data typically requires hundreds of specialized computer chips and tens of millions of dollars, resources most companies do not have. Those who hold the weights can deploy the software quickly, easily and cheaply, spending a fraction of what it would otherwise cost to build such powerful software.

As a result, many in the tech industry believed Meta had set a dangerous precedent. And within days someone posted the LLaMA weights on 4chan.

At Stanford University, researchers used Meta’s new technology to build their own AI system, which was made available on the internet. A Stanford researcher named Moussa Doumbouya soon used it to generate problematic text, according to screenshots seen by The New York Times. In one instance, the system provided instructions for disposing of a dead body without being caught. Racist material also emerged, including comments that supported the views of Adolf Hitler.

In a private chat among the researchers, which was seen by The Times, Mr. Doumbouya said that distributing the technology to the public was like “a grenade available to everyone in a grocery store.” He did not respond to a request for comment.

Stanford promptly removed the AI system from the internet. The project was intended to provide researchers with technology that “captures the behavior of cutting-edge AI models,” said Tatsunori Hashimoto, the Stanford professor who led the project. “We took the demo down as we became increasingly concerned about the potential for misuse beyond a research setting.”

Dr. LeCun argues that this kind of technology is not as dangerous as it might seem. He said small numbers of people could already generate and spread disinformation and hate speech, and he added that toxic material could be tightly restricted by social networks such as Facebook.

“You can’t stop people from creating nonsense or dangerous information or whatever,” he said. “But you can prevent the spread.”

Widespread use of open-source software could also level the playing field for Meta as it competes with OpenAI, Microsoft and Google. If every software developer in the world builds programs using Meta’s tools, it could help entrench the company in the next wave of innovation and stave off potential irrelevance.

Dr. LeCun also pointed to recent history to explain why Meta is committed to open-source AI technology. He said the rise of the consumer internet was the result of open, communal standards that helped build the fastest and most widespread knowledge-sharing network the world has ever seen.

“Progress is faster when it’s open,” he said. “You have a more vibrant ecosystem where everyone can contribute.”