Meta has built a massive new language AI—and it’s giving it away for free

Pineau helped change how research is published in several of the largest conferences, introducing a checklist of things that researchers must submit alongside their results, including code and details about how experiments are run. Since she joined Meta (then Facebook) in 2017, she has championed that culture in its AI lab. 

“That commitment to open science is why I’m here,” she says. “I wouldn’t be here on any other terms.”

Ultimately, Pineau wants to change how we judge AI. “What we call state-of-the-art nowadays can’t just be about performance,” she says. “It has to be state-of-the-art in terms of responsibility as well.”

Still, giving away a large language model is a bold move for Meta. “I can’t tell you that there’s no risk of this model producing language that we’re not proud of,” says Pineau. “It will.”

Weighing the risks

Margaret Mitchell, one of the AI ethics researchers Google forced out in 2020, who is now at Hugging Face, sees the release of OPT as a positive move. But she thinks there are limits to transparency. Has the language model been tested with sufficient rigor? Do the foreseeable benefits outweigh the foreseeable harms—such as the generation of misinformation, or racist and misogynistic language? 

“Releasing a large language model to the world where a wide audience is likely to use it, or be affected by its output, comes with responsibilities,” she says. Mitchell notes that this model will be able to generate harmful content not only by itself, but through downstream applications that researchers build on top of it.

Meta AI audited OPT to remove some harmful behaviors, but the point is to release a model that researchers can learn from, warts and all, says Pineau.

“There were a lot of conversations about how to do that in a way that lets us sleep at night, knowing that there’s a non-zero risk in terms of reputation, a non-zero risk in terms of harm,” she says. She dismisses the idea that you should not release a model because it’s too dangerous—which is the reason OpenAI gave for not releasing GPT-3’s predecessor, GPT-2. “I understand the weaknesses of these models, but that’s not a research mindset,” she says.
