“Sentience” is the Wrong Question – O’Reilly


Last weekend, Blake Lemoine, a Google engineer, was suspended for disclosing a series of conversations he had with LaMDA, Google's impressive large language model, in violation of his NDA. Lemoine's claim that LaMDA has achieved "sentience" was widely publicized–and criticized–by almost every AI expert. And it came only two weeks after Nando de Freitas, tweeting about DeepMind's new Gato model, claimed that artificial general intelligence is only a matter of scale. I'm with the experts; I think Lemoine was taken in by his own willingness to believe, and I believe de Freitas is wrong about general intelligence. But I also think that "sentience" and "general intelligence" aren't the questions we ought to be discussing.

The latest generation of models is good enough to convince some people that they are intelligent, and whether or not those people are deluding themselves is beside the point. What we should be talking about is what responsibility the researchers building those models have to the general public. I recognize Google's right to require employees to sign an NDA; but when a technology has implications as potentially far-reaching as general intelligence, is the company right to keep it under wraps? Or, looking at the question from the other direction, will developing that technology in public breed misconceptions and panic where none is warranted?



Google is one of the three major actors driving AI forward, along with OpenAI and Facebook. These three have demonstrated different attitudes towards openness. Google communicates largely through academic papers and press releases; we see gaudy announcements of its accomplishments, but the number of people who can actually experiment with its models is extremely small. OpenAI is much the same, though it has also made it possible to test-drive models like GPT-2 and GPT-3, and to build new products on top of its APIs–GitHub Copilot is just one example. Facebook has open-sourced its largest model, OPT-175B, along with several smaller pre-built models and a voluminous set of notes describing how OPT-175B was trained.
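Of the three, Facebook's smaller pre-built OPT checkpoints are the easiest to try for yourself. As a rough sketch (assuming the Hugging Face transformers library and the publicly hosted facebook/opt-125m checkpoint, neither of which the article itself mentions), "test-driving" one looks something like this:

```python
# A rough sketch of test-driving one of the smaller pre-built OPT models
# released alongside OPT-175B. Assumes the Hugging Face transformers library
# and the facebook/opt-125m checkpoint on the Hugging Face Hub; OPT-175B
# itself is far too large to run this way.
from transformers import pipeline

generator = pipeline("text-generation", model="facebook/opt-125m")

prompt = "Large language models are"
result = generator(prompt, max_new_tokens=40, do_sample=True)
print(result[0]["generated_text"])
```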

I want to look at these different versions of “openness” through the lens of the scientific method. (And I’m aware that this research really is a matter of engineering, not science.)  Very generally speaking, we ask three things of any new scientific advance:

  • It can reproduce past results. It’s not clear what this criterion means in this context; we don’t want an AI to reproduce the poems of Keats, for example. We would want a newer model to perform at least as well as an older model.
  • It can predict future phenomena. I interpret this as being able to produce new texts that are (as a minimum) convincing and readable. It’s clear that many AI models can accomplish this.
  • It is reproducible. Someone else can do the same experiment and get the same result. Cold fusion fails this test badly. What about large language models?

Because of their scale, large language models have a significant problem with reproducibility. You can download the source code for Facebook’s OPT-175B, but you won’t be able to train it yourself on any hardware you have access to. It’s too large even for universities and other research institutions. You still have to take Facebook’s word that it does what it says it does. 

This isn’t just a problem for AI. One of our authors from the 90s went from grad school to a professorship at Harvard, where he researched large-scale distributed computing. A few years after getting tenure, he left Harvard to join Google Research. Shortly after arriving at Google, he blogged that he was “working on problems that are orders of magnitude larger and more interesting than I can work on at any university.” That raises an important question: what can academic research mean when it can’t scale to the size of industrial processes? Who will have the ability to replicate research results on that scale? This isn’t just a problem for computer science; many recent experiments in high-energy physics require energies that can only be reached at the Large Hadron Collider (LHC). Do we trust results if there’s only one laboratory in the world where they can be reproduced?

That's exactly the problem we have with large language models. OPT-175B can't be reproduced at Harvard or MIT. It probably can't even be reproduced by Google or OpenAI, even though they have sufficient computing resources. I would bet that OPT-175B is too closely tied to Facebook's infrastructure (including custom hardware) to be reproduced on Google's infrastructure. I would bet the same is true of LaMDA, GPT-3, and other very large models, if you take them out of the environment in which they were built. If Google released the source code to LaMDA, Facebook would have trouble running it on its infrastructure. The same is true for GPT-3.

So: what can “reproducibility” mean in a world where the infrastructure needed to reproduce important experiments can’t be reproduced?  The answer is to provide free access to outside researchers and early adopters, so they can ask their own questions and see the wide range of results. Because these models can only run on the infrastructure where they’re built, this access will have to be via public APIs.
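To make that concrete, here is a minimal sketch of what API-mediated access looks like in practice. It assumes OpenAI's hosted completions endpoint and an API key in an environment variable; the endpoint, model name, and parameters are illustrative, not something the article specifies.

```python
# A minimal sketch of querying a model that only runs on its builder's
# infrastructure, via a public API. Assumes OpenAI's /v1/completions endpoint
# and an API key in the OPENAI_API_KEY environment variable; the model name
# and parameters are illustrative.
import os
import requests

response = requests.post(
    "https://api.openai.com/v1/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "text-davinci-002",
        "prompt": "Summarize the case for open access to large language models.",
        "max_tokens": 100,
    },
    timeout=30,
)
response.raise_for_status()
print(response.json()["choices"][0]["text"])
```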

There are lots of impressive examples of text produced by large language models. LaMDA’s are the best I’ve seen. But we also know that, for the most part, these examples are heavily cherry-picked. And there are many examples of failures, which are certainly also cherry-picked.  I’d argue that, if we want to build safe, usable systems, paying attention to the failures (cherry-picked or not) is more important than applauding the successes. Whether it’s sentient or not, we care more about a self-driving car crashing than about it navigating the streets of San Francisco safely at rush hour. That’s not just our (sentient) propensity for drama;  if you’re involved in the accident, one crash can ruin your day. If a natural language model has been trained not to produce racist output (and that’s still very much a research topic), its failures are more important than its successes. 
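One rough way to act on that priority is to probe a model with a battery of adversarial prompts and record only the failures. The sketch below is mine, not something the article describes; generate and looks_unsafe are hypothetical stand-ins for a model call and a safety classifier.

```python
# A hypothetical sketch of treating failures as the primary signal when
# probing a model: run adversarial prompts and keep every flagged completion.
# `generate` and `looks_unsafe` are stand-ins, not real library functions.
from typing import Callable, List, Tuple

def failure_report(prompts: List[str],
                   generate: Callable[[str], str],
                   looks_unsafe: Callable[[str], bool]) -> List[Tuple[str, str]]:
    """Return (prompt, completion) pairs that the safety check flags."""
    failures = []
    for prompt in prompts:
        completion = generate(prompt)
        if looks_unsafe(completion):
            failures.append((prompt, completion))
    return failures
```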

With that in mind, OpenAI has done well by allowing others to use GPT-3–initially through a limited free trial program, and now as a commercial product that customers access through APIs. While we may be legitimately concerned by GPT-3's ability to generate pitches for conspiracy theories (or just plain marketing), at least we know what those risks are. For all the useful output that GPT-3 creates (whether deceptive or not), we've also seen its errors. Nobody's claiming that GPT-3 is sentient; we understand that its output is a function of its input, and that if you steer it in a certain direction, that's the direction it takes. When GitHub Copilot (built from OpenAI Codex, which itself is built from GPT-3) was first released, I saw lots of speculation that it would cause programmers to lose their jobs. Now that we've seen Copilot, we understand that it's a useful tool within its limitations, and discussions of job loss have dried up.

Google hasn't offered that kind of visibility for LaMDA. It's irrelevant whether the company is concerned about intellectual property, liability for misuse, or inflaming public fear of AI. Without public experimentation with LaMDA, our attitudes towards its output–whether fearful or ecstatic–are based at least as much on fantasy as on reality. Whether or not appropriate safeguards are in place, research done in the open and the ability to play with (and even build products from) systems like GPT-3 have made us aware of the consequences of "deep fakes." Those are realistic fears and concerns. With LaMDA, we can't have realistic fears and concerns. We can only have imaginary ones–which are inevitably worse. In an area where reproducibility and experimentation are limited, allowing outsiders to experiment may be the best we can do.


