Tracking the AI Apocalypse – Politico

It’s time, readers, for us to answer the Big Question.

No, not “how does the blockchain actually work, anyway” — or “what would I actually do in the metaverse” — or even “wen Lambo?”

I’m talking about the Big Question: Will artificial intelligence kill us all?

Well, that might sound a little alarming. But there is, in fact, a dedicated and passionate group of very smart people working right now to answer that question — mapping the risks of “artificial general intelligence,” a term for an AI-powered system that matches human cognitive capabilities. What happens, these forward-thinking theorists ask, when such a system uses those capabilities to defy or frustrate its human creators?

This isn’t exactly one of the big AI policy questions on the agenda right now — when regulators think about AI, they tend to think about the fairness of algorithms, the misuse of personal data, or how the technology could disrupt existing sectors like education or law. But when this one lands on the radar, it’s going to be the biggest.

“The worry is that it is in the nature of [a generally intelligent AI] to have a wildly different set of moral codes (things it should or shouldn’t do) and a vastly increased set of capabilities, which can lead to very disastrous results,” writes venture capitalist and tech blogger Rohit Krishnan in a blog post last month that explores the idea in detail.

But there’s one big problem for anyone seriously trying to tackle this: How likely is such a thing, really? That question is central to figuring out how much we should worry about what AI might become, and how quickly we should act to shape it. (For the record, some very smart people are giving these questions very serious thought.)

Krishnan’s post is now getting attention because he developed a framework of sorts to answer this question. His formula for estimating existential AI risk, which he calls the “strange loop equation,” is modeled on the Drake equation, which in the 1960s provided a way to estimate another hard-to-guess number: how many detectable, communicative alien civilizations might exist in our galaxy. Krishnan’s version strings together a series of risk conditions to estimate the likelihood of a hostile AI.
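For reference, the Drake equation expresses its estimate as a straight product of factors, the same multiply-the-conditions structure Krishnan borrows:

```latex
% Drake equation (1961): N = expected number of detectable,
% communicative civilizations in the Milky Way
N = R_{*} \cdot f_p \cdot n_e \cdot f_l \cdot f_i \cdot f_c \cdot L
```

Here R_* is the galaxy’s rate of star formation, f_p the fraction of stars with planets, n_e the number of potentially habitable planets per such system, f_l the fraction of those where life emerges, f_i the fraction of those that develop intelligence, f_c the fraction that release detectable signals, and L the average lifetime of a signaling civilization. Every uncertain factor multiplies into the final number, which is what makes the result easy to compute and hard to trust.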

I spoke with Krishnan about the post today, and he confirmed that he himself isn’t freaking out — in fact, he’s skeptical of the idea that a runaway AI is a likely harbinger of doom. “Like most technological advances, we are likely to see incremental progress, and with each step change we have to work a little bit on how to do it smartly and safely,” he said.

Based on his own assessments of the likelihood of the various conditions that could lead to a hostile AI — such as how quickly it develops, or its ability to deceive — Krishnan puts the chance that a power-seeking AI will kill or enslave us all at 0.0144 percent.
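To make the mechanics concrete, here is a minimal sketch of that kind of Drake-style calculation in Python. The condition names and probabilities below are hypothetical placeholders, not Krishnan’s actual terms or figures; the point is the structure, in which the overall risk is the product of a chain of conditional probabilities:

```python
# Illustrative Drake-style risk chain. Every condition and number here is
# a hypothetical placeholder, not a term from Krishnan's actual equation.
conditions = {
    "AGI gets built at all": 0.5,
    "it rapidly gains capabilities": 0.3,
    "its goals conflict with ours": 0.2,
    "it can deceive its overseers": 0.1,
    "containment and oversight fail": 0.05,
}

# Multiply the conditional probabilities together, Drake-style.
risk = 1.0
for condition, probability in conditions.items():
    risk *= probability

print(f"Overall probability: {risk:.6f} ({risk * 100:.4f}%)")
# Prints 0.000150 (0.0150%) -- the same order of magnitude as Krishnan's
# 0.0144 percent, showing how a chain of modest probabilities multiplies
# down to a very small overall estimate.
```

Nudge any one of the inputs and the final product moves dramatically, which is exactly why Krishnan warns against taking the exact output too seriously.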

That makes him more optimistic than some others who have tried their own version of the exercise, such as the prediction market Metaculus (34 percent) or a recent study by researcher Joseph Carlsmith (about five percent), Krishnan points out. (“Keep in mind that chained equations like Drake’s are great for thinking things through, not for producing exact probability numbers,” he added.)

So, no sweat, right? Not quite: Despite the serious time and thought given to the problem, Krishnan cautions that whatever threat eventually emerges will likely bear little resemblance to today’s speculation. “I basically don’t think we can make any credible engineering statements about how to safely align AI, under the assumption that it is a relatively autonomous, intelligent, capable entity,” he wrote in conclusion.

“Let’s assume that at some point in the future we will be able to create systems with high levels of agency, giving them curiosity, the ability to act in the world, and the qualities we possess as independent intelligent entities in that world,” he told me today.

“It wouldn’t really be possible to control them, because it’s very strange to create something that has the powers of a normal human being while, at the same time, it can only do what you’ve asked it to do. We find it very difficult to do that with anything remotely intelligent in our daily lives; and as a result, the only way out I can see is to try to embed our values in them.”

So how do we define those values, and what role can non-engineers in government and elsewhere play in preventing an AI catastrophe? Krishnan is cautious there, too, saying it’s basically an engineering problem that will have to be solved iteratively, as problems arise.

“I am reasonably skeptical of what governments can actually do here, if only because we are talking about things at the cutting edge of not just technology but, in some respects, anthropology — discovering the science of the life and behavior of what is effectively a new intelligent entity,” Krishnan said. “I think some of the things governments might do is start making treaties with each other along the lines of what we did with nuclear weapons… [and] ensure that the supply chain remains relatively strong, to keep humanity at the forefront of the development of artificial intelligence.”

As the list of disruptive roles AI might play keeps growing, here’s an even weirder one: What about the therapist?

Rob Morris, co-founder of the tech-focused nonprofit mental health organization Koko, tweeted about a recent experiment his company ran: it gave Koko users, who send and answer messages about mental health issues through various apps, the option to answer those queries with the help of GPT-3. Morris said that about 30,000 messages were answered with the AI’s help overall, and that “messages composed by AI (and supervised by humans) were rated much higher than those written by humans themselves.”

However… people didn’t like it, it seems, once they took a second to think about the implications. “Once people learned the messages were co-generated by a machine, it didn’t work. Simulated empathy feels weird and empty,” Morris tweeted, adding that the implications are still poorly understood: Will people someday turn to machines for this kind of support instead of friends and family?

After the uproar on Twitter, Morris fought to dispel the notion that users had somehow been duped, explaining in a tweet that GPT-3 was used as a tool by human responders and that the feature was announced to all users of the service. (He sent me a screenshot showing that when users received a GPT-3-assisted response, it included a note saying it was “written in collaboration with Koko Bot.”) Either way, it’s a powerful example of how sensitive we are to AI applications when it comes to how we experience them.

A recurring topic in this newsletter is how often US regulators lag behind their EU counterparts when it comes to new technology.

There is one area, however, where that has definitely not been the case: TikTok. Politico’s Nicholas Vinocur, Clothilde Goujard, Océane Herrero, and Louis Westendarp have a report today on how European countries are grappling with US wariness toward the Chinese-owned app, after US officials moved to ban the app from government officials’ phones over surveillance concerns.

“In light of the privacy and security risks posed by the app and its far-reaching access rights, I consider the ban on TikTok on the work phones of US government officials appropriate,” the digital policy spokesperson for Germany’s liberal FDP party told our European colleagues, adding that Germany “should also examine corresponding steps.” And French President Emmanuel Macron is getting on board with a tougher line, too, telling a group of American investors and French tech executives that he wants to regulate the company.