Experts issue a dire warning about AI and urge that limits be imposed
Date: 2025-04-14
A statement from hundreds of tech leaders carries a stark warning: artificial intelligence (AI) poses an existential threat to humanity. The 22-word statement reads, "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
Among the tech leaders, CEOs and scientists who signed the statement that was issued Tuesday is Scott Niekum, an associate professor who heads the Safe, Confident, and Aligned Learning + Robotics (SCALAR) lab at the University of Massachusetts Amherst.
Niekum tells NPR's Leila Fadel on Morning Edition that AI has progressed so fast that the threats are still uncalculated, from near-term impacts on minority populations to longer-term catastrophic outcomes. "We really need to be ready to deal with those problems," Niekum said.
This interview has been edited for length and clarity.
Interview Highlights
Does AI, if left unregulated, spell the end of civilization?
"We don't really know how to accurately communicate to AI systems what we want them to do. So imagine I want to teach a robot how to jump. So I say, "Hey, I'm going to give you a reward for every inch you get off the ground." Maybe the robot decides just to go grab a ladder and climb up it and it's accomplished the goal I set out for it. But in a way that's very different from what I wanted it to do. And that maybe has side effects on the world. Maybe it's scratched something with the ladder. Maybe I didn't want it touching the ladder in the first place. And if you swap out a ladder and a robot for self-driving cars or AI weapon systems or other things, that may take our statements very literally and do things very different from what we wanted.
Why would scientists have unleashed AI without considering the consequences?
There are huge upsides to AI if we can control it. But one of the reasons that we put the statement out is that we feel like the study of safety and regulation of AI and mitigation of the harms, both short-term and long-term, has been understudied compared to the huge gain of capabilities that we've seen...And we need time to catch up and resources to do so.
What are some of the harms already experienced because of AI technology?
A lot of them, unfortunately, as many things do, fall with a higher burden on minority populations. So, for example, facial recognition systems work more poorly on Black people and have led to false arrests. Misinformation has gotten amplified by these systems...But it's a spectrum. And as these systems become more and more capable, the types of risks and the levels of those risks almost certainly are going to continue to increase.
AI is such a broad term. What kind of technology are we talking about?
AI is not just any one thing. It's really a set of technologies that allow us to get computers to do things for us, often by learning from data. This can be things as simple as doing elevator scheduling in a more efficient way, or deciding which of two ambulances to dispatch based on a bunch of data we have about the current state of affairs in the city or of the patients.
It can go all the way to the other end of having extremely general agents. So something like ChatGPT where it operates in the domain of language where you can do so many different things. You can write a short story for somebody, you can give them medical advice. You can generate code that could be used to hack and bring up some of these dangers. And what many companies are interested in building is something called AGI, artificial general intelligence, which colloquially, essentially means that it's an AI system that can do most or all of the tasks that a human can do at least at a human level.