
Will the Evolution of AI lead to an Existential Catastrophe?


Artificial intelligence (AI) could cause an "existential catastrophe" for the human race, a scientist has warned.

Roman Yampolskiy, an associate professor of computer engineering and science at the University of Louisville's Speed School of Engineering, has conducted an extensive review of the relevant scientific literature and asserts that he has found no evidence that AI can be controlled. Even if some limited controls are introduced, he argues, they will probably be inadequate.

As a result, the scientist contends that AI should not be developed without such evidence. Even though AI may be one of the most significant problems facing humanity, the technology remains poorly understood, poorly defined, and poorly researched, according to Yampolskiy, who is an AI safety expert.

The scientist's upcoming book, AI: Unexplainable, Unpredictable, Uncontrollable, explores the ways AI could dramatically restructure society, and not always to our benefit.

"We are facing an almost guaranteed event with the potential to cause an existential catastrophe. No wonder many consider this the most important problem humanity has ever faced. The outcome could be prosperity or extinction, and the fate of the world hangs in the balance," Yampolskiy said in a media release.

He added: "Why do so many researchers assume that the AI control problem is solvable? To the best of our knowledge, there is no evidence for that, no proof. Before embarking on a quest to build a controlled AI, it is important to show that the problem is solvable. This, combined with statistics that show the development of AI superintelligence is an almost guaranteed event, suggests we should be supporting a significant AI safety effort."

Yampolskiy argues that our ability to produce intelligent software far outstrips our ability to control or even verify it. Based on his extensive review of the literature, he concludes that advanced AI systems can never be fully controllable and will always present some level of risk, regardless of the benefits they provide.

In his view, the AI community should focus on minimizing such risks while maximizing potential benefits.

"If we grow accustomed to accepting AI's answers without an explanation, essentially treating it as an oracle system, we would not be able to tell if it begins providing wrong or manipulative answers," Yampolskiy said.

As AI systems become more capable, their autonomy increases while our control over them diminishes, creating potential safety risks.

"Less intelligent agents (people) can't permanently control more intelligent agents (artificial superintelligences). This is not because we may fail to find a safe design for superintelligence in the vast space of all possible designs; it is because no such design is possible, it doesn't exist. Superintelligence is not rebelling, it is uncontrollable to begin with," Yampolskiy said.

Humanity, he added, faces a choice: do we become like babies, taken care of but not in control, or do we reject having a helpful guardian but remain in charge and free?

According to Yampolskiy, one possible way to mitigate the risks would be to sacrifice some of AI's capabilities in return for partial control. He also proposes that AI systems should be modifiable, with "undo" options that are transparent and easy to understand in human language.

In addition, the scientist said limited moratoriums, and even partial bans on certain AI technologies, should be considered, and he called for increased effort and funding for AI safety research.

"We may not reach 100% safe AI, but we can make AI safer in proportion to our efforts, which is a lot better than doing nothing. We need to use this opportunity wisely," he said.

Yampolskiy's work serves as a critical guidepost, reminding us that the path toward a positive coexistence with AI demands knowledge, vigilance, and a firm commitment to ethical principles.

The full research was published by the Taylor & Francis Group.
