
Broadening The Scope of Artificial Intelligence (AI)


Jeremy Brito, London

Artificial Intelligence (AI) has become so ingrained in our daily lives that most of us can simply ask a smart assistant such as Amazon’s Alexa or Google Home to play our favourite song, tell us the weather for the day or set an alarm reminding us of an upcoming meeting. As useful as these smart assistants have become, they fall under the category of ‘narrow AI’, recognising only a limited set of commands. If you were to deviate from these very specific requests, the system would fail and therefore not perform in the ‘smart’ way we would expect.

Beta testing of Tesla’s Autopilot system is a well-known example of an AI failure with unfortunate consequences. Tesla test drivers believed the car’s Autopilot system was fully autonomous, and as a result did not monitor road conditions or keep their hands on the steering wheel during testing. The Autopilot system failed to detect stationary objects and lacked the ability to analyse driver engagement and adapt to various road conditions, leading to several tragic vehicle accidents.

According to the National Highway Traffic Safety Administration (NHTSA), 94% of serious crashes are due to human error. So how can we ensure an AI system becomes truly intelligent whilst minimising errors and system failures? For starters, the AI system should incorporate a real understanding of how our human senses function. Human beings use language, sight, sound and perception to learn and adapt to new and unexpected situations and conditions.

Returning to the self-driving car example, technology company Waymo (formerly a part of Google) created a fully autonomous car that needs no driver. The company focuses on the full driving experience and takes into account the visual and audio cues humans would typically pick up on while operating a vehicle. Waymo uses real-world vehicles fitted with cameras to capture sensory data from over 20 million miles of public roads. This sensory data is then used to map out a simulated environment, enabling its AI system to learn from and review various driver viewpoints, situations, objects and human gestures. The system is then able to make predictions in real time, taking into account everything from crosswalks and lane markers to traffic lights and stop signs. Therefore, if a pedestrian were to suddenly step out onto the road, the vehicle would act accordingly by braking to prevent a collision. Waymo’s deep learning of human senses and rigorous testing have created a vehicle that essentially removes the potential for human error and places emphasis on complete safety.
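To make the idea of a real-time braking decision concrete, here is a deliberately simplified sketch in Python. It is not Waymo’s system or anything like it: every name, class and threshold below is an illustrative assumption, and the only real-world piece is the standard stopping-distance estimate (reaction distance plus v²/2a).

```python
# Hypothetical, highly simplified sketch of a braking decision:
# given detected objects and their distances, decide whether to brake.
# All names and thresholds are illustrative assumptions, not a real API.
from dataclasses import dataclass

@dataclass
class DetectedObject:
    kind: str          # e.g. "pedestrian", "stop_sign", "lane_marker"
    distance_m: float  # estimated distance from the vehicle in metres

def should_brake(objects, speed_mps, reaction_time_s=1.0, decel_mps2=6.0):
    """Return True if any hazard lies within the estimated stopping distance."""
    # Stopping distance = distance covered during the reaction time
    # plus the braking distance v^2 / (2a), a basic kinematics estimate.
    stopping_distance = (speed_mps * reaction_time_s
                         + speed_mps ** 2 / (2 * decel_mps2))
    hazards = {"pedestrian", "stop_sign", "stopped_vehicle"}
    return any(obj.kind in hazards and obj.distance_m <= stopping_distance
               for obj in objects)

# Example: a pedestrian steps out 20 m ahead at ~47 km/h (13 m/s).
scene = [DetectedObject("pedestrian", 20.0), DetectedObject("lane_marker", 5.0)]
print(should_brake(scene, speed_mps=13.0))  # True: stopping distance ~27 m > 20 m
```

The real problem, of course, is everything this sketch takes as given: detecting and classifying objects, estimating distances and intent, and doing all of it reliably in milliseconds, which is exactly where the millions of miles of sensory data come in.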

With these advancements in technology, AI will soon become a part of our real-world decision making. However, we need to consider the ethical impact of broadening the scope of AI. How do AI systems account for human values and consciousness? We can continue to create deep learning simulations but human intelligence will still need to be a factor in ensuring things like intuition, inclusivity and diversity are integrated into the AI decision making process.
