Is Artificial Intelligence an existential threat to humanity?


The term “artificial intelligence” or “AI” was coined at the 1956 Dartmouth conference. The generally accepted definition is the Turing test, first proposed in 1950: the ability of a machine, communicating in natural language over a teletype, to fool a person into believing it is human. “AGI” or “artificial general intelligence” extends this idea to require machines to do everything that humans can do, such as understand images, navigate a robot, recognize and respond appropriately to facial expressions, distinguish music genres, and so on.



1. Human-Like Processing Is Still Hard

While artificial intelligence is quickly reaching formidable computing speeds, one of its biggest challenges is developing the kind of complex processing that looks so easy when humans do it.

For a computer, multiplying two 10-digit numbers instantly is easy; however, looking at an animal and deciding whether it’s a cat or a dog is incredibly hard. This is why mathematician Alan Turing devised the Turing test: to measure a machine’s ability to exhibit intelligent behavior equivalent to a human’s.


2. Self-Driving Cars

We’re already seeing the beginnings of self-driving cars, though the vehicles are currently required to have a driver present at the wheel for safety. Despite these exciting developments, the technology isn’t perfect yet, and it will take a while for public acceptance to bring automated cars into widespread use. Google began testing a self-driving car in 2012, and since then the U.S. Department of Transportation has released definitions of different levels of automation, with Google’s car classified one level down from full automation. Other transportation methods, such as buses and trains, are closer to full automation.

3. It Will Help Solve Climate Change

Solving climate change might seem like a tall order for a robot, but as Stuart Russell explains, machines have access to more data than any one person ever could, storing a mind-boggling number of statistics. Using big data, AI could one day identify trends and use that information to come up with solutions to the world’s biggest problems.


4. Difficult Exploration:

Artificial intelligence and the science of robotics can be put to use in mining and other fuel-exploration processes. These complex machines can also be used to explore the ocean floor, overcoming human limitations. Thanks to their programming, robots can take on laborious, demanding work reliably, and they do not wear out easily.

5. Digital Assistants:

Highly advanced organizations use ‘avatars’, digital assistants that can interact directly with users, reducing the need for human staff. For artificial thinkers, the emotions that get in the way of rational human thinking are no distraction at all. The complete absence of an emotional side lets machines reason logically and make the right programmed decisions. Emotions are associated with moods that can cloud judgment and reduce human efficiency; this problem is ruled out entirely for machine intelligence.

6. Cyborg Technology

One of the main limitations of being human is simply our own bodies—and brains. Researcher Shimon Whiteson thinks that in the future, we will be able to augment ourselves with computers and enhance many of our own natural abilities. Though many of these possible cyborg enhancements would be added for convenience, others might serve a more practical purpose. Yoky Matsuoka of Nest believes that AI will become useful for people with amputated limbs, as the brain will be able to communicate with a robotic limb to give the patient more control. This kind of cyborg technology would significantly reduce the limitations that amputees deal with on a daily basis.

7. We will be able to make better machines for destruction

AI is most controversial when it comes to its military applications. Battlefield robots and drones are key priorities, alongside other sci-fi-style technology like HAL, a suit that gives the wearer the power of 10 men, and Boston Dynamics’ Petman, an anthropomorphic robot used to test chemical-protection clothing. Some believe developments like these signify an unacknowledged arms race among nations investing heavily in AI.

Among the tech companies that seek to ace the Turing Test, Google has emerged as a major AI force, investing hundreds of millions of dollars in AI startup DeepMind and robotics companies like Boston Dynamics. Google’s inventions include self-driving cars and ladder-climbing humanoid robots, complete with freaky, robotic pets.



One question strikes us directly: will AI harm us in the future? It is genuinely hard to say. There is a big difference between the present dangers of AI and the future dangers of any possible AI. The present dangers can be found just by reading the newspaper: some unemployment, some risk of an accident in a self-driving car, some privacy violations, a few industrial accidents involving robots.

The future dangers are more open-ended. AI is growing at an exponential rate and is thought to be able to surpass human intelligence in the decades ahead.

Nothing in this universe comes with only one side. Every system, especially a man-made one, is double-edged: a boon on one side and a curse on the other, and which side we see depends entirely on the purpose it is put to. The knife in your kitchen exists mainly to cut edible things, yet it can also be a deadly threat to someone.

Do most AI developers or experts agree that AI poses an existential threat to humanity?

Answering this directly, there have been a few surveys of AI developers that you may be interested in:

This survey from 2014 collected the views of researchers from major AI conferences (PT-AI, EETN, AGI) and of the top 100 researchers by citation count. There were around 170 responses. The survey found that many researchers expected AI to be achieved by 2100. Check out this graph from that survey:

(graph omitted; image source: Quora)

HLMI = “High-Level Machine Intelligence”, a weaker notion than superintelligence. Over 70% of the researchers surveyed had 90% confidence in the emergence of HLMI by 2100; at 50% confidence, the share of researchers surges to 90%. So many AI researchers think it is probable within the next century or so, but on the other hand, they don’t believe it is guaranteed.


I will argue this point by asking only one question: what will you do when you are bitten by a snake? The answer is simple: you will seek immediate medical help and take antivenom. And what is antivenom? It is itself derived from the very venom it counters. To neutralize one poison, we use another. Hence, if we are confident that we will face a great threat in the future from the growing influence of AI, only one statement fits: “TO KEEP AI SAFE, USE AI.” Restricting advancement in AI would therefore prove more costly for us. “The world is not bad because bad people do bad things, but because good people are not doing good things.”

So it is better for us to push this advancement further for the benefit and safety of mankind. Indeed, many interested scientists have invested in AI, and one of them is Elon Musk.


I truly think that Elon Musk is the future of sci-tech; he is the real Tony Stark of this era. Musk has donated millions to the Future of Life Institute, and that organization is now putting the money to use by funding research to keep artificial intelligence “robust and beneficial”.