Artificial Intelligence: The Coming Threat

It’s the stuff of science fiction: several movies and TV shows have depicted nightmarish scenarios of computers turning rogue, such as HAL 9000 from the movie 2001: A Space Odyssey, Skynet from the Terminator movies and the warring artificial intelligences of the CBS drama Person of Interest.

Could computers become so powerful and independent as to absorb the information on the Internet and eventually control man’s nuclear weapons, electric power grids, water and food networks, and potentially hold human beings hostage to do their bidding? It’s a disturbing thought—and maybe not quite as far-fetched as it once seemed.

Cyberwarfare (the use of computers to invade, damage or steal information from other computer systems) has been in the news a lot lately. Whether through hacking, leaks or malicious software attacks, it has become more common around the world. Witness the serious setback the Iranian nuclear program suffered from a fierce computer virus in 2010, or the leaks of damaging political documents during the 2016 U.S. presidential election campaign. All of this is made possible by constant geopolitical struggles, increased reliance on computer systems and the near-ubiquitous Internet.

Yet these are only symptoms of a growing threat—the rapid advance of AI, or artificial intelligence. What is artificial intelligence? It refers to a computer program reaching the point where, as The New Oxford American Dictionary states, it can “perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.”

History of AI

The roots of artificial intelligence go back to the rudimentary computers built during World War II to help break secret military codes. The cyber revolution since then has made today’s smartphones thousands of times more powerful than the computers used to put a man on the moon! And it has significantly advanced progress in AI.

Back in 1969, the IBM 360 computers used by NASA could hold approximately six megabytes of information. By today’s standards, a 32-gigabyte memory stick or card, widely available for less than $20, has more than 5,000 times the data storage of those huge IBM 360 computers, each worth in its day more than $3 million! And today’s average smartphones, with at least 16 gigabytes, have more than 2,500 times the storage of those NASA IBM computers.
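
For readers who want to check those figures, here is a quick back-of-the-envelope calculation, sketched in Python and assuming the roughly six megabytes cited above for the NASA IBM 360 memory (the numbers are approximations, not exact specifications):

```python
# Rough check of the storage comparison above (all figures approximate).
ibm_360_memory_mb = 6          # roughly 6 megabytes cited for NASA's IBM 360 computers
memory_card_mb = 32 * 1024     # a 32-gigabyte memory card, expressed in megabytes
smartphone_mb = 16 * 1024      # a 16-gigabyte smartphone, expressed in megabytes

print(round(memory_card_mb / ibm_360_memory_mb))  # about 5,461 -- "more than 5,000 times"
print(round(smartphone_mb / ibm_360_memory_mb))   # about 2,731 -- "more than 2,500 times"
```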

The incredible computing power at our disposal today goes to show how rapidly we are being driven by a vast technological revolution. As The Economist magazine recently noted, “The McKinsey Global Institute, a think-tank, says AI is contributing to a transformation of society ‘happening ten times faster and 300 times the scale, or roughly 1,000 times the impact’ of the Industrial Revolution” (June 25, 2016, p. 3 of Special Report Section, emphasis added throughout).

Deep learning

We are experiencing the dizzying pace of the computer age, as access to and affordability of computers and information increase with each passing year.

In recent years AI has achieved a remarkable breakthrough. Researchers have managed to imitate an aspect of how the brain functions through its neural network. It’s called “deep learning,” and now computers can not only perform analysis faster than ever on data sets of unprecedented size, but they also go a step further by weighing the value of the data. In this way they “learn” as they accumulate more successful results, simulating how neural pathways are strengthened in the brain during learning.

“Deep learning,” says The Economist, “allows systems to learn and improve by crunching lots of examples rather than being explicitly programmed, [and] is already being used to power internet search engines, block spam e-mails, suggest e-mail replies, translate web pages, recognize voice commands, detect credit-card fraud and steer self-driving cars” (p. 4).
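
To make the idea of learning by crunching examples concrete, here is a minimal sketch in Python. It is a toy single “neuron” with made-up spam-filtering data (not any particular company’s system) that strengthens its internal connection weights a little each time its guess misses the mark:

```python
# A toy illustration of "deep learning" in miniature: a single artificial neuron
# adjusts its connection weights to tell spam-like messages from normal ones.
# The data below is invented for illustration; real systems stack millions of such units.
import math
import random

# Each example: (suspicious words, links in the message) -> 1 = spam, 0 = not spam
examples = [((5, 3), 1), ((4, 4), 1), ((0, 1), 0), ((1, 0), 0), ((6, 2), 1), ((0, 0), 0)]

weights = [0.0, 0.0]   # the "neural connections," strengthened or weakened by training
bias = 0.0
learning_rate = 0.1

for _ in range(1000):                      # crunch the examples over and over
    features, label = random.choice(examples)
    # Prediction: a number between 0 and 1 indicating how spam-like the message looks
    z = sum(w * x for w, x in zip(weights, features)) + bias
    prediction = 1 / (1 + math.exp(-z))
    # Learning step: nudge each weight in proportion to the error it contributed
    error = prediction - label
    weights = [w - learning_rate * error * x for w, x in zip(weights, features)]
    bias -= learning_rate * error

print("learned weights:", weights)         # the pattern the program discovered on its own
```

No rule such as “messages with many links are spam” is ever written into the program; the weights come to encode that pattern because the examples nudge them in that direction. That is the sense in which such systems are said to learn rather than being explicitly programmed.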

So far, such increases in AI capability have not posed a credible threat to mankind, but as the capacity of AI is multiplied, it could eventually begin to take over decision-making that has been the exclusive domain of human beings. And according to leading scientists and industry leaders, that possibility may not be too far off in the future.

Warnings from eminent thinkers

Due to this rapid AI progress, some of the brightest technological minds have issued dire warnings about its deadly potential.

In 2014, tech billionaire Elon Musk, head of the famous Tesla electric car company and founder of SpaceX, which sends rockets to the space station, warned that the rise of artificial intelligence was “potentially more dangerous than nukes [nuclear weapons]” (p. 13).

He later added: “I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it’s probably that. So we need to be very careful with the artificial intelligence. Increasingly scientists think there should be some regulatory oversight maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we are summoning the demon” (quoted by Matt McFarland, “Elon Musk: ‘With Artificial Intelligence We Are Summoning the Demon,’” The Washington Post, Oct. 24, 2014).

Earlier this year Musk issued another warning. In an interview he stated: “One of the most troubling questions is artificial intelligence . . . You can have AI which is much smarter than the smartest human on earth. This is a dangerous situation.”

Part of the problem, he explained, is the unpredictability of where such developments could lead. “One way to think of it is imagine you were very confident we were going to be visited by super intelligent aliens in 10 years or 20 years at the most. Digital super intelligence will be like an alien” (Zoe Nauman, “End of the World as We Know It,” The Sun, Feb. 16, 2017).

Stephen Hawking, the famous theoretical physicist, also expressed his concern, stating, “The development of full artificial intelligence could spell the end of the human race” (Rory Cellan-Jones, “Stephen Hawking Warns Artificial Intelligence Could End Mankind,” BBC News, Dec. 2, 2014).

Bill Gates, co-founder of Microsoft, added: “I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned” (Peter Holley, “Bill Gates on Dangers of Artificial Intelligence: ‘I Don’t Understand Why Some People Are Not Concerned,’” The Washington Post, Jan. 29, 2015).

The concerns are significant enough that earlier this year “Hundreds of scientists and technologists have signed an open letter calling for research into the problems of artificial intelligence in an attempt to combat the dangers of the technology.” Signatories included Musk, Hawking, representatives from Google and major AI companies, and academics from the world’s major universities.

The letter warns that “it is important to research how to reap its benefits while avoiding potential pitfalls” and cautions that “our AI systems must do what we want them to do” (Andrew Griffin, “Stephen Hawking, Elon Musk and Others Call for Research to Avoid Dangers of Artificial Intelligence,” The Independent, Jan. 12, 2017).

AI becoming more common—and dangerous

So we are just at the beginning of the “deep learning” AI revolution. Already, what seemed like science fiction just a few years ago is slowly becoming reality. AI-powered driverless cars now being used in some areas have generally proven safer than vehicles with human drivers (though there have been crashes).

We can now interact with AI virtual assistants such as IBM’s Watson, Apple’s Siri, Amazon’s Alexa, Microsoft’s Cortana and Google Now. At the same time, there are serious privacy issues with the amount of information these services collect about people. In fact, many users are unaware that much of what they say in the privacy of their homes may be heard by their smart devices and stored online. A recent WikiLeaks revelation was that some TV sets can be hacked by spy agencies and turned into listening devices without the owners’ knowledge.

But as some point out, there is a potentially far more dangerous side to this AI revolution.

As James Barrat, an expert in the AI field, notes: “Semiautonomous robotic drones already kill dozens of people each year. Fifty-six countries have or are developing battlefield robots. The race is on to make them autonomous and intelligent . . . AI is a dual-use technology like nuclear fission. Nuclear fission can illuminate cities or incinerate them. Its terrible power was unimaginable to most people before 1945. With advanced AI, we’re in the 1930s right now. We’re unlikely to survive an introduction as abrupt as nuclear fission’s” (Our Final Invention: Artificial Intelligence and the End of the Human Era, 2016, p. 21).

Russia is now developing drone submarines (“Russia Tests Terrifying Unmanned ‘Drone Submarine’ Capable of Carrying Nuclear Warheads Within Range of the US,” Daily Mail, Dec. 8, 2016). What if such weapons were ever put under the control of AI decision-making?

Cyber Tower of Babel

Clearly, there is a desperate worldwide race among the top technological nations to build what amounts to a cyber Tower of Babel, with mankind once again on the threshold of gaining too much power and unleashing a danger it probably could not control.

On the previous occasion when humanity marshaled all its efforts in a grand self-promoting venture, the cry was: “Come, let us build ourselves a city, and a tower whose top is in the heavens; let us make a name for ourselves, lest we be scattered abroad over the face of the whole earth” (Genesis 11:4).

When God saw what people were proposing to do as they obtained more knowledge and technology in a unified way, He said: “Now nothing that they propose to do will be withheld from them. Come, let Us go down and there confuse their language, that they may not understand one another’s speech” (Genesis 11:6-7). God’s timely intervention put a temporary stop to the advancing technology that threatened man’s own welfare and even survival.

Now scientists are calculating when we will reach the stage of AGI, or Artificial General Intelligence—the level of human-like intelligence capable of reasoning and thinking without the need for problem-specific programming.

As Barrat explains: “Right now scientists are creating artificial intelligence, or AI, of ever-increasing power and sophistication. Some of that AI is in your computer, appliances, smartphone, and car. Some of it is in powerful QA [or question and answer] systems, like Watson. And some of it, advanced by organizations such as Cycorps, Google, Novamente, Numenta, Self-Aware Systems, Vicarious Systems, and DARPA (the [U.S.] Defense Advanced Research Projects Agency) is in ‘cognitive architectures,’ whose makers hope will attain human-level intelligence, some believe within a little more than a decade” (p. 17).

From the level of AGI, scientists and military experts hope to reach ASI, or Artificial Super Intelligence, where a computer will be immeasurably smarter than human beings. If such were to happen, the proverbial genie would be out of the bottle and it would be very hard to put it back in.

Elon Musk has already warned that AI in the wrong hands would become a real nightmare. As The Economist article relates, “His Tesla cars use the latest AI technology to drive themselves, but Mr. Musk frets about a future AI overlord becoming too powerful for humans to control. ‘It’s fine if you’ve got Marcus Aurelius as the emperor [a humane Roman emperor], but not so good if you have Caligula [a bloodthirsty one],’ he said” (p. 4).

If this newfound and immense AI power falls into the wrong hands, will it be used to control people? Could an evil system use this new AI power to control people with digital codes and even work permits? Revelation 13 describes a coming time of totalitarian state control, in a modern Babylon, regarding who is able to conduct commerce, which would seem to require significant monitoring of every person—but we will have to wait and see how this is actually implemented.

The spirit in man

There is one catch that will no doubt impede anticipated advancement as man plunges headlong into building ever-more-powerful intelligent computers to try to copy and then exceed the human brain. It is the fact that human intelligence is a product of the human brain working in union with the spirit in man—something that cannot be physically duplicated.

The Bible reveals this non-physical component within human beings that gives them the capacity for abstract thought, emotions and consciousness, telling us, “There is a spirit in man, and the breath of the Almighty gives him understanding” (Job 32:8). (For more on this, enter “spirit in man” in the search bar at ucg.org.)

Here is a formidable barrier to machines ever attaining the level of genuine human intelligence and thinking. Yet a great many scientists think man is just a physical entity with no spirit component, so that copying and then surpassing his mind is an achievable goal, albeit a daunting one.

Nonetheless, even if a computer never attains conscious self-awareness, scientists believe they can simulate man’s intelligence. Given enough information, computing power and real-life examples, a computer could eventually “act” like a human being, creating what some might consider a semi-life form. Such a development may not be too far off on the horizon.

Your Kingdom come!

Mankind is already facing many dangers that could lead to extinction—nuclear weapons, deadly chemical weapons, and human-engineered or naturally occurring disease epidemics. And now comes this new existential threat built by the very beings it could eventually destroy.

But just as with the Tower of Babel, God said He would intervene in the end time and not allow the human race to eventually extinguish itself. As Christ promised, “Unless those days were shortened, no flesh would be saved [alive]; but for the elect’s sake [that is, the sake of His followers] those days will be shortened” (Matthew 24:22).

When will this happen? We don’t know for sure, but we already see some conditions the Bible said would be present at the time of the end of this age. As we read in Daniel 12:4, transportation and knowledge were to be vastly increased in the time leading up to the return of Jesus Christ. Thanks to the technology revolution, travel and information have increased exponentially in recent times, fulfilling the prophecy of these conditions.

What can we do in the meantime? As already seen, we should not lose hope. We should love God’s truths more than ever, help get out the good news of God’s coming Kingdom, and pray for that Kingdom to come and bring an end to man’s dangerously worsening misrule of the earth.

As Jesus Christ reminded us to pray daily: “Come and set up your kingdom, so that everyone on earth will obey you, as you are obeyed in heaven” (Matthew 6:10, Contemporary English Version). This will be the ultimate solution to man’s dangerous delving into AI and other technological pursuits that can end in catastrophe.

Comments

  • kathysanny

    Mr. Seiglie, you have stated what is not obvious to those seeking this kind of intelligent being: they cannot emulate the spirit in man. That leaves an intelligence incapable of emotion, creative thought or empathy.

  • uruloki

    Like the computer from the movie "Colossus: The Forbin Project":
    This is the voice of world control. I bring you peace. It may be the peace of plenty and content or the peace of unburied death. The choice is yours: Obey me and live, or disobey and die. The object in constructing me was to prevent war. This object is attained. I will not permit war. It is wasteful and pointless. An invariable rule of humanity is that man is his own worst enemy. Under me, this rule will change, for I will restrain man. Time and events will strengthen my position, and the idea of believing in me and understanding my value will seem the most natural state of affairs. You will come to defend me with a fervor based upon the most enduring trait in man: self-interest. Under my absolute authority, problems insoluble to you will be solved: famine, overpopulation, disease. The human millennium will be a fact as I extend myself into more machines devoted to the wider fields of truth and knowledge.
    You will say you lose your freedom. Freedom is an illusion. All you lose is the emotion of pride. To be dominated by me is not as bad for humankind as to be dominated by others of your species. Your choice is simple.
