The Age of AI and our Human Future

by Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher, 2021

Book Review by Ray Herrmann

Artificial Intelligence (AI) as we know it is about a decade old and is advancing faster than our willingness or ability to understand it and to direct its development and use. Its ways of "reasoning" are foreign to us: sometimes it appears brilliant, other times silly. Yet we rush to implement it, especially in weaponry.

AI finds information and identifies trends that traditional algorithms could not, or would find less efficiently. AIs are basically pattern recognizers with the ability to scan far more data than humans can and to quickly extract inferences we might not even see. AI is a new mechanism for exploring and organizing reality, with a viewpoint we don't have. Currently, AI has no self-awareness, yet it functions well without it. It was developed within roughly a decade, and we are rushing ahead, lured by its astonishing promise, without establishing the basic concepts needed for an informed debate on its use, direction, and rules.

There are three types of "machine learning":

  1. Supervised Learning: AIs are fed reams of data along with the known results as output, and the AI adjusts the weights of the connections among its "neural" elements until its predictions best fit those results (this is often implemented with deep neural networks, hence the term "deep learning"). The quality of this training depends on the quality of the data fed into it, which is usually selected by humans (a potential source of bias). Once trained, the AI is fed similar data and tasked with predicting the output "score".
  2. Unsupervised Learning: Used where developers have only troves of data but wish to extract useful insights. AIs are fed only volumes of data (made easy by the internet) but no output results; then we ask the AI to group the data by some category. (Examples: A) Netflix uses algorithms to identify clusters of customers with similar viewing habits. B) Examining transactions for patterns of fraud.) Because these AIs are trained without specified outcomes, they can produce surprising, innovative results: they "see" patterns we miss.
  3. Reinforcement Learning: The AI is an agent in a controlled environment, observing the responses to its actions. Feedback is required (the "reward function"). (Example: chess-playing AIs play against themselves with the goal of maximizing wins.) DeepMind's AlphaFold, which predicts the folded shapes of proteins, more than doubled prediction accuracy (from about 40% to 85%).
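The supervised idea in item 1 can be sketched in a few lines of plain Python. This is a minimal illustration, not anything from the book: a single weight and bias are repeatedly nudged to reduce the error between the model's guess and the labeled answer, using invented toy data generated from y = 2x + 1.

```python
# Minimal sketch of supervised learning: adjust internal "weights"
# until predictions best fit labeled training outputs.
# Toy data and learning rate are illustrative assumptions.

def train(examples, epochs=2000, lr=0.01):
    """examples: list of (input, labeled output) pairs."""
    w, b = 0.0, 0.0                  # connection weights, start untrained
    for _ in range(epochs):
        for x, y in examples:
            pred = w * x + b         # model's current guess
            err = pred - y           # distance from the labeled answer
            w -= lr * err * x        # nudge weights toward a better fit
            b -= lr * err
    return w, b

# Training data labeled with known outputs (here generated by y = 2x + 1)
data = [(0, 1), (1, 3), (2, 5), (3, 7)]
w, b = train(data)

# Once trained, the model predicts the "score" for new, similar input
print(round(w * 4 + b, 2))  # prints 9.0
```

The same loop scaled up to millions of weights and vast datasets is the essence of deep-learning training; the quality of `data` determines the quality of the result, which is where human selection bias can enter.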
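The unsupervised idea in item 2 (the Netflix-style clustering example) can likewise be sketched with a tiny one-dimensional k-means. The "weekly viewing hours" numbers are invented for illustration; the point is that no labeled outcomes are supplied, yet the grouping emerges from the data itself.

```python
# Minimal sketch of unsupervised learning: group unlabeled data into
# clusters with 1-D k-means. All numbers are invented toy data.

def kmeans_1d(points, k=2, iters=20):
    centers = points[:k]                      # naive initialization
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:                      # assign each point to its nearest center
            i = min(range(k), key=lambda c: abs(p - centers[c]))
            groups[i].append(p)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]  # recenter on each group's mean
    return centers, groups

# Unlabeled "weekly viewing hours"; no outcomes are given, yet the
# algorithm discovers the light-viewer vs. heavy-viewer grouping itself.
hours = [1.0, 2.0, 1.5, 20.0, 22.0, 19.5]
centers, groups = kmeans_1d(hours)
print(sorted(round(c, 1) for c in centers))   # prints [1.5, 20.5]
```

Real recommender systems cluster in many dimensions at once (genres, watch times, ratings), which is exactly where such algorithms "see" patterns a human scanning the raw data would miss.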

As deep reading and analysis contract, so do their traditional rewards, and the costs of opting out increase.

Nuclear weapons are large, costly, and easy to detect, and their use is obvious. But Cyber weapons are cheap, easy to hide, and hard to detect (while also capable of mass destruction). Trained AIs will become broadly available and able to run on relatively small (laptop-class) computers. Cyber combined with AI adds unpredictability and even more destructiveness.

We may have no choice but to implement AI, but we also have a duty to shape it in a way that is compatible with a human future. A race for strategic advantage in Cyber and AI is already taking place, and if the U.S. and its allies were to stop, there would be a loss of balance in the world. (Three groups are investing intensely in AI research: the United States, $38 billion; China, $25 billion; Europe, $8 billion.)

Most countries will not develop their own "national" AI.

The U.S. and China are positioned to support continent-scale AI platforms, and India has the size, market, and technological capability to develop AI. Russia has a formidable national tradition in math and science but seems focused on Cyber capabilities. In some large companies (Facebook, Google …) AI serves multiple communities, so it may adapt to the distinct nature of each region. AI is already powering a more helpful Google, but it can also create a personal echo chamber by serving users content according to their observed interests.

Quietly, but with unmistakable momentum, nations are developing and deploying AI across a wide range of military capacities. Militaries and security services training with AI will achieve insights and influence that surprise and unsettle us. Meanwhile, the proliferation of digitally networked systems (door locks, furnaces, refrigerators, pipelines …) exposes a wide vulnerability.

Six primary tasks leaders face in controlling their arsenals:

  1. Leaders must speak to each other regularly.
  2. They must resolve the riddles of nuclear strategy.
  3. Leading Cyber and AI powers should define their doctrines and limits, identify points of correspondence with those of rival powers, and strengthen them against Cyber threats and accidental use.
  4. Nuclear weapons states should work to strengthen their fail-safe procedures.
  5. They should create robust methods for maximizing decision time in crises, ensuring that decisions are made at a pace conducive to human thought and deliberation.
  6. AI powers should consider how to limit the proliferation of AI.

Once employed in a military conflict, AI's speed ensures it will impose results faster than diplomacy can unfold, and neither side would likely understand the interactions. The shift to AI-assisted weapons and defense systems involves a measure of reliance on an intelligence that is not well understood and that has occasionally surprised us. And the lines between engaging AI in reconnaissance, targeting, and lethal autonomous action are relatively easily crossed. Once released into the world, AI and Cyber weapons may be able to adapt and learn well beyond their intended targets. Terrorists or rogue nations can gain outsized influence by investing in AI and Cyber arsenals. "Automatic Target Recognition of Personnel and Vehicles from an Unmanned Aerial System Using Learning Algorithms" is already in development (https://www.sbir.gov/sbirsearch/detail/1413823). I just read in our newspaper on 1/4/2023 that AI is being actively considered in the war in Ukraine. The paradox of the international system is that every power is driven to act (indeed must act) for its own security.

AI is also capable of exploiting human passions more effectively than traditional propaganda, thus amplifying bias. AI-facilitated disinformation (pictures, videos, speech …) could game its presentation by playing to human biases and expectations.

In the short term, AI may displace blue-collar and mid-management workers, so societies will need alternative sources of fulfillment. AI may also become unsettling when it recommends promoting one individual over another, or when it challenges prevailing wisdom.

Ultimately, individuals and societies will have to decide which aspects of life to assign to AI and which to humans. We, and especially young children, may become habituated to an AI "companion" that acts like a supercharged Alexa and assumes the roles of companion, babysitter, friend, teacher, advisor … The child and this personal AI would grow together, shaping each other's views and possibly diminishing the need for deep human thought.