15 Jan

Humans need fiction. Stories inspire ideas, and every invention started as a simple idea. Because we crave stories, this article will explore the history of AI juxtaposed against some of the works of fiction that may have inspired its technological advancements over the years.

Ancient Myths and Early Concepts

The concept of artificial beings and autonomous machines has existed in myths and literature for centuries. In Jewish folklore, there was the Golem—a clay figure brought to life through mystical means to serve its creator. The Greeks had tales of automatons, like Talos, a giant bronze man who protected the island of Crete. These mythical creations weren't powered by intelligence or computation but by magic or divine intervention. It makes you wonder: do we need spirituality to make sense of things until science gives us meaning? My friend believes that science is man's way of trying to hack God's mind.

1927: Metropolis and the Birth of the Robot in Film

Fast forward to 1927. The talk of the town among cinephiles is a silent movie called Metropolis, directed by Fritz Lang. This science fiction film introduces us to the Maschinenmensch, a humanoid robot created by the inventor Rotwang and given the likeness of the heroine Maria. The robot plays a central role in the movie's exploration of social and industrial themes, and its appearance has had a lasting impact on the portrayal of robots in popular culture. At this point in history, the world of computation and automation was experiencing the magic of mechanical calculators and adding machines. In robotics, humanity had developed automated machines and mechanical devices for tasks like assembly and packaging. It's like we were tinkering with the early building blocks of what would eventually become the backbone of modern technology.

1939: The Wizard of Oz and the Tin Man's Quest

It's 1939. We're on the brink of World War II, but fiction remains the best escape for troubled minds. What better distraction for an anxious soul than a classic musical fantasy? Moviegoers get to experience the Tin Man from The Wizard of Oz. Initially a woodsman, he's transformed into a mechanical man—or an automaton—by a wicked witch's curse. Though not a robot in the traditional sense, his story resonates today when the rise of AI and automation is seen by some as a "wicked witch's curse." The Tin Man's quest for a heart mirrors our own quest to humanize technology, to imbue machines with qualities that make them more like us—or perhaps, qualities we aspire to have ourselves.

The 1940s: Foundations Laid Amidst War

The true story of artificial intelligence begins in the 1940s, amidst the chaos of World War II. In 1945, Vannevar Bush published an article in The Atlantic titled "As We May Think." In it, he outlined his ideas about how technology could augment human knowledge and foresight, laying the groundwork for many future technological advancements, including the internet and personal computing. Bush introduced the concept of the "Memex," a theoretical machine that could store vast amounts of information and allow users to retrieve it with speed and flexibility—a visionary idea that foreshadowed hypertext and the World Wide Web. Think of it as the great-grandparent of your smartphone's search function.
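If you want to picture how the Memex's "associative trails" might have worked, here is a tiny, purely illustrative Python sketch. The document names and links in it are invented for this example, not anything Bush actually specified.

    # A toy sketch of the Memex's "associative trails" -- purely illustrative.
    # The document names and links below are invented for this example.
    memex = {
        "as_we_may_think": {"text": "Essay on augmenting human memory.",
                            "links": ["memex_design", "hypertext"]},
        "memex_design":    {"text": "A desk that stores and cross-references microfilm.",
                            "links": []},
        "hypertext":       {"text": "Documents connected by named links.",
                            "links": ["world_wide_web"]},
        "world_wide_web":  {"text": "A global web of linked documents.",
                            "links": []},
    }

    def follow_trail(name: str, depth: int = 0) -> None:
        """Walk outward from one record, printing the chain of associations."""
        record = memex[name]
        print("  " * depth + "- " + name + ": " + record["text"])
        for linked in record["links"]:
            follow_trail(linked, depth + 1)

    follow_trail("as_we_may_think")

Following one record to the next is all a "trail" is; the speed and flexibility of that retrieval is exactly what hypertext, and later the Web, would make real.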

1950: Alan Turing's Provocative Question

Five years later, Alan Turing, often regarded as the father of computer science, posed a provocative question that set the ball rolling: "Can machines think?" In his 1950 paper "Computing Machinery and Intelligence," Turing proposed what is now known as the Turing Test—a criterion for determining whether a machine can exhibit intelligent behavior indistinguishable from that of a human. The test involves a human judge engaging in a natural language conversation with a human and a machine, both hidden from view. If the judge cannot reliably tell which is which, the machine is said to have passed the test. It's like the ultimate game of "Guess Who?" but with far-reaching implications for our understanding of intelligence.

Unfortunately for Turing, his idea remained theoretical at the time because computers needed a big upgrade. They could execute commands but not store them, meaning they couldn't "recall" the steps—a bit like trying to bake a cake without remembering the recipe. Computing was also prohibitively expensive. Tragically, on June 7, 1954, Alan Turing died of cyanide poisoning in a suspected suicide, leaving behind a legacy that would influence generations to come. The Turing Test remains a fundamental concept in the philosophy of artificial intelligence.
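The structure of the test itself is simple enough to sketch in a few lines of Python. What follows is a toy illustration only; the respondent functions and the coin-flip judge are invented stand-ins, not a faithful reconstruction of Turing's protocol.

    import random

    # A toy sketch of the imitation game's structure -- purely illustrative.
    # The respondent functions and the coin-flip judge are invented stand-ins.

    def human_reply(prompt: str) -> str:
        return "Probably a long walk, if the weather holds."

    def machine_reply(prompt: str) -> str:
        return "Probably a long walk, if the weather holds."

    def judge(transcripts: dict) -> str:
        # The judge sees only the labels A and B and must guess which is the
        # machine. If the answers are indistinguishable, the guess is no
        # better than chance -- which is exactly the point of the test.
        return random.choice(list(transcripts))

    question = "What would you do with a free afternoon?"
    participants = {"A": human_reply, "B": machine_reply}
    transcripts = {label: reply(question) for label, reply in participants.items()}

    print("Judge guesses that " + judge(transcripts) + " is the machine.")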

1956: The Dartmouth Conference and the Birth of AI

It wasn't until 1956 that the term "Artificial Intelligence" was coined by John McCarthy during the Dartmouth Conference. But before we credit McCarthy for his significant contribution, let's introduce the three math-keteers. While McCarthy imagined a world where AI was possible, three other visionaries were already working on a theory proving that machines could think.

Herbert A. Simon, an economist and cognitive psychologist, was consulting for the RAND Corporation when he had a scientific epiphany. Watching a printing machine spit out maps using letters, digits, and punctuation as symbols, he wondered: if a machine could manipulate symbols and simulate decision-making, could it possibly attain the level of human thought? This thought process birthed the idea of automated thinking. The printing program that piqued Simon's interest had been written by Allen Newell, a scientist at RAND studying logistics and organizational theory. Newell and Simon began collaborating in 1955, aiming to teach machines to think. They developed a program that could think like a mathematician and enlisted the third math-keteer, Cliff Shaw, a programmer at RAND, to write the code. This became the Logic Theorist, considered by many to be the first artificial intelligence program.

In 1956, John McCarthy and Marvin Minsky organized the Dartmouth Summer Research Project on Artificial Intelligence—a historic conference that brought together top researchers from various fields, including Simon and Newell, who presented the Logic Theorist there. It was here that McCarthy coined the term "Artificial Intelligence." Although McCarthy considered the event a major flop because it didn't achieve all its ambitious goals, the attendees left with a renewed belief that machines could think, setting the stage for future advancements.

Late 1950s to Early 1960s: Optimism and Early Achievements

From 1956 to 1974, the field of AI blossomed. It's 1957. Sputnik I, the first artificial Earth satellite, has just been launched by the Soviet Union. Twelve years earlier, in 1945, Arthur C. Clarke had published his seminal paper proposing the concept of geostationary satellites for communication—an idea that seemed like pure science fiction at the time. The launch of Sputnik I not only embarrassed the United States but also ignited fears of Soviet technological superiority. America was not only lagging in the space race but also worried about national security. In response, the U.S. government began pouring funds into technology and innovation. The Advanced Research Projects Agency (ARPA) was created in 1958 to prevent technological surprises and to ensure that the United States would be a leader in science and technology.

The 1960s: The Dawn of Space and AI

It's the 1960s. America is in the midst of the Cold War, and the Space Race is heating up. The fervor for innovation fuels advancements in artificial intelligence. ARPA established the Information Processing Techniques Office (IPTO) in 1962, tasked with developing a resilient computer network to connect the Department of Defense's primary computers. This initiative led to the development of ARPANET in 1969, the precursor to the modern Internet. It's like the moment when the world started to weave the digital threads that now connect us all. 

AI in Popular Culture: Star Trek and 2001: A Space Odyssey

During this period, the intersection of art and science was vividly portrayed in cinema and television:

  • Star Trek (1966-1969): Created by Gene Roddenberry, this TV series introduced the world to intelligent computers and androids, such as the ship's computer and later characters like Data (who would appear in subsequent series). It explored themes of artificial intelligence, ethics, and humanity's future. Imagine flipping open your communicator (hello, flip phones!) and talking to a device that understands and responds—a concept that was pure sci-fi then but commonplace now.
  • 2001: A Space Odyssey (1968): Directed by Stanley Kubrick and based on the story by Arthur C. Clarke, this film showcased HAL 9000, an AI that serves as the shipboard computer of Discovery One, a spacecraft on a mission to Jupiter. HAL's malfunction and the drama surrounding it raised questions, still debated today, about the reliability, autonomy, and above all the ethics of creating machines that can think.


The 1970s: From Hope to the First Winter of AI

The initial optimism surrounding AI began to evaporate in the 1970s. While researchers had some success in very focused areas, they struggled to deliver on their broader goals, held back by the limitations of computer processing power, memory, and the sheer difficulty of programming general intelligence. Over time, governments and organizations began to doubt the return on investment, and the resulting budget cuts ushered in a bleak period for the field known as the "AI Winter." It looked like the AI breakthrough had hit a technological wall, and the grand visions of conscious machines were quietly hushed as researchers ran up against hard computational limits.

Artificial Intelligence in the Movies: Mirroring Existential Angst

Despite these limitations, cinema continued to grapple with the idea of AI throughout this period, often mirroring the public's anxieties about technology:

  • Westworld (1973): Michael Crichton's film depicted a futuristic theme park populated by human-like robots that cater to the guests. The robots' eventual malfunction and uprising foreshadowed some of the fears that accompany advanced AI, above all the loss of human control, fears that remain central to modern AI ethics discussions.
  • Colossus: The Forbin Project (1970): Colossus, an AI supercomputer, becomes sentient and decides that humanity cannot be trusted with its own fate. No more spoilers. You can go watch it. The film explores still-relevant concerns about AI autonomy and the ethical considerations of creating machines with too much power.


Conclusion: The Journey Continues

We imagine before we create, which is why the mind is such a powerful tool in shaping our realities. These movies did not just entertain; they also inspired scientists and engineers to turn dreams into reality. The history of AI is still being written. A screenwriter might be, at this very moment, cooking up a sci-fi dystopian thriller that will inspire the next big idea in AI. Who knows? It could be you… Stay tuned for Part Two.
