Ariella Brown

A Brief History of AI

Bibliography

Brown, A. (2018, December 17). A brief history of AI. Techopedia. https://www.techopedia.com/a-brief-history-of-ai/2/33628

Takeaway: AI has a surprisingly long history, marked by periods of optimism and support followed by disenchantment. Now that we're at a new high point, we appear poised for the inevitable third round of AI winter. But perhaps this round will be different.

Today we have all kinds of “smart” devices, many of which can even be activated by voice alone and offer intelligent responses to our queries. This kind of cutting-edge technology may make us consider AI to be a product of the 21st century. But it actually has much earlier roots, going all the way back to the middle of the 20th century.

AI Roots

It may be said that Alan Turing’s ideas about computational thinking laid the foundation for AI. John McCarthy, professor of computer science at Stanford University, credits Turing with presenting the concept in a 1947 lecture. Certainly, it is something Turing thought about, for his written work includes a 1950 essay that explores the question, “Can machines think?” This is what gave rise to the famous Turing test. (To learn more, check out Thinking Machines: The Artificial Intelligence Debate.)

Even earlier, though, in 1945, Vannevar Bush set out a vision of futuristic technology in an article in The Atlantic entitled “As We May Think.” Among the wonders he predicted was a machine able to rapidly process data to bring up people with specific characteristics or find requested images.

Emergence

Thorough as they were in their explanations, none of these visionary thinkers employed the term “artificial intelligence.” That only emerged in 1955 to represent the new field of research that was to be explored. It appeared in the title of “A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence.” The conference itself took place in the summer of 1956.

Poised at the beginning of a decade of optimism, researchers expressed confidence in the future and thought it would take just a generation for AI to become a reality. There was great support for AI in the U.S. during the 1960s. With the Cold War in full swing, the U.S. didn’t want to fall behind the Russians on the technology front. MIT benefited, receiving a $2.2 million grant from DARPA in 1963 to explore machine-aided cognition.

Progress continued with funding for a range of AI programs, including MIT’s SHRDLU, David Marr’s theories of machine vision, Marvin Minsky’s frame theory, the Prolog language, and the development of expert systems. That level of support for AI came to an end by the mid-1970s, though.

The First AI Winter

The period of 1974-1980 is considered the first “AI winter,” a time when funding for the field dried up. This shift in attitude toward AI funding is largely attributed to two reports. In the U.S., it was “Language and Machines: Computers in Translation and Linguistics” by the Automatic Language Processing Advisory Committee (ALPAC), published in 1966. In the U.K., it was “Artificial Intelligence: A General Survey” by Professor Sir James Lighthill, FRS, published in 1973. Declaring that “in no part of the field have discoveries made so far produced the major impact that was then promised,” Lighthill’s report corroborated the view that continued funding would be throwing good money after bad.

This doesn’t mean that there was no progress at all, only that it happened under different names, as explained in “AI Winter and Its Lessons.” This is when the terms “machine learning,” “informatics,” “knowledge-based system” and “pattern recognition” started to be used.

Changing Seasons in the Last Two Decades of the 20th Century

In the 1980s, a form of AI identified as “knowledge-based,” or so-called expert systems (ES), emerged. It finally hit the mainstream, as documented by U.S. sales figures: sales of “AI-related hardware and software” hit $425 million in 1986.

But AI hit a second winter in 1987, though this one lasted only until 1993. When desktop computers entered the picture, the far more expensive and specialized AI systems lost much of their appeal. DARPA, a major source of research funding, also decided that it was not seeing enough of a payoff.

At the end of the century, AI was once again in the limelight, particularly with the victory of IBM’s Deep Blue over chess champion Garry Kasparov in 1997. But major corporate investment on a large scale would only happen in the next century.

The New Millennium

During the present century, AI has made far more advances, some of which have made headlines as well. With Google’s parent company, Alphabet, backing research at DeepMind, there have been a number of impressive feats in the tradition of Deep Blue, most notably AlphaGo’s victories over expert human players.

It’s not all about fun and games, though. AI can literally be a life-saver. It is currently being employed in personalized medicine with genomics and gene editing. Another major area of AI advancement is the push for autonomous cars by as many as 46 different companies.

While the large numbers show great interest, they also show a deep-seated division. This is symptomatic of the general lack of coherence in the field that James Moor noted in 2006. Writing on the 50th anniversary of the first AI conference in AI Magazine, he said, “Different research areas frequently do not collaborate, researchers utilize different methodologies, and there still is no general theory of intelligence or learning that unites the discipline.”

This is why you hear so much about AI, though different people mean somewhat different things by it. The other reason you hear a lot about it is general hype, and given the history we’ve already seen, that does not bode well. (What exactly is AI? And what isn't? Learn more in Will the Real AI Please Stand Up?)

Winter Is Coming

“AI Winter Is Well on Its Way” is the title Filip Piekniewski gave a blog post he wrote in early 2018. He likened the inevitability of AI winters to that of stock market crashes, bound to occur “at some point,” though exactly when is hard to say. He notes indications of “a huge decline in deep learning (and probably in AI in general as this term has been abused ad nauseam by corporate propaganda) visible in plain sight, yet hidden from the majority by the increasingly intense narrative.”

Certainly, the pattern of the previous two winters would indicate that is what will happen. Expectations are raised, and when they are not met, disappointment leads people to disdain the shiny new thing they were chasing.

Perhaps This Winter Will Not Mean a Deep Freeze

One difference between the AI field of the past and that of today is that a significant portion of the research is now funded by companies with deep pockets of their own, rather than primarily by research universities that rely on government grants. Consequently, it is possible that companies like Alphabet will keep on chugging along even if the government decides to stop its own flow of cash. Should that happen, it may only be a partial winter, and AI’s progress will not be frozen.
