
Introduction to AI

An introduction to modern AI: its origins, current uses, and issues for the future. This course provides a comprehensive introduction to the field of AI, covering its history, applications, and future prospects. Participants will learn about applications of AI and big data in different industries, including statistical analysis for medicine and generative processes for creative design. By the end of the course, participants will have a solid understanding of the fundamental concepts of AI and will be well equipped to pursue further study in this exciting field.
 

Unit 1

Lesson 1 - Growth of computing power and modelling

DEMONSTRATION

We now have sophisticated AI programs that would have been a dream for researchers when AI began in the 1950s. Let’s use an AI image generator to demonstrate how far their research has brought us.


Go to https://app.leonardo.ai/ai-generations and use the prompt:


“a photograph from the 1950s of computer researchers using pen and paper to calculate the mathematics of AI. They are in an office with papers stacked around them. They are wearing smart clothes from that time.”


Remember, the image the program generates is not a real photograph, and the people do not exist.


HISTORY OF AI

Early work on AI took place in the 1950s, when the term artificial intelligence was first coined. A common goal at this time was to develop ways for machines to mimic human intelligence; the systems built for this were called neural networks. Work in this period was limited by computing power, but ideas and philosophies very similar to those behind modern AI existed even then.


Around the 1970s and 80s, work on AI slowed down and funding decreased when scientists were unable to produce results that matched their vision, largely because of the limited computing power of the time. With little funding or enthusiasm for AI, not much progress was made, and this period came to be called the “AI winter”.


In the 1990s, work on machine learning shifted from knowledge-based to data-based approaches. Scientists working on machine learning started to use large amounts of data as the input for their programs to analyse. Some descriptions say that the programs were “learning” from the data, but it is important to remember that the programs were not intelligent or sentient; they simply had a very large amount of information that they were coded to analyse. This analysis of data was loosely modelled on the way the human brain processes information, which is why the systems created are referred to as neural networks.

 

The 2000s brought new machine learning techniques that pushed the boundaries of what was possible in the field of AI. In this decade, scientists started to create programs that used unsupervised learning. Until then, the data used by machine learning programs had to be labelled so that the program could use that data in a defined way. This changed with unsupervised learning, where programs took unlabelled data as an input and processed it in novel ways to reveal useful patterns in the data.
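
To make the idea concrete, here is a minimal sketch of unsupervised learning in Python (assuming only the NumPy library; the data points and the choice of two groups are invented for illustration). A simple clustering routine groups unlabelled points without ever being told what any point means.

import numpy as np

# Unlabelled 2-D data: nobody tells the program what these points represent.
rng = np.random.default_rng(0)
points = np.vstack([
    rng.normal(loc=[0, 0], scale=0.5, size=(20, 2)),   # one hidden group
    rng.normal(loc=[5, 5], scale=0.5, size=(20, 2)),   # another hidden group
])

def k_means(data, k=2, steps=10):
    # Very small k-means clustering: find k groups in unlabelled data.
    centres = data[rng.choice(len(data), size=k, replace=False)]
    for _ in range(steps):
        # Assign each point to its nearest centre.
        gaps = np.linalg.norm(data[:, None, :] - centres[None, :, :], axis=2)
        labels = gaps.argmin(axis=1)
        # Move each centre to the average of the points assigned to it.
        centres = np.array([
            data[labels == i].mean(axis=0) if np.any(labels == i) else centres[i]
            for i in range(k)
        ])
    return labels, centres

labels, centres = k_means(points)
print("Group centres found:", centres.round(2))

The program is never given labels, yet the two group centres it reports sit close to the two clumps hidden in the data - a very small example of revealing patterns in unlabelled data.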

 

In the 2010s, deep learning became possible. This involves using sophisticated artificial neural networks with large amounts of data. These networks have multiple layers and learn to automatically represent complex patterns in the data. Deep neural networks are very successful at image and speech recognition, as well as understanding human language. The key to their success is the combination of very large datasets and the powerful computing resources available for training - the process of adjusting the network using the data until it produces outputs that are useful for future computations.
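
As a rough illustration of what “multiple layers” and “training” mean in practice, here is a minimal sketch in Python (assuming only NumPy; the tiny dataset, the layer sizes and the learning rate are invented for illustration and are nothing like the scale of real deep learning). A small two-layer network has its weights adjusted again and again until its outputs move close to the desired answers.

import numpy as np

rng = np.random.default_rng(1)

# A toy dataset: four inputs and the answers we want the network to learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two layers of weights (plus biases): a very small multi-layer network.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input  -> hidden layer
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output layer

learning_rate = 0.5
for step in range(10000):
    # Forward pass: the data flows through both layers.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass: work out how each weight should change to reduce the error.
    delta_out = (output - y) * output * (1 - output)
    delta_hid = (delta_out @ W2.T) * hidden * (1 - hidden)

    # "Training" is simply this repeated adjustment of the weights using the data.
    W2 -= learning_rate * hidden.T @ delta_out
    b2 -= learning_rate * delta_out.sum(axis=0)
    W1 -= learning_rate * X.T @ delta_hid
    b1 -= learning_rate * delta_hid.sum(axis=0)

print(output.round(2))   # after training, the outputs should be close to 0, 1, 1, 0

Real deep learning systems work on the same principle, but with many more layers, millions or billions of weights, and far larger datasets, which is why powerful computing resources matter so much.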


DISCUSSION

See this timeline of machine learning, and pick two examples from different decades to help you understand how machine learning has evolved over time.


For example, the nearest neighbour algorithm was state of the art in 1967 - it gave a computer program a way to find a route on a basic map by always travelling to the closest place it had not yet visited (a short sketch of this idea follows the list below). Twenty-five years after the nearest neighbour algorithm was developed, TD-Gammon was created, a program that could beat humans at backgammon, a board game. These two examples of machine learning, 25 years apart, show that:


  • There is a long history of work in this subject by dedicated scientists.

  • Progress can seem slow when you look at the results, but the work behind each result is very difficult and in-depth.

  • Each discovery was built on the work that came before it.
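
To connect this back to the first example, here is a small sketch in Python of the greedy nearest neighbour idea for route finding (the place names and coordinates are invented for illustration). Starting from one point, the program repeatedly travels to the closest place it has not yet visited.

import math

# Invented towns on a basic map, given as (x, y) coordinates.
towns = {
    "A": (0, 0),
    "B": (2, 1),
    "C": (5, 4),
    "D": (1, 6),
    "E": (6, 1),
}

def nearest_neighbour_route(towns, start):
    # Build a route by always moving to the closest town not yet visited.
    route = [start]
    unvisited = set(towns) - {start}
    while unvisited:
        here = towns[route[-1]]
        next_town = min(unvisited, key=lambda t: math.dist(here, towns[t]))
        route.append(next_town)
        unvisited.remove(next_town)
    return route

print(nearest_neighbour_route(towns, "A"))   # e.g. ['A', 'B', ...]

The rule is simple, and it was state of the art in 1967; later programs in the timeline, such as TD-Gammon, made far more sophisticated decisions, but they were built on decades of this kind of groundwork.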


Innovation Hub

©2023 by Innovation Hub. Part of SIP at UAEU
