Universiteitsbibliotheek – LibGuides

Artificial Intelligence (AI)

What is AI?

AI (Artificial Intelligence) is technology that enables computers and machines to simulate human intelligence. As a result, AI can execute certain tasks that are usually done by people.

Based on data, algorithms and statistical models, computers and machines can recognise patterns and make predictions. This kind of AI is called predictive AI. This technology has been around longer than generative AI and is used, for instance, for the suggestions video streaming services give you based on videos you have watched before, or the autocorrect feature on your phone.

What is Machine Learning?

In machine learning, computers learn to recognise patterns in data. These patterns help in making predictions and decisions. A machine learning model uses algorithms: step-by-step procedures for solving problems. A common example is classification, in which data is divided into different groups.
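The idea of classification can be made concrete with a small sketch. The example below is not from any particular tool; it is a minimal nearest-neighbour classifier in plain Python that assigns a new data point to the group of its closest labelled examples:

```python
from collections import Counter
import math

def classify(point, labelled_data, k=3):
    """Predict a label for `point` by majority vote of its k nearest neighbours."""
    nearest = sorted(
        labelled_data,
        key=lambda item: math.dist(point, item[0]),  # Euclidean distance
    )
    votes = Counter(label for _, label in nearest[:k])
    return votes.most_common(1)[0][0]

# Toy training data: (feature vector, label)
data = [
    ((1.0, 1.0), "cat"), ((1.2, 0.9), "cat"), ((0.8, 1.1), "cat"),
    ((5.0, 5.0), "dog"), ((5.2, 4.8), "dog"), ((4.9, 5.1), "dog"),
]

print(classify((1.1, 1.0), data))  # → cat
print(classify((5.1, 5.0), data))  # → dog
```

A point near the "cat" cluster is classified as "cat"; real machine learning models work on far larger datasets and learn their decision rules rather than comparing raw distances, but the principle of grouping by learned patterns is the same.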

For instance, machine learning can help with literature selection tools, such as ASReview. Users of ASReview first label a number of journal articles as relevant or not relevant. Based on this, the model learns and predicts which other articles are likely to be relevant (classification). These predictions are continuously improved by the feedback users give. This saves time and increases the accuracy of the literature selection.
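The feedback loop described above can be sketched in a few lines. Note that this is only a toy illustration: ASReview itself uses trained machine learning models, not the simple word-overlap score assumed here. The article texts and the `score` heuristic are invented for the example:

```python
from collections import Counter
from collections import defaultdict

# Toy relevance-feedback loop, loosely inspired by tools like ASReview.
# Articles are scored by how many words they share with articles the
# user has already labelled relevant.

articles = {
    "a1": "machine learning for systematic reviews",
    "a2": "deep learning in medical imaging",
    "a3": "history of medieval cooking",
    "a4": "active learning for literature screening",
}

relevant_words: set[str] = set()

def score(text: str) -> int:
    """Count word overlap with previously labelled-relevant articles."""
    return len(set(text.split()) & relevant_words)

def label(article_id: str, is_relevant: bool) -> None:
    """User feedback: learn the vocabulary of relevant articles."""
    if is_relevant:
        relevant_words.update(articles[article_id].split())

# The user labels two articles; the model ranks the rest by predicted relevance.
label("a1", True)
label("a3", False)
unlabelled = ["a2", "a4"]
ranked = sorted(unlabelled, key=lambda a: score(articles[a]), reverse=True)
print(ranked)  # "a4" shares more words with the relevant article "a1"
```

Each new label refines the ranking, which is why tools built on this principle get more accurate as the user screens more articles.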

What is Deep Learning?

Deep learning is a subfield of machine learning in which systems learn by means of neural networks that imitate the human brain. These networks recognise complex patterns and hidden connections. However, they are often described as 'black boxes' because the 'decisions' these networks make to reach outcomes are not always transparent.
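A neural network is, at its core, layers of simple units that each compute a weighted sum and apply a non-linear function. The sketch below is a minimal illustration with hand-picked weights; real networks have millions of learned weights, which is exactly what makes them hard to inspect:

```python
import math

def sigmoid(x: float) -> float:
    """Squash any number into the range (0, 1)."""
    return 1 / (1 + math.exp(-x))

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs plus a non-linearity."""
    return sigmoid(sum(i * w for i, w in zip(inputs, weights)) + bias)

def forward(x1: float, x2: float) -> float:
    """Forward pass: two inputs -> two hidden neurons -> one output."""
    h1 = neuron([x1, x2], [2.0, 2.0], -1.0)    # hidden neuron 1
    h2 = neuron([x1, x2], [-2.0, -2.0], 3.0)   # hidden neuron 2
    return neuron([h1, h2], [2.0, 2.0], -3.0)  # output neuron

print(forward(1.0, 1.0))
```

The output is always a number between 0 and 1, but nothing in the intermediate values `h1` and `h2` explains *why* the network produced it: that opacity is the 'black box' problem in miniature.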

What is GenAI?

Generative AI (GenAI) takes it a step further. By means of prompts you can give GenAI tools assignments or ask questions to generate new content such as text (for example to make summaries or reviews), code, audio, video and images. Well-known GenAI tools are Copilot (Microsoft), ChatGPT (OpenAI), Claude (Anthropic), Gemini (Google) and the Chinese DeepSeek.

GenAI tools can be of great use. However, be aware that a GenAI tool does not know everything. GenAI tools are built to provide a service and satisfy customers, which is why their output gives the impression of being friendly, self-confident and correct. But GenAI tools lack common sense, cannot form their own opinions, cannot show empathy and cannot think in terms of cause and effect. Moreover, a tool is only as good as the data it is trained on, and that data may be incorrect or outdated.

What is a Large Language Model?

A Large Language Model (LLM) is an AI model that is trained on large amounts of text data and can therefore process natural language and generate new text. For example, as of January 2025, GPT-4o is the LLM behind the free version of ChatGPT (OpenAI).


How does Generative AI work?

Well-known GenAI tools, such as ChatGPT and Copilot, make use of GPTs (Generative Pre-trained Transformers). GPTs are Large Language Models (LLMs). This type of language model is trained on large datasets, works on the principle of deep learning and is based on the transformer architecture.

The transformer architecture is a type of underlying neural network architecture used in natural language processing, for example generating or summarizing text. This type of architecture can decipher longer sentences and link words within those sentences that are further apart. It also ensures that the newly generated text is natural and smooth. That is why texts generated with a tool based on the transformer architecture read more naturally than those of older chatbots trained on other kinds of models and architectures.

But please note: even though the text is newly generated by GenAI, this does not mean that the tool has conscious thoughts. GenAI tools do not think, cannot form their own opinions, have no empathy to show, cannot reason in terms of cause and effect, and only have knowledge of the data they were trained on.

What happens when you are using a GPT?

  1. Input
    When you use a generative AI tool, you start by giving it input. This is called a prompt. You could compare a prompt to a study assignment you get in a course: the assignment makes clear what you should focus on and what the lecturer expects you to hand in at the end.
  2. Tokenization
    The prompt you enter is first divided into small pieces by means of tokenization. These pieces are called 'tokens'. A token is usually a word, but a long word can be split into multiple tokens.
  3. Vectorization
    Prompting is done in natural, human language. A language model cannot understand this directly; it must first convert the tokens into numbers (vectors). Each LLM is trained on a dataset of encoded human language, and vectors are coordinates indicating where tokens sit within that dataset.
  4. Embedding
    After the prompt has been converted into vectors, the language model builds a table recording semantic and syntactic information about the prompt. This table is called an embedding and can be seen as the numerical representation of the prompt. During the training stage of the language model, tokens are arranged in such a way that the relations between all tokens are encoded. You could think of an embedding as a map on which each token has its place and tokens with a similar meaning are grouped together. This is a first step in helping the computer understand the relations between tokens (or words).
  5. Neural Network
    The embedding goes through the transformer architecture to link the correct context to the tokens. A transformer is a type of neural architecture (also called a neural network) within language models that ensures newly generated text comes across as natural and flowing. This network uses probability to determine the weight of each token within a prompt and to predict which tokens logically belong together and follow each other. This is the so-called 'self-attention' mechanism: each token 'looks' at the other tokens in the prompt (the context) to determine what is relevant. If your prompt contains the words 'university', 'data' and 'literature', a likely output is far more apt to contain the word 'studying' than 'cooking'. The better a language model is trained, the better the prompt is placed in the right context and the more natural the final output becomes.
  6. Output
    In the last step of the process the context is further refined. The vectors are translated back into letters, and in this way the language model returns output in natural language.
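The steps above can be illustrated, in a deliberately simplified form, with a toy next-token predictor. Real LLMs use transformer networks over vector embeddings rather than the raw word counts assumed here, and the tiny corpus is invented for the example; but the core idea of predicting a likely continuation from patterns in training data is the same:

```python
from collections import Counter, defaultdict

# Step 2 (tokenization) and step 5 (probabilistic prediction) in miniature:
# tokenize a tiny training corpus, then predict the most probable next
# token by counting which token follows which.

corpus = "the cat sat on the mat . the cat ate the fish ."

tokens = corpus.split()  # a crude tokenizer: one token per word

# "Training": count how often each token follows each other token.
following: dict[str, Counter] = defaultdict(Counter)
for current, nxt in zip(tokens, tokens[1:]):
    following[current][nxt] += 1

def predict_next(token: str) -> str:
    """Return the most probable next token, given the training corpus."""
    return following[token].most_common(1)[0][0]

print(predict_next("the"))  # → cat ('cat' follows 'the' most often here)
```

Note that the model "knows" nothing beyond its counts: change the training corpus and the prediction changes with it, which mirrors why LLM output is only as good as the data the model was trained on.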

The exact internal workings of many AI tools are not always communicated transparently, and they are difficult for many people to understand. The steps above show that the output you get from a GenAI tool is based on the tool's training and on probability calculations: your natural words are translated into tokens, and those tokens pass through a specific architecture that can generate smooth and natural output. Although these tools can simulate natural language very well, they have no consciousness. GenAI tools have limitations and sometimes present incorrect information as if it were fact (hallucinations). It is important to use GenAI tools responsibly.

Want to know more?

How does ChatGPT work? Explained by a deepfake Ryan Gosling

What Is ChatGPT Doing … and Why Does It Work? An article from Stephen Wolfram Writings