Glossary of Terms for Artificial Intelligence (AI)

A

Algorithm: A set of rules or instructions given to a machine to help it solve a problem or achieve a goal.

Artificial General Intelligence (AGI): A hypothetical form of AI that could perform any intellectual task a human can, demonstrating generalized intelligence across many domains.

Artificial Intelligence (AI): The simulation of human intelligence in machines programmed to think, learn, and make decisions.

Artificial Neural Network (ANN): A computational model inspired by the human brain’s network of neurons, used in machine learning and deep learning.

Autonomous System: A system capable of performing tasks without human intervention, often using AI.

B

Bias: A systematic error in an AI system, often caused by unrepresentative or prejudiced training data or by flawed modeling assumptions.

Big Data: Extremely large datasets that can be analyzed computationally to reveal patterns, trends, and associations.

Black Box: A term used to describe AI systems whose internal workings are not easily interpretable or transparent.

C

Chatbot: A conversational AI application that simulates human-like interaction via text or voice.

Computer Vision: A field of AI that enables machines to interpret and make decisions based on visual data such as images or videos.

Convolutional Neural Network (CNN): A type of deep learning neural network used primarily in image and video recognition tasks.

Clustering: A machine learning technique that groups similar data points together without pre-defined labels.
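
As an illustration, a minimal k-means sketch in Python (k-means is one widely used clustering algorithm; the toy 2-D points, the naive initialization, and the choice of two clusters are made up for the example):

```python
import numpy as np

def kmeans(points, k, iters=10):
    # Naive initialization: use the first k points as starting centroids.
    centroids = points[:k].copy()
    for _ in range(iters):
        # Assign each point to its nearest centroid (Euclidean distance).
        labels = np.argmin(np.linalg.norm(points[:, None] - centroids, axis=2), axis=1)
        # Move each centroid to the mean of the points assigned to it.
        centroids = np.array([points[labels == j].mean(axis=0) for j in range(k)])
    return labels

# Two obvious groups of 2-D points, with no labels given.
data = np.array([[0.0, 0.1], [0.2, 0.0], [5.0, 5.1], [5.2, 4.9]])
print(kmeans(data, k=2))  # e.g. [0 0 1 1]: the two groups are recovered
```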

D

Data Mining: The process of discovering patterns and insights from large datasets using machine learning, statistics, and database systems.

Deep Learning: A subset of machine learning involving neural networks with multiple layers, capable of learning complex patterns.

Decision Tree: A model used in machine learning that splits data into branches to make predictions or decisions.
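
In code, a tree's prediction logic is just nested threshold tests. A toy, hand-built example (the feature names and thresholds are invented for illustration; real trees learn their splits from data):

```python
def predict_loan_risk(income, debt_ratio):
    """A hand-built two-split decision tree. Real trees learn these
    thresholds from training data, e.g. by minimizing impurity."""
    if income < 30_000:          # first split
        return "high risk"
    elif debt_ratio > 0.4:       # second split on the other branch
        return "medium risk"
    else:
        return "low risk"

print(predict_loan_risk(income=25_000, debt_ratio=0.2))  # high risk
print(predict_loan_risk(income=80_000, debt_ratio=0.1))  # low risk
```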

Domain Adaptation: The process of adapting an AI model trained in one domain to perform well in another.

E

Edge AI: AI computations performed locally on devices (like smartphones or IoT devices) rather than in centralized servers or cloud environments.

Ethical AI: The practice of designing and deploying AI systems responsibly, with attention to fairness, transparency, accountability, and the mitigation of bias.

F

Facial Recognition: A technology that identifies or verifies individuals by analyzing facial features.

Feature Engineering: The process of selecting, modifying, or creating input variables for machine learning models to improve performance.
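
A small sketch of the idea: deriving a new, more informative input from raw columns (the housing numbers below are made up for illustration):

```python
# Raw inputs for three hypothetical houses.
areas_sqm = [50.0, 120.0, 80.0]
prices = [150_000, 420_000, 260_000]

# Engineered feature: price per square metre, often more informative
# to a model than either raw column on its own.
price_per_sqm = [p / a for p, a in zip(prices, areas_sqm)]
print(price_per_sqm)  # [3000.0, 3500.0, 3250.0]
```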

Federated Learning: A decentralized approach to training AI models across multiple devices or servers while preserving data privacy.
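
A minimal sketch of the averaging step used in schemes such as federated averaging (FedAvg): each device trains locally, and the server combines only the resulting model weights, never the raw data. The weight vectors here are made up:

```python
import numpy as np

# Weights produced by local training on three devices; the raw
# training data never leaves each device.
device_weights = [
    np.array([0.10, 0.50]),
    np.array([0.12, 0.48]),
    np.array([0.08, 0.55]),
]

# Federated averaging: the server sees and combines only the weights.
global_weights = np.mean(device_weights, axis=0)
print(global_weights)  # [0.1  0.51]
```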

G

Generative Adversarial Networks (GANs): A type of AI model in which two neural networks, a generator and a discriminator, compete against each other to create realistic data, such as images or text.

GPT (Generative Pre-trained Transformer): A type of large language model designed to generate human-like text based on input prompts.

Gradient Descent: An optimization algorithm used to minimize a function by iteratively moving in the direction of steepest descent.
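
A minimal sketch: gradient descent on the one-variable function f(x) = (x - 3)^2, whose gradient is 2(x - 3); the starting point and the 0.1 learning rate are arbitrary choices for the example:

```python
def grad(x):
    # Gradient of f(x) = (x - 3)**2
    return 2 * (x - 3)

x = 0.0                # starting point
lr = 0.1               # learning rate (step size)
for step in range(50):
    x -= lr * grad(x)  # move against the gradient, i.e. downhill
print(round(x, 4))     # approaches 3.0, the minimum of f
```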

H

Hyperparameter Tuning: The process of adjusting a model’s configuration settings that are fixed before training, such as the learning rate or the number of layers, as distinct from the parameters learned from data, to optimize performance.
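
A minimal grid-search sketch: evaluate every candidate value of one hyperparameter and keep the best. The validation_error function is a made-up stand-in for actually training a model and scoring it on held-out data:

```python
# Grid search: try each candidate value and keep the one with the
# lowest validation error. validation_error is a made-up stand-in
# for a real train-and-evaluate run.
def validation_error(learning_rate):
    return (learning_rate - 0.01) ** 2  # pretend 0.01 is the sweet spot

candidates = [0.001, 0.01, 0.1, 1.0]
best = min(candidates, key=validation_error)
print(best)  # 0.01
```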

Heuristic: A problem-solving approach that uses practical methods or rules-of-thumb to find solutions.

I

Image Recognition: The ability of an AI system to identify objects, places, or people in an image.

Inferencing: The process by which a trained AI model makes predictions or decisions on new input data, as distinct from the training phase.

Intelligent Agent: An autonomous entity that observes its environment and acts upon it to achieve goals.

J

Joint Learning: A machine learning approach, closely related to multi-task learning, that trains models on multiple related tasks simultaneously to improve overall performance.

K

Knowledge Graph: A structured representation of facts and relationships between entities, often used in AI to enhance contextual understanding.

Kernel Method: A family of machine learning techniques, used notably in support vector machines, that implicitly map data into high-dimensional spaces where patterns become easier to separate.

L

Labeling: The process of annotating data with tags or labels to train supervised learning models.

Language Model: An AI model designed to understand and generate human language.

Linear Regression: A statistical method used to model the relationship between a dependent variable and one or more independent variables.
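
A minimal one-variable fit with NumPy's least-squares solver; the data points are invented to lie near the line y = 2x + 1:

```python
import numpy as np

# Toy data lying near the line y = 2x + 1.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.1, 2.9, 5.2, 6.8])

# Design matrix with a column of ones so the model learns an intercept.
A = np.column_stack([x, np.ones_like(x)])
(slope, intercept), *_ = np.linalg.lstsq(A, y, rcond=None)
print(round(slope, 2), round(intercept, 2))  # about 1.94 and 1.09
```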

M

Machine Learning (ML): A subset of AI that enables machines to learn and improve from experience without explicit programming.

Model: A mathematical representation of a system, trained on data to make predictions or decisions.

Multi-modal Learning: A type of AI that integrates and processes multiple forms of data, such as text, images, and audio.

N

Natural Language Processing (NLP): A field of AI focused on enabling machines to understand, interpret, and respond to human language.

Neural Network: A machine learning model inspired by the structure of the human brain, consisting of layers of nodes (neurons).
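
A single forward pass through a tiny one-hidden-layer network, to show the layered structure; the weights here are made up, whereas real networks learn them during training:

```python
import numpy as np

def relu(z):
    # A common nonlinearity: zero out negative values.
    return np.maximum(0, z)

# Made-up weights for a network with 2 inputs, 3 hidden units, 1 output.
W1 = np.array([[0.5, -0.2], [0.1, 0.8], [-0.3, 0.4]])  # hidden layer
b1 = np.zeros(3)
W2 = np.array([[1.0, -1.0, 0.5]])                      # output layer
b2 = np.zeros(1)

x = np.array([1.0, 2.0])      # input vector
hidden = relu(W1 @ x + b1)    # layer 1: linear map plus nonlinearity
output = W2 @ hidden + b2     # layer 2: linear map
print(output)                 # [-1.35] for these made-up weights
```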

Normalization: A preprocessing step in machine learning that adjusts data to a common scale without distorting its relationships.
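
One common form is min-max scaling, which maps each feature into the [0, 1] range; the values below are arbitrary:

```python
import numpy as np

values = np.array([10.0, 20.0, 15.0, 40.0])
# Min-max normalization: (x - min) / (max - min) maps values into [0, 1].
normalized = (values - values.min()) / (values.max() - values.min())
print(normalized)  # [0.     0.333... 0.166... 1.   ]
```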

O

Optimization: The process of adjusting a machine learning model’s parameters, typically to minimize a loss (error) function, in order to improve its performance.

Overfitting: A situation where a machine learning model performs well on training data but poorly on unseen data due to excessive complexity.

P

Pretraining: The process of training an AI model on a large dataset before fine-tuning it on a specific task.

Predictive Analytics: The use of AI to analyze data and make predictions about future outcomes.

Prompt Engineering: The process of crafting effective input prompts to guide AI models like GPTs to produce desired outputs.

Q

Quantum Computing: An advanced computing paradigm that leverages quantum mechanics to perform certain kinds of computations far faster than classical computers can.

Q-Learning: A model-free reinforcement learning algorithm that learns the value of taking each action in each state, from which an optimal policy can be derived.
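
The heart of Q-learning is a single update rule: Q(s, a) is nudged toward the observed reward plus the discounted best value of the next state. A minimal sketch of that update, with a made-up table size, reward, and transition:

```python
import numpy as np

n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))  # table of estimated action values
alpha, gamma = 0.1, 0.9              # learning rate and discount factor

def q_update(state, action, reward, next_state):
    """One Q-learning step:
    Q(s, a) += alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))"""
    target = reward + gamma * Q[next_state].max()
    Q[state, action] += alpha * (target - Q[state, action])

q_update(state=0, action=1, reward=1.0, next_state=2)
print(Q[0, 1])  # 0.1 after one update
```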

R

Reinforcement Learning (RL): A machine learning paradigm where an agent learns to make decisions by receiving rewards or penalties.

Regularization: Techniques used in machine learning to prevent overfitting by adding constraints to the model.
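
A common instance is L2 regularization, which adds a penalty proportional to the sum of squared weights to the training loss. A sketch of the penalty term only, with made-up weights, data loss, and strength:

```python
import numpy as np

weights = np.array([0.5, -1.2, 3.0])     # made-up model weights
lam = 0.01                               # regularization strength (lambda)

mse = 0.25                               # pretend data loss for the example
l2_penalty = lam * np.sum(weights ** 2)  # discourages large weights
total_loss = mse + l2_penalty
print(total_loss)  # 0.3569
```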

Recurrent Neural Network (RNN): A type of neural network designed for sequential data, such as time series or text.

S

Supervised Learning: A type of machine learning where models are trained on labeled data.

Synthetic Data: Artificially generated data used to train machine learning models.

Swarm Intelligence: Collective behavior emerging from decentralized, self-organized systems, often mimicked in AI.

T

Turing Test: A test proposed by Alan Turing to measure a machine’s ability to exhibit intelligent behavior indistinguishable from that of a human.

Transfer Learning: A machine learning technique where a model trained on one task is adapted for another.

Tokenization: The process of breaking down text into smaller units, such as words or subwords, for processing by AI models.
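
A naive word-level illustration using a regular expression; production language models instead use learned subword schemes such as byte-pair encoding:

```python
import re

text = "Tokenization breaks text into units."
# Naive tokenization: split on words and punctuation marks.
tokens = re.findall(r"\w+|[^\w\s]", text)
print(tokens)  # ['Tokenization', 'breaks', 'text', 'into', 'units', '.']
```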

U

Unsupervised Learning: A type of machine learning where models learn patterns from unlabeled data.

Underfitting: A situation where a machine learning model fails to capture the complexity of the data, leading to poor performance.

V

Vision AI: AI applications focused on visual data processing, including image and video analysis.

Variational Autoencoder (VAE): A type of neural network used for generating new data samples similar to a training dataset.

W

Weights: Parameters in a neural network that are adjusted during training to minimize the error in predictions.

Word Embedding: A technique in NLP where words are represented as vectors in a continuous vector space.
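
Once words are vectors, similarity becomes geometry: related words point in similar directions. A sketch with tiny made-up 3-dimensional embeddings (real embeddings are learned from text and have hundreds of dimensions):

```python
import numpy as np

# Made-up 3-d embeddings; real ones are learned and much larger.
emb = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.7, 0.2]),
    "apple": np.array([0.1, 0.2, 0.9]),
}

def cosine(u, v):
    # Cosine similarity: near 1.0 means similar direction, near 0 unrelated.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine(emb["king"], emb["queen"]))  # about 0.99: related words
print(cosine(emb["king"], emb["apple"]))  # about 0.30: unrelated words
```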

X

Explainable AI (XAI): Techniques and methods to make AI decisions transparent, interpretable, and understandable by humans.

Y

YOLO (You Only Look Once): A real-time object detection algorithm widely used in computer vision.

Z

Zero-shot Learning: An AI model’s ability to make predictions on tasks it was not explicitly trained for, using generalized knowledge.
