
    AI & Machine Learning

    Master artificial intelligence and ML — 50 lessons from your first model to advanced LLMs, computer vision, and production deployment.


    👋 New to AI? The best time to start is now.

    You'll need basic Python — if you have that, you're ready. Start with Lesson 1 and build your first ML model.

    1. Beginner Track (✦ Start Here, ~2–3 hours)

    Understand what AI and ML are, set up Python, and train your first real machine learning models.

    Introduction to AI & ML

    What AI and machine learning are, and what you'll be able to build


    Python for Machine Learning

    NumPy, Pandas, and Matplotlib — the essential Python tools for ML


    Data Preprocessing

    Clean, transform, and prepare raw data before training any model


    Linear Regression

    Predict continuous values — your first machine learning model

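    The idea behind this first model fits in a few lines: a line y = w·x + b whose slope and intercept are learned by gradient descent on squared error. A toy sketch (the data and learning rate below are illustrative, not the lesson's code):

```python
# Fit y = w*x + b by batch gradient descent on mean squared error.
# The toy data below was generated from y = 2x + 1.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]

w, b, lr = 0.0, 0.0, 0.02
for _ in range(5000):
    n = len(xs)
    # dMSE/dw and dMSE/db, averaged over the dataset.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # recovers roughly 2.0 and 1.0
```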

    Classification Basics

    Categorize data into classes using logistic regression and k-NN

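    One of the classifiers covered here, k-NN, needs no training at all: it classifies by majority vote among the nearest labelled points. A minimal sketch with made-up 2-D points:

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points."""
    # train: list of ((x, y), label) pairs; distance is plain Euclidean.
    dists = sorted(
        (((px - query[0]) ** 2 + (py - query[1]) ** 2) ** 0.5, label)
        for (px, py), label in train
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

points = [((0, 0), "blue"), ((1, 0), "blue"), ((0, 1), "blue"),
          ((5, 5), "red"), ((6, 5), "red"), ((5, 6), "red")]
print(knn_predict(points, (1, 1)))    # blue
print(knn_predict(points, (5.5, 5)))  # red
```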
    2. Intermediate Track (~3–4 hours)

    Build decision trees, neural networks, and apply ML to real language and image tasks.

    Decision Trees & Random Forests

    Build interpretable tree models and ensemble them into random forests


    Neural Networks Introduction

    Understand neurons, layers, weights, and how neural networks learn


    Deep Learning Fundamentals

    Train deep neural networks with backpropagation and activation functions

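    Backpropagation in miniature: one sigmoid neuron, one chain-rule gradient step, and the loss drops. All numbers below are made up for illustration:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# One neuron: prediction p = sigmoid(w*x + b), squared-error loss (p - target)^2.
# The input, target, initial weights, and learning rate are all illustrative.
x, target = 1.5, 1.0
w, b, lr = 0.1, 0.0, 0.5

def loss(w, b):
    return (sigmoid(w * x + b) - target) ** 2

before = loss(w, b)
p = sigmoid(w * x + b)
# Chain rule: dL/dw = 2*(p - target) * p*(1 - p) * x  (the same without x for b).
grad_w = 2 * (p - target) * p * (1 - p) * x
grad_b = 2 * (p - target) * p * (1 - p)
w, b = w - lr * grad_w, b - lr * grad_b
after = loss(w, b)
print(after < before)  # True: one gradient step reduced the loss
```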

    Natural Language Processing

    Process and understand text — tokenization, embeddings, and sentiment analysis


    Computer Vision Basics

    Teach computers to understand images with CNNs and image classification

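    The convolution at the heart of a CNN is just a sliding weighted sum. A hand-rolled sketch (the "image" and edge-detecting kernel are toy values):

```python
def conv2d(image, kernel):
    """Valid-mode 2D convolution (really cross-correlation, as in most DL libraries)."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [
        [
            sum(image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh) for dj in range(kw))
            for j in range(out_w)
        ]
        for i in range(out_h)
    ]

# A vertical-edge kernel lights up where pixel values change left to right.
img = [[0, 0, 10, 10],
       [0, 0, 10, 10],
       [0, 0, 10, 10]]
edge = [[-1, 1], [-1, 1]]
out = conv2d(img, edge)
print(out)  # [[0, 20, 0], [0, 20, 0]] — strong response exactly at the edge
```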
    3. Expert Track (~3–4 hours)

    Master advanced neural networks, transformers, reinforcement learning, and production deployment.

    Advanced Neural Networks

    Regularization, batch normalization, dropout, and advanced architectures


    Transformers & LLMs

    How attention mechanisms power GPT, BERT, and large language models


    Reinforcement Learning

    Train agents to make decisions with rewards, policies, and Q-learning


    Model Deployment

    Serve ML models in production with FastAPI, Docker, and cloud platforms

    4. Advanced Track (~12–16 hours)

    Cutting-edge AI engineering — LLMs, RAG, diffusion models, MLOps, distributed training, and ethical AI.

    Advanced Optimization Techniques (AdamW, LookAhead, Cyclical LR)

    AdamW, Lion, LR warmup, cosine annealing, and LR range tests


    Data Augmentation Strategies for Images, Text & Audio

    Expand small datasets with rotation, flipping, mixup, and text augmentation


    Transfer Learning & Fine-Tuning Pretrained Models

    Adapt pretrained models (ResNet, BERT) to new tasks with fine-tuning


    Attention Mechanisms & Self-Attention Explained

    Scaled dot-product attention, multi-head attention, and positional encoding

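    Scaled dot-product attention can be written out directly from the formula softmax(QK^T/√d_k)·V. A pure-Python sketch with toy 2-dimensional queries, keys, and values:

```python
import math

def softmax(xs):
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = len(K[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k) for k in K]
        weights = softmax(scores)  # one attention distribution per query
        out.append([sum(w * v[j] for w, v in zip(weights, V)) for j in range(len(V[0]))])
    return out

# One 2-d query against two keys; the values make the mixing visible.
Q = [[1.0, 0.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[10.0, 0.0], [0.0, 10.0]]
att = attention(Q, K, V)
print(att)  # weighted toward the first value row, since q matches the first key
```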

    Transformer Architecture Deep Dive (Q/K/V, Multi-Head Attention)

    Build a transformer from scratch — encoder, decoder, and positional embeddings


    Building Custom CNN Architectures From Scratch

    Design CNNs with conv layers, pooling, skip connections, and bottlenecks


    Residual Networks (ResNet), DenseNets & Modern CNN Design

    Understand skip connections, dense blocks, and modern CNN innovations


    Training Stability Techniques: Normalization, Initialization, Gradient Clipping

    Batch norm, layer norm, Xavier/He init, and gradient clipping for stable training


    Generative Models: Autoencoders, VAEs & GANs

    Build models that generate new data — autoencoders, VAEs, and GANs


    Diffusion Models Explained (Stable Diffusion, DDPM)

    How denoising diffusion models generate images from noise


    Large Language Models Architecture (GPT, LLaMA, Mistral)

    Decoder-only transformers, tokenization, and the architecture of modern LLMs


    Tokenization Strategies (BPE, WordPiece, SentencePiece)

    How BPE, WordPiece, and SentencePiece convert text to tokens for LLMs

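    The core of BPE is a single repeated step: count adjacent symbol pairs and merge the most frequent one everywhere. A sketch of one merge step over a made-up word-frequency table:

```python
from collections import Counter

def bpe_merge_step(words):
    """One BPE step: find the most frequent adjacent symbol pair and merge it."""
    pairs = Counter()
    for symbols, freq in words.items():
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    if not pairs:
        return words, None
    best = pairs.most_common(1)[0][0]
    merged = {}
    for symbols, freq in words.items():
        out, i = [], 0
        while i < len(symbols):
            if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == best:
                out.append(symbols[i] + symbols[i + 1])  # fuse the pair
                i += 2
            else:
                out.append(symbols[i])
                i += 1
        merged[tuple(out)] = freq
    return merged, best

# Word frequencies, with words pre-split into characters.
vocab = {("l", "o", "w"): 5, ("l", "o", "w", "e", "r"): 2,
         ("l", "o", "g"): 1, ("n", "e", "w"): 3}
vocab, pair = bpe_merge_step(vocab)
print(pair)  # ('l', 'o') — the most frequent pair (8 occurrences)
```

    Real tokenizers repeat this step thousands of times to build their vocabulary.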

    Fine-Tuning LLMs: LoRA, QLoRA & PEFT Techniques

    Fine-tune LLMs efficiently on custom data with LoRA and QLoRA


    Reinforcement Learning Basics (MDP, Policies, Rewards)

    Markov Decision Processes, value functions, policies, and the Bellman equation


    Q-Learning & Deep Q-Networks (DQN)

    Implement Q-learning and DQN with experience replay and target networks

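    Tabular Q-learning is one update rule applied many times: Q(s,a) ← Q(s,a) + α(r + γ·max Q(s',·) − Q(s,a)). A sketch on a made-up 4-state chain (all hyperparameters are illustrative):

```python
import random

# Toy deterministic 4-state chain; action 0 moves left, action 1 moves right.
# Reaching state 3 yields reward 1 and ends the episode.
def step(state, action):
    nxt = max(0, state - 1) if action == 0 else min(3, state + 1)
    reward = 1.0 if nxt == 3 else 0.0
    return nxt, reward, nxt == 3

alpha, gamma, epsilon = 0.5, 0.9, 0.2
Q = [[0.0, 0.0] for _ in range(4)]
random.seed(0)

for _ in range(500):
    s = random.randrange(3)  # start each episode in state 0, 1, or 2
    done = False
    while not done:
        # Epsilon-greedy: mostly exploit, sometimes explore.
        if random.random() < epsilon:
            a = random.randrange(2)
        else:
            a = 0 if Q[s][0] >= Q[s][1] else 1
        s2, r, done = step(s, a)
        # The Q-learning update rule.
        target = r if done else r + gamma * max(Q[s2])
        Q[s][a] += alpha * (target - Q[s][a])
        s = s2

print(round(Q[2][1], 2))  # approaches 1.0: from state 2, stepping right hits the goal
```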

    Policy Gradient Methods (REINFORCE, PPO, A2C)

    Train agents directly on policy gradients with PPO and actor-critic methods


    Computer Vision Pipelines with OpenCV & PyTorch/TensorFlow

    Build end-to-end vision pipelines for classification, detection, and segmentation


    Object Detection: YOLO, SSD & Faster R-CNN Models

    Detect and localize objects in images with YOLO and two-stage detectors


    Semantic Segmentation (U-Net, DeepLab, Mask R-CNN)

    Label every pixel in an image with U-Net and DeepLab architectures


    Speech Recognition & Audio ML Models

    Build speech-to-text systems with Whisper, mel spectrograms, and CTC


    Advanced NLP: Transformers, BERT, T5, LLaMA, Mistral

    Fine-tune BERT for classification, T5 for generation, and LLaMA for chat


    Building Retrieval-Augmented Generation (RAG) Systems

    Combine LLMs with vector search to build knowledge-grounded chatbots


    Vector Databases & Embeddings (FAISS, Pinecone, ChromaDB)

    Store and search embeddings at scale with FAISS, Pinecone, and Chroma

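    Underneath every vector database is the same primitive: embed, then find the nearest vector. A brute-force sketch with toy 3-d "embeddings" (real systems index millions of high-dimensional vectors, which is what FAISS and friends accelerate):

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def nearest(query, docs):
    """Brute-force nearest neighbour by cosine similarity."""
    return max(docs, key=lambda name: cosine(query, docs[name]))

# Toy 3-d embeddings; real ones have hundreds of dimensions.
docs = {
    "cats": [0.9, 0.1, 0.0],
    "dogs": [0.8, 0.3, 0.1],
    "finance": [0.0, 0.2, 0.9],
}
print(nearest([1.0, 0.0, 0.1], docs))  # cats
```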

    Evaluating AI Models: F1, ROC, Perplexity, BLEU, WER

    Choose and calculate the right metrics for classification, NLP, and generation tasks

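    Precision, recall, and F1 come straight from confusion-matrix counts. A sketch with made-up labels:

```python
def precision_recall_f1(y_true, y_pred):
    """Binary-classification metrics from true/predicted labels (1 = positive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 0, 0, 1, 0, 0, 0]
p, r, f1 = precision_recall_f1(y_true, y_pred)
print(round(p, 3), round(r, 3), round(f1, 3))  # 0.667 0.5 0.571
```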

    Model Compression: Quantization, Pruning, Distillation

    Make models smaller and faster with int8 quantization, pruning, and distillation

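    Symmetric int8 quantization maps each float to an integer in [-127, 127] via one scale factor. A toy sketch (production schemes are usually per-channel and calibrated on real data):

```python
def quantize_int8(xs):
    """Symmetric int8 quantization: one scale factor for the whole tensor."""
    scale = max(abs(x) for x in xs) / 127.0
    q = [round(x / scale) for x in xs]
    return q, scale

def dequantize(q, scale):
    return [qi * scale for qi in q]

weights = [0.42, -1.27, 0.05, 0.98]  # made-up weight values
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
print(q)  # [42, -127, 5, 98]
print(max(abs(a - b) for a, b in zip(weights, restored)))  # tiny rounding error
```

    The model stores 1-byte integers plus one float scale instead of 4-byte floats — roughly a 4x size reduction at a small accuracy cost.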

    Optimizing Models for CPU/GPU/TPU Deployment

    Optimize inference for different hardware targets with ONNX, TensorRT, and XLA


    Distributed Training with Data Parallelism & Model Parallelism

    Train large models across multiple GPUs with DDP, FSDP, and pipeline parallelism


    Serving ML Models: TorchServe, FastAPI, TensorFlow Serving

    Deploy and serve ML models reliably with TorchServe, FastAPI, and TF Serving


    Monitoring Models in Production (Drift, Outliers, Bias)

    Detect data drift, outliers, and model degradation in production systems


    MLOps Fundamentals: Pipelines, CI/CD, Versioning

    Automate ML pipelines with MLflow, DVC, and CI/CD for model releases


    Building Recommender Systems (Content, Collaborative, Hybrid)

    Build content-based, collaborative filtering, and hybrid recommendation engines


    Graph Neural Networks (GNNs) for Social & Knowledge Graphs

    Apply GNNs to social networks, knowledge graphs, and molecular data


    AutoML & Neural Architecture Search (NAS)

    Automate model selection and architecture design with AutoML and NAS


    Ethical AI, Bias Mitigation & Safety Principles in ML

    Identify, measure, and reduce bias — build fair and responsible AI systems


    Final AI Project — Build & Deploy a Full End-to-End ML System

    Design, train, evaluate, and deploy a complete ML system from scratch

