Lesson 48 • Advanced
Ethical AI, Bias Mitigation & Privacy ⚖️
Build responsible AI systems — learn to detect bias, implement fairness constraints, and protect user privacy with differential privacy and federated learning.
What You'll Learn in This Lesson
- Detect algorithmic bias with demographic parity and equalized odds
- Three mitigation strategies: pre-processing, in-processing, post-processing
- Differential privacy with the Laplace mechanism
- Federated learning for privacy-preserving model training
- Real-world case studies of AI bias and harm
1️⃣ Why AI Ethics Matters
Real-world AI failures that caused harm:
| Case | What Happened | Impact |
|---|---|---|
| Amazon Hiring | Resume screener penalised women | Discriminatory hiring at scale |
| COMPAS | Recidivism model biased against Black defendants | Unfair sentencing recommendations |
| Healthcare Algorithm | Used cost as proxy for need, disadvantaging Black patients | Reduced care for millions |
| Facial Recognition | 35% error rate for dark-skinned women vs 1% for light-skinned men | Wrongful arrests |
Try It: Detecting Bias
Measure demographic parity and equalized odds in a hiring model
import numpy as np
# ============================================
# DETECTING BIAS IN ML MODELS
# ============================================
np.random.seed(42)
print("=== Measuring Bias: Demographic Parity ===")
print()
print("A hiring model screens resumes. Let's check if it's fair.")
print()
# Simulated hiring decisions
n = 2000
gender = np.random.choice(["male", "female"], n, p=[0.55, 0.45])
experience = np.random.uniform(0, 20, n)
education = np.random.choice(["bachelor", "master", "phd"], n, p=[0.5, 0.35, 0.15])
# NOTE: from here on, the snippet is an illustrative reconstruction of the
# truncated lesson code: a score that penalises women, echoing the Amazon case
score = experience + np.where(education == "phd", 5.0, np.where(education == "master", 2.0, 0.0))
score -= np.where(gender == "female", 3.0, 0.0)  # injected historical bias
hired = score > 12
# Demographic parity: positive (hire) rates should match across groups
male_rate = hired[gender == "male"].mean()
female_rate = hired[gender == "female"].mean()
print(f"Hire rate (male): {male_rate:.1%}")
print(f"Hire rate (female): {female_rate:.1%}")
print(f"Disparate impact ratio: {female_rate / male_rate:.2f} (4/5ths rule: should be >= 0.80)")
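This Try It also names equalized odds, which compares error rates rather than raw selection rates: a fair model should have the same true-positive rate and false-positive rate for every group. A minimal, self-contained sketch using synthetic data (`equalized_odds_gap`, `y_true`, `y_pred` are illustrative names, not part of the lesson code):

```python
import numpy as np

def equalized_odds_gap(y_true, y_pred, groups):
    """Return the largest TPR gap and FPR gap across groups.

    Equalized odds asks that true-positive and false-positive
    rates match for every group; gaps near 0 indicate fairness.
    """
    tprs, fprs = [], []
    for g in np.unique(groups):
        m = groups == g
        tprs.append((y_pred & y_true & m).sum() / (y_true & m).sum())
        fprs.append((y_pred & ~y_true & m).sum() / (~y_true & m).sum())
    return max(tprs) - min(tprs), max(fprs) - min(fprs)

rng = np.random.default_rng(0)
n = 5000
gender = rng.choice(["male", "female"], n)
y_true = rng.random(n) < 0.4  # true qualification
# A skewed model: approves qualified men more often than qualified women
p_yes = np.where(y_true, np.where(gender == "male", 0.9, 0.6), 0.1)
y_pred = rng.random(n) < p_yes

tpr_gap, fpr_gap = equalized_odds_gap(y_true, y_pred, gender)
print(f"TPR gap: {tpr_gap:.2f}, FPR gap: {fpr_gap:.2f}")
```

A large TPR gap alongside a small FPR gap means the model misses qualified candidates from one group far more often, even if overall accuracy looks fine.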
Try It: Bias Mitigation
Apply reweighting, threshold adjustment, and adversarial debiasing strategies
import numpy as np
# ============================================
# BIAS MITIGATION STRATEGIES
# ============================================
np.random.seed(42)
print("=== Three Stages of Bias Mitigation ===")
print()
print("You can fix bias at three points in the ML pipeline:")
print()
print(" 1. PRE-PROCESSING → Fix the training data")
print(" 2. IN-PROCESSING → Constrain the model during training")
print(" 3. POST-PROCESSING → Adjust predictions after training")
print()
# Original biased data. NOTE: from here on, the snippet is an illustrative
# reconstruction of the truncated lesson code: PRE-PROCESSING via reweighting
group = np.random.choice(["A", "B"], 1000, p=[0.6, 0.4])
label = np.where(group == "A",
                 np.random.binomial(1, 0.5, 1000),
                 np.random.binomial(1, 0.2, 1000))
weights = np.ones(1000)
for g in ["A", "B"]:
    for y in [0, 1]:
        cell = (group == g) & (label == y)
        # Weight each (group, label) cell so group and label become independent
        weights[cell] = (group == g).mean() * (label == y).mean() / cell.mean()
for g in ["A", "B"]:
    rate = np.average(label[group == g], weights=weights[group == g])
    print(f"Weighted positive rate, group {g}: {rate:.2f}")
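Of the three stages printed above, post-processing is the simplest to demonstrate: leave the model alone and choose a separate decision threshold per group so that selection rates line up. A hedged sketch with synthetic scores (the `group_thresholds` helper and all data here are hypothetical, not from the lesson):

```python
import numpy as np

def group_thresholds(scores, groups, target_rate):
    """POST-PROCESSING sketch: per-group score cutoffs that give
    each group (roughly) the same positive rate."""
    return {g: np.quantile(scores[groups == g], 1 - target_rate)
            for g in np.unique(groups)}

rng = np.random.default_rng(1)
n = 4000
group = rng.choice(["A", "B"], n)
# Group B's scores are shifted down (e.g. biased features), so one
# global threshold would select far fewer B candidates
scores = rng.normal(np.where(group == "A", 0.0, -0.5), 1.0)

single = scores > np.quantile(scores, 0.8)   # one global cutoff, top 20%
thr = group_thresholds(scores, group, 0.2)   # per-group cutoffs
per_group = scores > np.array([thr[g] for g in group])

for name, sel in [("global", single), ("per-group", per_group)]:
    ra = sel[group == "A"].mean()
    rb = sel[group == "B"].mean()
    print(f"{name}: rate A={ra:.2f}, rate B={rb:.2f}")
```

Matching selection rates like this targets demographic parity specifically; real post-processing methods (e.g. equalized-odds post-processing) instead pick thresholds using labelled validation data.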
Try It: Differential Privacy & Federated Learning
Add calibrated noise for privacy and train models without sharing data
import numpy as np
# ============================================
# PRIVACY IN AI: DIFFERENTIAL PRIVACY
# ============================================
np.random.seed(42)
print("=== Differential Privacy ===")
print()
print("Can we train ML models on sensitive data WITHOUT")
print("leaking individual records? Yes, with differential privacy!")
print()
# Medical dataset: average blood pressure
true_values = np.random.normal(120, 15, size=100)
true_mean = true_values.mean()
print(f"True average blood pressure: {true_mean:.2f}")
print()
# NOTE: from here on, the snippet is an illustrative reconstruction of the
# truncated lesson code: the Laplace mechanism
epsilon = 1.0                     # privacy budget (smaller = more private)
sensitivity = (170 - 70) / 100    # (max - min) / n, assuming BP values in [70, 170]
private_mean = true_mean + np.random.laplace(0, sensitivity / epsilon)
print(f"Private average (epsilon={epsilon}): {private_mean:.2f}")
print(f"Error introduced: {abs(private_mean - true_mean):.2f}")
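The second half of this Try It's title, federated learning, can also be sketched in a few lines via the federated-averaging idea: each client fits a model on data that never leaves it, and only the fitted parameters travel to a server for averaging. A toy sketch with a one-parameter linear model (all names and numbers here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
true_w = 2.5  # shared underlying relationship y = true_w * x + noise

# Three clients, each holding private local data that stays on-device
clients = []
for _ in range(3):
    x = rng.uniform(0, 10, 200)
    y = true_w * x + rng.normal(0, 1, 200)
    clients.append((x, y))

# FEDERATED AVERAGING (toy): each client fits locally, server averages
local_ws = []
for x, y in clients:
    w = (x @ y) / (x @ x)        # least-squares slope on local data only
    local_ws.append(w)
global_w = np.mean(local_ws)     # only parameters are shared, never data
print(f"Local estimates: {[round(w, 3) for w in local_ws]}")
print(f"Global model slope: {global_w:.3f} (true: {true_w})")
```

Production FedAvg weights each client's update by its sample size and repeats over many communication rounds; this single-shot average just shows the core idea.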
📋 Quick Reference — Ethical AI
| Concept | Definition |
|---|---|
| Demographic Parity | Equal positive rates across groups |
| Equalized Odds | Equal TPR and FPR across groups |
| Disparate Impact | Ratio of positive rates between groups; 4/5ths rule: ratio ≥ 0.8 |
| Differential Privacy | Add noise so no individual is identifiable |
| Federated Learning | Train without centralising data |
🎉 Lesson Complete!
You now understand how to build fair and responsible AI systems! Next up: the Final Project — put everything together and build a complete end-to-end ML system.