Foundations first

TABLE OF CONTENTS
PART I: The Big Picture
- What Is Artificial Intelligence?
  - Definition
  - Narrow vs. General vs. Super AI
- The Branches of AI
  - Machine Learning
  - Robotics
  - Computer Vision
  - Natural Language Processing (NLP)
- A Brief History of AI
  - The Dartmouth Conference and early hopes
  - The AI winters
  - The deep learning revival
  - Transformers and the LLM era

PART II: Foundations of Machine Learning
- What Is Machine Learning?
  - Types: Supervised, Unsupervised, Reinforcement
  - Algorithms: Decision Trees, SVMs, k-NN, etc.
  - Key concepts: training, overfitting, bias-variance tradeoff
- From Data to Decisions
  - Data collection and preprocessing
  - Feature engineering
  - Model evaluation and tuning

PART III: Neural Networks & Deep Learning
- What Are Neural Networks?
  - Structure: neurons, layers, activations
  - Training: backpropagation, gradient descent
- Deep Learning Explained
  - Convolutional Neural Networks (CNNs)
  - Recurrent Neural Networks (RNNs)
  - Attention Mechanisms
  - Generative Adversarial Networks (GANs)

PART IV: Transformers and Language Models
- The Transformer Revolution
  - Why RNNs struggled with language
  - Self-attention, multi-head attention, positional encoding
  - Encoder vs. decoder vs. encoder-decoder
- Introduction to LLMs
  - What they are and how they work
  - Pre-training and fine-tuning
  - Why scale matters
- Notable LLMs Through the Years
  - GPT family, BERT, PaLM, Gemini, Claude, LLaMA
  - Open-source vs. proprietary models
  - Model architectures and breakthroughs

PART V: Using, Tuning, and Talking to LLMs
- Prompting 101
  - Prompts, zero-shot, few-shot
  - Chain-of-thought, ReAct, role prompting
- Fine-Tuning and Alignment
  - Supervised fine-tuning
  - RLHF, RLAIF, DPO
  - Parameter-efficient methods (LoRA, QLoRA)
- Inference, Sampling, and Response Control
  - Sampling methods: Top-K, Top-P, temperature
  - Output control and decoding strategies