Build a Large Language Model (From Scratch)

by Sebastian Raschka

Book Details

Book Title: Build a Large Language Model (From Scratch)
Author: Sebastian Raschka
Publisher: Manning Publications Co.
Publication Date: September 2024
ISBN: 9781633437166
Number of Pages: 593
Language: English
Format: PDF
File Size: 7 MB
Subject: Artificial Intelligence > Deep Learning > Large Language Models (LLMs)

Table of Contents

  • Contents
  • Build a Large Language Model (From Scratch)
  • Preface
  • Acknowledgments
  • About This Book
  • About the Author
  • About the Cover Illustration
  • 1 Understanding Large Language Models
  • 1.1 What is an LLM?
  • 1.2 Applications of LLMs
  • 1.3 Stages of building and using LLMs
  • 1.4 Introducing the transformer architecture
  • 1.5 Utilizing large datasets
  • 1.6 A closer look at the GPT architecture
  • 1.7 Building a large language model
  • 2 Working with Text Data
  • 2.1 Understanding word embeddings
  • 2.2 Tokenizing text
  • 2.3 Converting tokens into token IDs
  • 2.4 Adding special context tokens
  • 2.5 Byte pair encoding
  • 2.6 Data sampling with a sliding window
  • 2.7 Creating token embeddings
  • 2.8 Encoding word positions
  • 3 Coding Attention Mechanisms
  • 3.1 The problem with modeling long sequences
  • 3.2 Capturing data dependencies with attention mechanisms
  • 3.3 Attending to different parts of the input with self-attention
  • 3.4 Implementing self-attention with trainable weights
  • 3.5 Hiding future words with causal attention
  • 3.6 Extending single-head attention to multi-head attention
  • 4 Implementing a GPT Model from Scratch to Generate Text
  • 4.1 Coding an LLM architecture
  • 4.2 Normalizing activations with layer normalization
  • 4.3 Implementing a feed-forward network with GELU activations
  • 4.4 Adding shortcut connections
  • 4.5 Connecting attention and linear layers in a transformer block
  • 4.6 Coding the GPT model
  • 4.7 Generating text
  • 5 Pretraining on Unlabeled Data
  • 5.1 Evaluating generative text models
  • 5.2 Training an LLM
  • 5.3 Decoding strategies to control randomness
  • 5.4 Loading and saving model weights in PyTorch
  • 5.5 Loading pretrained weights from OpenAI
  • 6 Fine-Tuning for Classification
  • 6.1 Different categories of fine-tuning
  • 6.2 Preparing the dataset
  • 6.3 Creating data loaders
  • 6.4 Initializing a model with pretrained weights
  • 6.5 Adding a classification head
  • 6.6 Calculating the classification loss and accuracy
  • 6.7 Fine-tuning the model on supervised data
  • 6.8 Using the LLM as a spam classifier
  • 7 Fine-Tuning to Follow Instructions
  • 7.1 Introduction to instruction fine-tuning
  • 7.2 Preparing a dataset for supervised instruction fine-tuning
  • 7.3 Organizing data into training batches
  • 7.4 Creating data loaders for an instruction dataset
  • 7.5 Loading a pretrained LLM
  • 7.6 Fine-tuning the LLM on instruction data
  • 7.7 Extracting and saving responses
  • 7.8 Evaluating the fine-tuned LLM
  • 7.9 Conclusions
  • Appendix A: Introduction to PyTorch
  • A.1 What is PyTorch?
  • A.2 Understanding tensors
  • A.3 Seeing models as computation graphs
  • A.4 Automatic differentiation made easy
  • A.5 Implementing multilayer neural networks
  • A.6 Setting up efficient data loaders
  • A.7 A typical training loop
  • A.8 Saving and loading models
  • A.9 Optimizing training performance with GPUs
  • Appendix B: References and Further Reading
  • Appendix C: Exercise Solutions
  • Appendix D: Adding Bells and Whistles to the Training Loop
  • D.1 Learning rate warmup
  • D.2 Cosine decay
  • D.3 Gradient clipping
  • D.4 The modified training function
  • Appendix E: Parameter-Efficient Fine-Tuning with LoRA
  • E.1 Introduction to LoRA
  • E.2 Preparing the dataset
  • E.3 Initializing the model
  • E.4 Parameter-efficient fine-tuning with LoRA
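
Code Sample

To give a flavor of the hands-on style the chapters above follow, here is a minimal sketch of byte pair encoding (section 2.5) in action. It assumes the tiktoken library, which the book's example code uses for the GPT-2 tokenizer; the printed token IDs are illustrative.

    # Minimal byte pair encoding demo with the GPT-2 tokenizer.
    # Requires: pip install tiktoken
    import tiktoken

    # Load the pretrained GPT-2 BPE vocabulary (50,257 tokens).
    tokenizer = tiktoken.get_encoding("gpt2")

    text = "Hello, do you like tea?"

    # Encode the text into integer token IDs.
    token_ids = tokenizer.encode(text)
    print(token_ids)  # e.g. [15496, 11, 466, 345, 588, 8887, 30]

    # Decode the IDs back into the original string.
    print(tokenizer.decode(token_ids))  # Hello, do you like tea?

The book first walks through a simple tokenizer built from scratch and then adopts tiktoken's BPE implementation for the later chapters, so the tokenizer is the one pretrained GPT-2 component it borrows rather than reimplements.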