Interactive Labs
Explore hands-on implementations that bring each chapter to life. Each lab is a code-driven walk through real-world banking AI, turning complex concepts into working models and actionable insights.
Visualizing Credit Risk with CNNs
This hands-on lab explores how convolutional neural networks (CNNs) can be used to analyze LIDAR image data to predict loan delinquency by region. Readers will learn to build a custom image-based model using PyTorch and torchvision—bridging spatial data with financial insight.
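A minimal sketch of the kind of image model the lab builds, assuming a pretrained torchvision ResNet-18 adapted for binary delinquency classification; the data paths, preprocessing, and label scheme here are placeholders rather than the lab's actual pipeline:

```python
# Sketch: torchvision ResNet-18 with a new 2-class head (delinquent vs. current).
# Dataset layout and labels are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models, transforms

# Standard ImageNet-style preprocessing for 3-channel regional image tiles (assumed).
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Pretrained backbone, replacing the final layer with a binary classification head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, labels):
    """One gradient step on a batch of (image, label) tensors."""
    model.train()
    optimizer.zero_grad()
    logits = model(images)            # shape: (batch, 2)
    loss = criterion(logits, labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```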
Forecasting Delinquencies with Deep Learning
Dive into sequential modeling using LSTM and GRU architectures to forecast loan delinquency trends. This lab introduces key techniques for temporal prediction using real-world loan performance data.
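A minimal sketch of a sequence model in the spirit of this lab, assuming an LSTM over monthly loan-performance features; the feature count and window length are illustrative, not the lab's actual schema:

```python
# Sketch: LSTM mapping a window of monthly loan features to a
# next-period delinquency probability.
import torch
import torch.nn as nn

class DelinquencyLSTM(nn.Module):
    def __init__(self, n_features=8, hidden_size=64, num_layers=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size,
                            num_layers=num_layers, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):
        # x: (batch, seq_len, n_features)
        out, _ = self.lstm(x)
        # Forecast from the final time step's hidden state.
        return torch.sigmoid(self.head(out[:, -1, :]))

model = DelinquencyLSTM()
dummy = torch.randn(32, 12, 8)    # 32 loans, 12 months, 8 features (assumed)
print(model(dummy).shape)         # torch.Size([32, 1])
```

Swapping `nn.LSTM` for `nn.GRU` requires only changing the recurrent layer; the rest of the pipeline stays the same.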
NLP for Financial Document Intelligence
Explore the power of Transformer models for analyzing financial text. Using the Hugging Face library, this lab walks through tokenization, fine-tuning, and evaluating pre-trained models on finance-specific language tasks.
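A minimal sketch of the Hugging Face workflow the lab walks through, using a placeholder checkpoint (`distilbert-base-uncased`) and a two-label task; the lab may fine-tune a different finance-specific model:

```python
# Sketch: tokenizing financial text and scoring it with a pre-trained
# Hugging Face classifier. Checkpoint and labels are illustrative.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "distilbert-base-uncased"            # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=2)                     # e.g. risky vs. benign

texts = ["The borrower missed two consecutive payments.",
         "The account remains in good standing."]
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**batch).logits
probs = torch.softmax(logits, dim=-1)
print(probs)   # the classification head is untrained: fine-tune before trusting these scores
```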
Modeling Relationships in Loan Data with GNNs
This lab transforms Freddie Mac loan data into graph structures and applies Graph Neural Networks—including GCN, GAT, and GraphSAGE—to predict mortgage defaults. Learn how relational learning improves predictive accuracy in financial contexts.
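A minimal sketch of a two-layer GCN in PyTorch Geometric for node-level default prediction; how the lab builds edges from Freddie Mac loan fields (for example, linking loans that share a seller or zip code) is assumed here, not shown:

```python
# Sketch: two-layer GCN over a toy loan graph; each node is a loan,
# each edge an assumed relationship between loans.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv
from torch_geometric.data import Data

class LoanGCN(torch.nn.Module):
    def __init__(self, in_dim, hidden_dim=32, n_classes=2):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, n_classes)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        return self.conv2(x, edge_index)

# Toy graph: 4 loan nodes with 5 features each.
x = torch.randn(4, 5)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 0, 3, 2]], dtype=torch.long)
data = Data(x=x, edge_index=edge_index)

model = LoanGCN(in_dim=5)
logits = model(data.x, data.edge_index)   # shape: (4, 2) — per-loan default logits
```

The GAT and GraphSAGE variants follow the same structure, substituting `GATConv` or `SAGEConv` for `GCNConv`.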
Prompt Engineering and Fine-Tuning with LLaMA
Learn how to harness LLaMA 3.2B for domain-specific financial tasks. This lab covers prompt design, model customization, and practical fine-tuning techniques to build more intelligent language-based systems in banking.
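A minimal sketch of prompting a LLaMA-family model through Hugging Face transformers; the checkpoint name below is an assumption (access to Meta's gated weights is required), and the lab's prompt templates and fine-tuning steps go well beyond this:

```python
# Sketch: a single finance-domain prompt sent to a LLaMA-family model.
# The model ID is an assumed placeholder, not necessarily the lab's checkpoint.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "meta-llama/Llama-3.2-3B-Instruct"   # assumed, gated access required
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

prompt = (
    "You are a mortgage risk analyst. In two sentences, explain what a "
    "rising debt-to-income ratio implies for delinquency risk."
)
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=80, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```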
Integrating Text, Images, Time-Series & Tabular Data
Bring everything together with a multimodal approach. This lab combines static financial variables, time-series trends, geospatial images, and text into a unified deep learning model—offering a holistic view of borrower risk.
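A minimal sketch of a late-fusion architecture, assuming each modality is first reduced to an embedding (precomputed CNN image features, a Transformer text embedding, a GRU over the time series, and a linear layer for tabular inputs); all dimensions are illustrative:

```python
# Sketch: late fusion of tabular, time-series, image, and text embeddings
# into a single borrower risk score.
import torch
import torch.nn as nn

class MultimodalRiskModel(nn.Module):
    def __init__(self, tab_dim=10, ts_feat=8, img_emb=128, txt_emb=768):
        super().__init__()
        self.tab_enc = nn.Linear(tab_dim, 32)                 # tabular branch
        self.ts_enc = nn.GRU(ts_feat, 32, batch_first=True)   # time-series branch
        self.img_proj = nn.Linear(img_emb, 32)   # e.g. precomputed CNN features
        self.txt_proj = nn.Linear(txt_emb, 32)   # e.g. Transformer [CLS] embedding
        self.head = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, tab, ts, img_emb, txt_emb):
        _, h = self.ts_enc(ts)                   # h: (num_layers, batch, 32)
        fused = torch.cat([
            torch.relu(self.tab_enc(tab)),
            h[-1],
            torch.relu(self.img_proj(img_emb)),
            torch.relu(self.txt_proj(txt_emb)),
        ], dim=-1)
        return torch.sigmoid(self.head(fused))   # borrower risk probability

model = MultimodalRiskModel()
score = model(torch.randn(4, 10), torch.randn(4, 12, 8),
              torch.randn(4, 128), torch.randn(4, 768))
print(score.shape)   # torch.Size([4, 1])
```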
Building Ethical and Transparent AI Models
A two-part lab focused on responsible AI. First, evaluate credit scoring fairness across demographic groups. Then, use SHAP to interpret predictions from language models—reinforcing trust and transparency in model outputs.
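A minimal sketch of both parts, using synthetic data and illustrative column names; note that where the lab applies SHAP to a language model, this sketch uses a tree classifier via `shap.TreeExplainer` to keep the example self-contained:

```python
# Sketch, part 1: a selection-rate (demographic parity) gap across groups.
# Sketch, part 2: SHAP attributions for a simple credit-scoring model.
# All data below is synthetic; feature and group names are assumptions.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier

# --- Part 1: fairness check on model decisions by demographic group ---
df = pd.DataFrame({
    "group": np.random.choice(["A", "B"], size=500),
    "approved": np.random.binomial(1, 0.6, size=500),
})
rates = df.groupby("group")["approved"].mean()
print("Selection rates:\n", rates)
print("Parity gap:", rates.max() - rates.min())

# --- Part 2: SHAP attributions for individual credit predictions ---
X = pd.DataFrame(np.random.randn(500, 4),
                 columns=["income", "dti", "fico", "ltv"])
y = np.random.binomial(1, 0.3, size=500)
clf = RandomForestClassifier(n_estimators=50).fit(X, y)

explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X.iloc[:20])
print("SHAP values shape:", np.shape(shap_values))   # per-class feature attributions
```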