• Overview
  • Course Highlights
  • Takeaways
  • CS 188: Introduction to Artificial Intelligence

    Grade Received: A

    Search | MDP | RL | BNs | ML

    Link: Course Website

    Overview

    Introduction

    This course introduces the basic ideas and techniques underlying the design of intelligent computer systems. A specific emphasis is on the statistical and decision-theoretic modeling paradigm.

    Key Topics Covered

    1. Search: Uninformed strategies (BFS, DFS, UCS) and informed search with A* and admissible heuristics

    2. Constraint Satisfaction Problems: How computers solve CSPs with backtracking search and constraint propagation

    3. Game Trees: How artificial intelligence plays adversarial games

    4. Markov Decision Processes: How computers choose the next best action in a stochastic state space using rewards and discounting

    5. Reinforcement Learning: Topics building on MDPs, including Q-Learning, policy iteration, and epsilon-greedy exploration

    6. Bayes Nets: Probabilistic, data-driven outcome models based on the laws of probability
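    To give a flavor of the MDP/RL topics above, here is a minimal sketch of tabular Q-learning with epsilon-greedy exploration. The two-state chain MDP, its dynamics, and all parameter values are hypothetical, chosen only to make the update rule concrete:

```python
import random

def q_learning_demo(alpha=0.5, gamma=0.9, epsilon=0.1, episodes=500, seed=0):
    """Tabular Q-learning on a toy two-state chain MDP (illustrative only).

    States: 0 (start), 1 (goal, terminal). Actions: 0 = stay, 1 = advance.
    """
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}
    for _ in range(episodes):
        s = 0
        while s != 1:
            # Epsilon-greedy: explore a random action with probability epsilon,
            # otherwise exploit the current best Q-value.
            if rng.random() < epsilon:
                a = rng.choice([0, 1])
            else:
                a = max((0, 1), key=lambda act: Q[(s, act)])
            # Toy deterministic dynamics: advancing reaches the goal, reward 1.
            s2, r = (1, 1.0) if a == 1 else (0, 0.0)
            # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
            target = r + gamma * max(Q[(s2, 0)], Q[(s2, 1)])
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s2
    return Q

Q = q_learning_demo()
```

    After training, the learned values favor advancing toward the goal over staying put, which is exactly the policy-improvement behavior the course builds on.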

    Coursework Highlights

    Project 1: Search

    Project Visualization

    Implemented DFS, BFS, UCS, and A* with a customized A* heuristic that passed the performance benchmarks

    Developed a game-state representation that captured the information required by the Pacman A* heuristic
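    The search algorithms above share one frontier-based skeleton; a minimal generic A* sketch looks like the following. The grid world and Manhattan-distance heuristic in the usage example are hypothetical stand-ins for the actual Pacman state space:

```python
import heapq

def a_star(start, goal, neighbors, heuristic):
    """Generic A*: neighbors(s) yields (next_state, step_cost) pairs.

    Returns a lowest-cost path from start to goal, or None if unreachable.
    """
    # Frontier entries are (f, g, state, path) so the heap pops lowest f first.
    frontier = [(heuristic(start), 0.0, start, [start])]
    best_g = {start: 0.0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        for nxt, cost in neighbors(state):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + heuristic(nxt), g2, nxt, path + [nxt]))
    return None

# Hypothetical 5x5 grid with unit step costs and a Manhattan-distance
# heuristic (admissible here, so A* returns an optimal path).
def grid_neighbors(s):
    x, y = s
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < 5 and 0 <= ny < 5:
            yield (nx, ny), 1.0

path = a_star((0, 0), (4, 4), grid_neighbors,
              lambda s: abs(s[0] - 4) + abs(s[1] - 4))
```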

    Achieved 98.5% accuracy in email classification, placing me 15th on the leaderboard of 1120 submissions

    Project 3: Reinforcement Learning

    Project Visualization

    Implemented value iteration using Bellman updates, policy iteration, and several Q-learning-based evaluation functions.

    Designed evaluation functions for Pacman to capture food in the presence of multiple ghosts and competing objectives.

    Produced an observable process of Pacman learning the state space and improving its policy with each iteration.
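    The core of the project, value iteration, repeatedly applies the Bellman backup V(s) = max_a Σ_s' T(s,a,s')[R(s,a,s') + γV(s')] until the values converge. A minimal sketch, using a hypothetical three-state chain rather than the actual Pacman MDP:

```python
def value_iteration(states, actions, transition, reward, gamma=0.9, tol=1e-6):
    """Value iteration via repeated Bellman backups.

    actions(s) -> list of actions (empty for terminal states),
    transition(s, a) -> list of (next_state, probability) pairs,
    reward(s, a, s2) -> immediate reward.
    """
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            if not actions(s):      # terminal state: value stays 0
                continue
            best = max(
                sum(p * (reward(s, a, s2) + gamma * V[s2])
                    for s2, p in transition(s, a))
                for a in actions(s)
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

# Hypothetical chain MDP: 0 -> 1 -> 2, where state 2 is terminal and
# reaching it yields reward 1; transitions are deterministic.
V = value_iteration(
    states=[0, 1, 2],
    actions=lambda s: [] if s == 2 else ["go"],
    transition=lambda s, a: [(s + 1, 1.0)],
    reward=lambda s, a, s2: 1.0 if s2 == 2 else 0.0,
    gamma=0.9,
)
```

    The discount factor γ makes states farther from the reward worth less, which is why V[0] converges to γ times V[1] here.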

    Project 5: Machine Learning

    Project Visualization

    Implemented a Perceptron classifier from scratch to handle binary classification tasks, showcasing foundational understanding of supervised learning.
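    A from-scratch perceptron is small enough to sketch in full. This version uses {-1, +1} labels and the classic mistake-driven update rule (w += y·x, b += y); the AND-gate training data in the usage example is a hypothetical stand-in for the project's datasets:

```python
def train_perceptron(data, epochs=10):
    """Perceptron learning rule for binary labels in {-1, +1}.

    data is a list of (feature_tuple, label) pairs; updates happen
    only when the current weights misclassify an example.
    """
    dim = len(data[0][0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, y in data:
            score = sum(wi * xi for wi, xi in zip(w, x)) + b
            pred = 1 if score >= 0 else -1
            if pred != y:  # mistake: nudge the boundary toward the example
                w = [wi + y * xi for wi, xi in zip(w, x)]
                b += y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1

# Hypothetical linearly separable data (the AND function), so the
# perceptron convergence theorem guarantees it learns a separator.
data = [((0, 0), -1), ((0, 1), -1), ((1, 0), -1), ((1, 1), 1)]
w, b = train_perceptron(data)
```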

    Built a fully-connected neural network in PyTorch to classify MNIST digits with high accuracy, including techniques like mini-batch gradient descent and ReLU activations.

    Developed a recurrent neural network (RNN) for character-level language identification, experimenting with hidden state representations and long-sequence dependencies.

    Explored advanced architectures like convolutional neural networks (CNNs) and attention mechanisms to enhance performance in classification tasks.

    Created an attention block for better text prioritization and used the transformer architecture to build a mini ChatGPT-style forward function.
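    The attention block at the heart of that transformer work computes Attention(Q, K, V) = softmax(QKᵀ/√d_k)V. A plain-Python single-head sketch (the tiny query/key/value matrices in the usage example are made up to show the weighting behavior, not taken from the project):

```python
import math

def scaled_dot_product_attention(Q, K, V):
    """Single-head scaled dot-product attention on lists of vectors."""
    d_k = len(K[0])
    out = []
    for q in Q:
        # Raw attention scores of this query against every key, scaled by sqrt(d_k).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k) for k in K]
        # Numerically stable softmax over the scores.
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        total = sum(exps)
        weights = [e / total for e in exps]
        # Output is the attention-weighted sum of the value vectors.
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

# The query aligns strongly with the first key, so the output is
# dominated by the first value vector.
out = scaled_dot_product_attention(
    Q=[[10.0, 0.0]],
    K=[[10.0, 0.0], [0.0, 10.0]],
    V=[[1.0, 0.0], [0.0, 1.0]],
)
```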

    Takeaways

    This was an eye-opening course that showed me the importance of data structures and probability in Artificial Intelligence.

    I had originally thought that AI was simply applied math and CS theory, but this course let me see AI in all of its different aspects beyond just ML.