Rules Atlas

Overview

The Problem

Modern board games often require rules clarification. Finding answers is slow and disruptive, forcing players to dig through 20–40-page rulebooks mid-game.

The Goal

An AI tool that quickly clarifies board game rules.

Stack

Cursor · ChatGPT 5.1 + OpenAI vector stores · Gemini 2.5 Flash · Tailwind CSS · shadcn-style components · Figma

Research

Testing “Competitors”

I tested several AI rulebook clarification tools. They all used RAG through a standard chat interface. When they returned an answer, I inevitably wondered, “Is that correct?” It only took a few tests (asking questions I already knew the answer to) to get a wrong answer and decide I couldn’t trust the tool.

What Users Are Saying

I read through BoardGameGeek forums about AI rule clarification, and the take was consistent: players need certainty, and RAG-based rule clarification wasn’t trustworthy enough.

Building

Flipping the Design Process

Since I was coding with AI, I flipped the design process on its head. My biggest unknown wasn’t the design—it was whether I could even vibe-code certain key features. Before I designed anything, I used Cursor to build a bare-bones v0.

Solution: Verifiable RAG

I created an AI interface that gave answers users could quickly verify. Alongside the RAG chat, an adjacent Rulebook Viewer jumps to the relevant rulebook page, so users can confirm the answer.

The Edge of Vibe Coding

Initially, I hoped to highlight the exact supporting quote in the rulebook, but precise highlighting inside a rendered PDF proved beyond what I could reliably vibe-code. After many attempts, I decided that feature would have to wait for a future version (or a real coder).

How I Built It

I used Cursor to build the app. The upload flow sent each rulebook (PDF) to its own OpenAI vector store through the API, and the chat interface then used ChatGPT 5.1 to query those vector stores for answers about each game. However, I ran into the same hallucination issues as my “competitors”.

Why did uploading PDFs to OpenAI’s vector stores produce such unreliable answers?

After digging in, I learned why: the text-extraction step was reading each line left-to-right across the entire page. Most rulebooks use multi-column layouts, so text from adjacent columns interleaved and confused the model. This is likely why my “competitors” were so unreliable.
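A toy illustration of the failure mode, using a made-up two-column page:

```python
# Toy model of a two-column rulebook page: each column is a list of words.
left_col = ["Setup:", "Shuffle", "the", "deck."]
right_col = ["Scoring:", "Count", "your", "points."]

# Naive extraction scans each visual row left-to-right across the whole
# page, so words from the two columns interleave.
naive = " ".join(word for row in zip(left_col, right_col) for word in row)

# Column-aware extraction finishes the left column before starting the right.
column_aware = " ".join(left_col + right_col)

print(naive)         # Setup: Scoring: Shuffle Count the your deck. points.
print(column_aware)  # Setup: Shuffle the deck. Scoring: Count your points.
```

The first string is what the vectorizer was embedding; no retrieval step can recover clean rules from chunks like that.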

To fix this, I added a preprocessing step to the upload flow: Gemini 2.5 Flash reads each PDF as a visual document and extracts clean text in correct reading order before ingestion.

This lets OpenAI vectorize text that is no longer interleaved across columns, and its answers became consistently accurate.

Now that the core flow was working, I shifted to design.

Design

Note: since this project was focused on AI coding, I won’t cover my design process in detail.

Visual Design

I used Figma to mock up the main pages and compare different fonts, color palettes, page layouts, and other visual design choices.

My goal was to convey authority and trust, so the visual design leans established rather than AI-forward. Serif headers and a paper-and-brass color palette evoke printed rulebooks, shifting attention from AI to the authority of the source.

Frontend

I handled styling using Tailwind CSS (colors, radius, typography). I started from shadcn-style components (buttons, inputs, etc.) and created a hidden /ui page that acted as a component sandbox—showing all components, variants, text styles, and colors in one place.

Branding

With the name and logo, I tried to convey “authoritative” and “comprehensive” to match the product’s promise.

Vibe Coding Learnings

As a product designer with only a high-level understanding of code, I often felt like I was flying blind while vibe coding. Much of the code was unfamiliar, but I did learn some tricks that got better results.

Rules and Documentation

Clear constraints matter, so I added User Rules in Cursor.

I also ensured coding agents referenced two key context files.

Prompting Techniques

For simple changes, Cursor’s Composer model was fastest and most token-efficient. For complex changes, I reached for a stronger model (Opus 4.5).

Debugging and Version Control

Errors are the norm. If a few rounds of telling Opus 4.5 to debug didn’t work, I could always roll back to the last stable version in GitHub.

Background

Team

In 2020, I joined Reflektive as a full-time product designer. Our team included 1 other product designer, 4 PMs, and around 30 engineers.
