An intentionally load-heavy BBQ inventory API demonstrating BOTH application-level AND database-level performance issues for DBMarlin monitoring on Kubernetes.
BBQBookkeeper is a demo application purpose-built for showcasing DBMarlin. It simulates a real-world BBQ restaurant chain managing inventory across multiple locations (Seattle, Portland, Austin, Nashville, San Francisco).
NEW: This demo now includes TWO versions of the application:
- 🔴 BAD - Application with N+1 queries, missing indexes, poor connection pooling
- 🟢 GOOD - Optimized application with JOINs, proper indexes, efficient queries
The app comes with a built-in load generator sidecar that hammers the /inventory-by-location endpoint continuously, creating a real stream of database queries. This showcases how both application code AND database configuration affect performance, making it ideal for live demos, workshops, and performance monitoring walkthroughs.
See PERFORMANCE_DEMO.md for a detailed bad vs good comparison
See ANTI_PATTERNS.md for the specific anti-patterns introduced
```
┌─────────────────────────────────────────────┐
│ Kubernetes Pod (x3)                         │
│                                             │
│  ┌──────────────────┐    ┌───────────────┐  │
│  │ load-generator   │    │ bbqinventory  │  │
│  │ (curl sidecar)   │───▶│ app :8080     │  │
│  │ req every 50ms   │    │ (Go)          │  │
│  └──────────────────┘    └───────┬───────┘  │
└──────────────────────────────────┼──────────┘
                                   │
                      ┌────────────▼────────────┐
                      │    PostgreSQL :5432     │
                      │    (persistent PVC)     │
                      └────────────┬────────────┘
                                   │
                      ┌────────────▼────────────┐
                      │        DBMarlin         │
                      │ (monitoring & analysis) │
                      └─────────────────────────┘
```
| Method | Endpoint | Description |
|---|---|---|
| GET | /health | App + DB health check |
| GET | /inventory-by-location?location=Seattle | Get inventory for a location |
| GET | /inventory | Get all inventory items |
| POST | /inventory | Add a new inventory item |
| PUT | /inventory/{id} | Update item quantity |
| DELETE | /inventory/{id} | Remove an item |
| GET | /locations | List all locations |
- Kubernetes cluster (local or cloud)
- `kubectl` configured
- Docker (to build and push the image)
- DBMarlin pointed at your PostgreSQL instance
- Liquibase (for database schema management with context switching)
```bash
# Build and push both bad and good images
./build-and-push.sh

# Or build manually:
docker build --build-arg BUILD_VERSION=bad -t ghcr.io/sonichigo/bbqbookkeeper:bad .
docker build --build-arg BUILD_VERSION=good -t ghcr.io/sonichigo/bbqbookkeeper:good .
```

```bash
# Deploy bad application
kubectl apply -f k8s/k8s-deploy-bad.yaml

# Run Liquibase with bad context (50k rows, no indexes)
# Ensure your Liquibase pipeline uses: --contexts=bad
```

Access the bad app at http://<cluster-ip>:30003

```bash
# Deploy good application
kubectl apply -f k8s/k8s-deploy-good.yaml

# Run Liquibase with good context (10k rows, with indexes)
# Ensure your Liquibase pipeline uses: --contexts=good
```

Access the good app at http://<cluster-ip>:30002
```bash
# Check bad version metrics
curl http://<bad-service>:8080/metrics | jq .avg_response_ms
# Expected: 500-5000ms

# Check good version metrics
curl http://<good-service>:8080/metrics | jq .avg_response_ms
# Expected: 5-50ms
```

Application Issues:
- N+1 queries (1000s of separate queries)
- No LIMIT clauses (loads all 50k rows)
- In-memory filtering instead of SQL WHERE
- Connection pool: only 2 connections
- Manual aggregation (nested loops)
Database Issues:
- 50,000 inventory + 50,000 supplier rows
- NO indexes on LOWER() columns
- Sequential scans on every query
- No deduplication (JOIN fanout)
Result: Response times in SECONDS, 100% CPU usage
Application Fixes:
- Efficient JOINs (1 query replaces 1000s)
- LIMIT clauses on large queries
- SQL WHERE filtering
- Connection pool: 25 connections
- SQL GROUP BY aggregations
Database Fixes:
- 10,000 inventory + 10,000 supplier rows
- Functional indexes on all LOWER() columns
- Index scans on every query
- Deduplicated suppliers
Result: Response times in MILLISECONDS, <10% CPU usage
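The database-side fix boils down to a functional index plus query shapes like the following. The `ON inventory (LOWER(location))` index expression comes from seed-good.sql as described later in this README; the index name, the `suppliers` join columns, and the selected columns are assumptions for illustration.

```sql
-- Functional index so WHERE LOWER(location) = ... becomes an index scan
CREATE INDEX idx_inventory_location_lower ON inventory (LOWER(location));

-- One JOIN with SQL-side filtering, aggregation, and a LIMIT replaces
-- the N+1 loop, the in-memory filter, and the manual nested-loop sums
SELECT i.item_name, SUM(i.quantity) AS total_qty
FROM inventory i
JOIN suppliers s ON s.item_id = i.id
WHERE LOWER(i.location) = LOWER($1)
GROUP BY i.item_name
LIMIT 100;
```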
| Metric | Bad State | Good State |
|---|---|---|
| Query plan | Sequential scan (50k rows) | Index scan (<100 rows) |
| Avg query time | 500-5000ms | 5-50ms |
| Query count | 10,000+ queries/min (N+1) | 100-200 queries/min (JOINs) |
| Top statement | SELECT ... (sequential scan) | SELECT ... (index scan) |
| Wait events | IO waits, lock waits | Minimal waits |
| Connection pool | Exhausted (queued) | Healthy (active) |
Speedup: 100-1000x faster!
```
inventory/
├── main.go               # Entrypoint: reads SQL_DIR and DB_SERVER env vars
├── db.go                 # Postgres bootstrap + runSQLFile() loader
├── handlers.go           # HTTP route handlers
├── models.go             # Shared structs
├── go.mod / go.sum       # Go module
├── Dockerfile            # Multi-stage build, embeds ui/ (~10MB final image)
├── postgres.yaml         # Postgres PVC, Deployment, Service (deploy first)
├── k8s-deploy.yaml       # App Deployment + Service (mounts ConfigMap as SQL_DIR)
├── ui/
│   └── index.html        # Demo query driver UI, served at /ui/
├── sql/
│   ├── schema.sql        # Table definitions: runs once on startup
│   ├── seed-bad.sql      # No index → sequential scan (the problem)
│   └── seed-good.sql     # Functional index added (the fix)
└── k8s/
    ├── configmap-bad.yaml   # Mounts schema.sql + seed-bad.sql
    └── configmap-good.yaml  # Mounts schema.sql + seed-good.sql
```
The app reads two SQL files at startup from the directory set by `SQL_DIR` (default: `/etc/bbq-sql`):

- `schema.sql`: creates tables and seeds location data (idempotent, safe to re-run)
- `seed-bad.sql`: the problematic seed data (no index)
- `seed-good.sql`: the fixed seed data (with functional index)
Swapping the ConfigMap and restarting the deployment is all it takes to flip between the degraded and fixed states; no Docker rebuild needed.
```
inventory/
├── postgres.yaml      # Postgres PVC, ConfigMap, Deployment, Service
└── k8s-deploy.yaml    # BBQBookkeeper app Deployment + Service
```
**Important:** `postgres.yaml` and `k8s-deploy.yaml` are intentionally separate. Always deploy Postgres first and confirm it is ready before deploying the app.
1. Start Postgres in Docker:

```bash
docker run -d --name pg \
  -e POSTGRES_USER=user \
  -e POSTGRES_PASSWORD=password \
  -e POSTGRES_DB=mydatabase \
  -p 5432:5432 postgres:14
```

2. Run the app:

```bash
go mod tidy
SQL_DIR=./sql go run .
```

3. Test it:

```bash
curl "http://localhost:8080/inventory-by-location?location=Seattle"
curl "http://localhost:8080/health"
```

Step 1: Deploy Postgres:

```bash
cd inventory && kubectl apply -f postgres.yaml
```

Step 2: Wait for Postgres to be ready:

```bash
kubectl rollout status deployment/postgres-dbops -n default
kubectl get pods -n default -l app=postgres-dbops
```

Postgres exposes itself inside the cluster as `postgres-dbops:5432`. The app is pre-configured to connect to this service name via `DB_SERVER=postgres-dbops`.
Step 3: Build and push the app image:

```bash
docker build -t YOUR_REGISTRY/bbqbookkeeper:latest .
docker push YOUR_REGISTRY/bbqbookkeeper:latest
```

Step 4: Update the image in the manifest:

```yaml
# k8s-deploy.yaml
image: YOUR_REGISTRY/bbqbookkeeper:latest
```

Step 5: Apply the bad ConfigMap (starting state) and deploy the app:

```bash
kubectl apply -f k8s/configmap-bad.yaml
kubectl apply -f k8s-deploy.yaml
kubectl rollout status deployment/bbqbookeeper-web -n default
```

- Open the UI and enable Auto Blast
- Switch to DBMarlin; watch executions climb and average time increase
- Show the `seed-bad.sql` file; point out the missing index and the `LOWER()` wrapping
Step 6: Get the external IP and test:

```bash
kubectl get svc bbqbookkeeper-web -n default
# Open http://<EXTERNAL-IP>:8080/ui/ in your browser to access the demo UI
```

Step 7: Swap to the good ConfigMap to fix the issue:

```bash
kubectl apply -f k8s/configmap-good.yaml
kubectl rollout restart deployment/bbqbookeeper-web -n default
```

- Stay on DBMarlin; watch average time drop as pods roll over
- Show the `seed-good.sql` file; point out `CREATE INDEX ... ON inventory (LOWER(location))`
- Use DBMarlin's Activity Comparison view to show before vs after side by side