How I Built PawPost: An AI-Powered Pet Adoption Platform for Animal Shelters

When I set out to build a hackathon project that actually solves a real problem, I kept coming back to animal shelters. Most small shelters run on volunteers and thin budgets — yet they’re expected to maintain a social media presence, respond to adoption inquiries, and keep their pet listings up to date. The admin overhead alone is enough to burn people out.

PawPost is my answer to that problem: a multi-tenant SaaS platform that lets shelter staff upload a photo of a pet, have AI fill in the details and write the adoption post, and schedule it across Bluesky, Facebook, and Instagram — all in under two minutes.

This post walks through how it’s built, the technical decisions behind it, and what I learned.


What PawPost Does

A staff member at a shelter opens the app, uploads a photo of a new dog named Luna. Claude Vision reads the photo and auto-fills: species (dog), estimated breed (Lab Mix), approximate age (2 years), gender (female), personality notes (friendly, energetic). The staff member edits what needs adjusting, picks a tone (Warm & hopeful, Playful & fun, Professional, or Urgent need), and clicks Generate. Claude writes a 280-character adoption post optimized for social media.

From there, Luna goes into the posting queue. The shelter can post immediately or schedule her for a specific time. When she posts, she goes to Bluesky, Facebook, and Instagram simultaneously. When she gets adopted, one click moves her to the adopted column.

Every shelter gets their own public page at /org/their-slug — a browsable pet feed that any visitor can see. Each pet has its own detail page. Both pages have a floating AI chat widget powered by Claude Haiku, so potential adopters can ask questions like “do you have any cats under 2 years?” and get real answers based on actual shelter data.


Architecture

Stack

  • Frontend: React + TypeScript, built with Vite, deployed to Netlify
  • Backend: Express + TypeScript, deployed to Railway
  • Database + Storage: Supabase (Postgres + object storage for pet photos)
  • AI: Anthropic Claude API (vision for photo analysis, text generation for posts, Haiku for the chat widget)
  • Automation: n8n on Railway for scheduled posting

No ORM. The backend talks to Supabase directly via the JS client with typed queries. No Redux. Auth state lives in a React context backed by Supabase’s onAuthStateChange.

Multi-Tenancy

Every database record is scoped by org_id. The auth middleware resolves the org from the verified JWT on every request — staff at Happy Paws Shelter cannot read or modify data belonging to Riverside Rescue, even if they somehow obtained a valid token for the other org. This isn’t enforced at the row-level security layer (though Supabase supports it) — it’s enforced in every query in db.ts, which keeps the logic explicit and auditable.

The Pet Workflow

Pets move through five states: draft → queued → scheduled → posted → adopted. There’s also archived for soft deletion. The posting queue displays pets in all active states with filters. The n8n automation picks up the next queued or due-scheduled pet for each org on a 30-minute interval.

Scheduled posts have a scheduled_at timestamp. The next-to-post endpoint returns them when scheduled_at <= now(). Queued posts have no specific time and go in created_at order — first in, first out.
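The selection rule can be sketched as a pure function. Field names match the description above, but this is an illustration, not the actual db.ts query, and it assumes due-scheduled pets take priority over the FIFO queue:

```typescript
// Sketch of the next-to-post rule (assumed priority: due-scheduled first, then FIFO queued).
type QueueItem = {
  id: string;
  status: 'queued' | 'scheduled';
  scheduled_at: string | null; // ISO timestamp, only set for scheduled pets
  created_at: string;          // ISO timestamp
};

export function nextToPost(pets: QueueItem[], now: Date): QueueItem | null {
  // Scheduled pets whose time has arrived, earliest scheduled_at first
  const dueScheduled = pets
    .filter((p) => p.status === 'scheduled' && p.scheduled_at !== null && new Date(p.scheduled_at) <= now)
    .sort((a, b) => a.scheduled_at!.localeCompare(b.scheduled_at!));
  if (dueScheduled.length > 0) return dueScheduled[0];

  // Otherwise: oldest queued pet, first in, first out
  const queued = pets
    .filter((p) => p.status === 'queued')
    .sort((a, b) => a.created_at.localeCompare(b.created_at));
  return queued[0] ?? null;
}
```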

Non-Blocking Social Posting

When a staff member clicks “Post Now”, the backend responds as soon as Bluesky confirms. The Facebook and Instagram posts happen after the response has already been sent:

POST /api/pets/:id/post-now
  → authenticate Bluesky
  → upload image blob
  → create Bluesky post (with richtext hashtag facets)
  → mark pet as posted in DB
  → send 200 OK  ← staff sees success immediately
  → [background] POST to Facebook Page
  → [background] POST to Instagram (2-step: container → publish)

Errors from Facebook or Instagram are logged to Railway but don’t fail the request. A successful Bluesky post is never rolled back because Instagram had a transient error.

Instagram requires an image — text-only posts aren’t supported via the API. The backend skips Instagram silently if the pet has no photo, rather than erroring.
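The shape of that fan-out can be sketched as follows. The helper names are assumptions for illustration, not the repo’s actual functions; the point is which calls are awaited and which are fire-and-forget:

```typescript
// Sketch of the non-blocking fan-out (helper names assumed, not from the repo).
// The awaited call gates the HTTP response; the void-ed calls run after it.
export async function postEverywhere(
  postToBluesky: () => Promise<void>,
  postToFacebook: () => Promise<void>,
  postToInstagram: () => Promise<void>,
  log: (msg: string) => void
): Promise<void> {
  await postToBluesky(); // a Bluesky failure still fails the request

  // Fire-and-forget: Meta errors are logged, never thrown, never rolled back
  void postToFacebook().catch((e) => log(`facebook failed: ${e}`));
  void postToInstagram().catch((e) => log(`instagram failed: ${e}`));
}
```

The handler would call this, then send the 200 as soon as the awaited Bluesky step resolves.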

The AI Chat Widget

Each public org page has a floating 💬 button in the bottom-right corner. Clicking it opens a chat panel where visitors can ask questions in plain language. The backend fetches fresh org details and up to 50 available pets on every request, builds a system prompt that strictly scopes Claude to shelter topics, and calls Claude Haiku (not the more expensive Sonnet or Opus — cost control matters).

The system prompt handles edge cases explicitly: off-topic questions get a redirect, rude messages get a warm deflection, and “what AI are you?” questions get answered as “[Shelter Name]’s assistant” — Claude doesn’t reveal the underlying model.

Conversation history lives in React state only. It’s never stored in the database. Each request caps at the last 10 messages to keep token costs predictable — roughly $0.01 per full conversation at Haiku pricing.
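The cap itself is small enough to show. This is a sketch; the message shape mirrors Anthropic’s role/content format, but the function name is mine:

```typescript
// Sketch of the client-side history cap applied before each chat request.
// Only the last N messages ride along, keeping Haiku token costs bounded.
export type ChatMessage = { role: 'user' | 'assistant'; content: string };

export function capHistory(messages: ChatMessage[], max = 10): ChatMessage[] {
  return messages.slice(-max); // keeps the most recent messages, drops the oldest
}
```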


API & Data Fetching

Two Kinds of Endpoints

PawPost has two classes of API routes:

Authenticated routes — everything under /api/pets/* and /api/orgs/* — require a JWT in the Authorization: Bearer header. The backend’s auth-middleware.ts verifies the token against Supabase on every request and attaches req.user (with userId and orgId) before the handler runs. All DB queries then filter by org_id, so a staff member at one shelter can never read or modify another shelter’s data.

Public routes — /api/public/* — require no auth. These power the public-facing org pages: the pet feed, individual pet profiles, the inquiry submission form, and the AI chat widget. Anyone can call them.

The authFetch Helper

Rather than manually attaching headers on every API call, all authenticated requests go through a single thin wrapper:

const API_BASE = import.meta.env.VITE_API_URL ?? '';

export async function authFetch(path: string, options: RequestInit = {}): Promise<Response> {
  const url = `${API_BASE}${path}`;
  const token = localStorage.getItem('pawpost_token');

  const headers: HeadersInit = {
    ...(options.headers ?? {}),
    ...(token ? { Authorization: `Bearer ${token}` } : {}),
  };

  // Only set Content-Type for JSON — let FormData set its own boundary
  if (!(options.body instanceof FormData)) {
    (headers as Record<string, string>)['Content-Type'] = 'application/json';
  }

  return fetch(url, { ...options, headers });
}

Two things this solves:

Environment portability. VITE_API_URL is empty in local development — the Vite dev server proxies /api/* to http://localhost:3001. In production it’s set to the Railway backend URL. Every component calls authFetch('/api/pets') and the correct base URL is prepended automatically. No if (isDev) conditionals anywhere in component code.

Token injection. The JWT from Supabase Auth is stored in localStorage under pawpost_token and refreshed automatically via onAuthStateChange. Every authFetch call picks up the latest token without any component needing to know about auth state.

For photo uploads, the helper deliberately skips setting Content-Type when the body is FormData. This lets the browser set the correct multipart/form-data boundary on its own — a subtle but important detail that would cause silent upload failures if missed.
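That header rule is worth isolating as a pure function. This is an extraction for illustration, not how the repo actually factors it:

```typescript
// Pure restatement of authFetch's header logic (illustrative; the repo inlines it).
export function buildHeaders(token: string | null, isFormData: boolean): Record<string, string> {
  const headers: Record<string, string> = {};
  if (token) headers['Authorization'] = `Bearer ${token}`;
  if (!isFormData) headers['Content-Type'] = 'application/json'; // FormData sets its own boundary
  return headers;
}
```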

Server-Side Pagination and Search

The queue page can hold hundreds of pets across all statuses. Fetching everything at once would be slow and wasteful, so the backend does the slicing:

GET /api/pets?limit=20&offset=40&status=queued&search=luna
  • limit + offset implement simple offset-based pagination — straightforward and works well at this scale
  • status filters by workflow stage (draft, queued, scheduled, posted, adopted)
  • search runs a case-insensitive ilike query against both name and breed across the org’s full pet list — not just the current page. Searching for “luna” on page 3 will still find Luna if she’s on page 1

The frontend debounces the search input by 300ms before firing the request, so the API isn’t hit on every keystroke. When the search term changes, the page resets to 1 automatically.

Status counts — the 5 queued · 12 posted badges at the top of the queue — come from a separate lightweight endpoint, GET /api/pets/stats, which runs one COUNT query per status. Much cheaper than fetching all pets just to count them client-side.
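The shape of that endpoint can be sketched with the actual count query injected. Here `countFn` stands in for a Supabase head-count query such as `.select('*', { count: 'exact', head: true }).eq('org_id', orgId).eq('status', s)` — an assumption about the implementation, not a quote from it:

```typescript
// Sketch of the stats endpoint: one cheap COUNT per status, fanned out in parallel.
export const PET_STATUSES = ['draft', 'queued', 'scheduled', 'posted', 'adopted'] as const;
export type PetStatus = (typeof PET_STATUSES)[number];

export async function getStatusCounts(
  countFn: (status: PetStatus) => Promise<number>
): Promise<Record<PetStatus, number>> {
  const counts = await Promise.all(PET_STATUSES.map(countFn)); // parallel COUNT queries
  return Object.fromEntries(
    PET_STATUSES.map((s, i) => [s, counts[i]])
  ) as Record<PetStatus, number>;
}
```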

n8n Webhook Auth

The automation layer (n8n) doesn’t use JWTs. It authenticates with a shared secret in an x-api-key header:

const apiKey = req.headers['x-api-key'];
if (apiKey !== process.env.N8N_API_KEY) return res.status(401).json({ error: 'Unauthorized' });

This keeps the webhook endpoints completely separate from the Supabase JWT auth middleware while still blocking unauthorized callers. The N8N_API_KEY is set in both the Railway environment (backend reads it) and n8n’s HTTP credentials (n8n sends it). It’s the only shared secret in the system.
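Wrapped as reusable middleware, the check might look like this. The structural types are deliberately minimal so the sketch stands alone; the repo presumably uses Express’s own types:

```typescript
// Sketch: the x-api-key check as reusable middleware (minimal structural types).
type Req = { headers: Record<string, string | string[] | undefined> };
type Res = { status: (code: number) => { json: (body: unknown) => void } };

export function requireApiKey(secret: string) {
  return (req: Req, res: Res, next: () => void): void => {
    if (req.headers['x-api-key'] !== secret) {
      res.status(401).json({ error: 'Unauthorized' }); // block anything without the shared secret
      return;
    }
    next();
  };
}
```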


The n8n Automation

The posting workflow runs on Railway every 30 minutes:

  1. GET /api/orgs/due-for-posting — returns only orgs where now - last_posted_at >= posting_interval_hours. Orgs that posted recently are skipped entirely.
  2. Split Out — fans out all due orgs as parallel items (no loop node — n8n executes each item concurrently)
  3. GET /api/pets/next-to-post?org_id=... — returns the next queued or due-scheduled pet for that org
  4. POST /api/pets/:id/post-via-webhook — backend handles Bluesky + Facebook + Instagram

Removing the Loop node was intentional. n8n’s Split Out node fans items out in parallel — 10 orgs get posted in the time it would take a loop to process 1. Each org has its own posting_interval_hours setting (1h, 2h, 4h, 6h, 12h, 24h) so posting frequency is configurable per shelter.
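The eligibility rule behind step 1 reduces to a single comparison. Field names here are assumptions about the orgs table based on the description above:

```typescript
// Sketch of the due-for-posting check (assumed field names from the orgs table).
export function isDueForPosting(
  lastPostedAt: string | null, // ISO timestamp; null = never posted
  postingIntervalHours: number,
  now: Date
): boolean {
  if (lastPostedAt === null) return true; // never posted: always due
  const elapsedMs = now.getTime() - new Date(lastPostedAt).getTime();
  return elapsedMs >= postingIntervalHours * 60 * 60 * 1000;
}
```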


Photo Upload and AI Vision

Photos are compressed to JPEG at 80% quality using the Canvas API before upload. This cuts typical file sizes from 3–5MB to under 500KB without visible quality loss — faster uploads, cheaper Supabase storage, and faster Claude vision calls.
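A browser-side sketch of that step might look like this. The 80% quality matches the post; the 1600px max dimension is an assumption I’ve added for illustration, and the function names are mine:

```typescript
// Browser sketch of the compression step (quality from the post; maxDim is assumed).
export function scaledSize(w: number, h: number, maxDim = 1600): { w: number; h: number } {
  const scale = Math.min(1, maxDim / Math.max(w, h)); // never upscale
  return { w: Math.round(w * scale), h: Math.round(h * scale) };
}

export async function compressToJpeg(file: File, quality = 0.8): Promise<Blob> {
  const bitmap = await createImageBitmap(file);
  const { w, h } = scaledSize(bitmap.width, bitmap.height);
  const canvas = document.createElement('canvas');
  canvas.width = w;
  canvas.height = h;
  canvas.getContext('2d')!.drawImage(bitmap, 0, 0, w, h);
  // toBlob re-encodes as JPEG; the browser handles the actual compression
  return new Promise((resolve, reject) =>
    canvas.toBlob((b) => (b ? resolve(b) : reject(new Error('toBlob failed'))), 'image/jpeg', quality)
  );
}
```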

The compressed image goes to Supabase Storage (pet-photos bucket, public) via a multipart/form-data POST. The backend gets back a public URL and passes it directly to Claude’s vision API:

const response = await anthropic.messages.create({
  model: org.ai_model || 'claude-sonnet-4-6',
  max_tokens: 500,
  messages: [{
    role: 'user',
    content: [
      { type: 'image', source: { type: 'url', url: photoUrl } },
      { type: 'text', text: 'Analyze this pet photo and extract: species, breed, approximate age, gender, and 2-3 personality observations...' }
    ]
  }]
});

The org’s ai_model setting controls which Claude model is used for photo analysis and post writing (Haiku for budget, Sonnet for quality). The chat widget always uses Haiku regardless of this setting — it’s a cost control decision, not a quality one.


Graceful Degradation

If the Anthropic API is unavailable, the app doesn’t break — it degrades gracefully:

  • The photo analysis endpoint returns an error, and the form stays blank for manual entry
  • The post generation endpoint returns an error, and the adoption post textarea remains visible and editable
  • Staff can write their own post manually and still queue or post the pet

The textarea for the adoption post is always visible — it doesn’t hide behind “no AI-generated content yet”. This was a conscious design choice: AI is a helper, not a gatekeeper.


Deployment

Frontend deploys to Netlify from the main branch automatically on every push. Build time is under 30 seconds. Environment variables (VITE_SUPABASE_URL, VITE_SUPABASE_ANON_KEY, VITE_API_URL) are set in the Netlify dashboard.

Backend deploys to Railway from the same main branch. Railway runs npm run build (TypeScript compile) then npm start. Environment variables are set in the Railway service. The PORT variable is set automatically by Railway (it uses 8080 in production instead of the local 3001).

Database lives in Supabase. Schema is version-controlled in backend/schema.sql. Migrations are currently run manually in the Supabase SQL editor — the next thing I’d add for a production-grade system is proper migration tooling.


What I’d Do Differently

Row-level security in Supabase. Currently multi-tenancy is enforced in application code. RLS would add a second enforcement layer so a logic bug can’t leak data across orgs. I opted for application-layer enforcement first because it’s easier to reason about in code review, but RLS would be the right addition before this goes to production at scale.

Proper migration tooling. Schema changes are manual SQL right now. At one shelter this is fine; at ten it becomes error-prone.

React Query for public pages. The public org and pet pages use plain useEffect + useState for fetching. React Query would add caching and stale-while-revalidate behavior — a visitor navigating back to the org page after viewing a pet wouldn’t re-fetch data they already have.

End-to-end tests. The project has no test suite. I have a Vitest setup planned for the pure utility functions (timeAgo, the system prompt builder), but the posting flow — where a real social media post could go out — is untested beyond manual verification.


Why I Built This

Two reasons. The practical one: animal shelters are under-resourced and the ones I’ve talked to spend meaningful volunteer hours on social media tasks that could be automated. If this saves a shelter coordinator two hours a week, that’s two hours spent with the animals.

The technical one: this project touches almost every layer of modern web development — multi-tenancy, file uploads, AI vision, social media APIs, OAuth, background jobs, public and authenticated API design, and automation workflows. It’s the kind of project where the interesting problems are in the integration, not any single piece.

The live app is at petreach.netlify.app, and you can watch the demo video on YouTube. In April 2026, PawPost won 3rd place in the Advanced category at the Weber State University AI Hackathon — which was a good reminder that projects built to solve real problems tend to resonate with judges too.