The Workflow: Day by Day
Let me walk you through exactly how I shipped a full-stack SaaS dashboard — user auth, team management, data visualization, Stripe billing — in 8 days.
Day 1 — Architecture and Scaffolding
Day 1 is the most important day, and it's the day when I use AI the least.
I start with a blank notebook (physical, not digital) and sketch:
- Data models and their relationships
- User flows for the core use case
- API boundaries — what's a server action, what's an API route, what's a webhook
- Auth model — who can see what
Only after I have a clear architecture do I touch the terminal:
```bash
npx create-next-app@latest client-dashboard --typescript --tailwind --app --src-dir
cd client-dashboard
```
Then I set up the project structure with Claude Code:
```bash
claude "Set up the project structure for a SaaS dashboard with these domains:
- auth (NextAuth.js with Google + email magic links)
- teams (CRUD, invitations, roles)
- dashboard (data visualization with recharts)
- billing (Stripe subscription management)
Create the folder structure under src/ with placeholder files.
Use the App Router with route groups. Add a shared lib/ for utils,
db/ for Prisma schema, and types/ for shared TypeScript interfaces."
```
Claude Code generates the full folder structure, the Prisma schema based on my description, and the NextAuth configuration. What used to take half a day takes 20 minutes.
But here's the critical part — I review every file. I adjust the Prisma schema because AI doesn't know my specific cardinality requirements. I modify the auth config because I need custom session fields. The scaffolding is a starting point, not a finished product.
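The custom session fields are a small but typical example of that adjustment. A hedged sketch of what the change looks like, assuming NextAuth.js v4 with a database session strategy; the exact fields depend on your app:

```typescript
// src/lib/auth.ts (sketch) — expose app-specific fields on the session.
import type { NextAuthOptions } from "next-auth";

export const authOptions: NextAuthOptions = {
  providers: [], // Google + email magic-link providers configured here
  callbacks: {
    async session({ session, user }) {
      // With a database adapter, `user` is the DB record; copy what the
      // client needs onto the session (here, just the user id).
      if (session.user) {
        (session.user as { id?: string }).id = user.id;
      }
      return session;
    },
  },
};
```

AI-generated auth config rarely knows which fields your client components actually read, so this callback is almost always hand-tuned.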
By end of Day 1, I have:
- Working Next.js app with proper folder structure
- Prisma schema with all models defined
- Auth configured and tested with a real Google OAuth flow
- Database migrations run against a local Postgres
- CI pipeline set up (GitHub Actions)
Day 2-3 — Core Features with AI Pair Programming
This is where AI pair programming truly shines. I have the architecture locked. I know the data models. Now I need to build features.
My workflow for each feature:
- Write the type definition first (by hand)
- Let AI generate the implementation
- Review, adjust, test
Here's a real example. I need a team invitation system. I start by defining what I want:
```typescript
// src/types/teams.ts
export interface TeamInvitation {
  id: string;
  teamId: string;
  email: string;
  role: 'admin' | 'member' | 'viewer';
  status: 'pending' | 'accepted' | 'expired';
  invitedBy: string;
  expiresAt: Date;
  createdAt: Date;
}

export interface InviteTeamMemberInput {
  teamId: string;
  email: string;
  role: TeamInvitation['role'];
}
```
Then I hand it to Claude Code:
```bash
claude "Given the TeamInvitation type in src/types/teams.ts and the Prisma schema,
build the complete invitation flow:
1. Server action to create invitation (validate team ownership, check duplicates,
   send email via Resend)
2. API route for accepting invitation via token
3. React component for the invitation form with proper error handling
4. React component for pending invitations list with resend/revoke actions
Use server actions for mutations, proper error boundaries,
and optimistic updates with useOptimistic."
```
AI generates 4-5 files. I spend 15 minutes reviewing instead of 3 hours writing. The code is solid because the types constrain what AI can generate — it can't hallucinate a field that doesn't exist on the type.
This is why TypeScript isn't optional in AI-accelerated development. Types are your guardrails. Without them, AI generates plausible-looking code that breaks at runtime in subtle ways.
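To make the guardrail concrete, here's the kind of result type I hand AI before it writes a server action. The `ActionResult` name and the simplified permission check are illustrative, not the project's actual code:

```typescript
// A discriminated union: a result carries data or a typed error, never both.
// AI cannot return an error string that isn't in this union without a
// compile error, and callers must check `error` before touching `data`.
type ActionResult<T> =
  | { data: T; error?: undefined }
  | { data?: undefined; error: "INSUFFICIENT_PERMISSIONS" | "INVITATION_EXISTS" };

interface InviteTeamMemberInput {
  teamId: string;
  email: string;
  role: "admin" | "member" | "viewer";
}

// Illustrative: the caller's role would normally come from the session.
function createInvitation(
  input: InviteTeamMemberInput,
  callerRole: "admin" | "member" | "viewer"
): ActionResult<{ email: string; expiresAt: Date }> {
  if (callerRole !== "admin") {
    return { error: "INSUFFICIENT_PERMISSIONS" };
  }
  // Expire invitations 7 days from creation.
  const expiresAt = new Date(Date.now() + 7 * 24 * 60 * 60 * 1000);
  return { data: { email: input.email, expiresAt } };
}
```

Because the union is discriminated, TypeScript refuses to let a caller read `result.data` until the error branch has been handled.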
For the data visualization dashboard, I take a similar approach:
```bash
claude "Create a dashboard page at src/app/(dashboard)/page.tsx that displays:
- KPI cards (total users, active teams, MRR, churn rate) using server components
- A line chart showing user growth over last 12 months using recharts
- A table of recent team activity with pagination
Fetch data using server components with direct Prisma queries.
Use Suspense boundaries for each section with skeleton loaders.
Make the charts client components, everything else server components."
```
By end of Day 3, all core features work. Users can sign up, create teams, invite members, view dashboards, and the billing integration handles subscription creation.
Day 4-5 — Edge Cases, Error Handling, Testing
This is where most "vibe coded" projects fall apart. The happy path works, but what about:
- What if a user accepts an invitation but their email doesn't match?
- What happens when a Stripe webhook fires twice?
- How does the dashboard render with zero data?
- What if a team admin removes themselves?
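Of these, the double-firing Stripe webhook has the most mechanical fix: record each event ID and treat repeats as no-ops. A minimal in-memory sketch; production would persist the IDs in a table with a unique index:

```typescript
// Stripe retries webhook deliveries, so the same event can arrive twice.
// Track processed event IDs to make the handler idempotent.
const processedEvents = new Set<string>();

function handleWebhook(event: { id: string; type: string }): "processed" | "skipped" {
  if (processedEvents.has(event.id)) {
    return "skipped"; // duplicate delivery: acknowledge it, do nothing
  }
  processedEvents.add(event.id);
  // ... apply the side effect (update subscription, etc.) exactly once
  return "processed";
}
```

The handler still returns 200 for duplicates so Stripe stops retrying; only the side effect is skipped.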
I maintain a checklist of edge cases and work through them systematically. AI is helpful here too, but differently:
```bash
claude "Review the team invitation flow in src/actions/teams/ and identify
edge cases I might have missed. For each edge case, suggest the fix.
Consider: race conditions, permission boundaries, email edge cases,
and expired token handling."
```
AI catches things I miss — like what happens if two admins send an invitation to the same email simultaneously. But it also suggests edge cases that don't apply to my specific use case. You need judgment to know which suggestions matter.
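For the simultaneous-invitation race specifically, an application-level duplicate check isn't enough: two requests can both pass the check before either writes. The reliable fix is a database unique constraint on (teamId, email) for pending invitations, with the violation mapped to the duplicate error. Here the index is simulated with a Set to show the shape of the logic; names are illustrative:

```typescript
// Simulates a unique index on (teamId, email). With Prisma + Postgres the
// same idea is a @@unique constraint, catching the violation on create.
const pendingIndex = new Set<string>();

function createPendingInvitation(
  teamId: string,
  email: string
): { error?: "INVITATION_EXISTS" } {
  const key = `${teamId}:${email.toLowerCase()}`;
  if (pendingIndex.has(key)) {
    return { error: "INVITATION_EXISTS" }; // the second admin loses the race
  }
  pendingIndex.add(key);
  return {};
}
```

In the real database-backed version, both requests attempt the insert and the constraint, not the check, decides the winner atomically.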
For testing, AI generates the boilerplate, and I focus on the assertions:
```typescript
// src/__tests__/teams/invitation.test.ts
import { createInvitation } from '@/actions/teams';
import { prismaMock } from '@/test/prisma-mock';

describe('Team Invitations', () => {
  it('should reject invitation if user is not team admin', async () => {
    prismaMock.teamMember.findFirst.mockResolvedValue({
      role: 'viewer', // not admin
      // ... other fields
    });

    const result = await createInvitation({
      teamId: 'team-1',
      email: 'new@member.com',
      role: 'member',
    });

    expect(result.error).toBe('INSUFFICIENT_PERMISSIONS');
  });

  it('should prevent duplicate pending invitations', async () => {
    prismaMock.teamInvitation.findFirst.mockResolvedValue({
      status: 'pending',
      email: 'existing@member.com',
      // ... existing invitation
    });

    const result = await createInvitation({
      teamId: 'team-1',
      email: 'existing@member.com',
      role: 'member',
    });

    expect(result.error).toBe('INVITATION_EXISTS');
  });

  it('should set expiration to 7 days from creation', async () => {
    prismaMock.teamMember.findFirst.mockResolvedValue({ role: 'admin' });
    prismaMock.teamInvitation.findFirst.mockResolvedValue(null);
    prismaMock.teamInvitation.create.mockImplementation(({ data }) => data);

    const result = await createInvitation({
      teamId: 'team-1',
      email: 'new@member.com',
      role: 'member',
    });

    const daysDiff = Math.round(
      (result.data.expiresAt.getTime() - Date.now()) / (1000 * 60 * 60 * 24)
    );
    expect(daysDiff).toBe(7);
  });
});
```
I write the test descriptions and key assertions. AI fills in the mock setup and boilerplate. This is exactly the kind of mechanical work where AI saves hours.
Day 6-7 — Performance, Polish, and Deployment
Performance optimization is where your experience matters more than AI's suggestions. AI can tell you to add React.memo() everywhere, but you need to know where it actually matters.
My performance checklist:
- Run Lighthouse, fix anything below 90
- Check bundle size with @next/bundle-analyzer
- Add proper caching headers for static assets
- Verify server components aren't accidentally becoming client components
- Test with throttled network in DevTools
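The bundle-size check is a one-time wiring job. A sketch of the usual @next/bundle-analyzer setup, gated behind an environment variable; assumes the package is installed as a dev dependency:

```javascript
// next.config.js — run `ANALYZE=true next build` to generate the report.
const withBundleAnalyzer = require("@next/bundle-analyzer")({
  enabled: process.env.ANALYZE === "true",
});

module.exports = withBundleAnalyzer({
  reactStrictMode: true,
});
```

Gating on the env var keeps normal builds fast while leaving the analysis a single command away.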
For deployment, I use a straightforward Vercel setup:
```bash
# Environment variables configured in Vercel dashboard
# Preview deployments on every PR
# Production deploys from main branch
# vercel.json for any custom config

claude "Generate a vercel.json with:
- Security headers (CSP, HSTS, X-Frame-Options)
- Caching rules for static assets (1 year) and API routes (no-cache)
- Redirect from www to non-www"
```
Day 8 — Launch Day Checklist
Launch day is about verification, not development:
## Launch Checklist
- [ ] All environment variables set in production
- [ ] Database migrations run on production DB
- [ ] Stripe webhooks pointed to production URL
- [ ] Error monitoring active (Sentry)
- [ ] Analytics tracking verified (PostHog)
- [ ] DNS configured, SSL certificate active
- [ ] Smoke test: sign up, create team, invite member, view dashboard
- [ ] Load test: can it handle 100 concurrent users?
- [ ] Backup strategy documented
- [ ] On-call alerts configured
Ship it.