Schema-as-Code: What Prisma Taught Us About Developer Experience
How we borrowed Prisma's developer experience model to build a CMS where content schemas live in code, sync to the cloud, and generate fully-typed clients.
The Before Times
Before Prisma, working with databases in Node.js meant writing raw SQL strings, hoping your column names were right, and discovering mismatches at runtime when your application crashed in production. You defined your tables in a database GUI, copy-pasted connection strings into environment variables, and prayed that your staging environment's schema matched production.
Then Prisma showed up and said: what if the schema lived in a file in your repository? What if you could define your database structure in code, version control it, diff it, review it in pull requests, and generate a fully-typed client from it?
That shift — from runtime configuration to code-defined schemas — changed how an entire generation of developers thinks about databases. It wasn't a minor improvement. It was a category shift in developer experience.
Content management is stuck in the Before Times. Content models live in admin dashboards. You configure fields by clicking through dropdown menus. Your staging environment drifts from production because someone added a field in the GUI and forgot to do the same in the other environment. There's no diff to review. No pull request to approve. No TypeScript compiler catching the mismatch.
We built Laizy CMS to apply the same principles Prisma brought to databases. Schemas live in files. The CLI syncs them to the cloud. A code generator produces a fully-typed TypeScript client. The workflow lives in your terminal and your editor, not in a browser tab.
This post walks through how it works, with real code from our codebase.
The Prisma Parallel
The comparison to Prisma isn't superficial marketing. It's the architectural foundation of how Laizy works. Let's trace the parallel.
Prisma has a schema file. You define your data models in prisma/schema.prisma. Every table, every column, every relationship — declared in a human-readable file that lives in your repository.
Laizy has a schema file. You define your content models in laizy/schema.laizy. Every model, every field, every constraint — declared in a file that lives in your repository.
Prisma has prisma migrate. You run a CLI command that compares your local schema to the database, generates a migration plan, shows you what will change, and applies it.
Laizy has laizy sync. You run a CLI command that compares your local schema to the cloud, generates a migration plan, shows you what will change, and applies it.
Prisma has prisma generate. You run a CLI command that reads your schema file and generates a typed client. Every model becomes a TypeScript interface. Every query method gets proper autocomplete. Every field reference is checked at compile time.
Laizy has laizy generate. Same concept, same workflow. Your schema becomes TypeScript interfaces, a typed client, and an index file you import into your application.
Prisma has prisma studio. A visual interface for viewing and editing data. Useful, but the schema itself is always code-first.
Laizy has a dashboard. A visual interface for viewing and managing content. Useful, but the content model itself is always code-first.
The pattern is deliberate. Prisma proved that developers prefer code-defined schemas over GUI-configured ones. Not because developers hate GUIs, but because code-defined schemas unlock capabilities that GUIs can't provide: version control, code review, reproducible environments, and automated tooling.
We took that lesson and applied it to content management.
.laizy Syntax Deep Dive
The schema syntax is designed to be immediately readable if you've used Prisma, TypeScript interfaces, or any block-structured configuration language. Here's what a real .laizy schema file looks like — this is from our actual codebase:
model BlogPost {
  title: String {
    required: true
    maxLength: 200
  }
  content: String {
    required: true
  }
  status: String
  slug: String {
    required: true
    unique: true
  }
}

model Author {
  name: String {
    required: true
    maxLength: 100
  }
  email: String {
    required: true
    unique: true
  }
  bio: String
}
Let's break down what's happening.
Models are declared with the model keyword followed by a name and a block. The name becomes the TypeScript interface name and the API endpoint identifier. model BlogPost becomes interface BlogPost and client.blogPost.findMany().
Fields are declared as fieldName: FieldType inside a model block. The supported types are String, Int, Float, Boolean, and DateTime. Each type maps directly to a TypeScript type in the generated client.
Constraints live inside an optional block after the field type. Currently supported constraints include:
- required: true — the field must have a value. Without this, fields are optional by default.
- unique: true — values must be unique across all entries of this model.
- maxLength: 200 — string fields can have a maximum character length.
- default: "value" — a default value applied when the field isn't provided.
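To make the constraint semantics concrete, here's a minimal sketch of how they could be enforced at write time. The function and type names here are illustrative, not Laizy internals; the real enforcement happens server-side:

```typescript
// Hypothetical sketch of constraint checking for a single string field.
// Mirrors the constraints listed above: required, maxLength, default.
interface Constraints {
  required?: boolean;
  maxLength?: number;
  default?: string;
}

function validateField(
  name: string,
  value: string | undefined,
  c: Constraints,
): string[] {
  const errors: string[] = [];
  // A default fills in for a missing value before any other check runs
  const effective = value ?? c.default;
  if (c.required && (effective === undefined || effective === "")) {
    errors.push(`${name} is required`);
  }
  if (
    effective !== undefined &&
    c.maxLength !== undefined &&
    effective.length > c.maxLength
  ) {
    errors.push(`${name} exceeds maxLength of ${c.maxLength}`);
  }
  return errors;
}
```

Note how `default` interacts with `required`: a required field with a default never fails validation when omitted, because the default satisfies it.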
The syntax is intentionally minimal. There are no decorators, no import statements, no configuration preambles. You open the file, define your models, and you're done.
Here's a more complex example showing the kinds of content models you'd use in a real website:
model HeroSection {
  badge: String
  headline: String
  subheading: String
  ctaPrimary: String
  ctaSecondary: String
  disclaimer: String
}

model FeatureBentoCard {
  sectionId: String {
    required: true
    unique: true
  }
  category: String { required: true }
  title: String { required: true }
  description: String { required: true }
}

model SectionHeader {
  sectionId: String {
    required: true
    unique: true
  }
  title: String { required: true }
}
Notice that HeroSection has all optional fields — it's a content model for a section of a website where every field might be empty during initial development. FeatureBentoCard uses sectionId with a unique constraint to ensure each bento card maps to exactly one section of the page. These are real content models from our own website, not hypothetical examples.
The schema file is plain text. Any tool that can read a file can read it. Any tool that can write a file can modify it. Any version control system can diff it. This matters more than it seems — we'll come back to it.
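As a small demonstration of that point, here's a sketch of how any script could inspect a schema with nothing more than a regular expression. This is illustrative only; the real CLI builds a full AST:

```typescript
// Pull model names out of a .laizy schema file with a regex.
// No SDK, no API call: the schema is just text.
function listModelNames(schemaText: string): string[] {
  const names: string[] = [];
  // Matches declarations like `model BlogPost {`
  const modelDecl = /^\s*model\s+(\w+)\s*\{/gm;
  let match: RegExpExecArray | null;
  while ((match = modelDecl.exec(schemaText)) !== null) {
    names.push(match[1]);
  }
  return names;
}
```

Any linter, pre-commit hook, or AI agent can do the same thing, which is exactly what plain-text schemas buy you.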
The Sync Workflow
Once you've defined your schema, you need to push it to the cloud where your content actually lives. That's what laizy sync does.
The sync command follows a deliberate workflow: parse, compare, plan, confirm, execute. Let's walk through each step.
Step 1: Parse the local schema. The CLI reads your .laizy file and parses it into an abstract syntax tree. If the syntax is wrong, you get an error with line numbers — not a silent failure an hour later when content creation breaks.
$ laizy sync
✨ Laizy CMS Schema Sync
📁 Found schema file: laizy/schema.laizy
✓ Parsed 7 models
Step 2: Fetch the remote state. The CLI calls the Laizy API to get the current schema state in the cloud. This is the source of truth for what's actually deployed.
Step 3: Compute the diff. The compareSchemas function in our sync engine walks through both schema sets and classifies every difference. Here's the actual diffing logic from our codebase:
// From lib/schema/sync/schema-differ.ts
export function compareSchemas(
  localModels: ModelDefinition[],
  remoteModels: RemoteContentModel[],
): SchemaDiff {
  const localModelMap = new Map(localModels.map((model) => [model.name, model]));
  const remoteModelMap = new Map(remoteModels.map((model) => [model.name, model]));

  const toCreate: ModelDefinition[] = [];
  const toUpdate: ModelUpdate[] = [];
  const toDelete: RemoteContentModel[] = [];

  // Find models to create (in local but not remote)
  for (const localModel of localModels) {
    if (!remoteModelMap.has(localModel.name)) {
      toCreate.push(localModel);
    }
  }

  // Find models to delete (in remote but not local)
  for (const remoteModel of remoteModels) {
    if (!localModelMap.has(remoteModel.name)) {
      toDelete.push(remoteModel);
    }
  }

  // Find models to update (in both local and remote but different)
  for (const localModel of localModels) {
    const remoteModel = remoteModelMap.get(localModel.name);
    if (remoteModel) {
      const fieldChanges = compareModelFields(localModel, remoteModel.schemaAST);
      if (fieldChanges.length > 0) {
        toUpdate.push({
          modelName: localModel.name,
          currentAST: remoteModel.schemaAST,
          targetAST: localModel,
          fieldChanges,
        });
      }
    }
  }

  return { toCreate, toUpdate, toDelete };
}
Three categories: models to create, models to update, models to delete. For updates, the differ goes field-by-field, comparing names, types, and constraints.
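The compareModelFields helper isn't shown here, but the field-level pass can be sketched along the same lines. This is an illustrative reconstruction with simplified types, not the actual implementation:

```typescript
// Simplified stand-ins for the real schema AST shapes.
interface Field {
  name: string;
  type: string;
  required?: boolean;
}

interface FieldChange {
  kind: "add" | "remove" | "modify";
  fieldName: string;
}

// Field-level diff: same set-membership logic as the model-level pass,
// plus an equality check on type and constraints for shared fields.
function diffFields(local: Field[], remote: Field[]): FieldChange[] {
  const changes: FieldChange[] = [];
  const remoteByName = new Map(remote.map((f) => [f.name, f]));
  const localByName = new Map(local.map((f) => [f.name, f]));

  for (const f of local) {
    const r = remoteByName.get(f.name);
    if (!r) {
      changes.push({ kind: "add", fieldName: f.name });
    } else if (r.type !== f.type || !!r.required !== !!f.required) {
      changes.push({ kind: "modify", fieldName: f.name });
    }
  }
  for (const r of remote) {
    if (!localByName.has(r.name)) {
      changes.push({ kind: "remove", fieldName: r.name });
    }
  }
  return changes;
}
```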
Step 4: Generate the migration plan. The diff is converted into a migration plan with impact classifications. Every operation is labeled as safe, warning, or destructive:
- Safe: Adding an optional field, removing a constraint, creating a new model. No data is at risk.
- Warning: Adding a required field without a default (existing content needs a value), or changing a field type to a partially compatible one (String to Int can lose non-numeric data).
- Destructive: Removing a field (data loss), removing a model (all content deleted), incompatible type changes.
The impact classification drives the confirmation flow. Safe operations run automatically. Warning operations show you what will happen and ask for confirmation. Destructive operations show explicit warnings and require a second confirmation.
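The classification rules above can be sketched as a small function. The operation shape here is hypothetical; it exists only to illustrate the decision logic:

```typescript
// Impact levels match the safe / warning / destructive categories above.
type Impact = "safe" | "warning" | "destructive";

// Hypothetical operation shape for illustration.
interface Operation {
  kind: "createModel" | "deleteModel" | "addField" | "removeField";
  fieldRequired?: boolean;
  fieldHasDefault?: boolean;
}

function classify(op: Operation): Impact {
  switch (op.kind) {
    case "createModel":
      return "safe"; // no existing data at risk
    case "deleteModel":
    case "removeField":
      return "destructive"; // existing content would be lost
    case "addField":
      // A required field without a default needs a value for existing entries
      return op.fieldRequired && !op.fieldHasDefault ? "warning" : "safe";
  }
}
```

The confirmation flow then keys off the return value: "safe" proceeds, "warning" prompts once, "destructive" prompts twice.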
Step 5: Show the plan and confirm. The CLI displays a formatted table of every operation, its impact level, and what it will do to existing content:
📋 Migration Plan:
┌─────────────┬──────────────┬────────┬──────────────────────────────┐
│ Operation │ Model │ Impact │ Description │
├─────────────┼──────────────┼────────┼──────────────────────────────┤
│ CREATE │ BlogPost │ safe │ Create new model (4 fields) │
│ ADD FIELD │ Author │ safe │ Add optional field: bio │
│ MODIFY │ HeroSection │ warn │ Make headline required │
│ REMOVE │ LegacyModel │ danger │ Delete model and all content │
└─────────────┴──────────────┴────────┴──────────────────────────────┘
For destructive changes, the CLI shows explicit warnings with content counts — telling you exactly how many entries will be affected:
⚠️ Warning: Removing model "LegacyModel" will delete 47 content entries.
Are you absolutely sure you want to continue? This cannot be undone. (y/N)
Step 6: Execute (or dry-run). If you pass --dry-run, the CLI stops after showing the plan. No changes. No risk. Just information. This is the workflow that makes schema changes reviewable — run the dry-run in CI, review the output in your pull request, then merge with confidence.
Without --dry-run, the CLI calls the sync orchestrator to execute the migration. The orchestrator is responsible for the full lifecycle:
// From lib/schema/sync/schema-sync-orchestrator.ts
async syncSchemas(localModels: ModelDefinition[], options: SyncOptions): Promise<SyncResult> {
  // Step 1: Get current remote schema state
  const remoteModels = await this.managementClient.getSchemaState();

  // Step 2: Compare local vs remote schemas
  const diff = compareSchemas(localModels, remoteModels);

  // Step 3: Generate migration plan
  const migrationPlan = generateMigrationPlan(diff);

  // Step 4: Enhance with content impact analysis
  const impactEnhancedPlan = await this.enhanceWithContentImpact(migrationPlan);

  // Step 5: Execute migration if not dry run
  let migrationResult: MigrationResult | undefined;
  if (!options.dryRun) {
    migrationResult = await this.managementClient.executeMigration(
      impactEnhancedPlan,
      options.force || false,
    );
  }

  return { diff, migrationPlan: impactEnhancedPlan, migrationResult };
}
The content impact enhancement step is worth highlighting. Before executing any migration, the orchestrator queries the database to find out how many content entries exist for each affected model. A field removal on a model with zero entries is a different conversation than a field removal on a model with 10,000 entries. The CLI makes this distinction visible.
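The enhancement step itself can be sketched like this, with countEntries standing in for the real management-API call. This is a sketch of the idea, not the actual orchestrator code:

```typescript
// Hypothetical shapes for a planned operation and its enhanced form.
interface PlannedOp {
  modelName: string;
  impact: "safe" | "warning" | "destructive";
}

interface EnhancedOp extends PlannedOp {
  affectedEntries: number;
}

// Attach a live entry count to each operation so the CLI can say
// "this will delete 47 entries" instead of just "this is destructive".
async function enhanceWithCounts(
  ops: PlannedOp[],
  countEntries: (modelName: string) => Promise<number>,
): Promise<EnhancedOp[]> {
  return Promise.all(
    ops.map(async (op) => ({
      ...op,
      affectedEntries: await countEntries(op.modelName),
    })),
  );
}
```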
Generated Client Showcase
After your schema is synced, you run laizy generate to produce a TypeScript client. The generator reads your .laizy schema file, creates an AST, and emits three files: types.ts, client.ts, and index.ts.
Here's what the generated types look like for our schema:
// Generated TypeScript types for CMS schema
// DO NOT EDIT - This file is auto-generated

export interface BlogPost {
  title: string
  content: string
  status: string
  slug: string
  // System fields
  id: string
  createdAt: Date
  updatedAt: Date
}

export interface Author {
  name: string
  email: string
  bio: string
  // System fields
  id: string
  createdAt: Date
  updatedAt: Date
}

export interface CreateBlogPostInput {
  title: string
  content: string
  status?: string
  slug: string
}

export interface UpdateBlogPostInput {
  title?: string
  content?: string
  status?: string
  slug?: string
}
Notice a few things. Each model produces three interfaces: the read type (all fields present), the create input (required fields required, optional fields optional), and the update input (all fields optional, since you might only update one). System fields (id, createdAt, updatedAt) are added automatically.
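The derivation of the create and update inputs from a field list can be sketched as follows. The helper names are illustrative, not the actual generator internals:

```typescript
// Simplified field spec, as a code generator might hold it after parsing.
interface FieldSpec {
  name: string;
  tsType: string;
  required: boolean;
}

// Create input: required fields stay required, optional fields get `?`.
function emitCreateInputFields(fields: FieldSpec[]): string[] {
  return fields.map((f) => `${f.name}${f.required ? "" : "?"}: ${f.tsType}`);
}

// Update input: every field optional, since a caller may update one field.
function emitUpdateInputFields(fields: FieldSpec[]): string[] {
  return fields.map((f) => `${f.name}?: ${f.tsType}`);
}
```

This is why renaming a field ripples through every call site: the same field list feeds all three interfaces, so one schema change regenerates all of them consistently.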
The generated client provides a Prisma-like query API:
// Generated CMS client implementation
// DO NOT EDIT - This file is auto-generated
class BlogPostClient {
  private managementClient!: ManagementClient
  // (setManagementClient and write methods elided for brevity)

  async findMany(options: FindManyOptions<BlogPost> = {}): Promise<BlogPost[]> {
    const listOptions = {
      modelName: 'BlogPost',
      limit: options.limit,
      offset: options.offset,
    }
    const result = await this.managementClient.listContentData(listOptions)
    return result.map(item => ({
      ...item.data,
      id: item.id,
      createdAt: item.createdAt,
      updatedAt: item.updatedAt,
    })) as BlogPost[]
  }

  async findById(id: string): Promise<BlogPost | null> {
    try {
      const result = await this.managementClient.getContentData(id)
      if (!result) return null
      return {
        ...result.data,
        id: result.id,
        createdAt: result.createdAt,
        updatedAt: result.updatedAt,
      } as BlogPost
    } catch (error: any) {
      if (error.message?.includes('NOT_FOUND') || error.message?.includes('404')) {
        return null
      }
      throw error
    }
  }

  async count(options: { where?: Partial<BlogPost> } = {}): Promise<number> {
    return this.managementClient.countContentData({ modelName: 'BlogPost' })
  }
}

export class LaizyClient {
  blogPost = new BlogPostClient()
  author = new AuthorClient()
  heroSection = new HeroSectionClient()
  footerContent = new FooterContentClient()
  newsletterSection = new NewsletterSectionClient()
  featureBentoCard = new FeatureBentoCardClient()
  sectionHeader = new SectionHeaderClient()

  constructor(managementClient: ManagementClient) {
    this.blogPost.setManagementClient(managementClient)
    this.author.setManagementClient(managementClient)
    this.heroSection.setManagementClient(managementClient)
    // ... each model client gets the management client
  }
}
Using it in your application looks like this:
import { LaizyClient } from './generated/laizy';
import { ManagementClient } from '@/lib/management-client';

const managementClient = new ManagementClient({
  baseUrl: 'https://app.laizycms.com',
  apiToken: process.env.LAIZY_API_TOKEN,
});

const client = new LaizyClient(managementClient);

// Fully typed - TypeScript knows blogPost has title, content, slug, status
const posts = await client.blogPost.findMany({ limit: 10 });

// TypeScript catches this at compile time if "title" doesn't exist
const titles = posts.map(post => post.title);

// Find a specific post - returns BlogPost | null
const post = await client.blogPost.findById('abc123');

// Count entries
const totalPosts = await client.blogPost.count();
Every method call is typed. client.blogPost.findMany() returns Promise<BlogPost[]>. client.blogPost.findById() returns Promise<BlogPost | null>. If you rename a field in your schema, regenerate, and try to access the old field name in your application code, TypeScript tells you at compile time. Not at runtime. Not in production. In your editor, with a red underline.
The generator itself lives in cli/commands/generate.ts and follows a straightforward pattern: parse the schema, generate the code, write the files:
// From cli/commands/generate.ts
export async function generateCommand(): Promise<void> {
  const config = loadConfig();
  let allModels = loadAndParseSchemas(config.schemaPath);

  // Generate the client code
  const generated = generateCMSClient({ models: allModels });

  // Write the three separate files (outputDir is resolved from config, elided here)
  fs.writeFileSync(path.join(outputDir, 'types.ts'), generated.typesCode);
  fs.writeFileSync(path.join(outputDir, 'client.ts'), generated.clientCode);
  fs.writeFileSync(path.join(outputDir, 'index.ts'), generated.indexCode);
}
Three files, clearly separated: types for the data shapes, client for the runtime queries, index for the public API surface. If you need to understand what the generated code does, you can read it — it's TypeScript, not a compiled binary.
Version Control for Content Models
This is where schema-as-code pays the biggest dividends, and it's the capability that GUI-based content modeling can never match.
When your schema lives in a file, content model changes become pull requests. Here's what a schema change looks like in a git diff:
  model BlogPost {
    title: String {
      required: true
      maxLength: 200
    }
    content: String {
      required: true
    }
+   excerpt: String {
+     maxLength: 300
+   }
+
    status: String
    slug: String {
      required: true
      unique: true
    }
+
+   publishedAt: DateTime
  }
This diff tells you everything: two new fields were added to BlogPost. excerpt is an optional string with a 300-character limit. publishedAt is an optional DateTime. A reviewer can look at this and ask: "Should publishedAt be required for published posts?" That conversation happens in a code review, not in a Slack message three days after someone modified the content model in the admin panel.
CI/CD integration follows naturally. You can add a step to your pipeline that runs laizy sync --dry-run and posts the migration plan as a PR comment. Before any code is merged, the team sees exactly what the schema change will do to the cloud database:
# In your CI pipeline
laizy sync --dry-run 2>&1
# Output:
# 📋 Migration Plan:
# ┌────────────┬───────────┬────────┬─────────────────────────────────────┐
# │ Operation │ Model │ Impact │ Description │
# ├────────────┼───────────┼────────┼─────────────────────────────────────┤
# │ ADD FIELD │ BlogPost │ safe │ Add optional field: excerpt │
# │ ADD FIELD │ BlogPost │ safe │ Add optional field: publishedAt │
# └────────────┴───────────┴────────┴─────────────────────────────────────┘
# 🔍 Dry run complete - no changes made
Both operations are classified as "safe" because the new fields are optional. No existing content is affected. The reviewer sees this, approves the PR, and the schema change gets applied during deployment.
Now contrast this with the traditional workflow: someone opens the CMS admin panel, adds a field, and hopes the frontend team knows about it. The staging environment might have the field, production might not. The developer building the Next.js page references a field that doesn't exist yet in production. The deployment fails. Nobody knows why until someone checks the CMS admin panel and realizes the content model changes weren't applied.
Schema-as-code eliminates that entire class of bugs. The schema file is the source of truth. The git history is the audit trail. The CI pipeline is the deployment mechanism. The same workflow that works for application code works for content models.
Environment parity becomes trivial. Every environment gets the same schema because every environment runs the same laizy sync command against the same schema file. Staging can't drift from production because they're both synced from the same source.
Rollbacks become git operations. If a schema change causes problems, you revert the commit, run laizy sync, and you're back to the previous state. No hunting through admin panel audit logs. No recreating deleted fields by hand.
Why This Matters
Schema-as-code isn't a developer preference — it's an architectural decision that cascades through every layer of how content gets managed.
Content teams get structure without friction. When a developer defines a BlogPost model with a 200-character title limit, a required slug, and an optional excerpt, the content dashboard enforces those constraints automatically. Content creators don't need to remember the rules. The schema encodes them.
Developers get type safety across the entire stack. From the schema file to the generated client to the application code, every field reference is checked at compile time. Rename a field, regenerate, and TypeScript shows you every place in your codebase that needs to update. This isn't aspirational — it's how the generated client works today.
AI agents get schemas they can understand. This is the part that most CMS platforms haven't thought about yet, and it's the part that matters most going forward.
A .laizy schema file is plain text. Any AI agent that can read a file — Claude Code, Cursor, GitHub Copilot Workspace — can read a .laizy schema file and understand the content model. It knows the field names, the types, the constraints. It can generate valid content on the first attempt because it has the full specification before it starts writing.
When an AI agent encounters a schema like:
model BlogPost {
  title: String {
    required: true
    maxLength: 200
  }
  content: String { required: true }
  slug: String {
    required: true
    unique: true
  }
  status: String
}
It doesn't need an API reference. It doesn't need to make a test request and parse the error response. It reads the file and knows: titles can't exceed 200 characters, slugs must be unique, content is required, status is optional. That's the complete specification for generating a valid BlogPost.
Compare this to an AI agent trying to create content in a GUI-configured CMS. It would need to either scrape the admin panel (fragile and unreliable), use a discovery API to learn the schema (an extra round-trip that most CMSs don't provide), or rely on documentation that might be outdated. The file-system schema eliminates all of that. The schema is the documentation. The schema is the contract. The schema is the AI agent's instruction manual.
This is the same insight that made Infrastructure as Code successful. Terraform configs, Kubernetes manifests, Docker Compose files — they all work because they're readable text files that any tool can process. The CMS content model should be no different.
Prisma proved that developers prefer schema files over database GUIs. We're betting that the same is true for content management. And as AI agents become an increasingly common way to create and manage content, that bet looks better every day.
If you want to see this workflow in action, request access and try defining a schema, syncing it, and generating a typed client. The entire process takes less than five minutes.
Want to try Laizy CMS?
Define your content models in code. Let AI handle the rest. We are onboarding teams from our waitlist now.