Database Design in Cursor AI: Complete Guide with ER Flow
Cursor + ER Flow MCP Server is the most productive database design workflow available today. Here's the step-by-step setup and the prompts that actually work.
Cursor has become the IDE of choice for developers who want AI deeply integrated into their coding workflow. It understands your entire codebase, can write and refactor code across multiple files, and maintains context across a long conversation. But Cursor's default approach to database design has always had a gap: it generates migration files and model code, but you have no visual picture of the schema accumulating across those files.
ER Flow fills that gap. By connecting Cursor to ER Flow via the MCP Server, every database change Cursor makes appears instantly on a visual ER diagram canvas. You get the speed of AI-assisted development with the clarity of a purpose-built database design tool. The schema is always visible, always accurate, and always in sync with what your AI is building.
Why Cursor + ER Flow is a Powerful Combination
The core problem with AI-only database design is visibility. When you ask Cursor to "add a notifications system," it generates a migration file. But you cannot see how the new tables relate to your existing schema without mentally parsing the migration SQL. If you ask for three features in a row, you quickly lose track of the overall structure and whether the cumulative design is coherent.
ER Flow provides the visual layer that AI tools lack. When Cursor calls ER Flow's MCP tools, tables appear on the canvas with their columns, types, and relationships drawn as connecting lines. You can see the entire schema at once, spot inconsistencies immediately, and ask Cursor to fix them, all without leaving your development environment or switching to a separate diagramming window.
Step-by-Step Setup
Start by opening your ER Flow project and navigating to Settings → MCP Server. Copy your project's MCP Server URL and API key. These are unique to your project; anyone with this key can modify your schema, so treat it like a database credential.
In your project root, create or edit .cursor/mcp.json. Add an entry for erflow with type: "sse", the project URL (https://app.erflow.io/mcp/YOUR_PROJECT_ID/sse), and an Authorization: Bearer YOUR_API_KEY header. Open Cursor's Settings and make sure the MCP feature is enabled. Restart Cursor to load the new MCP configuration. You will see "erflow" appear in the available tools section of the AI panel, with all schema tools listed.
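Putting those pieces together, a minimal .cursor/mcp.json might look like the following. The project ID and API key are placeholders for the values from your ER Flow settings page, and the field names follow Cursor's standard MCP configuration format:

```json
{
  "mcpServers": {
    "erflow": {
      "type": "sse",
      "url": "https://app.erflow.io/mcp/YOUR_PROJECT_ID/sse",
      "headers": {
        "Authorization": "Bearer YOUR_API_KEY"
      }
    }
  }
}
```

Because this file contains a live credential, keep it out of version control (for example, via .gitignore) or use an environment-specific copy per developer.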
Building a New App Schema from Scratch
The best way to experience the Cursor + ER Flow workflow is to design a schema for a new application from scratch. Open Cursor's chat, ensure the ER Flow MCP tools are active, and start with a high-level description of your domain.
Example prompt: *"I'm building a SaaS project management tool. I need tables for: organizations (the tenants), users (who belong to organizations), projects (owned by organizations), tasks (within projects), task assignments (many-to-many between tasks and users), comments on tasks, file attachments on tasks and comments, and tags that can be applied to tasks. Follow these conventions: all PKs are UUIDs, all tables have created_at and updated_at timestamps, use org_id as the tenant foreign key column name. Read the current schema first, then create everything in one batch operation."*
Cursor reads the current ER Flow schema via get-data-model-dbml, understands your conventions from both the prompt and the existing schema, then issues a single batch-operations call that creates all the tables with proper foreign keys, junction tables, indexes, and timestamps. The entire schema appears on your ER Flow canvas within seconds, ready for visual review.
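For orientation, here is a sketch of what a slice of that schema might look like in DBML, the format get-data-model-dbml returns. The table and column names follow the conventions stated in the prompt; the exact output depends on what Cursor generates:

```dbml
Table organizations {
  id uuid [pk]
  name varchar
  created_at timestamp
  updated_at timestamp
}

Table users {
  id uuid [pk]
  org_id uuid [ref: > organizations.id]  // tenant foreign key, per the stated convention
  email varchar
  created_at timestamp
  updated_at timestamp
}

Table projects {
  id uuid [pk]
  org_id uuid [ref: > organizations.id]
  name varchar
  created_at timestamp
  updated_at timestamp
}
```

The remaining tables (tasks, task_assignments, comments, attachments, tags) follow the same pattern: UUID primary keys, timestamp pairs, and explicit foreign key references.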
How Cursor Reads the Existing Schema Before Making Changes
Before making any changes, Cursor calls get-data-model-dbml to read your current schema as DBML. This is a critical step that prevents the most common AI database design mistake: creating a table that already exists, adding a column with a conflicting type, or building a relationship that contradicts an existing constraint.
You can make this behavior reliable by explicitly telling Cursor to read the schema first: *"Read the current schema before making any changes."* This is especially important after you have been designing manually in ER Flow and then switch to asking Cursor to extend the schema. The AI reads your current state, including any manual changes, and designs additions that fit naturally into what exists.
Batch Operations for Speed and Consistency
The batch-operations tool is your most important performance primitive. Instead of creating each table and relationship in separate requests, Cursor bundles everything into a single atomic operation. This matters for two reasons: speed (one network round trip instead of many) and consistency (the schema never sits in a partial state where some tables exist but the tables their foreign keys reference do not yet exist).
Explicitly prompt Cursor to use batch operations when designing interconnected tables: *"Create all these tables in a single batch operation so foreign key references are all defined together."* This produces cleaner results: no transient invalid-FK states, and a single undo step if you want to revert the entire feature.
Generating Migrations from ER Flow
Once your schema looks correct on the ER Flow canvas, use ER Flow's migration generator to produce framework-specific migrations. ER Flow supports Laravel migrations (PHP classes using the Schema builder fluent API) and Phinx (the migration library used by CakePHP and many standalone PHP projects). You can also export plain SQL compatible with any database client.
The workflow is: design in Cursor + ER Flow, review on the visual canvas, create a checkpoint in ER Flow (a named snapshot of the current schema), then generate the migration from the checkpoint. If you have already run migrations from a previous checkpoint, ER Flow generates only the diff (the changes since the last checkpoint) rather than the full schema as a single monolithic migration.
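To illustrate the diff behavior: suppose the only change since the last checkpoint was adding the tags feature. The generated Laravel migration would contain just that delta, roughly along these lines (a sketch of typical Schema builder output, not ER Flow's verbatim generation):

```php
<?php

use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

return new class extends Migration
{
    public function up(): void
    {
        // Only the delta since the last checkpoint: the new tags feature.
        Schema::create('tags', function (Blueprint $table) {
            $table->uuid('id')->primary();
            $table->uuid('org_id');
            $table->string('name');
            $table->timestamps();

            $table->foreign('org_id')->references('id')->on('organizations');
        });

        Schema::create('task_tags', function (Blueprint $table) {
            $table->uuid('task_id');
            $table->uuid('tag_id');
            $table->primary(['task_id', 'tag_id']);

            $table->foreign('task_id')->references('id')->on('tasks');
            $table->foreign('tag_id')->references('id')->on('tags');
        });
    }

    public function down(): void
    {
        Schema::dropIfExists('task_tags');
        Schema::dropIfExists('tags');
    }
};
```

The down method drops tables in reverse dependency order so the junction table's foreign keys are removed before the table they reference.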
Prompts That Work Well
Some prompt patterns consistently produce excellent results in the Cursor + ER Flow workflow:

- For new features: *"Read the current schema. Add [feature description]. Follow the existing naming conventions. Use batch operations. Create indexes on all foreign key columns and on columns I will filter by frequently."*
- For refactoring: *"Read the current schema. The orders table has too many columns. Split the shipping address fields into a separate order_addresses table with a one-to-one relationship back to orders."*
- For common patterns: *"Add soft delete support (a deleted_at timestamp) to the users, projects, and tasks tables."*
- For auditing: *"Add an audit_logs table with polymorphic support that can track changes to users, projects, and tasks, with indexes for querying by record and by user."*
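For the auditing prompt above, a polymorphic audit_logs table typically stores the target table name and record ID as plain columns rather than foreign keys, since one table must reference rows in several others. A plausible shape in PostgreSQL-flavored SQL (column names are illustrative, not ER Flow's output):

```sql
CREATE TABLE audit_logs (
    id          UUID PRIMARY KEY,
    record_type VARCHAR(50) NOT NULL,  -- 'users', 'projects', or 'tasks'
    record_id   UUID        NOT NULL,  -- ID of the changed row in that table
    user_id     UUID        NOT NULL REFERENCES users (id),
    action      VARCHAR(20) NOT NULL,  -- 'insert', 'update', 'delete'
    changes     JSONB,                 -- snapshot of the changed columns
    created_at  TIMESTAMPTZ NOT NULL DEFAULT now()
);

-- Supports "all changes to this record" queries
CREATE INDEX idx_audit_logs_record ON audit_logs (record_type, record_id);
-- Supports "all changes made by this user" queries
CREATE INDEX idx_audit_logs_user ON audit_logs (user_id);
```

The trade-off of the polymorphic design is that record_id cannot be enforced by a database foreign key; integrity is maintained at the application layer.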
Tips and Best Practices
Keep ER Flow open in a browser tab while working in Cursor. The visual feedback is immediate: you see changes appear on the canvas as Cursor applies them. This lets you catch design mistakes in real time rather than after the migration has been generated and committed.
Create ER Flow checkpoints alongside git commits. ER Flow's checkpoint system creates named snapshots of your schema. Create a checkpoint whenever you complete a feature, and name it the same as your git branch or issue ticket. This creates a clear audit trail linking schema changes to code changes, and makes it trivial to generate an incremental migration for exactly the work done in that feature branch.
Describe your conventions once, prominently. Put your naming conventions, primary key strategy, and standard column requirements (tenant_id, timestamps, soft deletes) at the top of your first Cursor prompt. Cursor applies these conventions to all subsequent changes in the conversation. You can also keep a brief conventions note in your ER Flow diagram, which Cursor reads when it calls get-data-model-dbml, making your standards visible both in the diagram and in the AI conversation.
The combination of Cursor's codebase understanding and ER Flow's visual database layer creates a development workflow where the schema is always visible, always accurate, and always in sync with your application code. It is the closest thing to having a dedicated database architect available at all times: one that knows your entire codebase, applies best practices automatically, and visualizes every decision on a shared canvas your whole team can see.