feat(sqlite): add SQLite writer for converting PostgreSQL schemas
All checks were successful

- Implement SQLite DDL writer to convert PostgreSQL schemas to SQLite-compatible SQL statements.
- Include automatic schema flattening, type mapping, auto-increment detection, and function translation.
- Add templates for creating tables, indexes, unique constraints, check constraints, and foreign keys.
- Implement tests for writer functionality and data type mapping.
@@ -59,6 +59,7 @@ RelSpec can write database schemas to multiple formats:

 #### Database DDL

 - [PostgreSQL](pkg/writers/pgsql/README.md) - PostgreSQL DDL (CREATE TABLE, etc.)
+- [SQLite](pkg/writers/sqlite/README.md) - SQLite DDL with automatic schema flattening

 #### Schema Formats

 - [DBML](pkg/writers/dbml/README.md) - Database Markup Language
@@ -185,6 +186,10 @@ relspec convert --from pgsql --from-conn "postgres://..." \

 # Convert DBML to PostgreSQL SQL
 relspec convert --from dbml --from-path schema.dbml \
   --to pgsql --to-path schema.sql
+
+# Convert PostgreSQL database to SQLite (with automatic schema flattening)
+relspec convert --from pgsql --from-conn "postgres://..." \
+  --to sqlite --to-path sqlite_schema.sql
 ```

 ### Schema Validation
32  TODO.md
@@ -1,43 +1,44 @@
 # RelSpec - TODO List

 ## Input Readers / Writers

 - [✔️] **Database Inspector**
-- [✔️] PostgreSQL driver
+- [✔️] PostgreSQL driver (reader + writer)
 - [ ] MySQL driver
-- [ ] SQLite driver
+- [✔️] SQLite driver (reader + writer with automatic schema flattening)
 - [ ] MSSQL driver
 - [✔️] Foreign key detection
 - [✔️] Index extraction
-- [*] .sql file generation with sequence and priority
+- [✔️] .sql file generation (PostgreSQL, SQLite)
 - [✔️] .dbml: Database Markup Language (DBML) for textual schema representation.
 - [✔️] Prisma schema support (PSL format) .prisma
 - [✔️] Drizzle ORM support .ts (TypeScript / JavaScript) (Mr. Edd wanted to move from Prisma to Drizzle. If you find bugs, you are welcome to open pull requests or issues)
 - [☠️] Entity Framework (.NET) model .edmx (Fuck no, EDMX files were bloated, verbose XML nightmares: hard to merge, error-prone, and a pain in teams. Microsoft wisely ditched them in EF Core for code-first. Classic overkill from the old MS era.)
 - [✔️] TypeORM support
 - [ ] .hbm.xml / schema.xml: Hibernate/Propel mappings (Java/PHP) (💲 Someone can do this, not me)
 - [ ] Django models.py (Python classes), Sequelize migrations (JS) (💲 Someone can do this, not me)
 - [ ] .avsc: Avro schema (JSON format for data serialization) (💲 Someone can do this, not me)
 - [✔️] GraphQL schema generation

 ## UI

 - [✔️] Basic UI (I went with tview)
 - [✔️] Save / Load Database
 - [✔️] Schemas / Domains / Tables
 - [ ] Add Relations
 - [ ] Add Indexes
 - [ ] Add Views
 - [ ] Add Sequences
 - [ ] Add Scripts
 - [ ] Domain / Table Assignment

 ## Documentation

 - [ ] API documentation (godoc)
 - [ ] Usage examples for each format combination

 ## Advanced Features

 - [ ] Dry-run mode for validation
 - [✔️] Diff tool for comparing specifications
 - [ ] Migration script generation
@@ -46,12 +47,13 @@
 - [ ] Watch mode for auto-regeneration

 ## Future Considerations

 - [ ] Web UI for visual editing
 - [ ] REST API server mode
 - [ ] Support for NoSQL databases

 ## Performance

 - [ ] Concurrent processing for multiple tables
 - [ ] Streaming for large databases
 - [ ] Memory optimization
@@ -33,6 +33,7 @@ import (
 	wjson "git.warky.dev/wdevs/relspecgo/pkg/writers/json"
 	wpgsql "git.warky.dev/wdevs/relspecgo/pkg/writers/pgsql"
 	wprisma "git.warky.dev/wdevs/relspecgo/pkg/writers/prisma"
+	wsqlite "git.warky.dev/wdevs/relspecgo/pkg/writers/sqlite"
 	wtypeorm "git.warky.dev/wdevs/relspecgo/pkg/writers/typeorm"
 	wyaml "git.warky.dev/wdevs/relspecgo/pkg/writers/yaml"
 )
@@ -84,6 +85,7 @@ Output formats:
   - prisma: Prisma schema files (.prisma)
   - typeorm: TypeORM entity files (TypeScript)
   - pgsql: PostgreSQL SQL schema
+  - sqlite: SQLite SQL schema (with automatic schema flattening)

 PostgreSQL Connection String Examples:
   postgres://username:password@localhost:5432/database_name
@@ -346,6 +348,9 @@ func writeDatabase(db *models.Database, dbType, outputPath, packageName, schemaF
 	case "pgsql", "postgres", "postgresql", "sql":
 		writer = wpgsql.NewWriter(writerOpts)

+	case "sqlite", "sqlite3":
+		writer = wsqlite.NewWriter(writerOpts)
+
 	case "prisma":
 		writer = wprisma.NewWriter(writerOpts)
@@ -33,6 +33,7 @@ import (
 	wjson "git.warky.dev/wdevs/relspecgo/pkg/writers/json"
 	wpgsql "git.warky.dev/wdevs/relspecgo/pkg/writers/pgsql"
 	wprisma "git.warky.dev/wdevs/relspecgo/pkg/writers/prisma"
+	wsqlite "git.warky.dev/wdevs/relspecgo/pkg/writers/sqlite"
 	wtypeorm "git.warky.dev/wdevs/relspecgo/pkg/writers/typeorm"
 	wyaml "git.warky.dev/wdevs/relspecgo/pkg/writers/yaml"
 )
@@ -87,6 +88,7 @@ Supports reading from and writing to all supported formats:
   - prisma: Prisma schema files (.prisma)
   - typeorm: TypeORM entity files (TypeScript)
   - pgsql: PostgreSQL SQL schema
+  - sqlite: SQLite SQL schema (with automatic schema flattening)

 PostgreSQL Connection String Examples:
   postgres://username:password@localhost:5432/database_name
@@ -319,6 +321,8 @@ func writeDatabaseForEdit(dbType, filePath, connString string, db *models.Databa
 		writer = wprisma.NewWriter(&writers.WriterOptions{OutputPath: filePath})
 	case "typeorm":
 		writer = wtypeorm.NewWriter(&writers.WriterOptions{OutputPath: filePath})
+	case "sqlite", "sqlite3":
+		writer = wsqlite.NewWriter(&writers.WriterOptions{OutputPath: filePath})
 	case "pgsql":
 		writer = wpgsql.NewWriter(&writers.WriterOptions{OutputPath: filePath})
 	default:
@@ -34,6 +34,7 @@ import (
 	wjson "git.warky.dev/wdevs/relspecgo/pkg/writers/json"
 	wpgsql "git.warky.dev/wdevs/relspecgo/pkg/writers/pgsql"
 	wprisma "git.warky.dev/wdevs/relspecgo/pkg/writers/prisma"
+	wsqlite "git.warky.dev/wdevs/relspecgo/pkg/writers/sqlite"
 	wtypeorm "git.warky.dev/wdevs/relspecgo/pkg/writers/typeorm"
 	wyaml "git.warky.dev/wdevs/relspecgo/pkg/writers/yaml"
 )
@@ -385,6 +386,8 @@ func writeDatabaseForMerge(dbType, filePath, connString string, db *models.Datab
 			return fmt.Errorf("%s: file path is required for TypeORM format", label)
 		}
 		writer = wtypeorm.NewWriter(&writers.WriterOptions{OutputPath: filePath, FlattenSchema: flattenSchema})
+	case "sqlite", "sqlite3":
+		writer = wsqlite.NewWriter(&writers.WriterOptions{OutputPath: filePath, FlattenSchema: flattenSchema})
 	case "pgsql":
 		writerOpts := &writers.WriterOptions{OutputPath: filePath, FlattenSchema: flattenSchema}
 		if connString != "" {
215  pkg/writers/sqlite/README.md  (new file)
@@ -0,0 +1,215 @@
# SQLite Writer

SQLite DDL (Data Definition Language) writer for RelSpec. Converts database schemas to SQLite-compatible SQL statements.

## Features

- **Automatic Schema Flattening** - SQLite doesn't support PostgreSQL-style schemas, so table names are automatically flattened (e.g., `public.users` → `public_users`)
- **Type Mapping** - Converts PostgreSQL data types to SQLite type affinities (TEXT, INTEGER, REAL, NUMERIC, BLOB)
- **Auto-Increment Detection** - Automatically converts SERIAL types and auto-increment columns to `INTEGER PRIMARY KEY AUTOINCREMENT`
- **Function Translation** - Converts PostgreSQL functions to SQLite equivalents (e.g., `now()` → `CURRENT_TIMESTAMP`)
- **Boolean Handling** - Maps boolean values to INTEGER (true=1, false=0)
- **Constraint Generation** - Creates indexes, unique constraints, and documents foreign keys
- **Identifier Quoting** - Properly quotes identifiers using double quotes

## Usage

### Convert PostgreSQL to SQLite

```bash
relspec convert --from pgsql --from-conn "postgres://user:pass@localhost/mydb" \
  --to sqlite --to-path schema.sql
```

### Convert DBML to SQLite

```bash
relspec convert --from dbml --from-path schema.dbml \
  --to sqlite --to-path schema.sql
```

### Multi-Schema Databases

SQLite doesn't support schemas, so multi-schema databases are automatically flattened:

```bash
# Input has auth.users and public.posts
# Output will have auth_users and public_posts
relspec convert --from json --from-path multi_schema.json \
  --to sqlite --to-path flattened.sql
```
## Type Mapping

| SQLite Affinity | Example PostgreSQL Types |
|-----------------|--------------------------|
| TEXT | varchar, text, char, citext, uuid, timestamp, json |
| INTEGER | int, integer, smallint, bigint, serial, boolean |
| REAL | real, float, double precision |
| NUMERIC | numeric, decimal |
| BLOB | bytea, blob |
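The mapping rule is: lowercase the type name, strip array notation and any precision/scale, then switch on the base name. A minimal standalone sketch of that rule (`mapType` and its reduced case list are illustrative; the writer's actual helper is `MapPostgreSQLType` in `pkg/writers/sqlite/datatypes.go`):

```go
package main

import (
	"fmt"
	"strings"
)

// mapType reduces a PostgreSQL type name to its SQLite affinity:
// lowercase, drop "[]" array notation, drop "(precision)", then switch.
func mapType(pgType string) string {
	t := strings.ToLower(strings.TrimSpace(pgType))
	t = strings.TrimSuffix(t, "[]")
	if i := strings.Index(t, "("); i != -1 {
		t = t[:i] // varchar(100) -> varchar
	}
	switch t {
	case "varchar", "text", "char", "citext", "uuid", "timestamp", "json":
		return "TEXT"
	case "int", "integer", "smallint", "bigint", "serial", "boolean":
		return "INTEGER"
	case "real", "float", "double precision":
		return "REAL"
	case "numeric", "decimal":
		return "NUMERIC"
	case "bytea", "blob":
		return "BLOB"
	default:
		return "TEXT" // unknown types fall back to TEXT
	}
}

func main() {
	fmt.Println(mapType("VARCHAR(255)"))  // TEXT
	fmt.Println(mapType("bigint[]"))      // INTEGER
	fmt.Println(mapType("NUMERIC(10,2)")) // NUMERIC
}
```

Note that the fallback to TEXT means enums, ranges, and other exotic types degrade gracefully rather than failing.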
## Auto-Increment Handling

Columns are converted to `INTEGER PRIMARY KEY AUTOINCREMENT` when they are integer-typed primary keys and at least one of the following holds:

- The `AutoIncrement` flag is set
- The type contains "serial"
- The default value contains "nextval"
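These criteria amount to a small predicate. A self-contained sketch, where the `column` struct is a simplified stand-in for the project's `models.Column` and the integer check is abbreviated (the writer's full version is `IsAutoIncrementCandidate` in `pkg/writers/sqlite/template_functions.go`):

```go
package main

import (
	"fmt"
	"strings"
)

// column is a simplified stand-in for the project's models.Column.
type column struct {
	Type          string
	IsPrimaryKey  bool
	AutoIncrement bool
	Default       string
}

// isAutoIncrement applies the criteria above: an integer-typed primary key
// with either the flag set, a serial type, or a nextval() default.
func isAutoIncrement(c column) bool {
	if !c.IsPrimaryKey {
		return false
	}
	t := strings.ToLower(c.Type)
	isInt := strings.HasPrefix(t, "int") || t == "smallint" || t == "bigint" ||
		strings.Contains(t, "serial")
	if !isInt {
		return false
	}
	return c.AutoIncrement ||
		strings.Contains(t, "serial") ||
		strings.Contains(strings.ToLower(c.Default), "nextval")
}

func main() {
	fmt.Println(isAutoIncrement(column{Type: "serial", IsPrimaryKey: true})) // true
	fmt.Println(isAutoIncrement(column{
		Type: "integer", IsPrimaryKey: true, Default: "nextval('users_id_seq')",
	})) // true
	fmt.Println(isAutoIncrement(column{Type: "integer", IsPrimaryKey: true})) // false
}
```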
**Example:**

```sql
-- Input (PostgreSQL)
CREATE TABLE users (
  id SERIAL PRIMARY KEY,
  name VARCHAR(100)
);

-- Output (SQLite)
CREATE TABLE "users" (
  "id" INTEGER PRIMARY KEY AUTOINCREMENT,
  "name" TEXT
);
```

## Default Value Translation

| PostgreSQL | SQLite | Notes |
|-----------|--------|-------|
| `now()`, `CURRENT_TIMESTAMP` | `CURRENT_TIMESTAMP` | Timestamp functions |
| `CURRENT_DATE` | `CURRENT_DATE` | Date function |
| `CURRENT_TIME` | `CURRENT_TIME` | Time function |
| `true`, `false` | `1`, `0` | Boolean values |
| `gen_random_uuid()` | *(removed)* | SQLite has no built-in UUID |
| `nextval(...)` | *(removed)* | Handled by AUTOINCREMENT |
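The table above can be sketched as a standalone translation function. `translateDefault` and its `isBool` parameter are illustrative simplifications; the writer implements this logic in `FormatDefault` in `pkg/writers/sqlite/template_functions.go`, which also strips PostgreSQL `::type` casts:

```go
package main

import (
	"fmt"
	"strings"
)

// translateDefault rewrites a PostgreSQL default for SQLite per the table
// above. isBool marks boolean columns, which SQLite stores as INTEGER.
func translateDefault(def string, isBool bool) string {
	d := strings.ToLower(def)
	switch {
	case strings.Contains(d, "current_timestamp"), strings.Contains(d, "now()"):
		return "CURRENT_TIMESTAMP"
	case strings.Contains(d, "current_date"):
		return "CURRENT_DATE"
	case strings.Contains(d, "current_time"):
		return "CURRENT_TIME"
	case strings.Contains(d, "uuid"), strings.Contains(d, "nextval"):
		return "" // no SQLite equivalent; dropped (AUTOINCREMENT covers nextval)
	}
	if isBool {
		switch d {
		case "true", "t":
			return "1"
		case "false", "f":
			return "0"
		}
	}
	// strip PostgreSQL-style casts such as 'a'::text
	return strings.ReplaceAll(def, "::text", "")
}

func main() {
	fmt.Printf("%q\n", translateDefault("now()", false))             // "CURRENT_TIMESTAMP"
	fmt.Printf("%q\n", translateDefault("false", true))              // "0"
	fmt.Printf("%q\n", translateDefault("gen_random_uuid()", false)) // ""
}
```

The check order matters: `current_timestamp` must be tested before `current_time`, since the latter is a substring of the former.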
## Foreign Keys

Foreign keys are generated as commented-out ALTER TABLE statements for reference:

```sql
-- Foreign key: fk_posts_user_id
-- ALTER TABLE "posts" ADD CONSTRAINT "posts_fk_posts_user_id"
--   FOREIGN KEY ("user_id")
--   REFERENCES "users" ("id");
-- Note: Foreign keys should be defined in CREATE TABLE for better SQLite compatibility
```

Note that SQLite does not support adding constraints via ALTER TABLE, so for production use, define foreign keys directly in the CREATE TABLE statements.

## Constraints

- **Primary Keys**: Inline for auto-increment columns, separate constraint for composite keys
- **Unique Constraints**: Converted to `CREATE UNIQUE INDEX` statements
- **Check Constraints**: Generated as comments (should be added to CREATE TABLE manually)
- **Indexes**: Generated without PostgreSQL-specific features (no GIN, GiST, operator classes)
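Because every table lands in a single namespace after flattening, generated index and constraint names are prefixed with the flattened table name to avoid collisions. A minimal sketch of that naming scheme (`constraintName` is illustrative; the writer's helper is `FormatConstraintName`, which delegates the flattening itself to `writers.QualifiedTableName`):

```go
package main

import "fmt"

// constraintName prefixes a constraint with the flattened table name so
// names stay unique once all schemas collapse into SQLite's single
// namespace. Without a schema, the constraint name passes through as-is.
func constraintName(schema, table, constraint string) string {
	if schema == "" {
		return constraint
	}
	flat := schema + "_" + table // flattening: auth.users -> auth_users
	return fmt.Sprintf("%s_%s", flat, constraint)
}

func main() {
	fmt.Println(constraintName("auth", "users", "users_username_key"))
	// auth_users_users_username_key
}
```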
## Output Structure

Generated SQL follows this order:

1. Header comments
2. `PRAGMA foreign_keys = ON;`
3. CREATE TABLE statements (sorted by schema, then table)
4. CREATE INDEX statements
5. CREATE UNIQUE INDEX statements (for unique constraints)
6. Check constraint comments
7. Foreign key comments

## Example

**Input (multi-schema PostgreSQL):**

```sql
CREATE SCHEMA auth;
CREATE TABLE auth.users (
  id SERIAL PRIMARY KEY,
  username VARCHAR(50) UNIQUE NOT NULL,
  created_at TIMESTAMP DEFAULT now()
);

CREATE SCHEMA public;
CREATE TABLE public.posts (
  id SERIAL PRIMARY KEY,
  user_id INTEGER REFERENCES auth.users(id),
  title VARCHAR(200) NOT NULL,
  published BOOLEAN DEFAULT false
);
```

**Output (SQLite with flattened schemas):**

```sql
-- SQLite Database Schema
-- Database: mydb
-- Generated by RelSpec
-- Note: Schema names have been flattened (e.g., public.users -> public_users)

-- Enable foreign key constraints
PRAGMA foreign_keys = ON;

-- Schema: auth (flattened into table names)

CREATE TABLE "auth_users" (
  "id" INTEGER PRIMARY KEY AUTOINCREMENT,
  "username" TEXT NOT NULL,
  "created_at" TEXT DEFAULT CURRENT_TIMESTAMP
);

CREATE UNIQUE INDEX "auth_users_users_username_key" ON "auth_users" ("username");

-- Schema: public (flattened into table names)

CREATE TABLE "public_posts" (
  "id" INTEGER PRIMARY KEY AUTOINCREMENT,
  "user_id" INTEGER NOT NULL,
  "title" TEXT NOT NULL,
  "published" INTEGER DEFAULT 0
);

-- Foreign key: posts_user_id_fkey
-- ALTER TABLE "public_posts" ADD CONSTRAINT "public_posts_posts_user_id_fkey"
--   FOREIGN KEY ("user_id")
--   REFERENCES "auth_users" ("id");
-- Note: Foreign keys should be defined in CREATE TABLE for better SQLite compatibility
```

## Programmatic Usage

```go
import (
	"git.warky.dev/wdevs/relspecgo/pkg/models"
	"git.warky.dev/wdevs/relspecgo/pkg/writers"
	"git.warky.dev/wdevs/relspecgo/pkg/writers/sqlite"
)

func main() {
	// Create writer (automatically enables schema flattening)
	writer := sqlite.NewWriter(&writers.WriterOptions{
		OutputPath: "schema.sql",
	})

	// Write database schema
	db := &models.Database{
		Name: "mydb",
		Schemas: []*models.Schema{
			// ... your schema data
		},
	}

	err := writer.WriteDatabase(db)
	if err != nil {
		panic(err)
	}
}
```

## Notes

- Schema flattening is **always enabled** for SQLite output (cannot be disabled)
- Constraint and index names are prefixed with the flattened table name to avoid collisions
- Generated SQL is compatible with SQLite 3.x
- Foreign key constraints require `PRAGMA foreign_keys = ON;` to be enforced
- For complex schemas, review and test the generated SQL before use in production
89  pkg/writers/sqlite/datatypes.go  (new file)
@@ -0,0 +1,89 @@
package sqlite

import (
	"strings"
)

// SQLite type affinities
const (
	TypeText    = "TEXT"
	TypeInteger = "INTEGER"
	TypeReal    = "REAL"
	TypeNumeric = "NUMERIC"
	TypeBlob    = "BLOB"
)

// MapPostgreSQLType maps PostgreSQL data types to SQLite type affinities
func MapPostgreSQLType(pgType string) string {
	// Normalize the type
	normalized := strings.ToLower(strings.TrimSpace(pgType))

	// Remove array notation if present
	normalized = strings.TrimSuffix(normalized, "[]")

	// Remove precision/scale if present
	if idx := strings.Index(normalized, "("); idx != -1 {
		normalized = normalized[:idx]
	}

	// Map to SQLite type affinity
	switch normalized {
	// TEXT affinity
	case "varchar", "character varying", "text", "char", "character",
		"citext", "uuid", "timestamp", "timestamptz", "timestamp with time zone",
		"timestamp without time zone", "date", "time", "timetz", "time with time zone",
		"time without time zone", "json", "jsonb", "xml", "inet", "cidr", "macaddr":
		return TypeText

	// INTEGER affinity
	case "int", "int2", "int4", "int8", "integer", "smallint", "bigint",
		"serial", "smallserial", "bigserial", "boolean", "bool":
		return TypeInteger

	// REAL affinity
	case "real", "float", "float4", "float8", "double precision":
		return TypeReal

	// NUMERIC affinity
	case "numeric", "decimal", "money":
		return TypeNumeric

	// BLOB affinity
	case "bytea", "blob":
		return TypeBlob

	default:
		// Default to TEXT for unknown types
		return TypeText
	}
}

// IsIntegerType checks if a column type should be treated as integer
func IsIntegerType(colType string) bool {
	normalized := strings.ToLower(strings.TrimSpace(colType))
	normalized = strings.TrimSuffix(normalized, "[]")
	if idx := strings.Index(normalized, "("); idx != -1 {
		normalized = normalized[:idx]
	}

	switch normalized {
	case "int", "int2", "int4", "int8", "integer", "smallint", "bigint",
		"serial", "smallserial", "bigserial":
		return true
	default:
		return false
	}
}

// MapBooleanValue converts PostgreSQL boolean literals to SQLite (0/1)
func MapBooleanValue(value string) string {
	normalized := strings.ToLower(strings.TrimSpace(value))
	switch normalized {
	case "true", "t", "yes", "y", "1":
		return "1"
	case "false", "f", "no", "n", "0":
		return "0"
	default:
		return value
	}
}
146  pkg/writers/sqlite/template_functions.go  (new file)
@@ -0,0 +1,146 @@
package sqlite

import (
	"fmt"
	"strings"
	"text/template"

	"git.warky.dev/wdevs/relspecgo/pkg/models"
	"git.warky.dev/wdevs/relspecgo/pkg/writers"
)

// GetTemplateFuncs returns template functions for SQLite SQL generation
func GetTemplateFuncs(opts *writers.WriterOptions) template.FuncMap {
	return template.FuncMap{
		"quote_ident":      QuoteIdentifier,
		"map_type":         MapPostgreSQLType,
		"is_autoincrement": IsAutoIncrementCandidate,
		"qualified_table_name": func(schema, table string) string {
			return writers.QualifiedTableName(schema, table, opts.FlattenSchema)
		},
		"format_default": FormatDefault,
		"format_constraint_name": func(schema, table, constraint string) string {
			return FormatConstraintName(schema, table, constraint, opts)
		},
		"join":  strings.Join,
		"lower": strings.ToLower,
		"upper": strings.ToUpper,
	}
}

// QuoteIdentifier quotes an identifier for SQLite (double quotes)
func QuoteIdentifier(name string) string {
	// SQLite uses double quotes for identifiers
	// Escape any existing double quotes by doubling them
	escaped := strings.ReplaceAll(name, `"`, `""`)
	return fmt.Sprintf(`"%s"`, escaped)
}

// IsAutoIncrementCandidate checks if a column should use AUTOINCREMENT
func IsAutoIncrementCandidate(col *models.Column) bool {
	// Must be a primary key
	if !col.IsPrimaryKey {
		return false
	}

	// Must be an integer type
	if !IsIntegerType(col.Type) {
		return false
	}

	// Check AutoIncrement field
	if col.AutoIncrement {
		return true
	}

	// Check if default suggests auto-increment
	if col.Default != nil {
		defaultStr, ok := col.Default.(string)
		if ok {
			defaultLower := strings.ToLower(defaultStr)
			if strings.Contains(defaultLower, "nextval") ||
				strings.Contains(defaultLower, "autoincrement") ||
				strings.Contains(defaultLower, "auto_increment") {
				return true
			}
		}
	}

	// Serial types are auto-increment
	typeLower := strings.ToLower(col.Type)
	return strings.Contains(typeLower, "serial")
}

// FormatDefault formats a default value for SQLite
func FormatDefault(col *models.Column) string {
	if col.Default == nil {
		return ""
	}

	// Skip auto-increment defaults (handled by AUTOINCREMENT keyword)
	if IsAutoIncrementCandidate(col) {
		return ""
	}

	// Convert to string
	defaultStr, ok := col.Default.(string)
	if !ok {
		// If not a string, convert to string representation
		defaultStr = fmt.Sprintf("%v", col.Default)
	}

	if defaultStr == "" {
		return ""
	}

	// Convert PostgreSQL-specific functions to SQLite equivalents
	defaultLower := strings.ToLower(defaultStr)

	// Current timestamp functions
	if strings.Contains(defaultLower, "current_timestamp") ||
		strings.Contains(defaultLower, "now()") {
		return "CURRENT_TIMESTAMP"
	}

	// Current date
	if strings.Contains(defaultLower, "current_date") {
		return "CURRENT_DATE"
	}

	// Current time
	if strings.Contains(defaultLower, "current_time") {
		return "CURRENT_TIME"
	}

	// Boolean values
	sqliteType := MapPostgreSQLType(col.Type)
	if sqliteType == TypeInteger {
		typeLower := strings.ToLower(col.Type)
		if strings.Contains(typeLower, "bool") {
			return MapBooleanValue(defaultStr)
		}
	}

	// UUID generation - SQLite doesn't have built-in UUID, comment it out
	if strings.Contains(defaultLower, "uuid") || strings.Contains(defaultLower, "gen_random_uuid") {
		return "" // Remove UUID defaults, users must handle this
	}

	// Remove PostgreSQL-specific casting
	defaultStr = strings.ReplaceAll(defaultStr, "::text", "")
	defaultStr = strings.ReplaceAll(defaultStr, "::integer", "")
	defaultStr = strings.ReplaceAll(defaultStr, "::bigint", "")
	defaultStr = strings.ReplaceAll(defaultStr, "::boolean", "")

	return defaultStr
}

// FormatConstraintName formats a constraint name with table prefix if flattening
func FormatConstraintName(schema, table, constraint string, opts *writers.WriterOptions) string {
	if opts.FlattenSchema && schema != "" {
		// Prefix constraint with flattened table name
		flatTable := writers.QualifiedTableName(schema, table, opts.FlattenSchema)
		return fmt.Sprintf("%s_%s", flatTable, constraint)
	}
	return constraint
}
174  pkg/writers/sqlite/templates.go  (new file)
@@ -0,0 +1,174 @@
package sqlite

import (
	"bytes"
	"embed"
	"fmt"
	"text/template"

	"git.warky.dev/wdevs/relspecgo/pkg/models"
	"git.warky.dev/wdevs/relspecgo/pkg/writers"
)

//go:embed templates/*.tmpl
var templateFS embed.FS

// TemplateExecutor manages and executes SQLite SQL templates
type TemplateExecutor struct {
	templates *template.Template
	options   *writers.WriterOptions
}

// NewTemplateExecutor creates a new template executor for SQLite
func NewTemplateExecutor(opts *writers.WriterOptions) (*TemplateExecutor, error) {
	// Create template with SQLite-specific functions
	funcMap := GetTemplateFuncs(opts)

	tmpl, err := template.New("").Funcs(funcMap).ParseFS(templateFS, "templates/*.tmpl")
	if err != nil {
		return nil, fmt.Errorf("failed to parse templates: %w", err)
	}

	return &TemplateExecutor{
		templates: tmpl,
		options:   opts,
	}, nil
}

// Template data structures

// TableTemplateData contains data for table template
type TableTemplateData struct {
	Schema     string
	Name       string
	Columns    []*models.Column
	PrimaryKey *models.Constraint
}

// IndexTemplateData contains data for index template
type IndexTemplateData struct {
	Schema  string
	Table   string
	Name    string
	Columns []string
}

// ConstraintTemplateData contains data for constraint templates
type ConstraintTemplateData struct {
	Schema         string
	Table          string
	Name           string
	Columns        []string
	Expression     string
	ForeignSchema  string
	ForeignTable   string
	ForeignColumns []string
	OnDelete       string
	OnUpdate       string
}

// Execute methods

// ExecutePragmaForeignKeys executes the pragma foreign keys template
func (te *TemplateExecutor) ExecutePragmaForeignKeys() (string, error) {
	var buf bytes.Buffer
	err := te.templates.ExecuteTemplate(&buf, "pragma_foreign_keys.tmpl", nil)
	if err != nil {
		return "", fmt.Errorf("failed to execute pragma_foreign_keys template: %w", err)
	}
	return buf.String(), nil
}

// ExecuteCreateTable executes the create table template
func (te *TemplateExecutor) ExecuteCreateTable(data TableTemplateData) (string, error) {
	var buf bytes.Buffer
	err := te.templates.ExecuteTemplate(&buf, "create_table.tmpl", data)
	if err != nil {
		return "", fmt.Errorf("failed to execute create_table template: %w", err)
	}
	return buf.String(), nil
}

// ExecuteCreateIndex executes the create index template
func (te *TemplateExecutor) ExecuteCreateIndex(data IndexTemplateData) (string, error) {
	var buf bytes.Buffer
	err := te.templates.ExecuteTemplate(&buf, "create_index.tmpl", data)
	if err != nil {
		return "", fmt.Errorf("failed to execute create_index template: %w", err)
	}
	return buf.String(), nil
}

// ExecuteCreateUniqueConstraint executes the create unique constraint template
func (te *TemplateExecutor) ExecuteCreateUniqueConstraint(data ConstraintTemplateData) (string, error) {
	var buf bytes.Buffer
err := te.templates.ExecuteTemplate(&buf, "create_unique_constraint.tmpl", data)
|
||||||
|
if err != nil {
|
||||||
|
return "", fmt.Errorf("failed to execute create_unique_constraint template: %w", err)
|
||||||
|
}
|
||||||
|
return buf.String(), nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// ExecuteCreateCheckConstraint executes the create check constraint template
|
||||||
|
func (te *TemplateExecutor) ExecuteCreateCheckConstraint(data ConstraintTemplateData) (string, error) {
|
||||||
|
var buf bytes.Buffer
|
||||||
|
err := te.templates.ExecuteTemplate(&buf, "create_check_constraint.tmpl", data)
|
||||||
|
if err != nil {
|
||||||
|
return "", fmt.Errorf("failed to execute create_check_constraint template: %w", err)
|
||||||
|
}
|
||||||
|
return buf.String(), nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// ExecuteCreateForeignKey executes the create foreign key template
|
||||||
|
func (te *TemplateExecutor) ExecuteCreateForeignKey(data ConstraintTemplateData) (string, error) {
|
||||||
|
var buf bytes.Buffer
|
||||||
|
err := te.templates.ExecuteTemplate(&buf, "create_foreign_key.tmpl", data)
|
||||||
|
if err != nil {
|
||||||
|
return "", fmt.Errorf("failed to execute create_foreign_key template: %w", err)
|
||||||
|
}
|
||||||
|
return buf.String(), nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// Helper functions to build template data from models
|
||||||
|
|
||||||
|
// BuildTableTemplateData builds TableTemplateData from a models.Table
|
||||||
|
func BuildTableTemplateData(schema string, table *models.Table) TableTemplateData {
|
||||||
|
// Get sorted columns
|
||||||
|
columns := make([]*models.Column, 0, len(table.Columns))
|
||||||
|
for _, col := range table.Columns {
|
||||||
|
columns = append(columns, col)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Find primary key constraint
|
||||||
|
var pk *models.Constraint
|
||||||
|
for _, constraint := range table.Constraints {
|
||||||
|
if constraint.Type == models.PrimaryKeyConstraint {
|
||||||
|
pk = constraint
|
||||||
|
break
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// If no explicit primary key constraint, build one from columns with IsPrimaryKey=true
|
||||||
|
if pk == nil {
|
||||||
|
pkCols := []string{}
|
||||||
|
for _, col := range table.Columns {
|
||||||
|
if col.IsPrimaryKey {
|
||||||
|
pkCols = append(pkCols, col.Name)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
if len(pkCols) > 0 {
|
||||||
|
pk = &models.Constraint{
|
||||||
|
Name: "pk_" + table.Name,
|
||||||
|
Type: models.PrimaryKeyConstraint,
|
||||||
|
Columns: pkCols,
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
return TableTemplateData{
|
||||||
|
Schema: schema,
|
||||||
|
Name: table.Name,
|
||||||
|
Columns: columns,
|
||||||
|
PrimaryKey: pk,
|
||||||
|
}
|
||||||
|
}
|
||||||
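The executor above parses every embedded `.tmpl` file through a single `FuncMap` of SQLite-specific helpers. A minimal self-contained sketch of that `text/template` pattern, using an inline template instead of `embed.FS` and a `quoteIdent` helper written here for illustration (it mirrors the behavior the `quote_ident` template function appears to provide, but is not the project's implementation):

```go
package main

import (
	"bytes"
	"fmt"
	"strings"
	"text/template"
)

// quoteIdent wraps an identifier in double quotes and escapes embedded
// double quotes by doubling them (standard SQL identifier quoting).
func quoteIdent(name string) string {
	return `"` + strings.ReplaceAll(name, `"`, `""`) + `"`
}

// render parses a template with the helper registered and executes it,
// the same parse-then-execute flow the TemplateExecutor uses.
func render(tmplText string, data any) (string, error) {
	funcMap := template.FuncMap{"quote_ident": quoteIdent}
	tmpl, err := template.New("demo").Funcs(funcMap).Parse(tmplText)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, data); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	sql, err := render(
		`CREATE INDEX {{quote_ident .Name}} ON {{quote_ident .Table}};`,
		map[string]string{"Name": "idx_users_email", "Table": "public_users"},
	)
	if err != nil {
		panic(err)
	}
	fmt.Println(sql) // CREATE INDEX "idx_users_email" ON "public_users";
}
```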
4  pkg/writers/sqlite/templates/create_check_constraint.tmpl  Normal file
@@ -0,0 +1,4 @@
-- Check constraint: {{.Name}}
-- {{.Expression}}
-- Note: SQLite only supports CHECK constraints inside CREATE TABLE; they cannot be added with ALTER TABLE
-- This constraint must be added manually to the table definition above
6  pkg/writers/sqlite/templates/create_foreign_key.tmpl  Normal file
@@ -0,0 +1,6 @@
-- Foreign key: {{.Name}}
-- ALTER TABLE {{quote_ident (qualified_table_name .Schema .Table)}} ADD CONSTRAINT {{quote_ident (format_constraint_name .Schema .Table .Name)}}
-- FOREIGN KEY ({{range $i, $col := .Columns}}{{if $i}}, {{end}}{{quote_ident $col}}{{end}})
-- REFERENCES {{quote_ident (qualified_table_name .ForeignSchema .ForeignTable)}} ({{range $i, $col := .ForeignColumns}}{{if $i}}, {{end}}{{quote_ident $col}}{{end}})
-- {{if .OnDelete}}ON DELETE {{.OnDelete}}{{end}}{{if .OnUpdate}} ON UPDATE {{.OnUpdate}}{{end}};
-- Note: SQLite does not support ALTER TABLE ... ADD CONSTRAINT; foreign keys must be defined in CREATE TABLE
1  pkg/writers/sqlite/templates/create_index.tmpl  Normal file
@@ -0,0 +1 @@
CREATE INDEX {{quote_ident (format_constraint_name .Schema .Table .Name)}} ON {{quote_ident (qualified_table_name .Schema .Table)}} ({{range $i, $col := .Columns}}{{if $i}}, {{end}}{{quote_ident $col}}{{end}});
9  pkg/writers/sqlite/templates/create_table.tmpl  Normal file
@@ -0,0 +1,9 @@
CREATE TABLE {{quote_ident (qualified_table_name .Schema .Name)}} (
{{- $hasAutoIncrement := false}}
{{- range $i, $col := .Columns}}{{if $i}},{{end}}
    {{quote_ident $col.Name}} {{map_type $col.Type}}{{if is_autoincrement $col}}{{$hasAutoIncrement = true}} PRIMARY KEY AUTOINCREMENT{{else}}{{if $col.NotNull}} NOT NULL{{end}}{{if ne (format_default $col) ""}} DEFAULT {{format_default $col}}{{end}}{{end}}
{{- end}}
{{- if and .PrimaryKey (not $hasAutoIncrement)}}{{if gt (len .Columns) 0}},{{end}}
    PRIMARY KEY ({{range $i, $colName := .PrimaryKey.Columns}}{{if $i}}, {{end}}{{quote_ident $colName}}{{end}})
{{- end}}
);
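The `is_autoincrement` guard in the table template decides when a column is emitted as `INTEGER PRIMARY KEY AUTOINCREMENT` instead of a plain typed column. A self-contained sketch of the detection this helper appears to perform, inferred from the `TestIsAutoIncrementCandidate` cases in the test file (the function name and signature here are illustrative, not the project's API):

```go
package main

import (
	"fmt"
	"strings"
)

// isAutoIncrement reports whether a column should become
// INTEGER PRIMARY KEY AUTOINCREMENT in SQLite: a serial-style
// primary key, or an integer primary key backed by a PostgreSQL
// sequence default (nextval).
func isAutoIncrement(typ string, isPrimaryKey bool, def string) bool {
	if !isPrimaryKey {
		return false
	}
	t := strings.ToLower(typ)
	if t == "serial" || t == "bigserial" || t == "smallserial" {
		return true
	}
	if (t == "integer" || t == "int" || t == "bigint") &&
		strings.Contains(strings.ToLower(def), "nextval(") {
		return true
	}
	return false
}

func main() {
	fmt.Println(isAutoIncrement("serial", true, "nextval('seq')"))                     // true
	fmt.Println(isAutoIncrement("integer", true, "nextval('users_id_seq'::regclass)")) // true
	fmt.Println(isAutoIncrement("integer", false, "0"))                                // false
	fmt.Println(isAutoIncrement("varchar", true, ""))                                  // false
}
```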
1  pkg/writers/sqlite/templates/create_unique_constraint.tmpl  Normal file
@@ -0,0 +1 @@
CREATE UNIQUE INDEX {{quote_ident (format_constraint_name .Schema .Table .Name)}} ON {{quote_ident (qualified_table_name .Schema .Table)}} ({{range $i, $col := .Columns}}{{if $i}}, {{end}}{{quote_ident $col}}{{end}});
2  pkg/writers/sqlite/templates/pragma_foreign_keys.tmpl  Normal file
@@ -0,0 +1,2 @@
-- Enable foreign key constraints
PRAGMA foreign_keys = ON;
291  pkg/writers/sqlite/writer.go  Normal file
@@ -0,0 +1,291 @@
package sqlite

import (
	"fmt"
	"io"
	"os"
	"strings"

	"git.warky.dev/wdevs/relspecgo/pkg/models"
	"git.warky.dev/wdevs/relspecgo/pkg/writers"
)

// Writer implements the Writer interface for SQLite SQL output
type Writer struct {
	options  *writers.WriterOptions
	writer   io.Writer
	executor *TemplateExecutor
}

// NewWriter creates a new SQLite SQL writer.
// SQLite doesn't support schemas, so FlattenSchema is automatically enabled.
func NewWriter(options *writers.WriterOptions) *Writer {
	// Force schema flattening for SQLite
	options.FlattenSchema = true

	// Templates are embedded, so a parse error here would indicate a
	// build-time defect; the error is deliberately discarded.
	executor, _ := NewTemplateExecutor(options)
	return &Writer{
		options:  options,
		executor: executor,
	}
}

// WriteDatabase writes the entire database schema as SQLite SQL
func (w *Writer) WriteDatabase(db *models.Database) error {
	var writer io.Writer
	var file *os.File
	var err error

	// Use existing writer if already set (for testing)
	if w.writer != nil {
		writer = w.writer
	} else if w.options.OutputPath != "" {
		// Determine output destination
		file, err = os.Create(w.options.OutputPath)
		if err != nil {
			return fmt.Errorf("failed to create output file: %w", err)
		}
		defer file.Close()
		writer = file
	} else {
		writer = os.Stdout
	}

	w.writer = writer

	// Write header comment
	fmt.Fprintf(w.writer, "-- SQLite Database Schema\n")
	fmt.Fprintf(w.writer, "-- Database: %s\n", db.Name)
	fmt.Fprintf(w.writer, "-- Generated by RelSpec\n")
	fmt.Fprintf(w.writer, "-- Note: Schema names have been flattened (e.g., public.users -> public_users)\n\n")

	// Enable foreign keys
	pragma, err := w.executor.ExecutePragmaForeignKeys()
	if err != nil {
		return fmt.Errorf("failed to generate pragma statement: %w", err)
	}
	fmt.Fprintf(w.writer, "%s\n", pragma)

	// Process each schema in the database
	for _, schema := range db.Schemas {
		if err := w.WriteSchema(schema); err != nil {
			return fmt.Errorf("failed to write schema %s: %w", schema.Name, err)
		}
	}

	return nil
}

// WriteSchema writes a single schema as SQLite SQL
func (w *Writer) WriteSchema(schema *models.Schema) error {
	// SQLite doesn't have schemas, so we just write a comment
	if schema.Name != "" {
		fmt.Fprintf(w.writer, "-- Schema: %s (flattened into table names)\n\n", schema.Name)
	}

	// Phase 1: Create tables
	for _, table := range schema.Tables {
		if err := w.writeTable(schema.Name, table); err != nil {
			return fmt.Errorf("failed to write table %s: %w", table.Name, err)
		}
	}

	// Phase 2: Create indexes
	for _, table := range schema.Tables {
		if err := w.writeIndexes(schema.Name, table); err != nil {
			return fmt.Errorf("failed to write indexes for table %s: %w", table.Name, err)
		}
	}

	// Phase 3: Create unique constraints (as unique indexes)
	for _, table := range schema.Tables {
		if err := w.writeUniqueConstraints(schema.Name, table); err != nil {
			return fmt.Errorf("failed to write unique constraints for table %s: %w", table.Name, err)
		}
	}

	// Phase 4: Check constraints (as comments, since SQLite requires them in CREATE TABLE)
	for _, table := range schema.Tables {
		if err := w.writeCheckConstraints(schema.Name, table); err != nil {
			return fmt.Errorf("failed to write check constraints for table %s: %w", table.Name, err)
		}
	}

	// Phase 5: Foreign keys (as comments for compatibility)
	for _, table := range schema.Tables {
		if err := w.writeForeignKeys(schema.Name, table); err != nil {
			return fmt.Errorf("failed to write foreign keys for table %s: %w", table.Name, err)
		}
	}

	return nil
}

// WriteTable writes a single table as SQLite SQL
func (w *Writer) WriteTable(table *models.Table) error {
	return w.writeTable("", table)
}

// writeTable is the internal implementation
func (w *Writer) writeTable(schema string, table *models.Table) error {
	// Build table template data
	data := BuildTableTemplateData(schema, table)

	// Execute template
	sql, err := w.executor.ExecuteCreateTable(data)
	if err != nil {
		return fmt.Errorf("failed to execute create table template: %w", err)
	}

	fmt.Fprintf(w.writer, "%s\n", sql)
	return nil
}

// writeIndexes writes indexes for a table
func (w *Writer) writeIndexes(schema string, table *models.Table) error {
	for _, index := range table.Indexes {
		// Skip primary key indexes
		if strings.HasSuffix(index.Name, "_pkey") {
			continue
		}

		// Skip unique indexes (handled separately as unique constraints)
		if index.Unique {
			continue
		}

		data := IndexTemplateData{
			Schema:  schema,
			Table:   table.Name,
			Name:    index.Name,
			Columns: index.Columns,
		}

		sql, err := w.executor.ExecuteCreateIndex(data)
		if err != nil {
			return fmt.Errorf("failed to execute create index template: %w", err)
		}

		fmt.Fprintf(w.writer, "%s\n", sql)
	}

	return nil
}

// writeUniqueConstraints writes unique constraints as unique indexes
func (w *Writer) writeUniqueConstraints(schema string, table *models.Table) error {
	for _, constraint := range table.Constraints {
		if constraint.Type != models.UniqueConstraint {
			continue
		}

		data := ConstraintTemplateData{
			Schema:  schema,
			Table:   table.Name,
			Name:    constraint.Name,
			Columns: constraint.Columns,
		}

		sql, err := w.executor.ExecuteCreateUniqueConstraint(data)
		if err != nil {
			return fmt.Errorf("failed to execute create unique constraint template: %w", err)
		}

		fmt.Fprintf(w.writer, "%s\n", sql)
	}

	// Also handle unique indexes from the Indexes map
	for _, index := range table.Indexes {
		if !index.Unique {
			continue
		}

		// Skip if already handled as a constraint
		alreadyHandled := false
		for _, constraint := range table.Constraints {
			if constraint.Type == models.UniqueConstraint && constraint.Name == index.Name {
				alreadyHandled = true
				break
			}
		}
		if alreadyHandled {
			continue
		}

		data := ConstraintTemplateData{
			Schema:  schema,
			Table:   table.Name,
			Name:    index.Name,
			Columns: index.Columns,
		}

		sql, err := w.executor.ExecuteCreateUniqueConstraint(data)
		if err != nil {
			return fmt.Errorf("failed to execute create unique index template: %w", err)
		}

		fmt.Fprintf(w.writer, "%s\n", sql)
	}

	return nil
}

// writeCheckConstraints writes check constraints as comments
func (w *Writer) writeCheckConstraints(schema string, table *models.Table) error {
	for _, constraint := range table.Constraints {
		if constraint.Type != models.CheckConstraint {
			continue
		}

		data := ConstraintTemplateData{
			Schema:     schema,
			Table:      table.Name,
			Name:       constraint.Name,
			Expression: constraint.Expression,
		}

		sql, err := w.executor.ExecuteCreateCheckConstraint(data)
		if err != nil {
			return fmt.Errorf("failed to execute create check constraint template: %w", err)
		}

		fmt.Fprintf(w.writer, "%s\n", sql)
	}

	return nil
}

// writeForeignKeys writes foreign keys as comments
func (w *Writer) writeForeignKeys(schema string, table *models.Table) error {
	for _, constraint := range table.Constraints {
		if constraint.Type != models.ForeignKeyConstraint {
			continue
		}

		refSchema := constraint.ReferencedSchema
		if refSchema == "" {
			refSchema = schema
		}

		data := ConstraintTemplateData{
			Schema:         schema,
			Table:          table.Name,
			Name:           constraint.Name,
			Columns:        constraint.Columns,
			ForeignSchema:  refSchema,
			ForeignTable:   constraint.ReferencedTable,
			ForeignColumns: constraint.ReferencedColumns,
			OnDelete:       constraint.OnDelete,
			OnUpdate:       constraint.OnUpdate,
		}

		sql, err := w.executor.ExecuteCreateForeignKey(data)
		if err != nil {
			return fmt.Errorf("failed to execute create foreign key template: %w", err)
		}

		fmt.Fprintf(w.writer, "%s\n", sql)
	}

	return nil
}
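The writer relies on two template helpers, `qualified_table_name` and `format_constraint_name`, to collapse PostgreSQL's schema-qualified names into SQLite's single namespace. A self-contained sketch of the naming scheme implied by the tests (`public.users` becomes `public_users`, and index names gain the flattened table as a prefix); the Go function names here are illustrative stand-ins for the template helpers:

```go
package main

import "fmt"

// qualifiedTableName flattens schema.table into a single identifier,
// since SQLite has no schema namespace.
func qualifiedTableName(schema, table string) string {
	if schema == "" {
		return table
	}
	return schema + "_" + table
}

// formatConstraintName prefixes an index/constraint name with the
// flattened table name so names stay unique across all schemas
// (matches the expectation in TestWriteIndexes).
func formatConstraintName(schema, table, name string) string {
	return qualifiedTableName(schema, table) + "_" + name
}

func main() {
	fmt.Println(qualifiedTableName("public", "users"))                      // public_users
	fmt.Println(formatConstraintName("public", "users", "idx_users_email")) // public_users_idx_users_email
}
```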
418  pkg/writers/sqlite/writer_test.go  Normal file
@@ -0,0 +1,418 @@
package sqlite

import (
	"bytes"
	"strings"
	"testing"

	"git.warky.dev/wdevs/relspecgo/pkg/models"
	"git.warky.dev/wdevs/relspecgo/pkg/writers"
)

func TestNewWriter(t *testing.T) {
	opts := &writers.WriterOptions{
		OutputPath:    "/tmp/test.sql",
		FlattenSchema: false, // Should be forced to true
	}

	writer := NewWriter(opts)

	if !writer.options.FlattenSchema {
		t.Error("Expected FlattenSchema to be forced to true for SQLite")
	}
}

func TestWriteDatabase(t *testing.T) {
	db := &models.Database{
		Name: "testdb",
		Schemas: []*models.Schema{
			{
				Name: "public",
				Tables: []*models.Table{
					{
						Name: "users",
						Columns: map[string]*models.Column{
							"id": {
								Name:         "id",
								Type:         "serial",
								NotNull:      true,
								IsPrimaryKey: true,
								Default:      "nextval('users_id_seq'::regclass)",
							},
							"email": {
								Name:    "email",
								Type:    "varchar(255)",
								NotNull: true,
							},
							"active": {
								Name:    "active",
								Type:    "boolean",
								NotNull: true,
								Default: "true",
							},
						},
						Constraints: map[string]*models.Constraint{
							"pk_users": {
								Name:    "pk_users",
								Type:    models.PrimaryKeyConstraint,
								Columns: []string{"id"},
							},
						},
					},
				},
			},
		},
	}

	var buf bytes.Buffer
	opts := &writers.WriterOptions{}
	writer := NewWriter(opts)
	writer.writer = &buf

	err := writer.WriteDatabase(db)
	if err != nil {
		t.Fatalf("WriteDatabase failed: %v", err)
	}

	output := buf.String()

	// Check for expected elements
	if !strings.Contains(output, "PRAGMA foreign_keys = ON") {
		t.Error("Expected PRAGMA foreign_keys statement")
	}

	if !strings.Contains(output, "CREATE TABLE") {
		t.Error("Expected CREATE TABLE statement")
	}

	if !strings.Contains(output, "\"public_users\"") {
		t.Error("Expected flattened table name public_users")
	}

	if !strings.Contains(output, "INTEGER PRIMARY KEY AUTOINCREMENT") {
		t.Error("Expected autoincrement for serial primary key")
	}

	if !strings.Contains(output, "TEXT") {
		t.Error("Expected TEXT type for varchar")
	}

	// Boolean should be mapped to INTEGER with default 1
	if !strings.Contains(output, "active") {
		t.Error("Expected active column")
	}
}

func TestDataTypeMapping(t *testing.T) {
	tests := []struct {
		pgType   string
		expected string
	}{
		{"varchar(255)", "TEXT"},
		{"text", "TEXT"},
		{"integer", "INTEGER"},
		{"bigint", "INTEGER"},
		{"serial", "INTEGER"},
		{"boolean", "INTEGER"},
		{"real", "REAL"},
		{"double precision", "REAL"},
		{"numeric(10,2)", "NUMERIC"},
		{"decimal", "NUMERIC"},
		{"bytea", "BLOB"},
		{"timestamp", "TEXT"},
		{"uuid", "TEXT"},
		{"json", "TEXT"},
		{"jsonb", "TEXT"},
	}

	for _, tt := range tests {
		result := MapPostgreSQLType(tt.pgType)
		if result != tt.expected {
			t.Errorf("MapPostgreSQLType(%q) = %q, want %q", tt.pgType, result, tt.expected)
		}
	}
}

func TestIsAutoIncrementCandidate(t *testing.T) {
	tests := []struct {
		name     string
		col      *models.Column
		expected bool
	}{
		{
			name: "serial primary key",
			col: &models.Column{
				Name:         "id",
				Type:         "serial",
				IsPrimaryKey: true,
				Default:      "nextval('seq')",
			},
			expected: true,
		},
		{
			name: "integer primary key with nextval",
			col: &models.Column{
				Name:         "id",
				Type:         "integer",
				IsPrimaryKey: true,
				Default:      "nextval('users_id_seq'::regclass)",
			},
			expected: true,
		},
		{
			name: "integer not primary key",
			col: &models.Column{
				Name:         "count",
				Type:         "integer",
				IsPrimaryKey: false,
				Default:      "0",
			},
			expected: false,
		},
		{
			name: "varchar primary key",
			col: &models.Column{
				Name:         "code",
				Type:         "varchar",
				IsPrimaryKey: true,
			},
			expected: false,
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			result := IsAutoIncrementCandidate(tt.col)
			if result != tt.expected {
				t.Errorf("IsAutoIncrementCandidate() = %v, want %v", result, tt.expected)
			}
		})
	}
}

func TestFormatDefault(t *testing.T) {
	tests := []struct {
		name     string
		col      *models.Column
		expected string
	}{
		{
			name: "current_timestamp",
			col: &models.Column{
				Type:    "timestamp",
				Default: "CURRENT_TIMESTAMP",
			},
			expected: "CURRENT_TIMESTAMP",
		},
		{
			name: "now()",
			col: &models.Column{
				Type:    "timestamp",
				Default: "now()",
			},
			expected: "CURRENT_TIMESTAMP",
		},
		{
			name: "boolean true",
			col: &models.Column{
				Type:    "boolean",
				Default: "true",
			},
			expected: "1",
		},
		{
			name: "boolean false",
			col: &models.Column{
				Type:    "boolean",
				Default: "false",
			},
			expected: "0",
		},
		{
			name: "serial autoincrement",
			col: &models.Column{
				Type:         "serial",
				IsPrimaryKey: true,
				Default:      "nextval('seq')",
			},
			expected: "",
		},
		{
			name: "uuid default removed",
			col: &models.Column{
				Type:    "uuid",
				Default: "gen_random_uuid()",
			},
			expected: "",
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			result := FormatDefault(tt.col)
			if result != tt.expected {
				t.Errorf("FormatDefault() = %q, want %q", result, tt.expected)
			}
		})
	}
}

func TestWriteSchema_MultiSchema(t *testing.T) {
	db := &models.Database{
		Name: "testdb",
		Schemas: []*models.Schema{
			{
				Name: "auth",
				Tables: []*models.Table{
					{
						Name: "sessions",
						Columns: map[string]*models.Column{
							"id": {
								Name:         "id",
								Type:         "uuid",
								NotNull:      true,
								IsPrimaryKey: true,
							},
						},
						Constraints: map[string]*models.Constraint{
							"pk_sessions": {
								Name:    "pk_sessions",
								Type:    models.PrimaryKeyConstraint,
								Columns: []string{"id"},
							},
						},
					},
				},
			},
			{
				Name: "public",
				Tables: []*models.Table{
					{
						Name: "posts",
						Columns: map[string]*models.Column{
							"id": {
								Name:         "id",
								Type:         "integer",
								NotNull:      true,
								IsPrimaryKey: true,
							},
						},
						Constraints: map[string]*models.Constraint{
							"pk_posts": {
								Name:    "pk_posts",
								Type:    models.PrimaryKeyConstraint,
								Columns: []string{"id"},
							},
						},
					},
				},
			},
		},
	}

	var buf bytes.Buffer
	opts := &writers.WriterOptions{}
	writer := NewWriter(opts)
	writer.writer = &buf

	err := writer.WriteDatabase(db)
	if err != nil {
		t.Fatalf("WriteDatabase failed: %v", err)
	}

	output := buf.String()

	// Check for flattened table names from both schemas
	if !strings.Contains(output, "\"auth_sessions\"") {
		t.Error("Expected flattened table name auth_sessions")
	}

	if !strings.Contains(output, "\"public_posts\"") {
		t.Error("Expected flattened table name public_posts")
	}
}

func TestWriteIndexes(t *testing.T) {
	table := &models.Table{
		Name: "users",
		Columns: map[string]*models.Column{
			"email": {
				Name: "email",
				Type: "varchar(255)",
			},
		},
		Indexes: map[string]*models.Index{
			"idx_users_email": {
				Name:    "idx_users_email",
				Columns: []string{"email"},
			},
		},
	}

	var buf bytes.Buffer
	opts := &writers.WriterOptions{}
	writer := NewWriter(opts)
	writer.writer = &buf

	err := writer.writeIndexes("public", table)
	if err != nil {
		t.Fatalf("writeIndexes failed: %v", err)
	}

	output := buf.String()

	if !strings.Contains(output, "CREATE INDEX") {
		t.Error("Expected CREATE INDEX statement")
	}

	if !strings.Contains(output, "public_users_idx_users_email") {
		t.Errorf("Expected flattened index name public_users_idx_users_email, got output:\n%s", output)
	}
}

func TestWriteUniqueConstraints(t *testing.T) {
	table := &models.Table{
		Name: "users",
		Constraints: map[string]*models.Constraint{
			"uk_users_email": {
				Name:    "uk_users_email",
				Type:    models.UniqueConstraint,
				Columns: []string{"email"},
			},
		},
	}

	var buf bytes.Buffer
	opts := &writers.WriterOptions{}
	writer := NewWriter(opts)
	writer.writer = &buf

	err := writer.writeUniqueConstraints("public", table)
	if err != nil {
		t.Fatalf("writeUniqueConstraints failed: %v", err)
	}

	output := buf.String()

	if !strings.Contains(output, "CREATE UNIQUE INDEX") {
		t.Error("Expected CREATE UNIQUE INDEX statement")
	}
}

func TestQuoteIdentifier(t *testing.T) {
	tests := []struct {
		input    string
		expected string
	}{
		{"users", `"users"`},
		{"public_users", `"public_users"`},
		{`user"name`, `"user""name"`}, // Double quotes should be escaped
	}

	for _, tt := range tests {
		result := QuoteIdentifier(tt.input)
		if result != tt.expected {
			t.Errorf("QuoteIdentifier(%q) = %q, want %q", tt.input, result, tt.expected)
		}
	}
}
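TestDataTypeMapping pins down how PostgreSQL types collapse onto SQLite's small set of storage classes (TEXT, INTEGER, REAL, NUMERIC, BLOB). A self-contained sketch consistent with those test cases; this is an illustrative reimplementation, not the project's `MapPostgreSQLType`:

```go
package main

import (
	"fmt"
	"strings"
)

// mapPostgreSQLType maps a PostgreSQL type name onto a SQLite
// storage class, matching the expectations in TestDataTypeMapping.
func mapPostgreSQLType(pgType string) string {
	t := strings.ToLower(pgType)
	// Strip length/precision, e.g. varchar(255) -> varchar
	if i := strings.Index(t, "("); i >= 0 {
		t = t[:i]
	}
	switch t {
	case "smallint", "integer", "int", "bigint",
		"serial", "bigserial", "boolean":
		return "INTEGER"
	case "real", "double precision", "float":
		return "REAL"
	case "numeric", "decimal":
		return "NUMERIC"
	case "bytea":
		return "BLOB"
	default:
		// varchar, text, timestamp, uuid, json, jsonb, ...
		return "TEXT"
	}
}

func main() {
	fmt.Println(mapPostgreSQLType("varchar(255)"))     // TEXT
	fmt.Println(mapPostgreSQLType("numeric(10,2)"))    // NUMERIC
	fmt.Println(mapPostgreSQLType("double precision")) // REAL
	fmt.Println(mapPostgreSQLType("bytea"))            // BLOB
}
```

Note that "mapping to TEXT" is a storage-affinity choice: SQLite will happily store timestamps and UUIDs as text, which is why FormatDefault rewrites `now()` to `CURRENT_TIMESTAMP` and drops `gen_random_uuid()` entirely.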