6 Commits

Author SHA1 Message Date
b57e1ba304 feat(cmd): 🎉 Add split command for schema extraction
Some checks failed
CI / Test (1.24) (push) Successful
CI / Test (1.25) (push) Successful
CI / Build (push) Successful
CI / Lint (push) Successful
Integration Tests / Integration Tests (push) Failing
Release / Build and Release (push) Successful
- Introduce 'split' command to extract selected tables and schemas.
- Supports various input and output formats.
- Allows filtering of schemas and tables during extraction.
2026-01-04 22:01:29 +02:00
19fba62f1b feat(ui): 🎉 Add GUID field to column, database, schema, and table editors
Some checks failed
CI / Test (1.24) (push) Successful
CI / Lint (push) Successful
CI / Test (1.25) (push) Successful
CI / Build (push) Successful
Integration Tests / Integration Tests (push) Failing
2026-01-04 20:00:18 +02:00
b4ff4334cc feat(models): 🎉 Add GUID field to various models
Some checks failed
CI / Lint (push) Successful
CI / Test (1.24) (push) Successful
CI / Build (push) Successful
CI / Test (1.25) (push) Failing after 1m11s
Integration Tests / Integration Tests (push) Failing
* Introduced GUID field to Database, Domain, DomainTable, Schema, Table, View, Sequence, Column, Index, Relationship, Constraint, Enum, and Script models.
* Updated initialization functions to assign new GUIDs using uuid package.
* Enhanced DCTX reader and writer to utilize GUIDs from models where available.
2026-01-04 19:53:17 +02:00
5d9b00c8f2 feat(ui): 🎉 Add import and merge database feature
Some checks failed
CI / Lint (push) Successful
CI / Test (1.24) (push) Successful
CI / Test (1.25) (push) Failing after 1m5s
Integration Tests / Integration Tests (push) Failing
CI / Build (push) Successful
- Introduce a new screen for importing and merging database schemas.
- Implement merge logic to combine schemas, tables, columns, and other objects.
- Add options to skip specific object types during the merge process.
- Update main menu to include the new import and merge option.
2026-01-04 19:31:28 +02:00
debf351c48 fix(ui): 🐛 Simplify keyboard shortcut handling in load/save screens
Some checks failed
CI / Test (1.24) (push) Successful
CI / Test (1.25) (push) Failing after 1m3s
CI / Lint (push) Successful
CI / Build (push) Successful
Integration Tests / Integration Tests (push) Failing after 1m1s
2026-01-04 18:41:59 +02:00
d87d657275 feat(ui): 🎨 Add user interface documentation and screenshots
Some checks failed
CI / Test (1.25) (push) Failing after 57s
CI / Build (push) Successful in 23s
CI / Lint (push) Failing
CI / Test (1.24) (push) Successful
Integration Tests / Integration Tests (push) Failing after 1m0s
- Document interactive terminal-based UI features
- Include screenshots for main screen, table view, and column editing
2026-01-04 18:39:13 +02:00
23 changed files with 1814 additions and 40 deletions


@@ -85,6 +85,29 @@ RelSpec includes a powerful schema validation and linting tool:
## Use of AI
[Rules and use of AI](./AI_USE.md)
## User Interface
RelSpec provides an interactive terminal-based user interface for managing and editing database schemas. The UI allows you to:
- **Browse Databases** - Navigate through your database structure with an intuitive menu system
- **Edit Schemas** - Create, modify, and organize database schemas
- **Manage Tables** - Add, update, or delete tables with full control over structure
- **Configure Columns** - Define column properties, data types, constraints, and relationships
- **Interactive Editing** - Real-time validation and feedback as you make changes
The interface supports multiple input and output formats, making it easy to load, edit, and save your database definitions in whichever format you need.
<p align="center" width="100%">
<img src="./assets/image/screenshots/main_screen.jpg" alt="Main screen">
</p>
<p align="center" width="100%">
<img src="./assets/image/screenshots/table_view.jpg" alt="Table view">
</p>
<p align="center" width="100%">
<img src="./assets/image/screenshots/edit_column.jpg" alt="Column editor">
</p>
## Installation
```bash
@@ -95,6 +118,55 @@ go install -v git.warky.dev/wdevs/relspecgo/cmd/relspec@latest
## Usage
### Interactive Schema Editor
```bash
# Launch interactive editor with a DBML schema
relspec edit --from dbml --from-path schema.dbml --to dbml --to-path schema.dbml
# Edit PostgreSQL database in place
relspec edit --from pgsql --from-conn "postgres://user:pass@localhost/mydb" \
--to pgsql --to-conn "postgres://user:pass@localhost/mydb"
# Edit JSON schema and save as GORM models
relspec edit --from json --from-path db.json --to gorm --to-path models/
```
The `edit` command launches an interactive terminal user interface where you can:
- Browse and navigate your database structure
- Create, modify, and delete schemas, tables, and columns
- Configure column properties, constraints, and relationships
- Save changes to various formats
- Import and merge schemas from other databases
### Schema Merging
```bash
# Merge two JSON schemas (additive merge - adds missing items only)
relspec merge --target json --target-path base.json \
--source json --source-path additions.json \
--output json --output-path merged.json
# Merge PostgreSQL database into JSON, skipping specific tables
relspec merge --target json --target-path current.json \
--source pgsql --source-conn "postgres://user:pass@localhost/source_db" \
--output json --output-path updated.json \
--skip-tables "audit_log,temp_tables"
# Cross-format merge (DBML + YAML → JSON)
relspec merge --target dbml --target-path base.dbml \
--source yaml --source-path additions.yaml \
--output json --output-path result.json \
--skip-relations --skip-views
```
The `merge` command combines two database schemas additively:
- Adds missing schemas, tables, columns, and other objects
- Never modifies or deletes existing items (safe operation)
- Supports selective merging with skip options (domains, relations, enums, views, sequences, specific tables)
- Works across any combination of supported formats
- Perfect for integrating multiple schema definitions or applying patches (see the programmatic sketch below)
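
The same additive merge is also available programmatically through the `pkg/merge` package. Here is a minimal sketch of the library flow, assuming the reader and writer APIs introduced in this changeset; the file names are placeholders:

```go
package main

import (
	"fmt"
	"log"

	"git.warky.dev/wdevs/relspecgo/pkg/merge"
	"git.warky.dev/wdevs/relspecgo/pkg/readers"
	"git.warky.dev/wdevs/relspecgo/pkg/readers/json"
	"git.warky.dev/wdevs/relspecgo/pkg/writers"
	wjson "git.warky.dev/wdevs/relspecgo/pkg/writers/json"
)

func main() {
	// Read the target schema (the base that will be extended).
	target, err := json.NewReader(&readers.ReaderOptions{FilePath: "base.json"}).ReadDatabase()
	if err != nil {
		log.Fatal(err)
	}
	// Read the source schema (the additions to merge in).
	source, err := json.NewReader(&readers.ReaderOptions{FilePath: "additions.json"}).ReadDatabase()
	if err != nil {
		log.Fatal(err)
	}

	// Additive merge: missing objects are added, existing ones are untouched.
	result := merge.MergeDatabases(target, source, &merge.MergeOptions{SkipViews: true})
	fmt.Println(merge.GetMergeSummary(result))

	// Persist the merged schema.
	if err := wjson.NewWriter(&writers.WriterOptions{OutputPath: "merged.json"}).WriteDatabase(target); err != nil {
		log.Fatal(err)
	}
}
```

Passing `nil` options performs a full additive merge; the `MergeOptions` fields mirror the CLI skip flags.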
### Schema Conversion
```bash

Three screenshot images added as new binary files (42 KiB, 50 KiB, and 67 KiB).

cmd/relspec/merge.go (new file, 433 lines)

@@ -0,0 +1,433 @@
package main
import (
"fmt"
"os"
"path/filepath"
"strings"
"github.com/spf13/cobra"
"git.warky.dev/wdevs/relspecgo/pkg/merge"
"git.warky.dev/wdevs/relspecgo/pkg/models"
"git.warky.dev/wdevs/relspecgo/pkg/readers"
"git.warky.dev/wdevs/relspecgo/pkg/readers/bun"
"git.warky.dev/wdevs/relspecgo/pkg/readers/dbml"
"git.warky.dev/wdevs/relspecgo/pkg/readers/dctx"
"git.warky.dev/wdevs/relspecgo/pkg/readers/drawdb"
"git.warky.dev/wdevs/relspecgo/pkg/readers/drizzle"
"git.warky.dev/wdevs/relspecgo/pkg/readers/gorm"
"git.warky.dev/wdevs/relspecgo/pkg/readers/graphql"
"git.warky.dev/wdevs/relspecgo/pkg/readers/json"
"git.warky.dev/wdevs/relspecgo/pkg/readers/pgsql"
"git.warky.dev/wdevs/relspecgo/pkg/readers/prisma"
"git.warky.dev/wdevs/relspecgo/pkg/readers/typeorm"
"git.warky.dev/wdevs/relspecgo/pkg/readers/yaml"
"git.warky.dev/wdevs/relspecgo/pkg/writers"
wbun "git.warky.dev/wdevs/relspecgo/pkg/writers/bun"
wdbml "git.warky.dev/wdevs/relspecgo/pkg/writers/dbml"
wdctx "git.warky.dev/wdevs/relspecgo/pkg/writers/dctx"
wdrawdb "git.warky.dev/wdevs/relspecgo/pkg/writers/drawdb"
wdrizzle "git.warky.dev/wdevs/relspecgo/pkg/writers/drizzle"
wgorm "git.warky.dev/wdevs/relspecgo/pkg/writers/gorm"
wgraphql "git.warky.dev/wdevs/relspecgo/pkg/writers/graphql"
wjson "git.warky.dev/wdevs/relspecgo/pkg/writers/json"
wpgsql "git.warky.dev/wdevs/relspecgo/pkg/writers/pgsql"
wprisma "git.warky.dev/wdevs/relspecgo/pkg/writers/prisma"
wtypeorm "git.warky.dev/wdevs/relspecgo/pkg/writers/typeorm"
wyaml "git.warky.dev/wdevs/relspecgo/pkg/writers/yaml"
)
var (
mergeTargetType string
mergeTargetPath string
mergeTargetConn string
mergeSourceType string
mergeSourcePath string
mergeSourceConn string
mergeOutputType string
mergeOutputPath string
mergeOutputConn string
mergeSkipDomains bool
mergeSkipRelations bool
mergeSkipEnums bool
mergeSkipViews bool
mergeSkipSequences bool
mergeSkipTables string // Comma-separated table names to skip
mergeVerbose bool
)
var mergeCmd = &cobra.Command{
Use: "merge",
Short: "Merge database schemas (additive only - adds missing items)",
Long: `Merge one database schema into another. Performs additive merging only:
adds missing schemas, tables, columns, and other objects without modifying
or deleting existing items.
The target database is loaded first, then the source database is merged into it.
The result can be saved to a new format or updated in place.
Examples:
# Merge two JSON schemas
relspec merge --target json --target-path base.json \
--source json --source-path additional.json \
--output json --output-path merged.json
# Merge from PostgreSQL into JSON
relspec merge --target json --target-path mydb.json \
--source pgsql --source-conn "postgres://user:pass@localhost/source_db" \
--output json --output-path combined.json
# Merge DBML and YAML, skip relations
relspec merge --target dbml --target-path schema.dbml \
--source yaml --source-path tables.yaml \
--output dbml --output-path merged.dbml \
--skip-relations
# Merge and save back to target format
relspec merge --target json --target-path base.json \
--source json --source-path patch.json \
--output json --output-path base.json`,
RunE: runMerge,
}
func init() {
// Target database flags
mergeCmd.Flags().StringVar(&mergeTargetType, "target", "", "Target format (required): dbml, dctx, drawdb, graphql, json, yaml, gorm, bun, drizzle, prisma, typeorm, pgsql")
mergeCmd.Flags().StringVar(&mergeTargetPath, "target-path", "", "Target file path (required for file-based formats)")
mergeCmd.Flags().StringVar(&mergeTargetConn, "target-conn", "", "Target connection string (required for pgsql)")
// Source database flags
mergeCmd.Flags().StringVar(&mergeSourceType, "source", "", "Source format (required): dbml, dctx, drawdb, graphql, json, yaml, gorm, bun, drizzle, prisma, typeorm, pgsql")
mergeCmd.Flags().StringVar(&mergeSourcePath, "source-path", "", "Source file path (required for file-based formats)")
mergeCmd.Flags().StringVar(&mergeSourceConn, "source-conn", "", "Source connection string (required for pgsql)")
// Output flags
mergeCmd.Flags().StringVar(&mergeOutputType, "output", "", "Output format (required): dbml, dctx, drawdb, graphql, json, yaml, gorm, bun, drizzle, prisma, typeorm, pgsql")
mergeCmd.Flags().StringVar(&mergeOutputPath, "output-path", "", "Output file path (required for file-based formats)")
mergeCmd.Flags().StringVar(&mergeOutputConn, "output-conn", "", "Output connection string (for pgsql)")
// Merge options
mergeCmd.Flags().BoolVar(&mergeSkipDomains, "skip-domains", false, "Skip domains during merge")
mergeCmd.Flags().BoolVar(&mergeSkipRelations, "skip-relations", false, "Skip relations during merge")
mergeCmd.Flags().BoolVar(&mergeSkipEnums, "skip-enums", false, "Skip enums during merge")
mergeCmd.Flags().BoolVar(&mergeSkipViews, "skip-views", false, "Skip views during merge")
mergeCmd.Flags().BoolVar(&mergeSkipSequences, "skip-sequences", false, "Skip sequences during merge")
mergeCmd.Flags().StringVar(&mergeSkipTables, "skip-tables", "", "Comma-separated list of table names to skip during merge")
mergeCmd.Flags().BoolVar(&mergeVerbose, "verbose", false, "Show verbose output")
}
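// runMerge validates the flags, reads the target and source databases,
// merges the source into the target additively, prints a summary, and writes the result.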
func runMerge(cmd *cobra.Command, args []string) error {
fmt.Fprintf(os.Stderr, "\n=== RelSpec Merge ===\n")
fmt.Fprintf(os.Stderr, "Started at: %s\n\n", getCurrentTimestamp())
// Validate required flags
if mergeTargetType == "" {
return fmt.Errorf("--target format is required")
}
if mergeSourceType == "" {
return fmt.Errorf("--source format is required")
}
if mergeOutputType == "" {
return fmt.Errorf("--output format is required")
}
// Validate and expand file paths
if mergeTargetType != "pgsql" {
if mergeTargetPath == "" {
return fmt.Errorf("--target-path is required for %s format", mergeTargetType)
}
mergeTargetPath = expandPath(mergeTargetPath)
} else if mergeTargetConn == "" {
return fmt.Errorf("--target-conn is required for pgsql format")
}
if mergeSourceType != "pgsql" {
if mergeSourcePath == "" {
return fmt.Errorf("--source-path is required for %s format", mergeSourceType)
}
mergeSourcePath = expandPath(mergeSourcePath)
} else if mergeSourceConn == "" {
return fmt.Errorf("--source-conn is required for pgsql format")
}
if mergeOutputType != "pgsql" {
if mergeOutputPath == "" {
return fmt.Errorf("--output-path is required for %s format", mergeOutputType)
}
mergeOutputPath = expandPath(mergeOutputPath)
}
// Step 1: Read target database
fmt.Fprintf(os.Stderr, "[1/3] Reading target database...\n")
fmt.Fprintf(os.Stderr, " Format: %s\n", mergeTargetType)
if mergeTargetPath != "" {
fmt.Fprintf(os.Stderr, " Path: %s\n", mergeTargetPath)
}
if mergeTargetConn != "" {
fmt.Fprintf(os.Stderr, " Conn: %s\n", maskPassword(mergeTargetConn))
}
targetDB, err := readDatabaseForMerge(mergeTargetType, mergeTargetPath, mergeTargetConn, "Target")
if err != nil {
return fmt.Errorf("failed to read target database: %w", err)
}
fmt.Fprintf(os.Stderr, " ✓ Successfully read target database '%s'\n", targetDB.Name)
printDatabaseStats(targetDB)
// Step 2: Read source database
fmt.Fprintf(os.Stderr, "\n[2/3] Reading source database...\n")
fmt.Fprintf(os.Stderr, " Format: %s\n", mergeSourceType)
if mergeSourcePath != "" {
fmt.Fprintf(os.Stderr, " Path: %s\n", mergeSourcePath)
}
if mergeSourceConn != "" {
fmt.Fprintf(os.Stderr, " Conn: %s\n", maskPassword(mergeSourceConn))
}
sourceDB, err := readDatabaseForMerge(mergeSourceType, mergeSourcePath, mergeSourceConn, "Source")
if err != nil {
return fmt.Errorf("failed to read source database: %w", err)
}
fmt.Fprintf(os.Stderr, " ✓ Successfully read source database '%s'\n", sourceDB.Name)
printDatabaseStats(sourceDB)
// Step 3: Merge databases
fmt.Fprintf(os.Stderr, "\n[3/3] Merging databases...\n")
opts := &merge.MergeOptions{
SkipDomains: mergeSkipDomains,
SkipRelations: mergeSkipRelations,
SkipEnums: mergeSkipEnums,
SkipViews: mergeSkipViews,
SkipSequences: mergeSkipSequences,
}
// Parse skip-tables flag
if mergeSkipTables != "" {
opts.SkipTableNames = parseSkipTables(mergeSkipTables)
if len(opts.SkipTableNames) > 0 {
fmt.Fprintf(os.Stderr, " Skipping tables: %s\n", mergeSkipTables)
}
}
result := merge.MergeDatabases(targetDB, sourceDB, opts)
// Update timestamp
targetDB.UpdateDate()
// Print merge summary
fmt.Fprintf(os.Stderr, " ✓ Merge complete\n\n")
fmt.Fprintf(os.Stderr, "%s\n", merge.GetMergeSummary(result))
// Step 4: Write output
fmt.Fprintf(os.Stderr, "\n[4/4] Writing output...\n")
fmt.Fprintf(os.Stderr, " Format: %s\n", mergeOutputType)
if mergeOutputPath != "" {
fmt.Fprintf(os.Stderr, " Path: %s\n", mergeOutputPath)
}
err = writeDatabaseForMerge(mergeOutputType, mergeOutputPath, "", targetDB, "Output")
if err != nil {
return fmt.Errorf("failed to write output: %w", err)
}
fmt.Fprintf(os.Stderr, " ✓ Successfully written merged database\n")
fmt.Fprintf(os.Stderr, "\n=== Merge complete ===\n")
return nil
}
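// readDatabaseForMerge builds a reader for dbType and loads the database from
// filePath or connString; label identifies the database in error messages.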
func readDatabaseForMerge(dbType, filePath, connString, label string) (*models.Database, error) {
var reader readers.Reader
switch strings.ToLower(dbType) {
case "dbml":
if filePath == "" {
return nil, fmt.Errorf("%s: file path is required for DBML format", label)
}
reader = dbml.NewReader(&readers.ReaderOptions{FilePath: filePath})
case "dctx":
if filePath == "" {
return nil, fmt.Errorf("%s: file path is required for DCTX format", label)
}
reader = dctx.NewReader(&readers.ReaderOptions{FilePath: filePath})
case "drawdb":
if filePath == "" {
return nil, fmt.Errorf("%s: file path is required for DrawDB format", label)
}
reader = drawdb.NewReader(&readers.ReaderOptions{FilePath: filePath})
case "graphql":
if filePath == "" {
return nil, fmt.Errorf("%s: file path is required for GraphQL format", label)
}
reader = graphql.NewReader(&readers.ReaderOptions{FilePath: filePath})
case "json":
if filePath == "" {
return nil, fmt.Errorf("%s: file path is required for JSON format", label)
}
reader = json.NewReader(&readers.ReaderOptions{FilePath: filePath})
case "yaml":
if filePath == "" {
return nil, fmt.Errorf("%s: file path is required for YAML format", label)
}
reader = yaml.NewReader(&readers.ReaderOptions{FilePath: filePath})
case "gorm":
if filePath == "" {
return nil, fmt.Errorf("%s: file path is required for GORM format", label)
}
reader = gorm.NewReader(&readers.ReaderOptions{FilePath: filePath})
case "bun":
if filePath == "" {
return nil, fmt.Errorf("%s: file path is required for Bun format", label)
}
reader = bun.NewReader(&readers.ReaderOptions{FilePath: filePath})
case "drizzle":
if filePath == "" {
return nil, fmt.Errorf("%s: file path is required for Drizzle format", label)
}
reader = drizzle.NewReader(&readers.ReaderOptions{FilePath: filePath})
case "prisma":
if filePath == "" {
return nil, fmt.Errorf("%s: file path is required for Prisma format", label)
}
reader = prisma.NewReader(&readers.ReaderOptions{FilePath: filePath})
case "typeorm":
if filePath == "" {
return nil, fmt.Errorf("%s: file path is required for TypeORM format", label)
}
reader = typeorm.NewReader(&readers.ReaderOptions{FilePath: filePath})
case "pgsql":
if connString == "" {
return nil, fmt.Errorf("%s: connection string is required for PostgreSQL format", label)
}
reader = pgsql.NewReader(&readers.ReaderOptions{ConnectionString: connString})
default:
return nil, fmt.Errorf("%s: unsupported format '%s'", label, dbType)
}
db, err := reader.ReadDatabase()
if err != nil {
return nil, err
}
return db, nil
}
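// writeDatabaseForMerge builds a writer for dbType and writes db to filePath;
// label identifies the output in error messages.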
func writeDatabaseForMerge(dbType, filePath, connString string, db *models.Database, label string) error {
var writer writers.Writer
switch strings.ToLower(dbType) {
case "dbml":
if filePath == "" {
return fmt.Errorf("%s: file path is required for DBML format", label)
}
writer = wdbml.NewWriter(&writers.WriterOptions{OutputPath: filePath})
case "dctx":
if filePath == "" {
return fmt.Errorf("%s: file path is required for DCTX format", label)
}
writer = wdctx.NewWriter(&writers.WriterOptions{OutputPath: filePath})
case "drawdb":
if filePath == "" {
return fmt.Errorf("%s: file path is required for DrawDB format", label)
}
writer = wdrawdb.NewWriter(&writers.WriterOptions{OutputPath: filePath})
case "graphql":
if filePath == "" {
return fmt.Errorf("%s: file path is required for GraphQL format", label)
}
writer = wgraphql.NewWriter(&writers.WriterOptions{OutputPath: filePath})
case "json":
if filePath == "" {
return fmt.Errorf("%s: file path is required for JSON format", label)
}
writer = wjson.NewWriter(&writers.WriterOptions{OutputPath: filePath})
case "yaml":
if filePath == "" {
return fmt.Errorf("%s: file path is required for YAML format", label)
}
writer = wyaml.NewWriter(&writers.WriterOptions{OutputPath: filePath})
case "gorm":
if filePath == "" {
return fmt.Errorf("%s: file path is required for GORM format", label)
}
writer = wgorm.NewWriter(&writers.WriterOptions{OutputPath: filePath})
case "bun":
if filePath == "" {
return fmt.Errorf("%s: file path is required for Bun format", label)
}
writer = wbun.NewWriter(&writers.WriterOptions{OutputPath: filePath})
case "drizzle":
if filePath == "" {
return fmt.Errorf("%s: file path is required for Drizzle format", label)
}
writer = wdrizzle.NewWriter(&writers.WriterOptions{OutputPath: filePath})
case "prisma":
if filePath == "" {
return fmt.Errorf("%s: file path is required for Prisma format", label)
}
writer = wprisma.NewWriter(&writers.WriterOptions{OutputPath: filePath})
case "typeorm":
if filePath == "" {
return fmt.Errorf("%s: file path is required for TypeORM format", label)
}
writer = wtypeorm.NewWriter(&writers.WriterOptions{OutputPath: filePath})
case "pgsql":
writer = wpgsql.NewWriter(&writers.WriterOptions{OutputPath: filePath})
default:
return fmt.Errorf("%s: unsupported format '%s'", label, dbType)
}
return writer.WriteDatabase(db)
}
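// expandPath expands a leading '~' in path to the user's home directory.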
func expandPath(path string) string {
if len(path) > 0 && path[0] == '~' {
home, err := os.UserHomeDir()
if err == nil {
return filepath.Join(home, path[1:])
}
}
return path
}
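// printDatabaseStats prints schema, table, column, constraint, and index counts for db to stderr.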
func printDatabaseStats(db *models.Database) {
totalTables := 0
totalColumns := 0
totalConstraints := 0
totalIndexes := 0
for _, schema := range db.Schemas {
totalTables += len(schema.Tables)
for _, table := range schema.Tables {
totalColumns += len(table.Columns)
totalConstraints += len(table.Constraints)
totalIndexes += len(table.Indexes)
}
}
fmt.Fprintf(os.Stderr, " Schemas: %d, Tables: %d, Columns: %d, Constraints: %d, Indexes: %d\n",
len(db.Schemas), totalTables, totalColumns, totalConstraints, totalIndexes)
}
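// parseSkipTables converts a comma-separated list of table names into a
// lowercase set for case-insensitive lookups.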
func parseSkipTables(skipTablesStr string) map[string]bool {
skipTables := make(map[string]bool)
if skipTablesStr == "" {
return skipTables
}
// Split by comma and trim whitespace
parts := strings.Split(skipTablesStr, ",")
for _, part := range parts {
trimmed := strings.TrimSpace(part)
if trimmed != "" {
// Store in lowercase for case-insensitive matching
skipTables[strings.ToLower(trimmed)] = true
}
}
return skipTables
}


@@ -22,4 +22,6 @@ func init() {
rootCmd.AddCommand(scriptsCmd)
rootCmd.AddCommand(templCmd)
rootCmd.AddCommand(editCmd)
rootCmd.AddCommand(mergeCmd)
rootCmd.AddCommand(splitCmd)
}

cmd/relspec/split.go (new file, 318 lines)

@@ -0,0 +1,318 @@
package main
import (
"fmt"
"os"
"strings"
"github.com/spf13/cobra"
"git.warky.dev/wdevs/relspecgo/pkg/models"
)
var (
splitSourceType string
splitSourcePath string
splitSourceConn string
splitTargetType string
splitTargetPath string
splitSchemas string
splitTables string
splitPackageName string
splitDatabaseName string
splitExcludeSchema string
splitExcludeTables string
)
var splitCmd = &cobra.Command{
Use: "split",
Short: "Split database schemas to extract selected tables into a separate database",
Long: `Extract selected schemas and tables from a database and write them to a separate output.
The split command allows you to:
- Select specific schemas to include in the output
- Select specific tables within schemas
- Exclude specific schemas or tables if preferred
- Export the selected subset to any supported format
Input formats:
- dbml: DBML schema files
- dctx: DCTX schema files
- drawdb: DrawDB JSON files
- graphql: GraphQL schema files (.graphql, SDL)
- json: JSON database schema
- yaml: YAML database schema
- gorm: GORM model files (Go, file or directory)
- bun: Bun model files (Go, file or directory)
- drizzle: Drizzle ORM schema files (TypeScript, file or directory)
- prisma: Prisma schema files (.prisma)
- typeorm: TypeORM entity files (TypeScript)
- pgsql: PostgreSQL database (live connection)
Output formats:
- dbml: DBML schema files
- dctx: DCTX schema files
- drawdb: DrawDB JSON files
- graphql: GraphQL schema files (.graphql, SDL)
- json: JSON database schema
- yaml: YAML database schema
- gorm: GORM model files (Go)
- bun: Bun model files (Go)
- drizzle: Drizzle ORM schema files (TypeScript)
- prisma: Prisma schema files (.prisma)
- typeorm: TypeORM entity files (TypeScript)
- pgsql: PostgreSQL SQL schema
Examples:
# Split specific schemas from DBML
relspec split --from dbml --from-path schema.dbml \
--schemas public,auth \
--to json --to-path subset.json
# Extract specific tables from PostgreSQL
relspec split --from pgsql \
--from-conn "postgres://user:pass@localhost:5432/mydb" \
--schemas public \
--tables users,orders,products \
--to dbml --to-path subset.dbml
# Exclude specific tables
relspec split --from json --from-path schema.json \
--exclude-tables "audit_log,system_config,temp_data" \
--to json --to-path public_schema.json
# Split and convert to GORM
relspec split --from json --from-path schema.json \
--tables "users,posts,comments" \
--to gorm --to-path models/ --package models \
--database-name MyAppDB
# Exclude specific schema and tables
relspec split --from pgsql \
--from-conn "postgres://user:pass@localhost/db" \
--exclude-schema pg_catalog,information_schema \
--exclude-tables "temp_users,debug_logs" \
--to json --to-path public_schema.json`,
RunE: runSplit,
}
func init() {
splitCmd.Flags().StringVar(&splitSourceType, "from", "", "Source format (dbml, dctx, drawdb, graphql, json, yaml, gorm, bun, drizzle, prisma, typeorm, pgsql)")
splitCmd.Flags().StringVar(&splitSourcePath, "from-path", "", "Source file path (for file-based formats)")
splitCmd.Flags().StringVar(&splitSourceConn, "from-conn", "", "Source connection string (for database formats)")
splitCmd.Flags().StringVar(&splitTargetType, "to", "", "Target format (dbml, dctx, drawdb, graphql, json, yaml, gorm, bun, drizzle, prisma, typeorm, pgsql)")
splitCmd.Flags().StringVar(&splitTargetPath, "to-path", "", "Target output path (file or directory)")
splitCmd.Flags().StringVar(&splitPackageName, "package", "", "Package name (for code generation formats like gorm/bun)")
splitCmd.Flags().StringVar(&splitDatabaseName, "database-name", "", "Override database name in output")
splitCmd.Flags().StringVar(&splitSchemas, "schemas", "", "Comma-separated list of schema names to include")
splitCmd.Flags().StringVar(&splitTables, "tables", "", "Comma-separated list of table names to include (case-insensitive)")
splitCmd.Flags().StringVar(&splitExcludeSchema, "exclude-schema", "", "Comma-separated list of schema names to exclude")
splitCmd.Flags().StringVar(&splitExcludeTables, "exclude-tables", "", "Comma-separated list of table names to exclude (case-insensitive)")
err := splitCmd.MarkFlagRequired("from")
if err != nil {
fmt.Fprintf(os.Stderr, "Error marking from flag as required: %v\n", err)
}
err = splitCmd.MarkFlagRequired("to")
if err != nil {
fmt.Fprintf(os.Stderr, "Error marking to flag as required: %v\n", err)
}
err = splitCmd.MarkFlagRequired("to-path")
if err != nil {
fmt.Fprintf(os.Stderr, "Error marking to-path flag as required: %v\n", err)
}
}
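// runSplit reads the source database, filters schemas and tables according to
// the flags, and writes the resulting subset to the target format.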
func runSplit(cmd *cobra.Command, args []string) error {
fmt.Fprintf(os.Stderr, "\n=== RelSpec Schema Split ===\n")
fmt.Fprintf(os.Stderr, "Started at: %s\n\n", getCurrentTimestamp())
// Read source database
fmt.Fprintf(os.Stderr, "[1/3] Reading source schema...\n")
fmt.Fprintf(os.Stderr, " Format: %s\n", splitSourceType)
if splitSourcePath != "" {
fmt.Fprintf(os.Stderr, " Path: %s\n", splitSourcePath)
}
if splitSourceConn != "" {
fmt.Fprintf(os.Stderr, " Conn: %s\n", maskPassword(splitSourceConn))
}
db, err := readDatabaseForConvert(splitSourceType, splitSourcePath, splitSourceConn)
if err != nil {
return fmt.Errorf("failed to read source: %w", err)
}
fmt.Fprintf(os.Stderr, " ✓ Successfully read database '%s'\n", db.Name)
fmt.Fprintf(os.Stderr, " Found: %d schema(s)\n", len(db.Schemas))
totalTables := 0
for _, schema := range db.Schemas {
totalTables += len(schema.Tables)
}
fmt.Fprintf(os.Stderr, " Found: %d table(s)\n\n", totalTables)
// Filter the database
fmt.Fprintf(os.Stderr, "[2/3] Filtering schemas and tables...\n")
filteredDB, err := filterDatabase(db)
if err != nil {
return fmt.Errorf("failed to filter database: %w", err)
}
if splitDatabaseName != "" {
filteredDB.Name = splitDatabaseName
}
filteredTables := 0
for _, schema := range filteredDB.Schemas {
filteredTables += len(schema.Tables)
}
fmt.Fprintf(os.Stderr, " ✓ Filtered to: %d schema(s), %d table(s)\n\n", len(filteredDB.Schemas), filteredTables)
// Write to target format
fmt.Fprintf(os.Stderr, "[3/3] Writing to target format...\n")
fmt.Fprintf(os.Stderr, " Format: %s\n", splitTargetType)
fmt.Fprintf(os.Stderr, " Output: %s\n", splitTargetPath)
if splitPackageName != "" {
fmt.Fprintf(os.Stderr, " Package: %s\n", splitPackageName)
}
err = writeDatabase(
filteredDB,
splitTargetType,
splitTargetPath,
splitPackageName,
"", // no schema filter for split
)
if err != nil {
return fmt.Errorf("failed to write output: %w", err)
}
fmt.Fprintf(os.Stderr, " ✓ Successfully written to '%s'\n\n", splitTargetPath)
fmt.Fprintf(os.Stderr, "=== Split Completed Successfully ===\n")
fmt.Fprintf(os.Stderr, "Completed at: %s\n\n", getCurrentTimestamp())
return nil
}
// filterDatabase filters the database based on provided criteria
func filterDatabase(db *models.Database) (*models.Database, error) {
filteredDB := &models.Database{
Name: db.Name,
Description: db.Description,
Comment: db.Comment,
DatabaseType: db.DatabaseType,
DatabaseVersion: db.DatabaseVersion,
SourceFormat: db.SourceFormat,
UpdatedAt: db.UpdatedAt,
GUID: db.GUID,
Schemas: []*models.Schema{},
Domains: db.Domains, // Keep domains for now
}
// Parse filter flags
includeSchemas := parseCommaSeparated(splitSchemas)
includeTables := parseCommaSeparated(splitTables)
excludeSchemas := parseCommaSeparated(splitExcludeSchema)
excludeTables := parseCommaSeparated(splitExcludeTables)
// Convert table names to lowercase for case-insensitive matching
includeTablesLower := make(map[string]bool)
for _, t := range includeTables {
includeTablesLower[strings.ToLower(t)] = true
}
excludeTablesLower := make(map[string]bool)
for _, t := range excludeTables {
excludeTablesLower[strings.ToLower(t)] = true
}
// Iterate through schemas
for _, schema := range db.Schemas {
// Check if schema should be excluded
if contains(excludeSchemas, schema.Name) {
continue
}
// Check if schema should be included
if len(includeSchemas) > 0 && !contains(includeSchemas, schema.Name) {
continue
}
// Create a copy of the schema with filtered tables
filteredSchema := &models.Schema{
Name: schema.Name,
Description: schema.Description,
Owner: schema.Owner,
Permissions: schema.Permissions,
Comment: schema.Comment,
Metadata: schema.Metadata,
Scripts: schema.Scripts,
Sequence: schema.Sequence,
Relations: schema.Relations,
Enums: schema.Enums,
UpdatedAt: schema.UpdatedAt,
GUID: schema.GUID,
Tables: []*models.Table{},
Views: schema.Views,
Sequences: schema.Sequences,
}
// Filter tables within the schema
for _, table := range schema.Tables {
tableLower := strings.ToLower(table.Name)
// Check if table should be excluded
if excludeTablesLower[tableLower] {
continue
}
// If specific tables are requested, only include those
if len(includeTablesLower) > 0 {
if !includeTablesLower[tableLower] {
continue
}
}
filteredSchema.Tables = append(filteredSchema.Tables, table)
}
// Only add schema if it has tables (unless no table filter was specified)
if len(filteredSchema.Tables) > 0 || (len(includeTablesLower) == 0 && len(excludeTablesLower) == 0) {
filteredDB.Schemas = append(filteredDB.Schemas, filteredSchema)
}
}
if len(filteredDB.Schemas) == 0 {
return nil, fmt.Errorf("no schemas matched the filter criteria")
}
return filteredDB, nil
}
// parseCommaSeparated parses a comma-separated string into a slice, trimming whitespace
func parseCommaSeparated(s string) []string {
if s == "" {
return []string{}
}
parts := strings.Split(s, ",")
result := make([]string, 0, len(parts))
for _, p := range parts {
trimmed := strings.TrimSpace(p)
if trimmed != "" {
result = append(result, trimmed)
}
}
return result
}
// contains checks if a string is in a slice
func contains(slice []string, item string) bool {
for _, s := range slice {
if s == item {
return true
}
}
return false
}

pkg/merge/merge.go (new file, 574 lines)

@@ -0,0 +1,574 @@
// Package merge provides utilities for merging database schemas.
// It allows combining schemas from multiple sources while avoiding duplicates,
// supporting only additive operations (no deletion or modification of existing items).
package merge
import (
"fmt"
"strings"
"git.warky.dev/wdevs/relspecgo/pkg/models"
)
// MergeResult represents the result of a merge operation
type MergeResult struct {
SchemasAdded int
TablesAdded int
ColumnsAdded int
RelationsAdded int
DomainsAdded int
EnumsAdded int
ViewsAdded int
SequencesAdded int
}
// MergeOptions contains options for merge operations
type MergeOptions struct {
SkipDomains bool
SkipRelations bool
SkipEnums bool
SkipViews bool
SkipSequences bool
SkipTableNames map[string]bool // Tables to skip during merge (keyed by table name)
}
// MergeDatabases merges the source database into the target database.
// Only adds missing items; existing items are not modified.
func MergeDatabases(target, source *models.Database, opts *MergeOptions) *MergeResult {
if opts == nil {
opts = &MergeOptions{}
}
result := &MergeResult{}
if target == nil || source == nil {
return result
}
// Merge schemas and their contents
result.merge(target, source, opts)
return result
}
func (r *MergeResult) merge(target, source *models.Database, opts *MergeOptions) {
// Create maps of existing schemas for quick lookup
existingSchemas := make(map[string]*models.Schema)
for _, schema := range target.Schemas {
existingSchemas[schema.SQLName()] = schema
}
// Merge schemas
for _, srcSchema := range source.Schemas {
schemaName := srcSchema.SQLName()
if tgtSchema, exists := existingSchemas[schemaName]; exists {
// Schema exists, merge its contents
r.mergeSchemaContents(tgtSchema, srcSchema, opts)
} else {
// Schema doesn't exist, add it
newSchema := cloneSchema(srcSchema)
target.Schemas = append(target.Schemas, newSchema)
r.SchemasAdded++
}
}
// Merge domains if not skipped
if !opts.SkipDomains {
r.mergeDomains(target, source)
}
}
func (r *MergeResult) mergeSchemaContents(target, source *models.Schema, opts *MergeOptions) {
// Merge tables
r.mergeTables(target, source, opts)
// Merge views if not skipped
if !opts.SkipViews {
r.mergeViews(target, source)
}
// Merge sequences if not skipped
if !opts.SkipSequences {
r.mergeSequences(target, source)
}
// Merge enums if not skipped
if !opts.SkipEnums {
r.mergeEnums(target, source)
}
// Merge relations if not skipped
if !opts.SkipRelations {
r.mergeRelations(target, source)
}
}
func (r *MergeResult) mergeTables(schema *models.Schema, source *models.Schema, opts *MergeOptions) {
// Create map of existing tables
existingTables := make(map[string]*models.Table)
for _, table := range schema.Tables {
existingTables[table.SQLName()] = table
}
// Merge tables
for _, srcTable := range source.Tables {
tableName := srcTable.SQLName()
// Skip if table is in the skip list (case-insensitive)
if opts != nil && opts.SkipTableNames != nil && opts.SkipTableNames[strings.ToLower(tableName)] {
continue
}
if tgtTable, exists := existingTables[tableName]; exists {
// Table exists, merge its columns
r.mergeColumns(tgtTable, srcTable)
} else {
// Table doesn't exist, add it
newTable := cloneTable(srcTable)
schema.Tables = append(schema.Tables, newTable)
r.TablesAdded++
// Count columns in the newly added table
r.ColumnsAdded += len(newTable.Columns)
}
}
}
func (r *MergeResult) mergeColumns(table *models.Table, srcTable *models.Table) {
// Create map of existing columns
existingColumns := make(map[string]*models.Column)
for colName := range table.Columns {
existingColumns[colName] = table.Columns[colName]
}
// Merge columns
for colName, srcCol := range srcTable.Columns {
if _, exists := existingColumns[colName]; !exists {
// Column doesn't exist, add it
newCol := cloneColumn(srcCol)
table.Columns[colName] = newCol
r.ColumnsAdded++
}
}
}
func (r *MergeResult) mergeViews(schema *models.Schema, source *models.Schema) {
// Create map of existing views
existingViews := make(map[string]*models.View)
for _, view := range schema.Views {
existingViews[view.SQLName()] = view
}
// Merge views
for _, srcView := range source.Views {
viewName := srcView.SQLName()
if _, exists := existingViews[viewName]; !exists {
// View doesn't exist, add it
newView := cloneView(srcView)
schema.Views = append(schema.Views, newView)
r.ViewsAdded++
}
}
}
func (r *MergeResult) mergeSequences(schema *models.Schema, source *models.Schema) {
// Create map of existing sequences
existingSequences := make(map[string]*models.Sequence)
for _, seq := range schema.Sequences {
existingSequences[seq.SQLName()] = seq
}
// Merge sequences
for _, srcSeq := range source.Sequences {
seqName := srcSeq.SQLName()
if _, exists := existingSequences[seqName]; !exists {
// Sequence doesn't exist, add it
newSeq := cloneSequence(srcSeq)
schema.Sequences = append(schema.Sequences, newSeq)
r.SequencesAdded++
}
}
}
func (r *MergeResult) mergeEnums(schema *models.Schema, source *models.Schema) {
// Create map of existing enums
existingEnums := make(map[string]*models.Enum)
for _, enum := range schema.Enums {
existingEnums[enum.SQLName()] = enum
}
// Merge enums
for _, srcEnum := range source.Enums {
enumName := srcEnum.SQLName()
if _, exists := existingEnums[enumName]; !exists {
// Enum doesn't exist, add it
newEnum := cloneEnum(srcEnum)
schema.Enums = append(schema.Enums, newEnum)
r.EnumsAdded++
}
}
}
func (r *MergeResult) mergeRelations(schema *models.Schema, source *models.Schema) {
// Create map of existing relations
existingRelations := make(map[string]*models.Relationship)
for _, rel := range schema.Relations {
existingRelations[rel.SQLName()] = rel
}
// Merge relations
for _, srcRel := range source.Relations {
if _, exists := existingRelations[srcRel.SQLName()]; !exists {
// Relation doesn't exist, add it
newRel := cloneRelation(srcRel)
schema.Relations = append(schema.Relations, newRel)
r.RelationsAdded++
}
}
}
func (r *MergeResult) mergeDomains(target *models.Database, source *models.Database) {
// Create map of existing domains
existingDomains := make(map[string]*models.Domain)
for _, domain := range target.Domains {
existingDomains[domain.SQLName()] = domain
}
// Merge domains
for _, srcDomain := range source.Domains {
domainName := srcDomain.SQLName()
if _, exists := existingDomains[domainName]; !exists {
// Domain doesn't exist, add it
newDomain := cloneDomain(srcDomain)
target.Domains = append(target.Domains, newDomain)
r.DomainsAdded++
}
}
}
// Clone functions to create deep copies of models
func cloneSchema(schema *models.Schema) *models.Schema {
if schema == nil {
return nil
}
newSchema := &models.Schema{
Name: schema.Name,
Description: schema.Description,
Owner: schema.Owner,
Comment: schema.Comment,
Sequence: schema.Sequence,
UpdatedAt: schema.UpdatedAt,
Tables: make([]*models.Table, 0),
Views: make([]*models.View, 0),
Sequences: make([]*models.Sequence, 0),
Enums: make([]*models.Enum, 0),
Relations: make([]*models.Relationship, 0),
}
if schema.Permissions != nil {
newSchema.Permissions = make(map[string]string)
for k, v := range schema.Permissions {
newSchema.Permissions[k] = v
}
}
if schema.Metadata != nil {
newSchema.Metadata = make(map[string]interface{})
for k, v := range schema.Metadata {
newSchema.Metadata[k] = v
}
}
if schema.Scripts != nil {
newSchema.Scripts = make([]*models.Script, len(schema.Scripts))
copy(newSchema.Scripts, schema.Scripts)
}
// Clone tables
for _, table := range schema.Tables {
newSchema.Tables = append(newSchema.Tables, cloneTable(table))
}
// Clone views
for _, view := range schema.Views {
newSchema.Views = append(newSchema.Views, cloneView(view))
}
// Clone sequences
for _, seq := range schema.Sequences {
newSchema.Sequences = append(newSchema.Sequences, cloneSequence(seq))
}
// Clone enums
for _, enum := range schema.Enums {
newSchema.Enums = append(newSchema.Enums, cloneEnum(enum))
}
// Clone relations
for _, rel := range schema.Relations {
newSchema.Relations = append(newSchema.Relations, cloneRelation(rel))
}
return newSchema
}
func cloneTable(table *models.Table) *models.Table {
if table == nil {
return nil
}
newTable := &models.Table{
Name: table.Name,
Description: table.Description,
Schema: table.Schema,
Comment: table.Comment,
Sequence: table.Sequence,
UpdatedAt: table.UpdatedAt,
Columns: make(map[string]*models.Column),
Constraints: make(map[string]*models.Constraint),
Indexes: make(map[string]*models.Index),
}
if table.Metadata != nil {
newTable.Metadata = make(map[string]interface{})
for k, v := range table.Metadata {
newTable.Metadata[k] = v
}
}
// Clone columns
for colName, col := range table.Columns {
newTable.Columns[colName] = cloneColumn(col)
}
// Clone constraints
for constName, constraint := range table.Constraints {
newTable.Constraints[constName] = cloneConstraint(constraint)
}
// Clone indexes
for idxName, index := range table.Indexes {
newTable.Indexes[idxName] = cloneIndex(index)
}
return newTable
}
func cloneColumn(col *models.Column) *models.Column {
if col == nil {
return nil
}
newCol := &models.Column{
Name: col.Name,
Type: col.Type,
Description: col.Description,
Comment: col.Comment,
IsPrimaryKey: col.IsPrimaryKey,
NotNull: col.NotNull,
Default: col.Default,
Precision: col.Precision,
Scale: col.Scale,
Length: col.Length,
Sequence: col.Sequence,
AutoIncrement: col.AutoIncrement,
Collation: col.Collation,
}
return newCol
}
func cloneConstraint(constraint *models.Constraint) *models.Constraint {
if constraint == nil {
return nil
}
newConstraint := &models.Constraint{
Type: constraint.Type,
Columns: make([]string, len(constraint.Columns)),
ReferencedTable: constraint.ReferencedTable,
ReferencedSchema: constraint.ReferencedSchema,
ReferencedColumns: make([]string, len(constraint.ReferencedColumns)),
OnUpdate: constraint.OnUpdate,
OnDelete: constraint.OnDelete,
Expression: constraint.Expression,
Name: constraint.Name,
Deferrable: constraint.Deferrable,
InitiallyDeferred: constraint.InitiallyDeferred,
Sequence: constraint.Sequence,
}
copy(newConstraint.Columns, constraint.Columns)
copy(newConstraint.ReferencedColumns, constraint.ReferencedColumns)
return newConstraint
}
func cloneIndex(index *models.Index) *models.Index {
if index == nil {
return nil
}
newIndex := &models.Index{
Name: index.Name,
Description: index.Description,
Table: index.Table,
Schema: index.Schema,
Columns: make([]string, len(index.Columns)),
Unique: index.Unique,
Type: index.Type,
Where: index.Where,
Concurrent: index.Concurrent,
Include: make([]string, len(index.Include)),
Comment: index.Comment,
Sequence: index.Sequence,
}
copy(newIndex.Columns, index.Columns)
copy(newIndex.Include, index.Include)
return newIndex
}
func cloneView(view *models.View) *models.View {
if view == nil {
return nil
}
newView := &models.View{
Name: view.Name,
Description: view.Description,
Schema: view.Schema,
Definition: view.Definition,
Comment: view.Comment,
Sequence: view.Sequence,
Columns: make(map[string]*models.Column),
}
if view.Metadata != nil {
newView.Metadata = make(map[string]interface{})
for k, v := range view.Metadata {
newView.Metadata[k] = v
}
}
// Clone columns
for colName, col := range view.Columns {
newView.Columns[colName] = cloneColumn(col)
}
return newView
}
func cloneSequence(seq *models.Sequence) *models.Sequence {
if seq == nil {
return nil
}
newSeq := &models.Sequence{
Name: seq.Name,
Description: seq.Description,
Schema: seq.Schema,
StartValue: seq.StartValue,
MinValue: seq.MinValue,
MaxValue: seq.MaxValue,
IncrementBy: seq.IncrementBy,
CacheSize: seq.CacheSize,
Cycle: seq.Cycle,
OwnedByTable: seq.OwnedByTable,
OwnedByColumn: seq.OwnedByColumn,
Comment: seq.Comment,
Sequence: seq.Sequence,
}
return newSeq
}
func cloneEnum(enum *models.Enum) *models.Enum {
if enum == nil {
return nil
}
newEnum := &models.Enum{
Name: enum.Name,
Values: make([]string, len(enum.Values)),
Schema: enum.Schema,
}
copy(newEnum.Values, enum.Values)
return newEnum
}
func cloneRelation(rel *models.Relationship) *models.Relationship {
if rel == nil {
return nil
}
newRel := &models.Relationship{
Name: rel.Name,
Type: rel.Type,
FromTable: rel.FromTable,
FromSchema: rel.FromSchema,
FromColumns: make([]string, len(rel.FromColumns)),
ToTable: rel.ToTable,
ToSchema: rel.ToSchema,
ToColumns: make([]string, len(rel.ToColumns)),
ForeignKey: rel.ForeignKey,
ThroughTable: rel.ThroughTable,
ThroughSchema: rel.ThroughSchema,
Description: rel.Description,
Sequence: rel.Sequence,
}
if rel.Properties != nil {
newRel.Properties = make(map[string]string)
for k, v := range rel.Properties {
newRel.Properties[k] = v
}
}
copy(newRel.FromColumns, rel.FromColumns)
copy(newRel.ToColumns, rel.ToColumns)
return newRel
}
func cloneDomain(domain *models.Domain) *models.Domain {
if domain == nil {
return nil
}
newDomain := &models.Domain{
Name: domain.Name,
Description: domain.Description,
Comment: domain.Comment,
Sequence: domain.Sequence,
Tables: make([]*models.DomainTable, len(domain.Tables)),
}
if domain.Metadata != nil {
newDomain.Metadata = make(map[string]interface{})
for k, v := range domain.Metadata {
newDomain.Metadata[k] = v
}
}
copy(newDomain.Tables, domain.Tables)
return newDomain
}
// GetMergeSummary returns a human-readable summary of the merge result
func GetMergeSummary(result *MergeResult) string {
if result == nil {
return "No merge result available"
}
lines := []string{
"=== Merge Summary ===",
fmt.Sprintf("Schemas added: %d", result.SchemasAdded),
fmt.Sprintf("Tables added: %d", result.TablesAdded),
fmt.Sprintf("Columns added: %d", result.ColumnsAdded),
fmt.Sprintf("Views added: %d", result.ViewsAdded),
fmt.Sprintf("Sequences added: %d", result.SequencesAdded),
fmt.Sprintf("Enums added: %d", result.EnumsAdded),
fmt.Sprintf("Relations added: %d", result.RelationsAdded),
fmt.Sprintf("Domains added: %d", result.DomainsAdded),
}
totalAdded := result.SchemasAdded + result.TablesAdded + result.ColumnsAdded +
result.ViewsAdded + result.SequencesAdded + result.EnumsAdded +
result.RelationsAdded + result.DomainsAdded
lines = append(lines, fmt.Sprintf("Total items added: %d", totalAdded))
summary := ""
for _, line := range lines {
summary += line + "\n"
}
return summary
}
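
To make the additive semantics concrete, here is a short, hypothetical sketch using the model constructors from this changeset (an illustration, not part of the package):

```go
package merge_test

import (
	"fmt"

	"git.warky.dev/wdevs/relspecgo/pkg/merge"
	"git.warky.dev/wdevs/relspecgo/pkg/models"
)

// Example_additiveMerge shows that merging only adds missing objects:
// the target already has public.users, so only public.orders is added.
func Example_additiveMerge() {
	target := models.InitDatabase("app")
	tgtSchema := models.InitSchema("public")
	tgtSchema.Tables = append(tgtSchema.Tables, models.InitTable("users", "public"))
	target.Schemas = append(target.Schemas, tgtSchema)

	source := models.InitDatabase("app")
	srcSchema := models.InitSchema("public")
	srcSchema.Tables = append(srcSchema.Tables,
		models.InitTable("users", "public"),
		models.InitTable("orders", "public"))
	source.Schemas = append(source.Schemas, srcSchema)

	result := merge.MergeDatabases(target, source, nil) // nil = default options
	fmt.Println(result.TablesAdded)
	// Output: 1
}
```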


@@ -7,6 +7,8 @@ package models
import (
"strings"
"time"
"github.com/google/uuid"
)
// DatabaseType represents the type of database system.
@@ -30,6 +32,7 @@ type Database struct {
DatabaseVersion string `json:"database_version,omitempty" yaml:"database_version,omitempty" xml:"database_version,omitempty"`
SourceFormat string `json:"source_format,omitempty" yaml:"source_format,omitempty" xml:"source_format,omitempty"` // Source Format of the database.
UpdatedAt string `json:"updatedat,omitempty" yaml:"updatedat,omitempty" xml:"updatedat,omitempty"`
GUID string `json:"guid" yaml:"guid" xml:"guid"`
}
// SQLName returns the database name in lowercase for SQL compatibility.
@@ -51,6 +54,7 @@ type Domain struct {
Comment string `json:"comment,omitempty" yaml:"comment,omitempty" xml:"comment,omitempty"`
Metadata map[string]any `json:"metadata,omitempty" yaml:"metadata,omitempty" xml:"-"`
Sequence uint `json:"sequence,omitempty" yaml:"sequence,omitempty" xml:"sequence,omitempty"`
GUID string `json:"guid" yaml:"guid" xml:"guid"`
}
// SQLName returns the domain name in lowercase for SQL compatibility.
@@ -66,6 +70,7 @@ type DomainTable struct {
SchemaName string `json:"schema_name" yaml:"schema_name" xml:"schema_name"`
Sequence uint `json:"sequence,omitempty" yaml:"sequence,omitempty" xml:"sequence,omitempty"`
RefTable *Table `json:"-" yaml:"-" xml:"-"` // Excluded to prevent circular references
GUID string `json:"guid" yaml:"guid" xml:"guid"`
}
// Schema represents a database schema, which is a logical grouping of database objects
@@ -86,6 +91,7 @@ type Schema struct {
Relations []*Relationship `json:"relations,omitempty" yaml:"relations,omitempty" xml:"-"`
Enums []*Enum `json:"enums,omitempty" yaml:"enums,omitempty" xml:"enums"`
UpdatedAt string `json:"updatedat,omitempty" yaml:"updatedat,omitempty" xml:"updatedat,omitempty"`
GUID string `json:"guid" yaml:"guid" xml:"guid"`
}
// UpdateDate sets the UpdatedAt field to the current time in RFC3339 format.
@@ -117,6 +123,7 @@ type Table struct {
Sequence uint `json:"sequence,omitempty" yaml:"sequence,omitempty" xml:"sequence,omitempty"`
RefSchema *Schema `json:"-" yaml:"-" xml:"-"` // Excluded to prevent circular references
UpdatedAt string `json:"updatedat,omitempty" yaml:"updatedat,omitempty" xml:"updatedat,omitempty"`
GUID string `json:"guid" yaml:"guid" xml:"guid"`
}
// UpdateDate sets the UpdatedAt field to the current time in RFC3339 format.
@@ -165,6 +172,7 @@ type View struct {
Metadata map[string]any `json:"metadata,omitempty" yaml:"metadata,omitempty" xml:"-"`
Sequence uint `json:"sequence,omitempty" yaml:"sequence,omitempty" xml:"sequence,omitempty"`
RefSchema *Schema `json:"-" yaml:"-" xml:"-"` // Excluded to prevent circular references
GUID string `json:"guid" yaml:"guid" xml:"guid"`
}
// SQLName returns the view name in lowercase for SQL compatibility.
@@ -188,6 +196,7 @@ type Sequence struct {
Comment string `json:"comment,omitempty" yaml:"comment,omitempty" xml:"comment,omitempty"`
Sequence uint `json:"sequence,omitempty" yaml:"sequence,omitempty" xml:"sequence,omitempty"`
RefSchema *Schema `json:"-" yaml:"-" xml:"-"` // Excluded to prevent circular references
GUID string `json:"guid" yaml:"guid" xml:"guid"`
}
// SQLName returns the sequence name in lowercase for SQL compatibility.
@@ -212,6 +221,7 @@ type Column struct {
Comment string `json:"comment,omitempty" yaml:"comment,omitempty" xml:"comment,omitempty"`
Collation string `json:"collation,omitempty" yaml:"collation,omitempty" xml:"collation,omitempty"`
Sequence uint `json:"sequence,omitempty" yaml:"sequence,omitempty" xml:"sequence,omitempty"`
GUID string `json:"guid" yaml:"guid" xml:"guid"`
}
// SQLName returns the column name in lowercase for SQL compatibility.
@@ -234,6 +244,7 @@ type Index struct {
Include []string `json:"include,omitempty" yaml:"include,omitempty" xml:"include,omitempty"` // INCLUDE columns
Comment string `json:"comment,omitempty" yaml:"comment,omitempty" xml:"comment,omitempty"`
Sequence uint `json:"sequence,omitempty" yaml:"sequence,omitempty" xml:"sequence,omitempty"`
GUID string `json:"guid" yaml:"guid" xml:"guid"`
}
// SQLName returns the index name in lowercase for SQL compatibility.
@@ -268,6 +279,7 @@ type Relationship struct {
ThroughSchema string `json:"through_schema,omitempty" yaml:"through_schema,omitempty" xml:"through_schema,omitempty"`
Description string `json:"description,omitempty" yaml:"description,omitempty" xml:"description,omitempty"`
Sequence uint `json:"sequence,omitempty" yaml:"sequence,omitempty" xml:"sequence,omitempty"`
GUID string `json:"guid" yaml:"guid" xml:"guid"`
}
// SQLName returns the relationship name in lowercase for SQL compatibility.
@@ -292,6 +304,7 @@ type Constraint struct {
Deferrable bool `json:"deferrable,omitempty" yaml:"deferrable,omitempty" xml:"deferrable,omitempty"`
InitiallyDeferred bool `json:"initially_deferred,omitempty" yaml:"initially_deferred,omitempty" xml:"initially_deferred,omitempty"`
Sequence uint `json:"sequence,omitempty" yaml:"sequence,omitempty" xml:"sequence,omitempty"`
GUID string `json:"guid" yaml:"guid" xml:"guid"`
}
// SQLName returns the constraint name in lowercase for SQL compatibility.
@@ -307,6 +320,7 @@ type Enum struct {
Name string `json:"name" yaml:"name" xml:"name"`
Values []string `json:"values" yaml:"values" xml:"values"`
Schema string `json:"schema,omitempty" yaml:"schema,omitempty" xml:"schema,omitempty"`
GUID string `json:"guid" yaml:"guid" xml:"guid"`
}
// SQLName returns the enum name in lowercase for SQL compatibility.
@@ -314,6 +328,16 @@ func (d *Enum) SQLName() string {
return strings.ToLower(d.Name)
}
// InitEnum initializes a new Enum with empty values slice
func InitEnum(name, schema string) *Enum {
return &Enum{
Name: name,
Schema: schema,
Values: make([]string, 0),
GUID: uuid.New().String(),
}
}
// Supported constraint types.
const (
PrimaryKeyConstraint ConstraintType = "primary_key" // Primary key uniquely identifies each record
@@ -335,6 +359,7 @@ type Script struct {
Version string `json:"version,omitempty" yaml:"version,omitempty" xml:"version,omitempty"`
Priority int `json:"priority,omitempty" yaml:"priority,omitempty" xml:"priority,omitempty"`
Sequence uint `json:"sequence,omitempty" yaml:"sequence,omitempty" xml:"sequence,omitempty"`
GUID string `json:"guid" yaml:"guid" xml:"guid"`
}
// SQLName returns the script name in lowercase for SQL compatibility.
@@ -350,6 +375,7 @@ func InitDatabase(name string) *Database {
Name: name,
Schemas: make([]*Schema, 0),
Domains: make([]*Domain, 0),
GUID: uuid.New().String(),
}
}
@@ -363,6 +389,7 @@ func InitSchema(name string) *Schema {
Permissions: make(map[string]string),
Metadata: make(map[string]any),
Scripts: make([]*Script, 0),
GUID: uuid.New().String(),
}
}
@@ -376,6 +403,7 @@ func InitTable(name, schema string) *Table {
Indexes: make(map[string]*Index),
Relationships: make(map[string]*Relationship),
Metadata: make(map[string]any),
GUID: uuid.New().String(),
}
}
@@ -385,6 +413,7 @@ func InitColumn(name, table, schema string) *Column {
Name: name,
Table: table,
Schema: schema,
GUID: uuid.New().String(),
}
}
@@ -396,6 +425,7 @@ func InitIndex(name, table, schema string) *Index {
Schema: schema,
Columns: make([]string, 0),
Include: make([]string, 0),
GUID: uuid.New().String(),
}
}
@@ -408,6 +438,7 @@ func InitRelation(name, schema string) *Relationship {
Properties: make(map[string]string),
FromColumns: make([]string, 0),
ToColumns: make([]string, 0),
GUID: uuid.New().String(),
}
}
@@ -417,6 +448,7 @@ func InitRelationship(name string, relType RelationType) *Relationship {
Name: name,
Type: relType,
Properties: make(map[string]string),
GUID: uuid.New().String(),
}
}
@@ -427,6 +459,7 @@ func InitConstraint(name string, constraintType ConstraintType) *Constraint {
Type: constraintType,
Columns: make([]string, 0),
ReferencedColumns: make([]string, 0),
GUID: uuid.New().String(),
}
}
@@ -435,6 +468,7 @@ func InitScript(name string) *Script {
return &Script{
Name: name,
RunAfter: make([]string, 0),
GUID: uuid.New().String(),
}
}
@@ -445,6 +479,7 @@ func InitView(name, schema string) *View {
Schema: schema,
Columns: make(map[string]*Column),
Metadata: make(map[string]any),
GUID: uuid.New().String(),
}
}
@@ -455,6 +490,7 @@ func InitSequence(name, schema string) *Sequence {
Schema: schema,
IncrementBy: 1,
StartValue: 1,
GUID: uuid.New().String(),
}
}
@@ -464,6 +500,7 @@ func InitDomain(name string) *Domain {
Name: name,
Tables: make([]*DomainTable, 0),
Metadata: make(map[string]any),
GUID: uuid.New().String(),
}
}
@@ -472,5 +509,6 @@ func InitDomainTable(tableName, schemaName string) *DomainTable {
return &DomainTable{
TableName: tableName,
SchemaName: schemaName,
GUID: uuid.New().String(),
}
}


@@ -79,6 +79,8 @@ func (r *Reader) convertToDatabase(dctx *models.DCTXDictionary) (*models.Databas
db := models.InitDatabase(dbName)
schema := models.InitSchema("public")
// Note: DCTX doesn't have database GUID, but schema can use dictionary name if available
// Create GUID mappings for tables and keys
tableGuidMap := make(map[string]string) // GUID -> table name
keyGuidMap := make(map[string]*models.DCTXKey) // GUID -> key definition
@@ -162,6 +164,10 @@ func (r *Reader) convertTable(dctxTable *models.DCTXTable) (*models.Table, map[s
tableName := r.sanitizeName(dctxTable.Name)
table := models.InitTable(tableName, "public")
table.Description = dctxTable.Description
// Assign GUID from DCTX table
if dctxTable.Guid != "" {
table.GUID = dctxTable.Guid
}
fieldGuidMap := make(map[string]string)
@@ -202,6 +208,10 @@ func (r *Reader) convertField(dctxField *models.DCTXField, tableName string) ([]
// Convert single field
column := models.InitColumn(r.sanitizeName(dctxField.Name), tableName, "public")
// Assign GUID from DCTX field
if dctxField.Guid != "" {
column.GUID = dctxField.Guid
}
// Map Clarion data types
dataType, length := r.mapDataType(dctxField.DataType, dctxField.Size)
@@ -346,6 +356,10 @@ func (r *Reader) convertKey(dctxKey *models.DCTXKey, table *models.Table, fieldG
constraint.Table = table.Name
constraint.Schema = table.Schema
constraint.Columns = columns
// Assign GUID from DCTX key
if dctxKey.Guid != "" {
constraint.GUID = dctxKey.Guid
}
table.Constraints[constraint.Name] = constraint
@@ -366,6 +380,10 @@ func (r *Reader) convertKey(dctxKey *models.DCTXKey, table *models.Table, fieldG
index.Columns = columns
index.Unique = dctxKey.Unique
index.Type = "btree"
// Assign GUID from DCTX key
if dctxKey.Guid != "" {
index.GUID = dctxKey.Guid
}
table.Indexes[index.Name] = index
return nil
@@ -460,6 +478,10 @@ func (r *Reader) processRelations(dctx *models.DCTXDictionary, schema *models.Sc
constraint.ReferencedColumns = pkColumns
constraint.OnDelete = r.mapReferentialAction(relation.Delete)
constraint.OnUpdate = r.mapReferentialAction(relation.Update)
// Assign GUID from DCTX relation
if relation.Guid != "" {
constraint.GUID = relation.Guid
}
foreignTable.Constraints[fkName] = constraint
@@ -473,6 +495,10 @@ func (r *Reader) processRelations(dctx *models.DCTXDictionary, schema *models.Sc
relationship.ForeignKey = fkName
relationship.Properties["on_delete"] = constraint.OnDelete
relationship.Properties["on_update"] = constraint.OnUpdate
// Assign GUID from DCTX relation
if relation.Guid != "" {
relationship.GUID = relation.Guid
}
foreignTable.Relationships[relationshipName] = relationship
}
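Every convert step above repeats the same two-line rule: keep the GUID generated at Init time unless the DCTX source carries one. Factored into a hypothetical helper (not part of this change set), the rule reads:

// preferSourceGUID returns the source GUID when present, otherwise the
// fallback generated by the Init* constructor.
func preferSourceGUID(source, generated string) string {
	if source != "" {
		return source
	}
	return generated
}

// e.g. in convertTable: table.GUID = preferSourceGUID(dctxTable.Guid, table.GUID)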

View File

@@ -241,11 +241,9 @@ func (r *Reader) parsePgEnum(line string, matches []string) *models.Enum {
}
}
- return &models.Enum{
- Name: enumName,
- Values: values,
- Schema: "public",
- }
+ enum := models.InitEnum(enumName, "public")
+ enum.Values = values
+ return enum
}
// parseTableBlock parses a complete pgTable definition block

View File

@@ -260,11 +260,7 @@ func (r *Reader) parseType(typeName string, lines []string, schema *models.Schem
}
func (r *Reader) parseEnum(enumName string, lines []string, schema *models.Schema) {
- enum := &models.Enum{
- Name: enumName,
- Schema: schema.Name,
- Values: make([]string, 0),
- }
+ enum := models.InitEnum(enumName, schema.Name)
for _, line := range lines {
trimmed := strings.TrimSpace(line)

View File

@@ -128,11 +128,7 @@ func (r *Reader) parsePrisma(content string) (*models.Database, error) {
if matches := enumRegex.FindStringSubmatch(trimmed); matches != nil {
currentBlock = "enum"
enumName := matches[1]
- currentEnum = &models.Enum{
- Name: enumName,
- Schema: "public",
- Values: make([]string, 0),
- }
+ currentEnum = models.InitEnum(enumName, "public")
blockContent = []string{}
continue
}

View File

@@ -150,13 +150,11 @@ func (r *Reader) readScripts() ([]*models.Script, error) {
}
// Create Script model
- script := &models.Script{
- Name: name,
- Description: fmt.Sprintf("SQL script from %s", relPath),
- SQL: string(content),
- Priority: priority,
- Sequence: uint(sequence),
- }
+ script := models.InitScript(name)
+ script.Description = fmt.Sprintf("SQL script from %s", relPath)
+ script.SQL = string(content)
+ script.Priority = priority
+ script.Sequence = uint(sequence)
scripts = append(scripts, script)

View File

@@ -23,6 +23,7 @@ func (se *SchemaEditor) showColumnEditor(schemaIndex, tableIndex, colIndex int,
newIsNotNull := column.NotNull
newDefault := column.Default
newDescription := column.Description
newGUID := column.GUID
// Column type options: PostgreSQL, MySQL, SQL Server, and common SQL types
columnTypes := []string{
@@ -94,9 +95,14 @@ func (se *SchemaEditor) showColumnEditor(schemaIndex, tableIndex, colIndex int,
newDescription = value
})
form.AddInputField("GUID", column.GUID, 40, nil, func(value string) {
newGUID = value
})
form.AddButton("Save", func() {
// Apply changes using dataops
se.UpdateColumn(schemaIndex, tableIndex, originalName, newName, newType, newIsPK, newIsNotNull, newDefault, newDescription)
se.db.Schemas[schemaIndex].Tables[tableIndex].Columns[newName].GUID = newGUID
se.pages.RemovePage("column-editor")
se.pages.SwitchToPage("table-editor")
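The GUID input follows the same staged-edit pattern as every editor form in this commit: callbacks write into locals, and only Save commits them to the model. A standalone sketch of that pattern with tview (the record type and values are illustrative, not from the repo):

package main

import "github.com/rivo/tview"

type record struct{ GUID string }

func main() {
	current := &record{GUID: "existing-guid"}
	staged := current.GUID // edits accumulate here first

	app := tview.NewApplication()
	form := tview.NewForm().
		AddInputField("GUID", staged, 40, nil, func(v string) { staged = v }).
		AddButton("Save", func() {
			current.GUID = staged // committed only on Save
			app.Stop()
		})

	if err := app.SetRoot(form, true).Run(); err != nil {
		panic(err)
	}
}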

View File

@@ -14,6 +14,7 @@ func (se *SchemaEditor) showEditDatabaseForm() {
dbComment := se.db.Comment
dbType := string(se.db.DatabaseType)
dbVersion := se.db.DatabaseVersion
dbGUID := se.db.GUID
// Database type options
dbTypeOptions := []string{"pgsql", "mssql", "sqlite"}
@@ -45,11 +46,16 @@ func (se *SchemaEditor) showEditDatabaseForm() {
dbVersion = value
})
form.AddInputField("GUID", dbGUID, 40, nil, func(value string) {
dbGUID = value
})
form.AddButton("Save", func() {
if dbName == "" {
return
}
se.updateDatabase(dbName, dbDescription, dbComment, dbType, dbVersion)
se.db.GUID = dbGUID
se.pages.RemovePage("edit-database")
se.pages.RemovePage("main")
se.pages.AddPage("main", se.createMainMenu(), true, true)

View File

@@ -4,10 +4,12 @@ import (
"fmt"
"os"
"path/filepath"
"strings"
"github.com/gdamore/tcell/v2"
"github.com/rivo/tview"
"git.warky.dev/wdevs/relspecgo/pkg/merge"
"git.warky.dev/wdevs/relspecgo/pkg/models"
"git.warky.dev/wdevs/relspecgo/pkg/readers"
rbun "git.warky.dev/wdevs/relspecgo/pkg/readers/bun"
@@ -107,8 +109,7 @@ func (se *SchemaEditor) showLoadScreen() {
// Keyboard shortcuts
form.SetInputCapture(func(event *tcell.EventKey) *tcell.EventKey {
- switch event.Key() {
- case tcell.KeyEscape:
+ if event.Key() == tcell.KeyEscape {
se.app.Stop()
return nil
}
@@ -214,8 +215,7 @@ func (se *SchemaEditor) showSaveScreen() {
// Keyboard shortcuts
form.SetInputCapture(func(event *tcell.EventKey) *tcell.EventKey {
- switch event.Key() {
- case tcell.KeyEscape:
+ if event.Key() == tcell.KeyEscape {
se.pages.RemovePage("save-database")
se.pages.SwitchToPage("main")
return nil
@@ -524,3 +524,268 @@ Examples:
- File: ~/schemas/mydb.dbml
- Directory (for code formats): ./models/`
}
// showImportScreen displays the import/merge database screen
func (se *SchemaEditor) showImportScreen() {
flex := tview.NewFlex().SetDirection(tview.FlexRow)
// Title
title := tview.NewTextView().
SetText("[::b]Import & Merge Database Schema").
SetTextAlign(tview.AlignCenter).
SetDynamicColors(true)
// Form
form := tview.NewForm()
form.SetBorder(true).SetTitle(" Import Configuration ").SetTitleAlign(tview.AlignLeft)
// Format selection
formatOptions := []string{
"dbml", "dctx", "drawdb", "graphql", "json", "yaml",
"gorm", "bun", "drizzle", "prisma", "typeorm", "pgsql",
}
selectedFormat := 0
currentFormat := formatOptions[selectedFormat]
// File path input
filePath := ""
connString := ""
skipDomains := false
skipRelations := false
skipEnums := false
skipViews := false
skipSequences := false
skipTables := ""
form.AddDropDown("Format", formatOptions, 0, func(option string, index int) {
selectedFormat = index
currentFormat = option
})
form.AddInputField("File Path", "", 50, nil, func(value string) {
filePath = value
})
form.AddInputField("Connection String", "", 50, nil, func(value string) {
connString = value
})
form.AddInputField("Skip Tables (comma-separated)", "", 50, nil, func(value string) {
skipTables = value
})
form.AddCheckbox("Skip Domains", false, func(checked bool) {
skipDomains = checked
})
form.AddCheckbox("Skip Relations", false, func(checked bool) {
skipRelations = checked
})
form.AddCheckbox("Skip Enums", false, func(checked bool) {
skipEnums = checked
})
form.AddCheckbox("Skip Views", false, func(checked bool) {
skipViews = checked
})
form.AddCheckbox("Skip Sequences", false, func(checked bool) {
skipSequences = checked
})
form.AddTextView("Help", getImportHelpText(), 0, 7, true, false)
// Buttons
form.AddButton("Import & Merge [i]", func() {
se.importAndMergeDatabase(currentFormat, filePath, connString, skipDomains, skipRelations, skipEnums, skipViews, skipSequences, skipTables)
})
form.AddButton("Back [b]", func() {
se.pages.RemovePage("import-database")
se.pages.SwitchToPage("main")
})
form.AddButton("Exit [q]", func() {
se.app.Stop()
})
// Keyboard shortcuts
form.SetInputCapture(func(event *tcell.EventKey) *tcell.EventKey {
if event.Key() == tcell.KeyEscape {
se.pages.RemovePage("import-database")
se.pages.SwitchToPage("main")
return nil
}
switch event.Rune() {
case 'i':
se.importAndMergeDatabase(currentFormat, filePath, connString, skipDomains, skipRelations, skipEnums, skipViews, skipSequences, skipTables)
return nil
case 'b':
se.pages.RemovePage("import-database")
se.pages.SwitchToPage("main")
return nil
case 'q':
se.app.Stop()
return nil
}
return event
})
flex.AddItem(title, 1, 0, false).
AddItem(form, 0, 1, true)
se.pages.AddAndSwitchToPage("import-database", flex, true)
}
// importAndMergeDatabase imports and merges a database from the specified configuration
func (se *SchemaEditor) importAndMergeDatabase(format, filePath, connString string, skipDomains, skipRelations, skipEnums, skipViews, skipSequences bool, skipTables string) {
// Validate input
if format == "pgsql" {
if connString == "" {
se.showErrorDialog("Error", "Connection string is required for PostgreSQL")
return
}
} else {
if filePath == "" {
se.showErrorDialog("Error", "File path is required for "+format)
return
}
// Expand home directory
if len(filePath) > 0 && filePath[0] == '~' {
home, err := os.UserHomeDir()
if err == nil {
filePath = filepath.Join(home, filePath[1:])
}
}
}
// Create reader
var reader readers.Reader
switch format {
case "dbml":
reader = rdbml.NewReader(&readers.ReaderOptions{FilePath: filePath})
case "dctx":
reader = rdctx.NewReader(&readers.ReaderOptions{FilePath: filePath})
case "drawdb":
reader = rdrawdb.NewReader(&readers.ReaderOptions{FilePath: filePath})
case "graphql":
reader = rgraphql.NewReader(&readers.ReaderOptions{FilePath: filePath})
case "json":
reader = rjson.NewReader(&readers.ReaderOptions{FilePath: filePath})
case "yaml":
reader = ryaml.NewReader(&readers.ReaderOptions{FilePath: filePath})
case "gorm":
reader = rgorm.NewReader(&readers.ReaderOptions{FilePath: filePath})
case "bun":
reader = rbun.NewReader(&readers.ReaderOptions{FilePath: filePath})
case "drizzle":
reader = rdrizzle.NewReader(&readers.ReaderOptions{FilePath: filePath})
case "prisma":
reader = rprisma.NewReader(&readers.ReaderOptions{FilePath: filePath})
case "typeorm":
reader = rtypeorm.NewReader(&readers.ReaderOptions{FilePath: filePath})
case "pgsql":
reader = rpgsql.NewReader(&readers.ReaderOptions{ConnectionString: connString})
default:
se.showErrorDialog("Error", "Unsupported format: "+format)
return
}
// Read the database to import
importDb, err := reader.ReadDatabase()
if err != nil {
se.showErrorDialog("Import Error", fmt.Sprintf("Failed to read database: %v", err))
return
}
// Show confirmation dialog
se.showImportConfirmation(importDb, skipDomains, skipRelations, skipEnums, skipViews, skipSequences, skipTables)
}
// showImportConfirmation shows a confirmation dialog before merging
func (se *SchemaEditor) showImportConfirmation(importDb *models.Database, skipDomains, skipRelations, skipEnums, skipViews, skipSequences bool, skipTables string) {
confirmText := fmt.Sprintf("Import & Merge Database?\n\nSource: %s\nTarget: %s\n\nThis will add missing schemas, tables, columns, and other objects from the source to your database.\n\nExisting items will NOT be modified.",
importDb.Name, se.db.Name)
modal := tview.NewModal().
SetText(confirmText).
AddButtons([]string{"Cancel", "Merge"}).
SetDoneFunc(func(buttonIndex int, buttonLabel string) {
se.pages.RemovePage("import-confirm")
if buttonLabel == "Merge" {
se.performMerge(importDb, skipDomains, skipRelations, skipEnums, skipViews, skipSequences, skipTables)
}
})
modal.SetInputCapture(func(event *tcell.EventKey) *tcell.EventKey {
if event.Key() == tcell.KeyEscape {
se.pages.RemovePage("import-confirm")
se.pages.SwitchToPage("import-database")
return nil
}
return event
})
se.pages.AddAndSwitchToPage("import-confirm", modal, true)
}
// performMerge performs the actual merge operation
func (se *SchemaEditor) performMerge(importDb *models.Database, skipDomains, skipRelations, skipEnums, skipViews, skipSequences bool, skipTables string) {
// Create merge options
opts := &merge.MergeOptions{
SkipDomains: skipDomains,
SkipRelations: skipRelations,
SkipEnums: skipEnums,
SkipViews: skipViews,
SkipSequences: skipSequences,
}
// Parse skip tables
if skipTables != "" {
opts.SkipTableNames = parseSkipTablesUI(skipTables)
}
// Perform the merge
result := merge.MergeDatabases(se.db, importDb, opts)
// Update the database timestamp
se.db.UpdateDate()
// Show success dialog with summary
summary := merge.GetMergeSummary(result)
se.showSuccessDialog("Import Complete", summary, func() {
se.pages.RemovePage("import-database")
se.pages.RemovePage("main")
se.pages.AddPage("main", se.createMainMenu(), true, true)
})
}
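// The same merge, sketched without the TUI (assumed usage: targetDb and
// importDb can come from any of the readers above; error handling trimmed):
//
//	opts := &merge.MergeOptions{SkipViews: true, SkipSequences: true}
//	opts.SkipTableNames = parseSkipTablesUI("audit_log, temp_data")
//	result := merge.MergeDatabases(targetDb, importDb, opts)
//	fmt.Println(merge.GetMergeSummary(result))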
// getImportHelpText returns the help text for the import screen
func getImportHelpText() string {
return `Import & Merge: Adds missing schemas, tables, columns, and other objects to your existing database.
File-based formats: dbml, dctx, drawdb, graphql, json, yaml, gorm, bun, drizzle, prisma, typeorm
Database formats: pgsql (requires connection string)
Skip options: Check to exclude specific object types from the merge.`
}
func parseSkipTablesUI(skipTablesStr string) map[string]bool {
skipTables := make(map[string]bool)
if skipTablesStr == "" {
return skipTables
}
// Split by comma and trim whitespace
parts := strings.Split(skipTablesStr, ",")
for _, part := range parts {
trimmed := strings.TrimSpace(part)
if trimmed != "" {
// Store in lowercase for case-insensitive matching
skipTables[strings.ToLower(trimmed)] = true
}
}
return skipTables
}
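// Behaviour of parseSkipTablesUI on a messy input, following directly from
// the code above:
//
//	skips := parseSkipTablesUI(" Users, audit_log ,")
//	len(skips)         // 2 (the trailing empty segment is dropped)
//	skips["users"]     // true (trimmed and stored lowercase)
//	skips["AUDIT_LOG"] // false (lookups presumably need lowercasing too)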

View File

@@ -39,6 +39,9 @@ func (se *SchemaEditor) createMainMenu() tview.Primitive {
AddItem("Manage Domains", "View, create, edit, and delete domains", 'd', func() {
se.showDomainList()
}).
AddItem("Import & Merge", "Import and merge schema from another database", 'i', func() {
se.showImportScreen()
}).
AddItem("Save Database", "Save database to file or database", 'w', func() {
se.showSaveScreen()
}).

View File

@@ -24,8 +24,8 @@ func (se *SchemaEditor) showSchemaList() {
schemaTable := tview.NewTable().SetBorders(true).SetSelectable(true, false).SetFixed(1, 0)
// Add header row with padding for full width
- headers := []string{"Name", "Sequence", "Total Tables", "Total Sequences", "Total Views", "Description"}
- headerWidths := []int{20, 15, 20, 20, 15} // Last column takes remaining space
+ headers := []string{"Name", "Sequence", "Total Tables", "Total Sequences", "Total Views", "GUID", "Description"}
+ headerWidths := []int{20, 15, 20, 20, 15, 36} // Last column takes remaining space
for i, header := range headers {
padding := ""
if i < len(headerWidths) {
@@ -67,9 +67,14 @@ func (se *SchemaEditor) showSchemaList() {
viewsCell := tview.NewTableCell(viewsStr).SetSelectable(true)
schemaTable.SetCell(row+1, 4, viewsCell)
// GUID - pad to 36 chars
guidStr := fmt.Sprintf("%-36s", schema.GUID)
guidCell := tview.NewTableCell(guidStr).SetSelectable(true)
schemaTable.SetCell(row+1, 5, guidCell)
// Description - no padding, takes remaining space
descCell := tview.NewTableCell(schema.Description).SetSelectable(true)
- schemaTable.SetCell(row+1, 5, descCell)
+ schemaTable.SetCell(row+1, 6, descCell)
}
schemaTable.SetTitle(" Schemas ").SetBorder(true).SetTitleAlign(tview.AlignLeft)
@@ -307,6 +312,7 @@ func (se *SchemaEditor) showEditSchemaDialog(schemaIndex int) {
newName := schema.Name
newOwner := schema.Owner
newDescription := schema.Description
newGUID := schema.GUID
form.AddInputField("Schema Name", schema.Name, 40, nil, func(value string) {
newName = value
@@ -320,9 +326,14 @@ func (se *SchemaEditor) showEditSchemaDialog(schemaIndex int) {
newDescription = value
})
form.AddInputField("GUID", schema.GUID, 40, nil, func(value string) {
newGUID = value
})
form.AddButton("Save", func() {
// Apply changes using dataops
se.UpdateSchema(schemaIndex, newName, newOwner, newDescription)
se.db.Schemas[schemaIndex].GUID = newGUID
schema := se.db.Schemas[schemaIndex]
se.pages.RemovePage("edit-schema")

View File

@@ -24,8 +24,8 @@ func (se *SchemaEditor) showTableList() {
tableTable := tview.NewTable().SetBorders(true).SetSelectable(true, false).SetFixed(1, 0)
// Add header row with padding for full width
- headers := []string{"Name", "Schema", "Sequence", "Total Columns", "Total Relations", "Total Indexes", "Description", "Comment"}
- headerWidths := []int{18, 15, 12, 14, 15, 14, 0, 12} // Description gets remainder
+ headers := []string{"Name", "Schema", "Sequence", "Total Columns", "Total Relations", "Total Indexes", "GUID", "Description", "Comment"}
+ headerWidths := []int{18, 15, 12, 14, 15, 14, 36, 0, 12} // Description gets remainder
for i, header := range headers {
padding := ""
if i < len(headerWidths) && headerWidths[i] > 0 {
@@ -82,14 +82,19 @@ func (se *SchemaEditor) showTableList() {
idxCell := tview.NewTableCell(idxStr).SetSelectable(true)
tableTable.SetCell(row+1, 5, idxCell)
// GUID - pad to 36 chars
guidStr := fmt.Sprintf("%-36s", table.GUID)
guidCell := tview.NewTableCell(guidStr).SetSelectable(true)
tableTable.SetCell(row+1, 6, guidCell)
// Description - no padding, takes remaining space
descCell := tview.NewTableCell(table.Description).SetSelectable(true)
- tableTable.SetCell(row+1, 6, descCell)
+ tableTable.SetCell(row+1, 7, descCell)
// Comment - pad to 12 chars
commentStr := fmt.Sprintf("%-12s", table.Comment)
commentCell := tview.NewTableCell(commentStr).SetSelectable(true)
- tableTable.SetCell(row+1, 7, commentCell)
+ tableTable.SetCell(row+1, 8, commentCell)
}
tableTable.SetTitle(" All Tables ").SetBorder(true).SetTitleAlign(tview.AlignLeft)
@@ -188,8 +193,8 @@ func (se *SchemaEditor) showTableEditor(schemaIndex, tableIndex int, table *mode
colTable := tview.NewTable().SetBorders(true).SetSelectable(true, false).SetFixed(1, 0)
// Add header row with padding for full width
- headers := []string{"Name", "Type", "Default", "KeyType", "Description"}
- headerWidths := []int{20, 18, 15, 15} // Last column takes remaining space
+ headers := []string{"Name", "Type", "Default", "KeyType", "GUID", "Description"}
+ headerWidths := []int{20, 18, 15, 15, 36} // Last column takes remaining space
for i, header := range headers {
padding := ""
if i < len(headerWidths) {
@@ -237,9 +242,14 @@ func (se *SchemaEditor) showTableEditor(schemaIndex, tableIndex int, table *mode
keyTypeCell := tview.NewTableCell(keyTypeStr).SetSelectable(true)
colTable.SetCell(row+1, 3, keyTypeCell)
// GUID - pad to 36 chars
guidStr := fmt.Sprintf("%-36s", column.GUID)
guidCell := tview.NewTableCell(guidStr).SetSelectable(true)
colTable.SetCell(row+1, 4, guidCell)
// Description
descCell := tview.NewTableCell(column.Description).SetSelectable(true)
- colTable.SetCell(row+1, 4, descCell)
+ colTable.SetCell(row+1, 5, descCell)
}
colTable.SetTitle(" Columns ").SetBorder(true).SetTitleAlign(tview.AlignLeft)
@@ -490,6 +500,7 @@ func (se *SchemaEditor) showEditTableDialog(schemaIndex, tableIndex int) {
// Local variables to collect changes
newName := table.Name
newDescription := table.Description
newGUID := table.GUID
form.AddInputField("Table Name", table.Name, 40, nil, func(value string) {
newName = value
@@ -499,9 +510,14 @@ func (se *SchemaEditor) showEditTableDialog(schemaIndex, tableIndex int) {
newDescription = value
})
form.AddInputField("GUID", table.GUID, 40, nil, func(value string) {
newGUID = value
})
form.AddButton("Save", func() {
// Apply changes using dataops
se.UpdateTable(schemaIndex, tableIndex, newName, newDescription)
se.db.Schemas[schemaIndex].Tables[tableIndex].GUID = newGUID
table := se.db.Schemas[schemaIndex].Tables[tableIndex]
se.pages.RemovePage("edit-table")

View File

@@ -386,6 +386,7 @@ func (w *Writer) createDatabaseRef(db *models.Database) *models.Database {
DatabaseVersion: db.DatabaseVersion,
SourceFormat: db.SourceFormat,
Schemas: nil, // Don't include schemas to avoid circular reference
GUID: db.GUID,
}
}
@@ -402,5 +403,6 @@ func (w *Writer) createSchemaRef(schema *models.Schema, db *models.Database) *mo
Sequence: schema.Sequence,
RefDatabase: w.createDatabaseRef(db), // Include database ref
Tables: nil, // Don't include tables to avoid circular reference
GUID: schema.GUID,
}
}
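The shallow refs are what make the GUID carry-over safe to serialize. A minimal self-contained sketch of why the nil Schemas/Tables matter, assuming encoding/json-style output; Parent and Child are illustrative stand-ins for Database and Schema:

package main

import (
	"encoding/json"
	"fmt"
)

type Parent struct {
	GUID     string
	Children []*Child
}

type Child struct {
	GUID      string
	RefParent *Parent
}

func main() {
	p := &Parent{GUID: "db-guid"}
	c := &Child{GUID: "schema-guid", RefParent: &Parent{GUID: p.GUID}}
	// the ref keeps identity (GUID) but cuts the children, so no cycle exists
	out, _ := json.Marshal(c)
	fmt.Println(string(out))
	// a full back-pointer (c.RefParent = p with p.Children = [c]) would form a
	// cycle, which json.Marshal rejects with an error
}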

View File

@@ -133,7 +133,11 @@ func (w *Writer) mapTableFields(table *models.Table) models.DCTXTable {
prefix = table.Name[:3]
}
- tableGuid := w.newGUID()
+ // Use GUID from model if available, otherwise generate a new one
+ tableGuid := table.GUID
+ if tableGuid == "" {
+ tableGuid = w.newGUID()
+ }
w.tableGuidMap[table.Name] = tableGuid
dctxTable := models.DCTXTable{
@@ -171,7 +175,11 @@ func (w *Writer) mapTableKeys(table *models.Table) []models.DCTXKey {
}
func (w *Writer) mapField(column *models.Column) models.DCTXField {
- guid := w.newGUID()
+ // Use GUID from model if available, otherwise generate a new one
+ guid := column.GUID
+ if guid == "" {
+ guid = w.newGUID()
+ }
fieldKey := fmt.Sprintf("%s.%s", column.Table, column.Name)
w.fieldGuidMap[fieldKey] = guid
@@ -209,7 +217,11 @@ func (w *Writer) mapDataType(dataType string) string {
}
func (w *Writer) mapKey(index *models.Index, table *models.Table) models.DCTXKey {
- guid := w.newGUID()
+ // Use GUID from model if available, otherwise generate a new one
+ guid := index.GUID
+ if guid == "" {
+ guid = w.newGUID()
+ }
keyKey := fmt.Sprintf("%s.%s", table.Name, index.Name)
w.keyGuidMap[keyKey] = guid
@@ -344,7 +356,7 @@ func (w *Writer) mapRelation(rel *models.Relationship, schema *models.Schema) mo
}
return models.DCTXRelation{
- Guid: w.newGUID(),
+ Guid: rel.GUID, // Use GUID from relationship model
PrimaryTable: w.tableGuidMap[rel.ToTable], // GUID of the 'to' table (e.g., users)
ForeignTable: w.tableGuidMap[rel.FromTable], // GUID of the 'from' table (e.g., posts)
PrimaryKey: primaryKeyGUID,
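// Note the asymmetry with mapTableFields/mapField/mapKey above: the relation
// GUID is taken from the model as-is, with no empty-value fallback. If one
// were wanted, it would presumably mirror the others:
//
//	guid := rel.GUID
//	if guid == "" {
//		guid = w.newGUID()
//	}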

View File

@@ -380,6 +380,7 @@ func (w *Writer) createDatabaseRef(db *models.Database) *models.Database {
DatabaseVersion: db.DatabaseVersion,
SourceFormat: db.SourceFormat,
Schemas: nil, // Don't include schemas to avoid circular reference
GUID: db.GUID,
}
}
@@ -396,5 +397,6 @@ func (w *Writer) createSchemaRef(schema *models.Schema, db *models.Database) *mo
Sequence: schema.Sequence,
RefDatabase: w.createDatabaseRef(db), // Include database ref
Tables: nil, // Don't include tables to avoid circular reference
GUID: schema.GUID,
}
}