33 Commits

2ec9991324 feat(merge): 🎉 Add support for merging constraints and indexes
* Implement mergeConstraints to handle table constraints
* Implement mergeIndexes to handle table indexes
* Update mergeTables to include constraints and indexes during merge
2026-01-31 21:27:28 +02:00
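Since the commit body only names the two helpers, here is a minimal sketch of what such an additive merge step could look like (the signature, a name-keyed Constraints map, and the MergeResult counter are assumptions, not the project's actual code):

```go
// Sketch only: adds constraints from src that are missing in dst.
// Additive merge semantics: existing definitions are never overwritten.
func mergeConstraints(dst, src *models.Table, result *MergeResult) {
	for name, c := range src.Constraints {
		if _, exists := dst.Constraints[name]; exists {
			continue // keep the existing constraint untouched
		}
		dst.Constraints[name] = c
		result.ConstraintsAdded++
	}
}
```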
a3e45c206d feat(writer): 🎉 Enhance SQL execution logging and add statement type detection
* Log statement type during execution for better debugging
* Introduce detectStatementType function to categorize SQL statements
* Update unique constraint naming convention in tests
2026-01-31 21:19:48 +02:00
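detectStatementType is named in the commit but its body is not shown, so the following is an illustrative sketch of a keyword-based classifier; the category strings are assumptions:

```go
import "strings"

// detectStatementType classifies a SQL statement by its leading keywords
// so that execution logs can report what kind of statement ran.
func detectStatementType(sql string) string {
	s := strings.ToUpper(strings.TrimSpace(sql))
	switch {
	case strings.HasPrefix(s, "CREATE TABLE"):
		return "CREATE TABLE"
	case strings.HasPrefix(s, "CREATE UNIQUE INDEX"), strings.HasPrefix(s, "CREATE INDEX"):
		return "CREATE INDEX"
	case strings.HasPrefix(s, "ALTER TABLE"):
		return "ALTER TABLE"
	case strings.HasPrefix(s, "DO"):
		return "DO block"
	default:
		return "OTHER"
	}
}
```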
165623bb1d feat(pgsql): Add templates for constraints and sequences
* Introduce new templates for creating unique, check, and foreign key constraints with existence checks.
* Add templates for setting sequence values and creating sequences.
* Refactor existing SQL generation logic to utilize new templates for better maintainability and readability.
* Ensure identifiers are properly quoted to handle special characters and reserved keywords.
2026-01-31 21:04:43 +02:00
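A guarded constraint template of the kind described here typically wraps the ALTER TABLE in a DO block with an existence check against pg_constraint. A sketch, assuming fmt-style placeholders that receive pre-quoted identifiers (not the project's actual template text):

```go
// Illustrative template: add a unique constraint only if it does not exist.
// %[1]s = constraint name, %[2]s/%[3]s = quoted schema/table, %[4]s = column list.
const createUniqueConstraintTemplate = `DO $$
BEGIN
    IF NOT EXISTS (
        SELECT 1 FROM pg_constraint
        WHERE conname = '%[1]s'
          AND conrelid = '%[2]s.%[3]s'::regclass
    ) THEN
        ALTER TABLE %[2]s.%[3]s ADD CONSTRAINT "%[1]s" UNIQUE (%[4]s);
    END IF;
END $$;`
```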
3c20c3c5d9 feat(writer): 🎉 Add support for check constraints in schema generation
* Implement check constraints in the schema writer.
* Generate SQL statements to add check constraints if they do not exist.
* Add tests to verify correct generation of check constraints.
2026-01-31 20:42:19 +02:00
a54594e49b feat(writer): 🎉 Add support for unique constraints in schema generation
* Implement unique constraint handling in GenerateSchemaStatements
* Add writeUniqueConstraints method for generating SQL statements
* Create unit test for unique constraints in writer_test.go
2026-01-31 20:33:08 +02:00
cafe6a461f feat(scripts): 🎉 Add --ignore-errors flag for script execution
- Allow continued execution of scripts even if errors occur.
- Update execution summary to include counts of successful and failed scripts.
- Enhance error handling and reporting for better visibility.
2026-01-31 20:21:22 +02:00
abdb9b4c78 feat(dbml/reader): 🎉 Implement splitIdentifier function for parsing
2026-01-31 19:45:24 +02:00
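The commit has no body; a splitIdentifier for DBML plausibly splits dotted names such as "my schema".users while treating dots inside double quotes as literal. A sketch under that assumption:

```go
import "strings"

// splitIdentifier sketch: split a dotted identifier into its parts,
// ignoring dots that appear inside double-quoted segments.
func splitIdentifier(s string) []string {
	var parts []string
	var cur strings.Builder
	inQuotes := false
	for _, r := range s {
		switch {
		case r == '"':
			inQuotes = !inQuotes
		case r == '.' && !inQuotes:
			parts = append(parts, cur.String())
			cur.Reset()
		default:
			cur.WriteRune(r)
		}
	}
	return append(parts, cur.String())
}
```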
e7a15c8e4f feat(writer): 🎉 Implement add column statements for schema evolution
* Add functionality to generate ALTER TABLE ADD COLUMN statements for existing tables.
* Introduce tests for generating and writing add column statements.
* Enhance schema evolution capabilities when new columns are added.
2026-01-31 19:12:00 +02:00
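A sketch of the kind of statement this produces, assuming PostgreSQL's ADD COLUMN IF NOT EXISTS guard (9.6+) and simplified %q quoting; the helper name and signature are hypothetical:

```go
import "fmt"

// addColumnSQL sketch: emit a guarded ADD COLUMN for schema evolution.
func addColumnSQL(schema, table, column, sqlType string, notNull bool) string {
	stmt := fmt.Sprintf("ALTER TABLE %q.%q ADD COLUMN IF NOT EXISTS %q %s",
		schema, table, column, sqlType)
	if notNull {
		stmt += " NOT NULL"
	}
	return stmt + ";"
}
```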
c36b5ede2b feat(writer): 🎉 Enhance primary key handling and add tests
* Implement checks for existing primary keys before adding new ones.
* Drop auto-generated primary keys if they exist.
* Add tests for primary key existence and column size specifiers.
* Improve type conversion handling for PostgreSQL compatibility.
2026-01-31 18:59:32 +02:00
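Checking for an existing primary key in PostgreSQL is usually a query against pg_constraint; an illustrative query of that kind (not necessarily the project's exact SQL):

```go
// Illustrative query: does the given table already have a primary key?
// $1 = schema name, $2 = table name.
const primaryKeyExistsQuery = `
SELECT EXISTS (
    SELECT 1
    FROM pg_constraint c
    JOIN pg_class t ON t.oid = c.conrelid
    JOIN pg_namespace n ON n.oid = t.relnamespace
    WHERE c.contype = 'p'
      AND n.nspname = $1
      AND t.relname = $2
);`
```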
51ab29f8e3 feat(writer): 🎉 Update index naming conventions for consistency
* Use SQLName() for primary key constraint naming
* Enhance index name formatting with column suffix
2026-01-31 17:23:18 +02:00
f532fc110c feat(writer): 🎉 Enhance script execution order and add symlink skipping
* Update script execution to sort by Priority, Sequence, and Name.
* Add functionality to skip symbolic links during directory scanning.
* Improve documentation to reflect changes in execution order and features.
* Add tests for symlink skipping and ensure correct script sorting.
2026-01-31 16:59:17 +02:00
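Skipping symbolic links during a recursive scan avoids cycles and surprise files; a minimal sketch using filepath.WalkDir (the helper name and filters are assumptions):

```go
import (
	"io/fs"
	"path/filepath"
	"strings"
)

// collectScripts sketch: walk a scripts directory recursively,
// skipping symbolic links and keeping only .sql/.pgsql files.
func collectScripts(dir string) ([]string, error) {
	var paths []string
	err := filepath.WalkDir(dir, func(path string, d fs.DirEntry, err error) error {
		if err != nil {
			return err
		}
		if d.Type()&fs.ModeSymlink != 0 {
			return nil // skip symlinks entirely
		}
		if !d.IsDir() && (strings.HasSuffix(path, ".sql") || strings.HasSuffix(path, ".pgsql")) {
			paths = append(paths, path)
		}
		return nil
	})
	return paths, err
}
```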
92dff99725 feat(writer): enhance type conversion for PostgreSQL compatibility and add tests
2026-01-29 21:36:23 +02:00
283b568adb feat(pgsql): add execution reporting for SQL statements
- Implemented ExecutionReport to track the execution status of SQL statements.
- Added SchemaReport and TableReport to monitor execution per schema and table.
- Enhanced WriteDatabase to execute SQL directly on a PostgreSQL database if a connection string is provided.
- Included error handling and logging for failed statements during execution.
- Added functionality to write execution reports to a JSON file.
- Introduced utility functions to extract table names from CREATE TABLE statements and truncate long SQL statements for error messages.
2026-01-29 21:16:14 +02:00
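A sketch of the report shape these bullets imply; field names and JSON keys are assumptions:

```go
import "time"

// ExecutionReport sketch: overall execution status of generated SQL.
type ExecutionReport struct {
	StartedAt  time.Time       `json:"started_at"`
	FinishedAt time.Time       `json:"finished_at"`
	Succeeded  int             `json:"succeeded"`
	Failed     int             `json:"failed"`
	Schemas    []*SchemaReport `json:"schemas"`
}

// SchemaReport sketch: per-schema breakdown.
type SchemaReport struct {
	Name   string         `json:"name"`
	Tables []*TableReport `json:"tables"`
}

// TableReport sketch: per-table statement counts and errors.
type TableReport struct {
	Name       string   `json:"name"`
	Statements int      `json:"statements"`
	Errors     []string `json:"errors,omitempty"`
}
```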
122743ee43 feat(writer): 🎉 Improve primary key handling by checking for explicit constraints and columns
2026-01-28 22:08:27 +02:00
91b6046b9b feat(writer): 🎉 Enhance PostgreSQL writer, fix bugs found using origin
2026-01-28 21:59:25 +02:00
6f55505444 feat(writer): 🎉 Enhance model name generation and formatting
* Update model name generation to include schema name.
* Add gofmt execution after writing output files.
* Refactor relationship field naming to include schema.
* Update tests to reflect changes in model names and relationships.
2026-01-10 18:28:41 +02:00
e0e7b64c69 feat(writer): 🎉 Resolve field name collisions with methods
* Implement field name collision resolution in model generation.
* Add tests to verify renaming of fields that conflict with generated method names.
* Ensure primary key type safety in UpdateID method.
2026-01-10 17:54:33 +02:00
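A minimal sketch of the collision check; the renaming suffix is an assumed convention:

```go
// resolveFieldName sketch: rename a generated struct field when it collides
// with a generated method name (e.g. UpdateID). Loops until the name is free.
func resolveFieldName(name string, methodNames map[string]bool) string {
	for methodNames[name] {
		name += "Field"
	}
	return name
}
```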
4181cb1fbd feat(writer): 🎉 Enhance relationship field naming and uniqueness
* Update relationship field naming conventions for has-one and has-many relationships.
* Implement logic to ensure unique field names by tracking used names.
* Add tests to verify new naming conventions and uniqueness constraints.
2026-01-10 17:45:13 +02:00
120ffc6a5a feat(writer): 🎉 Update relationship field naming convention
* Refactor generateRelationshipFieldName to use foreign key columns for unique naming.
* Add test for multiple references to the same table to ensure unique relationship field names.
* Update existing tests to reflect new naming convention.
2026-01-10 13:49:54 +02:00
b20ad35485 feat(writer): 🎉 Add sanitization for struct tag values
* Implement SanitizeStructTagValue function to clean identifiers for struct tags.
* Update model data generation to use sanitized column names.
* Ensure safe handling of backticks in column names and types across writers.
2026-01-10 13:42:25 +02:00
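SanitizeStructTagValue is named in the commit; the body below is an illustrative sketch. Backticks would terminate the surrounding Go struct tag, so they must be stripped:

```go
import "strings"

// SanitizeStructTagValue sketch: remove characters that would break
// a backtick-delimited Go struct tag.
func SanitizeStructTagValue(s string) string {
	s = strings.ReplaceAll(s, "`", "")
	s = strings.ReplaceAll(s, `"`, "")
	return strings.TrimSpace(s)
}
```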
f258f8baeb feat(writer): 🎉 Add filename sanitization for DBML identifiers
* Implement SanitizeFilename function to clean identifiers
* Remove quotes, comments, and invalid characters from filenames
* Update filename generation in writers to use sanitized names
2026-01-10 13:32:33 +02:00
6388daba56 feat(reader): 🎉 Add support for multi-file DBML loading
* Implement directory reading for DBML files.
* Merge schemas and tables from multiple files.
* Add tests for multi-file loading and merging behavior.
* Enhance file discovery and sorting logic.
2026-01-10 13:17:30 +02:00
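A sketch of deterministic file discovery for the multi-file loading described above (helper name assumed):

```go
import (
	"os"
	"path/filepath"
	"sort"
	"strings"
)

// findDBMLFiles sketch: list .dbml files in a directory in sorted order
// so multi-file merges are deterministic.
func findDBMLFiles(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var files []string
	for _, e := range entries {
		if !e.IsDir() && strings.HasSuffix(e.Name(), ".dbml") {
			files = append(files, filepath.Join(dir, e.Name()))
		}
	}
	sort.Strings(files)
	return files, nil
}
```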
f6c3f2b460 feat(bun): 🎉 Enhance nullability handling in column parsing
* Introduce explicit nullability markers in column tags.
* Update logic to infer nullability based on Go types when no markers are present.
* Ensure correct tags are generated for nullable and non-nullable fields.
2026-01-04 22:11:44 +02:00
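A sketch of the fallback inference (the exact marker handling is not shown in the commit):

```go
import "strings"

// inferNullable sketch: when no explicit marker is present in the tag,
// pointers and sql.Null* wrapper types read as nullable.
func inferNullable(goType string) bool {
	return strings.HasPrefix(goType, "*") || strings.HasPrefix(goType, "sql.Null")
}
```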
156e655571 chore(ci): 🎉 Install PostgreSQL client for integration tests
2026-01-04 22:04:20 +02:00
b57e1ba304 feat(cmd): 🎉 Add split command for schema extraction
- Introduce 'split' command to extract selected tables and schemas.
- Supports various input and output formats.
- Allows filtering of schemas and tables during extraction.
2026-01-04 22:01:29 +02:00
19fba62f1b feat(ui): 🎉 Add GUID field to column, database, schema, and table editors
2026-01-04 20:00:18 +02:00
b4ff4334cc feat(models): 🎉 Add GUID field to various models
* Introduced GUID field to Database, Domain, DomainTable, Schema, Table, View, Sequence, Column, Index, Relationship, Constraint, Enum, and Script models.
* Updated initialization functions to assign new GUIDs using uuid package.
* Enhanced DCTX reader and writer to utilize GUIDs from models where available.
2026-01-04 19:53:17 +02:00
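A minimal sketch of GUID assignment at initialization; the commit names "the uuid package", and github.com/google/uuid is assumed here:

```go
import "github.com/google/uuid"

// InitTable sketch: assign a fresh GUID when a model is created.
func InitTable(name string) *Table {
	return &Table{
		Name: name,
		GUID: uuid.New().String(),
	}
}
```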
5d9b00c8f2 feat(ui): 🎉 Add import and merge database feature
- Introduce a new screen for importing and merging database schemas.
- Implement merge logic to combine schemas, tables, columns, and other objects.
- Add options to skip specific object types during the merge process.
- Update main menu to include the new import and merge option.
2026-01-04 19:31:28 +02:00
debf351c48 fix(ui): 🐛 Simplify keyboard shortcut handling in load/save screens
2026-01-04 18:41:59 +02:00
d87d657275 feat(ui): 🎨 Add user interface documentation and screenshots
- Document interactive terminal-based UI features
- Include screenshots for main screen, table view, and column editing
2026-01-04 18:39:13 +02:00
1795eb64d1 feat(ui): 🎨 Implement schema and table management screens
* Add schema management screen with list and editor
* Implement table management screen with list and editor
* Create data operations for schema and table management
* Define UI rules and guidelines for consistency
* Ensure circular tab navigation and keyboard shortcuts
* Add forms for creating and editing schemas and tables
* Implement confirmation dialogs for destructive actions
2026-01-04 18:29:29 +02:00
355f0f918f chore(deps): 🚀 update module dependencies
* Add new dependencies for terminal handling and color management.
* Include updates for tcell, go-colorful, tview, and uniseg.
* Update golang.org/x/sys and golang.org/x/term for improved compatibility.
* Ensure all dependencies are explicitly listed with their versions.
2026-01-04 18:29:11 +02:00
5d3c86119e feat(domains): add domain support for DrawDB integration
- Introduce Domain and DomainTable models for logical grouping of tables.
- Implement export and import functionality for domains in DrawDB format.
- Update template execution modes to include domain processing.
- Enhance documentation for domain features and usage.
2026-01-04 15:49:47 +02:00
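A sketch of the grouping models these bullets describe; field names are assumptions:

```go
// Domain sketch: a logical grouping of tables for DrawDB export.
type Domain struct {
	Name        string
	Description string
	Tables      []*DomainTable
}

// DomainTable sketch: links a table (by schema and name) to a domain.
type DomainTable struct {
	SchemaName string
	TableName  string
}
```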
653 changed files with 289539 additions and 353 deletions


@@ -4,10 +4,7 @@
     "description": "Database Relations Specification Tool for Go",
     "language": "go"
   },
-  "agent": {
-    "preferred": "Explore",
-    "description": "Use Explore agent for fast codebase navigation and Go project exploration"
-  },
   "codeStyle": {
     "useGofmt": true,
     "lineLength": 100,


@@ -46,6 +46,11 @@ jobs:
- name: Download dependencies - name: Download dependencies
run: go mod download run: go mod download
- name: Install PostgreSQL client
run: |
sudo apt-get update
sudo apt-get install -y postgresql-client
- name: Initialize test database - name: Initialize test database
env: env:
PGPASSWORD: relspec_test_password PGPASSWORD: relspec_test_password

.gitignore

@@ -47,3 +47,4 @@ dist/
 build/
 bin/
 tests/integration/failed_statements_example.txt
+test_output.log

@@ -85,6 +85,29 @@ RelSpec includes a powerful schema validation and linting tool:
 ## Use of AI

 [Rules and use of AI](./AI_USE.md)

+## User Interface
+
+RelSpec provides an interactive terminal-based user interface for managing and editing database schemas. The UI allows you to:
+
+- **Browse Databases** - Navigate through your database structure with an intuitive menu system
+- **Edit Schemas** - Create, modify, and organize database schemas
+- **Manage Tables** - Add, update, or delete tables with full control over structure
+- **Configure Columns** - Define column properties, data types, constraints, and relationships
+- **Interactive Editing** - Real-time validation and feedback as you make changes
+
+The interface supports multiple input formats, making it easy to load, edit, and save your database definitions in various formats.
+
+<p align="center" width="100%">
+  <img src="./assets/image/screenshots/main_screen.jpg">
+</p>
+<p align="center" width="100%">
+  <img src="./assets/image/screenshots/table_view.jpg">
+</p>
+<p align="center" width="100%">
+  <img src="./assets/image/screenshots/edit_column.jpg">
+</p>
+
 ## Installation

 ```bash
@@ -95,6 +118,55 @@ go install -v git.warky.dev/wdevs/relspecgo/cmd/relspec@latest
 ## Usage

+### Interactive Schema Editor
+
+```bash
+# Launch interactive editor with a DBML schema
+relspec edit --from dbml --from-path schema.dbml --to dbml --to-path schema.dbml
+
+# Edit PostgreSQL database in place
+relspec edit --from pgsql --from-conn "postgres://user:pass@localhost/mydb" \
+  --to pgsql --to-conn "postgres://user:pass@localhost/mydb"
+
+# Edit JSON schema and save as GORM models
+relspec edit --from json --from-path db.json --to gorm --to-path models/
+```
+
+The `edit` command launches an interactive terminal user interface where you can:
+
+- Browse and navigate your database structure
+- Create, modify, and delete schemas, tables, and columns
+- Configure column properties, constraints, and relationships
+- Save changes to various formats
+- Import and merge schemas from other databases
+
+### Schema Merging
+
+```bash
+# Merge two JSON schemas (additive merge - adds missing items only)
+relspec merge --target json --target-path base.json \
+  --source json --source-path additions.json \
+  --output json --output-path merged.json
+
+# Merge PostgreSQL database into JSON, skipping specific tables
+relspec merge --target json --target-path current.json \
+  --source pgsql --source-conn "postgres://user:pass@localhost/source_db" \
+  --output json --output-path updated.json \
+  --skip-tables "audit_log,temp_tables"
+
+# Cross-format merge (DBML + YAML → JSON)
+relspec merge --target dbml --target-path base.dbml \
+  --source yaml --source-path additions.yaml \
+  --output json --output-path result.json \
+  --skip-relations --skip-views
+```
+
+The `merge` command combines two database schemas additively:
+
+- Adds missing schemas, tables, columns, and other objects
+- Never modifies or deletes existing items (safe operation)
+- Supports selective merging with skip options (domains, relations, enums, views, sequences, specific tables)
+- Works across any combination of supported formats
+- Perfect for integrating multiple schema definitions or applying patches
+
 ### Schema Conversion

 ```bash
TODO.md

@@ -22,6 +22,17 @@
 - [✔️] GraphQL schema generation

+## UI
+
+- [✔️] Basic UI (I went with tview)
+- [✔️] Save / Load Database
+- [✔️] Schemas / Domains / Tables
+- [ ] Add Relations
+- [ ] Add Indexes
+- [ ] Add Views
+- [ ] Add Sequences
+- [ ] Add Scripts
+- [ ] Domain / Table Assignment
+
 ## Documentation

 - [ ] API documentation (godoc)
 - [ ] Usage examples for each format combination

(Image diffs omitted: three new binary screenshot files were added, sized 42 KiB, 50 KiB, and 67 KiB.)

334
cmd/relspec/edit.go Normal file
View File

@@ -0,0 +1,334 @@
package main
import (
"fmt"
"os"
"strings"
"github.com/spf13/cobra"
"git.warky.dev/wdevs/relspecgo/pkg/models"
"git.warky.dev/wdevs/relspecgo/pkg/readers"
"git.warky.dev/wdevs/relspecgo/pkg/readers/bun"
"git.warky.dev/wdevs/relspecgo/pkg/readers/dbml"
"git.warky.dev/wdevs/relspecgo/pkg/readers/dctx"
"git.warky.dev/wdevs/relspecgo/pkg/readers/drawdb"
"git.warky.dev/wdevs/relspecgo/pkg/readers/drizzle"
"git.warky.dev/wdevs/relspecgo/pkg/readers/gorm"
"git.warky.dev/wdevs/relspecgo/pkg/readers/graphql"
"git.warky.dev/wdevs/relspecgo/pkg/readers/json"
"git.warky.dev/wdevs/relspecgo/pkg/readers/pgsql"
"git.warky.dev/wdevs/relspecgo/pkg/readers/prisma"
"git.warky.dev/wdevs/relspecgo/pkg/readers/typeorm"
"git.warky.dev/wdevs/relspecgo/pkg/readers/yaml"
"git.warky.dev/wdevs/relspecgo/pkg/ui"
"git.warky.dev/wdevs/relspecgo/pkg/writers"
wbun "git.warky.dev/wdevs/relspecgo/pkg/writers/bun"
wdbml "git.warky.dev/wdevs/relspecgo/pkg/writers/dbml"
wdctx "git.warky.dev/wdevs/relspecgo/pkg/writers/dctx"
wdrawdb "git.warky.dev/wdevs/relspecgo/pkg/writers/drawdb"
wdrizzle "git.warky.dev/wdevs/relspecgo/pkg/writers/drizzle"
wgorm "git.warky.dev/wdevs/relspecgo/pkg/writers/gorm"
wgraphql "git.warky.dev/wdevs/relspecgo/pkg/writers/graphql"
wjson "git.warky.dev/wdevs/relspecgo/pkg/writers/json"
wpgsql "git.warky.dev/wdevs/relspecgo/pkg/writers/pgsql"
wprisma "git.warky.dev/wdevs/relspecgo/pkg/writers/prisma"
wtypeorm "git.warky.dev/wdevs/relspecgo/pkg/writers/typeorm"
wyaml "git.warky.dev/wdevs/relspecgo/pkg/writers/yaml"
)
var (
editSourceType string
editSourcePath string
editSourceConn string
editTargetType string
editTargetPath string
editSchemaFilter string
)
var editCmd = &cobra.Command{
Use: "edit",
Short: "Edit database schema interactively with TUI",
Long: `Edit database schemas from various formats using an interactive terminal UI.
Allows you to:
- List and navigate schemas and tables
- Create, edit, and delete schemas
- Create, edit, and delete tables
- Add, edit, and delete columns
- Set table and column properties
- Add constraints, indexes, and relationships
Supports reading from and writing to all supported formats:
Input formats:
- dbml: DBML schema files
- dctx: DCTX schema files
- drawdb: DrawDB JSON files
- graphql: GraphQL schema files (.graphql, SDL)
- json: JSON database schema
- yaml: YAML database schema
- gorm: GORM model files (Go, file or directory)
- bun: Bun model files (Go, file or directory)
- drizzle: Drizzle ORM schema files (TypeScript, file or directory)
- prisma: Prisma schema files (.prisma)
- typeorm: TypeORM entity files (TypeScript)
- pgsql: PostgreSQL database (live connection)
Output formats:
- dbml: DBML schema files
- dctx: DCTX schema files
- drawdb: DrawDB JSON files
- graphql: GraphQL schema files (.graphql, SDL)
- json: JSON database schema
- yaml: YAML database schema
- gorm: GORM model files (Go)
- bun: Bun model files (Go)
- drizzle: Drizzle ORM schema files (TypeScript)
- prisma: Prisma schema files (.prisma)
- typeorm: TypeORM entity files (TypeScript)
- pgsql: PostgreSQL SQL schema
PostgreSQL Connection String Examples:
postgres://username:password@localhost:5432/database_name
postgres://username:password@localhost/database_name
postgresql://user:pass@host:5432/dbname?sslmode=disable
postgresql://user:pass@host/dbname?sslmode=require
host=localhost port=5432 user=username password=pass dbname=mydb sslmode=disable
Examples:
# Edit a DBML schema file
relspec edit --from dbml --from-path schema.dbml --to dbml --to-path schema.dbml
# Edit a PostgreSQL database
relspec edit --from pgsql --from-conn "postgres://user:pass@localhost/mydb" \
--to pgsql --to-conn "postgres://user:pass@localhost/mydb"
# Edit JSON schema and output to GORM
relspec edit --from json --from-path db.json --to gorm --to-path models/
# Edit GORM models in place
relspec edit --from gorm --from-path ./models --to gorm --to-path ./models`,
RunE: runEdit,
}
func init() {
editCmd.Flags().StringVar(&editSourceType, "from", "", "Source format (dbml, dctx, drawdb, graphql, json, yaml, gorm, bun, drizzle, prisma, typeorm, pgsql)")
editCmd.Flags().StringVar(&editSourcePath, "from-path", "", "Source file path (for file-based formats)")
editCmd.Flags().StringVar(&editSourceConn, "from-conn", "", "Source connection string (for database formats)")
editCmd.Flags().StringVar(&editTargetType, "to", "", "Target format (dbml, dctx, drawdb, graphql, json, yaml, gorm, bun, drizzle, prisma, typeorm, pgsql)")
editCmd.Flags().StringVar(&editTargetPath, "to-path", "", "Target file path (for file-based formats)")
editCmd.Flags().StringVar(&editSchemaFilter, "schema", "", "Filter to a specific schema by name")
// Flags are now optional - if not provided, UI will prompt for load/save options
}
func runEdit(cmd *cobra.Command, args []string) error {
fmt.Fprintf(os.Stderr, "\n=== RelSpec Schema Editor ===\n")
fmt.Fprintf(os.Stderr, "Started at: %s\n\n", getCurrentTimestamp())
var db *models.Database
var loadConfig *ui.LoadConfig
var saveConfig *ui.SaveConfig
var err error
// Check if source parameters are provided
if editSourceType != "" {
// Read source database
fmt.Fprintf(os.Stderr, "[1/3] Reading source schema...\n")
fmt.Fprintf(os.Stderr, " Format: %s\n", editSourceType)
if editSourcePath != "" {
fmt.Fprintf(os.Stderr, " Path: %s\n", editSourcePath)
}
if editSourceConn != "" {
fmt.Fprintf(os.Stderr, " Conn: %s\n", maskPassword(editSourceConn))
}
db, err = readDatabaseForEdit(editSourceType, editSourcePath, editSourceConn, "Source")
if err != nil {
return fmt.Errorf("failed to read source: %w", err)
}
// Apply schema filter if specified
if editSchemaFilter != "" {
db = filterDatabaseBySchema(db, editSchemaFilter)
}
fmt.Fprintf(os.Stderr, " ✓ Successfully read database '%s'\n", db.Name)
fmt.Fprintf(os.Stderr, " Found: %d schema(s)\n", len(db.Schemas))
totalTables := 0
for _, schema := range db.Schemas {
totalTables += len(schema.Tables)
}
fmt.Fprintf(os.Stderr, " Found: %d table(s)\n\n", totalTables)
// Store load config
loadConfig = &ui.LoadConfig{
SourceType: editSourceType,
FilePath: editSourcePath,
ConnString: editSourceConn,
}
} else {
// No source parameters provided, UI will show load screen
fmt.Fprintf(os.Stderr, "[1/2] No source specified, editor will prompt for database\n\n")
}
// Store save config if target parameters are provided
if editTargetType != "" {
saveConfig = &ui.SaveConfig{
TargetType: editTargetType,
FilePath: editTargetPath,
}
}
// Launch interactive TUI
if editSourceType != "" {
fmt.Fprintf(os.Stderr, "[2/3] Launching interactive editor...\n")
} else {
fmt.Fprintf(os.Stderr, "[2/2] Launching interactive editor...\n")
}
fmt.Fprintf(os.Stderr, " Use arrow keys and shortcuts to navigate\n")
fmt.Fprintf(os.Stderr, " Press ? for help\n\n")
editor := ui.NewSchemaEditorWithConfigs(db, loadConfig, saveConfig)
if err := editor.Run(); err != nil {
return fmt.Errorf("editor failed: %w", err)
}
// Only write to output if target parameters were provided and database was loaded from command line
if editTargetType != "" && editSourceType != "" && db != nil {
fmt.Fprintf(os.Stderr, "[3/3] Writing changes to output...\n")
fmt.Fprintf(os.Stderr, " Format: %s\n", editTargetType)
if editTargetPath != "" {
fmt.Fprintf(os.Stderr, " Path: %s\n", editTargetPath)
}
// Get the potentially modified database from the editor
err = writeDatabaseForEdit(editTargetType, editTargetPath, "", editor.GetDatabase(), "Target")
if err != nil {
return fmt.Errorf("failed to write output: %w", err)
}
fmt.Fprintf(os.Stderr, " ✓ Successfully written database\n")
}
fmt.Fprintf(os.Stderr, "\n=== Edit complete ===\n")
return nil
}
func readDatabaseForEdit(dbType, filePath, connString, label string) (*models.Database, error) {
var reader readers.Reader
switch strings.ToLower(dbType) {
case "dbml":
if filePath == "" {
return nil, fmt.Errorf("%s: file path is required for DBML format", label)
}
reader = dbml.NewReader(&readers.ReaderOptions{FilePath: filePath})
case "dctx":
if filePath == "" {
return nil, fmt.Errorf("%s: file path is required for DCTX format", label)
}
reader = dctx.NewReader(&readers.ReaderOptions{FilePath: filePath})
case "drawdb":
if filePath == "" {
return nil, fmt.Errorf("%s: file path is required for DrawDB format", label)
}
reader = drawdb.NewReader(&readers.ReaderOptions{FilePath: filePath})
case "graphql":
if filePath == "" {
return nil, fmt.Errorf("%s: file path is required for GraphQL format", label)
}
reader = graphql.NewReader(&readers.ReaderOptions{FilePath: filePath})
case "json":
if filePath == "" {
return nil, fmt.Errorf("%s: file path is required for JSON format", label)
}
reader = json.NewReader(&readers.ReaderOptions{FilePath: filePath})
case "yaml":
if filePath == "" {
return nil, fmt.Errorf("%s: file path is required for YAML format", label)
}
reader = yaml.NewReader(&readers.ReaderOptions{FilePath: filePath})
case "gorm":
if filePath == "" {
return nil, fmt.Errorf("%s: file path is required for GORM format", label)
}
reader = gorm.NewReader(&readers.ReaderOptions{FilePath: filePath})
case "bun":
if filePath == "" {
return nil, fmt.Errorf("%s: file path is required for Bun format", label)
}
reader = bun.NewReader(&readers.ReaderOptions{FilePath: filePath})
case "drizzle":
if filePath == "" {
return nil, fmt.Errorf("%s: file path is required for Drizzle format", label)
}
reader = drizzle.NewReader(&readers.ReaderOptions{FilePath: filePath})
case "prisma":
if filePath == "" {
return nil, fmt.Errorf("%s: file path is required for Prisma format", label)
}
reader = prisma.NewReader(&readers.ReaderOptions{FilePath: filePath})
case "typeorm":
if filePath == "" {
return nil, fmt.Errorf("%s: file path is required for TypeORM format", label)
}
reader = typeorm.NewReader(&readers.ReaderOptions{FilePath: filePath})
case "pgsql":
if connString == "" {
return nil, fmt.Errorf("%s: connection string is required for PostgreSQL format", label)
}
reader = pgsql.NewReader(&readers.ReaderOptions{ConnectionString: connString})
default:
return nil, fmt.Errorf("%s: unsupported format: %s", label, dbType)
}
db, err := reader.ReadDatabase()
if err != nil {
return nil, fmt.Errorf("%s: %w", label, err)
}
return db, nil
}
func writeDatabaseForEdit(dbType, filePath, connString string, db *models.Database, label string) error {
var writer writers.Writer
switch strings.ToLower(dbType) {
case "dbml":
writer = wdbml.NewWriter(&writers.WriterOptions{OutputPath: filePath})
case "dctx":
writer = wdctx.NewWriter(&writers.WriterOptions{OutputPath: filePath})
case "drawdb":
writer = wdrawdb.NewWriter(&writers.WriterOptions{OutputPath: filePath})
case "graphql":
writer = wgraphql.NewWriter(&writers.WriterOptions{OutputPath: filePath})
case "json":
writer = wjson.NewWriter(&writers.WriterOptions{OutputPath: filePath})
case "yaml":
writer = wyaml.NewWriter(&writers.WriterOptions{OutputPath: filePath})
case "gorm":
writer = wgorm.NewWriter(&writers.WriterOptions{OutputPath: filePath})
case "bun":
writer = wbun.NewWriter(&writers.WriterOptions{OutputPath: filePath})
case "drizzle":
writer = wdrizzle.NewWriter(&writers.WriterOptions{OutputPath: filePath})
case "prisma":
writer = wprisma.NewWriter(&writers.WriterOptions{OutputPath: filePath})
case "typeorm":
writer = wtypeorm.NewWriter(&writers.WriterOptions{OutputPath: filePath})
case "pgsql":
writer = wpgsql.NewWriter(&writers.WriterOptions{OutputPath: filePath})
default:
return fmt.Errorf("%s: unsupported format: %s", label, dbType)
}
err := writer.WriteDatabase(db)
if err != nil {
return fmt.Errorf("%s: %w", label, err)
}
return nil
}

cmd/relspec/merge.go (new file, 451 lines)

@@ -0,0 +1,451 @@
package main
import (
"fmt"
"os"
"path/filepath"
"strings"
"github.com/spf13/cobra"
"git.warky.dev/wdevs/relspecgo/pkg/merge"
"git.warky.dev/wdevs/relspecgo/pkg/models"
"git.warky.dev/wdevs/relspecgo/pkg/readers"
"git.warky.dev/wdevs/relspecgo/pkg/readers/bun"
"git.warky.dev/wdevs/relspecgo/pkg/readers/dbml"
"git.warky.dev/wdevs/relspecgo/pkg/readers/dctx"
"git.warky.dev/wdevs/relspecgo/pkg/readers/drawdb"
"git.warky.dev/wdevs/relspecgo/pkg/readers/drizzle"
"git.warky.dev/wdevs/relspecgo/pkg/readers/gorm"
"git.warky.dev/wdevs/relspecgo/pkg/readers/graphql"
"git.warky.dev/wdevs/relspecgo/pkg/readers/json"
"git.warky.dev/wdevs/relspecgo/pkg/readers/pgsql"
"git.warky.dev/wdevs/relspecgo/pkg/readers/prisma"
"git.warky.dev/wdevs/relspecgo/pkg/readers/typeorm"
"git.warky.dev/wdevs/relspecgo/pkg/readers/yaml"
"git.warky.dev/wdevs/relspecgo/pkg/writers"
wbun "git.warky.dev/wdevs/relspecgo/pkg/writers/bun"
wdbml "git.warky.dev/wdevs/relspecgo/pkg/writers/dbml"
wdctx "git.warky.dev/wdevs/relspecgo/pkg/writers/dctx"
wdrawdb "git.warky.dev/wdevs/relspecgo/pkg/writers/drawdb"
wdrizzle "git.warky.dev/wdevs/relspecgo/pkg/writers/drizzle"
wgorm "git.warky.dev/wdevs/relspecgo/pkg/writers/gorm"
wgraphql "git.warky.dev/wdevs/relspecgo/pkg/writers/graphql"
wjson "git.warky.dev/wdevs/relspecgo/pkg/writers/json"
wpgsql "git.warky.dev/wdevs/relspecgo/pkg/writers/pgsql"
wprisma "git.warky.dev/wdevs/relspecgo/pkg/writers/prisma"
wtypeorm "git.warky.dev/wdevs/relspecgo/pkg/writers/typeorm"
wyaml "git.warky.dev/wdevs/relspecgo/pkg/writers/yaml"
)
var (
mergeTargetType string
mergeTargetPath string
mergeTargetConn string
mergeSourceType string
mergeSourcePath string
mergeSourceConn string
mergeOutputType string
mergeOutputPath string
mergeOutputConn string
mergeSkipDomains bool
mergeSkipRelations bool
mergeSkipEnums bool
mergeSkipViews bool
mergeSkipSequences bool
mergeSkipTables string // Comma-separated table names to skip
mergeVerbose bool
mergeReportPath string // Path to write merge report
)
var mergeCmd = &cobra.Command{
Use: "merge",
Short: "Merge database schemas (additive only - adds missing items)",
Long: `Merge one database schema into another. Performs additive merging only:
adds missing schemas, tables, columns, and other objects without modifying
or deleting existing items.
The target database is loaded first, then the source database is merged into it.
The result can be saved to a new format or updated in place.
Examples:
# Merge two JSON schemas
relspec merge --target json --target-path base.json \
--source json --source-path additional.json \
--output json --output-path merged.json
# Merge from PostgreSQL into JSON
relspec merge --target json --target-path mydb.json \
--source pgsql --source-conn "postgres://user:pass@localhost/source_db" \
--output json --output-path combined.json
# Merge and execute on PostgreSQL database with report
relspec merge --target json --target-path base.json \
--source json --source-path additional.json \
--output pgsql --output-conn "postgres://user:pass@localhost/target_db" \
--merge-report merge-report.json
# Merge DBML and YAML, skip relations
relspec merge --target dbml --target-path schema.dbml \
--source yaml --source-path tables.yaml \
--output dbml --output-path merged.dbml \
--skip-relations
# Merge and save back to target format
relspec merge --target json --target-path base.json \
--source json --source-path patch.json \
--output json --output-path base.json`,
RunE: runMerge,
}
func init() {
// Target database flags
mergeCmd.Flags().StringVar(&mergeTargetType, "target", "", "Target format (required): dbml, dctx, drawdb, graphql, json, yaml, gorm, bun, drizzle, prisma, typeorm, pgsql")
mergeCmd.Flags().StringVar(&mergeTargetPath, "target-path", "", "Target file path (required for file-based formats)")
mergeCmd.Flags().StringVar(&mergeTargetConn, "target-conn", "", "Target connection string (required for pgsql)")
// Source database flags
mergeCmd.Flags().StringVar(&mergeSourceType, "source", "", "Source format (required): dbml, dctx, drawdb, graphql, json, yaml, gorm, bun, drizzle, prisma, typeorm, pgsql")
mergeCmd.Flags().StringVar(&mergeSourcePath, "source-path", "", "Source file path (required for file-based formats)")
mergeCmd.Flags().StringVar(&mergeSourceConn, "source-conn", "", "Source connection string (required for pgsql)")
// Output flags
mergeCmd.Flags().StringVar(&mergeOutputType, "output", "", "Output format (required): dbml, dctx, drawdb, graphql, json, yaml, gorm, bun, drizzle, prisma, typeorm, pgsql")
mergeCmd.Flags().StringVar(&mergeOutputPath, "output-path", "", "Output file path (required for file-based formats)")
mergeCmd.Flags().StringVar(&mergeOutputConn, "output-conn", "", "Output connection string (for pgsql)")
// Merge options
mergeCmd.Flags().BoolVar(&mergeSkipDomains, "skip-domains", false, "Skip domains during merge")
mergeCmd.Flags().BoolVar(&mergeSkipRelations, "skip-relations", false, "Skip relations during merge")
mergeCmd.Flags().BoolVar(&mergeSkipEnums, "skip-enums", false, "Skip enums during merge")
mergeCmd.Flags().BoolVar(&mergeSkipViews, "skip-views", false, "Skip views during merge")
mergeCmd.Flags().BoolVar(&mergeSkipSequences, "skip-sequences", false, "Skip sequences during merge")
mergeCmd.Flags().StringVar(&mergeSkipTables, "skip-tables", "", "Comma-separated list of table names to skip during merge")
mergeCmd.Flags().BoolVar(&mergeVerbose, "verbose", false, "Show verbose output")
mergeCmd.Flags().StringVar(&mergeReportPath, "merge-report", "", "Path to write merge report (JSON format)")
}
func runMerge(cmd *cobra.Command, args []string) error {
fmt.Fprintf(os.Stderr, "\n=== RelSpec Merge ===\n")
fmt.Fprintf(os.Stderr, "Started at: %s\n\n", getCurrentTimestamp())
// Validate required flags
if mergeTargetType == "" {
return fmt.Errorf("--target format is required")
}
if mergeSourceType == "" {
return fmt.Errorf("--source format is required")
}
if mergeOutputType == "" {
return fmt.Errorf("--output format is required")
}
// Validate and expand file paths
if mergeTargetType != "pgsql" {
if mergeTargetPath == "" {
return fmt.Errorf("--target-path is required for %s format", mergeTargetType)
}
mergeTargetPath = expandPath(mergeTargetPath)
} else if mergeTargetConn == "" {
return fmt.Errorf("--target-conn is required for pgsql format")
}
if mergeSourceType != "pgsql" {
if mergeSourcePath == "" {
return fmt.Errorf("--source-path is required for %s format", mergeSourceType)
}
mergeSourcePath = expandPath(mergeSourcePath)
} else if mergeSourceConn == "" {
return fmt.Errorf("--source-conn is required for pgsql format")
}
if mergeOutputType != "pgsql" {
if mergeOutputPath == "" {
return fmt.Errorf("--output-path is required for %s format", mergeOutputType)
}
mergeOutputPath = expandPath(mergeOutputPath)
}
// Step 1: Read target database
fmt.Fprintf(os.Stderr, "[1/3] Reading target database...\n")
fmt.Fprintf(os.Stderr, " Format: %s\n", mergeTargetType)
if mergeTargetPath != "" {
fmt.Fprintf(os.Stderr, " Path: %s\n", mergeTargetPath)
}
if mergeTargetConn != "" {
fmt.Fprintf(os.Stderr, " Conn: %s\n", maskPassword(mergeTargetConn))
}
targetDB, err := readDatabaseForMerge(mergeTargetType, mergeTargetPath, mergeTargetConn, "Target")
if err != nil {
return fmt.Errorf("failed to read target database: %w", err)
}
fmt.Fprintf(os.Stderr, " ✓ Successfully read target database '%s'\n", targetDB.Name)
printDatabaseStats(targetDB)
// Step 2: Read source database
fmt.Fprintf(os.Stderr, "\n[2/3] Reading source database...\n")
fmt.Fprintf(os.Stderr, " Format: %s\n", mergeSourceType)
if mergeSourcePath != "" {
fmt.Fprintf(os.Stderr, " Path: %s\n", mergeSourcePath)
}
if mergeSourceConn != "" {
fmt.Fprintf(os.Stderr, " Conn: %s\n", maskPassword(mergeSourceConn))
}
sourceDB, err := readDatabaseForMerge(mergeSourceType, mergeSourcePath, mergeSourceConn, "Source")
if err != nil {
return fmt.Errorf("failed to read source database: %w", err)
}
fmt.Fprintf(os.Stderr, " ✓ Successfully read source database '%s'\n", sourceDB.Name)
printDatabaseStats(sourceDB)
// Step 3: Merge databases
fmt.Fprintf(os.Stderr, "\n[3/3] Merging databases...\n")
opts := &merge.MergeOptions{
SkipDomains: mergeSkipDomains,
SkipRelations: mergeSkipRelations,
SkipEnums: mergeSkipEnums,
SkipViews: mergeSkipViews,
SkipSequences: mergeSkipSequences,
}
// Parse skip-tables flag
if mergeSkipTables != "" {
opts.SkipTableNames = parseSkipTables(mergeSkipTables)
if len(opts.SkipTableNames) > 0 {
fmt.Fprintf(os.Stderr, " Skipping tables: %s\n", mergeSkipTables)
}
}
result := merge.MergeDatabases(targetDB, sourceDB, opts)
// Update timestamp
targetDB.UpdateDate()
// Print merge summary
fmt.Fprintf(os.Stderr, " ✓ Merge complete\n\n")
fmt.Fprintf(os.Stderr, "%s\n", merge.GetMergeSummary(result))
// Step 4: Write output
fmt.Fprintf(os.Stderr, "\n[4/4] Writing output...\n")
fmt.Fprintf(os.Stderr, " Format: %s\n", mergeOutputType)
if mergeOutputPath != "" {
fmt.Fprintf(os.Stderr, " Path: %s\n", mergeOutputPath)
}
err = writeDatabaseForMerge(mergeOutputType, mergeOutputPath, mergeOutputConn, targetDB, "Output")
if err != nil {
return fmt.Errorf("failed to write output: %w", err)
}
fmt.Fprintf(os.Stderr, " ✓ Successfully written merged database\n")
fmt.Fprintf(os.Stderr, "\n=== Merge complete ===\n")
return nil
}
func readDatabaseForMerge(dbType, filePath, connString, label string) (*models.Database, error) {
var reader readers.Reader
switch strings.ToLower(dbType) {
case "dbml":
if filePath == "" {
return nil, fmt.Errorf("%s: file path is required for DBML format", label)
}
reader = dbml.NewReader(&readers.ReaderOptions{FilePath: filePath})
case "dctx":
if filePath == "" {
return nil, fmt.Errorf("%s: file path is required for DCTX format", label)
}
reader = dctx.NewReader(&readers.ReaderOptions{FilePath: filePath})
case "drawdb":
if filePath == "" {
return nil, fmt.Errorf("%s: file path is required for DrawDB format", label)
}
reader = drawdb.NewReader(&readers.ReaderOptions{FilePath: filePath})
case "graphql":
if filePath == "" {
return nil, fmt.Errorf("%s: file path is required for GraphQL format", label)
}
reader = graphql.NewReader(&readers.ReaderOptions{FilePath: filePath})
case "json":
if filePath == "" {
return nil, fmt.Errorf("%s: file path is required for JSON format", label)
}
reader = json.NewReader(&readers.ReaderOptions{FilePath: filePath})
case "yaml":
if filePath == "" {
return nil, fmt.Errorf("%s: file path is required for YAML format", label)
}
reader = yaml.NewReader(&readers.ReaderOptions{FilePath: filePath})
case "gorm":
if filePath == "" {
return nil, fmt.Errorf("%s: file path is required for GORM format", label)
}
reader = gorm.NewReader(&readers.ReaderOptions{FilePath: filePath})
case "bun":
if filePath == "" {
return nil, fmt.Errorf("%s: file path is required for Bun format", label)
}
reader = bun.NewReader(&readers.ReaderOptions{FilePath: filePath})
case "drizzle":
if filePath == "" {
return nil, fmt.Errorf("%s: file path is required for Drizzle format", label)
}
reader = drizzle.NewReader(&readers.ReaderOptions{FilePath: filePath})
case "prisma":
if filePath == "" {
return nil, fmt.Errorf("%s: file path is required for Prisma format", label)
}
reader = prisma.NewReader(&readers.ReaderOptions{FilePath: filePath})
case "typeorm":
if filePath == "" {
return nil, fmt.Errorf("%s: file path is required for TypeORM format", label)
}
reader = typeorm.NewReader(&readers.ReaderOptions{FilePath: filePath})
case "pgsql":
if connString == "" {
return nil, fmt.Errorf("%s: connection string is required for PostgreSQL format", label)
}
reader = pgsql.NewReader(&readers.ReaderOptions{ConnectionString: connString})
default:
return nil, fmt.Errorf("%s: unsupported format '%s'", label, dbType)
}
db, err := reader.ReadDatabase()
if err != nil {
return nil, err
}
return db, nil
}
func writeDatabaseForMerge(dbType, filePath, connString string, db *models.Database, label string) error {
var writer writers.Writer
switch strings.ToLower(dbType) {
case "dbml":
if filePath == "" {
return fmt.Errorf("%s: file path is required for DBML format", label)
}
writer = wdbml.NewWriter(&writers.WriterOptions{OutputPath: filePath})
case "dctx":
if filePath == "" {
return fmt.Errorf("%s: file path is required for DCTX format", label)
}
writer = wdctx.NewWriter(&writers.WriterOptions{OutputPath: filePath})
case "drawdb":
if filePath == "" {
return fmt.Errorf("%s: file path is required for DrawDB format", label)
}
writer = wdrawdb.NewWriter(&writers.WriterOptions{OutputPath: filePath})
case "graphql":
if filePath == "" {
return fmt.Errorf("%s: file path is required for GraphQL format", label)
}
writer = wgraphql.NewWriter(&writers.WriterOptions{OutputPath: filePath})
case "json":
if filePath == "" {
return fmt.Errorf("%s: file path is required for JSON format", label)
}
writer = wjson.NewWriter(&writers.WriterOptions{OutputPath: filePath})
case "yaml":
if filePath == "" {
return fmt.Errorf("%s: file path is required for YAML format", label)
}
writer = wyaml.NewWriter(&writers.WriterOptions{OutputPath: filePath})
case "gorm":
if filePath == "" {
return fmt.Errorf("%s: file path is required for GORM format", label)
}
writer = wgorm.NewWriter(&writers.WriterOptions{OutputPath: filePath})
case "bun":
if filePath == "" {
return fmt.Errorf("%s: file path is required for Bun format", label)
}
writer = wbun.NewWriter(&writers.WriterOptions{OutputPath: filePath})
case "drizzle":
if filePath == "" {
return fmt.Errorf("%s: file path is required for Drizzle format", label)
}
writer = wdrizzle.NewWriter(&writers.WriterOptions{OutputPath: filePath})
case "prisma":
if filePath == "" {
return fmt.Errorf("%s: file path is required for Prisma format", label)
}
writer = wprisma.NewWriter(&writers.WriterOptions{OutputPath: filePath})
case "typeorm":
if filePath == "" {
return fmt.Errorf("%s: file path is required for TypeORM format", label)
}
writer = wtypeorm.NewWriter(&writers.WriterOptions{OutputPath: filePath})
case "pgsql":
writerOpts := &writers.WriterOptions{OutputPath: filePath}
if connString != "" {
writerOpts.Metadata = map[string]interface{}{
"connection_string": connString,
}
// Add report path if merge report is enabled
if mergeReportPath != "" {
writerOpts.Metadata["report_path"] = mergeReportPath
}
}
writer = wpgsql.NewWriter(writerOpts)
default:
return fmt.Errorf("%s: unsupported format '%s'", label, dbType)
}
return writer.WriteDatabase(db)
}
func expandPath(path string) string {
if len(path) > 0 && path[0] == '~' {
home, err := os.UserHomeDir()
if err == nil {
return filepath.Join(home, path[1:])
}
}
return path
}
func printDatabaseStats(db *models.Database) {
totalTables := 0
totalColumns := 0
totalConstraints := 0
totalIndexes := 0
for _, schema := range db.Schemas {
totalTables += len(schema.Tables)
for _, table := range schema.Tables {
totalColumns += len(table.Columns)
totalConstraints += len(table.Constraints)
totalIndexes += len(table.Indexes)
}
}
fmt.Fprintf(os.Stderr, " Schemas: %d, Tables: %d, Columns: %d, Constraints: %d, Indexes: %d\n",
len(db.Schemas), totalTables, totalColumns, totalConstraints, totalIndexes)
}
func parseSkipTables(skipTablesStr string) map[string]bool {
skipTables := make(map[string]bool)
if skipTablesStr == "" {
return skipTables
}
// Split by comma and trim whitespace
parts := strings.Split(skipTablesStr, ",")
for _, part := range parts {
trimmed := strings.TrimSpace(part)
if trimmed != "" {
// Store in lowercase for case-insensitive matching
skipTables[strings.ToLower(trimmed)] = true
}
}
return skipTables
}


@@ -21,4 +21,7 @@ func init() {
 	rootCmd.AddCommand(inspectCmd)
 	rootCmd.AddCommand(scriptsCmd)
 	rootCmd.AddCommand(templCmd)
+	rootCmd.AddCommand(editCmd)
+	rootCmd.AddCommand(mergeCmd)
+	rootCmd.AddCommand(splitCmd)
 }


@@ -18,6 +18,7 @@ var (
scriptsConn string scriptsConn string
scriptsSchemaName string scriptsSchemaName string
scriptsDBName string scriptsDBName string
scriptsIgnoreErrors bool
) )
var scriptsCmd = &cobra.Command{ var scriptsCmd = &cobra.Command{
@@ -39,8 +40,8 @@ Example filenames (hyphen format):
1-002-create-posts.sql # Priority 1, Sequence 2 1-002-create-posts.sql # Priority 1, Sequence 2
10-10-create-newid.pgsql # Priority 10, Sequence 10 10-10-create-newid.pgsql # Priority 10, Sequence 10
Both formats can be mixed in the same directory. Both formats can be mixed in the same directory and subdirectories.
Scripts are executed in order: Priority (ascending), then Sequence (ascending).`, Scripts are executed in order: Priority (ascending), Sequence (ascending), Name (alphabetical).`,
} }
var scriptsListCmd = &cobra.Command{ var scriptsListCmd = &cobra.Command{
@@ -48,8 +49,8 @@ var scriptsListCmd = &cobra.Command{
Short: "List SQL scripts from a directory", Short: "List SQL scripts from a directory",
Long: `List SQL scripts from a directory and show their execution order. Long: `List SQL scripts from a directory and show their execution order.
The scripts are read from the specified directory and displayed in the order The scripts are read recursively from the specified directory and displayed in the order
they would be executed (Priority ascending, then Sequence ascending). they would be executed: Priority (ascending), then Sequence (ascending), then Name (alphabetical).
Example: Example:
relspec scripts list --dir ./migrations`, relspec scripts list --dir ./migrations`,
@@ -61,10 +62,10 @@ var scriptsExecuteCmd = &cobra.Command{
Short: "Execute SQL scripts against a database", Short: "Execute SQL scripts against a database",
Long: `Execute SQL scripts from a directory against a PostgreSQL database. Long: `Execute SQL scripts from a directory against a PostgreSQL database.
Scripts are executed in order: Priority (ascending), then Sequence (ascending). Scripts are executed in order: Priority (ascending), Sequence (ascending), Name (alphabetical).
Execution stops immediately on the first error. By default, execution stops immediately on the first error. Use --ignore-errors to continue execution.
The directory is scanned recursively for files matching the patterns: The directory is scanned recursively for all subdirectories and files matching the patterns:
{priority}_{sequence}_{name}.sql or .pgsql (underscore format) {priority}_{sequence}_{name}.sql or .pgsql (underscore format)
{priority}-{sequence}-{name}.sql or .pgsql (hyphen format) {priority}-{sequence}-{name}.sql or .pgsql (hyphen format)
@@ -75,7 +76,7 @@ PostgreSQL Connection String Examples:
postgresql://user:pass@host/dbname?sslmode=require postgresql://user:pass@host/dbname?sslmode=require
Examples: Examples:
# Execute migration scripts # Execute migration scripts from a directory (including subdirectories)
relspec scripts execute --dir ./migrations \ relspec scripts execute --dir ./migrations \
--conn "postgres://user:pass@localhost:5432/mydb" --conn "postgres://user:pass@localhost:5432/mydb"
@@ -86,7 +87,12 @@ Examples:
# Execute with SSL disabled # Execute with SSL disabled
relspec scripts execute --dir ./sql \ relspec scripts execute --dir ./sql \
--conn "postgres://user:pass@localhost/db?sslmode=disable"`, --conn "postgres://user:pass@localhost/db?sslmode=disable"
# Continue executing even if errors occur
relspec scripts execute --dir ./migrations \
--conn "postgres://localhost/mydb" \
--ignore-errors`,
RunE: runScriptsExecute, RunE: runScriptsExecute,
} }
@@ -105,6 +111,7 @@ func init() {
scriptsExecuteCmd.Flags().StringVar(&scriptsConn, "conn", "", "PostgreSQL connection string (required)") scriptsExecuteCmd.Flags().StringVar(&scriptsConn, "conn", "", "PostgreSQL connection string (required)")
scriptsExecuteCmd.Flags().StringVar(&scriptsSchemaName, "schema", "public", "Schema name (optional, default: public)") scriptsExecuteCmd.Flags().StringVar(&scriptsSchemaName, "schema", "public", "Schema name (optional, default: public)")
scriptsExecuteCmd.Flags().StringVar(&scriptsDBName, "database", "database", "Database name (optional, default: database)") scriptsExecuteCmd.Flags().StringVar(&scriptsDBName, "database", "database", "Database name (optional, default: database)")
scriptsExecuteCmd.Flags().BoolVar(&scriptsIgnoreErrors, "ignore-errors", false, "Continue executing scripts even if errors occur")
err = scriptsExecuteCmd.MarkFlagRequired("dir") err = scriptsExecuteCmd.MarkFlagRequired("dir")
if err != nil { if err != nil {
@@ -149,7 +156,7 @@ func runScriptsList(cmd *cobra.Command, args []string) error {
return nil return nil
} }
// Sort scripts by Priority then Sequence // Sort scripts by Priority, Sequence, then Name
sortedScripts := make([]*struct { sortedScripts := make([]*struct {
name string name string
priority int priority int
@@ -186,7 +193,10 @@ func runScriptsList(cmd *cobra.Command, args []string) error {
if sortedScripts[i].priority != sortedScripts[j].priority {
return sortedScripts[i].priority < sortedScripts[j].priority
}
if sortedScripts[i].sequence != sortedScripts[j].sequence {
return sortedScripts[i].sequence < sortedScripts[j].sequence
}
return sortedScripts[i].name < sortedScripts[j].name
})
fmt.Fprintf(os.Stderr, "Found %d script(s) in execution order:\n\n", len(sortedScripts))
@@ -242,22 +252,44 @@ func runScriptsExecute(cmd *cobra.Command, args []string) error {
fmt.Fprintf(os.Stderr, " ✓ Found %d script(s)\n\n", len(schema.Scripts)) fmt.Fprintf(os.Stderr, " ✓ Found %d script(s)\n\n", len(schema.Scripts))
// Step 2: Execute scripts // Step 2: Execute scripts
fmt.Fprintf(os.Stderr, "[2/2] Executing scripts in order (Priority → Sequence)...\n\n") fmt.Fprintf(os.Stderr, "[2/2] Executing scripts in order (Priority → Sequence → Name)...\n\n")
writer := sqlexec.NewWriter(&writers.WriterOptions{ writer := sqlexec.NewWriter(&writers.WriterOptions{
Metadata: map[string]any{ Metadata: map[string]any{
"connection_string": scriptsConn, "connection_string": scriptsConn,
"ignore_errors": scriptsIgnoreErrors,
},
})
if err := writer.WriteSchema(schema); err != nil {
fmt.Fprintf(os.Stderr, "\n")
-return fmt.Errorf("execution failed: %w", err)
+return fmt.Errorf("script execution failed: %w", err)
}
// Get execution results from writer metadata
totalCount := len(schema.Scripts)
successCount := totalCount
failedCount := 0
opts := writer.Options()
if total, exists := opts.Metadata["execution_total"].(int); exists {
totalCount = total
}
if success, exists := opts.Metadata["execution_success"].(int); exists {
successCount = success
}
if failed, exists := opts.Metadata["execution_failed"].(int); exists {
failedCount = failed
}
fmt.Fprintf(os.Stderr, "\n=== Execution Complete ===\n")
fmt.Fprintf(os.Stderr, "Completed at: %s\n", getCurrentTimestamp())
-fmt.Fprintf(os.Stderr, "Successfully executed %d script(s)\n\n", len(schema.Scripts))
+fmt.Fprintf(os.Stderr, "Total scripts: %d\n", totalCount)
fmt.Fprintf(os.Stderr, "Successful: %d\n", successCount)
if failedCount > 0 {
fmt.Fprintf(os.Stderr, "Failed: %d\n", failedCount)
}
fmt.Fprintf(os.Stderr, "\n")
return nil
}
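The flag itself only threads `ignore_errors` through writer metadata; the stop-or-continue decision happens inside the sqlexec writer. A minimal sketch of the loop that behavior implies (hypothetical helper names, not the actual writer internals):

```go
// Sketch only: a continue-on-error execution loop. execOne is a hypothetical
// helper standing in for the writer's per-script execution.
func runAll(scripts []string, ignoreErrors bool, execOne func(string) error) (succeeded, failed int, err error) {
	for _, s := range scripts {
		if e := execOne(s); e != nil {
			failed++
			if !ignoreErrors {
				return succeeded, failed, e // default: stop on the first error
			}
			continue // --ignore-errors: record the failure and keep going
		}
		succeeded++
	}
	return succeeded, failed, nil
}
```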

cmd/relspec/split.go Normal file

@@ -0,0 +1,318 @@
package main
import (
"fmt"
"os"
"strings"
"github.com/spf13/cobra"
"git.warky.dev/wdevs/relspecgo/pkg/models"
)
var (
splitSourceType string
splitSourcePath string
splitSourceConn string
splitTargetType string
splitTargetPath string
splitSchemas string
splitTables string
splitPackageName string
splitDatabaseName string
splitExcludeSchema string
splitExcludeTables string
)
var splitCmd = &cobra.Command{
Use: "split",
Short: "Split database schemas to extract selected tables into a separate database",
Long: `Extract selected schemas and tables from a database and write them to a separate output.
The split command allows you to:
- Select specific schemas to include in the output
- Select specific tables within schemas
- Exclude specific schemas or tables if preferred
- Export the selected subset to any supported format
Input formats:
- dbml: DBML schema files
- dctx: DCTX schema files
- drawdb: DrawDB JSON files
- graphql: GraphQL schema files (.graphql, SDL)
- json: JSON database schema
- yaml: YAML database schema
- gorm: GORM model files (Go, file or directory)
- bun: Bun model files (Go, file or directory)
- drizzle: Drizzle ORM schema files (TypeScript, file or directory)
- prisma: Prisma schema files (.prisma)
- typeorm: TypeORM entity files (TypeScript)
- pgsql: PostgreSQL database (live connection)
Output formats:
- dbml: DBML schema files
- dctx: DCTX schema files
- drawdb: DrawDB JSON files
- graphql: GraphQL schema files (.graphql, SDL)
- json: JSON database schema
- yaml: YAML database schema
- gorm: GORM model files (Go)
- bun: Bun model files (Go)
- drizzle: Drizzle ORM schema files (TypeScript)
- prisma: Prisma schema files (.prisma)
- typeorm: TypeORM entity files (TypeScript)
- pgsql: PostgreSQL SQL schema
Examples:
# Split specific schemas from DBML
relspec split --from dbml --from-path schema.dbml \
--schemas public,auth \
--to json --to-path subset.json
# Extract specific tables from PostgreSQL
relspec split --from pgsql \
--from-conn "postgres://user:pass@localhost:5432/mydb" \
--schemas public \
--tables users,orders,products \
--to dbml --to-path subset.dbml
# Exclude specific tables
relspec split --from json --from-path schema.json \
--exclude-tables "audit_log,system_config,temp_data" \
--to json --to-path public_schema.json
# Split and convert to GORM
relspec split --from json --from-path schema.json \
--tables "users,posts,comments" \
--to gorm --to-path models/ --package models \
--database-name MyAppDB
# Exclude specific schema and tables
relspec split --from pgsql \
--from-conn "postgres://user:pass@localhost/db" \
--exclude-schema pg_catalog,information_schema \
--exclude-tables "temp_users,debug_logs" \
--to json --to-path public_schema.json`,
RunE: runSplit,
}
func init() {
splitCmd.Flags().StringVar(&splitSourceType, "from", "", "Source format (dbml, dctx, drawdb, graphql, json, yaml, gorm, bun, drizzle, prisma, typeorm, pgsql)")
splitCmd.Flags().StringVar(&splitSourcePath, "from-path", "", "Source file path (for file-based formats)")
splitCmd.Flags().StringVar(&splitSourceConn, "from-conn", "", "Source connection string (for database formats)")
splitCmd.Flags().StringVar(&splitTargetType, "to", "", "Target format (dbml, dctx, drawdb, graphql, json, yaml, gorm, bun, drizzle, prisma, typeorm, pgsql)")
splitCmd.Flags().StringVar(&splitTargetPath, "to-path", "", "Target output path (file or directory)")
splitCmd.Flags().StringVar(&splitPackageName, "package", "", "Package name (for code generation formats like gorm/bun)")
splitCmd.Flags().StringVar(&splitDatabaseName, "database-name", "", "Override database name in output")
splitCmd.Flags().StringVar(&splitSchemas, "schemas", "", "Comma-separated list of schema names to include")
splitCmd.Flags().StringVar(&splitTables, "tables", "", "Comma-separated list of table names to include (case-insensitive)")
splitCmd.Flags().StringVar(&splitExcludeSchema, "exclude-schema", "", "Comma-separated list of schema names to exclude")
splitCmd.Flags().StringVar(&splitExcludeTables, "exclude-tables", "", "Comma-separated list of table names to exclude (case-insensitive)")
err := splitCmd.MarkFlagRequired("from")
if err != nil {
fmt.Fprintf(os.Stderr, "Error marking from flag as required: %v\n", err)
}
err = splitCmd.MarkFlagRequired("to")
if err != nil {
fmt.Fprintf(os.Stderr, "Error marking to flag as required: %v\n", err)
}
err = splitCmd.MarkFlagRequired("to-path")
if err != nil {
fmt.Fprintf(os.Stderr, "Error marking to-path flag as required: %v\n", err)
}
}
func runSplit(cmd *cobra.Command, args []string) error {
fmt.Fprintf(os.Stderr, "\n=== RelSpec Schema Split ===\n")
fmt.Fprintf(os.Stderr, "Started at: %s\n\n", getCurrentTimestamp())
// Read source database
fmt.Fprintf(os.Stderr, "[1/3] Reading source schema...\n")
fmt.Fprintf(os.Stderr, " Format: %s\n", splitSourceType)
if splitSourcePath != "" {
fmt.Fprintf(os.Stderr, " Path: %s\n", splitSourcePath)
}
if splitSourceConn != "" {
fmt.Fprintf(os.Stderr, " Conn: %s\n", maskPassword(splitSourceConn))
}
db, err := readDatabaseForConvert(splitSourceType, splitSourcePath, splitSourceConn)
if err != nil {
return fmt.Errorf("failed to read source: %w", err)
}
fmt.Fprintf(os.Stderr, " ✓ Successfully read database '%s'\n", db.Name)
fmt.Fprintf(os.Stderr, " Found: %d schema(s)\n", len(db.Schemas))
totalTables := 0
for _, schema := range db.Schemas {
totalTables += len(schema.Tables)
}
fmt.Fprintf(os.Stderr, " Found: %d table(s)\n\n", totalTables)
// Filter the database
fmt.Fprintf(os.Stderr, "[2/3] Filtering schemas and tables...\n")
filteredDB, err := filterDatabase(db)
if err != nil {
return fmt.Errorf("failed to filter database: %w", err)
}
if splitDatabaseName != "" {
filteredDB.Name = splitDatabaseName
}
filteredTables := 0
for _, schema := range filteredDB.Schemas {
filteredTables += len(schema.Tables)
}
fmt.Fprintf(os.Stderr, " ✓ Filtered to: %d schema(s), %d table(s)\n\n", len(filteredDB.Schemas), filteredTables)
// Write to target format
fmt.Fprintf(os.Stderr, "[3/3] Writing to target format...\n")
fmt.Fprintf(os.Stderr, " Format: %s\n", splitTargetType)
fmt.Fprintf(os.Stderr, " Output: %s\n", splitTargetPath)
if splitPackageName != "" {
fmt.Fprintf(os.Stderr, " Package: %s\n", splitPackageName)
}
err = writeDatabase(
filteredDB,
splitTargetType,
splitTargetPath,
splitPackageName,
"", // no schema filter for split
)
if err != nil {
return fmt.Errorf("failed to write output: %w", err)
}
fmt.Fprintf(os.Stderr, " ✓ Successfully written to '%s'\n\n", splitTargetPath)
fmt.Fprintf(os.Stderr, "=== Split Completed Successfully ===\n")
fmt.Fprintf(os.Stderr, "Completed at: %s\n\n", getCurrentTimestamp())
return nil
}
// filterDatabase filters the database using the include/exclude schema and table flags supplied to the split command
func filterDatabase(db *models.Database) (*models.Database, error) {
filteredDB := &models.Database{
Name: db.Name,
Description: db.Description,
Comment: db.Comment,
DatabaseType: db.DatabaseType,
DatabaseVersion: db.DatabaseVersion,
SourceFormat: db.SourceFormat,
UpdatedAt: db.UpdatedAt,
GUID: db.GUID,
Schemas: []*models.Schema{},
Domains: db.Domains, // Keep domains for now
}
// Parse filter flags
includeSchemas := parseCommaSeparated(splitSchemas)
includeTables := parseCommaSeparated(splitTables)
excludeSchemas := parseCommaSeparated(splitExcludeSchema)
excludeTables := parseCommaSeparated(splitExcludeTables)
// Convert table names to lowercase for case-insensitive matching
includeTablesLower := make(map[string]bool)
for _, t := range includeTables {
includeTablesLower[strings.ToLower(t)] = true
}
excludeTablesLower := make(map[string]bool)
for _, t := range excludeTables {
excludeTablesLower[strings.ToLower(t)] = true
}
// Iterate through schemas
for _, schema := range db.Schemas {
// Check if schema should be excluded
if contains(excludeSchemas, schema.Name) {
continue
}
// Check if schema should be included
if len(includeSchemas) > 0 && !contains(includeSchemas, schema.Name) {
continue
}
// Create a copy of the schema with filtered tables
filteredSchema := &models.Schema{
Name: schema.Name,
Description: schema.Description,
Owner: schema.Owner,
Permissions: schema.Permissions,
Comment: schema.Comment,
Metadata: schema.Metadata,
Scripts: schema.Scripts,
Sequence: schema.Sequence,
Relations: schema.Relations,
Enums: schema.Enums,
UpdatedAt: schema.UpdatedAt,
GUID: schema.GUID,
Tables: []*models.Table{},
Views: schema.Views,
Sequences: schema.Sequences,
}
// Filter tables within the schema
for _, table := range schema.Tables {
tableLower := strings.ToLower(table.Name)
// Check if table should be excluded
if excludeTablesLower[tableLower] {
continue
}
// If specific tables are requested, only include those
if len(includeTablesLower) > 0 {
if !includeTablesLower[tableLower] {
continue
}
}
filteredSchema.Tables = append(filteredSchema.Tables, table)
}
// Only add schema if it has tables (unless no table filter was specified)
if len(filteredSchema.Tables) > 0 || (len(includeTablesLower) == 0 && len(excludeTablesLower) == 0) {
filteredDB.Schemas = append(filteredDB.Schemas, filteredSchema)
}
}
if len(filteredDB.Schemas) == 0 {
return nil, fmt.Errorf("no schemas matched the filter criteria")
}
return filteredDB, nil
}
// parseCommaSeparated parses a comma-separated string into a slice, trimming whitespace
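// For example, "a, b, ,c" yields ["a", "b", "c"].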
func parseCommaSeparated(s string) []string {
if s == "" {
return []string{}
}
parts := strings.Split(s, ",")
result := make([]string, 0, len(parts))
for _, p := range parts {
trimmed := strings.TrimSpace(p)
if trimmed != "" {
result = append(result, trimmed)
}
}
return result
}
// contains checks if a string is in a slice
func contains(slice []string, item string) bool {
for _, s := range slice {
if s == item {
return true
}
}
return false
}

docs/DOMAINS_DRAWDB.md Normal file

@@ -0,0 +1,149 @@
# Domains and DrawDB Areas Integration
## Overview
Domains provide a way to organize tables from potentially multiple schemas into logical business groupings. When working with DrawDB format, domains are automatically imported/exported as **Subject Areas** - a native DrawDB feature for visually grouping tables.
## How It Works
### Writing Domains to DrawDB (Export)
When you export a database with domains to DrawDB format:
1. **Schema Areas** are created automatically for each schema (existing behavior)
2. **Domain Areas** are created for each domain, calculated based on the positions of the tables they contain
3. The domain area bounds are automatically calculated to encompass all its tables with a small padding
```go
// Example: Creating a domain and exporting to DrawDB
db := models.InitDatabase("mydb")
// Create an "authentication" domain
authDomain := models.InitDomain("authentication")
authDomain.Tables = append(authDomain.Tables,
models.InitDomainTable("users", "public"),
models.InitDomainTable("roles", "public"),
models.InitDomainTable("permissions", "public"),
)
db.Domains = append(db.Domains, authDomain)
// Create a "financial" domain spanning multiple schemas
finDomain := models.InitDomain("financial")
finDomain.Tables = append(finDomain.Tables,
models.InitDomainTable("accounts", "public"),
models.InitDomainTable("transactions", "public"),
models.InitDomainTable("ledger", "finance"), // Different schema!
)
db.Domains = append(db.Domains, finDomain)
// Write to DrawDB - domains become subject areas
writer := drawdb.NewWriter(&writers.WriterOptions{
OutputPath: "schema.json",
})
writer.WriteDatabase(db)
```
The resulting DrawDB JSON will have Subject Areas for both:
- "authentication" area containing the auth tables
- "financial" area containing the financial tables from both schemas
### Reading Domains from DrawDB (Import)
When you import a DrawDB file with Subject Areas:
1. **Subject Areas** are automatically converted to **Domains**
2. Tables are assigned to a domain if they fall within the area's visual bounds
3. Table references include both the table name and schema name
```go
// Example: Reading DrawDB with areas
reader := drawdb.NewReader(&readers.ReaderOptions{
FilePath: "schema.json",
})
db, err := reader.ReadDatabase()
if err != nil {
log.Fatal(err)
}
// Access domains
for _, domain := range db.Domains {
fmt.Printf("Domain: %s\n", domain.Name)
for _, domainTable := range domain.Tables {
fmt.Printf(" - %s.%s\n", domainTable.SchemaName, domainTable.TableName)
// Access the actual table reference if loaded
if domainTable.RefTable != nil {
fmt.Printf(" Description: %s\n", domainTable.RefTable.Description)
}
}
}
```
## Domain Structure
```go
type Domain struct {
Name string // Domain name (e.g., "authentication", "user_data")
Description string // Optional human-readable description
Tables []*DomainTable // Tables belonging to this domain
Comment string // Optional comment
Metadata map[string]any // Extensible metadata
Sequence uint // Ordering hint
}
type DomainTable struct {
TableName string // Table name
SchemaName string // Schema containing the table
Sequence uint // Ordering hint
RefTable *Table // Pointer to actual table (in-memory only, not serialized)
}
```
## Multi-Schema Domains
One of the key features of domains is that they can span multiple schemas:
```
Domain: "user_data"
├── public.users
├── public.profiles
├── public.user_preferences
├── auth.user_sessions
└── auth.mfa_devices
```
This allows you to organize related tables even when they're stored in different schemas.
## Visual Organization in DrawDB
When viewing the exported DrawDB file in DrawDB Editor:
1. **Schema areas** appear in one color (original behavior)
2. **Domain areas** appear in a different color
3. Domain area bounds are calculated to fit all contained tables
4. Areas can overlap - a table can visually belong to multiple areas
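For intuition, the bounds calculation amounts to a padded bounding box over the domain's table positions. A minimal sketch (the `rect` type, its fields, and `pad` are illustrative stand-ins for the writer's internals, not its actual names):

```go
import "math"

// rect is an illustrative stand-in for a table's position and size.
type rect struct{ X, Y, W, H float64 }

// areaBounds returns a bounding box over all table rectangles, grown by pad.
func areaBounds(tables []rect, pad float64) rect {
	if len(tables) == 0 {
		return rect{}
	}
	minX, minY := math.Inf(1), math.Inf(1)
	maxX, maxY := math.Inf(-1), math.Inf(-1)
	for _, t := range tables {
		minX = math.Min(minX, t.X)
		minY = math.Min(minY, t.Y)
		maxX = math.Max(maxX, t.X+t.W)
		maxY = math.Max(maxY, t.Y+t.H)
	}
	return rect{X: minX - pad, Y: minY - pad, W: maxX - minX + 2*pad, H: maxY - minY + 2*pad}
}
```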
## Integration with Other Formats
Currently, domain/area integration is implemented for the DrawDB format.
To implement similar functionality for other formats (a reader-side sketch follows the lists below):
1. Identify if the format has a native grouping/area feature
2. Add conversion logic in the reader to map format areas → Domain model
3. Add conversion logic in the writer to map Domain model → format areas
Example formats that could support domains:
- **DBML**: Could use DBML's `TableGroup` feature
- **DrawDB**: ✅ Already implemented (Subject Areas)
- **GraphQL**: Could use schema directives
- **Custom formats**: Implement as needed
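Step 2 of that recipe is typically just a loop over the format's native groups. A generic sketch using the models helpers shown earlier (the `group` type is a hypothetical stand-in for the source format's grouping object, e.g. a DBML TableGroup):

```go
// groupsToDomains converts a format's native table groups into Domains.
type group struct {
	Name   string
	Tables []struct{ Schema, Table string }
}

func groupsToDomains(groups []group) []*models.Domain {
	domains := make([]*models.Domain, 0, len(groups))
	for _, g := range groups {
		d := models.InitDomain(g.Name)
		for _, t := range g.Tables {
			d.Tables = append(d.Tables, models.InitDomainTable(t.Table, t.Schema))
		}
		domains = append(domains, d)
	}
	return domains
}
```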
## Tips and Best Practices
1. **Keep domains focused**: Each domain should represent a distinct business area
2. **Document purposes**: Use Description and Comment fields to explain each domain
3. **Use meaningful names**: Domain names should clearly reflect their purpose
4. **Maintain schema consistency**: Keep related tables together in the same schema when possible
5. **Use metadata**: Store tool-specific information in the Metadata field


@@ -44,12 +44,13 @@ The `--mode` flag controls how the template is executed:
|------|-------------|--------|-------------|
| `database` | Execute once for entire database | Single file | Documentation, reports, overview files |
| `schema` | Execute once per schema | One file per schema | Schema-specific documentation |
| `domain` | Execute once per domain | One file per domain | Domain-based documentation, domain exports |
| `script` | Execute once per script | One file per script | Script processing |
| `table` | Execute once per table | One file per table | Model generation, table docs |
### Filename Patterns
-For multi-file modes (`schema`, `script`, `table`), use `--filename-pattern` to control output filenames:
+For multi-file modes (`schema`, `domain`, `script`, `table`), use `--filename-pattern` to control output filenames:
```bash
# Default pattern
@@ -296,6 +297,13 @@ The data available in templates depends on the execution mode:
.Metadata // map[string]interface{} - User metadata
```
### Domain Mode
```go
.Domain // *models.Domain - Current domain
.ParentDatabase // *models.Database - Parent database context
.Metadata // map[string]interface{} - User metadata
```
### Table Mode
```go
.Table // *models.Table - Current table
@@ -317,6 +325,7 @@ The data available in templates depends on the execution mode:
**Database:**
- `.Name` - Database name
- `.Schemas` - List of schemas
- `.Domains` - List of domains (business domain groupings)
- `.Description`, `.Comment` - Documentation
**Schema:**
@@ -325,6 +334,17 @@ The data available in templates depends on the execution mode:
- `.Views`, `.Sequences`, `.Scripts` - Other objects
- `.Enums` - Enum types
**Domain:**
- `.Name` - Domain name
- `.Tables` - List of DomainTable references
- `.Description`, `.Comment` - Documentation
- `.Metadata` - Custom metadata map
**DomainTable:**
- `.TableName` - Name of the table
- `.SchemaName` - Schema containing the table
- `.RefTable` - Pointer to actual Table object (if loaded)
**Table:**
- `.Name` - Table name
- `.Schema` - Schema name
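Putting the domain-mode data together, a minimal template sketch that lists each domain's tables (illustrative only, not shipped with the docs):

```
# {{ .Domain.Name }}

{{ .Domain.Description }}

{{ range .Domain.Tables -}}
- {{ .SchemaName }}.{{ .TableName }}
{{ end }}
```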

go.mod

@@ -3,8 +3,10 @@ module git.warky.dev/wdevs/relspecgo
go 1.24.0
require (
github.com/gdamore/tcell/v2 v2.8.1
github.com/google/uuid v1.6.0
github.com/jackc/pgx/v5 v5.7.6
github.com/rivo/tview v0.42.0
github.com/spf13/cobra v1.10.2
github.com/stretchr/testify v1.11.1
github.com/uptrace/bun v1.2.16
@@ -14,13 +16,17 @@ require (
require (
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/gdamore/encoding v1.0.1 // indirect
github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/jackc/pgpassfile v1.0.0 // indirect
github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761 // indirect
github.com/jinzhu/inflection v1.0.0 // indirect
github.com/kr/pretty v0.3.1 // indirect
github.com/lucasb-eyer/go-colorful v1.2.0 // indirect
github.com/mattn/go-runewidth v0.0.16 // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/puzpuzpuz/xsync/v3 v3.5.1 // indirect
github.com/rivo/uniseg v0.4.7 // indirect
github.com/rogpeppe/go-internal v1.14.1 // indirect
github.com/spf13/pflag v1.0.10 // indirect
github.com/tmthrgd/go-hex v0.0.0-20190904060850-447a3041c3bc // indirect
@@ -28,4 +34,5 @@ require (
github.com/vmihailenco/tagparser/v2 v2.0.0 // indirect
golang.org/x/crypto v0.41.0 // indirect
golang.org/x/sys v0.38.0 // indirect
golang.org/x/term v0.34.0 // indirect
)

go.sum

@@ -3,6 +3,11 @@ github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ3
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/gdamore/encoding v1.0.1 h1:YzKZckdBL6jVt2Gc+5p82qhrGiqMdG/eNs6Wy0u3Uhw=
github.com/gdamore/encoding v1.0.1/go.mod h1:0Z0cMFinngz9kS1QfMjCP8TY7em3bZYeeklsSDPivEo=
github.com/gdamore/tcell/v2 v2.8.1 h1:KPNxyqclpWpWQlPLx6Xui1pMk8S+7+R37h3g07997NU=
github.com/gdamore/tcell/v2 v2.8.1/go.mod h1:bj8ori1BG3OYMjmb3IklZVWfZUJ1UBQt9JXrOCOhGWw=
github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8=
@@ -21,11 +26,21 @@ github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/lucasb-eyer/go-colorful v1.2.0 h1:1nnpGOrhyZZuNyfu1QjKiUICQ74+3FNCN69Aj6K7nkY=
github.com/lucasb-eyer/go-colorful v1.2.0/go.mod h1:R4dSotOR9KMtayYi1e77YzuveK+i7ruzyGqttikkLy0=
github.com/mattn/go-runewidth v0.0.16 h1:E5ScNMtiwvlvB5paMFdw9p4kSQzbXFikJ5SQO6TULQc=
github.com/mattn/go-runewidth v0.0.16/go.mod h1:Jdepj2loyihRzMpdS35Xk/zdY8IAYHsh153qUoGf23w=
github.com/pkg/diff v0.0.0-20210226163009-20ebb0f2a09e/go.mod h1:pJLUxLENpZxwdsKMEsNbx1VGcRFpLqf3715MtcvvzbA=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/puzpuzpuz/xsync/v3 v3.5.1 h1:GJYJZwO6IdxN/IKbneznS6yPkVC+c3zyY/j19c++5Fg=
github.com/puzpuzpuz/xsync/v3 v3.5.1/go.mod h1:VjzYrABPabuM4KyBh1Ftq6u8nhwY5tBPKP9jpmh0nnA=
github.com/rivo/tview v0.42.0 h1:b/ftp+RxtDsHSaynXTbJb+/n/BxDEi+W3UfF5jILK6c=
github.com/rivo/tview v0.42.0/go.mod h1:cSfIYfhpSGCjp3r/ECJb+GKS7cGJnqV8vfjQPwoXyfY=
github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc=
github.com/rivo/uniseg v0.4.3/go.mod h1:FN3SvrM+Zdj16jyLfmOkMNblXMcoc8DfTHruCPUcx88=
github.com/rivo/uniseg v0.4.7 h1:WUdvkW8uEhrYfLC4ZzdpI2ztxP1I582+49Oc5Mq64VQ=
github.com/rivo/uniseg v0.4.7/go.mod h1:FN3SvrM+Zdj16jyLfmOkMNblXMcoc8DfTHruCPUcx88=
github.com/rogpeppe/go-internal v1.9.0/go.mod h1:WtVeX8xhTBvf0smdhujwtBcq4Qrzq/fJaraNFVN+nFs=
github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ=
github.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc=
@@ -48,15 +63,79 @@ github.com/vmihailenco/msgpack/v5 v5.4.1 h1:cQriyiUvjTwOHg8QZaPihLWeRAAVoCpE00IU
github.com/vmihailenco/msgpack/v5 v5.4.1/go.mod h1:GaZTsDaehaPpQVyxrf5mtQlH+pc21PIudVV/E3rRQok=
github.com/vmihailenco/tagparser/v2 v2.0.0 h1:y09buUbR+b5aycVFQs/g70pqKVZNBmxwAhO7/IwNM9g=
github.com/vmihailenco/tagparser/v2 v2.0.0/go.mod h1:Wri+At7QHww0WTrCBeu4J6bNtoV6mEfg5OIWRZA9qds=
github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
go.yaml.in/yaml/v3 v3.0.4/go.mod h1:DhzuOOF2ATzADvBadXxruRBLzYTpT36CKvDb3+aBEFg=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.13.0/go.mod h1:y6Z2r+Rw4iayiXXAIxJIDAJ1zMW4yaTpebo8fPOliYc=
golang.org/x/crypto v0.19.0/go.mod h1:Iy9bg/ha4yyC70EfRS8jz+B6ybOBKMaSxLj6P6oBDfU=
golang.org/x/crypto v0.23.0/go.mod h1:CKFgDieR+mRhux2Lsu27y0fO304Db0wZe70UKqHu0v8=
golang.org/x/crypto v0.41.0 h1:WKYxWedPGCTVVl5+WHSSrOBT0O8lx32+zxmHxijgXp4=
golang.org/x/crypto v0.41.0/go.mod h1:pO5AFd7FA68rFak7rOAGVuygIISepHftHnr8dr6+sUc=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/mod v0.12.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/mod v0.15.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c=
golang.org/x/mod v0.17.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg=
golang.org/x/net v0.15.0/go.mod h1:idbUs1IY1+zTqbi8yxTbhexhEEk5ur9LInksu6HrEpk=
golang.org/x/net v0.21.0/go.mod h1:bIjVDfnllIU7BJ2DNgfnXvpSvtn8VRwhlsaeUTyUS44=
golang.org/x/net v0.25.0/go.mod h1:JkAGAh7GEvH74S6FOH42FLoXpXbE/aqXSrIQjXgsiwM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.3.0/go.mod h1:FU7BRWz2tNW+3quACPkgCx/L+uEAv1htQ0V83Z9Rj+Y=
golang.org/x/sync v0.6.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.7.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.10.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.16.0 h1:ycBJEhp9p4vXvUZNszeOq0kGTPghopOL8q0fq3vstxw=
golang.org/x/sync v0.16.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.12.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.17.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.20.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.29.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.38.0 h1:3yZWxaJjBmCWXqhN1qh02AkOnCQ1poK6oF+a7xWL6Gc=
golang.org/x/sys v0.38.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/telemetry v0.0.0-20240228155512-f48c80bd79b2/go.mod h1:TeRTkGYfJXctD9OcfyVLyj2J3IxLnKwHJR8f4D8a3YE=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
golang.org/x/term v0.8.0/go.mod h1:xPskH00ivmX89bAKVGSKKtLOWNx2+17Eiy94tnKShWo=
golang.org/x/term v0.12.0/go.mod h1:owVbMEjm3cBLCHdkQu9b1opXd4ETQWc3BhuQGKgXgvU=
golang.org/x/term v0.17.0/go.mod h1:lLRBjIVuehSbZlaOtGMbcMncT+aqLLLmKrsjNrUguwk=
golang.org/x/term v0.20.0/go.mod h1:8UkIAJTvZgivsXaD6/pH6U9ecQzZ45awqEOzuCvwpFY=
golang.org/x/term v0.28.0/go.mod h1:Sw/lC2IAUZ92udQNf3WodGtn4k/XoLyZoh8v/8uiwek=
golang.org/x/term v0.34.0 h1:O/2T7POpk0ZZ7MAzMeWFSg6S5IpWd/RXDlM9hgM3DR4=
golang.org/x/term v0.34.0/go.mod h1:5jC53AEywhIVebHgPVeg0mj8OD3VO9OzclacVrqpaAw=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8=
golang.org/x/text v0.13.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE=
golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/text v0.15.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/text v0.21.0/go.mod h1:4IBbMaMmOPCJ8SecivzSH54+73PCFmPWxNTLm+vZkEQ=
golang.org/x/text v0.28.0 h1:rhazDwis8INMIwQ4tpjLDzUhx6RlXqZNPEM0huQojng=
golang.org/x/text v0.28.0/go.mod h1:U8nCwOR8jO/marOQ0QbDiOngZVEBB7MAiitBuMjXiNU=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU=
golang.org/x/tools v0.13.0/go.mod h1:HvlwmtVNQAhOuCjW7xxvovg8wbNq7LwfXh/k7wXUl58=
golang.org/x/tools v0.21.1-0.20240508182429-e35e4ccd0d2d/go.mod h1:aiJjzUbINMkxbQROHiO6hDPo2LHcIPhhQsa9DLh0yGk=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=

pkg/merge/merge.go Normal file

@@ -0,0 +1,620 @@
// Package merge provides utilities for merging database schemas.
// It allows combining schemas from multiple sources while avoiding duplicates,
// supporting only additive operations (no deletion or modification of existing items).
package merge
import (
"fmt"
"strings"
"git.warky.dev/wdevs/relspecgo/pkg/models"
)
// MergeResult represents the result of a merge operation
type MergeResult struct {
SchemasAdded int
TablesAdded int
ColumnsAdded int
RelationsAdded int
DomainsAdded int
EnumsAdded int
ViewsAdded int
SequencesAdded int
}
// MergeOptions contains options for merge operations
type MergeOptions struct {
SkipDomains bool
SkipRelations bool
SkipEnums bool
SkipViews bool
SkipSequences bool
SkipTableNames map[string]bool // Tables to skip during merge (keyed by lowercase table name; matched case-insensitively)
}
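// For example, to merge everything except views and an "audit_log" table
// (illustrative usage, not part of this file):
//
//	opts := &MergeOptions{
//		SkipViews:      true,
//		SkipTableNames: map[string]bool{"audit_log": true},
//	}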
// MergeDatabases merges the source database into the target database.
// Only adds missing items; existing items are not modified.
func MergeDatabases(target, source *models.Database, opts *MergeOptions) *MergeResult {
if opts == nil {
opts = &MergeOptions{}
}
result := &MergeResult{}
if target == nil || source == nil {
return result
}
// Merge schemas and their contents
result.merge(target, source, opts)
return result
}
func (r *MergeResult) merge(target, source *models.Database, opts *MergeOptions) {
// Create maps of existing schemas for quick lookup
existingSchemas := make(map[string]*models.Schema)
for _, schema := range target.Schemas {
existingSchemas[schema.SQLName()] = schema
}
// Merge schemas
for _, srcSchema := range source.Schemas {
schemaName := srcSchema.SQLName()
if tgtSchema, exists := existingSchemas[schemaName]; exists {
// Schema exists, merge its contents
r.mergeSchemaContents(tgtSchema, srcSchema, opts)
} else {
// Schema doesn't exist, add it
newSchema := cloneSchema(srcSchema)
target.Schemas = append(target.Schemas, newSchema)
r.SchemasAdded++
}
}
// Merge domains if not skipped
if !opts.SkipDomains {
r.mergeDomains(target, source)
}
}
func (r *MergeResult) mergeSchemaContents(target, source *models.Schema, opts *MergeOptions) {
// Merge tables
r.mergeTables(target, source, opts)
// Merge views if not skipped
if !opts.SkipViews {
r.mergeViews(target, source)
}
// Merge sequences if not skipped
if !opts.SkipSequences {
r.mergeSequences(target, source)
}
// Merge enums if not skipped
if !opts.SkipEnums {
r.mergeEnums(target, source)
}
// Merge relations if not skipped
if !opts.SkipRelations {
r.mergeRelations(target, source)
}
}
func (r *MergeResult) mergeTables(schema *models.Schema, source *models.Schema, opts *MergeOptions) {
// Create map of existing tables
existingTables := make(map[string]*models.Table)
for _, table := range schema.Tables {
existingTables[table.SQLName()] = table
}
// Merge tables
for _, srcTable := range source.Tables {
tableName := srcTable.SQLName()
// Skip if table is in the skip list (case-insensitive)
if opts != nil && opts.SkipTableNames != nil && opts.SkipTableNames[strings.ToLower(tableName)] {
continue
}
if tgtTable, exists := existingTables[tableName]; exists {
// Table exists, merge its columns, constraints, and indexes
r.mergeColumns(tgtTable, srcTable)
r.mergeConstraints(tgtTable, srcTable)
r.mergeIndexes(tgtTable, srcTable)
} else {
// Table doesn't exist, add it
newTable := cloneTable(srcTable)
schema.Tables = append(schema.Tables, newTable)
r.TablesAdded++
// Count columns in the newly added table
r.ColumnsAdded += len(newTable.Columns)
}
}
}
func (r *MergeResult) mergeColumns(table *models.Table, srcTable *models.Table) {
// Create map of existing columns
existingColumns := make(map[string]*models.Column)
for colName := range table.Columns {
existingColumns[colName] = table.Columns[colName]
}
// Merge columns
for colName, srcCol := range srcTable.Columns {
if _, exists := existingColumns[colName]; !exists {
// Column doesn't exist, add it
newCol := cloneColumn(srcCol)
table.Columns[colName] = newCol
r.ColumnsAdded++
}
}
}
func (r *MergeResult) mergeConstraints(table *models.Table, srcTable *models.Table) {
// Initialize constraints map if nil
if table.Constraints == nil {
table.Constraints = make(map[string]*models.Constraint)
}
// Create map of existing constraints
existingConstraints := make(map[string]*models.Constraint)
for constName := range table.Constraints {
existingConstraints[constName] = table.Constraints[constName]
}
// Merge constraints
for constName, srcConst := range srcTable.Constraints {
if _, exists := existingConstraints[constName]; !exists {
// Constraint doesn't exist, add it
newConst := cloneConstraint(srcConst)
table.Constraints[constName] = newConst
}
}
}
func (r *MergeResult) mergeIndexes(table *models.Table, srcTable *models.Table) {
// Initialize indexes map if nil
if table.Indexes == nil {
table.Indexes = make(map[string]*models.Index)
}
// Create map of existing indexes
existingIndexes := make(map[string]*models.Index)
for idxName := range table.Indexes {
existingIndexes[idxName] = table.Indexes[idxName]
}
// Merge indexes
for idxName, srcIdx := range srcTable.Indexes {
if _, exists := existingIndexes[idxName]; !exists {
// Index doesn't exist, add it
newIdx := cloneIndex(srcIdx)
table.Indexes[idxName] = newIdx
}
}
}
func (r *MergeResult) mergeViews(schema *models.Schema, source *models.Schema) {
// Create map of existing views
existingViews := make(map[string]*models.View)
for _, view := range schema.Views {
existingViews[view.SQLName()] = view
}
// Merge views
for _, srcView := range source.Views {
viewName := srcView.SQLName()
if _, exists := existingViews[viewName]; !exists {
// View doesn't exist, add it
newView := cloneView(srcView)
schema.Views = append(schema.Views, newView)
r.ViewsAdded++
}
}
}
func (r *MergeResult) mergeSequences(schema *models.Schema, source *models.Schema) {
// Create map of existing sequences
existingSequences := make(map[string]*models.Sequence)
for _, seq := range schema.Sequences {
existingSequences[seq.SQLName()] = seq
}
// Merge sequences
for _, srcSeq := range source.Sequences {
seqName := srcSeq.SQLName()
if _, exists := existingSequences[seqName]; !exists {
// Sequence doesn't exist, add it
newSeq := cloneSequence(srcSeq)
schema.Sequences = append(schema.Sequences, newSeq)
r.SequencesAdded++
}
}
}
func (r *MergeResult) mergeEnums(schema *models.Schema, source *models.Schema) {
// Create map of existing enums
existingEnums := make(map[string]*models.Enum)
for _, enum := range schema.Enums {
existingEnums[enum.SQLName()] = enum
}
// Merge enums
for _, srcEnum := range source.Enums {
enumName := srcEnum.SQLName()
if _, exists := existingEnums[enumName]; !exists {
// Enum doesn't exist, add it
newEnum := cloneEnum(srcEnum)
schema.Enums = append(schema.Enums, newEnum)
r.EnumsAdded++
}
}
}
func (r *MergeResult) mergeRelations(schema *models.Schema, source *models.Schema) {
// Create map of existing relations
existingRelations := make(map[string]*models.Relationship)
for _, rel := range schema.Relations {
existingRelations[rel.SQLName()] = rel
}
// Merge relations
for _, srcRel := range source.Relations {
if _, exists := existingRelations[srcRel.SQLName()]; !exists {
// Relation doesn't exist, add it
newRel := cloneRelation(srcRel)
schema.Relations = append(schema.Relations, newRel)
r.RelationsAdded++
}
}
}
func (r *MergeResult) mergeDomains(target *models.Database, source *models.Database) {
// Create map of existing domains
existingDomains := make(map[string]*models.Domain)
for _, domain := range target.Domains {
existingDomains[domain.SQLName()] = domain
}
// Merge domains
for _, srcDomain := range source.Domains {
domainName := srcDomain.SQLName()
if _, exists := existingDomains[domainName]; !exists {
// Domain doesn't exist, add it
newDomain := cloneDomain(srcDomain)
target.Domains = append(target.Domains, newDomain)
r.DomainsAdded++
}
}
}
// Clone functions to create deep copies of models
func cloneSchema(schema *models.Schema) *models.Schema {
if schema == nil {
return nil
}
newSchema := &models.Schema{
Name: schema.Name,
Description: schema.Description,
Owner: schema.Owner,
Comment: schema.Comment,
Sequence: schema.Sequence,
UpdatedAt: schema.UpdatedAt,
Tables: make([]*models.Table, 0),
Views: make([]*models.View, 0),
Sequences: make([]*models.Sequence, 0),
Enums: make([]*models.Enum, 0),
Relations: make([]*models.Relationship, 0),
}
if schema.Permissions != nil {
newSchema.Permissions = make(map[string]string)
for k, v := range schema.Permissions {
newSchema.Permissions[k] = v
}
}
if schema.Metadata != nil {
newSchema.Metadata = make(map[string]interface{})
for k, v := range schema.Metadata {
newSchema.Metadata[k] = v
}
}
if schema.Scripts != nil {
newSchema.Scripts = make([]*models.Script, len(schema.Scripts))
copy(newSchema.Scripts, schema.Scripts)
}
// Clone tables
for _, table := range schema.Tables {
newSchema.Tables = append(newSchema.Tables, cloneTable(table))
}
// Clone views
for _, view := range schema.Views {
newSchema.Views = append(newSchema.Views, cloneView(view))
}
// Clone sequences
for _, seq := range schema.Sequences {
newSchema.Sequences = append(newSchema.Sequences, cloneSequence(seq))
}
// Clone enums
for _, enum := range schema.Enums {
newSchema.Enums = append(newSchema.Enums, cloneEnum(enum))
}
// Clone relations
for _, rel := range schema.Relations {
newSchema.Relations = append(newSchema.Relations, cloneRelation(rel))
}
return newSchema
}
func cloneTable(table *models.Table) *models.Table {
if table == nil {
return nil
}
newTable := &models.Table{
Name: table.Name,
Description: table.Description,
Schema: table.Schema,
Comment: table.Comment,
Sequence: table.Sequence,
UpdatedAt: table.UpdatedAt,
Columns: make(map[string]*models.Column),
Constraints: make(map[string]*models.Constraint),
Indexes: make(map[string]*models.Index),
}
if table.Metadata != nil {
newTable.Metadata = make(map[string]interface{})
for k, v := range table.Metadata {
newTable.Metadata[k] = v
}
}
// Clone columns
for colName, col := range table.Columns {
newTable.Columns[colName] = cloneColumn(col)
}
// Clone constraints
for constName, constraint := range table.Constraints {
newTable.Constraints[constName] = cloneConstraint(constraint)
}
// Clone indexes
for idxName, index := range table.Indexes {
newTable.Indexes[idxName] = cloneIndex(index)
}
return newTable
}
func cloneColumn(col *models.Column) *models.Column {
if col == nil {
return nil
}
newCol := &models.Column{
Name: col.Name,
Type: col.Type,
Description: col.Description,
Comment: col.Comment,
IsPrimaryKey: col.IsPrimaryKey,
NotNull: col.NotNull,
Default: col.Default,
Precision: col.Precision,
Scale: col.Scale,
Length: col.Length,
Sequence: col.Sequence,
AutoIncrement: col.AutoIncrement,
Collation: col.Collation,
}
return newCol
}
func cloneConstraint(constraint *models.Constraint) *models.Constraint {
if constraint == nil {
return nil
}
newConstraint := &models.Constraint{
Type: constraint.Type,
Columns: make([]string, len(constraint.Columns)),
ReferencedTable: constraint.ReferencedTable,
ReferencedSchema: constraint.ReferencedSchema,
ReferencedColumns: make([]string, len(constraint.ReferencedColumns)),
OnUpdate: constraint.OnUpdate,
OnDelete: constraint.OnDelete,
Expression: constraint.Expression,
Name: constraint.Name,
Deferrable: constraint.Deferrable,
InitiallyDeferred: constraint.InitiallyDeferred,
Sequence: constraint.Sequence,
}
copy(newConstraint.Columns, constraint.Columns)
copy(newConstraint.ReferencedColumns, constraint.ReferencedColumns)
return newConstraint
}
func cloneIndex(index *models.Index) *models.Index {
if index == nil {
return nil
}
newIndex := &models.Index{
Name: index.Name,
Description: index.Description,
Table: index.Table,
Schema: index.Schema,
Columns: make([]string, len(index.Columns)),
Unique: index.Unique,
Type: index.Type,
Where: index.Where,
Concurrent: index.Concurrent,
Include: make([]string, len(index.Include)),
Comment: index.Comment,
Sequence: index.Sequence,
}
copy(newIndex.Columns, index.Columns)
copy(newIndex.Include, index.Include)
return newIndex
}
func cloneView(view *models.View) *models.View {
if view == nil {
return nil
}
newView := &models.View{
Name: view.Name,
Description: view.Description,
Schema: view.Schema,
Definition: view.Definition,
Comment: view.Comment,
Sequence: view.Sequence,
Columns: make(map[string]*models.Column),
}
if view.Metadata != nil {
newView.Metadata = make(map[string]interface{})
for k, v := range view.Metadata {
newView.Metadata[k] = v
}
}
// Clone columns
for colName, col := range view.Columns {
newView.Columns[colName] = cloneColumn(col)
}
return newView
}
func cloneSequence(seq *models.Sequence) *models.Sequence {
if seq == nil {
return nil
}
newSeq := &models.Sequence{
Name: seq.Name,
Description: seq.Description,
Schema: seq.Schema,
StartValue: seq.StartValue,
MinValue: seq.MinValue,
MaxValue: seq.MaxValue,
IncrementBy: seq.IncrementBy,
CacheSize: seq.CacheSize,
Cycle: seq.Cycle,
OwnedByTable: seq.OwnedByTable,
OwnedByColumn: seq.OwnedByColumn,
Comment: seq.Comment,
Sequence: seq.Sequence,
}
return newSeq
}
func cloneEnum(enum *models.Enum) *models.Enum {
if enum == nil {
return nil
}
newEnum := &models.Enum{
Name: enum.Name,
Values: make([]string, len(enum.Values)),
Schema: enum.Schema,
}
copy(newEnum.Values, enum.Values)
return newEnum
}
func cloneRelation(rel *models.Relationship) *models.Relationship {
if rel == nil {
return nil
}
newRel := &models.Relationship{
Name: rel.Name,
Type: rel.Type,
FromTable: rel.FromTable,
FromSchema: rel.FromSchema,
FromColumns: make([]string, len(rel.FromColumns)),
ToTable: rel.ToTable,
ToSchema: rel.ToSchema,
ToColumns: make([]string, len(rel.ToColumns)),
ForeignKey: rel.ForeignKey,
ThroughTable: rel.ThroughTable,
ThroughSchema: rel.ThroughSchema,
Description: rel.Description,
Sequence: rel.Sequence,
}
if rel.Properties != nil {
newRel.Properties = make(map[string]string)
for k, v := range rel.Properties {
newRel.Properties[k] = v
}
}
copy(newRel.FromColumns, rel.FromColumns)
copy(newRel.ToColumns, rel.ToColumns)
return newRel
}
func cloneDomain(domain *models.Domain) *models.Domain {
if domain == nil {
return nil
}
newDomain := &models.Domain{
Name: domain.Name,
Description: domain.Description,
Comment: domain.Comment,
Sequence: domain.Sequence,
Tables: make([]*models.DomainTable, len(domain.Tables)),
}
if domain.Metadata != nil {
newDomain.Metadata = make(map[string]interface{})
for k, v := range domain.Metadata {
newDomain.Metadata[k] = v
}
}
// Deep-copy each DomainTable so the clone does not alias the source slice.
// RefTable is an in-memory pointer and, like the other Ref* fields, is not carried over.
for i, dt := range domain.Tables {
if dt == nil {
continue
}
newDomain.Tables[i] = &models.DomainTable{
TableName: dt.TableName,
SchemaName: dt.SchemaName,
Sequence: dt.Sequence,
GUID: dt.GUID,
}
}
return newDomain
}
// GetMergeSummary returns a human-readable summary of the merge result
func GetMergeSummary(result *MergeResult) string {
if result == nil {
return "No merge result available"
}
lines := []string{
"=== Merge Summary ===",
fmt.Sprintf("Schemas added: %d", result.SchemasAdded),
fmt.Sprintf("Tables added: %d", result.TablesAdded),
fmt.Sprintf("Columns added: %d", result.ColumnsAdded),
fmt.Sprintf("Views added: %d", result.ViewsAdded),
fmt.Sprintf("Sequences added: %d", result.SequencesAdded),
fmt.Sprintf("Enums added: %d", result.EnumsAdded),
fmt.Sprintf("Relations added: %d", result.RelationsAdded),
fmt.Sprintf("Domains added: %d", result.DomainsAdded),
}
totalAdded := result.SchemasAdded + result.TablesAdded + result.ColumnsAdded +
result.ViewsAdded + result.SequencesAdded + result.EnumsAdded +
result.RelationsAdded + result.DomainsAdded
lines = append(lines, fmt.Sprintf("Total items added: %d", totalAdded))
summary := ""
for _, line := range lines {
summary += line + "\n"
}
return summary
}
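// Example usage (illustrative; a nil options value performs a full additive merge):
//
//	result := merge.MergeDatabases(target, source, nil)
//	fmt.Print(merge.GetMergeSummary(result))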


@@ -4,7 +4,12 @@
// intermediate representation for converting between various database schema formats.
package models
-import "strings"
+import (
"strings"
"time"
"github.com/google/uuid"
)
// DatabaseType represents the type of database system.
type DatabaseType string
@@ -21,10 +26,13 @@ type Database struct {
Name string `json:"name" yaml:"name"` Name string `json:"name" yaml:"name"`
Description string `json:"description,omitempty" yaml:"description,omitempty" xml:"description,omitempty"` Description string `json:"description,omitempty" yaml:"description,omitempty" xml:"description,omitempty"`
Schemas []*Schema `json:"schemas" yaml:"schemas" xml:"schemas"` Schemas []*Schema `json:"schemas" yaml:"schemas" xml:"schemas"`
Domains []*Domain `json:"domains,omitempty" yaml:"domains,omitempty" xml:"domains,omitempty"`
Comment string `json:"comment,omitempty" yaml:"comment,omitempty" xml:"comment,omitempty"`
DatabaseType DatabaseType `json:"database_type,omitempty" yaml:"database_type,omitempty" xml:"database_type,omitempty"`
DatabaseVersion string `json:"database_version,omitempty" yaml:"database_version,omitempty" xml:"database_version,omitempty"`
SourceFormat string `json:"source_format,omitempty" yaml:"source_format,omitempty" xml:"source_format,omitempty"` // Source Format of the database.
UpdatedAt string `json:"updatedat,omitempty" yaml:"updatedat,omitempty" xml:"updatedat,omitempty"`
GUID string `json:"guid" yaml:"guid" xml:"guid"`
}
// SQLName returns the database name in lowercase for SQL compatibility.
@@ -32,6 +40,39 @@ func (d *Database) SQLName() string {
return strings.ToLower(d.Name)
}
// UpdateDate sets the UpdatedAt field to the current time in RFC3339 format.
func (d *Database) UpdateDate() {
d.UpdatedAt = time.Now().Format(time.RFC3339)
}
// Domain represents a logical business domain grouping multiple tables from potentially different schemas.
// Domains allow for organizing database tables by functional areas (e.g., authentication, user data, financial).
type Domain struct {
Name string `json:"name" yaml:"name" xml:"name"`
Description string `json:"description,omitempty" yaml:"description,omitempty" xml:"description,omitempty"`
Tables []*DomainTable `json:"tables" yaml:"tables" xml:"tables"`
Comment string `json:"comment,omitempty" yaml:"comment,omitempty" xml:"comment,omitempty"`
Metadata map[string]any `json:"metadata,omitempty" yaml:"metadata,omitempty" xml:"-"`
Sequence uint `json:"sequence,omitempty" yaml:"sequence,omitempty" xml:"sequence,omitempty"`
GUID string `json:"guid" yaml:"guid" xml:"guid"`
}
// SQLName returns the domain name in lowercase for SQL compatibility.
func (d *Domain) SQLName() string {
return strings.ToLower(d.Name)
}
// DomainTable represents a reference to a specific table within a domain.
// It identifies the table by name and schema, allowing a single domain to include
// tables from multiple schemas.
type DomainTable struct {
TableName string `json:"table_name" yaml:"table_name" xml:"table_name"`
SchemaName string `json:"schema_name" yaml:"schema_name" xml:"schema_name"`
Sequence uint `json:"sequence,omitempty" yaml:"sequence,omitempty" xml:"sequence,omitempty"`
RefTable *Table `json:"-" yaml:"-" xml:"-"` // Excluded to prevent circular references
GUID string `json:"guid" yaml:"guid" xml:"guid"`
}
// Schema represents a database schema, which is a logical grouping of database objects
// such as tables, views, sequences, and relationships within a database.
type Schema struct {
@@ -49,6 +90,16 @@ type Schema struct {
RefDatabase *Database `json:"-" yaml:"-" xml:"-"` // Excluded to prevent circular references
Relations []*Relationship `json:"relations,omitempty" yaml:"relations,omitempty" xml:"-"`
Enums []*Enum `json:"enums,omitempty" yaml:"enums,omitempty" xml:"enums"`
UpdatedAt string `json:"updatedat,omitempty" yaml:"updatedat,omitempty" xml:"updatedat,omitempty"`
GUID string `json:"guid" yaml:"guid" xml:"guid"`
}
// UpdateDate sets the UpdatedAt field to the current time in RFC3339 format,
// and propagates the update to the parent database if one is linked.
func (d *Schema) UpdateDate() {
d.UpdatedAt = time.Now().Format(time.RFC3339)
if d.RefDatabase != nil {
d.RefDatabase.UpdateDate()
}
}
// SQLName returns the schema name in lowercase for SQL compatibility.
@@ -71,6 +122,16 @@ type Table struct {
Metadata map[string]any `json:"metadata,omitempty" yaml:"metadata,omitempty" xml:"-"`
Sequence uint `json:"sequence,omitempty" yaml:"sequence,omitempty" xml:"sequence,omitempty"`
RefSchema *Schema `json:"-" yaml:"-" xml:"-"` // Excluded to prevent circular references
UpdatedAt string `json:"updatedat,omitempty" yaml:"updatedat,omitempty" xml:"updatedat,omitempty"`
GUID string `json:"guid" yaml:"guid" xml:"guid"`
}
// UpdateDate sets the UpdatedAt field to the current time in RFC3339 format.
func (d *Table) UpdateDate() {
d.UpdatedAt = time.Now().Format(time.RFC3339)
if d.RefSchema != nil {
d.RefSchema.UpdateDate()
}
}
// SQLName returns the table name in lowercase for SQL compatibility.
@@ -111,6 +172,7 @@ type View struct {
Metadata map[string]any `json:"metadata,omitempty" yaml:"metadata,omitempty" xml:"-"`
Sequence uint `json:"sequence,omitempty" yaml:"sequence,omitempty" xml:"sequence,omitempty"`
RefSchema *Schema `json:"-" yaml:"-" xml:"-"` // Excluded to prevent circular references
GUID string `json:"guid" yaml:"guid" xml:"guid"`
}
// SQLName returns the view name in lowercase for SQL compatibility.
@@ -134,6 +196,7 @@ type Sequence struct {
Comment string `json:"comment,omitempty" yaml:"comment,omitempty" xml:"comment,omitempty"`
Sequence uint `json:"sequence,omitempty" yaml:"sequence,omitempty" xml:"sequence,omitempty"`
RefSchema *Schema `json:"-" yaml:"-" xml:"-"` // Excluded to prevent circular references
GUID string `json:"guid" yaml:"guid" xml:"guid"`
}
// SQLName returns the sequence name in lowercase for SQL compatibility.
@@ -158,6 +221,7 @@ type Column struct {
Comment string `json:"comment,omitempty" yaml:"comment,omitempty" xml:"comment,omitempty"`
Collation string `json:"collation,omitempty" yaml:"collation,omitempty" xml:"collation,omitempty"`
Sequence uint `json:"sequence,omitempty" yaml:"sequence,omitempty" xml:"sequence,omitempty"`
GUID string `json:"guid" yaml:"guid" xml:"guid"`
}
// SQLName returns the column name in lowercase for SQL compatibility.
@@ -180,6 +244,7 @@ type Index struct {
Include []string `json:"include,omitempty" yaml:"include,omitempty" xml:"include,omitempty"` // INCLUDE columns
Comment string `json:"comment,omitempty" yaml:"comment,omitempty" xml:"comment,omitempty"`
Sequence uint `json:"sequence,omitempty" yaml:"sequence,omitempty" xml:"sequence,omitempty"`
GUID string `json:"guid" yaml:"guid" xml:"guid"`
}
// SQLName returns the index name in lowercase for SQL compatibility.
@@ -214,6 +279,7 @@ type Relationship struct {
ThroughSchema string `json:"through_schema,omitempty" yaml:"through_schema,omitempty" xml:"through_schema,omitempty"`
Description string `json:"description,omitempty" yaml:"description,omitempty" xml:"description,omitempty"`
Sequence uint `json:"sequence,omitempty" yaml:"sequence,omitempty" xml:"sequence,omitempty"`
GUID string `json:"guid" yaml:"guid" xml:"guid"`
}
// SQLName returns the relationship name in lowercase for SQL compatibility.
@@ -238,6 +304,7 @@ type Constraint struct {
Deferrable bool `json:"deferrable,omitempty" yaml:"deferrable,omitempty" xml:"deferrable,omitempty"`
InitiallyDeferred bool `json:"initially_deferred,omitempty" yaml:"initially_deferred,omitempty" xml:"initially_deferred,omitempty"`
Sequence uint `json:"sequence,omitempty" yaml:"sequence,omitempty" xml:"sequence,omitempty"`
GUID string `json:"guid" yaml:"guid" xml:"guid"`
}
// SQLName returns the constraint name in lowercase for SQL compatibility.
@@ -253,6 +320,7 @@ type Enum struct {
Name string `json:"name" yaml:"name" xml:"name"`
Values []string `json:"values" yaml:"values" xml:"values"`
Schema string `json:"schema,omitempty" yaml:"schema,omitempty" xml:"schema,omitempty"`
GUID string `json:"guid" yaml:"guid" xml:"guid"`
}
// SQLName returns the enum name in lowercase for SQL compatibility.
@@ -260,6 +328,16 @@ func (d *Enum) SQLName() string {
return strings.ToLower(d.Name)
}
// InitEnum initializes a new Enum with empty values slice
func InitEnum(name, schema string) *Enum {
return &Enum{
Name: name,
Schema: schema,
Values: make([]string, 0),
GUID: uuid.New().String(),
}
}
// Supported constraint types.
const (
PrimaryKeyConstraint ConstraintType = "primary_key" // Primary key uniquely identifies each record
@@ -281,6 +359,7 @@ type Script struct {
Version string `json:"version,omitempty" yaml:"version,omitempty" xml:"version,omitempty"`
Priority int `json:"priority,omitempty" yaml:"priority,omitempty" xml:"priority,omitempty"`
Sequence uint `json:"sequence,omitempty" yaml:"sequence,omitempty" xml:"sequence,omitempty"`
GUID string `json:"guid" yaml:"guid" xml:"guid"`
}
// SQLName returns the script name in lowercase for SQL compatibility.
@@ -295,6 +374,8 @@ func InitDatabase(name string) *Database {
return &Database{
Name: name,
Schemas: make([]*Schema, 0),
Domains: make([]*Domain, 0),
GUID: uuid.New().String(),
}
}
@@ -308,6 +389,7 @@ func InitSchema(name string) *Schema {
Permissions: make(map[string]string),
Metadata: make(map[string]any),
Scripts: make([]*Script, 0),
GUID: uuid.New().String(),
}
}
@@ -321,6 +403,7 @@ func InitTable(name, schema string) *Table {
Indexes: make(map[string]*Index),
Relationships: make(map[string]*Relationship),
Metadata: make(map[string]any),
GUID: uuid.New().String(),
}
}
@@ -330,6 +413,7 @@ func InitColumn(name, table, schema string) *Column {
Name: name,
Table: table,
Schema: schema,
GUID: uuid.New().String(),
}
}
@@ -341,6 +425,7 @@ func InitIndex(name, table, schema string) *Index {
Schema: schema,
Columns: make([]string, 0),
Include: make([]string, 0),
GUID: uuid.New().String(),
}
}
@@ -353,6 +438,7 @@ func InitRelation(name, schema string) *Relationship {
Properties: make(map[string]string),
FromColumns: make([]string, 0),
ToColumns: make([]string, 0),
GUID: uuid.New().String(),
}
}
@@ -362,6 +448,7 @@ func InitRelationship(name string, relType RelationType) *Relationship {
Name: name,
Type: relType,
Properties: make(map[string]string),
GUID: uuid.New().String(),
}
}
@@ -372,6 +459,7 @@ func InitConstraint(name string, constraintType ConstraintType) *Constraint {
Type: constraintType,
Columns: make([]string, 0),
ReferencedColumns: make([]string, 0),
GUID: uuid.New().String(),
}
}
@@ -380,6 +468,7 @@ func InitScript(name string) *Script {
return &Script{
Name: name,
RunAfter: make([]string, 0),
GUID: uuid.New().String(),
}
}
@@ -390,6 +479,7 @@ func InitView(name, schema string) *View {
Schema: schema,
Columns: make(map[string]*Column),
Metadata: make(map[string]any),
GUID: uuid.New().String(),
}
}
@@ -400,5 +490,25 @@ func InitSequence(name, schema string) *Sequence {
Schema: schema,
IncrementBy: 1,
StartValue: 1,
GUID: uuid.New().String(),
}
}
// InitDomain initializes a new Domain with empty slices and maps
func InitDomain(name string) *Domain {
return &Domain{
Name: name,
Tables: make([]*DomainTable, 0),
Metadata: make(map[string]any),
GUID: uuid.New().String(),
}
}
// InitDomainTable initializes a new DomainTable reference
func InitDomainTable(tableName, schemaName string) *DomainTable {
return &DomainTable{
TableName: tableName,
SchemaName: schemaName,
GUID: uuid.New().String(),
}
}


@@ -4,31 +4,31 @@ import "strings"
var GoToStdTypes = map[string]string{
"bool": "boolean",
-"int64": "integer",
"int64": "bigint",
"int": "integer",
-"int8": "integer",
"int8": "smallint",
-"int16": "integer",
"int16": "smallint",
"int32": "integer",
"uint": "integer",
-"uint8": "integer",
"uint8": "smallint",
-"uint16": "integer",
"uint16": "smallint",
"uint32": "integer",
-"uint64": "integer",
"uint64": "bigint",
-"uintptr": "integer",
"uintptr": "bigint",
-"znullint64": "integer",
"znullint64": "bigint",
"znullint32": "integer",
-"znullbyte": "integer",
"znullbyte": "smallint",
"float64": "double",
"float32": "double",
"complex64": "double",
"complex128": "double",
"customfloat64": "double",
-"string": "string",
"string": "text",
-"Pointer": "integer",
"Pointer": "bigint",
"[]byte": "blob",
-"customdate": "string",
"customdate": "date",
-"customtime": "string",
"customtime": "time",
-"customtimestamp": "string",
"customtimestamp": "timestamp",
"sqlfloat64": "double",
"sqlfloat16": "double",
"sqluuid": "uuid",
@@ -36,9 +36,9 @@ var GoToStdTypes = map[string]string{
"sqljson": "json",
"sqlint64": "bigint",
"sqlint32": "integer",
-"sqlint16": "integer",
"sqlint16": "smallint",
"sqlbool": "boolean",
-"sqlstring": "string",
"sqlstring": "text",
"nullablejsonb": "jsonb",
"nullablejson": "json",
"nullableuuid": "uuid",
@@ -67,7 +67,7 @@ var GoToPGSQLTypes = map[string]string{
"float32": "real",
"complex64": "double precision",
"complex128": "double precision",
-"customfloat64": "double precisio",
"customfloat64": "double precision",
"string": "text",
"Pointer": "bigint",
"[]byte": "bytea",
@@ -81,9 +81,9 @@ var GoToPGSQLTypes = map[string]string{
"sqljson": "json",
"sqlint64": "bigint",
"sqlint32": "integer",
-"sqlint16": "integer",
"sqlint16": "smallint",
"sqlbool": "boolean",
-"sqlstring": "string",
"sqlstring": "text",
"nullablejsonb": "jsonb",
"nullablejson": "json",
"nullableuuid": "uuid",


@@ -632,6 +632,9 @@ func (r *Reader) parseColumn(fieldName string, fieldType ast.Expr, tag string, s
column.Name = parts[0]
}
// Track if we found explicit nullability markers
hasExplicitNullableMarker := false
// Parse tag attributes
for _, part := range parts[1:] {
kv := strings.SplitN(part, ":", 2)
@@ -649,6 +652,10 @@ func (r *Reader) parseColumn(fieldName string, fieldType ast.Expr, tag string, s
column.IsPrimaryKey = true
case "notnull":
column.NotNull = true
hasExplicitNullableMarker = true
case "nullzero":
column.NotNull = false
hasExplicitNullableMarker = true
case "autoincrement": case "autoincrement":
column.AutoIncrement = true column.AutoIncrement = true
case "default": case "default":
@@ -664,17 +671,15 @@ func (r *Reader) parseColumn(fieldName string, fieldType ast.Expr, tag string, s
// Determine if nullable based on Go type and bun tags
// In Bun:
-// - nullzero tag means the field is nullable (can be NULL in DB)
-// - absence of nullzero means the field is NOT NULL
-// - primitive types (int64, bool, string) are NOT NULL by default
-column.NotNull = true
-// Primary keys are always NOT NULL
-if strings.Contains(bunTag, "nullzero") {
-column.NotNull = false
-} else {
// - explicit "notnull" tag means NOT NULL
// - explicit "nullzero" tag means nullable
// - absence of explicit markers: infer from Go type
if !hasExplicitNullableMarker {
// Infer from Go type if no explicit marker found
column.NotNull = !r.isNullableGoType(fieldType)
}
// Primary keys are always NOT NULL
if column.IsPrimaryKey {
column.NotNull = true
}


@@ -4,7 +4,9 @@ import (
"bufio" "bufio"
"fmt" "fmt"
"os" "os"
"path/filepath"
"regexp" "regexp"
"sort"
"strings" "strings"
"git.warky.dev/wdevs/relspecgo/pkg/models" "git.warky.dev/wdevs/relspecgo/pkg/models"
@@ -24,11 +26,23 @@ func NewReader(options *readers.ReaderOptions) *Reader {
}
// ReadDatabase reads and parses DBML input, returning a Database model
// If FilePath points to a directory, all .dbml files are loaded and merged
func (r *Reader) ReadDatabase() (*models.Database, error) {
if r.options.FilePath == "" {
return nil, fmt.Errorf("file path is required for DBML reader")
}
// Check if path is a directory
info, err := os.Stat(r.options.FilePath)
if err != nil {
return nil, fmt.Errorf("failed to stat path: %w", err)
}
if info.IsDir() {
return r.readDirectoryDBML(r.options.FilePath)
}
// Single file - existing logic
content, err := os.ReadFile(r.options.FilePath)
if err != nil {
return nil, fmt.Errorf("failed to read file: %w", err)
@@ -67,15 +81,341 @@ func (r *Reader) ReadTable() (*models.Table, error) {
return schema.Tables[0], nil
}
// readDirectoryDBML processes all .dbml files in directory
// Returns merged Database model
func (r *Reader) readDirectoryDBML(dirPath string) (*models.Database, error) {
// Discover and sort DBML files
files, err := r.discoverDBMLFiles(dirPath)
if err != nil {
return nil, fmt.Errorf("failed to discover DBML files: %w", err)
}
// If no files found, return empty database
if len(files) == 0 {
db := models.InitDatabase("database")
if r.options.Metadata != nil {
if name, ok := r.options.Metadata["name"].(string); ok {
db.Name = name
}
}
return db, nil
}
// Initialize database (will be merged with files)
var db *models.Database
// Process each file in sorted order
for _, filePath := range files {
content, err := os.ReadFile(filePath)
if err != nil {
return nil, fmt.Errorf("failed to read file %s: %w", filePath, err)
}
fileDB, err := r.parseDBML(string(content))
if err != nil {
return nil, fmt.Errorf("failed to parse file %s: %w", filePath, err)
}
// First file initializes the database
if db == nil {
db = fileDB
} else {
// Subsequent files are merged
mergeDatabase(db, fileDB)
}
}
return db, nil
}
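For context, a hedged sketch of how a caller would exercise the directory mode; it mirrors the tests added later in this change, with `./schema` standing in for a real directory and the `pkg/readers/dbml` import path inferred from the test layout:

```go
package main

import (
	"fmt"
	"log"

	"git.warky.dev/wdevs/relspecgo/pkg/readers"
	"git.warky.dev/wdevs/relspecgo/pkg/readers/dbml"
)

func main() {
	// Pointing FilePath at a directory loads and merges every *.dbml file in it;
	// a single-file path keeps the original behaviour.
	reader := dbml.NewReader(&readers.ReaderOptions{FilePath: "./schema"})
	db, err := reader.ReadDatabase()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("loaded %d schema(s)\n", len(db.Schemas))
}
```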
// splitIdentifier splits a dotted identifier while respecting quotes
// Handles cases like: "schema.with.dots"."table"."column"
func splitIdentifier(s string) []string {
var parts []string
var current strings.Builder
inQuote := false
quoteChar := byte(0)
for i := 0; i < len(s); i++ {
ch := s[i]
if !inQuote {
switch ch {
case '"', '\'':
inQuote = true
quoteChar = ch
current.WriteByte(ch)
case '.':
if current.Len() > 0 {
parts = append(parts, current.String())
current.Reset()
}
default:
current.WriteByte(ch)
}
} else {
current.WriteByte(ch)
if ch == quoteChar {
inQuote = false
}
}
}
if current.Len() > 0 {
parts = append(parts, current.String())
}
return parts
}
// stripQuotes removes surrounding quotes and comments from an identifier
func stripQuotes(s string) string {
s = strings.TrimSpace(s)
// Remove DBML comments in brackets (e.g., [note: 'description'])
// This handles inline comments like: "table_name" [note: 'comment']
commentRegex := regexp.MustCompile(`\s*\[.*?\]\s*`)
s = commentRegex.ReplaceAllString(s, "")
// Trim again after removing comments
s = strings.TrimSpace(s)
// Remove surrounding quotes (double or single)
if len(s) >= 2 && ((s[0] == '"' && s[len(s)-1] == '"') || (s[0] == '\'' && s[len(s)-1] == '\'')) {
return s[1 : len(s)-1]
}
return s
}
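And the companion behaviour for stripQuotes, again as an illustrative in-package sketch with a made-up note:

```go
package dbml

import "fmt"

// demoStripQuotes is illustrative only.
func demoStripQuotes() {
	fmt.Println(stripQuotes(`"users" [note: 'core accounts']`)) // users
}
```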
// parseFilePrefix extracts numeric prefix from filename
// Examples: "1_schema.dbml" -> (1, true), "tables.dbml" -> (0, false)
func parseFilePrefix(filename string) (int, bool) {
base := filepath.Base(filename)
re := regexp.MustCompile(`^(\d+)[_-]`)
matches := re.FindStringSubmatch(base)
if len(matches) > 1 {
var prefix int
_, err := fmt.Sscanf(matches[1], "%d", &prefix)
if err == nil {
return prefix, true
}
}
return 0, false
}
// hasCommentedRefs scans file content for commented-out Ref statements
// Returns true if file contains lines like: // Ref: table.col > other.col
func hasCommentedRefs(filePath string) (bool, error) {
content, err := os.ReadFile(filePath)
if err != nil {
return false, err
}
scanner := bufio.NewScanner(strings.NewReader(string(content)))
commentedRefRegex := regexp.MustCompile(`^\s*//.*Ref:\s+`)
for scanner.Scan() {
line := scanner.Text()
if commentedRefRegex.MatchString(line) {
return true, nil
}
}
return false, nil
}
// discoverDBMLFiles finds all .dbml files in directory and returns them sorted
func (r *Reader) discoverDBMLFiles(dirPath string) ([]string, error) {
pattern := filepath.Join(dirPath, "*.dbml")
files, err := filepath.Glob(pattern)
if err != nil {
return nil, fmt.Errorf("failed to glob .dbml files: %w", err)
}
return sortDBMLFiles(files), nil
}
// sortDBMLFiles sorts files by:
// 1. Files without commented refs (by numeric prefix, then alphabetically)
// 2. Files with commented refs (by numeric prefix, then alphabetically)
func sortDBMLFiles(files []string) []string {
// Create a slice to hold file info for sorting
type fileInfo struct {
path string
hasCommented bool
prefix int
hasPrefix bool
basename string
}
fileInfos := make([]fileInfo, 0, len(files))
for _, file := range files {
hasCommented, err := hasCommentedRefs(file)
if err != nil {
// If we can't read the file, treat it as not having commented refs
hasCommented = false
}
prefix, hasPrefix := parseFilePrefix(file)
basename := filepath.Base(file)
fileInfos = append(fileInfos, fileInfo{
path: file,
hasCommented: hasCommented,
prefix: prefix,
hasPrefix: hasPrefix,
basename: basename,
})
}
// Sort by: hasCommented (false first), hasPrefix (true first), prefix, basename
sort.Slice(fileInfos, func(i, j int) bool {
// First, sort by commented refs (files without commented refs come first)
if fileInfos[i].hasCommented != fileInfos[j].hasCommented {
return !fileInfos[i].hasCommented
}
// Then by presence of prefix (files with prefix come first)
if fileInfos[i].hasPrefix != fileInfos[j].hasPrefix {
return fileInfos[i].hasPrefix
}
// If both have prefix, sort by prefix value
if fileInfos[i].hasPrefix && fileInfos[j].hasPrefix {
if fileInfos[i].prefix != fileInfos[j].prefix {
return fileInfos[i].prefix < fileInfos[j].prefix
}
}
// Finally, sort alphabetically by basename
return fileInfos[i].basename < fileInfos[j].basename
})
// Extract sorted paths
sortedFiles := make([]string, len(fileInfos))
for i, info := range fileInfos {
sortedFiles[i] = info.path
}
return sortedFiles
}
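A worked example of the resulting order, as an in-package sketch with hypothetical filenames (paths that cannot be read are treated as having no commented refs, so even nonexistent names sort deterministically):

```go
package dbml

import "fmt"

// demoSortOrder is illustrative only.
func demoSortOrder() {
	files := []string{"tables.dbml", "2_posts.dbml", "10_views.dbml", "1_users.dbml"}
	fmt.Println(sortDBMLFiles(files))
	// [1_users.dbml 2_posts.dbml 10_views.dbml tables.dbml]
	// Numeric prefixes compare numerically (10 after 2), prefixed files come
	// before unprefixed ones, and any file containing a commented-out
	// "// Ref:" line would sort after all of these.
}
```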
// mergeTable combines two table definitions
// Merges: Columns (map), Constraints (map), Indexes (map), Relationships (map)
// Uses first non-empty Description
func mergeTable(baseTable, fileTable *models.Table) {
// Merge columns (map naturally merges - later keys overwrite)
for key, col := range fileTable.Columns {
baseTable.Columns[key] = col
}
// Merge constraints
for key, constraint := range fileTable.Constraints {
baseTable.Constraints[key] = constraint
}
// Merge indexes
for key, index := range fileTable.Indexes {
baseTable.Indexes[key] = index
}
// Merge relationships
for key, rel := range fileTable.Relationships {
baseTable.Relationships[key] = rel
}
// Use first non-empty description
if baseTable.Description == "" && fileTable.Description != "" {
baseTable.Description = fileTable.Description
}
// Merge metadata maps
if baseTable.Metadata == nil {
baseTable.Metadata = make(map[string]any)
}
for key, val := range fileTable.Metadata {
baseTable.Metadata[key] = val
}
}
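To make the merge semantics concrete, a small sketch on hypothetical tables: map entries from the later file win, while the first non-empty description is kept:

```go
package dbml

import "git.warky.dev/wdevs/relspecgo/pkg/models"

// demoMergeTable is illustrative only.
func demoMergeTable() {
	base := models.InitTable("users", "public")
	base.Columns["id"] = models.InitColumn("id", "users", "public")

	patch := models.InitTable("users", "public")
	patch.Columns["email"] = models.InitColumn("email", "users", "public")
	patch.Description = "Account records"

	mergeTable(base, patch)
	// base now holds both "id" and "email", and
	// base.Description == "Account records".
}
```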
// mergeSchema finds or creates schema and merges tables
func mergeSchema(baseDB *models.Database, fileSchema *models.Schema) {
// Find existing schema by name (normalize names by stripping quotes)
var existingSchema *models.Schema
fileSchemaName := stripQuotes(fileSchema.Name)
for _, schema := range baseDB.Schemas {
if stripQuotes(schema.Name) == fileSchemaName {
existingSchema = schema
break
}
}
// If schema doesn't exist, add it and return
if existingSchema == nil {
baseDB.Schemas = append(baseDB.Schemas, fileSchema)
return
}
// Merge tables from fileSchema into existingSchema
for _, fileTable := range fileSchema.Tables {
// Find existing table by name (normalize names by stripping quotes)
var existingTable *models.Table
fileTableName := stripQuotes(fileTable.Name)
for _, table := range existingSchema.Tables {
if stripQuotes(table.Name) == fileTableName {
existingTable = table
break
}
}
// If table doesn't exist, add it
if existingTable == nil {
existingSchema.Tables = append(existingSchema.Tables, fileTable)
} else {
// Table already exists from an earlier file - merge the new definition into it
mergeTable(existingTable, fileTable)
}
}
// Merge other schema properties
existingSchema.Views = append(existingSchema.Views, fileSchema.Views...)
existingSchema.Sequences = append(existingSchema.Sequences, fileSchema.Sequences...)
existingSchema.Scripts = append(existingSchema.Scripts, fileSchema.Scripts...)
// Merge permissions
if existingSchema.Permissions == nil {
existingSchema.Permissions = make(map[string]string)
}
for key, val := range fileSchema.Permissions {
existingSchema.Permissions[key] = val
}
// Merge metadata
if existingSchema.Metadata == nil {
existingSchema.Metadata = make(map[string]any)
}
for key, val := range fileSchema.Metadata {
existingSchema.Metadata[key] = val
}
}
// mergeDatabase merges schemas from fileDB into baseDB
func mergeDatabase(baseDB, fileDB *models.Database) {
// Merge each schema from fileDB
for _, fileSchema := range fileDB.Schemas {
mergeSchema(baseDB, fileSchema)
}
// Merge domains
baseDB.Domains = append(baseDB.Domains, fileDB.Domains...)
// Use first non-empty description
if baseDB.Description == "" && fileDB.Description != "" {
baseDB.Description = fileDB.Description
}
}
// parseDBML parses DBML content and returns a Database model
func (r *Reader) parseDBML(content string) (*models.Database, error) {
db := models.InitDatabase("database")
@@ -109,7 +449,9 @@ func (r *Reader) parseDBML(content string) (*models.Database, error) {
// Parse Table definition
if matches := tableRegex.FindStringSubmatch(line); matches != nil {
tableName := matches[1]
-parts := strings.Split(tableName, ".")
// Strip comments/notes before parsing to avoid dots in notes
tableName = strings.TrimSpace(regexp.MustCompile(`\s*\[.*?\]\s*`).ReplaceAllString(tableName, ""))
parts := splitIdentifier(tableName)
if len(parts) == 2 {
currentSchema = stripQuotes(parts[0])
@@ -261,8 +603,10 @@ func (r *Reader) parseColumn(line, tableName, schemaName string) (*models.Column
column.Default = strings.Trim(defaultVal, "'\"")
} else if attr == "unique" {
// Create a unique constraint
// Clean table name by removing leading underscores to avoid double underscores
cleanTableName := strings.TrimLeft(tableName, "_")
uniqueConstraint := models.InitConstraint(
-fmt.Sprintf("uq_%s", columnName),
fmt.Sprintf("ukey_%s_%s", cleanTableName, columnName),
models.UniqueConstraint,
)
uniqueConstraint.Schema = schemaName
@@ -287,10 +631,10 @@ func (r *Reader) parseColumn(line, tableName, schemaName string) (*models.Column
refOp := strings.TrimSpace(refStr)
var isReverse bool
if strings.HasPrefix(refOp, "<") {
-isReverse = column.IsPrimaryKey // < on PK means "is referenced by" (reverse)
-} else if strings.HasPrefix(refOp, ">") {
-isReverse = !column.IsPrimaryKey // > on FK means reverse
// < means "is referenced by" - only makes sense on PK columns
isReverse = column.IsPrimaryKey
}
// > means "references" - always a forward FK, never reverse
constraint = r.parseRef(refStr)
if constraint != nil {
@@ -310,8 +654,8 @@ func (r *Reader) parseColumn(line, tableName, schemaName string) (*models.Column
constraint.Table = tableName
constraint.Columns = []string{columnName}
}
-// Generate short constraint name based on the column
-constraint.Name = fmt.Sprintf("fk_%s", constraint.Columns[0])
// Generate constraint name based on table and columns
constraint.Name = fmt.Sprintf("fk_%s_%s", constraint.Table, strings.Join(constraint.Columns, "_"))
}
}
}
@@ -332,29 +676,33 @@ func (r *Reader) parseIndex(line, tableName, schemaName string) *models.Index {
// Format: (columns) [attributes] OR columnname [attributes]
var columns []string
-if strings.Contains(line, "(") && strings.Contains(line, ")") {
// Find the attributes section to avoid parsing parentheses in notes/attributes
attrStart := strings.Index(line, "[")
columnPart := line
if attrStart > 0 {
columnPart = line[:attrStart]
}
if strings.Contains(columnPart, "(") && strings.Contains(columnPart, ")") {
// Multi-column format: (col1, col2) [attributes]
-colStart := strings.Index(line, "(")
-colEnd := strings.Index(line, ")")
colStart := strings.Index(columnPart, "(")
colEnd := strings.Index(columnPart, ")")
if colStart >= colEnd {
return nil
}
-columnsStr := line[colStart+1 : colEnd]
columnsStr := columnPart[colStart+1 : colEnd]
for _, col := range strings.Split(columnsStr, ",") {
columns = append(columns, stripQuotes(strings.TrimSpace(col)))
}
-} else if strings.Contains(line, "[") {
} else if attrStart > 0 {
// Single column format: columnname [attributes]
// Extract column name before the bracket
-idx := strings.Index(line, "[")
-if idx > 0 {
-colName := strings.TrimSpace(line[:idx])
colName := strings.TrimSpace(columnPart)
if colName != "" {
columns = []string{stripQuotes(colName)}
}
-}
}
if len(columns) == 0 {
return nil
@@ -391,7 +739,11 @@ func (r *Reader) parseIndex(line, tableName, schemaName string) *models.Index {
// Generate name if not provided
if index.Name == "" {
-index.Name = fmt.Sprintf("idx_%s_%s", tableName, strings.Join(columns, "_"))
prefix := "idx"
if index.Unique {
prefix = "uidx"
}
index.Name = fmt.Sprintf("%s_%s_%s", prefix, tableName, strings.Join(columns, "_"))
}
return index
@@ -451,10 +803,10 @@ func (r *Reader) parseRef(refStr string) *models.Constraint {
return nil
}
-// Generate short constraint name based on the source column
-constraintName := fmt.Sprintf("fk_%s_%s", fromTable, toTable)
-if len(fromColumns) > 0 {
-constraintName = fmt.Sprintf("fk_%s", fromColumns[0])
// Generate constraint name based on table and columns
constraintName := fmt.Sprintf("fk_%s_%s", fromTable, strings.Join(fromColumns, "_"))
if len(fromColumns) == 0 {
constraintName = fmt.Sprintf("fk_%s_%s", fromTable, toTable)
}
constraint := models.InitConstraint(
@@ -510,7 +862,7 @@ func (r *Reader) parseTableRef(ref string) (schema, table string, columns []stri
}
// Parse schema, table, and optionally column
-parts := strings.Split(strings.TrimSpace(ref), ".")
parts := splitIdentifier(strings.TrimSpace(ref))
if len(parts) == 3 {
// Format: "schema"."table"."column"
schema = stripQuotes(parts[0])


@@ -1,6 +1,7 @@
package dbml
import (
"os"
"path/filepath"
"testing"
@@ -517,3 +518,356 @@ func TestGetForeignKeys(t *testing.T) {
t.Error("Expected foreign key constraint type") t.Error("Expected foreign key constraint type")
} }
} }
// Tests for multi-file directory loading
func TestReadDirectory_MultipleFiles(t *testing.T) {
opts := &readers.ReaderOptions{
FilePath: filepath.Join("..", "..", "..", "tests", "assets", "dbml", "multifile"),
}
reader := NewReader(opts)
db, err := reader.ReadDatabase()
if err != nil {
t.Fatalf("ReadDatabase() error = %v", err)
}
if db == nil {
t.Fatal("ReadDatabase() returned nil database")
}
// Should have public schema
if len(db.Schemas) == 0 {
t.Fatal("Expected at least one schema")
}
var publicSchema *models.Schema
for _, schema := range db.Schemas {
if schema.Name == "public" {
publicSchema = schema
break
}
}
if publicSchema == nil {
t.Fatal("Public schema not found")
}
// Should have 3 tables: users, posts, comments
if len(publicSchema.Tables) != 3 {
t.Fatalf("Expected 3 tables, got %d", len(publicSchema.Tables))
}
// Find tables
var usersTable, postsTable, commentsTable *models.Table
for _, table := range publicSchema.Tables {
switch table.Name {
case "users":
usersTable = table
case "posts":
postsTable = table
case "comments":
commentsTable = table
}
}
if usersTable == nil {
t.Fatal("Users table not found")
}
if postsTable == nil {
t.Fatal("Posts table not found")
}
if commentsTable == nil {
t.Fatal("Comments table not found")
}
// Verify users table has merged columns from 1_users.dbml and 3_add_columns.dbml
expectedUserColumns := []string{"id", "email", "name", "created_at"}
if len(usersTable.Columns) != len(expectedUserColumns) {
t.Errorf("Expected %d columns in users table, got %d", len(expectedUserColumns), len(usersTable.Columns))
}
for _, colName := range expectedUserColumns {
if _, exists := usersTable.Columns[colName]; !exists {
t.Errorf("Expected column '%s' in users table", colName)
}
}
// Verify posts table columns
expectedPostColumns := []string{"id", "user_id", "title", "content", "created_at"}
for _, colName := range expectedPostColumns {
if _, exists := postsTable.Columns[colName]; !exists {
t.Errorf("Expected column '%s' in posts table", colName)
}
}
}
func TestReadDirectory_TableMerging(t *testing.T) {
opts := &readers.ReaderOptions{
FilePath: filepath.Join("..", "..", "..", "tests", "assets", "dbml", "multifile"),
}
reader := NewReader(opts)
db, err := reader.ReadDatabase()
if err != nil {
t.Fatalf("ReadDatabase() error = %v", err)
}
// Find users table
var usersTable *models.Table
for _, schema := range db.Schemas {
for _, table := range schema.Tables {
if table.Name == "users" && schema.Name == "public" {
usersTable = table
break
}
}
}
if usersTable == nil {
t.Fatal("Users table not found")
}
// Verify columns from file 1 (id, email)
if _, exists := usersTable.Columns["id"]; !exists {
t.Error("Column 'id' from 1_users.dbml not found")
}
if _, exists := usersTable.Columns["email"]; !exists {
t.Error("Column 'email' from 1_users.dbml not found")
}
// Verify columns from file 3 (name, created_at)
if _, exists := usersTable.Columns["name"]; !exists {
t.Error("Column 'name' from 3_add_columns.dbml not found")
}
if _, exists := usersTable.Columns["created_at"]; !exists {
t.Error("Column 'created_at' from 3_add_columns.dbml not found")
}
// Verify column properties from file 1
emailCol := usersTable.Columns["email"]
if !emailCol.NotNull {
t.Error("Email column should be not null (from 1_users.dbml)")
}
if emailCol.Type != "varchar(255)" {
t.Errorf("Expected email type 'varchar(255)', got '%s'", emailCol.Type)
}
}
func TestReadDirectory_CommentedRefsLast(t *testing.T) {
// This test verifies that files with commented refs are processed last
// by checking that the file discovery returns them in the correct order
dirPath := filepath.Join("..", "..", "..", "tests", "assets", "dbml", "multifile")
opts := &readers.ReaderOptions{
FilePath: dirPath,
}
reader := NewReader(opts)
files, err := reader.discoverDBMLFiles(dirPath)
if err != nil {
t.Fatalf("discoverDBMLFiles() error = %v", err)
}
if len(files) < 2 {
t.Skip("Not enough files to test ordering")
}
// Check that 9_refs.dbml (which has commented refs) comes last
lastFile := filepath.Base(files[len(files)-1])
if lastFile != "9_refs.dbml" {
t.Errorf("Expected last file to be '9_refs.dbml' (has commented refs), got '%s'", lastFile)
}
// Check that numbered files without commented refs come first
firstFile := filepath.Base(files[0])
if firstFile != "1_users.dbml" {
t.Errorf("Expected first file to be '1_users.dbml', got '%s'", firstFile)
}
}
func TestReadDirectory_EmptyDirectory(t *testing.T) {
// Create a temporary empty directory
tmpDir := filepath.Join("..", "..", "..", "tests", "assets", "dbml", "empty_test_dir")
err := os.MkdirAll(tmpDir, 0755)
if err != nil {
t.Fatalf("Failed to create temp directory: %v", err)
}
defer os.RemoveAll(tmpDir)
opts := &readers.ReaderOptions{
FilePath: tmpDir,
}
reader := NewReader(opts)
db, err := reader.ReadDatabase()
if err != nil {
t.Fatalf("ReadDatabase() should not error on empty directory, got: %v", err)
}
if db == nil {
t.Fatal("ReadDatabase() returned nil database")
}
// Empty directory should return empty database
if len(db.Schemas) != 0 {
t.Errorf("Expected 0 schemas for empty directory, got %d", len(db.Schemas))
}
}
func TestReadDatabase_BackwardCompat(t *testing.T) {
// Test that single file loading still works
opts := &readers.ReaderOptions{
FilePath: filepath.Join("..", "..", "..", "tests", "assets", "dbml", "simple.dbml"),
}
reader := NewReader(opts)
db, err := reader.ReadDatabase()
if err != nil {
t.Fatalf("ReadDatabase() error = %v", err)
}
if db == nil {
t.Fatal("ReadDatabase() returned nil database")
}
if len(db.Schemas) == 0 {
t.Fatal("Expected at least one schema")
}
schema := db.Schemas[0]
if len(schema.Tables) != 1 {
t.Fatalf("Expected 1 table, got %d", len(schema.Tables))
}
table := schema.Tables[0]
if table.Name != "users" {
t.Errorf("Expected table name 'users', got '%s'", table.Name)
}
}
func TestParseFilePrefix(t *testing.T) {
tests := []struct {
filename string
wantPrefix int
wantHas bool
}{
{"1_schema.dbml", 1, true},
{"2_tables.dbml", 2, true},
{"10_relationships.dbml", 10, true},
{"99_data.dbml", 99, true},
{"schema.dbml", 0, false},
{"tables_no_prefix.dbml", 0, false},
{"/path/to/1_file.dbml", 1, true},
{"/path/to/file.dbml", 0, false},
{"1-file.dbml", 1, true},
{"2-another.dbml", 2, true},
}
for _, tt := range tests {
t.Run(tt.filename, func(t *testing.T) {
gotPrefix, gotHas := parseFilePrefix(tt.filename)
if gotPrefix != tt.wantPrefix {
t.Errorf("parseFilePrefix(%s) prefix = %d, want %d", tt.filename, gotPrefix, tt.wantPrefix)
}
if gotHas != tt.wantHas {
t.Errorf("parseFilePrefix(%s) hasPrefix = %v, want %v", tt.filename, gotHas, tt.wantHas)
}
})
}
}
func TestConstraintNaming(t *testing.T) {
// Test that constraints are named with proper prefixes
opts := &readers.ReaderOptions{
FilePath: filepath.Join("..", "..", "..", "tests", "assets", "dbml", "complex.dbml"),
}
reader := NewReader(opts)
db, err := reader.ReadDatabase()
if err != nil {
t.Fatalf("ReadDatabase() error = %v", err)
}
// Find users table
var usersTable *models.Table
var postsTable *models.Table
for _, schema := range db.Schemas {
for _, table := range schema.Tables {
if table.Name == "users" {
usersTable = table
} else if table.Name == "posts" {
postsTable = table
}
}
}
if usersTable == nil {
t.Fatal("Users table not found")
}
if postsTable == nil {
t.Fatal("Posts table not found")
}
// Test unique constraint naming: ukey_table_column
if _, exists := usersTable.Constraints["ukey_users_email"]; !exists {
t.Error("Expected unique constraint 'ukey_users_email' not found")
t.Logf("Available constraints: %v", getKeys(usersTable.Constraints))
}
if _, exists := postsTable.Constraints["ukey_posts_slug"]; !exists {
t.Error("Expected unique constraint 'ukey_posts_slug' not found")
t.Logf("Available constraints: %v", getKeys(postsTable.Constraints))
}
// Test foreign key naming: fk_table_column
if _, exists := postsTable.Constraints["fk_posts_user_id"]; !exists {
t.Error("Expected foreign key 'fk_posts_user_id' not found")
t.Logf("Available constraints: %v", getKeys(postsTable.Constraints))
}
// Test unique index naming: uidx_table_columns
if _, exists := postsTable.Indexes["uidx_posts_slug"]; !exists {
t.Error("Expected unique index 'uidx_posts_slug' not found")
t.Logf("Available indexes: %v", getKeys(postsTable.Indexes))
}
// Test regular index naming: idx_table_columns
if _, exists := postsTable.Indexes["idx_posts_user_id_published"]; !exists {
t.Error("Expected index 'idx_posts_user_id_published' not found")
t.Logf("Available indexes: %v", getKeys(postsTable.Indexes))
}
}
func getKeys[V any](m map[string]V) []string {
keys := make([]string, 0, len(m))
for k := range m {
keys = append(keys, k)
}
return keys
}
func TestHasCommentedRefs(t *testing.T) {
// Test with the actual multifile test fixtures
tests := []struct {
filename string
wantHas bool
}{
{filepath.Join("..", "..", "..", "tests", "assets", "dbml", "multifile", "1_users.dbml"), false},
{filepath.Join("..", "..", "..", "tests", "assets", "dbml", "multifile", "2_posts.dbml"), false},
{filepath.Join("..", "..", "..", "tests", "assets", "dbml", "multifile", "3_add_columns.dbml"), false},
{filepath.Join("..", "..", "..", "tests", "assets", "dbml", "multifile", "9_refs.dbml"), true},
}
for _, tt := range tests {
t.Run(filepath.Base(tt.filename), func(t *testing.T) {
gotHas, err := hasCommentedRefs(tt.filename)
if err != nil {
t.Fatalf("hasCommentedRefs() error = %v", err)
}
if gotHas != tt.wantHas {
t.Errorf("hasCommentedRefs(%s) = %v, want %v", filepath.Base(tt.filename), gotHas, tt.wantHas)
}
})
}
}


@@ -79,6 +79,8 @@ func (r *Reader) convertToDatabase(dctx *models.DCTXDictionary) (*models.Databas
db := models.InitDatabase(dbName)
schema := models.InitSchema("public")
// Note: DCTX doesn't have database GUID, but schema can use dictionary name if available
// Create GUID mappings for tables and keys
tableGuidMap := make(map[string]string) // GUID -> table name
keyGuidMap := make(map[string]*models.DCTXKey) // GUID -> key definition
@@ -162,6 +164,10 @@ func (r *Reader) convertTable(dctxTable *models.DCTXTable) (*models.Table, map[s
tableName := r.sanitizeName(dctxTable.Name)
table := models.InitTable(tableName, "public")
table.Description = dctxTable.Description
// Assign GUID from DCTX table
if dctxTable.Guid != "" {
table.GUID = dctxTable.Guid
}
fieldGuidMap := make(map[string]string)
@@ -202,6 +208,10 @@ func (r *Reader) convertField(dctxField *models.DCTXField, tableName string) ([]
// Convert single field
column := models.InitColumn(r.sanitizeName(dctxField.Name), tableName, "public")
// Assign GUID from DCTX field
if dctxField.Guid != "" {
column.GUID = dctxField.Guid
}
// Map Clarion data types
dataType, length := r.mapDataType(dctxField.DataType, dctxField.Size)
@@ -346,6 +356,10 @@ func (r *Reader) convertKey(dctxKey *models.DCTXKey, table *models.Table, fieldG
constraint.Table = table.Name
constraint.Schema = table.Schema
constraint.Columns = columns
// Assign GUID from DCTX key
if dctxKey.Guid != "" {
constraint.GUID = dctxKey.Guid
}
table.Constraints[constraint.Name] = constraint
@@ -366,6 +380,10 @@ func (r *Reader) convertKey(dctxKey *models.DCTXKey, table *models.Table, fieldG
index.Columns = columns
index.Unique = dctxKey.Unique
index.Type = "btree"
// Assign GUID from DCTX key
if dctxKey.Guid != "" {
index.GUID = dctxKey.Guid
}
table.Indexes[index.Name] = index
return nil
@@ -460,6 +478,10 @@ func (r *Reader) processRelations(dctx *models.DCTXDictionary, schema *models.Sc
constraint.ReferencedColumns = pkColumns
constraint.OnDelete = r.mapReferentialAction(relation.Delete)
constraint.OnUpdate = r.mapReferentialAction(relation.Update)
// Assign GUID from DCTX relation
if relation.Guid != "" {
constraint.GUID = relation.Guid
}
foreignTable.Constraints[fkName] = constraint
@@ -473,6 +495,10 @@ func (r *Reader) processRelations(dctx *models.DCTXDictionary, schema *models.Sc
relationship.ForeignKey = fkName
relationship.Properties["on_delete"] = constraint.OnDelete
relationship.Properties["on_update"] = constraint.OnUpdate
// Assign GUID from DCTX relation
if relation.Guid != "" {
relationship.GUID = relation.Guid
}
foreignTable.Relationships[relationshipName] = relationship
}


@@ -140,6 +140,32 @@ func (r *Reader) convertToDatabase(drawSchema *drawdb.DrawDBSchema) (*models.Dat
db.Schemas = append(db.Schemas, schema)
}
// Convert DrawDB subject areas to domains
for _, area := range drawSchema.SubjectAreas {
domain := models.InitDomain(area.Name)
// Find all tables that visually belong to this area
// A table belongs to an area if its position is within the area bounds
for _, drawTable := range drawSchema.Tables {
if drawTable.X >= area.X && drawTable.X <= (area.X+area.Width) &&
drawTable.Y >= area.Y && drawTable.Y <= (area.Y+area.Height) {
schemaName := drawTable.Schema
if schemaName == "" {
schemaName = "public"
}
domainTable := models.InitDomainTable(drawTable.Name, schemaName)
domain.Tables = append(domain.Tables, domainTable)
}
}
// Only add domain if it has tables
if len(domain.Tables) > 0 {
db.Domains = append(db.Domains, domain)
}
}
return db, nil
}

View File

@@ -241,11 +241,9 @@ func (r *Reader) parsePgEnum(line string, matches []string) *models.Enum {
}
}
-return &models.Enum{
-Name: enumName,
-Values: values,
-Schema: "public",
-}
enum := models.InitEnum(enumName, "public")
enum.Values = values
return enum
}
// parseTableBlock parses a complete pgTable definition block


@@ -260,11 +260,7 @@ func (r *Reader) parseType(typeName string, lines []string, schema *models.Schem
}
func (r *Reader) parseEnum(enumName string, lines []string, schema *models.Schema) {
-enum := &models.Enum{
-Name: enumName,
-Schema: schema.Name,
-Values: make([]string, 0),
-}
enum := models.InitEnum(enumName, schema.Name)
for _, line := range lines {
trimmed := strings.TrimSpace(line)


@@ -329,10 +329,10 @@ func (r *Reader) deriveRelationship(table *models.Table, fk *models.Constraint)
relationshipName := fmt.Sprintf("%s_to_%s", table.Name, fk.ReferencedTable)
relationship := models.InitRelationship(relationshipName, models.OneToMany)
-relationship.FromTable = fk.ReferencedTable
-relationship.FromSchema = fk.ReferencedSchema
-relationship.ToTable = table.Name
-relationship.ToSchema = table.Schema
relationship.FromTable = table.Name
relationship.FromSchema = table.Schema
relationship.ToTable = fk.ReferencedTable
relationship.ToSchema = fk.ReferencedSchema
relationship.ForeignKey = fk.Name
// Store constraint actions in properties


@@ -328,12 +328,12 @@ func TestDeriveRelationship(t *testing.T) {
t.Errorf("Expected relationship type %s, got %s", models.OneToMany, rel.Type) t.Errorf("Expected relationship type %s, got %s", models.OneToMany, rel.Type)
} }
if rel.FromTable != "users" { if rel.FromTable != "orders" {
t.Errorf("Expected FromTable 'users', got '%s'", rel.FromTable) t.Errorf("Expected FromTable 'orders', got '%s'", rel.FromTable)
} }
if rel.ToTable != "orders" { if rel.ToTable != "users" {
t.Errorf("Expected ToTable 'orders', got '%s'", rel.ToTable) t.Errorf("Expected ToTable 'users', got '%s'", rel.ToTable)
} }
if rel.ForeignKey != "fk_orders_user_id" { if rel.ForeignKey != "fk_orders_user_id" {


@@ -128,11 +128,7 @@ func (r *Reader) parsePrisma(content string) (*models.Database, error) {
if matches := enumRegex.FindStringSubmatch(trimmed); matches != nil {
currentBlock = "enum"
enumName := matches[1]
-currentEnum = &models.Enum{
-Name: enumName,
-Schema: "public",
-Values: make([]string, 0),
-}
currentEnum = models.InitEnum(enumName, "public")
blockContent = []string{}
continue
}

View File

@@ -93,6 +93,7 @@ fmt.Printf("Found %d scripts\n", len(schema.Scripts))
## Features
- **Recursive Directory Scanning**: Automatically scans all subdirectories
- **Symlink Skipping**: Symbolic links are automatically skipped (prevents loops and duplicates)
- **Multiple Extensions**: Supports both `.sql` and `.pgsql` files
- **Flexible Naming**: Extract metadata from filename patterns
- **Error Handling**: Validates directory existence and file accessibility
@@ -153,8 +154,9 @@ go test ./pkg/readers/sqldir/
```
Tests include:
- Valid file parsing (underscore and hyphen formats)
- Recursive directory scanning
- Symlink skipping
- Invalid filename handling
- Empty directory handling
- Error conditions
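As a hedged sketch of the reader API these tests exercise (the `./scripts` directory is hypothetical, and the `priority_sequence_name.sql` filename convention is inferred from fixtures such as `1_001_real_file.sql` in the symlink test below):

```go
package main

import (
	"fmt"
	"log"

	"git.warky.dev/wdevs/relspecgo/pkg/readers"
	"git.warky.dev/wdevs/relspecgo/pkg/readers/sqldir"
)

func main() {
	// Filenames like "1_001_create_tables.sql" carry priority, sequence,
	// and script name (see the tests below).
	reader := sqldir.NewReader(&readers.ReaderOptions{FilePath: "./scripts"})
	db, err := reader.ReadDatabase()
	if err != nil {
		log.Fatal(err)
	}
	for _, script := range db.Schemas[0].Scripts {
		fmt.Println(script.Name, script.Priority, script.Sequence)
	}
}
```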


@@ -107,11 +107,20 @@ func (r *Reader) readScripts() ([]*models.Script, error) {
return err
}
-// Skip directories
// Don't process directories as files (WalkDir still descends into them recursively)
if d.IsDir() {
return nil
}
// Skip symlinks
info, err := d.Info()
if err != nil {
return err
}
if info.Mode()&os.ModeSymlink != 0 {
return nil
}
// Get filename
filename := d.Name()
@@ -150,13 +159,11 @@ func (r *Reader) readScripts() ([]*models.Script, error) {
}
// Create Script model
-script := &models.Script{
-Name: name,
-Description: fmt.Sprintf("SQL script from %s", relPath),
-SQL: string(content),
-Priority: priority,
-Sequence: uint(sequence),
-}
script := models.InitScript(name)
script.Description = fmt.Sprintf("SQL script from %s", relPath)
script.SQL = string(content)
script.Priority = priority
script.Sequence = uint(sequence)
scripts = append(scripts, script)


@@ -373,3 +373,65 @@ func TestReader_MixedFormat(t *testing.T) {
}
}
}
func TestReader_SkipSymlinks(t *testing.T) {
// Create temporary test directory
tempDir, err := os.MkdirTemp("", "sqldir-test-symlink-*")
if err != nil {
t.Fatalf("Failed to create temp directory: %v", err)
}
defer os.RemoveAll(tempDir)
// Create a real SQL file
realFile := filepath.Join(tempDir, "1_001_real_file.sql")
if err := os.WriteFile(realFile, []byte("SELECT 1;"), 0644); err != nil {
t.Fatalf("Failed to create real file: %v", err)
}
// Create another file to link to
targetFile := filepath.Join(tempDir, "2_001_target.sql")
if err := os.WriteFile(targetFile, []byte("SELECT 2;"), 0644); err != nil {
t.Fatalf("Failed to create target file: %v", err)
}
// Create a symlink to the target file (this should be skipped)
symlinkFile := filepath.Join(tempDir, "3_001_symlink.sql")
if err := os.Symlink(targetFile, symlinkFile); err != nil {
// Skip test on systems that don't support symlinks (e.g., Windows without admin)
t.Skipf("Symlink creation not supported: %v", err)
}
// Create reader
reader := NewReader(&readers.ReaderOptions{
FilePath: tempDir,
})
// Read database
db, err := reader.ReadDatabase()
if err != nil {
t.Fatalf("ReadDatabase failed: %v", err)
}
schema := db.Schemas[0]
// Should only have 2 scripts (real_file and target), symlink should be skipped
if len(schema.Scripts) != 2 {
t.Errorf("Expected 2 scripts (symlink should be skipped), got %d", len(schema.Scripts))
}
// Verify the scripts are the real files, not the symlink
scriptNames := make(map[string]bool)
for _, script := range schema.Scripts {
scriptNames[script.Name] = true
}
if !scriptNames["real_file"] {
t.Error("Expected 'real_file' script to be present")
}
if !scriptNames["target"] {
t.Error("Expected 'target' script to be present")
}
if scriptNames["symlink"] {
t.Error("Symlink script should have been skipped but was found")
}
}

pkg/ui/column_dataops.go Normal file

@@ -0,0 +1,95 @@
package ui
import "git.warky.dev/wdevs/relspecgo/pkg/models"
// Column data operations - business logic for column management
// CreateColumn creates a new column and adds it to a table
func (se *SchemaEditor) CreateColumn(schemaIndex, tableIndex int, name, dataType string, isPrimaryKey, isNotNull bool) *models.Column {
table := se.GetTable(schemaIndex, tableIndex)
if table == nil {
return nil
}
if table.Columns == nil {
table.Columns = make(map[string]*models.Column)
}
newColumn := &models.Column{
Name: name,
Type: dataType,
IsPrimaryKey: isPrimaryKey,
NotNull: isNotNull,
}
table.UpdateDate()
table.Columns[name] = newColumn
return newColumn
}
// UpdateColumn updates an existing column's properties
func (se *SchemaEditor) UpdateColumn(schemaIndex, tableIndex int, oldName, newName, dataType string, isPrimaryKey, isNotNull bool, defaultValue interface{}, description string) bool {
table := se.GetTable(schemaIndex, tableIndex)
if table == nil {
return false
}
column, exists := table.Columns[oldName]
if !exists {
return false
}
table.UpdateDate()
// If name changed, remove old entry and create new one
if oldName != newName {
delete(table.Columns, oldName)
column.Name = newName
table.Columns[newName] = column
}
// Update properties
column.Type = dataType
column.IsPrimaryKey = isPrimaryKey
column.NotNull = isNotNull
column.Default = defaultValue
column.Description = description
return true
}
// DeleteColumn removes a column from a table
func (se *SchemaEditor) DeleteColumn(schemaIndex, tableIndex int, columnName string) bool {
table := se.GetTable(schemaIndex, tableIndex)
if table == nil {
return false
}
if _, exists := table.Columns[columnName]; !exists {
return false
}
table.UpdateDate()
delete(table.Columns, columnName)
return true
}
// GetColumn returns a column by name
func (se *SchemaEditor) GetColumn(schemaIndex, tableIndex int, columnName string) *models.Column {
table := se.GetTable(schemaIndex, tableIndex)
if table == nil {
return nil
}
return table.Columns[columnName]
}
// GetAllColumns returns all columns in a table
func (se *SchemaEditor) GetAllColumns(schemaIndex, tableIndex int) map[string]*models.Column {
table := se.GetTable(schemaIndex, tableIndex)
if table == nil {
return nil
}
return table.Columns
}
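A brief in-package sketch of the create/update/delete flow; it assumes a SchemaEditor whose database already holds at least one schema and one table at indices (0, 0), which is not shown here:

```go
package ui

// demoColumnOps is illustrative only.
func demoColumnOps(se *SchemaEditor) {
	// Create a NOT NULL VARCHAR column on the first table of the first schema.
	if col := se.CreateColumn(0, 0, "email", "VARCHAR(255)", false, true); col == nil {
		return // schema or table index was out of range
	}
	// Rename it and attach a default value and description.
	se.UpdateColumn(0, 0, "email", "email_address", "VARCHAR(255)", false, true, "''", "Login email")
	// Remove it again.
	se.DeleteColumn(0, 0, "email_address")
}
```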

pkg/ui/column_screens.go Normal file

@@ -0,0 +1,214 @@
package ui
import (
"fmt"
"github.com/gdamore/tcell/v2"
"github.com/rivo/tview"
"git.warky.dev/wdevs/relspecgo/pkg/models"
)
// showColumnEditor shows editor for a specific column
func (se *SchemaEditor) showColumnEditor(schemaIndex, tableIndex, colIndex int, column *models.Column) {
form := tview.NewForm()
// Store original name to handle renames
originalName := column.Name
// Local variables to collect changes
newName := column.Name
newType := column.Type
newIsPK := column.IsPrimaryKey
newIsNotNull := column.NotNull
newDefault := column.Default
newDescription := column.Description
newGUID := column.GUID
// Column type options: PostgreSQL, MySQL, SQL Server, and common SQL types
columnTypes := []string{
// Numeric Types
"SMALLINT", "INTEGER", "BIGINT", "INT", "TINYINT", "FLOAT", "REAL", "DOUBLE PRECISION",
"DECIMAL(10,2)", "NUMERIC", "DECIMAL", "NUMERIC(10,2)",
// Character Types
"CHAR", "VARCHAR", "VARCHAR(255)", "TEXT", "NCHAR", "NVARCHAR", "NVARCHAR(255)",
// Boolean
"BOOLEAN", "BOOL", "BIT",
// Date/Time Types
"DATE", "TIME", "TIMESTAMP", "TIMESTAMP WITH TIME ZONE", "INTERVAL",
"DATETIME", "DATETIME2", "DATEFIRST",
// UUID and JSON
"UUID", "GUID", "JSON", "JSONB",
// Binary Types
"BYTEA", "BLOB", "IMAGE", "VARBINARY", "VARBINARY(MAX)", "BINARY",
// PostgreSQL Special Types
"int4range", "int8range", "numrange", "tsrange", "tstzrange", "daterange",
"HSTORE", "CITEXT", "INET", "MACADDR", "POINT", "LINE", "LSEG", "BOX", "PATH", "POLYGON", "CIRCLE",
// Array Types
"INTEGER ARRAY", "VARCHAR ARRAY", "TEXT ARRAY", "BIGINT ARRAY",
// MySQL Specific
"MEDIUMINT", "DOUBLE", "FLOAT(10,2)",
// SQL Server Specific
"MONEY", "SMALLMONEY", "SQL_VARIANT",
}
selectedTypeIndex := 0
// Add existing type if not already in the list
typeExists := false
for i, opt := range columnTypes {
if opt == column.Type {
selectedTypeIndex = i
typeExists = true
break
}
}
if !typeExists && column.Type != "" {
columnTypes = append(columnTypes, column.Type)
selectedTypeIndex = len(columnTypes) - 1
}
form.AddInputField("Column Name", column.Name, 40, nil, func(value string) {
newName = value
})
form.AddDropDown("Type", columnTypes, selectedTypeIndex, func(option string, index int) {
newType = option
})
form.AddCheckbox("Primary Key", column.IsPrimaryKey, func(checked bool) {
newIsPK = checked
})
form.AddCheckbox("Not Null", column.NotNull, func(checked bool) {
newIsNotNull = checked
})
defaultStr := ""
if column.Default != nil {
defaultStr = fmt.Sprintf("%v", column.Default)
}
form.AddInputField("Default Value", defaultStr, 40, nil, func(value string) {
newDefault = value
})
form.AddTextArea("Description", column.Description, 40, 5, 0, func(value string) {
newDescription = value
})
form.AddInputField("GUID", column.GUID, 40, nil, func(value string) {
newGUID = value
})
form.AddButton("Save", func() {
// Apply changes using dataops; only touch the GUID if the update
// succeeded, otherwise the map lookup below would dereference nil
if se.UpdateColumn(schemaIndex, tableIndex, originalName, newName, newType, newIsPK, newIsNotNull, newDefault, newDescription) {
se.db.Schemas[schemaIndex].Tables[tableIndex].Columns[newName].GUID = newGUID
}
se.pages.RemovePage("column-editor")
se.pages.SwitchToPage("table-editor")
})
form.AddButton("Delete", func() {
se.showDeleteColumnConfirm(schemaIndex, tableIndex, originalName)
})
form.AddButton("Back", func() {
// Discard changes - don't apply them
se.pages.RemovePage("column-editor")
se.pages.SwitchToPage("table-editor")
})
form.SetBorder(true).SetTitle(" Edit Column ").SetTitleAlign(tview.AlignLeft)
form.SetInputCapture(func(event *tcell.EventKey) *tcell.EventKey {
if event.Key() == tcell.KeyEscape {
se.showExitConfirmation("column-editor", "table-editor")
return nil
}
return event
})
se.pages.AddPage("column-editor", form, true, true)
}
// showNewColumnDialog shows dialog to create a new column
func (se *SchemaEditor) showNewColumnDialog(schemaIndex, tableIndex int) {
form := tview.NewForm()
columnName := ""
dataType := "VARCHAR(255)"
// Column type options: PostgreSQL, MySQL, SQL Server, and common SQL types
columnTypes := []string{
// Numeric Types
"SMALLINT", "INTEGER", "BIGINT", "INT", "TINYINT", "FLOAT", "REAL", "DOUBLE PRECISION",
"DECIMAL(10,2)", "NUMERIC", "DECIMAL", "NUMERIC(10,2)",
// Character Types
"CHAR", "VARCHAR", "VARCHAR(255)", "TEXT", "NCHAR", "NVARCHAR", "NVARCHAR(255)",
// Boolean
"BOOLEAN", "BOOL", "BIT",
// Date/Time Types
"DATE", "TIME", "TIMESTAMP", "TIMESTAMP WITH TIME ZONE", "INTERVAL",
"DATETIME", "DATETIME2", "DATEFIRST",
// UUID and JSON
"UUID", "GUID", "JSON", "JSONB",
// Binary Types
"BYTEA", "BLOB", "IMAGE", "VARBINARY", "VARBINARY(MAX)", "BINARY",
// PostgreSQL Special Types
"int4range", "int8range", "numrange", "tsrange", "tstzrange", "daterange",
"HSTORE", "CITEXT", "INET", "MACADDR", "POINT", "LINE", "LSEG", "BOX", "PATH", "POLYGON", "CIRCLE",
// Array Types
"INTEGER ARRAY", "VARCHAR ARRAY", "TEXT ARRAY", "BIGINT ARRAY",
// MySQL Specific
"MEDIUMINT", "DOUBLE", "FLOAT(10,2)",
// SQL Server Specific
"MONEY", "SMALLMONEY", "SQL_VARIANT",
}
selectedTypeIndex := 0
form.AddInputField("Column Name", "", 40, nil, func(value string) {
columnName = value
})
form.AddDropDown("Data Type", columnTypes, selectedTypeIndex, func(option string, index int) {
dataType = option
})
form.AddCheckbox("Primary Key", false, nil)
form.AddCheckbox("Not Null", false, nil)
form.AddCheckbox("Unique", false, nil)
form.AddButton("Save", func() {
if columnName == "" {
return
}
// Get form values
isPK := form.GetFormItemByLabel("Primary Key").(*tview.Checkbox).IsChecked()
isNotNull := form.GetFormItemByLabel("Not Null").(*tview.Checkbox).IsChecked()
se.CreateColumn(schemaIndex, tableIndex, columnName, dataType, isPK, isNotNull)
table := se.db.Schemas[schemaIndex].Tables[tableIndex]
se.pages.RemovePage("new-column")
se.pages.RemovePage("table-editor")
se.showTableEditor(schemaIndex, tableIndex, table)
})
form.AddButton("Back", func() {
table := se.db.Schemas[schemaIndex].Tables[tableIndex]
se.pages.RemovePage("new-column")
se.pages.RemovePage("table-editor")
se.showTableEditor(schemaIndex, tableIndex, table)
})
form.SetBorder(true).SetTitle(" New Column ").SetTitleAlign(tview.AlignLeft)
form.SetInputCapture(func(event *tcell.EventKey) *tcell.EventKey {
if event.Key() == tcell.KeyEscape {
se.showExitConfirmation("new-column", "table-editor")
return nil
}
return event
})
se.pages.AddPage("new-column", form, true, true)
}
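showColumnEditor and showNewColumnDialog carry byte-for-byte identical columnTypes slices. (Incidentally, DATEFIRST in those lists is a SQL Server session setting rather than a column type and may be unintended.) One way to keep the two forms in sync, sketched as a hypothetical refactor — defaultColumnTypes and columnTypeOptions are not in this changeset:

	// defaultColumnTypes would be shared by both column forms.
	var defaultColumnTypes = []string{
		"SMALLINT", "INTEGER", "BIGINT", // ... same entries as the lists above
	}

	// columnTypeOptions copies the shared slice (so it is never mutated)
	// and selects or appends a pre-existing custom type, mirroring the
	// lookup logic in showColumnEditor.
	func columnTypeOptions(existing string) ([]string, int) {
		opts := append([]string(nil), defaultColumnTypes...)
		for i, opt := range opts {
			if opt == existing {
				return opts, i
			}
		}
		if existing != "" {
			opts = append(opts, existing)
			return opts, len(opts) - 1
		}
		return opts, 0
	}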


@@ -0,0 +1,15 @@
package ui
import (
"git.warky.dev/wdevs/relspecgo/pkg/models"
)
// updateDatabase updates database properties
func (se *SchemaEditor) updateDatabase(name, description, comment, dbType, dbVersion string) {
se.db.Name = name
se.db.Description = description
se.db.Comment = comment
se.db.DatabaseType = models.DatabaseType(dbType)
se.db.DatabaseVersion = dbVersion
se.db.UpdateDate()
}


@@ -0,0 +1,78 @@
package ui
import (
"github.com/gdamore/tcell/v2"
"github.com/rivo/tview"
)
// showEditDatabaseForm displays a dialog to edit database properties
func (se *SchemaEditor) showEditDatabaseForm() {
form := tview.NewForm()
dbName := se.db.Name
dbDescription := se.db.Description
dbComment := se.db.Comment
dbType := string(se.db.DatabaseType)
dbVersion := se.db.DatabaseVersion
dbGUID := se.db.GUID
// Database type options
dbTypeOptions := []string{"pgsql", "mssql", "sqlite"}
selectedTypeIndex := 0
for i, opt := range dbTypeOptions {
if opt == dbType {
selectedTypeIndex = i
break
}
}
form.AddInputField("Database Name", dbName, 40, nil, func(value string) {
dbName = value
})
form.AddInputField("Description", dbDescription, 50, nil, func(value string) {
dbDescription = value
})
form.AddInputField("Comment", dbComment, 50, nil, func(value string) {
dbComment = value
})
form.AddDropDown("Database Type", dbTypeOptions, selectedTypeIndex, func(option string, index int) {
dbType = option
})
form.AddInputField("Database Version", dbVersion, 20, nil, func(value string) {
dbVersion = value
})
form.AddInputField("GUID", dbGUID, 40, nil, func(value string) {
dbGUID = value
})
form.AddButton("Save", func() {
if dbName == "" {
return
}
se.updateDatabase(dbName, dbDescription, dbComment, dbType, dbVersion)
se.db.GUID = dbGUID
se.pages.RemovePage("edit-database")
se.pages.RemovePage("main")
se.pages.AddPage("main", se.createMainMenu(), true, true)
})
form.AddButton("Back", func() {
se.pages.RemovePage("edit-database")
})
form.SetBorder(true).SetTitle(" Edit Database ").SetTitleAlign(tview.AlignLeft)
form.SetInputCapture(func(event *tcell.EventKey) *tcell.EventKey {
if event.Key() == tcell.KeyEscape {
se.showExitConfirmation("edit-database", "main")
return nil
}
return event
})
se.pages.AddPage("edit-database", form, true, true)
}

139
pkg/ui/dialogs.go Normal file

@@ -0,0 +1,139 @@
package ui
import (
"fmt"
"github.com/gdamore/tcell/v2"
"github.com/rivo/tview"
)
// showExitConfirmation shows a confirmation dialog when trying to exit without saving
func (se *SchemaEditor) showExitConfirmation(pageToRemove, pageToSwitchTo string) {
modal := tview.NewModal().
SetText("Exit without saving changes?").
AddButtons([]string{"Cancel", "No, exit without saving"}).
SetDoneFunc(func(buttonIndex int, buttonLabel string) {
if buttonLabel == "No, exit without saving" {
se.pages.RemovePage(pageToRemove)
se.pages.SwitchToPage(pageToSwitchTo)
}
se.pages.RemovePage("exit-confirm")
})
modal.SetInputCapture(func(event *tcell.EventKey) *tcell.EventKey {
if event.Key() == tcell.KeyEscape {
se.pages.RemovePage("exit-confirm")
return nil
}
return event
})
se.pages.AddPage("exit-confirm", modal, true, true)
}
// showExitEditorConfirm shows confirmation dialog when trying to exit the entire editor
func (se *SchemaEditor) showExitEditorConfirm() {
modal := tview.NewModal().
SetText("Exit RelSpec Editor? Press ESC again to confirm.").
AddButtons([]string{"Cancel", "Exit"}).
SetDoneFunc(func(buttonIndex int, buttonLabel string) {
if buttonLabel == "Exit" {
se.app.Stop()
}
se.pages.RemovePage("exit-editor-confirm")
})
modal.SetInputCapture(func(event *tcell.EventKey) *tcell.EventKey {
if event.Key() == tcell.KeyEscape {
se.app.Stop()
return nil
}
return event
})
se.pages.AddPage("exit-editor-confirm", modal, true, true)
}
// showDeleteSchemaConfirm shows confirmation dialog for schema deletion
func (se *SchemaEditor) showDeleteSchemaConfirm(schemaIndex int) {
modal := tview.NewModal().
SetText(fmt.Sprintf("Delete schema '%s'? This will delete all tables in this schema.",
se.db.Schemas[schemaIndex].Name)).
AddButtons([]string{"Cancel", "Delete"}).
SetDoneFunc(func(buttonIndex int, buttonLabel string) {
if buttonLabel == "Delete" {
se.DeleteSchema(schemaIndex)
se.pages.RemovePage("schema-editor")
se.pages.RemovePage("schemas")
se.showSchemaList()
}
se.pages.RemovePage("confirm-delete-schema")
})
modal.SetInputCapture(func(event *tcell.EventKey) *tcell.EventKey {
if event.Key() == tcell.KeyEscape {
se.pages.RemovePage("confirm-delete-schema")
return nil
}
return event
})
se.pages.AddPage("confirm-delete-schema", modal, true, true)
}
// showDeleteTableConfirm shows confirmation dialog for table deletion
func (se *SchemaEditor) showDeleteTableConfirm(schemaIndex, tableIndex int) {
table := se.db.Schemas[schemaIndex].Tables[tableIndex]
modal := tview.NewModal().
SetText(fmt.Sprintf("Delete table '%s'? This action cannot be undone.",
table.Name)).
AddButtons([]string{"Cancel", "Delete"}).
SetDoneFunc(func(buttonIndex int, buttonLabel string) {
if buttonLabel == "Delete" {
se.DeleteTable(schemaIndex, tableIndex)
schema := se.db.Schemas[schemaIndex]
se.pages.RemovePage("table-editor")
se.pages.RemovePage("schema-editor")
se.showSchemaEditor(schemaIndex, schema)
}
se.pages.RemovePage("confirm-delete-table")
})
modal.SetInputCapture(func(event *tcell.EventKey) *tcell.EventKey {
if event.Key() == tcell.KeyEscape {
se.pages.RemovePage("confirm-delete-table")
return nil
}
return event
})
se.pages.AddPage("confirm-delete-table", modal, true, true)
}
// showDeleteColumnConfirm shows confirmation dialog for column deletion
func (se *SchemaEditor) showDeleteColumnConfirm(schemaIndex, tableIndex int, columnName string) {
modal := tview.NewModal().
SetText(fmt.Sprintf("Delete column '%s'? This action cannot be undone.",
columnName)).
AddButtons([]string{"Cancel", "Delete"}).
SetDoneFunc(func(buttonIndex int, buttonLabel string) {
if buttonLabel == "Delete" {
se.DeleteColumn(schemaIndex, tableIndex, columnName)
se.pages.RemovePage("column-editor")
se.pages.RemovePage("confirm-delete-column")
se.pages.SwitchToPage("table-editor")
} else {
se.pages.RemovePage("confirm-delete-column")
}
})
modal.SetInputCapture(func(event *tcell.EventKey) *tcell.EventKey {
if event.Key() == tcell.KeyEscape {
se.pages.RemovePage("confirm-delete-column")
return nil
}
return event
})
se.pages.AddPage("confirm-delete-column", modal, true, true)
}
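The four confirmation modals above repeat the same build/wire/tear-down pattern. A possible consolidation, sketched under the assumption that callers keep choosing page names — confirmModal is hypothetical, not part of this changeset:

	// confirmModal builds a Cancel/<confirmLabel> modal, runs onConfirm
	// when the affirmative button is chosen, and removes the dialog page
	// afterwards (including on Escape).
	func (se *SchemaEditor) confirmModal(pageName, text, confirmLabel string, onConfirm func()) {
		modal := tview.NewModal().
			SetText(text).
			AddButtons([]string{"Cancel", confirmLabel}).
			SetDoneFunc(func(buttonIndex int, buttonLabel string) {
				if buttonLabel == confirmLabel {
					onConfirm()
				}
				se.pages.RemovePage(pageName)
			})
		modal.SetInputCapture(func(event *tcell.EventKey) *tcell.EventKey {
			if event.Key() == tcell.KeyEscape {
				se.pages.RemovePage(pageName)
				return nil
			}
			return event
		})
		se.pages.AddPage(pageName, modal, true, true)
	}

Dialogs that need bespoke page juggling, like showDeleteColumnConfirm, would keep their own wiring.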

35
pkg/ui/domain_dataops.go Normal file

@@ -0,0 +1,35 @@
package ui
import (
"git.warky.dev/wdevs/relspecgo/pkg/models"
)
// createDomain creates a new domain
func (se *SchemaEditor) createDomain(name, description string) {
domain := &models.Domain{
Name: name,
Description: description,
Tables: make([]*models.DomainTable, 0),
Sequence: uint(len(se.db.Domains)),
}
se.db.Domains = append(se.db.Domains, domain)
se.showDomainList()
}
// updateDomain updates an existing domain
func (se *SchemaEditor) updateDomain(index int, name, description string) {
if index >= 0 && index < len(se.db.Domains) {
se.db.Domains[index].Name = name
se.db.Domains[index].Description = description
se.showDomainList()
}
}
// deleteDomain deletes a domain by index
func (se *SchemaEditor) deleteDomain(index int) {
if index >= 0 && index < len(se.db.Domains) {
se.db.Domains = append(se.db.Domains[:index], se.db.Domains[index+1:]...)
se.showDomainList()
}
}
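Two review notes on this file. First, unlike the other *_dataops files these helpers call showDomainList themselves, so the Save handlers in domain_screens.go end up rebuilding the list twice; moving the refresh out of the data layer would match the column/table/schema files. Second, Sequence values assigned at creation go stale after deleteDomain. A hypothetical compaction pass, not part of this changeset:

	// renumberDomains keeps Sequence values dense after a delete.
	func (se *SchemaEditor) renumberDomains() {
		for i, d := range se.db.Domains {
			d.Sequence = uint(i)
		}
	}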

258
pkg/ui/domain_screens.go Normal file

@@ -0,0 +1,258 @@
package ui
import (
"fmt"
"strings"
"github.com/gdamore/tcell/v2"
"github.com/rivo/tview"
"git.warky.dev/wdevs/relspecgo/pkg/models"
)
// showDomainList displays the domain management screen
func (se *SchemaEditor) showDomainList() {
flex := tview.NewFlex().SetDirection(tview.FlexRow)
// Title
title := tview.NewTextView().
SetText("[::b]Manage Domains").
SetDynamicColors(true).
SetTextAlign(tview.AlignCenter)
// Create domains table
domainTable := tview.NewTable().SetBorders(true).SetSelectable(true, false).SetFixed(1, 0)
// Add header row
headers := []string{"Name", "Sequence", "Total Tables", "Description"}
headerWidths := []int{20, 15, 20}
for i, header := range headers {
padding := ""
if i < len(headerWidths) {
padding = strings.Repeat(" ", headerWidths[i]-len(header))
}
cell := tview.NewTableCell(header + padding).
SetTextColor(tcell.ColorYellow).
SetSelectable(false).
SetAlign(tview.AlignLeft)
domainTable.SetCell(0, i, cell)
}
// Add existing domains
for row, domain := range se.db.Domains {
domain := domain // capture for closure
// Name - pad to 20 chars
nameStr := fmt.Sprintf("%-20s", domain.Name)
nameCell := tview.NewTableCell(nameStr).SetSelectable(true)
domainTable.SetCell(row+1, 0, nameCell)
// Sequence - pad to 15 chars
seqStr := fmt.Sprintf("%-15s", fmt.Sprintf("%d", domain.Sequence))
seqCell := tview.NewTableCell(seqStr).SetSelectable(true)
domainTable.SetCell(row+1, 1, seqCell)
// Total Tables - pad to 20 chars
tablesStr := fmt.Sprintf("%-20s", fmt.Sprintf("%d", len(domain.Tables)))
tablesCell := tview.NewTableCell(tablesStr).SetSelectable(true)
domainTable.SetCell(row+1, 2, tablesCell)
// Description - no padding, takes remaining space
descCell := tview.NewTableCell(domain.Description).SetSelectable(true)
domainTable.SetCell(row+1, 3, descCell)
}
domainTable.SetTitle(" Domains ").SetBorder(true).SetTitleAlign(tview.AlignLeft)
// Action buttons flex
btnFlex := tview.NewFlex()
btnNewDomain := tview.NewButton("New Domain [n]").SetSelectedFunc(func() {
se.showNewDomainDialog()
})
btnBack := tview.NewButton("Back [b]").SetSelectedFunc(func() {
se.pages.SwitchToPage("main")
se.pages.RemovePage("domains")
})
// Set up button input captures for Tab/Shift+Tab navigation
btnNewDomain.SetInputCapture(func(event *tcell.EventKey) *tcell.EventKey {
if event.Key() == tcell.KeyBacktab {
se.app.SetFocus(domainTable)
return nil
}
if event.Key() == tcell.KeyTab {
se.app.SetFocus(btnBack)
return nil
}
return event
})
btnBack.SetInputCapture(func(event *tcell.EventKey) *tcell.EventKey {
if event.Key() == tcell.KeyBacktab {
se.app.SetFocus(btnNewDomain)
return nil
}
if event.Key() == tcell.KeyTab {
se.app.SetFocus(domainTable)
return nil
}
return event
})
btnFlex.AddItem(btnNewDomain, 0, 1, true).
AddItem(btnBack, 0, 1, false)
domainTable.SetInputCapture(func(event *tcell.EventKey) *tcell.EventKey {
if event.Key() == tcell.KeyEscape {
se.pages.SwitchToPage("main")
se.pages.RemovePage("domains")
return nil
}
if event.Key() == tcell.KeyTab {
se.app.SetFocus(btnNewDomain)
return nil
}
if event.Key() == tcell.KeyEnter {
row, _ := domainTable.GetSelection()
if row > 0 && row <= len(se.db.Domains) { // Skip header row
domainIndex := row - 1
se.showDomainEditor(domainIndex, se.db.Domains[domainIndex])
return nil
}
}
if event.Rune() == 'n' {
se.showNewDomainDialog()
return nil
}
if event.Rune() == 'b' {
se.pages.SwitchToPage("main")
se.pages.RemovePage("domains")
return nil
}
return event
})
flex.AddItem(title, 1, 0, false).
AddItem(domainTable, 0, 1, true).
AddItem(btnFlex, 1, 0, false)
se.pages.AddPage("domains", flex, true, true)
}
// showNewDomainDialog displays a dialog to create a new domain
func (se *SchemaEditor) showNewDomainDialog() {
form := tview.NewForm()
domainName := ""
domainDesc := ""
form.AddInputField("Name", "", 40, nil, func(value string) {
domainName = value
})
form.AddInputField("Description", "", 50, nil, func(value string) {
domainDesc = value
})
form.AddButton("Save", func() {
if domainName == "" {
return
}
se.createDomain(domainName, domainDesc)
se.pages.RemovePage("new-domain")
se.pages.RemovePage("domains")
se.showDomainList()
})
form.AddButton("Back", func() {
se.pages.RemovePage("new-domain")
se.pages.RemovePage("domains")
se.showDomainList()
})
form.SetBorder(true).SetTitle(" New Domain ").SetTitleAlign(tview.AlignLeft)
form.SetInputCapture(func(event *tcell.EventKey) *tcell.EventKey {
if event.Key() == tcell.KeyEscape {
se.showExitConfirmation("new-domain", "domains")
return nil
}
return event
})
se.pages.AddPage("new-domain", form, true, true)
}
// showDomainEditor displays a dialog to edit an existing domain
func (se *SchemaEditor) showDomainEditor(index int, domain *models.Domain) {
form := tview.NewForm()
domainName := domain.Name
domainDesc := domain.Description
form.AddInputField("Name", domainName, 40, nil, func(value string) {
domainName = value
})
form.AddInputField("Description", domainDesc, 50, nil, func(value string) {
domainDesc = value
})
form.AddButton("Save", func() {
if domainName == "" {
return
}
se.updateDomain(index, domainName, domainDesc)
se.pages.RemovePage("edit-domain")
se.pages.RemovePage("domains")
se.showDomainList()
})
form.AddButton("Delete", func() {
se.showDeleteDomainConfirm(index)
})
form.AddButton("Back", func() {
se.pages.RemovePage("edit-domain")
se.pages.RemovePage("domains")
se.showDomainList()
})
form.SetBorder(true).SetTitle(" Edit Domain ").SetTitleAlign(tview.AlignLeft)
form.SetInputCapture(func(event *tcell.EventKey) *tcell.EventKey {
if event.Key() == tcell.KeyEscape {
se.showExitConfirmation("edit-domain", "domains")
return nil
}
return event
})
se.pages.AddPage("edit-domain", form, true, true)
}
// showDeleteDomainConfirm shows a confirmation dialog before deleting a domain
func (se *SchemaEditor) showDeleteDomainConfirm(index int) {
modal := tview.NewModal().
SetText(fmt.Sprintf("Delete domain '%s'? This action cannot be undone.", se.db.Domains[index].Name)).
AddButtons([]string{"Cancel", "Delete"}).
SetDoneFunc(func(buttonIndex int, buttonLabel string) {
if buttonLabel == "Delete" {
se.deleteDomain(index)
se.pages.RemovePage("delete-domain-confirm")
se.pages.RemovePage("edit-domain")
se.pages.RemovePage("domains")
se.showDomainList()
} else {
se.pages.RemovePage("delete-domain-confirm")
}
})
modal.SetInputCapture(func(event *tcell.EventKey) *tcell.EventKey {
if event.Key() == tcell.KeyEscape {
se.pages.RemovePage("delete-domain-confirm")
return nil
}
return event
})
se.pages.AddAndSwitchToPage("delete-domain-confirm", modal, true)
}

73
pkg/ui/editor.go Normal file

@@ -0,0 +1,73 @@
package ui
import (
"fmt"
"sort"
"github.com/rivo/tview"
"git.warky.dev/wdevs/relspecgo/pkg/models"
)
// SchemaEditor represents the interactive schema editor
type SchemaEditor struct {
db *models.Database
app *tview.Application
pages *tview.Pages
loadConfig *LoadConfig
saveConfig *SaveConfig
}
// NewSchemaEditor creates a new schema editor
func NewSchemaEditor(db *models.Database) *SchemaEditor {
return &SchemaEditor{
db: db,
app: tview.NewApplication(),
pages: tview.NewPages(),
loadConfig: nil,
saveConfig: nil,
}
}
// NewSchemaEditorWithConfigs creates a new schema editor with load/save configurations
func NewSchemaEditorWithConfigs(db *models.Database, loadConfig *LoadConfig, saveConfig *SaveConfig) *SchemaEditor {
return &SchemaEditor{
db: db,
app: tview.NewApplication(),
pages: tview.NewPages(),
loadConfig: loadConfig,
saveConfig: saveConfig,
}
}
// Run starts the interactive editor
func (se *SchemaEditor) Run() error {
// If no database is loaded, show load screen
if se.db == nil {
se.showLoadScreen()
} else {
// Create main menu view
mainMenu := se.createMainMenu()
se.pages.AddPage("main", mainMenu, true, true)
}
// Run the application
if err := se.app.SetRoot(se.pages, true).Run(); err != nil {
return fmt.Errorf("application error: %w", err)
}
return nil
}
// GetDatabase returns the current database
func (se *SchemaEditor) GetDatabase() *models.Database {
return se.db
}
// Helper function to get sorted column names
func getColumnNames(table *models.Table) []string {
names := make([]string, 0, len(table.Columns))
for name := range table.Columns {
names = append(names, name)
}
// Map iteration order is random in Go; sort so callers get a stable order
sort.Strings(names)
return names
}
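A minimal entry point, assuming a caller in another package (the main package here is illustrative):

	package main

	import (
		"log"

		"git.warky.dev/wdevs/relspecgo/pkg/models"
		"git.warky.dev/wdevs/relspecgo/pkg/ui"
	)

	func main() {
		// Passing a nil database drops the editor into its load screen;
		// passing a *models.Database opens the main menu directly.
		db := &models.Database{Name: "demo", Schemas: []*models.Schema{}}
		if err := ui.NewSchemaEditor(db).Run(); err != nil {
			log.Fatal(err)
		}
	}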

791
pkg/ui/load_save_screens.go Normal file

@@ -0,0 +1,791 @@
package ui
import (
"fmt"
"os"
"path/filepath"
"strings"
"github.com/gdamore/tcell/v2"
"github.com/rivo/tview"
"git.warky.dev/wdevs/relspecgo/pkg/merge"
"git.warky.dev/wdevs/relspecgo/pkg/models"
"git.warky.dev/wdevs/relspecgo/pkg/readers"
rbun "git.warky.dev/wdevs/relspecgo/pkg/readers/bun"
rdbml "git.warky.dev/wdevs/relspecgo/pkg/readers/dbml"
rdctx "git.warky.dev/wdevs/relspecgo/pkg/readers/dctx"
rdrawdb "git.warky.dev/wdevs/relspecgo/pkg/readers/drawdb"
rdrizzle "git.warky.dev/wdevs/relspecgo/pkg/readers/drizzle"
rgorm "git.warky.dev/wdevs/relspecgo/pkg/readers/gorm"
rgraphql "git.warky.dev/wdevs/relspecgo/pkg/readers/graphql"
rjson "git.warky.dev/wdevs/relspecgo/pkg/readers/json"
rpgsql "git.warky.dev/wdevs/relspecgo/pkg/readers/pgsql"
rprisma "git.warky.dev/wdevs/relspecgo/pkg/readers/prisma"
rtypeorm "git.warky.dev/wdevs/relspecgo/pkg/readers/typeorm"
ryaml "git.warky.dev/wdevs/relspecgo/pkg/readers/yaml"
"git.warky.dev/wdevs/relspecgo/pkg/writers"
wbun "git.warky.dev/wdevs/relspecgo/pkg/writers/bun"
wdbml "git.warky.dev/wdevs/relspecgo/pkg/writers/dbml"
wdctx "git.warky.dev/wdevs/relspecgo/pkg/writers/dctx"
wdrawdb "git.warky.dev/wdevs/relspecgo/pkg/writers/drawdb"
wdrizzle "git.warky.dev/wdevs/relspecgo/pkg/writers/drizzle"
wgorm "git.warky.dev/wdevs/relspecgo/pkg/writers/gorm"
wgraphql "git.warky.dev/wdevs/relspecgo/pkg/writers/graphql"
wjson "git.warky.dev/wdevs/relspecgo/pkg/writers/json"
wpgsql "git.warky.dev/wdevs/relspecgo/pkg/writers/pgsql"
wprisma "git.warky.dev/wdevs/relspecgo/pkg/writers/prisma"
wtypeorm "git.warky.dev/wdevs/relspecgo/pkg/writers/typeorm"
wyaml "git.warky.dev/wdevs/relspecgo/pkg/writers/yaml"
)
// LoadConfig holds the configuration for loading a database
type LoadConfig struct {
SourceType string
FilePath string
ConnString string
}
// SaveConfig holds the configuration for saving a database
type SaveConfig struct {
TargetType string
FilePath string
ConnString string
}
// showLoadScreen displays the database load screen
func (se *SchemaEditor) showLoadScreen() {
flex := tview.NewFlex().SetDirection(tview.FlexRow)
// Title
title := tview.NewTextView().
SetText("[::b]Load Database Schema").
SetTextAlign(tview.AlignCenter).
SetDynamicColors(true)
// Form
form := tview.NewForm()
form.SetBorder(true).SetTitle(" Load Configuration ").SetTitleAlign(tview.AlignLeft)
// Format selection
formatOptions := []string{
"dbml", "dctx", "drawdb", "graphql", "json", "yaml",
"gorm", "bun", "drizzle", "prisma", "typeorm", "pgsql",
}
selectedFormat := 0
currentFormat := formatOptions[selectedFormat]
// File path input
filePath := ""
connString := ""
form.AddDropDown("Format", formatOptions, 0, func(option string, index int) {
selectedFormat = index
currentFormat = option
})
form.AddInputField("File Path", "", 50, nil, func(value string) {
filePath = value
})
form.AddInputField("Connection String", "", 50, nil, func(value string) {
connString = value
})
form.AddTextView("Help", getLoadHelpText(), 0, 5, true, false)
// Buttons
form.AddButton("Load [l]", func() {
se.loadDatabase(currentFormat, filePath, connString)
})
form.AddButton("Create New [n]", func() {
se.createNewDatabase()
})
form.AddButton("Exit [q]", func() {
se.app.Stop()
})
// Keyboard shortcuts (a single capture: a second SetInputCapture call
// replaces the handler rather than extending it, which previously
// disabled these shortcuts)
form.SetInputCapture(func(event *tcell.EventKey) *tcell.EventKey {
if event.Key() == tcell.KeyEscape {
se.app.Stop()
return nil
}
switch event.Rune() {
case 'l':
se.loadDatabase(currentFormat, filePath, connString)
return nil
case 'n':
se.createNewDatabase()
return nil
case 'q':
se.app.Stop()
return nil
}
return event
})
flex.AddItem(title, 1, 0, false).
AddItem(form, 0, 1, true)
se.pages.AddAndSwitchToPage("load-database", flex, true)
}
// showSaveScreen displays the save database screen
func (se *SchemaEditor) showSaveScreen() {
flex := tview.NewFlex().SetDirection(tview.FlexRow)
// Title
title := tview.NewTextView().
SetText("[::b]Save Database Schema").
SetTextAlign(tview.AlignCenter).
SetDynamicColors(true)
// Form
form := tview.NewForm()
form.SetBorder(true).SetTitle(" Save Configuration ").SetTitleAlign(tview.AlignLeft)
// Format selection
formatOptions := []string{
"dbml", "dctx", "drawdb", "graphql", "json", "yaml",
"gorm", "bun", "drizzle", "prisma", "typeorm", "pgsql",
}
selectedFormat := 0
currentFormat := formatOptions[selectedFormat]
// File path input
filePath := ""
if se.saveConfig != nil {
// Pre-populate with existing save config
for i, format := range formatOptions {
if format == se.saveConfig.TargetType {
selectedFormat = i
currentFormat = format
break
}
}
filePath = se.saveConfig.FilePath
}
form.AddDropDown("Format", formatOptions, selectedFormat, func(option string, index int) {
selectedFormat = index
currentFormat = option
})
form.AddInputField("File Path", filePath, 50, nil, func(value string) {
filePath = value
})
form.AddTextView("Help", getSaveHelpText(), 0, 5, true, false)
// Buttons
form.AddButton("Save [s]", func() {
se.saveDatabase(currentFormat, filePath)
})
form.AddButton("Update Existing Database [u]", func() {
// Either a prior save or load config identifies the target
if se.saveConfig != nil || se.loadConfig != nil {
se.showUpdateExistingDatabaseConfirm()
} else {
se.showErrorDialog("Error", "No database source found. Use Save instead.")
}
})
form.AddButton("Back [b]", func() {
se.pages.RemovePage("save-database")
se.pages.SwitchToPage("main")
})
// Keyboard shortcuts
form.SetInputCapture(func(event *tcell.EventKey) *tcell.EventKey {
if event.Key() == tcell.KeyEscape {
se.pages.RemovePage("save-database")
se.pages.SwitchToPage("main")
return nil
}
switch event.Rune() {
case 's':
se.saveDatabase(currentFormat, filePath)
return nil
case 'u':
// Either a prior save or load config identifies the target
if se.saveConfig != nil || se.loadConfig != nil {
se.showUpdateExistingDatabaseConfirm()
} else {
se.showErrorDialog("Error", "No database source found. Use Save instead.")
}
return nil
case 'b':
se.pages.RemovePage("save-database")
se.pages.SwitchToPage("main")
return nil
}
return event
})
flex.AddItem(title, 1, 0, false).
AddItem(form, 0, 1, true)
se.pages.AddAndSwitchToPage("save-database", flex, true)
}
// loadDatabase loads a database from the specified configuration
func (se *SchemaEditor) loadDatabase(format, filePath, connString string) {
// Validate input
if format == "pgsql" {
if connString == "" {
se.showErrorDialog("Error", "Connection string is required for PostgreSQL")
return
}
} else {
if filePath == "" {
se.showErrorDialog("Error", "File path is required for "+format)
return
}
// Expand home directory
if len(filePath) > 0 && filePath[0] == '~' {
home, err := os.UserHomeDir()
if err == nil {
filePath = filepath.Join(home, filePath[1:])
}
}
}
// Create reader
var reader readers.Reader
switch format {
case "dbml":
reader = rdbml.NewReader(&readers.ReaderOptions{FilePath: filePath})
case "dctx":
reader = rdctx.NewReader(&readers.ReaderOptions{FilePath: filePath})
case "drawdb":
reader = rdrawdb.NewReader(&readers.ReaderOptions{FilePath: filePath})
case "graphql":
reader = rgraphql.NewReader(&readers.ReaderOptions{FilePath: filePath})
case "json":
reader = rjson.NewReader(&readers.ReaderOptions{FilePath: filePath})
case "yaml":
reader = ryaml.NewReader(&readers.ReaderOptions{FilePath: filePath})
case "gorm":
reader = rgorm.NewReader(&readers.ReaderOptions{FilePath: filePath})
case "bun":
reader = rbun.NewReader(&readers.ReaderOptions{FilePath: filePath})
case "drizzle":
reader = rdrizzle.NewReader(&readers.ReaderOptions{FilePath: filePath})
case "prisma":
reader = rprisma.NewReader(&readers.ReaderOptions{FilePath: filePath})
case "typeorm":
reader = rtypeorm.NewReader(&readers.ReaderOptions{FilePath: filePath})
case "pgsql":
reader = rpgsql.NewReader(&readers.ReaderOptions{ConnectionString: connString})
default:
se.showErrorDialog("Error", "Unsupported format: "+format)
return
}
// Read database
db, err := reader.ReadDatabase()
if err != nil {
se.showErrorDialog("Load Error", fmt.Sprintf("Failed to load database: %v", err))
return
}
// Store load config
se.loadConfig = &LoadConfig{
SourceType: format,
FilePath: filePath,
ConnString: connString,
}
// Update database
se.db = db
// Show success and switch to main menu
se.showSuccessDialog("Load Complete", fmt.Sprintf("Successfully loaded database '%s'", db.Name), func() {
se.pages.RemovePage("load-database")
se.pages.RemovePage("main")
se.pages.AddPage("main", se.createMainMenu(), true, true)
})
}
// saveDatabase saves the database to the specified configuration
func (se *SchemaEditor) saveDatabase(format, filePath string) {
// Validate input
if format == "pgsql" {
se.showErrorDialog("Error", "Direct PostgreSQL save is not supported from the UI. Use --to pgsql --to-path output.sql")
return
}
if filePath == "" {
se.showErrorDialog("Error", "File path is required")
return
}
// Expand home directory
if len(filePath) > 0 && filePath[0] == '~' {
home, err := os.UserHomeDir()
if err == nil {
filePath = filepath.Join(home, filePath[1:])
}
}
// Create writer
var writer writers.Writer
switch format {
case "dbml":
writer = wdbml.NewWriter(&writers.WriterOptions{OutputPath: filePath})
case "dctx":
writer = wdctx.NewWriter(&writers.WriterOptions{OutputPath: filePath})
case "drawdb":
writer = wdrawdb.NewWriter(&writers.WriterOptions{OutputPath: filePath})
case "graphql":
writer = wgraphql.NewWriter(&writers.WriterOptions{OutputPath: filePath})
case "json":
writer = wjson.NewWriter(&writers.WriterOptions{OutputPath: filePath})
case "yaml":
writer = wyaml.NewWriter(&writers.WriterOptions{OutputPath: filePath})
case "gorm":
writer = wgorm.NewWriter(&writers.WriterOptions{OutputPath: filePath})
case "bun":
writer = wbun.NewWriter(&writers.WriterOptions{OutputPath: filePath})
case "drizzle":
writer = wdrizzle.NewWriter(&writers.WriterOptions{OutputPath: filePath})
case "prisma":
writer = wprisma.NewWriter(&writers.WriterOptions{OutputPath: filePath})
case "typeorm":
writer = wtypeorm.NewWriter(&writers.WriterOptions{OutputPath: filePath})
case "pgsql":
writer = wpgsql.NewWriter(&writers.WriterOptions{OutputPath: filePath})
default:
se.showErrorDialog("Error", "Unsupported format: "+format)
return
}
// Write database
err := writer.WriteDatabase(se.db)
if err != nil {
se.showErrorDialog("Save Error", fmt.Sprintf("Failed to save database: %v", err))
return
}
// Store save config
se.saveConfig = &SaveConfig{
TargetType: format,
FilePath: filePath,
}
// Show success
se.showSuccessDialog("Save Complete", fmt.Sprintf("Successfully saved database to %s", filePath), func() {
se.pages.RemovePage("save-database")
se.pages.SwitchToPage("main")
})
}
// createNewDatabase creates a new empty database
func (se *SchemaEditor) createNewDatabase() {
// Create a new empty database
se.db = &models.Database{
Name: "New Database",
Schemas: []*models.Schema{},
}
// Clear load config
se.loadConfig = nil
// Show success and switch to main menu
se.showSuccessDialog("New Database", "Created new empty database", func() {
se.pages.RemovePage("load-database")
se.pages.AddPage("main", se.createMainMenu(), true, true)
})
}
// showErrorDialog displays an error dialog
func (se *SchemaEditor) showErrorDialog(_title, message string) {
modal := tview.NewModal().
SetText(message).
AddButtons([]string{"OK"}).
SetDoneFunc(func(buttonIndex int, buttonLabel string) {
se.pages.RemovePage("error-dialog")
})
modal.SetInputCapture(func(event *tcell.EventKey) *tcell.EventKey {
if event.Key() == tcell.KeyEscape {
se.pages.RemovePage("error-dialog")
return nil
}
return event
})
se.pages.AddPage("error-dialog", modal, true, true)
}
// showSuccessDialog displays a success dialog
func (se *SchemaEditor) showSuccessDialog(_title, message string, onClose func()) {
modal := tview.NewModal().
SetText(message).
AddButtons([]string{"OK"}).
SetDoneFunc(func(buttonIndex int, buttonLabel string) {
se.pages.RemovePage("success-dialog")
if onClose != nil {
onClose()
}
})
modal.SetInputCapture(func(event *tcell.EventKey) *tcell.EventKey {
if event.Key() == tcell.KeyEscape {
se.pages.RemovePage("success-dialog")
if onClose != nil {
onClose()
}
return nil
}
return event
})
se.pages.AddPage("success-dialog", modal, true, true)
}
// getLoadHelpText returns the help text for the load screen
func getLoadHelpText() string {
return `File-based formats: dbml, dctx, drawdb, graphql, json, yaml, gorm, bun, drizzle, prisma, typeorm
Database formats: pgsql (requires connection string)
Examples:
- File path: ~/schemas/mydb.dbml or /path/to/schema.json
- Connection: postgres://user:pass@localhost/dbname`
}
// showUpdateExistingDatabaseConfirm displays a confirmation dialog before updating existing database
func (se *SchemaEditor) showUpdateExistingDatabaseConfirm() {
// Use saveConfig if available, otherwise use loadConfig
var targetType, targetPath string
if se.saveConfig != nil {
targetType = se.saveConfig.TargetType
targetPath = se.saveConfig.FilePath
} else if se.loadConfig != nil {
targetType = se.loadConfig.SourceType
targetPath = se.loadConfig.FilePath
} else {
return
}
confirmText := fmt.Sprintf("Update existing database?\n\nFormat: %s\nPath: %s\n\nThis will overwrite the source.",
targetType, targetPath)
modal := tview.NewModal().
SetText(confirmText).
AddButtons([]string{"Cancel", "Update"}).
SetDoneFunc(func(buttonIndex int, buttonLabel string) {
if buttonLabel == "Update" {
se.pages.RemovePage("update-confirm")
se.pages.RemovePage("save-database")
se.saveDatabase(targetType, targetPath)
se.pages.SwitchToPage("main")
} else {
se.pages.RemovePage("update-confirm")
}
})
modal.SetInputCapture(func(event *tcell.EventKey) *tcell.EventKey {
if event.Key() == tcell.KeyEscape {
se.pages.RemovePage("update-confirm")
return nil
}
return event
})
se.pages.AddAndSwitchToPage("update-confirm", modal, true)
}
// getSaveHelpText returns the help text for the save screen
func getSaveHelpText() string {
return `File-based formats: dbml, dctx, drawdb, graphql, json, yaml, gorm, bun, drizzle, prisma, typeorm, pgsql (SQL export)
Examples:
- File: ~/schemas/mydb.dbml
- Directory (for code formats): ./models/`
}
// showImportScreen displays the import/merge database screen
func (se *SchemaEditor) showImportScreen() {
flex := tview.NewFlex().SetDirection(tview.FlexRow)
// Title
title := tview.NewTextView().
SetText("[::b]Import & Merge Database Schema").
SetTextAlign(tview.AlignCenter).
SetDynamicColors(true)
// Form
form := tview.NewForm()
form.SetBorder(true).SetTitle(" Import Configuration ").SetTitleAlign(tview.AlignLeft)
// Format selection
formatOptions := []string{
"dbml", "dctx", "drawdb", "graphql", "json", "yaml",
"gorm", "bun", "drizzle", "prisma", "typeorm", "pgsql",
}
selectedFormat := 0
currentFormat := formatOptions[selectedFormat]
// File path input
filePath := ""
connString := ""
skipDomains := false
skipRelations := false
skipEnums := false
skipViews := false
skipSequences := false
skipTables := ""
form.AddDropDown("Format", formatOptions, 0, func(option string, index int) {
selectedFormat = index
currentFormat = option
})
form.AddInputField("File Path", "", 50, nil, func(value string) {
filePath = value
})
form.AddInputField("Connection String", "", 50, nil, func(value string) {
connString = value
})
form.AddInputField("Skip Tables (comma-separated)", "", 50, nil, func(value string) {
skipTables = value
})
form.AddCheckbox("Skip Domains", false, func(checked bool) {
skipDomains = checked
})
form.AddCheckbox("Skip Relations", false, func(checked bool) {
skipRelations = checked
})
form.AddCheckbox("Skip Enums", false, func(checked bool) {
skipEnums = checked
})
form.AddCheckbox("Skip Views", false, func(checked bool) {
skipViews = checked
})
form.AddCheckbox("Skip Sequences", false, func(checked bool) {
skipSequences = checked
})
form.AddTextView("Help", getImportHelpText(), 0, 7, true, false)
// Buttons
form.AddButton("Import & Merge [i]", func() {
se.importAndMergeDatabase(currentFormat, filePath, connString, skipDomains, skipRelations, skipEnums, skipViews, skipSequences, skipTables)
})
form.AddButton("Back [b]", func() {
se.pages.RemovePage("import-database")
se.pages.SwitchToPage("main")
})
form.AddButton("Exit [q]", func() {
se.app.Stop()
})
// Keyboard shortcuts
form.SetInputCapture(func(event *tcell.EventKey) *tcell.EventKey {
if event.Key() == tcell.KeyEscape {
se.pages.RemovePage("import-database")
se.pages.SwitchToPage("main")
return nil
}
switch event.Rune() {
case 'i':
se.importAndMergeDatabase(currentFormat, filePath, connString, skipDomains, skipRelations, skipEnums, skipViews, skipSequences, skipTables)
return nil
case 'b':
se.pages.RemovePage("import-database")
se.pages.SwitchToPage("main")
return nil
case 'q':
se.app.Stop()
return nil
}
return event
})
flex.AddItem(title, 1, 0, false).
AddItem(form, 0, 1, true)
se.pages.AddAndSwitchToPage("import-database", flex, true)
}
// importAndMergeDatabase imports and merges a database from the specified configuration
func (se *SchemaEditor) importAndMergeDatabase(format, filePath, connString string, skipDomains, skipRelations, skipEnums, skipViews, skipSequences bool, skipTables string) {
// Validate input
if format == "pgsql" {
if connString == "" {
se.showErrorDialog("Error", "Connection string is required for PostgreSQL")
return
}
} else {
if filePath == "" {
se.showErrorDialog("Error", "File path is required for "+format)
return
}
// Expand home directory
if len(filePath) > 0 && filePath[0] == '~' {
home, err := os.UserHomeDir()
if err == nil {
filePath = filepath.Join(home, filePath[1:])
}
}
}
// Create reader
var reader readers.Reader
switch format {
case "dbml":
reader = rdbml.NewReader(&readers.ReaderOptions{FilePath: filePath})
case "dctx":
reader = rdctx.NewReader(&readers.ReaderOptions{FilePath: filePath})
case "drawdb":
reader = rdrawdb.NewReader(&readers.ReaderOptions{FilePath: filePath})
case "graphql":
reader = rgraphql.NewReader(&readers.ReaderOptions{FilePath: filePath})
case "json":
reader = rjson.NewReader(&readers.ReaderOptions{FilePath: filePath})
case "yaml":
reader = ryaml.NewReader(&readers.ReaderOptions{FilePath: filePath})
case "gorm":
reader = rgorm.NewReader(&readers.ReaderOptions{FilePath: filePath})
case "bun":
reader = rbun.NewReader(&readers.ReaderOptions{FilePath: filePath})
case "drizzle":
reader = rdrizzle.NewReader(&readers.ReaderOptions{FilePath: filePath})
case "prisma":
reader = rprisma.NewReader(&readers.ReaderOptions{FilePath: filePath})
case "typeorm":
reader = rtypeorm.NewReader(&readers.ReaderOptions{FilePath: filePath})
case "pgsql":
reader = rpgsql.NewReader(&readers.ReaderOptions{ConnectionString: connString})
default:
se.showErrorDialog("Error", "Unsupported format: "+format)
return
}
// Read the database to import
importDb, err := reader.ReadDatabase()
if err != nil {
se.showErrorDialog("Import Error", fmt.Sprintf("Failed to read database: %v", err))
return
}
// Show confirmation dialog
se.showImportConfirmation(importDb, skipDomains, skipRelations, skipEnums, skipViews, skipSequences, skipTables)
}
// showImportConfirmation shows a confirmation dialog before merging
func (se *SchemaEditor) showImportConfirmation(importDb *models.Database, skipDomains, skipRelations, skipEnums, skipViews, skipSequences bool, skipTables string) {
confirmText := fmt.Sprintf("Import & Merge Database?\n\nSource: %s\nTarget: %s\n\nThis will add missing schemas, tables, columns, and other objects from the source to your database.\n\nExisting items will NOT be modified.",
importDb.Name, se.db.Name)
modal := tview.NewModal().
SetText(confirmText).
AddButtons([]string{"Cancel", "Merge"}).
SetDoneFunc(func(buttonIndex int, buttonLabel string) {
se.pages.RemovePage("import-confirm")
if buttonLabel == "Merge" {
se.performMerge(importDb, skipDomains, skipRelations, skipEnums, skipViews, skipSequences, skipTables)
}
})
modal.SetInputCapture(func(event *tcell.EventKey) *tcell.EventKey {
if event.Key() == tcell.KeyEscape {
se.pages.RemovePage("import-confirm")
se.pages.SwitchToPage("import-database")
return nil
}
return event
})
se.pages.AddAndSwitchToPage("import-confirm", modal, true)
}
// performMerge performs the actual merge operation
func (se *SchemaEditor) performMerge(importDb *models.Database, skipDomains, skipRelations, skipEnums, skipViews, skipSequences bool, skipTables string) {
// Create merge options
opts := &merge.MergeOptions{
SkipDomains: skipDomains,
SkipRelations: skipRelations,
SkipEnums: skipEnums,
SkipViews: skipViews,
SkipSequences: skipSequences,
}
// Parse skip tables
if skipTables != "" {
opts.SkipTableNames = parseSkipTablesUI(skipTables)
}
// Perform the merge
result := merge.MergeDatabases(se.db, importDb, opts)
// Update the database timestamp
se.db.UpdateDate()
// Show success dialog with summary
summary := merge.GetMergeSummary(result)
se.showSuccessDialog("Import Complete", summary, func() {
se.pages.RemovePage("import-database")
se.pages.RemovePage("main")
se.pages.AddPage("main", se.createMainMenu(), true, true)
})
}
// getImportHelpText returns the help text for the import screen
func getImportHelpText() string {
return `Import & Merge: Adds missing schemas, tables, columns, and other objects to your existing database.
File-based formats: dbml, dctx, drawdb, graphql, json, yaml, gorm, bun, drizzle, prisma, typeorm
Database formats: pgsql (requires connection string)
Skip options: Check to exclude specific object types from the merge.`
}
func parseSkipTablesUI(skipTablesStr string) map[string]bool {
skipTables := make(map[string]bool)
if skipTablesStr == "" {
return skipTables
}
// Split by comma and trim whitespace
parts := strings.Split(skipTablesStr, ",")
for _, part := range parts {
trimmed := strings.TrimSpace(part)
if trimmed != "" {
// Store in lowercase for case-insensitive matching
skipTables[strings.ToLower(trimmed)] = true
}
}
return skipTables
}
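loadDatabase and importAndMergeDatabase duplicate the twelve-way reader switch verbatim. A shared factory would keep the two in lockstep; sketched here with a few cases — newReaderFor is hypothetical, and the remaining formats follow the same pattern:

	// newReaderFor centralizes reader construction; ok is false for an
	// unsupported format so callers can show their own error dialog.
	func newReaderFor(format, filePath, connString string) (readers.Reader, bool) {
		fileOpts := &readers.ReaderOptions{FilePath: filePath}
		switch format {
		case "dbml":
			return rdbml.NewReader(fileOpts), true
		case "json":
			return rjson.NewReader(fileOpts), true
		case "yaml":
			return ryaml.NewReader(fileOpts), true
		// ... dctx, drawdb, graphql, gorm, bun, drizzle, prisma, typeorm ...
		case "pgsql":
			return rpgsql.NewReader(&readers.ReaderOptions{ConnectionString: connString}), true
		default:
			return nil, false
		}
	}

The writer switch in saveDatabase could get the same treatment.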

65
pkg/ui/main_menu.go Normal file

@@ -0,0 +1,65 @@
package ui
import (
"fmt"
"github.com/gdamore/tcell/v2"
"github.com/rivo/tview"
)
// createMainMenu creates the main menu screen
func (se *SchemaEditor) createMainMenu() tview.Primitive {
flex := tview.NewFlex().SetDirection(tview.FlexRow)
// Title with database name
dbName := se.db.Name
if dbName == "" {
dbName = "Untitled"
}
updateAtStr := ""
if se.db.UpdatedAt != "" {
updateAtStr = fmt.Sprintf("Updated @ %s", se.db.UpdatedAt)
}
titleText := fmt.Sprintf("[::b]RelSpec Schema Editor\n[::d]Database: %s %s\n[::d]Press arrow keys to navigate, Enter to select", dbName, updateAtStr)
title := tview.NewTextView().
SetText(titleText).
SetDynamicColors(true)
// Menu options
menu := tview.NewList().
AddItem("Edit Database", "Edit database name, description, and properties", 'e', func() {
se.showEditDatabaseForm()
}).
AddItem("Manage Schemas", "View, create, edit, and delete schemas", 's', func() {
se.showSchemaList()
}).
AddItem("Manage Tables", "View and manage tables in schemas", 't', func() {
se.showTableList()
}).
AddItem("Manage Domains", "View, create, edit, and delete domains", 'd', func() {
se.showDomainList()
}).
AddItem("Import & Merge", "Import and merge schema from another database", 'i', func() {
se.showImportScreen()
}).
AddItem("Save Database", "Save database to file or database", 'w', func() {
se.showSaveScreen()
}).
AddItem("Exit Editor", "Exit the editor", 'q', func() {
se.app.Stop()
})
menu.SetBorder(true).SetTitle(" Menu ").SetTitleAlign(tview.AlignLeft)
menu.SetInputCapture(func(event *tcell.EventKey) *tcell.EventKey {
if event.Key() == tcell.KeyEscape {
se.showExitEditorConfirm()
return nil
}
return event
})
flex.AddItem(title, 5, 0, false).
AddItem(menu, 0, 1, true)
return flex
}

55
pkg/ui/schema_dataops.go Normal file

@@ -0,0 +1,55 @@
package ui
import "git.warky.dev/wdevs/relspecgo/pkg/models"
// Schema data operations - business logic for schema management
// CreateSchema creates a new schema and adds it to the database
func (se *SchemaEditor) CreateSchema(name, description string) *models.Schema {
newSchema := &models.Schema{
Name: name,
Description: description,
Tables: make([]*models.Table, 0),
Sequences: make([]*models.Sequence, 0),
Enums: make([]*models.Enum, 0),
}
se.db.UpdateDate()
se.db.Schemas = append(se.db.Schemas, newSchema)
return newSchema
}
// UpdateSchema updates an existing schema's properties
func (se *SchemaEditor) UpdateSchema(schemaIndex int, name, owner, description string) {
if schemaIndex < 0 || schemaIndex >= len(se.db.Schemas) {
return
}
se.db.UpdateDate()
schema := se.db.Schemas[schemaIndex]
schema.Name = name
schema.Owner = owner
schema.Description = description
schema.UpdateDate()
}
// DeleteSchema removes a schema from the database
func (se *SchemaEditor) DeleteSchema(schemaIndex int) bool {
if schemaIndex < 0 || schemaIndex >= len(se.db.Schemas) {
return false
}
se.db.UpdateDate()
se.db.Schemas = append(se.db.Schemas[:schemaIndex], se.db.Schemas[schemaIndex+1:]...)
return true
}
// GetSchema returns a schema by index
func (se *SchemaEditor) GetSchema(schemaIndex int) *models.Schema {
if schemaIndex < 0 || schemaIndex >= len(se.db.Schemas) {
return nil
}
return se.db.Schemas[schemaIndex]
}
// GetAllSchemas returns all schemas
func (se *SchemaEditor) GetAllSchemas() []*models.Schema {
return se.db.Schemas
}
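All of these operations address schemas positionally, so an index captured before DeleteSchema can silently point at a different schema afterwards. A small sketch of the pitfall (names illustrative):

	se.CreateSchema("audit", "")
	se.CreateSchema("billing", "")
	idx := 1           // billing
	se.DeleteSchema(0) // audit removed; billing shifts to index 0
	s := se.GetSchema(idx) // nil here, or the wrong schema if others exist
	_ = s                  // re-resolve indexes after any delete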

362
pkg/ui/schema_screens.go Normal file

@@ -0,0 +1,362 @@
package ui
import (
"fmt"
"strings"
"github.com/gdamore/tcell/v2"
"github.com/rivo/tview"
"git.warky.dev/wdevs/relspecgo/pkg/models"
)
// showSchemaList displays the schema management screen
func (se *SchemaEditor) showSchemaList() {
flex := tview.NewFlex().SetDirection(tview.FlexRow)
// Title
title := tview.NewTextView().
SetText("[::b]Manage Schemas").
SetDynamicColors(true).
SetTextAlign(tview.AlignCenter)
// Create schemas table
schemaTable := tview.NewTable().SetBorders(true).SetSelectable(true, false).SetFixed(1, 0)
// Add header row with padding for full width
headers := []string{"Name", "Sequence", "Total Tables", "Total Sequences", "Total Views", "GUID", "Description"}
headerWidths := []int{20, 15, 20, 20, 15, 36} // Last column takes remaining space
for i, header := range headers {
padding := ""
if i < len(headerWidths) {
padding = strings.Repeat(" ", headerWidths[i]-len(header))
}
cell := tview.NewTableCell(header + padding).
SetTextColor(tcell.ColorYellow).
SetSelectable(false).
SetAlign(tview.AlignLeft)
schemaTable.SetCell(0, i, cell)
}
// Add existing schemas
for row, schema := range se.db.Schemas {
schema := schema // capture for closure
// Name - pad to 20 chars
nameStr := fmt.Sprintf("%-20s", schema.Name)
nameCell := tview.NewTableCell(nameStr).SetSelectable(true)
schemaTable.SetCell(row+1, 0, nameCell)
// Sequence - pad to 15 chars
seqStr := fmt.Sprintf("%-15s", fmt.Sprintf("%d", schema.Sequence))
seqCell := tview.NewTableCell(seqStr).SetSelectable(true)
schemaTable.SetCell(row+1, 1, seqCell)
// Total Tables - pad to 20 chars
tablesStr := fmt.Sprintf("%-20s", fmt.Sprintf("%d", len(schema.Tables)))
tablesCell := tview.NewTableCell(tablesStr).SetSelectable(true)
schemaTable.SetCell(row+1, 2, tablesCell)
// Total Sequences - pad to 20 chars
sequencesStr := fmt.Sprintf("%-20s", fmt.Sprintf("%d", len(schema.Sequences)))
sequencesCell := tview.NewTableCell(sequencesStr).SetSelectable(true)
schemaTable.SetCell(row+1, 3, sequencesCell)
// Total Views - pad to 15 chars
viewsStr := fmt.Sprintf("%-15s", fmt.Sprintf("%d", len(schema.Views)))
viewsCell := tview.NewTableCell(viewsStr).SetSelectable(true)
schemaTable.SetCell(row+1, 4, viewsCell)
// GUID - pad to 36 chars
guidStr := fmt.Sprintf("%-36s", schema.GUID)
guidCell := tview.NewTableCell(guidStr).SetSelectable(true)
schemaTable.SetCell(row+1, 5, guidCell)
// Description - no padding, takes remaining space
descCell := tview.NewTableCell(schema.Description).SetSelectable(true)
schemaTable.SetCell(row+1, 6, descCell)
}
schemaTable.SetTitle(" Schemas ").SetBorder(true).SetTitleAlign(tview.AlignLeft)
// Action buttons flex (define before input capture)
btnFlex := tview.NewFlex()
btnNewSchema := tview.NewButton("New Schema [n]").SetSelectedFunc(func() {
se.showNewSchemaDialog()
})
btnBack := tview.NewButton("Back [b]").SetSelectedFunc(func() {
se.pages.SwitchToPage("main")
se.pages.RemovePage("schemas")
})
// Set up button input captures for Tab/Shift+Tab navigation
btnNewSchema.SetInputCapture(func(event *tcell.EventKey) *tcell.EventKey {
if event.Key() == tcell.KeyBacktab {
se.app.SetFocus(schemaTable)
return nil
}
if event.Key() == tcell.KeyTab {
se.app.SetFocus(btnBack)
return nil
}
return event
})
btnBack.SetInputCapture(func(event *tcell.EventKey) *tcell.EventKey {
if event.Key() == tcell.KeyBacktab {
se.app.SetFocus(btnNewSchema)
return nil
}
if event.Key() == tcell.KeyTab {
se.app.SetFocus(schemaTable)
return nil
}
return event
})
btnFlex.AddItem(btnNewSchema, 0, 1, true).
AddItem(btnBack, 0, 1, false)
schemaTable.SetInputCapture(func(event *tcell.EventKey) *tcell.EventKey {
if event.Key() == tcell.KeyEscape {
se.pages.SwitchToPage("main")
se.pages.RemovePage("schemas")
return nil
}
if event.Key() == tcell.KeyTab {
se.app.SetFocus(btnNewSchema)
return nil
}
if event.Key() == tcell.KeyEnter {
row, _ := schemaTable.GetSelection()
if row > 0 && row <= len(se.db.Schemas) { // Skip header row
schemaIndex := row - 1
se.showSchemaEditor(schemaIndex, se.db.Schemas[schemaIndex])
return nil
}
}
if event.Rune() == 'n' {
se.showNewSchemaDialog()
return nil
}
if event.Rune() == 'b' {
se.pages.SwitchToPage("main")
se.pages.RemovePage("schemas")
return nil
}
return event
})
flex.AddItem(title, 1, 0, false).
AddItem(schemaTable, 0, 1, true).
AddItem(btnFlex, 1, 0, false)
se.pages.AddPage("schemas", flex, true, true)
}
// showSchemaEditor shows the editor for a specific schema
func (se *SchemaEditor) showSchemaEditor(index int, schema *models.Schema) {
flex := tview.NewFlex().SetDirection(tview.FlexRow)
// Title
title := tview.NewTextView().
SetText(fmt.Sprintf("[::b]Schema: %s", schema.Name)).
SetDynamicColors(true).
SetTextAlign(tview.AlignCenter)
// Schema info display
info := tview.NewTextView().SetDynamicColors(true)
info.SetText(fmt.Sprintf("Tables: %d | Description: %s",
len(schema.Tables), schema.Description))
// Table list
tableList := tview.NewList().ShowSecondaryText(true)
for i, table := range schema.Tables {
tableIndex := i
table := table
colCount := len(table.Columns)
tableList.AddItem(table.Name, fmt.Sprintf("%d columns", colCount), rune('0'+i), func() {
se.showTableEditor(index, tableIndex, table)
})
}
tableList.AddItem("[New Table]", "Add a new table to this schema", 'n', func() {
se.showNewTableDialog(index)
})
tableList.AddItem("[Edit Schema Info]", "Edit schema properties", 'e', func() {
se.showEditSchemaDialog(index)
})
tableList.AddItem("[Delete Schema]", "Delete this schema", 'd', func() {
se.showDeleteSchemaConfirm(index)
})
tableList.SetBorder(true).SetTitle(" Tables ").SetTitleAlign(tview.AlignLeft)
// Action buttons (define before input capture)
btnFlex := tview.NewFlex()
btnNewTable := tview.NewButton("New Table [n]").SetSelectedFunc(func() {
se.showNewTableDialog(index)
})
btnBack := tview.NewButton("Back to Schemas [b]").SetSelectedFunc(func() {
se.pages.RemovePage("schema-editor")
se.pages.SwitchToPage("schemas")
})
// Set up button input captures for Tab/Shift+Tab navigation
btnNewTable.SetInputCapture(func(event *tcell.EventKey) *tcell.EventKey {
if event.Key() == tcell.KeyBacktab {
se.app.SetFocus(tableList)
return nil
}
if event.Key() == tcell.KeyTab {
se.app.SetFocus(btnBack)
return nil
}
return event
})
btnBack.SetInputCapture(func(event *tcell.EventKey) *tcell.EventKey {
if event.Key() == tcell.KeyBacktab {
se.app.SetFocus(btnNewTable)
return nil
}
if event.Key() == tcell.KeyTab {
se.app.SetFocus(tableList)
return nil
}
return event
})
tableList.SetInputCapture(func(event *tcell.EventKey) *tcell.EventKey {
if event.Key() == tcell.KeyEscape {
se.pages.RemovePage("schema-editor")
se.pages.SwitchToPage("schemas")
return nil
}
if event.Key() == tcell.KeyTab {
se.app.SetFocus(btnNewTable)
return nil
}
if event.Rune() == 'b' {
se.pages.RemovePage("schema-editor")
se.pages.SwitchToPage("schemas")
}
return event
})
btnFlex.AddItem(btnNewTable, 0, 1, true).
AddItem(btnBack, 0, 1, false)
flex.AddItem(title, 1, 0, false).
AddItem(info, 2, 0, false).
AddItem(tableList, 0, 1, true).
AddItem(btnFlex, 1, 0, false)
se.pages.AddPage("schema-editor", flex, true, true)
}
// showNewSchemaDialog shows dialog to create a new schema
func (se *SchemaEditor) showNewSchemaDialog() {
form := tview.NewForm()
schemaName := ""
description := ""
form.AddInputField("Schema Name", "", 40, nil, func(value string) {
schemaName = value
})
form.AddInputField("Description", "", 40, nil, func(value string) {
description = value
})
form.AddButton("Save", func() {
if schemaName == "" {
return
}
se.CreateSchema(schemaName, description)
se.pages.RemovePage("new-schema")
se.pages.RemovePage("schemas")
se.showSchemaList()
})
form.AddButton("Back", func() {
se.pages.RemovePage("new-schema")
se.pages.RemovePage("schemas")
se.showSchemaList()
})
form.SetBorder(true).SetTitle(" New Schema ").SetTitleAlign(tview.AlignLeft)
form.SetInputCapture(func(event *tcell.EventKey) *tcell.EventKey {
if event.Key() == tcell.KeyEscape {
se.showExitConfirmation("new-schema", "schemas")
return nil
}
return event
})
se.pages.AddPage("new-schema", form, true, true)
}
// showEditSchemaDialog shows dialog to edit schema properties
func (se *SchemaEditor) showEditSchemaDialog(schemaIndex int) {
schema := se.db.Schemas[schemaIndex]
form := tview.NewForm()
// Local variables to collect changes
newName := schema.Name
newOwner := schema.Owner
newDescription := schema.Description
newGUID := schema.GUID
form.AddInputField("Schema Name", schema.Name, 40, nil, func(value string) {
newName = value
})
form.AddInputField("Owner", schema.Owner, 40, nil, func(value string) {
newOwner = value
})
form.AddTextArea("Description", schema.Description, 40, 5, 0, func(value string) {
newDescription = value
})
form.AddInputField("GUID", schema.GUID, 40, nil, func(value string) {
newGUID = value
})
form.AddButton("Save", func() {
// Apply changes using dataops
se.UpdateSchema(schemaIndex, newName, newOwner, newDescription)
se.db.Schemas[schemaIndex].GUID = newGUID
schema := se.db.Schemas[schemaIndex]
se.pages.RemovePage("edit-schema")
se.pages.RemovePage("schema-editor")
se.showSchemaEditor(schemaIndex, schema)
})
form.AddButton("Back", func() {
// Discard changes - don't apply them
schema := se.db.Schemas[schemaIndex]
se.pages.RemovePage("edit-schema")
se.pages.RemovePage("schema-editor")
se.showSchemaEditor(schemaIndex, schema)
})
form.SetBorder(true).SetTitle(" Edit Schema ").SetTitleAlign(tview.AlignLeft)
form.SetInputCapture(func(event *tcell.EventKey) *tcell.EventKey {
if event.Key() == tcell.KeyEscape {
se.showExitConfirmation("edit-schema", "schema-editor")
return nil
}
return event
})
se.pages.AddPage("edit-schema", form, true, true)
}

88
pkg/ui/table_dataops.go Normal file

@@ -0,0 +1,88 @@
package ui
import "git.warky.dev/wdevs/relspecgo/pkg/models"
// Table data operations - business logic for table management
// CreateTable creates a new table and adds it to a schema
func (se *SchemaEditor) CreateTable(schemaIndex int, name, description string) *models.Table {
if schemaIndex < 0 || schemaIndex >= len(se.db.Schemas) {
return nil
}
schema := se.db.Schemas[schemaIndex]
newTable := &models.Table{
Name: name,
Schema: schema.Name,
Description: description,
Columns: make(map[string]*models.Column),
Constraints: make(map[string]*models.Constraint),
Indexes: make(map[string]*models.Index),
}
schema.UpdateDate()
schema.Tables = append(schema.Tables, newTable)
return newTable
}
// UpdateTable updates an existing table's properties
func (se *SchemaEditor) UpdateTable(schemaIndex, tableIndex int, name, description string) {
if schemaIndex < 0 || schemaIndex >= len(se.db.Schemas) {
return
}
schema := se.db.Schemas[schemaIndex]
if tableIndex < 0 || tableIndex >= len(schema.Tables) {
return
}
schema.UpdateDate()
table := schema.Tables[tableIndex]
table.Name = name
table.Description = description
table.UpdateDate()
}
// DeleteTable removes a table from a schema
func (se *SchemaEditor) DeleteTable(schemaIndex, tableIndex int) bool {
if schemaIndex < 0 || schemaIndex >= len(se.db.Schemas) {
return false
}
schema := se.db.Schemas[schemaIndex]
if tableIndex < 0 || tableIndex >= len(schema.Tables) {
return false
}
schema.UpdateDate()
schema.Tables = append(schema.Tables[:tableIndex], schema.Tables[tableIndex+1:]...)
return true
}
// GetTable returns a table by schema and table index
func (se *SchemaEditor) GetTable(schemaIndex, tableIndex int) *models.Table {
if schemaIndex < 0 || schemaIndex >= len(se.db.Schemas) {
return nil
}
schema := se.db.Schemas[schemaIndex]
if tableIndex < 0 || tableIndex >= len(schema.Tables) {
return nil
}
return schema.Tables[tableIndex]
}
// GetAllTables returns all tables across all schemas
func (se *SchemaEditor) GetAllTables() []*models.Table {
var tables []*models.Table
for _, schema := range se.db.Schemas {
tables = append(tables, schema.Tables...)
}
return tables
}
// GetTablesInSchema returns all tables in a specific schema
func (se *SchemaEditor) GetTablesInSchema(schemaIndex int) []*models.Table {
if schemaIndex < 0 || schemaIndex >= len(se.db.Schemas) {
return nil
}
return se.db.Schemas[schemaIndex].Tables
}

546
pkg/ui/table_screens.go Normal file
View File

@@ -0,0 +1,546 @@
package ui
import (
"fmt"
"strings"
"github.com/gdamore/tcell/v2"
"github.com/rivo/tview"
"git.warky.dev/wdevs/relspecgo/pkg/models"
)
// showTableList displays all tables across all schemas
func (se *SchemaEditor) showTableList() {
flex := tview.NewFlex().SetDirection(tview.FlexRow)
// Title
title := tview.NewTextView().
SetText("[::b]All Tables").
SetDynamicColors(true).
SetTextAlign(tview.AlignCenter)
// Create tables table
tableTable := tview.NewTable().SetBorders(true).SetSelectable(true, false).SetFixed(1, 0)
// Add header row with padding for full width
headers := []string{"Name", "Schema", "Sequence", "Total Columns", "Total Relations", "Total Indexes", "GUID", "Description", "Comment"}
headerWidths := []int{18, 15, 12, 14, 15, 14, 36, 0, 12} // Description gets remainder
for i, header := range headers {
padding := ""
if i < len(headerWidths) && headerWidths[i] > 0 {
padding = strings.Repeat(" ", headerWidths[i]-len(header))
}
cell := tview.NewTableCell(header + padding).
SetTextColor(tcell.ColorYellow).
SetSelectable(false).
SetAlign(tview.AlignLeft)
tableTable.SetCell(0, i, cell)
}
var tables []*models.Table
var tableLocations []struct{ schemaIdx, tableIdx int }
for si, schema := range se.db.Schemas {
for ti, table := range schema.Tables {
tables = append(tables, table)
tableLocations = append(tableLocations, struct{ schemaIdx, tableIdx int }{si, ti})
}
}
for row, table := range tables {
tableIdx := tableLocations[row]
schema := se.db.Schemas[tableIdx.schemaIdx]
// Name - pad to 18 chars
nameStr := fmt.Sprintf("%-18s", table.Name)
nameCell := tview.NewTableCell(nameStr).SetSelectable(true)
tableTable.SetCell(row+1, 0, nameCell)
// Schema - pad to 15 chars
schemaStr := fmt.Sprintf("%-15s", schema.Name)
schemaCell := tview.NewTableCell(schemaStr).SetSelectable(true)
tableTable.SetCell(row+1, 1, schemaCell)
// Sequence - pad to 12 chars
seqStr := fmt.Sprintf("%-12s", fmt.Sprintf("%d", table.Sequence))
seqCell := tview.NewTableCell(seqStr).SetSelectable(true)
tableTable.SetCell(row+1, 2, seqCell)
// Total Columns - pad to 14 chars
colsStr := fmt.Sprintf("%-14s", fmt.Sprintf("%d", len(table.Columns)))
colsCell := tview.NewTableCell(colsStr).SetSelectable(true)
tableTable.SetCell(row+1, 3, colsCell)
// Total Relations - pad to 15 chars
relsStr := fmt.Sprintf("%-15s", fmt.Sprintf("%d", len(table.Relationships)))
relsCell := tview.NewTableCell(relsStr).SetSelectable(true)
tableTable.SetCell(row+1, 4, relsCell)
// Total Indexes - pad to 14 chars
idxStr := fmt.Sprintf("%-14s", fmt.Sprintf("%d", len(table.Indexes)))
idxCell := tview.NewTableCell(idxStr).SetSelectable(true)
tableTable.SetCell(row+1, 5, idxCell)
// GUID - pad to 36 chars
guidStr := fmt.Sprintf("%-36s", table.GUID)
guidCell := tview.NewTableCell(guidStr).SetSelectable(true)
tableTable.SetCell(row+1, 6, guidCell)
// Description - no padding, takes remaining space
descCell := tview.NewTableCell(table.Description).SetSelectable(true)
tableTable.SetCell(row+1, 7, descCell)
// Comment - pad to 12 chars
commentStr := fmt.Sprintf("%-12s", table.Comment)
commentCell := tview.NewTableCell(commentStr).SetSelectable(true)
tableTable.SetCell(row+1, 8, commentCell)
}
tableTable.SetTitle(" All Tables ").SetBorder(true).SetTitleAlign(tview.AlignLeft)
// Action buttons (define before input capture)
btnFlex := tview.NewFlex()
btnNewTable := tview.NewButton("New Table [n]").SetSelectedFunc(func() {
se.showNewTableDialogFromList()
})
btnBack := tview.NewButton("Back [b]").SetSelectedFunc(func() {
se.pages.SwitchToPage("main")
se.pages.RemovePage("tables")
})
// Set up button input captures for Tab/Shift+Tab navigation
btnNewTable.SetInputCapture(func(event *tcell.EventKey) *tcell.EventKey {
if event.Key() == tcell.KeyBacktab {
se.app.SetFocus(tableTable)
return nil
}
if event.Key() == tcell.KeyTab {
se.app.SetFocus(btnBack)
return nil
}
return event
})
btnBack.SetInputCapture(func(event *tcell.EventKey) *tcell.EventKey {
if event.Key() == tcell.KeyBacktab {
se.app.SetFocus(btnNewTable)
return nil
}
if event.Key() == tcell.KeyTab {
se.app.SetFocus(tableTable)
return nil
}
return event
})
btnFlex.AddItem(btnNewTable, 0, 1, true).
AddItem(btnBack, 0, 1, false)
tableTable.SetInputCapture(func(event *tcell.EventKey) *tcell.EventKey {
if event.Key() == tcell.KeyEscape {
se.pages.SwitchToPage("main")
se.pages.RemovePage("tables")
return nil
}
if event.Key() == tcell.KeyTab {
se.app.SetFocus(btnNewTable)
return nil
}
if event.Key() == tcell.KeyEnter {
row, _ := tableTable.GetSelection()
if row > 0 && row <= len(tables) { // Skip header row
tableIdx := tableLocations[row-1]
se.showTableEditor(tableIdx.schemaIdx, tableIdx.tableIdx, tables[row-1])
return nil
}
}
if event.Rune() == 'n' {
se.showNewTableDialogFromList()
return nil
}
if event.Rune() == 'b' {
se.pages.SwitchToPage("main")
se.pages.RemovePage("tables")
return nil
}
return event
})
flex.AddItem(title, 1, 0, false).
AddItem(tableTable, 0, 1, true).
AddItem(btnFlex, 1, 0, false)
se.pages.AddPage("tables", flex, true, true)
}
// showTableEditor shows editor for a specific table
func (se *SchemaEditor) showTableEditor(schemaIndex, tableIndex int, table *models.Table) {
flex := tview.NewFlex().SetDirection(tview.FlexRow)
// Title
title := tview.NewTextView().
SetText(fmt.Sprintf("[::b]Table: %s", table.Name)).
SetDynamicColors(true).
SetTextAlign(tview.AlignCenter)
// Table info
info := tview.NewTextView().SetDynamicColors(true)
info.SetText(fmt.Sprintf("Schema: %s | Columns: %d | Description: %s",
table.Schema, len(table.Columns), table.Description))
// Create columns table
colTable := tview.NewTable().SetBorders(true).SetSelectable(true, false).SetFixed(1, 0)
// Add header row with padding for full width
headers := []string{"Name", "Type", "Default", "KeyType", "GUID", "Description"}
headerWidths := []int{20, 18, 15, 15, 36} // Last column takes remaining space
for i, header := range headers {
padding := ""
if i < len(headerWidths) {
padding = strings.Repeat(" ", headerWidths[i]-len(header))
}
cell := tview.NewTableCell(header + padding).
SetTextColor(tcell.ColorYellow).
SetSelectable(false).
SetAlign(tview.AlignLeft)
colTable.SetCell(0, i, cell)
}
// Get sorted column names
columnNames := getColumnNames(table)
for row, colName := range columnNames {
column := table.Columns[colName]
// Name - pad to 20 chars
nameStr := fmt.Sprintf("%-20s", colName)
nameCell := tview.NewTableCell(nameStr).SetSelectable(true)
colTable.SetCell(row+1, 0, nameCell)
// Type - pad to 18 chars
typeStr := fmt.Sprintf("%-18s", column.Type)
typeCell := tview.NewTableCell(typeStr).SetSelectable(true)
colTable.SetCell(row+1, 1, typeCell)
// Default - pad to 15 chars
defaultStr := ""
if column.Default != nil {
defaultStr = fmt.Sprintf("%v", column.Default)
}
defaultStr = fmt.Sprintf("%-15s", defaultStr)
defaultCell := tview.NewTableCell(defaultStr).SetSelectable(true)
colTable.SetCell(row+1, 2, defaultCell)
// KeyType - pad to 15 chars
keyTypeStr := ""
if column.IsPrimaryKey {
keyTypeStr = "PRIMARY"
} else if column.NotNull {
keyTypeStr = "NOT NULL"
}
keyTypeStr = fmt.Sprintf("%-15s", keyTypeStr)
keyTypeCell := tview.NewTableCell(keyTypeStr).SetSelectable(true)
colTable.SetCell(row+1, 3, keyTypeCell)
// GUID - pad to 36 chars
guidStr := fmt.Sprintf("%-36s", column.GUID)
guidCell := tview.NewTableCell(guidStr).SetSelectable(true)
colTable.SetCell(row+1, 4, guidCell)
// Description
descCell := tview.NewTableCell(column.Description).SetSelectable(true)
colTable.SetCell(row+1, 5, descCell)
}
colTable.SetTitle(" Columns ").SetBorder(true).SetTitleAlign(tview.AlignLeft)
// Action buttons flex (define before input capture)
btnFlex := tview.NewFlex()
btnNewCol := tview.NewButton("Add Column [n]").SetSelectedFunc(func() {
se.showNewColumnDialog(schemaIndex, tableIndex)
})
btnEditTable := tview.NewButton("Edit Table [e]").SetSelectedFunc(func() {
se.showEditTableDialog(schemaIndex, tableIndex)
})
btnEditColumn := tview.NewButton("Edit Column [c]").SetSelectedFunc(func() {
row, _ := colTable.GetSelection()
if row > 0 && row <= len(columnNames) { // Skip header row
colName := columnNames[row-1]
column := table.Columns[colName]
se.showColumnEditor(schemaIndex, tableIndex, row-1, column)
}
})
btnDelTable := tview.NewButton("Delete Table [d]").SetSelectedFunc(func() {
se.showDeleteTableConfirm(schemaIndex, tableIndex)
})
btnBack := tview.NewButton("Back to Schema [b]").SetSelectedFunc(func() {
se.pages.RemovePage("table-editor")
se.pages.SwitchToPage("schema-editor")
})
// Set up button input captures for Tab/Shift+Tab navigation
btnNewCol.SetInputCapture(func(event *tcell.EventKey) *tcell.EventKey {
if event.Key() == tcell.KeyBacktab {
se.app.SetFocus(colTable)
return nil
}
if event.Key() == tcell.KeyTab {
se.app.SetFocus(btnEditColumn)
return nil
}
return event
})
btnEditColumn.SetInputCapture(func(event *tcell.EventKey) *tcell.EventKey {
if event.Key() == tcell.KeyBacktab {
se.app.SetFocus(btnNewCol)
return nil
}
if event.Key() == tcell.KeyTab {
se.app.SetFocus(btnEditTable)
return nil
}
return event
})
btnEditTable.SetInputCapture(func(event *tcell.EventKey) *tcell.EventKey {
if event.Key() == tcell.KeyBacktab {
se.app.SetFocus(btnEditColumn)
return nil
}
if event.Key() == tcell.KeyTab {
se.app.SetFocus(btnDelTable)
return nil
}
return event
})
btnDelTable.SetInputCapture(func(event *tcell.EventKey) *tcell.EventKey {
if event.Key() == tcell.KeyBacktab {
se.app.SetFocus(btnEditTable)
return nil
}
if event.Key() == tcell.KeyTab {
se.app.SetFocus(btnBack)
return nil
}
return event
})
btnBack.SetInputCapture(func(event *tcell.EventKey) *tcell.EventKey {
if event.Key() == tcell.KeyBacktab {
se.app.SetFocus(btnDelTable)
return nil
}
if event.Key() == tcell.KeyTab {
se.app.SetFocus(colTable)
return nil
}
return event
})
btnFlex.AddItem(btnNewCol, 0, 1, true).
AddItem(btnEditColumn, 0, 1, false).
AddItem(btnEditTable, 0, 1, false).
AddItem(btnDelTable, 0, 1, false).
AddItem(btnBack, 0, 1, false)
colTable.SetInputCapture(func(event *tcell.EventKey) *tcell.EventKey {
if event.Key() == tcell.KeyEscape {
se.pages.SwitchToPage("schema-editor")
se.pages.RemovePage("table-editor")
return nil
}
if event.Key() == tcell.KeyTab {
se.app.SetFocus(btnNewCol)
return nil
}
if event.Key() == tcell.KeyEnter {
row, _ := colTable.GetSelection()
if row > 0 && row <= len(columnNames) { // Skip header row
colName := columnNames[row-1]
column := table.Columns[colName]
se.showColumnEditor(schemaIndex, tableIndex, row-1, column)
return nil
}
}
if event.Rune() == 'c' {
row, _ := colTable.GetSelection()
if row > 0 && row <= len(columnNames) { // Skip header row
colName := columnNames[row-1]
column := table.Columns[colName]
se.showColumnEditor(schemaIndex, tableIndex, row-1, column)
}
return nil
}
if event.Rune() == 'b' {
se.pages.RemovePage("table-editor")
se.pages.SwitchToPage("schema-editor")
return nil
}
return event
})
flex.AddItem(title, 1, 0, false).
AddItem(info, 2, 0, false).
AddItem(colTable, 0, 1, true).
AddItem(btnFlex, 1, 0, false)
se.pages.AddPage("table-editor", flex, true, true)
}
// showNewTableDialog shows dialog to create a new table
func (se *SchemaEditor) showNewTableDialog(schemaIndex int) {
form := tview.NewForm()
tableName := ""
description := ""
form.AddInputField("Table Name", "", 40, nil, func(value string) {
tableName = value
})
form.AddInputField("Description", "", 40, nil, func(value string) {
description = value
})
form.AddButton("Save", func() {
if tableName == "" {
return
}
se.CreateTable(schemaIndex, tableName, description)
schema := se.db.Schemas[schemaIndex]
se.pages.RemovePage("new-table")
se.pages.RemovePage("schema-editor")
se.showSchemaEditor(schemaIndex, schema)
})
form.AddButton("Back", func() {
schema := se.db.Schemas[schemaIndex]
se.pages.RemovePage("new-table")
se.pages.RemovePage("schema-editor")
se.showSchemaEditor(schemaIndex, schema)
})
form.SetBorder(true).SetTitle(" New Table ").SetTitleAlign(tview.AlignLeft)
form.SetInputCapture(func(event *tcell.EventKey) *tcell.EventKey {
if event.Key() == tcell.KeyEscape {
se.showExitConfirmation("new-table", "schema-editor")
return nil
}
return event
})
se.pages.AddPage("new-table", form, true, true)
}
// showNewTableDialogFromList shows dialog to create a new table with schema selection
func (se *SchemaEditor) showNewTableDialogFromList() {
form := tview.NewForm()
tableName := ""
description := ""
selectedSchemaIdx := 0
// Create schema dropdown options
schemaOptions := make([]string, len(se.db.Schemas))
for i, schema := range se.db.Schemas {
schemaOptions[i] = schema.Name
}
form.AddInputField("Table Name", "", 40, nil, func(value string) {
tableName = value
})
form.AddDropDown("Schema", schemaOptions, 0, func(option string, optionIndex int) {
selectedSchemaIdx = optionIndex
})
form.AddInputField("Description", "", 40, nil, func(value string) {
description = value
})
form.AddButton("Save", func() {
if tableName == "" {
return
}
se.CreateTable(selectedSchemaIdx, tableName, description)
se.pages.RemovePage("new-table-from-list")
se.pages.RemovePage("tables")
se.showTableList()
})
form.AddButton("Back", func() {
se.pages.RemovePage("new-table-from-list")
se.pages.RemovePage("tables")
se.showTableList()
})
form.SetBorder(true).SetTitle(" New Table ").SetTitleAlign(tview.AlignLeft)
form.SetInputCapture(func(event *tcell.EventKey) *tcell.EventKey {
if event.Key() == tcell.KeyEscape {
se.showExitConfirmation("new-table-from-list", "tables")
return nil
}
return event
})
se.pages.AddPage("new-table-from-list", form, true, true)
}
// showEditTableDialog shows dialog to edit table properties
func (se *SchemaEditor) showEditTableDialog(schemaIndex, tableIndex int) {
table := se.db.Schemas[schemaIndex].Tables[tableIndex]
form := tview.NewForm()
// Local variables to collect changes
newName := table.Name
newDescription := table.Description
newGUID := table.GUID
form.AddInputField("Table Name", table.Name, 40, nil, func(value string) {
newName = value
})
form.AddTextArea("Description", table.Description, 40, 5, 0, func(value string) {
newDescription = value
})
form.AddInputField("GUID", table.GUID, 40, nil, func(value string) {
newGUID = value
})
form.AddButton("Save", func() {
// Apply changes using dataops
se.UpdateTable(schemaIndex, tableIndex, newName, newDescription)
se.db.Schemas[schemaIndex].Tables[tableIndex].GUID = newGUID
table := se.db.Schemas[schemaIndex].Tables[tableIndex]
se.pages.RemovePage("edit-table")
se.pages.RemovePage("table-editor")
se.showTableEditor(schemaIndex, tableIndex, table)
})
form.AddButton("Back", func() {
// Discard changes - don't apply them
table := se.db.Schemas[schemaIndex].Tables[tableIndex]
se.pages.RemovePage("edit-table")
se.pages.RemovePage("table-editor")
se.showTableEditor(schemaIndex, tableIndex, table)
})
form.SetBorder(true).SetTitle(" Edit Table ").SetTitleAlign(tview.AlignLeft)
form.SetInputCapture(func(event *tcell.EventKey) *tcell.EventKey {
if event.Key() == tcell.KeyEscape {
se.showExitConfirmation("edit-table", "table-editor")
return nil
}
return event
})
se.pages.AddPage("edit-table", form, true, true)
}

299
pkg/ui/ui_rules.md Normal file
View File

@@ -0,0 +1,299 @@
# UI Rules and Guidelines
## Layout Requirements
All layouts and forms must live in separate files, grouped by their domain or entity.
### Screen Layout Structure
All screens must follow this consistent layout:
1. **Title at the Top** (1 row, fixed height)
- Centered bold text: `[::b]Title Text`
- Use `tview.NewTextView()` with `SetTextAlign(tview.AlignCenter)`
- Enable dynamic colors: `SetDynamicColors(true)`
2. **Content in the Middle** (flexible height)
- Tables, lists, forms, or info displays
- Uses flex weight of 1 for dynamic sizing
3. **Action Buttons at the Bottom** (1 row, fixed height)
- Must be in a horizontal flex container
- Action buttons before Back button
- Back button is always last
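A minimal sketch of this skeleton, using the same tview calls as the screens in this package (widget names follow the naming conventions below):
```go
// Standard screen skeleton: title (fixed) -> content (flex) -> buttons (fixed).
flex := tview.NewFlex().SetDirection(tview.FlexRow)
title := tview.NewTextView().
	SetText("[::b]Screen Title").
	SetDynamicColors(true).
	SetTextAlign(tview.AlignCenter)
content := tview.NewTable().SetBorders(true).SetSelectable(true, false).SetFixed(1, 0)
btnFlex := tview.NewFlex() // action buttons first, Back button last
flex.AddItem(title, 1, 0, false).
	AddItem(content, 0, 1, true).
	AddItem(btnFlex, 1, 0, false)
```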
### Form Layout Structure
All forms must follow this button order:
1. **Save Button** (always first)
- Label: "Save"
- Primary action that commits changes
2. **Delete Button** (optional, only for edit forms)
- Label: "Delete"
- Only shown when editing existing items (not for new items)
- Must show confirmation dialog before deletion
3. **Back Button** (always last)
- Label: "Back"
- Returns to previous screen without saving
**Button Order Examples:**
- **New Item Forms:** Save, Back
- **Edit Item Forms:** Save, Delete, Back
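As a sketch, an edit form wires its buttons in exactly this order (handler bodies elided):
```go
form := tview.NewForm()
form.AddButton("Save", func() { /* commit changes via dataops, then refresh */ })
form.AddButton("Delete", func() { /* show a confirmation dialog first */ })
form.AddButton("Back", func() { /* discard changes, return to previous screen */ })
```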
## Tab Navigation
All screens must implement circular tab navigation:
1. **Tab Key** - Moves focus to the next focusable element
2. **Shift+Tab (BackTab)** - Moves focus to the previous focusable element
3. **At the End** - Tab cycles back to the first element
4. **At the Start** - Shift+Tab cycles back to the last element
**Navigation Flow Pattern:**
- Each widget must handle both Tab and BackTab
- First widget: BackTab → Last widget, Tab → Second widget
- Middle widgets: BackTab → Previous widget, Tab → Next widget
- Last widget: BackTab → Previous widget, Tab → First widget
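A sketch of the circular wiring for a screen with one table and two buttons; `table`, `btnFirst`, `btnLast`, and `app` are illustrative names, and middle widgets follow the same pattern:
```go
// First widget: BackTab wraps to the last widget, Tab moves forward.
table.SetInputCapture(func(event *tcell.EventKey) *tcell.EventKey {
	if event.Key() == tcell.KeyBacktab {
		app.SetFocus(btnLast) // wrap backward to the last widget
		return nil
	}
	if event.Key() == tcell.KeyTab {
		app.SetFocus(btnFirst)
		return nil
	}
	return event
})
// Last widget: Tab wraps to the first widget, BackTab moves backward.
btnLast.SetInputCapture(func(event *tcell.EventKey) *tcell.EventKey {
	if event.Key() == tcell.KeyBacktab {
		app.SetFocus(btnFirst)
		return nil
	}
	if event.Key() == tcell.KeyTab {
		app.SetFocus(table) // wrap forward to the first widget
		return nil
	}
	return event
})
```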
## Keyboard Shortcuts
### Standard Keys
- **ESC** - Cancel current operation or go back to previous screen
- **Tab** - Move focus forward (circular)
- **Shift+Tab** - Move focus backward (circular)
- **Enter** - Activate/select current item in tables and lists
### Letter Key Shortcuts
- **'n'** - New (create new item)
- **'b'** - Back (return to previous screen)
- **'e'** - Edit (edit current item)
- **'d'** - Delete (delete current item)
- **'c'** - Edit Column (in table editor)
- **'s'** - Manage Schemas (in main menu)
- **'t'** - Manage Tables (in main menu)
- **'q'** - Quit/Exit (in main menu)
## Consistency Requirements
1. **Layout Structure** - All screens: Title (top) → Content (middle) → Buttons (bottom)
2. **Title Format** - Bold (`[::b]`), centered, dynamic colors enabled
3. **Tables** - Fixed headers (row 0), borders enabled, selectable rows
4. **Buttons** - Include keyboard shortcuts in labels (e.g., "Back [b]")
5. **Forms** - Button order: Save, Delete (if edit), Back
6. **Destructive Actions** - Always show confirmation dialogs
7. **ESC Key** - All screens support ESC to go back
8. **Action Buttons** - Positioned before Back button, in logical order
9. **Data Refresh** - Always refresh the previous screen when returning from a form or dialog
## Widget Naming Conventions
- **Tables:** `schemaTable`, `tableTable`, `colTable`
- **Buttons:** Prefix with `btn` (e.g., `btnBack`, `btnDelete`, `btnNewSchema`)
- **Flex containers:** `btnFlex` for button containers, `flex` for main layout
- **Forms:** `form`
- **Lists:** `list`, `tableList`
- **Text views:** `title`, `info`
- Use camelCase for all variable names
## Page Naming Conventions
Use descriptive kebab-case names:
- **Main screens:** `main`, `schemas`, `tables`, `schema-editor`, `table-editor`, `column-editor`
- **Load/Save screens:** `load-database`, `save-database`
- **Creation dialogs:** `new-schema`, `new-table`, `new-column`, `new-table-from-list`
- **Edit dialogs:** `edit-schema`, `edit-table`
- **Confirmations:** `confirm-delete-schema`, `confirm-delete-table`, `confirm-delete-column`
- **Exit confirmations:** `exit-confirm`, `exit-editor-confirm`
- **Status dialogs:** `error-dialog`, `success-dialog`
## Dialog and Confirmation Rules
### Confirmation Dialogs
1. **Delete Confirmations** - Required for all destructive actions
- Show item name in confirmation text
- Buttons: "Cancel", "Delete"
- ESC key dismisses dialog
2. **Exit Confirmations** - Required when exiting forms with potential unsaved changes
- Text: "Exit without saving changes?"
- Buttons: "Cancel", "No, exit without saving"
- ESC key confirms exit
3. **Save Confirmations** - Optional, based on context
- Use for critical data changes
- Clear description of what will be saved
### Dialog Behavior
- All dialogs must capture ESC key for dismissal
- Modal dialogs overlay current screen
- Confirmation dialogs use `tview.NewModal()`
- Remove dialog page after action completes
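A sketch of a delete confirmation following these rules; the page name and item label are illustrative, and tview's `Modal` also invokes the done handler (with index -1) when ESC is pressed, which dismisses the dialog here:
```go
modal := tview.NewModal().
	SetText(fmt.Sprintf("Delete table %q?", table.Name)).
	AddButtons([]string{"Cancel", "Delete"}).
	SetDoneFunc(func(buttonIndex int, buttonLabel string) {
		if buttonLabel == "Delete" {
			se.DeleteTable(schemaIndex, tableIndex) // dataops call
		}
		se.pages.RemovePage("confirm-delete-table") // remove dialog after action
	})
se.pages.AddPage("confirm-delete-table", modal, true, true)
```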
## Data Refresh Rules
When returning from any form or dialog, the previous screen must be refreshed to show updated data. If the screen contains tables, their data must be updated:
1. **After Save** - Remove and recreate the previous screen to display updated data
2. **After Delete** - Remove and recreate the previous screen to display remaining data
3. **After Cancel/Back** - Remove and recreate the previous screen (data may have changed)
4. **Implementation Pattern** - Remove the current page, remove the previous page, then recreate the previous page with fresh data
**Why This Matters:**
- Ensures users see their changes immediately
- Prevents stale data from being displayed
- Maintains data consistency across the UI
- Avoids confusion from seeing outdated information
**Example Flow:**
```
User on Schema List → Opens Edit Schema Form → Saves Changes →
Returns to Schema List (refreshed with updated schema data)
```
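A sketch of the remove-and-recreate pattern from step 4, mirroring the Save handlers in schema_screens.go:
```go
form.AddButton("Save", func() {
	se.UpdateSchema(schemaIndex, newName, newOwner, newDescription) // commit via dataops
	se.pages.RemovePage("edit-schema")   // remove the dialog
	se.pages.RemovePage("schema-editor") // remove the stale screen
	se.showSchemaEditor(schemaIndex, se.db.Schemas[schemaIndex]) // rebuild with fresh data
})
```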
## Big Loading/Saving Operations
When loading large files or data sets, always show a load-completed or load-error dialog, and do the same when saving. This keeps the user informed about what happened.
When data is dirty, always prompt the user to save before exiting.
### Load/Save Screens
- **Load Screen** (`load-database`) - Shown when no source is specified via command line
- Format dropdown (dbml, dctx, drawdb, graphql, json, yaml, gorm, bun, drizzle, prisma, typeorm, pgsql)
- File path input (for file-based formats)
- Connection string input (for database formats like pgsql)
- Load button [l] - Loads the database
- Create New button [n] - Creates a new empty database
- Exit button [q] - Exits the application
- ESC key exits the application
- **Save Screen** (`save-database`) - Accessible from main menu with 'w' key
- Format dropdown (same as load screen)
- File path input
- Help text explaining format requirements
- Save button [s] - Saves the database
- Back button [b] - Returns to main menu
- ESC key returns to main menu
- Pre-populated with existing save configuration if available
### Status Dialogs
- **Error Dialog** (`error-dialog`) - Shows error messages with OK button and ESC to dismiss
- **Success Dialog** (`success-dialog`) - Shows success messages with OK button, ESC to dismiss, and optional callback on close
## Screen Organization
Organize UI code into these files:
### UI Files (Screens and Dialogs)
- **editor.go** - Core `SchemaEditor` struct, constructor, `Run()` method, helper functions
- **main_menu.go** - Main menu screen
- **load_save_screens.go** - Database load and save screens
- **database_screens.go** - Database edit form
- **schema_screens.go** - Schema list, schema editor, new/edit schema dialogs
- **table_screens.go** - Tables list, table editor, new/edit table dialogs
- **column_screens.go** - Column editor, new column dialog
- **domain_screens.go** - Domain list, domain editor, new/edit domain dialogs
- **dialogs.go** - Confirmation dialogs (exit, delete)
### Data Operations Files (Business Logic)
- **schema_dataops.go** - Schema CRUD operations (Create, Read, Update, Delete)
- **table_dataops.go** - Table CRUD operations
- **column_dataops.go** - Column CRUD operations
## Code Separation Rules
### UI vs Business Logic
1. **UI Files** - Handle only presentation and user interaction
- Display data in tables, lists, and forms
- Capture user input
- Navigate between screens
- Show/hide dialogs
- Call dataops methods for actual data changes
2. **Dataops Files** - Handle only business logic and data manipulation
- Create, read, update, delete operations
- Data validation
- Data structure manipulation
- Return created/updated objects or success/failure status
- No UI code or tview references
### Implementation Pattern
#### Creating New Items
**Bad (Direct Data Manipulation in UI):**
```go
form.AddButton("Save", func() {
schema := &models.Schema{Name: name, Description: desc, ...}
se.db.Schemas = append(se.db.Schemas, schema)
})
```
**Good (Using Dataops Methods):**
```go
form.AddButton("Save", func() {
se.CreateSchema(name, description)
})
```
#### Editing Existing Items
**Bad (Modifying Data in onChange Callbacks):**
```go
form.AddInputField("Name", column.Name, 40, nil, func(value string) {
column.Name = value // Changes immediately as user types!
})
form.AddButton("Save", func() {
// Data already changed, just refresh screen
})
```
**Good (Local Variables + Dataops on Save):**
```go
// Store original values
originalName := column.Name
newName := column.Name
form.AddInputField("Name", column.Name, 40, nil, func(value string) {
newName = value // Store in local variable
})
form.AddButton("Save", func() {
// Apply changes only when Save is clicked
se.UpdateColumn(schemaIndex, tableIndex, originalName, newName, ...)
// Then refresh screen
})
form.AddButton("Back", func() {
// Discard changes - don't apply local variables
// Just refresh screen
})
```
### Why This Matters
**Edit Forms Must Use Local Variables:**
1. **Deferred Changes** - Changes only apply when Save is clicked
2. **Cancellable** - Back button discards changes without saving
3. **Handles Renames** - Original name preserved to update map keys correctly
4. **User Expectations** - Save means "commit changes", Back means "cancel"
This separation ensures:
- Cleaner, more maintainable code
- Reusable business logic
- Easier testing
- Clear separation of concerns
- Proper change management (save vs cancel)

View File

@@ -5,6 +5,7 @@ import (
"strings"
"git.warky.dev/wdevs/relspecgo/pkg/models"
+"git.warky.dev/wdevs/relspecgo/pkg/writers"
)
// TemplateData represents the data passed to the template for code generation
@@ -111,13 +112,17 @@ func NewModelData(table *models.Table, schema string, typeMapper *TypeMapper) *M
tableName = schema + "." + table.Name
}
-// Generate model name: singularize and convert to PascalCase
+// Generate model name: Model + Schema + Table (all PascalCase)
singularTable := Singularize(table.Name)
-modelName := SnakeCaseToPascalCase(singularTable)
-// Add "Model" prefix if not already present
-if !hasModelPrefix(modelName) {
-modelName = "Model" + modelName
+tablePart := SnakeCaseToPascalCase(singularTable)
+// Include schema name in model name
+var modelName string
+if schema != "" {
+schemaPart := SnakeCaseToPascalCase(schema)
+modelName = "Model" + schemaPart + tablePart
+} else {
+modelName = "Model" + tablePart
}
model := &ModelData{
@@ -133,8 +138,10 @@ func NewModelData(table *models.Table, schema string, typeMapper *TypeMapper) *M
// Find primary key
for _, col := range table.Columns {
if col.IsPrimaryKey {
-model.PrimaryKeyField = SnakeCaseToPascalCase(col.Name)
-model.IDColumnName = col.Name
+// Sanitize column name to remove backticks
+safeName := writers.SanitizeStructTagValue(col.Name)
+model.PrimaryKeyField = SnakeCaseToPascalCase(safeName)
+model.IDColumnName = safeName
// Check if PK type is a SQL type (contains resolvespec_common or sql_types)
goType := typeMapper.SQLTypeToGoType(col.Type, col.NotNull)
model.PrimaryKeyIsSQL = strings.Contains(goType, "resolvespec_common") || strings.Contains(goType, "sql_types")
@@ -146,6 +153,8 @@ func NewModelData(table *models.Table, schema string, typeMapper *TypeMapper) *M
columns := sortColumns(table.Columns)
for _, col := range columns {
field := columnToField(col, table, typeMapper)
+// Check for name collision with generated methods and rename if needed
+field.Name = resolveFieldNameCollision(field.Name)
model.Fields = append(model.Fields, field)
}
@@ -154,10 +163,13 @@ func NewModelData(table *models.Table, schema string, typeMapper *TypeMapper) *M
// columnToField converts a models.Column to FieldData
func columnToField(col *models.Column, table *models.Table, typeMapper *TypeMapper) *FieldData {
-fieldName := SnakeCaseToPascalCase(col.Name)
+// Sanitize column name first to remove backticks before generating field name
+safeName := writers.SanitizeStructTagValue(col.Name)
+fieldName := SnakeCaseToPascalCase(safeName)
goType := typeMapper.SQLTypeToGoType(col.Type, col.NotNull)
bunTag := typeMapper.BuildBunTag(col, table)
-jsonTag := col.Name // Use column name for JSON tag
+// Use same sanitized name for JSON tag
+jsonTag := safeName
return &FieldData{
Name: fieldName,
@@ -184,9 +196,28 @@ func formatComment(description, comment string) string {
return comment
}
-// hasModelPrefix checks if a name already has "Model" prefix
-func hasModelPrefix(name string) bool {
-return len(name) >= 5 && name[:5] == "Model"
+// resolveFieldNameCollision checks if a field name conflicts with generated method names
+// and adds an underscore suffix if there's a collision
+func resolveFieldNameCollision(fieldName string) string {
+// List of method names that are generated by the template
+reservedNames := map[string]bool{
+"TableName": true,
+"TableNameOnly": true,
+"SchemaName": true,
+"GetID": true,
+"GetIDStr": true,
+"SetID": true,
+"UpdateID": true,
+"GetIDName": true,
+"GetPrefix": true,
+}
+// Check if field name conflicts with a reserved method name
+if reservedNames[fieldName] {
+return fieldName + "_"
+}
+return fieldName
}
// sortColumns sorts columns by sequence, then by name

View File

@@ -5,6 +5,7 @@ import (
"strings"
"git.warky.dev/wdevs/relspecgo/pkg/models"
+"git.warky.dev/wdevs/relspecgo/pkg/writers"
)
// TypeMapper handles type conversions between SQL and Go types for Bun
@@ -164,11 +165,14 @@ func (tm *TypeMapper) BuildBunTag(column *models.Column, table *models.Table) st
var parts []string
// Column name comes first (no prefix)
-parts = append(parts, column.Name)
+// Sanitize to remove backticks which would break struct tag syntax
+safeName := writers.SanitizeStructTagValue(column.Name)
+parts = append(parts, safeName)
// Add type if specified
if column.Type != "" {
-typeStr := column.Type
+// Sanitize type to remove backticks
+typeStr := writers.SanitizeStructTagValue(column.Type)
if column.Length > 0 {
typeStr = fmt.Sprintf("%s(%d)", typeStr, column.Length)
} else if column.Precision > 0 {
@@ -188,12 +192,17 @@ func (tm *TypeMapper) BuildBunTag(column *models.Column, table *models.Table) st
// Default value
if column.Default != nil {
-parts = append(parts, fmt.Sprintf("default:%v", column.Default))
+// Sanitize default value to remove backticks
+safeDefault := writers.SanitizeStructTagValue(fmt.Sprintf("%v", column.Default))
+parts = append(parts, fmt.Sprintf("default:%s", safeDefault))
}
// Nullable (Bun uses nullzero for nullable fields)
+// and notnull tag for explicitly non-nullable fields
if !column.NotNull && !column.IsPrimaryKey {
parts = append(parts, "nullzero")
+} else if column.NotNull && !column.IsPrimaryKey {
+parts = append(parts, "notnull")
}
// Check for indexes (unique indexes should be added to tag)
@@ -260,7 +269,7 @@ func (tm *TypeMapper) NeedsFmtImport(generateGetIDStr bool) bool {
// GetSQLTypesImport returns the import path for sql_types (ResolveSpec common)
func (tm *TypeMapper) GetSQLTypesImport() string {
-return "github.com/bitechdev/ResolveSpec/pkg/common"
+return "github.com/bitechdev/ResolveSpec/pkg/spectypes"
}
// GetBunImport returns the import path for Bun // GetBunImport returns the import path for Bun

View File

@@ -4,6 +4,7 @@ import (
"fmt"
"go/format"
"os"
+"os/exec"
"path/filepath"
"strings"
@@ -124,7 +125,16 @@ func (w *Writer) writeSingleFile(db *models.Database) error {
}
// Write output
-return w.writeOutput(formatted)
+if err := w.writeOutput(formatted); err != nil {
+return err
+}
+// Run go fmt on the output file
+if w.options.OutputPath != "" {
+w.runGoFmt(w.options.OutputPath)
+}
+return nil
}
// writeMultiFile writes each table to a separate file
@@ -207,13 +217,19 @@ func (w *Writer) writeMultiFile(db *models.Database) error {
}
-// Generate filename: sql_{schema}_{table}.go
-filename := fmt.Sprintf("sql_%s_%s.go", schema.Name, table.Name)
+// Sanitize schema and table names to remove quotes, comments, and invalid characters
+safeSchemaName := writers.SanitizeFilename(schema.Name)
+safeTableName := writers.SanitizeFilename(table.Name)
+filename := fmt.Sprintf("sql_%s_%s.go", safeSchemaName, safeTableName)
filepath := filepath.Join(w.options.OutputPath, filename)
// Write file
if err := os.WriteFile(filepath, []byte(formatted), 0644); err != nil {
return fmt.Errorf("failed to write file %s: %w", filename, err)
}
+// Run go fmt on the generated file
+w.runGoFmt(filepath)
}
}
@@ -222,6 +238,9 @@ func (w *Writer) writeMultiFile(db *models.Database) error {
// addRelationshipFields adds relationship fields to the model based on foreign keys
func (w *Writer) addRelationshipFields(modelData *ModelData, table *models.Table, schema *models.Schema, db *models.Database) {
+// Track used field names to detect duplicates
+usedFieldNames := make(map[string]int)
// For each foreign key in this table, add a belongs-to/has-one relationship
for _, constraint := range table.Constraints {
if constraint.Type != models.ForeignKeyConstraint {
@@ -235,8 +254,9 @@ func (w *Writer) addRelationshipFields(modelData *ModelData, table *models.Table
}
// Create relationship field (has-one in Bun, similar to belongs-to in GORM)
-refModelName := w.getModelName(constraint.ReferencedTable)
-fieldName := w.generateRelationshipFieldName(constraint.ReferencedTable)
+refModelName := w.getModelName(constraint.ReferencedSchema, constraint.ReferencedTable)
+fieldName := w.generateHasOneFieldName(constraint)
+fieldName = w.ensureUniqueFieldName(fieldName, usedFieldNames)
relationTag := w.typeMapper.BuildRelationshipTag(constraint, "has-one")
modelData.AddRelationshipField(&FieldData{
@@ -263,8 +283,9 @@ func (w *Writer) addRelationshipFields(modelData *ModelData, table *models.Table
// Check if this constraint references our table
if constraint.ReferencedTable == table.Name && constraint.ReferencedSchema == schema.Name {
// Add has-many relationship
-otherModelName := w.getModelName(otherTable.Name)
-fieldName := w.generateRelationshipFieldName(otherTable.Name) + "s" // Pluralize
+otherModelName := w.getModelName(otherSchema.Name, otherTable.Name)
+fieldName := w.generateHasManyFieldName(constraint, otherSchema.Name, otherTable.Name)
+fieldName = w.ensureUniqueFieldName(fieldName, usedFieldNames)
relationTag := w.typeMapper.BuildRelationshipTag(constraint, "has-many")
modelData.AddRelationshipField(&FieldData{
@@ -295,22 +316,77 @@ func (w *Writer) findTable(schemaName, tableName string, db *models.Database) *m
return nil
}
-// getModelName generates the model name from a table name
-func (w *Writer) getModelName(tableName string) string {
+// getModelName generates the model name from schema and table name
+func (w *Writer) getModelName(schemaName, tableName string) string {
singular := Singularize(tableName)
-modelName := SnakeCaseToPascalCase(singular)
-if !hasModelPrefix(modelName) {
-modelName = "Model" + modelName
+tablePart := SnakeCaseToPascalCase(singular)
+// Include schema name in model name
+var modelName string
+if schemaName != "" {
+schemaPart := SnakeCaseToPascalCase(schemaName)
+modelName = "Model" + schemaPart + tablePart
+} else {
+modelName = "Model" + tablePart
}
return modelName
}
-// generateRelationshipFieldName generates a field name for a relationship
-func (w *Writer) generateRelationshipFieldName(tableName string) string {
-// Use just the prefix (3 letters) for relationship fields
-return GeneratePrefix(tableName)
+// generateHasOneFieldName generates a field name for has-one relationships
+// Uses the foreign key column name for uniqueness
+func (w *Writer) generateHasOneFieldName(constraint *models.Constraint) string {
+// Use the foreign key column name to ensure uniqueness
+// If there are multiple columns, use the first one
+if len(constraint.Columns) > 0 {
+columnName := constraint.Columns[0]
+// Convert to PascalCase for proper Go field naming
+// e.g., "rid_filepointer_request" -> "RelRIDFilepointerRequest"
+return "Rel" + SnakeCaseToPascalCase(columnName)
+}
+// Fallback to table-based prefix if no columns defined
+return "Rel" + GeneratePrefix(constraint.ReferencedTable)
+}
+// generateHasManyFieldName generates a field name for has-many relationships
+// Uses the foreign key column name + source table name to avoid duplicates
+func (w *Writer) generateHasManyFieldName(constraint *models.Constraint, sourceSchemaName, sourceTableName string) string {
+// For has-many, we need to include the source table name to avoid duplicates
+// e.g., multiple tables referencing the same column on this table
+if len(constraint.Columns) > 0 {
+columnName := constraint.Columns[0]
+// Get the model name for the source table (pluralized)
+sourceModelName := w.getModelName(sourceSchemaName, sourceTableName)
+// Remove "Model" prefix if present
+sourceModelName = strings.TrimPrefix(sourceModelName, "Model")
+// Convert column to PascalCase and combine with source table
+// e.g., "rid_api_provider" + "Login" -> "RelRIDAPIProviderLogins"
+columnPart := SnakeCaseToPascalCase(columnName)
+return "Rel" + columnPart + Pluralize(sourceModelName)
+}
+// Fallback to table-based naming
+sourceModelName := w.getModelName(sourceSchemaName, sourceTableName)
+sourceModelName = strings.TrimPrefix(sourceModelName, "Model")
+return "Rel" + Pluralize(sourceModelName)
+}
+// ensureUniqueFieldName ensures a field name is unique by adding numeric suffixes if needed
+func (w *Writer) ensureUniqueFieldName(fieldName string, usedNames map[string]int) string {
+originalName := fieldName
+count := usedNames[originalName]
+if count > 0 {
+// Name is already used, add numeric suffix
+fieldName = fmt.Sprintf("%s%d", originalName, count+1)
+}
+// Increment the counter for this base name
+usedNames[originalName]++
+return fieldName
}
// getPackageName returns the package name from options or defaults to "models"
@@ -341,6 +417,15 @@ func (w *Writer) writeOutput(content string) error {
return nil
}
+// runGoFmt runs go fmt on the specified file
+func (w *Writer) runGoFmt(filepath string) {
+cmd := exec.Command("gofmt", "-w", filepath)
+if err := cmd.Run(); err != nil {
+// Don't fail the whole operation if gofmt fails, just warn
+fmt.Fprintf(os.Stderr, "Warning: failed to run gofmt on %s: %v\n", filepath, err)
+}
+}
// shouldUseMultiFile determines whether to use multi-file mode based on metadata or output path
func (w *Writer) shouldUseMultiFile() bool {
// Check if multi_file is explicitly set in metadata
@@ -386,6 +471,7 @@ func (w *Writer) createDatabaseRef(db *models.Database) *models.Database {
DatabaseVersion: db.DatabaseVersion,
SourceFormat: db.SourceFormat,
Schemas: nil, // Don't include schemas to avoid circular reference
+GUID: db.GUID,
}
}
@@ -402,5 +488,6 @@ func (w *Writer) createSchemaRef(schema *models.Schema, db *models.Database) *mo
Sequence: schema.Sequence,
RefDatabase: w.createDatabaseRef(db), // Include database ref
Tables: nil, // Don't include tables to avoid circular reference
+GUID: schema.GUID,
}
}

View File

@@ -66,7 +66,7 @@ func TestWriter_WriteTable(t *testing.T) {
// Verify key elements are present
expectations := []string{
"package models",
-"type ModelUser struct",
+"type ModelPublicUser struct",
"bun.BaseModel",
"table:public.users",
"alias:users",
@@ -78,9 +78,9 @@ func TestWriter_WriteTable(t *testing.T) {
"resolvespec_common.SqlTime",
"bun:\"id",
"bun:\"email",
-"func (m ModelUser) TableName() string",
+"func (m ModelPublicUser) TableName() string",
"return \"public.users\"",
-"func (m ModelUser) GetID() int64",
+"func (m ModelPublicUser) GetID() int64",
}
for _, expected := range expectations {
@@ -175,12 +175,378 @@ func TestWriter_WriteDatabase_MultiFile(t *testing.T) {
postsStr := string(postsContent)
// Verify relationship is present with Bun format
-if !strings.Contains(postsStr, "USE") {
-t.Errorf("Missing relationship field USE")
+// Should now be RelUserID (has-one) instead of USE
+if !strings.Contains(postsStr, "RelUserID") {
+t.Errorf("Missing relationship field RelUserID (new naming convention)")
}
if !strings.Contains(postsStr, "rel:has-one") {
t.Errorf("Missing Bun relationship tag: %s", postsStr)
}
// Check users file contains has-many relationship
usersContent, err := os.ReadFile(filepath.Join(tmpDir, "sql_public_users.go"))
if err != nil {
t.Fatalf("Failed to read users file: %v", err)
}
usersStr := string(usersContent)
// Should have RelUserIDPublicPosts (has-many) field - includes schema prefix
if !strings.Contains(usersStr, "RelUserIDPublicPosts") {
t.Errorf("Missing has-many relationship field RelUserIDPublicPosts")
}
}
func TestWriter_MultipleReferencesToSameTable(t *testing.T) {
// Test scenario: api_event table with multiple foreign keys to filepointer table
db := models.InitDatabase("testdb")
schema := models.InitSchema("org")
// Filepointer table
filepointer := models.InitTable("filepointer", "org")
filepointer.Columns["id_filepointer"] = &models.Column{
Name: "id_filepointer",
Type: "bigserial",
NotNull: true,
IsPrimaryKey: true,
}
schema.Tables = append(schema.Tables, filepointer)
// API event table with two foreign keys to filepointer
apiEvent := models.InitTable("api_event", "org")
apiEvent.Columns["id_api_event"] = &models.Column{
Name: "id_api_event",
Type: "bigserial",
NotNull: true,
IsPrimaryKey: true,
}
apiEvent.Columns["rid_filepointer_request"] = &models.Column{
Name: "rid_filepointer_request",
Type: "bigint",
NotNull: false,
}
apiEvent.Columns["rid_filepointer_response"] = &models.Column{
Name: "rid_filepointer_response",
Type: "bigint",
NotNull: false,
}
// Add constraints
apiEvent.Constraints["fk_request"] = &models.Constraint{
Name: "fk_request",
Type: models.ForeignKeyConstraint,
Columns: []string{"rid_filepointer_request"},
ReferencedTable: "filepointer",
ReferencedSchema: "org",
ReferencedColumns: []string{"id_filepointer"},
}
apiEvent.Constraints["fk_response"] = &models.Constraint{
Name: "fk_response",
Type: models.ForeignKeyConstraint,
Columns: []string{"rid_filepointer_response"},
ReferencedTable: "filepointer",
ReferencedSchema: "org",
ReferencedColumns: []string{"id_filepointer"},
}
schema.Tables = append(schema.Tables, apiEvent)
db.Schemas = append(db.Schemas, schema)
// Create writer
tmpDir := t.TempDir()
opts := &writers.WriterOptions{
PackageName: "models",
OutputPath: tmpDir,
Metadata: map[string]interface{}{
"multi_file": true,
},
}
writer := NewWriter(opts)
err := writer.WriteDatabase(db)
if err != nil {
t.Fatalf("WriteDatabase failed: %v", err)
}
// Read the api_event file
apiEventContent, err := os.ReadFile(filepath.Join(tmpDir, "sql_org_api_event.go"))
if err != nil {
t.Fatalf("Failed to read api_event file: %v", err)
}
contentStr := string(apiEventContent)
// Verify both relationships have unique names based on column names
expectations := []struct {
fieldName string
tag string
}{
{"RelRIDFilepointerRequest", "join:rid_filepointer_request=id_filepointer"},
{"RelRIDFilepointerResponse", "join:rid_filepointer_response=id_filepointer"},
}
for _, exp := range expectations {
if !strings.Contains(contentStr, exp.fieldName) {
t.Errorf("Missing relationship field: %s\nGenerated:\n%s", exp.fieldName, contentStr)
}
if !strings.Contains(contentStr, exp.tag) {
t.Errorf("Missing relationship tag: %s\nGenerated:\n%s", exp.tag, contentStr)
}
}
// Verify NO duplicate field names (old behavior would create duplicate "FIL" fields)
if strings.Contains(contentStr, "FIL *ModelFilepointer") {
t.Errorf("Found old prefix-based naming (FIL), should use column-based naming")
}
// Also verify has-many relationships on filepointer table
filepointerContent, err := os.ReadFile(filepath.Join(tmpDir, "sql_org_filepointer.go"))
if err != nil {
t.Fatalf("Failed to read filepointer file: %v", err)
}
filepointerStr := string(filepointerContent)
// Should have two different has-many relationships with unique names
hasManyExpectations := []string{
"RelRIDFilepointerRequestOrgAPIEvents", // Has many via rid_filepointer_request
"RelRIDFilepointerResponseOrgAPIEvents", // Has many via rid_filepointer_response
}
for _, exp := range hasManyExpectations {
if !strings.Contains(filepointerStr, exp) {
t.Errorf("Missing has-many relationship field: %s\nGenerated:\n%s", exp, filepointerStr)
}
}
}
func TestWriter_MultipleHasManyRelationships(t *testing.T) {
// Test scenario: api_provider table referenced by multiple tables via rid_api_provider
db := models.InitDatabase("testdb")
schema := models.InitSchema("org")
// Owner table
owner := models.InitTable("owner", "org")
owner.Columns["id_owner"] = &models.Column{
Name: "id_owner",
Type: "bigserial",
NotNull: true,
IsPrimaryKey: true,
}
schema.Tables = append(schema.Tables, owner)
// API Provider table
apiProvider := models.InitTable("api_provider", "org")
apiProvider.Columns["id_api_provider"] = &models.Column{
Name: "id_api_provider",
Type: "bigserial",
NotNull: true,
IsPrimaryKey: true,
}
apiProvider.Columns["rid_owner"] = &models.Column{
Name: "rid_owner",
Type: "bigint",
NotNull: true,
}
apiProvider.Constraints["fk_owner"] = &models.Constraint{
Name: "fk_owner",
Type: models.ForeignKeyConstraint,
Columns: []string{"rid_owner"},
ReferencedTable: "owner",
ReferencedSchema: "org",
ReferencedColumns: []string{"id_owner"},
}
schema.Tables = append(schema.Tables, apiProvider)
// Login table
login := models.InitTable("login", "org")
login.Columns["id_login"] = &models.Column{
Name: "id_login",
Type: "bigserial",
NotNull: true,
IsPrimaryKey: true,
}
login.Columns["rid_api_provider"] = &models.Column{
Name: "rid_api_provider",
Type: "bigint",
NotNull: true,
}
login.Constraints["fk_api_provider"] = &models.Constraint{
Name: "fk_api_provider",
Type: models.ForeignKeyConstraint,
Columns: []string{"rid_api_provider"},
ReferencedTable: "api_provider",
ReferencedSchema: "org",
ReferencedColumns: []string{"id_api_provider"},
}
schema.Tables = append(schema.Tables, login)
// Filepointer table
filepointer := models.InitTable("filepointer", "org")
filepointer.Columns["id_filepointer"] = &models.Column{
Name: "id_filepointer",
Type: "bigserial",
NotNull: true,
IsPrimaryKey: true,
}
filepointer.Columns["rid_api_provider"] = &models.Column{
Name: "rid_api_provider",
Type: "bigint",
NotNull: true,
}
filepointer.Constraints["fk_api_provider"] = &models.Constraint{
Name: "fk_api_provider",
Type: models.ForeignKeyConstraint,
Columns: []string{"rid_api_provider"},
ReferencedTable: "api_provider",
ReferencedSchema: "org",
ReferencedColumns: []string{"id_api_provider"},
}
schema.Tables = append(schema.Tables, filepointer)
// API Event table
apiEvent := models.InitTable("api_event", "org")
apiEvent.Columns["id_api_event"] = &models.Column{
Name: "id_api_event",
Type: "bigserial",
NotNull: true,
IsPrimaryKey: true,
}
apiEvent.Columns["rid_api_provider"] = &models.Column{
Name: "rid_api_provider",
Type: "bigint",
NotNull: true,
}
apiEvent.Constraints["fk_api_provider"] = &models.Constraint{
Name: "fk_api_provider",
Type: models.ForeignKeyConstraint,
Columns: []string{"rid_api_provider"},
ReferencedTable: "api_provider",
ReferencedSchema: "org",
ReferencedColumns: []string{"id_api_provider"},
}
schema.Tables = append(schema.Tables, apiEvent)
db.Schemas = append(db.Schemas, schema)
// Create writer
tmpDir := t.TempDir()
opts := &writers.WriterOptions{
PackageName: "models",
OutputPath: tmpDir,
Metadata: map[string]interface{}{
"multi_file": true,
},
}
writer := NewWriter(opts)
err := writer.WriteDatabase(db)
if err != nil {
t.Fatalf("WriteDatabase failed: %v", err)
}
// Read the api_provider file
apiProviderContent, err := os.ReadFile(filepath.Join(tmpDir, "sql_org_api_provider.go"))
if err != nil {
t.Fatalf("Failed to read api_provider file: %v", err)
}
contentStr := string(apiProviderContent)
// Verify all has-many relationships have unique names
hasManyExpectations := []string{
"RelRIDAPIProviderOrgLogins", // Has many via Login
"RelRIDAPIProviderOrgFilepointers", // Has many via Filepointer
"RelRIDAPIProviderOrgAPIEvents", // Has many via APIEvent
"RelRIDOwner", // Has one via rid_owner
}
for _, exp := range hasManyExpectations {
if !strings.Contains(contentStr, exp) {
t.Errorf("Missing relationship field: %s\nGenerated:\n%s", exp, contentStr)
}
}
// Verify NO duplicate field names
// Count occurrences of "RelRIDAPIProvider" fields - should have 3 unique ones
count := strings.Count(contentStr, "RelRIDAPIProvider")
if count != 3 {
t.Errorf("Expected 3 RelRIDAPIProvider* fields, found %d\nGenerated:\n%s", count, contentStr)
}
// Verify no duplicate declarations (would cause compilation error)
duplicatePattern := "RelRIDAPIProviders []*Model"
if strings.Contains(contentStr, duplicatePattern) {
t.Errorf("Found duplicate field declaration pattern, fields should be unique")
}
}
func TestWriter_FieldNameCollision(t *testing.T) {
// Test scenario: table with columns that would conflict with generated method names
table := models.InitTable("audit_table", "audit")
table.Columns["id_audit_table"] = &models.Column{
Name: "id_audit_table",
Type: "smallint",
NotNull: true,
IsPrimaryKey: true,
Sequence: 1,
}
table.Columns["table_name"] = &models.Column{
Name: "table_name",
Type: "varchar",
Length: 100,
NotNull: true,
Sequence: 2,
}
table.Columns["table_schema"] = &models.Column{
Name: "table_schema",
Type: "varchar",
Length: 100,
NotNull: true,
Sequence: 3,
}
// Create writer
tmpDir := t.TempDir()
opts := &writers.WriterOptions{
PackageName: "models",
OutputPath: filepath.Join(tmpDir, "test.go"),
}
writer := NewWriter(opts)
err := writer.WriteTable(table)
if err != nil {
t.Fatalf("WriteTable failed: %v", err)
}
// Read the generated file
content, err := os.ReadFile(opts.OutputPath)
if err != nil {
t.Fatalf("Failed to read generated file: %v", err)
}
generated := string(content)
// Verify that TableName field was renamed to TableName_ to avoid collision
if !strings.Contains(generated, "TableName_") {
t.Errorf("Expected field 'TableName_' (with underscore) but not found\nGenerated:\n%s", generated)
}
// Verify the struct tag still references the correct database column
if !strings.Contains(generated, `bun:"table_name,`) {
t.Errorf("Expected bun tag to reference 'table_name' column\nGenerated:\n%s", generated)
}
// Verify the TableName() method still exists and doesn't conflict
if !strings.Contains(generated, "func (m ModelAuditAuditTable) TableName() string") {
t.Errorf("TableName() method should still be generated\nGenerated:\n%s", generated)
}
// Verify NO field named just "TableName" (without underscore)
if strings.Contains(generated, "TableName resolvespec_common") || strings.Contains(generated, "TableName string") {
t.Errorf("Field 'TableName' without underscore should not exist (would conflict with method)\nGenerated:\n%s", generated)
}
}
func TestTypeMapper_SQLTypeToGoType_Bun(t *testing.T) {

View File

@@ -126,7 +126,15 @@ func (w *Writer) tableToDBML(t *models.Table) string {
attrs = append(attrs, "increment") attrs = append(attrs, "increment")
} }
if column.Default != nil { if column.Default != nil {
attrs = append(attrs, fmt.Sprintf("default: `%v`", column.Default)) // Check if default value contains backticks (DBML expressions like `now()`)
defaultStr := fmt.Sprintf("%v", column.Default)
if strings.HasPrefix(defaultStr, "`") && strings.HasSuffix(defaultStr, "`") {
// Already an expression with backticks, use as-is
attrs = append(attrs, fmt.Sprintf("default: %s", defaultStr))
} else {
// Regular value, wrap in single quotes
attrs = append(attrs, fmt.Sprintf("default: '%v'", column.Default))
}
} }
if len(attrs) > 0 { if len(attrs) > 0 {

View File

@@ -133,7 +133,11 @@ func (w *Writer) mapTableFields(table *models.Table) models.DCTXTable {
prefix = table.Name[:3]
}
- tableGuid := w.newGUID()
+ // Use GUID from model if available, otherwise generate a new one
+ tableGuid := table.GUID
+ if tableGuid == "" {
+ tableGuid = w.newGUID()
+ }
w.tableGuidMap[table.Name] = tableGuid
dctxTable := models.DCTXTable{
@@ -171,7 +175,11 @@ func (w *Writer) mapTableKeys(table *models.Table) []models.DCTXKey {
}
func (w *Writer) mapField(column *models.Column) models.DCTXField {
- guid := w.newGUID()
+ // Use GUID from model if available, otherwise generate a new one
+ guid := column.GUID
+ if guid == "" {
+ guid = w.newGUID()
+ }
fieldKey := fmt.Sprintf("%s.%s", column.Table, column.Name)
w.fieldGuidMap[fieldKey] = guid
@@ -209,7 +217,11 @@ func (w *Writer) mapDataType(dataType string) string {
}
func (w *Writer) mapKey(index *models.Index, table *models.Table) models.DCTXKey {
- guid := w.newGUID()
+ // Use GUID from model if available, otherwise generate a new one
+ guid := index.GUID
+ if guid == "" {
+ guid = w.newGUID()
+ }
keyKey := fmt.Sprintf("%s.%s", table.Name, index.Name)
w.keyGuidMap[keyKey] = guid
@@ -344,7 +356,7 @@ func (w *Writer) mapRelation(rel *models.Relationship, schema *models.Schema) mo
}
return models.DCTXRelation{
- Guid: w.newGUID(),
+ Guid: rel.GUID, // Use GUID from relationship model
PrimaryTable: w.tableGuidMap[rel.ToTable], // GUID of the 'to' table (e.g., users)
ForeignTable: w.tableGuidMap[rel.FromTable], // GUID of the 'from' table (e.g., posts)
PrimaryKey: primaryKeyGUID,

View File

@@ -127,6 +127,51 @@ func (w *Writer) databaseToDrawDB(d *models.Database) *DrawDBSchema {
}
}
// Create subject areas for domains
for domainIdx, domainModel := range d.Domains {
// Calculate bounds for all tables in this domain
minX, minY := 999999, 999999
maxX, maxY := 0, 0
domainTableCount := 0
for _, domainTable := range domainModel.Tables {
// Find the table in the schema to get its position
for _, t := range schema.Tables {
if t.Name == domainTable.TableName {
if t.X < minX {
minX = t.X
}
if t.Y < minY {
minY = t.Y
}
if t.X+colWidth > maxX {
maxX = t.X + colWidth
}
if t.Y+rowHeight > maxY {
maxY = t.Y + rowHeight
}
domainTableCount++
break
}
}
}
// Only create area if domain has tables in this schema
if domainTableCount > 0 {
area := &DrawDBArea{
ID: areaID,
Name: domainModel.Name,
Color: getColorForIndex(len(d.Schemas) + domainIdx), // Use different colors than schemas
X: minX - 20,
Y: minY - 20,
Width: maxX - minX + 40,
Height: maxY - minY + 40,
}
schema.SubjectAreas = append(schema.SubjectAreas, area)
areaID++
}
}
// Add relationships
for _, schemaModel := range d.Schemas {
for _, table := range schemaModel.Tables {

View File

@@ -196,7 +196,9 @@ func (w *Writer) writeTableFile(table *models.Table, schema *models.Schema, db *
}
// Generate filename: {tableName}.ts
- filename := filepath.Join(w.options.OutputPath, table.Name+".ts")
+ // Sanitize table name to remove quotes, comments, and invalid characters
+ safeTableName := writers.SanitizeFilename(table.Name)
+ filename := filepath.Join(w.options.OutputPath, safeTableName+".ts")
return os.WriteFile(filename, []byte(code), 0644)
}

View File

@@ -4,6 +4,7 @@ import (
"sort" "sort"
"git.warky.dev/wdevs/relspecgo/pkg/models" "git.warky.dev/wdevs/relspecgo/pkg/models"
"git.warky.dev/wdevs/relspecgo/pkg/writers"
) )
// TemplateData represents the data passed to the template for code generation // TemplateData represents the data passed to the template for code generation
@@ -24,6 +25,7 @@ type ModelData struct {
Fields []*FieldData
Config *MethodConfig
PrimaryKeyField string // Name of the primary key field
+ PrimaryKeyType string // Go type of the primary key field
IDColumnName string // Name of the ID column in database
Prefix string // 3-letter prefix
}
@@ -109,13 +111,17 @@ func NewModelData(table *models.Table, schema string, typeMapper *TypeMapper) *M
tableName = schema + "." + table.Name
}
- // Generate model name: singularize and convert to PascalCase
+ // Generate model name: Model + Schema + Table (all PascalCase)
singularTable := Singularize(table.Name)
- modelName := SnakeCaseToPascalCase(singularTable)
- // Add "Model" prefix if not already present
- if !hasModelPrefix(modelName) {
- modelName = "Model" + modelName
+ tablePart := SnakeCaseToPascalCase(singularTable)
+ // Include schema name in model name
+ var modelName string
+ if schema != "" {
+ schemaPart := SnakeCaseToPascalCase(schema)
+ modelName = "Model" + schemaPart + tablePart
+ } else {
+ modelName = "Model" + tablePart
}
model := &ModelData{
@@ -131,8 +137,11 @@ func NewModelData(table *models.Table, schema string, typeMapper *TypeMapper) *M
// Find primary key
for _, col := range table.Columns {
if col.IsPrimaryKey {
- model.PrimaryKeyField = SnakeCaseToPascalCase(col.Name)
- model.IDColumnName = col.Name
+ // Sanitize column name to remove backticks
+ safeName := writers.SanitizeStructTagValue(col.Name)
+ model.PrimaryKeyField = SnakeCaseToPascalCase(safeName)
+ model.PrimaryKeyType = typeMapper.SQLTypeToGoType(col.Type, col.NotNull)
+ model.IDColumnName = safeName
break
}
}
@@ -141,6 +150,8 @@ func NewModelData(table *models.Table, schema string, typeMapper *TypeMapper) *M
columns := sortColumns(table.Columns)
for _, col := range columns {
field := columnToField(col, table, typeMapper)
+ // Check for name collision with generated methods and rename if needed
+ field.Name = resolveFieldNameCollision(field.Name)
model.Fields = append(model.Fields, field)
}
@@ -149,10 +160,13 @@ func NewModelData(table *models.Table, schema string, typeMapper *TypeMapper) *M
// columnToField converts a models.Column to FieldData
func columnToField(col *models.Column, table *models.Table, typeMapper *TypeMapper) *FieldData {
- fieldName := SnakeCaseToPascalCase(col.Name)
+ // Sanitize column name first to remove backticks before generating field name
+ safeName := writers.SanitizeStructTagValue(col.Name)
+ fieldName := SnakeCaseToPascalCase(safeName)
goType := typeMapper.SQLTypeToGoType(col.Type, col.NotNull)
gormTag := typeMapper.BuildGormTag(col, table)
- jsonTag := col.Name // Use column name for JSON tag
+ // Use same sanitized name for JSON tag
+ jsonTag := safeName
return &FieldData{
Name: fieldName,
@@ -179,9 +193,28 @@ func formatComment(description, comment string) string {
return comment
}
- // hasModelPrefix checks if a name already has "Model" prefix
- func hasModelPrefix(name string) bool {
- return len(name) >= 5 && name[:5] == "Model"
+ // resolveFieldNameCollision checks if a field name conflicts with generated method names
+ // and adds an underscore suffix if there's a collision
+ func resolveFieldNameCollision(fieldName string) string {
+ // List of method names that are generated by the template
+ reservedNames := map[string]bool{
+ "TableName": true,
+ "TableNameOnly": true,
+ "SchemaName": true,
+ "GetID": true,
+ "GetIDStr": true,
+ "SetID": true,
+ "UpdateID": true,
+ "GetIDName": true,
+ "GetPrefix": true,
+ }
+ // Check if field name conflicts with a reserved method name
+ if reservedNames[fieldName] {
+ return fieldName + "_"
+ }
+ return fieldName
}
// sortColumns sorts columns by sequence, then by name

View File

@@ -62,7 +62,7 @@ func (m {{.Name}}) SetID(newid int64) {
{{if and .Config.GenerateUpdateID .PrimaryKeyField}}
// UpdateID updates the primary key value
func (m *{{.Name}}) UpdateID(newid int64) {
- m.{{.PrimaryKeyField}} = int32(newid)
+ m.{{.PrimaryKeyField}} = {{.PrimaryKeyType}}(newid)
}
{{end}}
{{if and .Config.GenerateGetIDName .IDColumnName}}

View File

@@ -5,6 +5,7 @@ import (
"strings" "strings"
"git.warky.dev/wdevs/relspecgo/pkg/models" "git.warky.dev/wdevs/relspecgo/pkg/models"
"git.warky.dev/wdevs/relspecgo/pkg/writers"
) )
// TypeMapper handles type conversions between SQL and Go types // TypeMapper handles type conversions between SQL and Go types
@@ -199,12 +200,15 @@ func (tm *TypeMapper) BuildGormTag(column *models.Column, table *models.Table) s
var parts []string
// Always include column name (lowercase as per user requirement)
- parts = append(parts, fmt.Sprintf("column:%s", column.Name))
+ // Sanitize to remove backticks which would break struct tag syntax
+ safeName := writers.SanitizeStructTagValue(column.Name)
+ parts = append(parts, fmt.Sprintf("column:%s", safeName))
// Add type if specified
if column.Type != "" {
// Include length, precision, scale if present
- typeStr := column.Type
+ // Sanitize type to remove backticks
+ typeStr := writers.SanitizeStructTagValue(column.Type)
if column.Length > 0 {
typeStr = fmt.Sprintf("%s(%d)", typeStr, column.Length)
} else if column.Precision > 0 {
@@ -234,7 +238,9 @@ func (tm *TypeMapper) BuildGormTag(column *models.Column, table *models.Table) s
// Default value
if column.Default != nil {
- parts = append(parts, fmt.Sprintf("default:%v", column.Default))
+ // Sanitize default value to remove backticks
+ safeDefault := writers.SanitizeStructTagValue(fmt.Sprintf("%v", column.Default))
+ parts = append(parts, fmt.Sprintf("default:%s", safeDefault))
}
// Check for unique constraint
@@ -331,5 +337,5 @@ func (tm *TypeMapper) NeedsFmtImport(generateGetIDStr bool) bool {
// GetSQLTypesImport returns the import path for sql_types
func (tm *TypeMapper) GetSQLTypesImport() string {
- return "github.com/bitechdev/ResolveSpec/pkg/common/sql_types"
+ return "github.com/bitechdev/ResolveSpec/pkg/spectypes"
}

View File

@@ -4,6 +4,7 @@ import (
"fmt" "fmt"
"go/format" "go/format"
"os" "os"
"os/exec"
"path/filepath" "path/filepath"
"strings" "strings"
@@ -121,7 +122,16 @@ func (w *Writer) writeSingleFile(db *models.Database) error {
}
// Write output
- return w.writeOutput(formatted)
+ if err := w.writeOutput(formatted); err != nil {
+ return err
+ }
+ // Run go fmt on the output file
+ if w.options.OutputPath != "" {
+ w.runGoFmt(w.options.OutputPath)
+ }
+ return nil
}
// writeMultiFile writes each table to a separate file
@@ -201,13 +211,19 @@ func (w *Writer) writeMultiFile(db *models.Database) error {
}
// Generate filename: sql_{schema}_{table}.go
- filename := fmt.Sprintf("sql_%s_%s.go", schema.Name, table.Name)
+ // Sanitize schema and table names to remove quotes, comments, and invalid characters
+ safeSchemaName := writers.SanitizeFilename(schema.Name)
+ safeTableName := writers.SanitizeFilename(table.Name)
+ filename := fmt.Sprintf("sql_%s_%s.go", safeSchemaName, safeTableName)
filepath := filepath.Join(w.options.OutputPath, filename)
// Write file
if err := os.WriteFile(filepath, []byte(formatted), 0644); err != nil {
return fmt.Errorf("failed to write file %s: %w", filename, err)
}
+ // Run go fmt on the generated file
+ w.runGoFmt(filepath)
}
}
@@ -216,6 +232,9 @@ func (w *Writer) writeMultiFile(db *models.Database) error {
// addRelationshipFields adds relationship fields to the model based on foreign keys
func (w *Writer) addRelationshipFields(modelData *ModelData, table *models.Table, schema *models.Schema, db *models.Database) {
+ // Track used field names to detect duplicates
+ usedFieldNames := make(map[string]int)
// For each foreign key in this table, add a belongs-to relationship
for _, constraint := range table.Constraints {
if constraint.Type != models.ForeignKeyConstraint {
@@ -229,8 +248,9 @@ func (w *Writer) addRelationshipFields(modelData *ModelData, table *models.Table
}
// Create relationship field (belongs-to)
- refModelName := w.getModelName(constraint.ReferencedTable)
- fieldName := w.generateRelationshipFieldName(constraint.ReferencedTable)
+ refModelName := w.getModelName(constraint.ReferencedSchema, constraint.ReferencedTable)
+ fieldName := w.generateBelongsToFieldName(constraint)
+ fieldName = w.ensureUniqueFieldName(fieldName, usedFieldNames)
relationTag := w.typeMapper.BuildRelationshipTag(constraint, false)
modelData.AddRelationshipField(&FieldData{
@@ -257,8 +277,9 @@ func (w *Writer) addRelationshipFields(modelData *ModelData, table *models.Table
// Check if this constraint references our table
if constraint.ReferencedTable == table.Name && constraint.ReferencedSchema == schema.Name {
// Add has-many relationship
- otherModelName := w.getModelName(otherTable.Name)
- fieldName := w.generateRelationshipFieldName(otherTable.Name) + "s" // Pluralize
+ otherModelName := w.getModelName(otherSchema.Name, otherTable.Name)
+ fieldName := w.generateHasManyFieldName(constraint, otherSchema.Name, otherTable.Name)
+ fieldName = w.ensureUniqueFieldName(fieldName, usedFieldNames)
relationTag := w.typeMapper.BuildRelationshipTag(constraint, true)
modelData.AddRelationshipField(&FieldData{
@@ -289,22 +310,77 @@ func (w *Writer) findTable(schemaName, tableName string, db *models.Database) *m
return nil
}
- // getModelName generates the model name from a table name
- func (w *Writer) getModelName(tableName string) string {
+ // getModelName generates the model name from schema and table name
+ func (w *Writer) getModelName(schemaName, tableName string) string {
singular := Singularize(tableName)
- modelName := SnakeCaseToPascalCase(singular)
- if !hasModelPrefix(modelName) {
- modelName = "Model" + modelName
+ tablePart := SnakeCaseToPascalCase(singular)
+ // Include schema name in model name
+ var modelName string
+ if schemaName != "" {
+ schemaPart := SnakeCaseToPascalCase(schemaName)
+ modelName = "Model" + schemaPart + tablePart
+ } else {
+ modelName = "Model" + tablePart
}
return modelName
}
- // generateRelationshipFieldName generates a field name for a relationship
- func (w *Writer) generateRelationshipFieldName(tableName string) string {
- // Use just the prefix (3 letters) for relationship fields
- return GeneratePrefix(tableName)
+ // generateBelongsToFieldName generates a field name for belongs-to relationships
+ // Uses the foreign key column name for uniqueness
+ func (w *Writer) generateBelongsToFieldName(constraint *models.Constraint) string {
+ // Use the foreign key column name to ensure uniqueness
// If there are multiple columns, use the first one
if len(constraint.Columns) > 0 {
columnName := constraint.Columns[0]
// Convert to PascalCase for proper Go field naming
// e.g., "rid_filepointer_request" -> "RelRIDFilepointerRequest"
return "Rel" + SnakeCaseToPascalCase(columnName)
}
// Fallback to table-based prefix if no columns defined
return "Rel" + GeneratePrefix(constraint.ReferencedTable)
}
// generateHasManyFieldName generates a field name for has-many relationships
// Uses the foreign key column name + source table name to avoid duplicates
func (w *Writer) generateHasManyFieldName(constraint *models.Constraint, sourceSchemaName, sourceTableName string) string {
// For has-many, we need to include the source table name to avoid duplicates
// e.g., multiple tables referencing the same column on this table
if len(constraint.Columns) > 0 {
columnName := constraint.Columns[0]
// Get the model name for the source table (pluralized)
sourceModelName := w.getModelName(sourceSchemaName, sourceTableName)
// Remove "Model" prefix if present
sourceModelName = strings.TrimPrefix(sourceModelName, "Model")
// Convert column to PascalCase and combine with source table
// e.g., "rid_api_provider" + "Login" -> "RelRIDAPIProviderLogins"
columnPart := SnakeCaseToPascalCase(columnName)
return "Rel" + columnPart + Pluralize(sourceModelName)
}
// Fallback to table-based naming
sourceModelName := w.getModelName(sourceSchemaName, sourceTableName)
sourceModelName = strings.TrimPrefix(sourceModelName, "Model")
return "Rel" + Pluralize(sourceModelName)
}
// ensureUniqueFieldName ensures a field name is unique by adding numeric suffixes if needed
func (w *Writer) ensureUniqueFieldName(fieldName string, usedNames map[string]int) string {
originalName := fieldName
count := usedNames[originalName]
if count > 0 {
// Name is already used, add numeric suffix
fieldName = fmt.Sprintf("%s%d", originalName, count+1)
}
// Increment the counter for this base name
usedNames[originalName]++
return fieldName
}
// getPackageName returns the package name from options or defaults to "models"
@@ -335,6 +411,15 @@ func (w *Writer) writeOutput(content string) error {
return nil
}
// runGoFmt runs go fmt on the specified file
func (w *Writer) runGoFmt(filepath string) {
cmd := exec.Command("gofmt", "-w", filepath)
if err := cmd.Run(); err != nil {
// Don't fail the whole operation if gofmt fails, just warn
fmt.Fprintf(os.Stderr, "Warning: failed to run gofmt on %s: %v\n", filepath, err)
}
}
// shouldUseMultiFile determines whether to use multi-file mode based on metadata or output path
func (w *Writer) shouldUseMultiFile() bool {
// Check if multi_file is explicitly set in metadata
@@ -380,6 +465,7 @@ func (w *Writer) createDatabaseRef(db *models.Database) *models.Database {
DatabaseVersion: db.DatabaseVersion,
SourceFormat: db.SourceFormat,
Schemas: nil, // Don't include schemas to avoid circular reference
+ GUID: db.GUID,
}
}
@@ -396,5 +482,6 @@ func (w *Writer) createSchemaRef(schema *models.Schema, db *models.Database) *mo
Sequence: schema.Sequence,
RefDatabase: w.createDatabaseRef(db), // Include database ref
Tables: nil, // Don't include tables to avoid circular reference
+ GUID: schema.GUID,
}
}

View File

@@ -66,7 +66,7 @@ func TestWriter_WriteTable(t *testing.T) {
// Verify key elements are present
expectations := []string{
"package models",
- "type ModelUser struct",
+ "type ModelPublicUser struct",
"ID",
"int64",
"Email",
@@ -75,9 +75,9 @@ func TestWriter_WriteTable(t *testing.T) {
"time.Time", "time.Time",
"gorm:\"column:id", "gorm:\"column:id",
"gorm:\"column:email", "gorm:\"column:email",
"func (m ModelUser) TableName() string", "func (m ModelPublicUser) TableName() string",
"return \"public.users\"", "return \"public.users\"",
"func (m ModelUser) GetID() int64", "func (m ModelPublicUser) GetID() int64",
} }
for _, expected := range expectations { for _, expected := range expectations {
@@ -164,9 +164,437 @@ func TestWriter_WriteDatabase_MultiFile(t *testing.T) {
t.Fatalf("Failed to read posts file: %v", err) t.Fatalf("Failed to read posts file: %v", err)
} }
if !strings.Contains(string(postsContent), "USE *ModelUser") { postsStr := string(postsContent)
// Relationship field should be present
t.Logf("Posts content:\n%s", string(postsContent)) // Verify relationship is present with new naming convention
// Should now be RelUserID (belongs-to) instead of USE
if !strings.Contains(postsStr, "RelUserID") {
t.Errorf("Missing relationship field RelUserID (new naming convention)")
}
// Check users file contains has-many relationship
usersContent, err := os.ReadFile(filepath.Join(tmpDir, "sql_public_users.go"))
if err != nil {
t.Fatalf("Failed to read users file: %v", err)
}
usersStr := string(usersContent)
// Should have RelUserIDPublicPosts (has-many) field - includes schema prefix
if !strings.Contains(usersStr, "RelUserIDPublicPosts") {
t.Errorf("Missing has-many relationship field RelUserIDPublicPosts")
}
}
func TestWriter_MultipleReferencesToSameTable(t *testing.T) {
// Test scenario: api_event table with multiple foreign keys to filepointer table
db := models.InitDatabase("testdb")
schema := models.InitSchema("org")
// Filepointer table
filepointer := models.InitTable("filepointer", "org")
filepointer.Columns["id_filepointer"] = &models.Column{
Name: "id_filepointer",
Type: "bigserial",
NotNull: true,
IsPrimaryKey: true,
}
schema.Tables = append(schema.Tables, filepointer)
// API event table with two foreign keys to filepointer
apiEvent := models.InitTable("api_event", "org")
apiEvent.Columns["id_api_event"] = &models.Column{
Name: "id_api_event",
Type: "bigserial",
NotNull: true,
IsPrimaryKey: true,
}
apiEvent.Columns["rid_filepointer_request"] = &models.Column{
Name: "rid_filepointer_request",
Type: "bigint",
NotNull: false,
}
apiEvent.Columns["rid_filepointer_response"] = &models.Column{
Name: "rid_filepointer_response",
Type: "bigint",
NotNull: false,
}
// Add constraints
apiEvent.Constraints["fk_request"] = &models.Constraint{
Name: "fk_request",
Type: models.ForeignKeyConstraint,
Columns: []string{"rid_filepointer_request"},
ReferencedTable: "filepointer",
ReferencedSchema: "org",
ReferencedColumns: []string{"id_filepointer"},
}
apiEvent.Constraints["fk_response"] = &models.Constraint{
Name: "fk_response",
Type: models.ForeignKeyConstraint,
Columns: []string{"rid_filepointer_response"},
ReferencedTable: "filepointer",
ReferencedSchema: "org",
ReferencedColumns: []string{"id_filepointer"},
}
schema.Tables = append(schema.Tables, apiEvent)
db.Schemas = append(db.Schemas, schema)
// Create writer
tmpDir := t.TempDir()
opts := &writers.WriterOptions{
PackageName: "models",
OutputPath: tmpDir,
Metadata: map[string]interface{}{
"multi_file": true,
},
}
writer := NewWriter(opts)
err := writer.WriteDatabase(db)
if err != nil {
t.Fatalf("WriteDatabase failed: %v", err)
}
// Read the api_event file
apiEventContent, err := os.ReadFile(filepath.Join(tmpDir, "sql_org_api_event.go"))
if err != nil {
t.Fatalf("Failed to read api_event file: %v", err)
}
contentStr := string(apiEventContent)
// Verify both relationships have unique names based on column names
expectations := []struct {
fieldName string
tag string
}{
{"RelRIDFilepointerRequest", "foreignKey:RIDFilepointerRequest"},
{"RelRIDFilepointerResponse", "foreignKey:RIDFilepointerResponse"},
}
for _, exp := range expectations {
if !strings.Contains(contentStr, exp.fieldName) {
t.Errorf("Missing relationship field: %s\nGenerated:\n%s", exp.fieldName, contentStr)
}
if !strings.Contains(contentStr, exp.tag) {
t.Errorf("Missing relationship tag: %s\nGenerated:\n%s", exp.tag, contentStr)
}
}
// Verify NO duplicate field names (old behavior would create duplicate "FIL" fields)
if strings.Contains(contentStr, "FIL *ModelFilepointer") {
t.Errorf("Found old prefix-based naming (FIL), should use column-based naming")
}
// Also verify has-many relationships on filepointer table
filepointerContent, err := os.ReadFile(filepath.Join(tmpDir, "sql_org_filepointer.go"))
if err != nil {
t.Fatalf("Failed to read filepointer file: %v", err)
}
filepointerStr := string(filepointerContent)
// Should have two different has-many relationships with unique names
hasManyExpectations := []string{
"RelRIDFilepointerRequestOrgAPIEvents", // Has many via rid_filepointer_request
"RelRIDFilepointerResponseOrgAPIEvents", // Has many via rid_filepointer_response
}
for _, exp := range hasManyExpectations {
if !strings.Contains(filepointerStr, exp) {
t.Errorf("Missing has-many relationship field: %s\nGenerated:\n%s", exp, filepointerStr)
}
}
}
func TestWriter_MultipleHasManyRelationships(t *testing.T) {
// Test scenario: api_provider table referenced by multiple tables via rid_api_provider
db := models.InitDatabase("testdb")
schema := models.InitSchema("org")
// Owner table
owner := models.InitTable("owner", "org")
owner.Columns["id_owner"] = &models.Column{
Name: "id_owner",
Type: "bigserial",
NotNull: true,
IsPrimaryKey: true,
}
schema.Tables = append(schema.Tables, owner)
// API Provider table
apiProvider := models.InitTable("api_provider", "org")
apiProvider.Columns["id_api_provider"] = &models.Column{
Name: "id_api_provider",
Type: "bigserial",
NotNull: true,
IsPrimaryKey: true,
}
apiProvider.Columns["rid_owner"] = &models.Column{
Name: "rid_owner",
Type: "bigint",
NotNull: true,
}
apiProvider.Constraints["fk_owner"] = &models.Constraint{
Name: "fk_owner",
Type: models.ForeignKeyConstraint,
Columns: []string{"rid_owner"},
ReferencedTable: "owner",
ReferencedSchema: "org",
ReferencedColumns: []string{"id_owner"},
}
schema.Tables = append(schema.Tables, apiProvider)
// Login table
login := models.InitTable("login", "org")
login.Columns["id_login"] = &models.Column{
Name: "id_login",
Type: "bigserial",
NotNull: true,
IsPrimaryKey: true,
}
login.Columns["rid_api_provider"] = &models.Column{
Name: "rid_api_provider",
Type: "bigint",
NotNull: true,
}
login.Constraints["fk_api_provider"] = &models.Constraint{
Name: "fk_api_provider",
Type: models.ForeignKeyConstraint,
Columns: []string{"rid_api_provider"},
ReferencedTable: "api_provider",
ReferencedSchema: "org",
ReferencedColumns: []string{"id_api_provider"},
}
schema.Tables = append(schema.Tables, login)
// Filepointer table
filepointer := models.InitTable("filepointer", "org")
filepointer.Columns["id_filepointer"] = &models.Column{
Name: "id_filepointer",
Type: "bigserial",
NotNull: true,
IsPrimaryKey: true,
}
filepointer.Columns["rid_api_provider"] = &models.Column{
Name: "rid_api_provider",
Type: "bigint",
NotNull: true,
}
filepointer.Constraints["fk_api_provider"] = &models.Constraint{
Name: "fk_api_provider",
Type: models.ForeignKeyConstraint,
Columns: []string{"rid_api_provider"},
ReferencedTable: "api_provider",
ReferencedSchema: "org",
ReferencedColumns: []string{"id_api_provider"},
}
schema.Tables = append(schema.Tables, filepointer)
// API Event table
apiEvent := models.InitTable("api_event", "org")
apiEvent.Columns["id_api_event"] = &models.Column{
Name: "id_api_event",
Type: "bigserial",
NotNull: true,
IsPrimaryKey: true,
}
apiEvent.Columns["rid_api_provider"] = &models.Column{
Name: "rid_api_provider",
Type: "bigint",
NotNull: true,
}
apiEvent.Constraints["fk_api_provider"] = &models.Constraint{
Name: "fk_api_provider",
Type: models.ForeignKeyConstraint,
Columns: []string{"rid_api_provider"},
ReferencedTable: "api_provider",
ReferencedSchema: "org",
ReferencedColumns: []string{"id_api_provider"},
}
schema.Tables = append(schema.Tables, apiEvent)
db.Schemas = append(db.Schemas, schema)
// Create writer
tmpDir := t.TempDir()
opts := &writers.WriterOptions{
PackageName: "models",
OutputPath: tmpDir,
Metadata: map[string]interface{}{
"multi_file": true,
},
}
writer := NewWriter(opts)
err := writer.WriteDatabase(db)
if err != nil {
t.Fatalf("WriteDatabase failed: %v", err)
}
// Read the api_provider file
apiProviderContent, err := os.ReadFile(filepath.Join(tmpDir, "sql_org_api_provider.go"))
if err != nil {
t.Fatalf("Failed to read api_provider file: %v", err)
}
contentStr := string(apiProviderContent)
// Verify all has-many relationships have unique names
hasManyExpectations := []string{
"RelRIDAPIProviderOrgLogins", // Has many via Login
"RelRIDAPIProviderOrgFilepointers", // Has many via Filepointer
"RelRIDAPIProviderOrgAPIEvents", // Has many via APIEvent
"RelRIDOwner", // Belongs to via rid_owner
}
for _, exp := range hasManyExpectations {
if !strings.Contains(contentStr, exp) {
t.Errorf("Missing relationship field: %s\nGenerated:\n%s", exp, contentStr)
}
}
// Verify NO duplicate field names
// Count occurrences of "RelRIDAPIProvider" fields - should have 3 unique ones
count := strings.Count(contentStr, "RelRIDAPIProvider")
if count != 3 {
t.Errorf("Expected 3 RelRIDAPIProvider* fields, found %d\nGenerated:\n%s", count, contentStr)
}
// Verify no duplicate declarations (would cause compilation error)
duplicatePattern := "RelRIDAPIProviders []*Model"
if strings.Contains(contentStr, duplicatePattern) {
t.Errorf("Found duplicate field declaration pattern, fields should be unique")
}
}
func TestWriter_FieldNameCollision(t *testing.T) {
// Test scenario: table with columns that would conflict with generated method names
table := models.InitTable("audit_table", "audit")
table.Columns["id_audit_table"] = &models.Column{
Name: "id_audit_table",
Type: "smallint",
NotNull: true,
IsPrimaryKey: true,
Sequence: 1,
}
table.Columns["table_name"] = &models.Column{
Name: "table_name",
Type: "varchar",
Length: 100,
NotNull: true,
Sequence: 2,
}
table.Columns["table_schema"] = &models.Column{
Name: "table_schema",
Type: "varchar",
Length: 100,
NotNull: true,
Sequence: 3,
}
// Create writer
tmpDir := t.TempDir()
opts := &writers.WriterOptions{
PackageName: "models",
OutputPath: filepath.Join(tmpDir, "test.go"),
}
writer := NewWriter(opts)
err := writer.WriteTable(table)
if err != nil {
t.Fatalf("WriteTable failed: %v", err)
}
// Read the generated file
content, err := os.ReadFile(opts.OutputPath)
if err != nil {
t.Fatalf("Failed to read generated file: %v", err)
}
generated := string(content)
// Verify that TableName field was renamed to TableName_ to avoid collision
if !strings.Contains(generated, "TableName_") {
t.Errorf("Expected field 'TableName_' (with underscore) but not found\nGenerated:\n%s", generated)
}
// Verify the struct tag still references the correct database column
if !strings.Contains(generated, `gorm:"column:table_name;`) {
t.Errorf("Expected gorm tag to reference 'table_name' column\nGenerated:\n%s", generated)
}
// Verify the TableName() method still exists and doesn't conflict
if !strings.Contains(generated, "func (m ModelAuditAuditTable) TableName() string") {
t.Errorf("TableName() method should still be generated\nGenerated:\n%s", generated)
}
// Verify NO field named just "TableName" (without underscore)
if strings.Contains(generated, "TableName sql_types") || strings.Contains(generated, "TableName string") {
t.Errorf("Field 'TableName' without underscore should not exist (would conflict with method)\nGenerated:\n%s", generated)
}
}
func TestWriter_UpdateIDTypeSafety(t *testing.T) {
// Test scenario: tables with different primary key types
tests := []struct {
name string
pkType string
expectedPK string
castType string
}{
{"int32_pk", "int", "int32", "int32(newid)"},
{"int16_pk", "smallint", "int16", "int16(newid)"},
{"int64_pk", "bigint", "int64", "int64(newid)"},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
table := models.InitTable("test_table", "public")
table.Columns["id"] = &models.Column{
Name: "id",
Type: tt.pkType,
NotNull: true,
IsPrimaryKey: true,
}
tmpDir := t.TempDir()
opts := &writers.WriterOptions{
PackageName: "models",
OutputPath: filepath.Join(tmpDir, "test.go"),
}
writer := NewWriter(opts)
err := writer.WriteTable(table)
if err != nil {
t.Fatalf("WriteTable failed: %v", err)
}
content, err := os.ReadFile(opts.OutputPath)
if err != nil {
t.Fatalf("Failed to read generated file: %v", err)
}
generated := string(content)
// Verify UpdateID method has correct type cast
if !strings.Contains(generated, tt.castType) {
t.Errorf("Expected UpdateID to cast to %s\nGenerated:\n%s", tt.castType, generated)
}
// Verify no invalid int32(newid) for non-int32 types
if tt.expectedPK != "int32" && strings.Contains(generated, "int32(newid)") {
t.Errorf("UpdateID should not cast to int32 for %s type\nGenerated:\n%s", tt.pkType, generated)
}
// Verify UpdateID parameter is int64 (for consistency)
if !strings.Contains(generated, "UpdateID(newid int64)") {
t.Errorf("UpdateID should accept int64 parameter\nGenerated:\n%s", generated)
}
})
}
}

View File

@@ -0,0 +1,217 @@
# PostgreSQL Naming Conventions
Standardized naming rules for all database objects in RelSpec PostgreSQL output.
## Quick Reference
| Object Type | Prefix | Format | Example |
| ----------------- | ----------- | ---------------------------------- | ------------------------ |
| Primary Key | `pk_` | `pk_<schema>_<table>` | `pk_public_users` |
| Foreign Key | `fk_` | `fk_<table>_<referenced_table>` | `fk_posts_users` |
| Unique Constraint | `uk_` | `uk_<table>_<column>` | `uk_users_email` |
| Unique Index | `uidx_` | `uidx_<table>_<column>` | `uidx_users_email` |
| Regular Index | `idx_` | `idx_<table>_<column>` | `idx_posts_user_id` |
| Check Constraint | `chk_` | `chk_<table>_<constraint_purpose>` | `chk_users_age_positive` |
| Sequence | `identity_` | `identity_<table>_<column>` | `identity_users_id` |
| Trigger | `t_` | `t_<purpose>_<table>` | `t_audit_users` |
| Trigger Function | `tf_` | `tf_<purpose>_<table>` | `tf_audit_users` |
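Taken together, the conventions compose naturally in DDL. The following is an illustrative sketch only — the `users` and `posts` tables here are hypothetical, not generated RelSpec output:

```sql
-- Hypothetical DDL exercising the prefixes above
CREATE TABLE public.users (
    id    bigint GENERATED ALWAYS AS IDENTITY,
    email varchar(255) NOT NULL,
    age   integer,
    CONSTRAINT pk_public_users PRIMARY KEY (id),
    CONSTRAINT uk_users_email UNIQUE (email),
    CONSTRAINT chk_users_age_positive CHECK (age > 0)
);
CREATE TABLE public.posts (
    id      bigint GENERATED ALWAYS AS IDENTITY,
    user_id bigint NOT NULL,
    CONSTRAINT pk_public_posts PRIMARY KEY (id),
    CONSTRAINT fk_posts_users FOREIGN KEY (user_id) REFERENCES public.users (id)
);
CREATE INDEX idx_posts_user_id ON public.posts (user_id);
```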
## Naming Rules by Object Type
### Primary Keys
**Pattern:** `pk_<schema>_<table>`
- Include schema name to avoid collisions across schemas
- Use lowercase, snake_case format
- Examples:
- `pk_public_users`
- `pk_audit_audit_log`
- `pk_staging_temp_data`
### Foreign Keys
**Pattern:** `fk_<table>_<referenced_table>`
- Reference the table containing the FK followed by the referenced table
- Use lowercase, snake_case format
- Do NOT include column names in standard FK constraints
- Examples:
- `fk_posts_users` (posts.user_id → users.id)
- `fk_comments_posts` (comments.post_id → posts.id)
- `fk_order_items_orders` (order_items.order_id → orders.id)
### Unique Constraints
**Pattern:** `uk_<table>_<column>`
- Use `uk_` prefix strictly for database constraints (CONSTRAINT type)
- Include column name for clarity
- Examples:
- `uk_users_email`
- `uk_users_username`
- `uk_products_sku`
### Unique Indexes
**Pattern:** `uidx_<table>_<column>`
- Use `uidx_` prefix strictly for index type objects
- Distinguished from constraints for clarity and implementation flexibility
- Examples:
- `uidx_users_email`
- `uidx_sessions_token`
- `uidx_api_keys_key`
### Regular Indexes
**Pattern:** `idx_<table>_<column>`
- Standard indexes for query optimization
- Single column: `idx_<table>_<column>`
- Examples:
- `idx_posts_user_id`
- `idx_orders_created_at`
- `idx_users_status`
### Check Constraints
**Pattern:** `chk_<table>_<constraint_purpose>`
- Describe the constraint validation purpose
- Use lowercase, snake_case for the purpose
- Examples:
- `chk_users_age_positive` (CHECK (age > 0))
- `chk_orders_quantity_positive` (CHECK (quantity > 0))
- `chk_products_price_valid` (CHECK (price >= 0))
- `chk_users_status_enum` (CHECK (status IN ('active', 'inactive')))
### Sequences
**Pattern:** `identity_<table>_<column>`
- Used for SERIAL/IDENTITY columns
- Explicitly named for clarity and management
- Examples:
- `identity_users_id`
- `identity_posts_id`
- `identity_transactions_id`
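As an illustration (a sketch; the exact SQL RelSpec emits may differ), an explicitly named sequence can be created, attached as a column default, and resynced after a bulk load:

```sql
-- Hypothetical: explicit sequence backing users.id
CREATE SEQUENCE public.identity_users_id INCREMENT BY 1 START WITH 1;
ALTER TABLE public.users
    ALTER COLUMN id SET DEFAULT nextval('public.identity_users_id');
-- Resync the sequence after a bulk load
SELECT setval('public.identity_users_id',
              (SELECT COALESCE(MAX(id), 1) FROM public.users));
```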
### Triggers
**Pattern:** `t_<purpose>_<table>`
- Include purpose before table name
- Lowercase, snake_case format
- Examples:
- `t_audit_users` (audit trigger on users table)
- `t_update_timestamp_posts` (timestamp update trigger on posts)
- `t_validate_orders` (validation trigger on orders)
### Trigger Functions
**Pattern:** `tf_<purpose>_<table>`
- Pair with trigger naming convention
- Use `tf_` prefix to distinguish from triggers themselves
- Examples:
- `tf_audit_users` (function for t_audit_users)
- `tf_update_timestamp_posts` (function for t_update_timestamp_posts)
- `tf_validate_orders` (function for t_validate_orders)
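The `t_`/`tf_` pair lines up one-to-one in DDL. A minimal sketch with a placeholder function body (the audit logic itself is hypothetical here):

```sql
-- Hypothetical skeleton for an audit hook on users
CREATE OR REPLACE FUNCTION public.tf_audit_users() RETURNS trigger AS $$
BEGIN
    -- audit logic goes here
    RETURN NULL; -- return value is ignored for AFTER row triggers
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER t_audit_users
    AFTER INSERT OR UPDATE OR DELETE ON public.users
    FOR EACH ROW EXECUTE FUNCTION public.tf_audit_users();
```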
## Multi-Column Objects
### Composite Primary Keys
**Pattern:** `pk_<schema>_<table>`
- Same as single-column PKs
- Example: `pk_public_order_items` (composite key on order_id + item_id)
### Composite Unique Constraints
**Pattern:** `uk_<table>_<column1>_<column2>_[...]`
- Append all column names in order
- Examples:
- `uk_users_email_domain` (UNIQUE(email, domain))
- `uk_inventory_warehouse_sku` (UNIQUE(warehouse_id, sku))
### Composite Unique Indexes
**Pattern:** `uidx_<table>_<column1>_<column2>_[...]`
- Append all column names in order
- Examples:
- `uidx_users_first_name_last_name` (UNIQUE INDEX on first_name, last_name)
- `uidx_sessions_user_id_device_id` (UNIQUE INDEX on user_id, device_id)
### Composite Regular Indexes
**Pattern:** `idx_<table>_<column1>_<column2>_[...]`
- Append all column names in order
- List columns in typical query filter order
- Examples:
- `idx_orders_user_id_created_at` (filter by user, then sort by created_at)
- `idx_logs_level_timestamp` (filter by level, then by timestamp)
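In DDL the composite forms are simply the multi-column variants (hypothetical tables again):

```sql
-- Hypothetical composite unique constraint and composite index
ALTER TABLE public.inventory
    ADD CONSTRAINT uk_inventory_warehouse_sku UNIQUE (warehouse_id, sku);
CREATE INDEX idx_orders_user_id_created_at
    ON public.orders (user_id, created_at);
```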
## Special Cases & Conventions
### Audit Trail Tables
- Audit table naming: `<original_table>_audit` or `audit_<original_table>`
- Audit indexes follow standard pattern: `idx_<audit_table>_<column>`
- Examples:
- Users table audit: `users_audit` with `idx_users_audit_tablename`, `idx_users_audit_changedate`
- Posts table audit: `posts_audit` with `idx_posts_audit_tablename`, `idx_posts_audit_changedate`
### Temporal/Versioning Tables
- Use suffix `_history` or `_versions` if needed
- Apply standard naming rules with the full table name
- Examples:
- `idx_users_history_user_id`
- `uk_posts_versions_version_number`
### Schema-Specific Objects
- Always qualify with schema when needed: `pk_<schema>_<table>`
- Multiple schemas allowed: `pk_public_users`, `pk_staging_users`
### Reserved Words & Special Names
- Avoid PostgreSQL reserved keywords in object names
- If column/table names conflict, use quoted identifiers in DDL
- Naming convention rules still apply to the logical name
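For example, a table named after the reserved word `order` must be quoted in DDL while the logical name stays lowercase (hypothetical):

```sql
-- Hypothetical: reserved word as a table name requires quoting
CREATE TABLE public."order" (
    id bigint NOT NULL,
    CONSTRAINT pk_public_order PRIMARY KEY (id)
);
```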
### Generated/Anonymous Indexes
- If an index lacks explicit naming, default to: `idx_<schema>_<table>`
- Should be replaced with explicit names following standards
- Examples (to be renamed):
- `idx_public_users` → should be `idx_users_<column>`
## Implementation Notes
### Code Generation
- Names are always lowercase in generated SQL
- Underscore separators are required
### Migration Safety
- Do NOT rename objects after creation without explicit migration
- Names should be consistent across all schema versions
- Test generated DDL against PostgreSQL before deployment
### Testing
- Ensure consistency across all table and constraint generation
- Test with reserved words to verify escaping
## Related Documentation
- PostgreSQL Identifier Rules: https://www.postgresql.org/docs/current/sql-syntax-lexical.html#SQL-IDENTIFIERS
- Constraint Documentation: https://www.postgresql.org/docs/current/ddl-constraints.html
- Index Documentation: https://www.postgresql.org/docs/current/indexes.html

View File

@@ -8,6 +8,7 @@ import (
"strings" "strings"
"git.warky.dev/wdevs/relspecgo/pkg/models" "git.warky.dev/wdevs/relspecgo/pkg/models"
"git.warky.dev/wdevs/relspecgo/pkg/pgsql"
"git.warky.dev/wdevs/relspecgo/pkg/writers" "git.warky.dev/wdevs/relspecgo/pkg/writers"
) )
@@ -335,7 +336,7 @@ func (w *MigrationWriter) generateAlterTableScripts(schema *models.Schema, model
SchemaName: schema.Name,
TableName: modelTable.Name,
ColumnName: modelCol.Name,
- ColumnType: modelCol.Type,
+ ColumnType: pgsql.ConvertSQLType(modelCol.Type),
Default: defaultVal,
NotNull: modelCol.NotNull,
})
@@ -359,7 +360,7 @@ func (w *MigrationWriter) generateAlterTableScripts(schema *models.Schema, model
SchemaName: schema.Name,
TableName: modelTable.Name,
ColumnName: modelCol.Name,
- NewType: modelCol.Type,
+ NewType: pgsql.ConvertSQLType(modelCol.Type),
})
if err != nil {
return nil, err
@@ -427,9 +428,11 @@ func (w *MigrationWriter) generateIndexScripts(model *models.Schema, current *mo
for _, modelTable := range model.Tables {
currentTable := currentTables[strings.ToLower(modelTable.Name)]
- // Process primary keys first
+ // Process primary keys first - check explicit constraints
+ foundExplicitPK := false
for constraintName, constraint := range modelTable.Constraints {
if constraint.Type == models.PrimaryKeyConstraint {
+ foundExplicitPK = true
shouldCreate := true
if currentTable != nil {
@@ -464,6 +467,53 @@ func (w *MigrationWriter) generateIndexScripts(model *models.Schema, current *mo
}
}
// If no explicit PK constraint, check for columns with IsPrimaryKey = true
if !foundExplicitPK {
pkColumns := []string{}
for _, col := range modelTable.Columns {
if col.IsPrimaryKey {
pkColumns = append(pkColumns, col.SQLName())
}
}
if len(pkColumns) > 0 {
sort.Strings(pkColumns)
constraintName := fmt.Sprintf("pk_%s_%s", model.SQLName(), modelTable.SQLName())
shouldCreate := true
if currentTable != nil {
// Check if a PK constraint already exists (by any name)
for _, constraint := range currentTable.Constraints {
if constraint.Type == models.PrimaryKeyConstraint {
shouldCreate = false
break
}
}
}
if shouldCreate {
sql, err := w.executor.ExecuteCreatePrimaryKey(CreatePrimaryKeyData{
SchemaName: model.Name,
TableName: modelTable.Name,
ConstraintName: constraintName,
Columns: strings.Join(pkColumns, ", "),
})
if err != nil {
return nil, err
}
script := MigrationScript{
ObjectName: fmt.Sprintf("%s.%s.%s", model.Name, modelTable.Name, constraintName),
ObjectType: "create primary key",
Schema: model.Name,
Priority: 160,
Sequence: len(scripts),
Body: sql,
}
scripts = append(scripts, script)
}
}
}
// Process indexes
for indexName, modelIndex := range modelTable.Indexes {
// Skip primary key indexes
@@ -703,7 +753,7 @@ func (w *MigrationWriter) generateAuditScripts(schema *models.Schema, auditConfi
}
// Generate audit function
- funcName := fmt.Sprintf("ft_audit_%s", table.Name)
+ funcName := fmt.Sprintf("tf_audit_%s", table.Name)
funcData := BuildAuditFunctionData(schema.Name, table, pk, config, auditSchema, auditConfig.UserFunction)
funcSQL, err := w.executor.ExecuteAuditFunction(funcData)

View File

@@ -121,7 +121,7 @@ func TestWriteMigration_WithAudit(t *testing.T) {
}
// Verify audit function
- if !strings.Contains(output, "CREATE OR REPLACE FUNCTION public.ft_audit_users()") {
+ if !strings.Contains(output, "CREATE OR REPLACE FUNCTION public.tf_audit_users()") {
t.Error("Migration missing audit function")
}
@@ -177,7 +177,7 @@ func TestTemplateExecutor_AuditFunction(t *testing.T) {
data := AuditFunctionData{
SchemaName: "public",
- FunctionName: "ft_audit_users",
+ FunctionName: "tf_audit_users",
TableName: "users",
TablePrefix: "NULL",
PrimaryKey: "id",
@@ -202,7 +202,7 @@ func TestTemplateExecutor_AuditFunction(t *testing.T) {
t.Logf("Generated SQL:\n%s", sql) t.Logf("Generated SQL:\n%s", sql)
if !strings.Contains(sql, "CREATE OR REPLACE FUNCTION public.ft_audit_users()") { if !strings.Contains(sql, "CREATE OR REPLACE FUNCTION public.tf_audit_users()") {
t.Error("SQL missing function definition") t.Error("SQL missing function definition")
} }
if !strings.Contains(sql, "IF TG_OP = 'INSERT'") { if !strings.Contains(sql, "IF TG_OP = 'INSERT'") {
@@ -215,3 +215,70 @@ func TestTemplateExecutor_AuditFunction(t *testing.T) {
t.Error("SQL missing DELETE handling") t.Error("SQL missing DELETE handling")
} }
} }
func TestWriteMigration_NumericConstraintNames(t *testing.T) {
// Current database (empty)
current := models.InitDatabase("testdb")
currentSchema := models.InitSchema("entity")
current.Schemas = append(current.Schemas, currentSchema)
// Model database (with constraint starting with number)
model := models.InitDatabase("testdb")
modelSchema := models.InitSchema("entity")
// Create individual_actor_relationship table
table := models.InitTable("individual_actor_relationship", "entity")
idCol := models.InitColumn("id", "individual_actor_relationship", "entity")
idCol.Type = "integer"
idCol.IsPrimaryKey = true
table.Columns["id"] = idCol
actorIDCol := models.InitColumn("actor_id", "individual_actor_relationship", "entity")
actorIDCol.Type = "integer"
table.Columns["actor_id"] = actorIDCol
// Add constraint with name starting with number
constraint := &models.Constraint{
Name: "215162_fk_actor",
Type: models.ForeignKeyConstraint,
Columns: []string{"actor_id"},
ReferencedSchema: "entity",
ReferencedTable: "actor",
ReferencedColumns: []string{"id"},
OnDelete: "CASCADE",
OnUpdate: "NO ACTION",
}
table.Constraints["215162_fk_actor"] = constraint
modelSchema.Tables = append(modelSchema.Tables, table)
model.Schemas = append(model.Schemas, modelSchema)
// Generate migration
var buf bytes.Buffer
writer, err := NewMigrationWriter(&writers.WriterOptions{})
if err != nil {
t.Fatalf("Failed to create writer: %v", err)
}
writer.writer = &buf
err = writer.WriteMigration(model, current)
if err != nil {
t.Fatalf("WriteMigration failed: %v", err)
}
output := buf.String()
t.Logf("Generated migration:\n%s", output)
// Verify constraint name is properly quoted
if !strings.Contains(output, `"215162_fk_actor"`) {
t.Error("Constraint name starting with number should be quoted")
}
// Verify the SQL is syntactically correct (contains required keywords)
if !strings.Contains(output, "ADD CONSTRAINT") {
t.Error("Migration missing ADD CONSTRAINT")
}
if !strings.Contains(output, "FOREIGN KEY") {
t.Error("Migration missing FOREIGN KEY")
}
}

View File

@@ -21,6 +21,7 @@ func TemplateFunctions() map[string]interface{} {
"quote": quote, "quote": quote,
"escape": escape, "escape": escape,
"safe_identifier": safeIdentifier, "safe_identifier": safeIdentifier,
"quote_ident": quoteIdent,
// Type conversion // Type conversion
"goTypeToSQL": goTypeToSQL, "goTypeToSQL": goTypeToSQL,
@@ -122,6 +123,43 @@ func safeIdentifier(s string) string {
return strings.ToLower(safe)
}
// quoteIdent quotes a PostgreSQL identifier if necessary
// Identifiers need quoting if they:
// - Start with a digit
// - Contain special characters
// - Are reserved keywords
// - Contain uppercase letters (to preserve case)
func quoteIdent(s string) string {
if s == "" {
return `""`
}
// Check if quoting is needed
needsQuoting := unicode.IsDigit(rune(s[0])) // Starts with digit
// Contains uppercase letters or special characters
for _, r := range s {
if unicode.IsUpper(r) {
needsQuoting = true
break
}
if !unicode.IsLetter(r) && !unicode.IsDigit(r) && r != '_' {
needsQuoting = true
break
}
}
if needsQuoting {
// Escape double quotes by doubling them
escaped := strings.ReplaceAll(s, `"`, `""`)
return `"` + escaped + `"`
}
return s
}
// Type conversion functions
// goTypeToSQL converts Go type to PostgreSQL type

View File

@@ -101,6 +101,31 @@ func TestSafeIdentifier(t *testing.T) {
}
}
func TestQuoteIdent(t *testing.T) {
tests := []struct {
input string
expected string
}{
{"valid_name", "valid_name"},
{"ValidName", `"ValidName"`},
{"123column", `"123column"`},
{"215162_fk_constraint", `"215162_fk_constraint"`},
{"user-id", `"user-id"`},
{"user@domain", `"user@domain"`},
{`"quoted"`, `"""quoted"""`},
{"", `""`},
{"lowercase", "lowercase"},
{"with_underscore", "with_underscore"},
}
for _, tt := range tests {
result := quoteIdent(tt.input)
if result != tt.expected {
t.Errorf("quoteIdent(%q) = %q, want %q", tt.input, result, tt.expected)
}
}
}
func TestGoTypeToSQL(t *testing.T) {
tests := []struct {
input string
@@ -243,7 +268,7 @@ func TestTemplateFunctions(t *testing.T) {
// Check that all expected functions are registered
expectedFuncs := []string{
"upper", "lower", "snake_case", "camelCase",
- "indent", "quote", "escape", "safe_identifier",
+ "indent", "quote", "escape", "safe_identifier", "quote_ident",
"goTypeToSQL", "sqlTypeToGo", "isNumeric", "isText",
"first", "last", "filter", "mapFunc", "join_with",
"join",

View File

@@ -177,6 +177,72 @@ type AuditTriggerData struct {
Events string
}
// CreateUniqueConstraintData contains data for create unique constraint template
type CreateUniqueConstraintData struct {
SchemaName string
TableName string
ConstraintName string
Columns string
}
// CreateCheckConstraintData contains data for create check constraint template
type CreateCheckConstraintData struct {
SchemaName string
TableName string
ConstraintName string
Expression string
}
// CreateForeignKeyWithCheckData contains data for create foreign key with existence check template
type CreateForeignKeyWithCheckData struct {
SchemaName string
TableName string
ConstraintName string
SourceColumns string
TargetSchema string
TargetTable string
TargetColumns string
OnDelete string
OnUpdate string
Deferrable bool
}
// SetSequenceValueData contains data for set sequence value template
type SetSequenceValueData struct {
SchemaName string
TableName string
SequenceName string
ColumnName string
}
// CreateSequenceData contains data for create sequence template
type CreateSequenceData struct {
SchemaName string
SequenceName string
Increment int
MinValue int64
MaxValue int64
StartValue int64
CacheSize int
}
// AddColumnWithCheckData contains data for add column with existence check template
type AddColumnWithCheckData struct {
SchemaName string
TableName string
ColumnName string
ColumnDefinition string
}
// CreatePrimaryKeyWithAutoGenCheckData contains data for primary key with auto-generated key check template
type CreatePrimaryKeyWithAutoGenCheckData struct {
SchemaName string
TableName string
ConstraintName string
AutoGenNames string // Comma-separated list of names like "'name1', 'name2'"
Columns string
}
// Execute methods for each template
// ExecuteCreateTable executes the create table template
@@ -319,6 +385,76 @@ func (te *TemplateExecutor) ExecuteAuditTrigger(data AuditTriggerData) (string,
return buf.String(), nil
}
// ExecuteCreateUniqueConstraint executes the create unique constraint template
func (te *TemplateExecutor) ExecuteCreateUniqueConstraint(data CreateUniqueConstraintData) (string, error) {
var buf bytes.Buffer
err := te.templates.ExecuteTemplate(&buf, "create_unique_constraint.tmpl", data)
if err != nil {
return "", fmt.Errorf("failed to execute create_unique_constraint template: %w", err)
}
return buf.String(), nil
}
// ExecuteCreateCheckConstraint executes the create check constraint template
func (te *TemplateExecutor) ExecuteCreateCheckConstraint(data CreateCheckConstraintData) (string, error) {
var buf bytes.Buffer
err := te.templates.ExecuteTemplate(&buf, "create_check_constraint.tmpl", data)
if err != nil {
return "", fmt.Errorf("failed to execute create_check_constraint template: %w", err)
}
return buf.String(), nil
}
// ExecuteCreateForeignKeyWithCheck executes the create foreign key with check template
func (te *TemplateExecutor) ExecuteCreateForeignKeyWithCheck(data CreateForeignKeyWithCheckData) (string, error) {
var buf bytes.Buffer
err := te.templates.ExecuteTemplate(&buf, "create_foreign_key_with_check.tmpl", data)
if err != nil {
return "", fmt.Errorf("failed to execute create_foreign_key_with_check template: %w", err)
}
return buf.String(), nil
}
// ExecuteSetSequenceValue executes the set sequence value template
func (te *TemplateExecutor) ExecuteSetSequenceValue(data SetSequenceValueData) (string, error) {
var buf bytes.Buffer
err := te.templates.ExecuteTemplate(&buf, "set_sequence_value.tmpl", data)
if err != nil {
return "", fmt.Errorf("failed to execute set_sequence_value template: %w", err)
}
return buf.String(), nil
}
// ExecuteCreateSequence executes the create sequence template
func (te *TemplateExecutor) ExecuteCreateSequence(data CreateSequenceData) (string, error) {
var buf bytes.Buffer
err := te.templates.ExecuteTemplate(&buf, "create_sequence.tmpl", data)
if err != nil {
return "", fmt.Errorf("failed to execute create_sequence template: %w", err)
}
return buf.String(), nil
}
// ExecuteAddColumnWithCheck executes the add column with check template
func (te *TemplateExecutor) ExecuteAddColumnWithCheck(data AddColumnWithCheckData) (string, error) {
var buf bytes.Buffer
err := te.templates.ExecuteTemplate(&buf, "add_column_with_check.tmpl", data)
if err != nil {
return "", fmt.Errorf("failed to execute add_column_with_check template: %w", err)
}
return buf.String(), nil
}
// ExecuteCreatePrimaryKeyWithAutoGenCheck executes the create primary key with auto-generated key check template
func (te *TemplateExecutor) ExecuteCreatePrimaryKeyWithAutoGenCheck(data CreatePrimaryKeyWithAutoGenCheckData) (string, error) {
var buf bytes.Buffer
err := te.templates.ExecuteTemplate(&buf, "create_primary_key_with_autogen_check.tmpl", data)
if err != nil {
return "", fmt.Errorf("failed to execute create_primary_key_with_autogen_check template: %w", err)
}
return buf.String(), nil
}
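A usage sketch for these executors (the `NewTemplateExecutor` constructor and the `CreateUniqueConstraintData` field names are inferred from this diff and the templates below, not a verbatim API):
```
// Sketch: render the unique-constraint DDL for public.users.
te, err := NewTemplateExecutor()
if err != nil {
	log.Fatalf("load templates: %v", err)
}
ddl, err := te.ExecuteCreateUniqueConstraint(CreateUniqueConstraintData{
	SchemaName:     "public",
	TableName:      "users",
	ConstraintName: "uq_users_email",
	Columns:        "email",
})
if err != nil {
	log.Fatalf("render: %v", err)
}
fmt.Println(ddl) // DO $$ ... ADD CONSTRAINT "uq_users_email" UNIQUE (email) ... $$;
```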
// Helper functions to build template data from models
// BuildCreateTableData builds CreateTableData from a models.Table
@@ -355,7 +491,7 @@ func BuildAuditFunctionData(
auditSchema string,
userFunction string,
) AuditFunctionData {
funcName := fmt.Sprintf("tf_audit_%s", table.Name)
// Build list of audited columns
auditedColumns := make([]*models.Column, 0)


@@ -1,4 +1,4 @@
ALTER TABLE {{quote_ident .SchemaName}}.{{quote_ident .TableName}}
ADD COLUMN IF NOT EXISTS {{quote_ident .ColumnName}} {{.ColumnType}}
{{- if .Default}} DEFAULT {{.Default}}{{end}}
{{- if .NotNull}} NOT NULL{{end}};


@@ -0,0 +1,12 @@
DO $$
BEGIN
IF NOT EXISTS (
SELECT 1 FROM information_schema.columns
WHERE table_schema = '{{.SchemaName}}'
AND table_name = '{{.TableName}}'
AND column_name = '{{.ColumnName}}'
) THEN
ALTER TABLE {{quote_ident .SchemaName}}.{{quote_ident .TableName}} ADD COLUMN {{.ColumnDefinition}};
END IF;
END;
$$;
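The `{{quote_ident ...}}` calls in these templates assume a helper registered on the template set; its registration is not part of this diff. A minimal sketch consistent with PostgreSQL's identifier-quoting rules (wrap in double quotes, double any embedded quote):
```
// Sketch: a quote_ident helper wired into the template FuncMap.
import (
	"strings"
	"text/template"
)

func quoteIdent(name string) string {
	// Reserved words and mixed-case identifiers survive quoting.
	return `"` + strings.ReplaceAll(name, `"`, `""`) + `"`
}

var templateFuncs = template.FuncMap{"quote_ident": quoteIdent}
```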


@@ -1,7 +1,7 @@
{{- if .SetDefault -}}
ALTER TABLE {{quote_ident .SchemaName}}.{{quote_ident .TableName}}
ALTER COLUMN {{quote_ident .ColumnName}} SET DEFAULT {{.DefaultValue}};
{{- else -}}
ALTER TABLE {{quote_ident .SchemaName}}.{{quote_ident .TableName}}
ALTER COLUMN {{quote_ident .ColumnName}} DROP DEFAULT;
{{- end -}}


@@ -1,2 +1,2 @@
ALTER TABLE {{quote_ident .SchemaName}}.{{quote_ident .TableName}}
ALTER COLUMN {{quote_ident .ColumnName}} TYPE {{.NewType}};


@@ -1 +1 @@
COMMENT ON COLUMN {{quote_ident .SchemaName}}.{{quote_ident .TableName}}.{{quote_ident .ColumnName}} IS '{{.Comment}}';


@@ -1 +1 @@
COMMENT ON TABLE {{quote_ident .SchemaName}}.{{quote_ident .TableName}} IS '{{.Comment}}';


@@ -0,0 +1,12 @@
DO $$
BEGIN
IF NOT EXISTS (
SELECT 1 FROM information_schema.table_constraints
WHERE table_schema = '{{.SchemaName}}'
AND table_name = '{{.TableName}}'
AND constraint_name = '{{.ConstraintName}}'
) THEN
ALTER TABLE {{quote_ident .SchemaName}}.{{quote_ident .TableName}} ADD CONSTRAINT {{quote_ident .ConstraintName}} CHECK ({{.Expression}});
END IF;
END;
$$;


@@ -1,10 +1,10 @@
ALTER TABLE {{quote_ident .SchemaName}}.{{quote_ident .TableName}}
DROP CONSTRAINT IF EXISTS {{quote_ident .ConstraintName}};
ALTER TABLE {{quote_ident .SchemaName}}.{{quote_ident .TableName}}
ADD CONSTRAINT {{quote_ident .ConstraintName}}
FOREIGN KEY ({{.SourceColumns}})
REFERENCES {{quote_ident .TargetSchema}}.{{quote_ident .TargetTable}} ({{.TargetColumns}})
ON DELETE {{.OnDelete}}
ON UPDATE {{.OnUpdate}}
DEFERRABLE;


@@ -0,0 +1,18 @@
DO $$
BEGIN
IF NOT EXISTS (
SELECT 1 FROM information_schema.table_constraints
WHERE table_schema = '{{.SchemaName}}'
AND table_name = '{{.TableName}}'
AND constraint_name = '{{.ConstraintName}}'
) THEN
ALTER TABLE {{quote_ident .SchemaName}}.{{quote_ident .TableName}}
ADD CONSTRAINT {{quote_ident .ConstraintName}}
FOREIGN KEY ({{.SourceColumns}})
REFERENCES {{quote_ident .TargetSchema}}.{{quote_ident .TargetTable}} ({{.TargetColumns}})
ON DELETE {{.OnDelete}}
ON UPDATE {{.OnUpdate}}{{if .Deferrable}}
DEFERRABLE{{end}};
END IF;
END;
$$;


@@ -1,2 +1,2 @@
CREATE {{if .Unique}}UNIQUE {{end}}INDEX IF NOT EXISTS {{quote_ident .IndexName}}
ON {{quote_ident .SchemaName}}.{{quote_ident .TableName}} USING {{.IndexType}} ({{.Columns}});


@@ -6,8 +6,8 @@ BEGIN
AND table_name = '{{.TableName}}'
AND constraint_name = '{{.ConstraintName}}'
) THEN
ALTER TABLE {{quote_ident .SchemaName}}.{{quote_ident .TableName}}
ADD CONSTRAINT {{quote_ident .ConstraintName}} PRIMARY KEY ({{.Columns}});
END IF;
END;
$$;


@@ -0,0 +1,27 @@
DO $$
DECLARE
auto_pk_name text;
BEGIN
-- Drop auto-generated primary key if it exists
SELECT constraint_name INTO auto_pk_name
FROM information_schema.table_constraints
WHERE table_schema = '{{.SchemaName}}'
AND table_name = '{{.TableName}}'
AND constraint_type = 'PRIMARY KEY'
AND constraint_name IN ({{.AutoGenNames}});
IF auto_pk_name IS NOT NULL THEN
EXECUTE 'ALTER TABLE {{quote_ident .SchemaName}}.{{quote_ident .TableName}} DROP CONSTRAINT ' || quote_ident(auto_pk_name);
END IF;
-- Add named primary key if it doesn't exist
IF NOT EXISTS (
SELECT 1 FROM information_schema.table_constraints
WHERE table_schema = '{{.SchemaName}}'
AND table_name = '{{.TableName}}'
AND constraint_name = '{{.ConstraintName}}'
) THEN
ALTER TABLE {{quote_ident .SchemaName}}.{{quote_ident .TableName}} ADD CONSTRAINT {{quote_ident .ConstraintName}} PRIMARY KEY ({{.Columns}});
END IF;
END;
$$;


@@ -0,0 +1,6 @@
CREATE SEQUENCE IF NOT EXISTS {{quote_ident .SchemaName}}.{{quote_ident .SequenceName}}
INCREMENT {{.Increment}}
MINVALUE {{.MinValue}}
MAXVALUE {{.MaxValue}}
START {{.StartValue}}
CACHE {{.CacheSize}};


@@ -1,7 +1,7 @@
CREATE TABLE IF NOT EXISTS {{quote_ident .SchemaName}}.{{quote_ident .TableName}} (
{{- range $i, $col := .Columns}}
{{- if $i}},{{end}}
{{quote_ident $col.Name}} {{$col.Type}}
{{- if $col.Default}} DEFAULT {{$col.Default}}{{end}}
{{- if $col.NotNull}} NOT NULL{{end}}
{{- end}}


@@ -0,0 +1,12 @@
DO $$
BEGIN
IF NOT EXISTS (
SELECT 1 FROM information_schema.table_constraints
WHERE table_schema = '{{.SchemaName}}'
AND table_name = '{{.TableName}}'
AND constraint_name = '{{.ConstraintName}}'
) THEN
ALTER TABLE {{quote_ident .SchemaName}}.{{quote_ident .TableName}} ADD CONSTRAINT {{quote_ident .ConstraintName}} UNIQUE ({{.Columns}});
END IF;
END;
$$;


@@ -1 +1 @@
ALTER TABLE {{quote_ident .SchemaName}}.{{quote_ident .TableName}} DROP CONSTRAINT IF EXISTS {{quote_ident .ConstraintName}};


@@ -1 +1 @@
DROP INDEX IF EXISTS {{quote_ident .SchemaName}}.{{quote_ident .IndexName}} CASCADE;


@@ -0,0 +1,19 @@
DO $$
DECLARE
m_cnt bigint;
BEGIN
IF EXISTS (
SELECT 1 FROM pg_class c
INNER JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relname = '{{.SequenceName}}'
AND n.nspname = '{{.SchemaName}}'
AND c.relkind = 'S'
) THEN
SELECT COALESCE(MAX({{quote_ident .ColumnName}}), 0) + 1
FROM {{quote_ident .SchemaName}}.{{quote_ident .TableName}}
INTO m_cnt;
PERFORM setval('{{quote_ident .SchemaName}}.{{quote_ident .SequenceName}}'::regclass, m_cnt);
END IF;
END;
$$;

File diff suppressed because it is too large.


@@ -45,11 +45,11 @@ func TestWriteDatabase(t *testing.T) {
// Add unique index
uniqueEmailIndex := &models.Index{
Name: "uidx_users_email",
Unique: true,
Columns: []string{"email"},
}
table.Indexes["uidx_users_email"] = uniqueEmailIndex
schema.Tables = append(schema.Tables, table)
db.Schemas = append(db.Schemas, schema)
@@ -164,6 +164,296 @@ func TestWriteForeignKeys(t *testing.T) {
}
}
func TestWriteUniqueConstraints(t *testing.T) {
// Create a test database with unique constraints
db := models.InitDatabase("testdb")
schema := models.InitSchema("public")
// Create table with unique constraints
table := models.InitTable("users", "public")
// Add columns
emailCol := models.InitColumn("email", "users", "public")
emailCol.Type = "varchar(255)"
emailCol.NotNull = true
table.Columns["email"] = emailCol
guidCol := models.InitColumn("guid", "users", "public")
guidCol.Type = "uuid"
guidCol.NotNull = true
table.Columns["guid"] = guidCol
// Add unique constraints
emailConstraint := &models.Constraint{
Name: "uq_email",
Type: models.UniqueConstraint,
Schema: "public",
Table: "users",
Columns: []string{"email"},
}
table.Constraints["uq_email"] = emailConstraint
guidConstraint := &models.Constraint{
Name: "uq_guid",
Type: models.UniqueConstraint,
Schema: "public",
Table: "users",
Columns: []string{"guid"},
}
table.Constraints["uq_guid"] = guidConstraint
schema.Tables = append(schema.Tables, table)
db.Schemas = append(db.Schemas, schema)
// Create writer with output to buffer
var buf bytes.Buffer
options := &writers.WriterOptions{}
writer := NewWriter(options)
writer.writer = &buf
// Write the database
err := writer.WriteDatabase(db)
if err != nil {
t.Fatalf("WriteDatabase failed: %v", err)
}
output := buf.String()
// Print output for debugging
t.Logf("Generated SQL:\n%s", output)
// Verify unique constraints are present
if !strings.Contains(output, "-- Unique constraints for schema: public") {
t.Errorf("Output missing unique constraints header")
}
if !strings.Contains(output, "ADD CONSTRAINT uq_email UNIQUE (email)") {
t.Errorf("Output missing uq_email unique constraint\nFull output:\n%s", output)
}
if !strings.Contains(output, "ADD CONSTRAINT uq_guid UNIQUE (guid)") {
t.Errorf("Output missing uq_guid unique constraint\nFull output:\n%s", output)
}
}
func TestWriteCheckConstraints(t *testing.T) {
// Create a test database with check constraints
db := models.InitDatabase("testdb")
schema := models.InitSchema("public")
// Create table with check constraints
table := models.InitTable("products", "public")
// Add columns
priceCol := models.InitColumn("price", "products", "public")
priceCol.Type = "numeric(10,2)"
table.Columns["price"] = priceCol
statusCol := models.InitColumn("status", "products", "public")
statusCol.Type = "varchar(20)"
table.Columns["status"] = statusCol
quantityCol := models.InitColumn("quantity", "products", "public")
quantityCol.Type = "integer"
table.Columns["quantity"] = quantityCol
// Add check constraints
priceConstraint := &models.Constraint{
Name: "ck_price_positive",
Type: models.CheckConstraint,
Schema: "public",
Table: "products",
Expression: "price >= 0",
}
table.Constraints["ck_price_positive"] = priceConstraint
statusConstraint := &models.Constraint{
Name: "ck_status_valid",
Type: models.CheckConstraint,
Schema: "public",
Table: "products",
Expression: "status IN ('active', 'inactive', 'discontinued')",
}
table.Constraints["ck_status_valid"] = statusConstraint
quantityConstraint := &models.Constraint{
Name: "ck_quantity_nonnegative",
Type: models.CheckConstraint,
Schema: "public",
Table: "products",
Expression: "quantity >= 0",
}
table.Constraints["ck_quantity_nonnegative"] = quantityConstraint
schema.Tables = append(schema.Tables, table)
db.Schemas = append(db.Schemas, schema)
// Create writer with output to buffer
var buf bytes.Buffer
options := &writers.WriterOptions{}
writer := NewWriter(options)
writer.writer = &buf
// Write the database
err := writer.WriteDatabase(db)
if err != nil {
t.Fatalf("WriteDatabase failed: %v", err)
}
output := buf.String()
// Print output for debugging
t.Logf("Generated SQL:\n%s", output)
// Verify check constraints are present
if !strings.Contains(output, "-- Check constraints for schema: public") {
t.Errorf("Output missing check constraints header")
}
if !strings.Contains(output, "ADD CONSTRAINT ck_price_positive CHECK (price >= 0)") {
t.Errorf("Output missing ck_price_positive check constraint\nFull output:\n%s", output)
}
if !strings.Contains(output, "ADD CONSTRAINT ck_status_valid CHECK (status IN ('active', 'inactive', 'discontinued'))") {
t.Errorf("Output missing ck_status_valid check constraint\nFull output:\n%s", output)
}
if !strings.Contains(output, "ADD CONSTRAINT ck_quantity_nonnegative CHECK (quantity >= 0)") {
t.Errorf("Output missing ck_quantity_nonnegative check constraint\nFull output:\n%s", output)
}
}
func TestWriteAllConstraintTypes(t *testing.T) {
// Create a comprehensive test with all constraint types
db := models.InitDatabase("testdb")
schema := models.InitSchema("public")
// Create orders table
ordersTable := models.InitTable("orders", "public")
// Add columns
idCol := models.InitColumn("id", "orders", "public")
idCol.Type = "integer"
idCol.IsPrimaryKey = true
ordersTable.Columns["id"] = idCol
userIDCol := models.InitColumn("user_id", "orders", "public")
userIDCol.Type = "integer"
userIDCol.NotNull = true
ordersTable.Columns["user_id"] = userIDCol
orderNumberCol := models.InitColumn("order_number", "orders", "public")
orderNumberCol.Type = "varchar(50)"
orderNumberCol.NotNull = true
ordersTable.Columns["order_number"] = orderNumberCol
totalCol := models.InitColumn("total", "orders", "public")
totalCol.Type = "numeric(10,2)"
ordersTable.Columns["total"] = totalCol
statusCol := models.InitColumn("status", "orders", "public")
statusCol.Type = "varchar(20)"
ordersTable.Columns["status"] = statusCol
// Add primary key constraint
pkConstraint := &models.Constraint{
Name: "pk_orders",
Type: models.PrimaryKeyConstraint,
Schema: "public",
Table: "orders",
Columns: []string{"id"},
}
ordersTable.Constraints["pk_orders"] = pkConstraint
// Add unique constraint
uniqueConstraint := &models.Constraint{
Name: "uq_order_number",
Type: models.UniqueConstraint,
Schema: "public",
Table: "orders",
Columns: []string{"order_number"},
}
ordersTable.Constraints["uq_order_number"] = uniqueConstraint
// Add check constraint
checkConstraint := &models.Constraint{
Name: "ck_total_positive",
Type: models.CheckConstraint,
Schema: "public",
Table: "orders",
Expression: "total > 0",
}
ordersTable.Constraints["ck_total_positive"] = checkConstraint
statusCheckConstraint := &models.Constraint{
Name: "ck_status_valid",
Type: models.CheckConstraint,
Schema: "public",
Table: "orders",
Expression: "status IN ('pending', 'completed', 'cancelled')",
}
ordersTable.Constraints["ck_status_valid"] = statusCheckConstraint
// Add foreign key constraint (referencing a users table)
fkConstraint := &models.Constraint{
Name: "fk_orders_user",
Type: models.ForeignKeyConstraint,
Schema: "public",
Table: "orders",
Columns: []string{"user_id"},
ReferencedSchema: "public",
ReferencedTable: "users",
ReferencedColumns: []string{"id"},
OnDelete: "CASCADE",
OnUpdate: "CASCADE",
}
ordersTable.Constraints["fk_orders_user"] = fkConstraint
schema.Tables = append(schema.Tables, ordersTable)
db.Schemas = append(db.Schemas, schema)
// Create writer with output to buffer
var buf bytes.Buffer
options := &writers.WriterOptions{}
writer := NewWriter(options)
writer.writer = &buf
// Write the database
err := writer.WriteDatabase(db)
if err != nil {
t.Fatalf("WriteDatabase failed: %v", err)
}
output := buf.String()
// Print output for debugging
t.Logf("Generated SQL:\n%s", output)
// Verify all constraint types are present
expectedConstraints := map[string]string{
"Primary Key": "PRIMARY KEY",
"Unique": "ADD CONSTRAINT uq_order_number UNIQUE (order_number)",
"Check (total)": "ADD CONSTRAINT ck_total_positive CHECK (total > 0)",
"Check (status)": "ADD CONSTRAINT ck_status_valid CHECK (status IN ('pending', 'completed', 'cancelled'))",
"Foreign Key": "FOREIGN KEY",
}
for name, expected := range expectedConstraints {
if !strings.Contains(output, expected) {
t.Errorf("Output missing %s constraint: %s\nFull output:\n%s", name, expected, output)
}
}
// Verify section headers
sections := []string{
"-- Primary keys for schema: public",
"-- Unique constraints for schema: public",
"-- Check constraints for schema: public",
"-- Foreign keys for schema: public",
}
for _, section := range sections {
if !strings.Contains(output, section) {
t.Errorf("Output missing section header: %s", section)
}
}
}
func TestWriteTable(t *testing.T) {
// Create a single table
table := models.InitTable("products", "public")
@@ -241,3 +531,327 @@ func TestIsIntegerType(t *testing.T) {
}
}
}
func TestTypeConversion(t *testing.T) {
// Test that invalid Go types are converted to valid PostgreSQL types
db := models.InitDatabase("testdb")
schema := models.InitSchema("public")
// Create a test table with Go types instead of SQL types
table := models.InitTable("test_types", "public")
// Add columns with Go types (invalid for PostgreSQL)
stringCol := models.InitColumn("name", "test_types", "public")
stringCol.Type = "string" // Should be converted to "text"
table.Columns["name"] = stringCol
int64Col := models.InitColumn("big_id", "test_types", "public")
int64Col.Type = "int64" // Should be converted to "bigint"
table.Columns["big_id"] = int64Col
int16Col := models.InitColumn("small_id", "test_types", "public")
int16Col.Type = "int16" // Should be converted to "smallint"
table.Columns["small_id"] = int16Col
schema.Tables = append(schema.Tables, table)
db.Schemas = append(db.Schemas, schema)
// Create writer with output to buffer
var buf bytes.Buffer
options := &writers.WriterOptions{}
writer := NewWriter(options)
writer.writer = &buf
// Write the database
err := writer.WriteDatabase(db)
if err != nil {
t.Fatalf("WriteDatabase failed: %v", err)
}
output := buf.String()
// Print output for debugging
t.Logf("Generated SQL:\n%s", output)
// Verify that Go types were converted to PostgreSQL types
if strings.Contains(output, "string") {
t.Errorf("Output contains 'string' type - should be converted to 'text'\nFull output:\n%s", output)
}
if strings.Contains(output, "int64") {
t.Errorf("Output contains 'int64' type - should be converted to 'bigint'\nFull output:\n%s", output)
}
if strings.Contains(output, "int16") {
t.Errorf("Output contains 'int16' type - should be converted to 'smallint'\nFull output:\n%s", output)
}
// Verify correct PostgreSQL types are present
if !strings.Contains(output, "text") {
t.Errorf("Output missing 'text' type (converted from 'string')\nFull output:\n%s", output)
}
if !strings.Contains(output, "bigint") {
t.Errorf("Output missing 'bigint' type (converted from 'int64')\nFull output:\n%s", output)
}
if !strings.Contains(output, "smallint") {
t.Errorf("Output missing 'smallint' type (converted from 'int16')\nFull output:\n%s", output)
}
}
func TestPrimaryKeyExistenceCheck(t *testing.T) {
db := models.InitDatabase("testdb")
schema := models.InitSchema("public")
table := models.InitTable("products", "public")
idCol := models.InitColumn("id", "products", "public")
idCol.Type = "integer"
idCol.IsPrimaryKey = true
table.Columns["id"] = idCol
nameCol := models.InitColumn("name", "products", "public")
nameCol.Type = "text"
table.Columns["name"] = nameCol
schema.Tables = append(schema.Tables, table)
db.Schemas = append(db.Schemas, schema)
var buf bytes.Buffer
options := &writers.WriterOptions{}
writer := NewWriter(options)
writer.writer = &buf
err := writer.WriteDatabase(db)
if err != nil {
t.Fatalf("WriteDatabase failed: %v", err)
}
output := buf.String()
t.Logf("Generated SQL:\n%s", output)
// Verify our naming convention is used
if !strings.Contains(output, "pk_public_products") {
t.Errorf("Output missing expected primary key name 'pk_public_products'\nFull output:\n%s", output)
}
// Verify it drops auto-generated primary keys
if !strings.Contains(output, "products_pkey") || !strings.Contains(output, "DROP CONSTRAINT") {
t.Errorf("Output missing logic to drop auto-generated primary key\nFull output:\n%s", output)
}
// Verify it checks for our specific named constraint before adding it
if !strings.Contains(output, "constraint_name = 'pk_public_products'") {
t.Errorf("Output missing check for our named primary key constraint\nFull output:\n%s", output)
}
}
func TestColumnSizeSpecifiers(t *testing.T) {
db := models.InitDatabase("testdb")
schema := models.InitSchema("public")
table := models.InitTable("test_sizes", "public")
// Integer with invalid size specifier - should ignore size
integerCol := models.InitColumn("int_col", "test_sizes", "public")
integerCol.Type = "integer"
integerCol.Length = 32
table.Columns["int_col"] = integerCol
// Bigint with invalid size specifier - should ignore size
bigintCol := models.InitColumn("bigint_col", "test_sizes", "public")
bigintCol.Type = "bigint"
bigintCol.Length = 64
table.Columns["bigint_col"] = bigintCol
// Smallint with invalid size specifier - should ignore size
smallintCol := models.InitColumn("smallint_col", "test_sizes", "public")
smallintCol.Type = "smallint"
smallintCol.Length = 16
table.Columns["smallint_col"] = smallintCol
// Text with length - should convert to varchar
textCol := models.InitColumn("text_col", "test_sizes", "public")
textCol.Type = "text"
textCol.Length = 100
table.Columns["text_col"] = textCol
// Varchar with length - should keep varchar with length
varcharCol := models.InitColumn("varchar_col", "test_sizes", "public")
varcharCol.Type = "varchar"
varcharCol.Length = 50
table.Columns["varchar_col"] = varcharCol
// Decimal with precision and scale - should keep them
decimalCol := models.InitColumn("decimal_col", "test_sizes", "public")
decimalCol.Type = "decimal"
decimalCol.Precision = 19
decimalCol.Scale = 4
table.Columns["decimal_col"] = decimalCol
schema.Tables = append(schema.Tables, table)
db.Schemas = append(db.Schemas, schema)
var buf bytes.Buffer
options := &writers.WriterOptions{}
writer := NewWriter(options)
writer.writer = &buf
err := writer.WriteDatabase(db)
if err != nil {
t.Fatalf("WriteDatabase failed: %v", err)
}
output := buf.String()
t.Logf("Generated SQL:\n%s", output)
// Verify invalid size specifiers are NOT present
invalidPatterns := []string{
"integer(32)",
"bigint(64)",
"smallint(16)",
"text(100)",
}
for _, pattern := range invalidPatterns {
if strings.Contains(output, pattern) {
t.Errorf("Output contains invalid pattern '%s' - PostgreSQL doesn't support this\nFull output:\n%s", pattern, output)
}
}
// Verify valid patterns ARE present
validPatterns := []string{
"integer", // without size
"bigint", // without size
"smallint", // without size
"varchar(100)", // text converted to varchar with length
"varchar(50)", // varchar with length
"decimal(19,4)", // decimal with precision and scale
}
for _, pattern := range validPatterns {
if !strings.Contains(output, pattern) {
t.Errorf("Output missing expected pattern '%s'\nFull output:\n%s", pattern, output)
}
}
}
func TestGenerateAddColumnStatements(t *testing.T) {
// Create a test database with tables that have new columns
db := models.InitDatabase("testdb")
schema := models.InitSchema("public")
// Create a table with columns
table := models.InitTable("users", "public")
// Existing column
idCol := models.InitColumn("id", "users", "public")
idCol.Type = "integer"
idCol.NotNull = true
idCol.Sequence = 1
table.Columns["id"] = idCol
// New column to be added
emailCol := models.InitColumn("email", "users", "public")
emailCol.Type = "varchar"
emailCol.Length = 255
emailCol.NotNull = true
emailCol.Sequence = 2
table.Columns["email"] = emailCol
// New column with default
statusCol := models.InitColumn("status", "users", "public")
statusCol.Type = "text"
statusCol.Default = "active"
statusCol.Sequence = 3
table.Columns["status"] = statusCol
schema.Tables = append(schema.Tables, table)
db.Schemas = append(db.Schemas, schema)
// Create writer
options := &writers.WriterOptions{}
writer := NewWriter(options)
// Generate ADD COLUMN statements
statements, err := writer.GenerateAddColumnsForDatabase(db)
if err != nil {
t.Fatalf("GenerateAddColumnsForDatabase failed: %v", err)
}
// Join all statements to verify content
output := strings.Join(statements, "\n")
t.Logf("Generated ADD COLUMN statements:\n%s", output)
// Verify expected elements
expectedStrings := []string{
"ALTER TABLE public.users ADD COLUMN id integer NOT NULL",
"ALTER TABLE public.users ADD COLUMN email varchar(255) NOT NULL",
"ALTER TABLE public.users ADD COLUMN status text DEFAULT 'active'",
"information_schema.columns",
"table_schema = 'public'",
"table_name = 'users'",
"column_name = 'id'",
"column_name = 'email'",
"column_name = 'status'",
}
for _, expected := range expectedStrings {
if !strings.Contains(output, expected) {
t.Errorf("Output missing expected string: %s\nFull output:\n%s", expected, output)
}
}
// Verify DO blocks are present for conditional adds
doBlockCount := strings.Count(output, "DO $$")
if doBlockCount < 3 {
t.Errorf("Expected at least 3 DO blocks (one per column), got %d", doBlockCount)
}
// Verify IF NOT EXISTS logic
ifNotExistsCount := strings.Count(output, "IF NOT EXISTS")
if ifNotExistsCount < 3 {
t.Errorf("Expected at least 3 IF NOT EXISTS checks (one per column), got %d", ifNotExistsCount)
}
}
func TestWriteAddColumnStatements(t *testing.T) {
// Create a test database
db := models.InitDatabase("testdb")
schema := models.InitSchema("public")
// Create a table with a new column to be added
table := models.InitTable("products", "public")
idCol := models.InitColumn("id", "products", "public")
idCol.Type = "integer"
table.Columns["id"] = idCol
// New column with various properties
descCol := models.InitColumn("description", "products", "public")
descCol.Type = "text"
descCol.NotNull = false
table.Columns["description"] = descCol
schema.Tables = append(schema.Tables, table)
db.Schemas = append(db.Schemas, schema)
// Create writer with output to buffer
var buf bytes.Buffer
options := &writers.WriterOptions{}
writer := NewWriter(options)
writer.writer = &buf
// Write ADD COLUMN statements
err := writer.WriteAddColumnStatements(db)
if err != nil {
t.Fatalf("WriteAddColumnStatements failed: %v", err)
}
output := buf.String()
t.Logf("Generated output:\n%s", output)
// Verify output contains expected elements
if !strings.Contains(output, "ALTER TABLE public.products ADD COLUMN id integer") {
t.Errorf("Output missing ADD COLUMN for id\nFull output:\n%s", output)
}
if !strings.Contains(output, "ALTER TABLE public.products ADD COLUMN description text") {
t.Errorf("Output missing ADD COLUMN for description\nFull output:\n%s", output)
}
if !strings.Contains(output, "DO $$") {
t.Errorf("Output missing DO block\nFull output:\n%s", output)
}
}


@@ -4,7 +4,7 @@ The SQL Executor Writer (`sqlexec`) executes SQL scripts from `models.Script` ob
## Features
- **Ordered Execution**: Scripts execute in Priority→Sequence→Name order
- **PostgreSQL Support**: Uses `pgx/v5` driver for robust PostgreSQL connectivity
- **Stop on Error**: Execution halts immediately on first error (default behavior)
- **Progress Reporting**: Prints execution status to stdout
@@ -103,19 +103,40 @@ Scripts are sorted and executed based on:
1. **Priority** (ascending): Lower priority values execute first
2. **Sequence** (ascending): Within same priority, lower sequence values execute first
3. **Name** (ascending): Within same priority and sequence, alphabetical order by name
### Example Execution Order
Given these scripts:
```
Script A: Priority=2, Sequence=1, Name="zebra"
Script B: Priority=1, Sequence=3, Name="script"
Script C: Priority=1, Sequence=1, Name="apple"
Script D: Priority=1, Sequence=1, Name="beta"
Script E: Priority=3, Sequence=1, Name="script"
```
Execution order: **C (apple) → D (beta) → B → A → E**
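The comparator in `executeScripts` is equivalent to:
```
sort.Slice(scripts, func(i, j int) bool {
    if scripts[i].Priority != scripts[j].Priority {
        return scripts[i].Priority < scripts[j].Priority
    }
    if scripts[i].Sequence != scripts[j].Sequence {
        return scripts[i].Sequence < scripts[j].Sequence
    }
    return scripts[i].Name < scripts[j].Name
})
```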
### Directory-based Sorting Example
Given these files:
```
1_001_create_schema.sql
1_001_create_users.sql ← Alphabetically before "drop_tables"
1_001_drop_tables.sql
1_002_add_indexes.sql
2_001_constraints.sql
```
Execution order (note alphabetical sorting at same priority/sequence):
```
1_001_create_schema.sql
1_001_create_users.sql
1_001_drop_tables.sql
1_002_add_indexes.sql
2_001_constraints.sql
```
## Output


@@ -23,6 +23,11 @@ func NewWriter(options *writers.WriterOptions) *Writer {
}
}
// Options returns the writer options (useful for reading execution results)
func (w *Writer) Options() *writers.WriterOptions {
return w.options
}
// WriteDatabase executes all scripts from all schemas in the database
func (w *Writer) WriteDatabase(db *models.Database) error {
if db == nil {
@@ -86,20 +91,39 @@ func (w *Writer) WriteTable(table *models.Table) error {
return fmt.Errorf("WriteTable is not supported for SQL script execution")
}
// executeScripts executes scripts in Priority, Sequence, then Name order
func (w *Writer) executeScripts(ctx context.Context, conn *pgx.Conn, scripts []*models.Script) error {
if len(scripts) == 0 {
return nil
}
// Check if we should ignore errors
ignoreErrors := false
if val, ok := w.options.Metadata["ignore_errors"].(bool); ok {
ignoreErrors = val
}
// Track failed scripts and execution counts
var failedScripts []struct {
name string
priority int
sequence uint
err error
}
successCount := 0
totalCount := 0
// Sort scripts by Priority (ascending), Sequence (ascending), then Name (ascending)
sortedScripts := make([]*models.Script, len(scripts))
copy(sortedScripts, scripts)
sort.Slice(sortedScripts, func(i, j int) bool {
if sortedScripts[i].Priority != sortedScripts[j].Priority {
return sortedScripts[i].Priority < sortedScripts[j].Priority
}
if sortedScripts[i].Sequence != sortedScripts[j].Sequence {
return sortedScripts[i].Sequence < sortedScripts[j].Sequence
}
return sortedScripts[i].Name < sortedScripts[j].Name
})
// Execute each script in order
@@ -108,18 +132,49 @@ func (w *Writer) executeScripts(ctx context.Context, conn *pgx.Conn, scripts []*
continue
}
totalCount++
fmt.Printf("Executing script: %s (Priority=%d, Sequence=%d)\n",
script.Name, script.Priority, script.Sequence)
// Execute the SQL script
_, err := conn.Exec(ctx, script.SQL)
if err != nil {
if ignoreErrors {
fmt.Printf("⚠ Error executing %s: %v (continuing due to --ignore-errors)\n", script.Name, err)
failedScripts = append(failedScripts, struct {
name string
priority int
sequence uint
err error
}{
name: script.Name,
priority: script.Priority,
sequence: script.Sequence,
err: err,
})
continue
}
return fmt.Errorf("script %s (Priority=%d, Sequence=%d): %w",
script.Name, script.Priority, script.Sequence, err)
}
successCount++
fmt.Printf("✓ Successfully executed: %s\n", script.Name)
}
// Store execution results in metadata for caller
w.options.Metadata["execution_total"] = totalCount
w.options.Metadata["execution_success"] = successCount
w.options.Metadata["execution_failed"] = len(failedScripts)
// Print summary of failed scripts if any
if len(failedScripts) > 0 {
fmt.Printf("\n⚠ Failed Scripts Summary (%d failed):\n", len(failedScripts))
for i, failed := range failedScripts {
fmt.Printf(" %d. %s (Priority=%d, Sequence=%d)\n Error: %v\n",
i+1, failed.name, failed.priority, failed.sequence, failed.err)
}
}
return nil
}
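A caller-side sketch of the new behavior (connection configuration is elided; only the Metadata keys and the `Options()` accessor shown above are assumed, plus the `sqlexec` package name from the README):
```
// Sketch (inside a caller; db is a *models.Database built elsewhere).
opts := &writers.WriterOptions{
	Metadata: map[string]interface{}{"ignore_errors": true},
}
w := sqlexec.NewWriter(opts)
if err := w.WriteDatabase(db); err != nil {
	log.Fatalf("execution aborted: %v", err)
}
m := w.Options().Metadata
fmt.Printf("scripts: %v total, %v ok, %v failed\n",
	m["execution_total"], m["execution_success"], m["execution_failed"])
```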


@@ -99,13 +99,13 @@ func TestWriter_WriteTable(t *testing.T) {
}
}
// TestScriptSorting verifies that scripts are sorted correctly by Priority, Sequence, then Name
func TestScriptSorting(t *testing.T) {
scripts := []*models.Script{
{Name: "z_script1", Priority: 2, Sequence: 1, SQL: "SELECT 1;"},
{Name: "script2", Priority: 1, Sequence: 3, SQL: "SELECT 2;"},
{Name: "a_script3", Priority: 1, Sequence: 1, SQL: "SELECT 3;"},
{Name: "b_script4", Priority: 1, Sequence: 1, SQL: "SELECT 4;"},
{Name: "script5", Priority: 3, Sequence: 1, SQL: "SELECT 5;"},
{Name: "script6", Priority: 2, Sequence: 2, SQL: "SELECT 6;"},
}
@@ -114,23 +114,33 @@ func TestScriptSorting(t *testing.T) {
sortedScripts := make([]*models.Script, len(scripts))
copy(sortedScripts, scripts)
// Sort by Priority, Sequence, then Name (matching executeScripts logic)
for i := 0; i < len(sortedScripts)-1; i++ {
for j := i + 1; j < len(sortedScripts); j++ {
si, sj := sortedScripts[i], sortedScripts[j]
// Compare by priority first
if si.Priority > sj.Priority {
sortedScripts[i], sortedScripts[j] = sortedScripts[j], sortedScripts[i]
} else if si.Priority == sj.Priority {
// If same priority, compare by sequence
if si.Sequence > sj.Sequence {
sortedScripts[i], sortedScripts[j] = sortedScripts[j], sortedScripts[i]
} else if si.Sequence == sj.Sequence {
// If same sequence, compare by name
if si.Name > sj.Name {
sortedScripts[i], sortedScripts[j] = sortedScripts[j], sortedScripts[i]
}
}
}
}
}
// Expected order after sorting (Priority -> Sequence -> Name)
expectedOrder := []string{
"a_script3", // Priority 1, Sequence 1, Name a_script3
"b_script4", // Priority 1, Sequence 1, Name b_script4
"script2", // Priority 1, Sequence 3
"z_script1", // Priority 2, Sequence 1
"script6", // Priority 2, Sequence 2
"script5", // Priority 3, Sequence 1
}
@@ -153,6 +163,13 @@ func TestScriptSorting(t *testing.T) {
t.Errorf("Sequence not ascending at position %d with same priority %d: %d > %d",
i, sortedScripts[i].Priority, sortedScripts[i].Sequence, sortedScripts[i+1].Sequence)
}
// Within same priority and sequence, names should be ascending
if sortedScripts[i].Priority == sortedScripts[i+1].Priority &&
sortedScripts[i].Sequence == sortedScripts[i+1].Sequence &&
sortedScripts[i].Name > sortedScripts[i+1].Name {
t.Errorf("Name not ascending at position %d with same priority/sequence: %s > %s",
i, sortedScripts[i].Name, sortedScripts[i+1].Name)
}
}
}


@@ -7,6 +7,7 @@ type TemplateData struct {
// One of these will be populated based on execution mode
Database *models.Database
Schema *models.Schema
Domain *models.Domain
Script *models.Script
Table *models.Table
@@ -57,6 +58,15 @@ func NewSchemaData(schema *models.Schema, metadata map[string]interface{}) *Temp
}
}
// NewDomainData creates template data for domain mode
func NewDomainData(domain *models.Domain, db *models.Database, metadata map[string]interface{}) *TemplateData {
return &TemplateData{
Domain: domain,
ParentDatabase: db,
Metadata: metadata,
}
}
// NewScriptData creates template data for script mode
func NewScriptData(script *models.Script, schema *models.Schema, db *models.Database, metadata map[string]interface{}) *TemplateData {
return &TemplateData{
@@ -85,6 +95,9 @@ func (td *TemplateData) Name() string {
if td.Schema != nil {
return td.Schema.Name
}
if td.Domain != nil {
return td.Domain.Name
}
if td.Script != nil {
return td.Script.Name
}


@@ -20,6 +20,8 @@ const (
DatabaseMode EntrypointMode = "database"
// SchemaMode executes the template once per schema (multi-file output)
SchemaMode EntrypointMode = "schema"
// DomainMode executes the template once per domain (multi-file output)
DomainMode EntrypointMode = "domain"
// ScriptMode executes the template once per script (multi-file output)
ScriptMode EntrypointMode = "script"
// TableMode executes the template once per table (multi-file output)
@@ -80,6 +82,8 @@ func (w *Writer) WriteDatabase(db *models.Database) error {
return w.executeDatabaseMode(db)
case SchemaMode:
return w.executeSchemaMode(db)
case DomainMode:
return w.executeDomainMode(db)
case ScriptMode:
return w.executeScriptMode(db)
case TableMode:
@@ -143,6 +147,28 @@ func (w *Writer) executeSchemaMode(db *models.Database) error {
return nil
}
// executeDomainMode executes the template once per domain
func (w *Writer) executeDomainMode(db *models.Database) error {
for _, domain := range db.Domains {
data := NewDomainData(domain, db, w.options.Metadata)
output, err := w.executeTemplate(data)
if err != nil {
return fmt.Errorf("failed to execute template for domain %s: %w", domain.Name, err)
}
filename, err := w.generateFilename(data)
if err != nil {
return fmt.Errorf("failed to generate filename for domain %s: %w", domain.Name, err)
}
if err := w.writeOutput(output, filename); err != nil {
return fmt.Errorf("failed to write output for domain %s: %w", domain.Name, err)
}
}
return nil
}
// executeScriptMode executes the template once per script
func (w *Writer) executeScriptMode(db *models.Database) error {
for _, schema := range db.Schemas {


@@ -1,6 +1,9 @@
package writers
import (
"regexp"
"strings"
"git.warky.dev/wdevs/relspecgo/pkg/models"
)
@@ -28,3 +31,56 @@ type WriterOptions struct {
// Additional options can be added here as needed
Metadata map[string]interface{}
}
// SanitizeFilename removes quotes, comments, and invalid characters from identifiers
// to make them safe for use in filenames. This handles:
// - Double and single quotes: "table_name" or 'table_name' -> table_name
// - DBML comments: table [note: 'description'] -> table
// - Invalid filename characters: replaced with underscores
func SanitizeFilename(name string) string {
// Remove DBML/DCTX style comments in brackets (e.g., [note: 'description'])
commentRegex := regexp.MustCompile(`\s*\[.*?\]\s*`)
name = commentRegex.ReplaceAllString(name, "")
// Remove quotes (both single and double)
name = strings.ReplaceAll(name, `"`, "")
name = strings.ReplaceAll(name, `'`, "")
// Remove backticks (MySQL style identifiers)
name = strings.ReplaceAll(name, "`", "")
// Replace invalid filename characters with underscores
// Invalid chars: / \ : * ? " < > | and control characters
invalidChars := regexp.MustCompile(`[/\\:*?"<>|\x00-\x1f\x7f]`)
name = invalidChars.ReplaceAllString(name, "_")
// Trim whitespace and consecutive underscores
name = strings.TrimSpace(name)
name = regexp.MustCompile(`_+`).ReplaceAllString(name, "_")
name = strings.Trim(name, "_")
return name
}
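A quick illustration of the rules above (sketch, not part of the package's tests; outputs follow from the regexes in SanitizeFilename):
```
fmt.Println(writers.SanitizeFilename(`"users" [note: 'main table']`)) // users
fmt.Println(writers.SanitizeFilename("tmp/raw:data"))                 // tmp_raw_data
```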
// SanitizeStructTagValue sanitizes a value to be safely used inside Go struct tags.
// Go struct tags are delimited by backticks, so any backtick in the value would break the syntax.
// This function:
// - Removes DBML/DCTX comments in brackets
// - Removes all quotes (double, single, and backticks)
// - Returns a clean identifier safe for use in struct tags and field names
func SanitizeStructTagValue(value string) string {
// Remove DBML/DCTX style comments in brackets (e.g., [note: 'description'])
commentRegex := regexp.MustCompile(`\s*\[.*?\]\s*`)
value = commentRegex.ReplaceAllString(value, "")
// Trim whitespace
value = strings.TrimSpace(value)
// Remove all quotes: backticks, double quotes, and single quotes
// This ensures the value is clean for use as Go identifiers and struct tag values
value = strings.ReplaceAll(value, "`", "")
value = strings.ReplaceAll(value, `"`, "")
value = strings.ReplaceAll(value, `'`, "")
return value
}
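And a sketch of how a generator might consume SanitizeStructTagValue when emitting struct tags (the call site is hypothetical):
```
clean := writers.SanitizeStructTagValue("`email` [note: 'login address']") // "email"
tag := fmt.Sprintf("`db:%q json:%q`", clean, clean)
// tag == `db:"email" json:"email"`
```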


@@ -0,0 +1,5 @@
// First file - users table basic structure
Table public.users {
id bigint [pk, increment]
email varchar(255) [unique, not null]
}


@@ -0,0 +1,8 @@
// Second file - posts table
Table public.posts {
id bigint [pk, increment]
user_id bigint [not null]
title varchar(200) [not null]
content text
created_at timestamp [not null]
}


@@ -0,0 +1,5 @@
// Third file - adds more columns to users table (tests merging)
Table public.users {
name varchar(100)
created_at timestamp [not null]
}


@@ -0,0 +1,10 @@
// File with commented-out refs - should load last
// Contains relationships that depend on earlier tables
// Ref: public.posts.user_id > public.users.id [ondelete: CASCADE]
Table public.comments {
id bigint [pk, increment]
post_id bigint [not null]
content text [not null]
}

vendor/github.com/gdamore/encoding/.appveyor.yml (generated, vendored, new file)

@@ -0,0 +1,13 @@
version: 1.0.{build}
clone_folder: c:\gopath\src\github.com\gdamore\encoding
environment:
GOPATH: c:\gopath
build_script:
- go version
- go env
- SET PATH=%LOCALAPPDATA%\atom\bin;%GOPATH%\bin;%PATH%
- go get -t ./...
- go build
- go install ./...
test_script:
- go test ./...

vendor/github.com/gdamore/encoding/CODE_OF_CONDUCT.md (generated, vendored, new file)

@@ -0,0 +1,73 @@
# Contributor Covenant Code of Conduct
## Our Pledge
In the interest of fostering an open and welcoming environment, we as
contributors and maintainers pledge to making participation in our project and
our community a harassment-free experience for everyone, regardless of age, body
size, disability, ethnicity, gender identity and expression, level of experience,
nationality, personal appearance, race, religion, or sexual identity and
orientation.
## Our Standards
Examples of behavior that contributes to creating a positive environment
include:
* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery and unwelcome sexual attention or
advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic
address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Our Responsibilities
Project maintainers are responsible for clarifying the standards of acceptable
behavior and are expected to take appropriate and fair corrective action in
response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or
reject comments, commits, code, wiki edits, issues, and other contributions
that are not aligned to this Code of Conduct, or to ban temporarily or
permanently any contributor for other behaviors that they deem inappropriate,
threatening, offensive, or harmful.
## Scope
This Code of Conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community. Examples of
representing a project or community include using an official project e-mail
address, posting via an official social media account, or acting as an appointed
representative at an online or offline event. Representation of a project may be
further defined and clarified by project maintainers.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported by contacting the project team at garrett@damore.org. All
complaints will be reviewed and investigated and will result in a response that
is deemed necessary and appropriate to the circumstances. The project team is
obligated to maintain confidentiality with regard to the reporter of an incident.
Further details of specific enforcement policies may be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good
faith may face temporary or permanent repercussions as determined by other
members of the project's leadership.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
[homepage]: https://www.contributor-covenant.org

vendor/github.com/gdamore/encoding/LICENSE (generated, vendored, new file)

@@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

Some files were not shown because too many files have changed in this diff.