13 Commits

Author SHA1 Message Date
c36b5ede2b feat(writer): 🎉 Enhance primary key handling and add tests
All checks were successful
* Implement checks for existing primary keys before adding new ones.
* Drop auto-generated primary keys if they exist.
* Add tests for primary key existence and column size specifiers.
* Improve type conversion handling for PostgreSQL compatibility.
2026-01-31 18:59:32 +02:00
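
The primary-key handling this commit describes (check for an existing primary key, drop an auto-generated one, then add the intended constraint) can be expressed as guarded DDL. The Go sketch below shows one way such a statement could be emitted; the catalog query and all schema/table/constraint names are illustrative assumptions, since the writer code itself is not part of the diffs shown further down.

```go
package main

import "fmt"

// addPrimaryKeySQL emits a guarded statement that adds a primary key only if
// the table does not already have one (hypothetical helper; names are
// illustrative, not the writer's actual API).
func addPrimaryKeySQL(schema, table, constraint, columns string) string {
	return fmt.Sprintf(`DO $$
BEGIN
    IF NOT EXISTS (
        SELECT 1
        FROM pg_constraint c
        JOIN pg_class t ON t.oid = c.conrelid
        JOIN pg_namespace n ON n.oid = t.relnamespace
        WHERE c.contype = 'p' AND n.nspname = '%s' AND t.relname = '%s'
    ) THEN
        ALTER TABLE %s.%s ADD CONSTRAINT %s PRIMARY KEY (%s);
    END IF;
END $$;`, schema, table, schema, table, constraint, columns)
}

func main() {
	fmt.Println(addPrimaryKeySQL("public", "users", "users_pkey", "id"))
}
```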
51ab29f8e3 feat(writer): 🎉 Update index naming conventions for consistency
All checks were successful
* Use SQLName() for primary key constraint naming
* Enhance index name formatting with column suffix
2026-01-31 17:23:18 +02:00
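
The naming convention mentioned above (primary-key constraint names via SQLName(), index names with a column suffix) is not visible in the diffs below, so the following is only a hedged sketch of a column-suffixed index name helper; the idx_ prefix and the underscore join are assumptions, not the writer's confirmed format.

```go
package main

import (
	"fmt"
	"strings"
)

// indexName builds an index name with a column suffix, roughly the convention
// the commit describes; the exact prefix and separator are assumptions.
func indexName(table string, columns []string) string {
	return fmt.Sprintf("idx_%s_%s", table, strings.Join(columns, "_"))
}

func main() {
	fmt.Println(indexName("users", []string{"email"}))           // idx_users_email
	fmt.Println(indexName("orders", []string{"user_id", "sku"})) // idx_orders_user_id_sku
}
```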
f532fc110c feat(writer): 🎉 Enhance script execution order and add symlink skipping
All checks were successful
* Update script execution to sort by Priority, Sequence, and Name.
* Add functionality to skip symbolic links during directory scanning.
* Improve documentation to reflect changes in execution order and features.
* Add tests for symlink skipping and ensure correct script sorting.
2026-01-31 16:59:17 +02:00
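
The execution order this commit describes (Priority, then Sequence, then Name) also appears in the cmd/scripts diff further down; as a self-contained illustration, a comparator along the following lines produces that ordering. The script struct here is a simplified stand-in for the real models type.

```go
package main

import (
	"fmt"
	"sort"
)

// script is a simplified stand-in for the real models type; only the fields
// relevant to ordering are included.
type script struct {
	Priority int
	Sequence int
	Name     string
}

// sortScripts orders scripts by Priority, then Sequence, then Name.
func sortScripts(scripts []script) {
	sort.Slice(scripts, func(i, j int) bool {
		if scripts[i].Priority != scripts[j].Priority {
			return scripts[i].Priority < scripts[j].Priority
		}
		if scripts[i].Sequence != scripts[j].Sequence {
			return scripts[i].Sequence < scripts[j].Sequence
		}
		return scripts[i].Name < scripts[j].Name
	})
}

func main() {
	s := []script{{1, 2, "b"}, {1, 2, "a"}, {0, 9, "z"}}
	sortScripts(s)
	fmt.Println(s) // [{0 9 z} {1 2 a} {1 2 b}]
}
```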
92dff99725 feat(writer): enhance type conversion for PostgreSQL compatibility and add tests
Some checks failed
2026-01-29 21:36:23 +02:00
283b568adb feat(pgsql): add execution reporting for SQL statements
All checks were successful
- Implemented ExecutionReport to track the execution status of SQL statements.
- Added SchemaReport and TableReport to monitor execution per schema and table.
- Enhanced WriteDatabase to execute SQL directly on a PostgreSQL database if a connection string is provided.
- Included error handling and logging for failed statements during execution.
- Added functionality to write execution reports to a JSON file.
- Introduced utility functions to extract table names from CREATE TABLE statements and truncate long SQL statements for error messages.
2026-01-29 21:16:14 +02:00
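
The diff for the execution-report code itself is not included below, so the following Go sketch only illustrates the shape the bullets describe: nested report structs plus a helper that writes them to a JSON file. All field and type names here are assumptions rather than the writer's actual definitions.

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// TableReport, SchemaReport, and ExecutionReport sketch the nested report
// structure described in the commit; the actual field names are assumptions.
type TableReport struct {
	Table    string   `json:"table"`
	Executed int      `json:"executed"`
	Failed   int      `json:"failed"`
	Errors   []string `json:"errors,omitempty"`
}

type SchemaReport struct {
	Schema string        `json:"schema"`
	Tables []TableReport `json:"tables"`
}

type ExecutionReport struct {
	Schemas []SchemaReport `json:"schemas"`
}

// writeReport writes the report to a JSON file, mirroring the
// "write execution reports to a JSON file" bullet above.
func writeReport(path string, r *ExecutionReport) error {
	data, err := json.MarshalIndent(r, "", "  ")
	if err != nil {
		return err
	}
	return os.WriteFile(path, data, 0o644)
}

func main() {
	r := &ExecutionReport{Schemas: []SchemaReport{{
		Schema: "public",
		Tables: []TableReport{{Table: "users", Executed: 3}},
	}}}
	if err := writeReport("report.json", r); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```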
122743ee43 feat(writer): 🎉 Improve primary key handling by checking for explicit constraints and columns
Some checks failed
2026-01-28 22:08:27 +02:00
91b6046b9b feat(writer): 🎉 Enhance PostgreSQL writer, fixed bugs found using origin
Some checks failed
2026-01-28 21:59:25 +02:00
6f55505444 feat(writer): 🎉 Enhance model name generation and formatting
All checks were successful
* Update model name generation to include schema name.
* Add gofmt execution after writing output files.
* Refactor relationship field naming to include schema.
* Update tests to reflect changes in model names and relationships.
2026-01-10 18:28:41 +02:00
e0e7b64c69 feat(writer): 🎉 Resolve field name collisions with methods
All checks were successful
* Implement field name collision resolution in model generation.
* Add tests to verify renaming of fields that conflict with generated method names.
* Ensure primary key type safety in UpdateID method.
2026-01-10 17:54:33 +02:00
4181cb1fbd feat(writer): 🎉 Enhance relationship field naming and uniqueness
All checks were successful
* Update relationship field naming conventions for has-one and has-many relationships.
* Implement logic to ensure unique field names by tracking used names.
* Add tests to verify new naming conventions and uniqueness constraints.
2026-01-10 17:45:13 +02:00
120ffc6a5a feat(writer): 🎉 Update relationship field naming convention
All checks were successful
* Refactor generateRelationshipFieldName to use foreign key columns for unique naming.
* Add test for multiple references to the same table to ensure unique relationship field names.
* Update existing tests to reflect new naming convention.
2026-01-10 13:49:54 +02:00
b20ad35485 feat(writer): 🎉 Add sanitization for struct tag values
All checks were successful
* Implement SanitizeStructTagValue function to clean identifiers for struct tags.
* Update model data generation to use sanitized column names.
* Ensure safe handling of backticks in column names and types across writers.
2026-01-10 13:42:25 +02:00
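
The SanitizeStructTagValue function itself is not shown in the diffs below (only its call sites are), so here is a minimal sketch of the behaviour the bullets describe: stripping backticks and quotes that would break Go struct tag syntax. The real implementation in pkg/writers may cover more cases.

```go
package main

import (
	"fmt"
	"strings"
)

// sanitizeStructTagValue sketches the behaviour described above: strip
// backticks and quotes that would break Go struct tag syntax. The real
// writers.SanitizeStructTagValue may handle additional cases.
func sanitizeStructTagValue(s string) string {
	return strings.TrimSpace(strings.NewReplacer("`", "", `"`, "", "'", "").Replace(s))
}

func main() {
	fmt.Println(sanitizeStructTagValue("`user_id`")) // user_id
	fmt.Println(sanitizeStructTagValue(`"email"`))   // email
}
```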
f258f8baeb feat(writer): 🎉 Add filename sanitization for DBML identifiers
All checks were successful
* Implement SanitizeFilename function to clean identifiers
* Remove quotes, comments, and invalid characters from filenames
* Update filename generation in writers to use sanitized names
2026-01-10 13:32:33 +02:00
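
As with the struct-tag sanitizer, the SanitizeFilename implementation is not part of the visible diffs; the sketch below assumes it strips quotes, bracketed comments, and characters unsafe in filenames, which is what the bullets describe. The exact character set and replacement rules are assumptions.

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

var (
	// bracketed DBML-style comments, e.g. [note: 'description']
	commentRe = regexp.MustCompile(`\[.*?\]`)
	// anything outside a conservative filename-safe character set
	invalidRe = regexp.MustCompile(`[^a-zA-Z0-9._-]+`)
)

// sanitizeFilename sketches the behaviour described above: drop quotes,
// bracketed comments, and characters unsafe in filenames. The real
// writers.SanitizeFilename may differ in detail.
func sanitizeFilename(s string) string {
	s = commentRe.ReplaceAllString(s, "")
	s = strings.NewReplacer("`", "", `"`, "", "'", "").Replace(s)
	return invalidRe.ReplaceAllString(strings.TrimSpace(s), "_")
}

func main() {
	fmt.Println(sanitizeFilename(`"users" [note: 'people']`)) // users
}
```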
31 changed files with 2732 additions and 213 deletions

.gitignore vendored
View File

@@ -47,3 +47,4 @@ dist/
build/ build/
bin/ bin/
tests/integration/failed_statements_example.txt tests/integration/failed_statements_example.txt
test_output.log

View File

@@ -55,6 +55,7 @@ var (
mergeSkipSequences bool mergeSkipSequences bool
mergeSkipTables string // Comma-separated table names to skip mergeSkipTables string // Comma-separated table names to skip
mergeVerbose bool mergeVerbose bool
mergeReportPath string // Path to write merge report
) )
var mergeCmd = &cobra.Command{ var mergeCmd = &cobra.Command{
@@ -78,6 +79,12 @@ Examples:
--source pgsql --source-conn "postgres://user:pass@localhost/source_db" \ --source pgsql --source-conn "postgres://user:pass@localhost/source_db" \
--output json --output-path combined.json --output json --output-path combined.json
# Merge and execute on PostgreSQL database with report
relspec merge --target json --target-path base.json \
--source json --source-path additional.json \
--output pgsql --output-conn "postgres://user:pass@localhost/target_db" \
--merge-report merge-report.json
# Merge DBML and YAML, skip relations # Merge DBML and YAML, skip relations
relspec merge --target dbml --target-path schema.dbml \ relspec merge --target dbml --target-path schema.dbml \
--source yaml --source-path tables.yaml \ --source yaml --source-path tables.yaml \
@@ -115,6 +122,7 @@ func init() {
mergeCmd.Flags().BoolVar(&mergeSkipSequences, "skip-sequences", false, "Skip sequences during merge") mergeCmd.Flags().BoolVar(&mergeSkipSequences, "skip-sequences", false, "Skip sequences during merge")
mergeCmd.Flags().StringVar(&mergeSkipTables, "skip-tables", "", "Comma-separated list of table names to skip during merge") mergeCmd.Flags().StringVar(&mergeSkipTables, "skip-tables", "", "Comma-separated list of table names to skip during merge")
mergeCmd.Flags().BoolVar(&mergeVerbose, "verbose", false, "Show verbose output") mergeCmd.Flags().BoolVar(&mergeVerbose, "verbose", false, "Show verbose output")
mergeCmd.Flags().StringVar(&mergeReportPath, "merge-report", "", "Path to write merge report (JSON format)")
} }
func runMerge(cmd *cobra.Command, args []string) error { func runMerge(cmd *cobra.Command, args []string) error {
@@ -229,7 +237,7 @@ func runMerge(cmd *cobra.Command, args []string) error {
fmt.Fprintf(os.Stderr, " Path: %s\n", mergeOutputPath) fmt.Fprintf(os.Stderr, " Path: %s\n", mergeOutputPath)
} }
err = writeDatabaseForMerge(mergeOutputType, mergeOutputPath, "", targetDB, "Output") err = writeDatabaseForMerge(mergeOutputType, mergeOutputPath, mergeOutputConn, targetDB, "Output")
if err != nil { if err != nil {
return fmt.Errorf("failed to write output: %w", err) return fmt.Errorf("failed to write output: %w", err)
} }
@@ -376,7 +384,17 @@ func writeDatabaseForMerge(dbType, filePath, connString string, db *models.Datab
} }
writer = wtypeorm.NewWriter(&writers.WriterOptions{OutputPath: filePath}) writer = wtypeorm.NewWriter(&writers.WriterOptions{OutputPath: filePath})
case "pgsql": case "pgsql":
writer = wpgsql.NewWriter(&writers.WriterOptions{OutputPath: filePath}) writerOpts := &writers.WriterOptions{OutputPath: filePath}
if connString != "" {
writerOpts.Metadata = map[string]interface{}{
"connection_string": connString,
}
// Add report path if merge report is enabled
if mergeReportPath != "" {
writerOpts.Metadata["report_path"] = mergeReportPath
}
}
writer = wpgsql.NewWriter(writerOpts)
default: default:
return fmt.Errorf("%s: unsupported format '%s'", label, dbType) return fmt.Errorf("%s: unsupported format '%s'", label, dbType)
} }

View File

@@ -39,8 +39,8 @@ Example filenames (hyphen format):
1-002-create-posts.sql # Priority 1, Sequence 2 1-002-create-posts.sql # Priority 1, Sequence 2
10-10-create-newid.pgsql # Priority 10, Sequence 10 10-10-create-newid.pgsql # Priority 10, Sequence 10
Both formats can be mixed in the same directory. Both formats can be mixed in the same directory and subdirectories.
Scripts are executed in order: Priority (ascending), then Sequence (ascending).`, Scripts are executed in order: Priority (ascending), Sequence (ascending), Name (alphabetical).`,
} }
var scriptsListCmd = &cobra.Command{ var scriptsListCmd = &cobra.Command{
@@ -48,8 +48,8 @@ var scriptsListCmd = &cobra.Command{
Short: "List SQL scripts from a directory", Short: "List SQL scripts from a directory",
Long: `List SQL scripts from a directory and show their execution order. Long: `List SQL scripts from a directory and show their execution order.
The scripts are read from the specified directory and displayed in the order The scripts are read recursively from the specified directory and displayed in the order
they would be executed (Priority ascending, then Sequence ascending). they would be executed: Priority (ascending), then Sequence (ascending), then Name (alphabetical).
Example: Example:
relspec scripts list --dir ./migrations`, relspec scripts list --dir ./migrations`,
@@ -61,10 +61,10 @@ var scriptsExecuteCmd = &cobra.Command{
Short: "Execute SQL scripts against a database", Short: "Execute SQL scripts against a database",
Long: `Execute SQL scripts from a directory against a PostgreSQL database. Long: `Execute SQL scripts from a directory against a PostgreSQL database.
Scripts are executed in order: Priority (ascending), then Sequence (ascending). Scripts are executed in order: Priority (ascending), Sequence (ascending), Name (alphabetical).
Execution stops immediately on the first error. Execution stops immediately on the first error.
The directory is scanned recursively for files matching the patterns: The directory is scanned recursively for all subdirectories and files matching the patterns:
{priority}_{sequence}_{name}.sql or .pgsql (underscore format) {priority}_{sequence}_{name}.sql or .pgsql (underscore format)
{priority}-{sequence}-{name}.sql or .pgsql (hyphen format) {priority}-{sequence}-{name}.sql or .pgsql (hyphen format)
@@ -75,7 +75,7 @@ PostgreSQL Connection String Examples:
postgresql://user:pass@host/dbname?sslmode=require postgresql://user:pass@host/dbname?sslmode=require
Examples: Examples:
# Execute migration scripts # Execute migration scripts from a directory (including subdirectories)
relspec scripts execute --dir ./migrations \ relspec scripts execute --dir ./migrations \
--conn "postgres://user:pass@localhost:5432/mydb" --conn "postgres://user:pass@localhost:5432/mydb"
@@ -149,7 +149,7 @@ func runScriptsList(cmd *cobra.Command, args []string) error {
return nil return nil
} }
// Sort scripts by Priority then Sequence // Sort scripts by Priority, Sequence, then Name
sortedScripts := make([]*struct { sortedScripts := make([]*struct {
name string name string
priority int priority int
@@ -186,7 +186,10 @@ func runScriptsList(cmd *cobra.Command, args []string) error {
if sortedScripts[i].priority != sortedScripts[j].priority { if sortedScripts[i].priority != sortedScripts[j].priority {
return sortedScripts[i].priority < sortedScripts[j].priority return sortedScripts[i].priority < sortedScripts[j].priority
} }
if sortedScripts[i].sequence != sortedScripts[j].sequence {
return sortedScripts[i].sequence < sortedScripts[j].sequence return sortedScripts[i].sequence < sortedScripts[j].sequence
}
return sortedScripts[i].name < sortedScripts[j].name
}) })
fmt.Fprintf(os.Stderr, "Found %d script(s) in execution order:\n\n", len(sortedScripts)) fmt.Fprintf(os.Stderr, "Found %d script(s) in execution order:\n\n", len(sortedScripts))
@@ -242,7 +245,7 @@ func runScriptsExecute(cmd *cobra.Command, args []string) error {
fmt.Fprintf(os.Stderr, " ✓ Found %d script(s)\n\n", len(schema.Scripts)) fmt.Fprintf(os.Stderr, " ✓ Found %d script(s)\n\n", len(schema.Scripts))
// Step 2: Execute scripts // Step 2: Execute scripts
fmt.Fprintf(os.Stderr, "[2/2] Executing scripts in order (Priority → Sequence)...\n\n") fmt.Fprintf(os.Stderr, "[2/2] Executing scripts in order (Priority → Sequence → Name)...\n\n")
writer := sqlexec.NewWriter(&writers.WriterOptions{ writer := sqlexec.NewWriter(&writers.WriterOptions{
Metadata: map[string]any{ Metadata: map[string]any{

View File

@@ -4,31 +4,31 @@ import "strings"
var GoToStdTypes = map[string]string{ var GoToStdTypes = map[string]string{
"bool": "boolean", "bool": "boolean",
"int64": "integer", "int64": "bigint",
"int": "integer", "int": "integer",
"int8": "integer", "int8": "smallint",
"int16": "integer", "int16": "smallint",
"int32": "integer", "int32": "integer",
"uint": "integer", "uint": "integer",
"uint8": "integer", "uint8": "smallint",
"uint16": "integer", "uint16": "smallint",
"uint32": "integer", "uint32": "integer",
"uint64": "integer", "uint64": "bigint",
"uintptr": "integer", "uintptr": "bigint",
"znullint64": "integer", "znullint64": "bigint",
"znullint32": "integer", "znullint32": "integer",
"znullbyte": "integer", "znullbyte": "smallint",
"float64": "double", "float64": "double",
"float32": "double", "float32": "double",
"complex64": "double", "complex64": "double",
"complex128": "double", "complex128": "double",
"customfloat64": "double", "customfloat64": "double",
"string": "string", "string": "text",
"Pointer": "integer", "Pointer": "bigint",
"[]byte": "blob", "[]byte": "blob",
"customdate": "string", "customdate": "date",
"customtime": "string", "customtime": "time",
"customtimestamp": "string", "customtimestamp": "timestamp",
"sqlfloat64": "double", "sqlfloat64": "double",
"sqlfloat16": "double", "sqlfloat16": "double",
"sqluuid": "uuid", "sqluuid": "uuid",
@@ -36,9 +36,9 @@ var GoToStdTypes = map[string]string{
"sqljson": "json", "sqljson": "json",
"sqlint64": "bigint", "sqlint64": "bigint",
"sqlint32": "integer", "sqlint32": "integer",
"sqlint16": "integer", "sqlint16": "smallint",
"sqlbool": "boolean", "sqlbool": "boolean",
"sqlstring": "string", "sqlstring": "text",
"nullablejsonb": "jsonb", "nullablejsonb": "jsonb",
"nullablejson": "json", "nullablejson": "json",
"nullableuuid": "uuid", "nullableuuid": "uuid",
@@ -67,7 +67,7 @@ var GoToPGSQLTypes = map[string]string{
"float32": "real", "float32": "real",
"complex64": "double precision", "complex64": "double precision",
"complex128": "double precision", "complex128": "double precision",
"customfloat64": "double precisio", "customfloat64": "double precision",
"string": "text", "string": "text",
"Pointer": "bigint", "Pointer": "bigint",
"[]byte": "bytea", "[]byte": "bytea",
@@ -81,9 +81,9 @@ var GoToPGSQLTypes = map[string]string{
"sqljson": "json", "sqljson": "json",
"sqlint64": "bigint", "sqlint64": "bigint",
"sqlint32": "integer", "sqlint32": "integer",
"sqlint16": "integer", "sqlint16": "smallint",
"sqlbool": "boolean", "sqlbool": "boolean",
"sqlstring": "string", "sqlstring": "text",
"nullablejsonb": "jsonb", "nullablejsonb": "jsonb",
"nullablejson": "json", "nullablejson": "json",
"nullableuuid": "uuid", "nullableuuid": "uuid",

View File

@@ -128,9 +128,19 @@ func (r *Reader) readDirectoryDBML(dirPath string) (*models.Database, error) {
return db, nil return db, nil
} }
// stripQuotes removes surrounding quotes from an identifier // stripQuotes removes surrounding quotes and comments from an identifier
func stripQuotes(s string) string { func stripQuotes(s string) string {
s = strings.TrimSpace(s) s = strings.TrimSpace(s)
// Remove DBML comments in brackets (e.g., [note: 'description'])
// This handles inline comments like: "table_name" [note: 'comment']
commentRegex := regexp.MustCompile(`\s*\[.*?\]\s*`)
s = commentRegex.ReplaceAllString(s, "")
// Trim again after removing comments
s = strings.TrimSpace(s)
// Remove surrounding quotes (double or single)
if len(s) >= 2 && ((s[0] == '"' && s[len(s)-1] == '"') || (s[0] == '\'' && s[len(s)-1] == '\'')) { if len(s) >= 2 && ((s[0] == '"' && s[len(s)-1] == '"') || (s[0] == '\'' && s[len(s)-1] == '\'')) {
return s[1 : len(s)-1] return s[1 : len(s)-1]
} }
@@ -577,10 +587,10 @@ func (r *Reader) parseColumn(line, tableName, schemaName string) (*models.Column
refOp := strings.TrimSpace(refStr) refOp := strings.TrimSpace(refStr)
var isReverse bool var isReverse bool
if strings.HasPrefix(refOp, "<") { if strings.HasPrefix(refOp, "<") {
isReverse = column.IsPrimaryKey // < on PK means "is referenced by" (reverse) // < means "is referenced by" - only makes sense on PK columns
} else if strings.HasPrefix(refOp, ">") { isReverse = column.IsPrimaryKey
isReverse = !column.IsPrimaryKey // > on FK means reverse
} }
// > means "references" - always a forward FK, never reverse
constraint = r.parseRef(refStr) constraint = r.parseRef(refStr)
if constraint != nil { if constraint != nil {

View File

@@ -329,10 +329,10 @@ func (r *Reader) deriveRelationship(table *models.Table, fk *models.Constraint)
relationshipName := fmt.Sprintf("%s_to_%s", table.Name, fk.ReferencedTable) relationshipName := fmt.Sprintf("%s_to_%s", table.Name, fk.ReferencedTable)
relationship := models.InitRelationship(relationshipName, models.OneToMany) relationship := models.InitRelationship(relationshipName, models.OneToMany)
relationship.FromTable = fk.ReferencedTable relationship.FromTable = table.Name
relationship.FromSchema = fk.ReferencedSchema relationship.FromSchema = table.Schema
relationship.ToTable = table.Name relationship.ToTable = fk.ReferencedTable
relationship.ToSchema = table.Schema relationship.ToSchema = fk.ReferencedSchema
relationship.ForeignKey = fk.Name relationship.ForeignKey = fk.Name
// Store constraint actions in properties // Store constraint actions in properties

View File

@@ -328,12 +328,12 @@ func TestDeriveRelationship(t *testing.T) {
t.Errorf("Expected relationship type %s, got %s", models.OneToMany, rel.Type) t.Errorf("Expected relationship type %s, got %s", models.OneToMany, rel.Type)
} }
if rel.FromTable != "users" { if rel.FromTable != "orders" {
t.Errorf("Expected FromTable 'users', got '%s'", rel.FromTable) t.Errorf("Expected FromTable 'orders', got '%s'", rel.FromTable)
} }
if rel.ToTable != "orders" { if rel.ToTable != "users" {
t.Errorf("Expected ToTable 'orders', got '%s'", rel.ToTable) t.Errorf("Expected ToTable 'users', got '%s'", rel.ToTable)
} }
if rel.ForeignKey != "fk_orders_user_id" { if rel.ForeignKey != "fk_orders_user_id" {

View File

@@ -93,6 +93,7 @@ fmt.Printf("Found %d scripts\n", len(schema.Scripts))
## Features ## Features
- **Recursive Directory Scanning**: Automatically scans all subdirectories - **Recursive Directory Scanning**: Automatically scans all subdirectories
- **Symlink Skipping**: Symbolic links are automatically skipped (prevents loops and duplicates)
- **Multiple Extensions**: Supports both `.sql` and `.pgsql` files - **Multiple Extensions**: Supports both `.sql` and `.pgsql` files
- **Flexible Naming**: Extract metadata from filename patterns - **Flexible Naming**: Extract metadata from filename patterns
- **Error Handling**: Validates directory existence and file accessibility - **Error Handling**: Validates directory existence and file accessibility
@@ -153,8 +154,9 @@ go test ./pkg/readers/sqldir/
``` ```
Tests include: Tests include:
- Valid file parsing - Valid file parsing (underscore and hyphen formats)
- Recursive directory scanning - Recursive directory scanning
- Symlink skipping
- Invalid filename handling - Invalid filename handling
- Empty directory handling - Empty directory handling
- Error conditions - Error conditions

View File

@@ -107,11 +107,20 @@ func (r *Reader) readScripts() ([]*models.Script, error) {
return err return err
} }
// Skip directories // Don't process directories as files (WalkDir still descends into them recursively)
if d.IsDir() { if d.IsDir() {
return nil return nil
} }
// Skip symlinks
info, err := d.Info()
if err != nil {
return err
}
if info.Mode()&os.ModeSymlink != 0 {
return nil
}
// Get filename // Get filename
filename := d.Name() filename := d.Name()

View File

@@ -373,3 +373,65 @@ func TestReader_MixedFormat(t *testing.T) {
} }
} }
} }
func TestReader_SkipSymlinks(t *testing.T) {
// Create temporary test directory
tempDir, err := os.MkdirTemp("", "sqldir-test-symlink-*")
if err != nil {
t.Fatalf("Failed to create temp directory: %v", err)
}
defer os.RemoveAll(tempDir)
// Create a real SQL file
realFile := filepath.Join(tempDir, "1_001_real_file.sql")
if err := os.WriteFile(realFile, []byte("SELECT 1;"), 0644); err != nil {
t.Fatalf("Failed to create real file: %v", err)
}
// Create another file to link to
targetFile := filepath.Join(tempDir, "2_001_target.sql")
if err := os.WriteFile(targetFile, []byte("SELECT 2;"), 0644); err != nil {
t.Fatalf("Failed to create target file: %v", err)
}
// Create a symlink to the target file (this should be skipped)
symlinkFile := filepath.Join(tempDir, "3_001_symlink.sql")
if err := os.Symlink(targetFile, symlinkFile); err != nil {
// Skip test on systems that don't support symlinks (e.g., Windows without admin)
t.Skipf("Symlink creation not supported: %v", err)
}
// Create reader
reader := NewReader(&readers.ReaderOptions{
FilePath: tempDir,
})
// Read database
db, err := reader.ReadDatabase()
if err != nil {
t.Fatalf("ReadDatabase failed: %v", err)
}
schema := db.Schemas[0]
// Should only have 2 scripts (real_file and target), symlink should be skipped
if len(schema.Scripts) != 2 {
t.Errorf("Expected 2 scripts (symlink should be skipped), got %d", len(schema.Scripts))
}
// Verify the scripts are the real files, not the symlink
scriptNames := make(map[string]bool)
for _, script := range schema.Scripts {
scriptNames[script.Name] = true
}
if !scriptNames["real_file"] {
t.Error("Expected 'real_file' script to be present")
}
if !scriptNames["target"] {
t.Error("Expected 'target' script to be present")
}
if scriptNames["symlink"] {
t.Error("Symlink script should have been skipped but was found")
}
}

View File

@@ -5,6 +5,7 @@ import (
"strings" "strings"
"git.warky.dev/wdevs/relspecgo/pkg/models" "git.warky.dev/wdevs/relspecgo/pkg/models"
"git.warky.dev/wdevs/relspecgo/pkg/writers"
) )
// TemplateData represents the data passed to the template for code generation // TemplateData represents the data passed to the template for code generation
@@ -111,13 +112,17 @@ func NewModelData(table *models.Table, schema string, typeMapper *TypeMapper) *M
tableName = schema + "." + table.Name tableName = schema + "." + table.Name
} }
// Generate model name: singularize and convert to PascalCase // Generate model name: Model + Schema + Table (all PascalCase)
singularTable := Singularize(table.Name) singularTable := Singularize(table.Name)
modelName := SnakeCaseToPascalCase(singularTable) tablePart := SnakeCaseToPascalCase(singularTable)
// Add "Model" prefix if not already present // Include schema name in model name
if !hasModelPrefix(modelName) { var modelName string
modelName = "Model" + modelName if schema != "" {
schemaPart := SnakeCaseToPascalCase(schema)
modelName = "Model" + schemaPart + tablePart
} else {
modelName = "Model" + tablePart
} }
model := &ModelData{ model := &ModelData{
@@ -133,8 +138,10 @@ func NewModelData(table *models.Table, schema string, typeMapper *TypeMapper) *M
// Find primary key // Find primary key
for _, col := range table.Columns { for _, col := range table.Columns {
if col.IsPrimaryKey { if col.IsPrimaryKey {
model.PrimaryKeyField = SnakeCaseToPascalCase(col.Name) // Sanitize column name to remove backticks
model.IDColumnName = col.Name safeName := writers.SanitizeStructTagValue(col.Name)
model.PrimaryKeyField = SnakeCaseToPascalCase(safeName)
model.IDColumnName = safeName
// Check if PK type is a SQL type (contains resolvespec_common or sql_types) // Check if PK type is a SQL type (contains resolvespec_common or sql_types)
goType := typeMapper.SQLTypeToGoType(col.Type, col.NotNull) goType := typeMapper.SQLTypeToGoType(col.Type, col.NotNull)
model.PrimaryKeyIsSQL = strings.Contains(goType, "resolvespec_common") || strings.Contains(goType, "sql_types") model.PrimaryKeyIsSQL = strings.Contains(goType, "resolvespec_common") || strings.Contains(goType, "sql_types")
@@ -146,6 +153,8 @@ func NewModelData(table *models.Table, schema string, typeMapper *TypeMapper) *M
columns := sortColumns(table.Columns) columns := sortColumns(table.Columns)
for _, col := range columns { for _, col := range columns {
field := columnToField(col, table, typeMapper) field := columnToField(col, table, typeMapper)
// Check for name collision with generated methods and rename if needed
field.Name = resolveFieldNameCollision(field.Name)
model.Fields = append(model.Fields, field) model.Fields = append(model.Fields, field)
} }
@@ -154,10 +163,13 @@ func NewModelData(table *models.Table, schema string, typeMapper *TypeMapper) *M
// columnToField converts a models.Column to FieldData // columnToField converts a models.Column to FieldData
func columnToField(col *models.Column, table *models.Table, typeMapper *TypeMapper) *FieldData { func columnToField(col *models.Column, table *models.Table, typeMapper *TypeMapper) *FieldData {
fieldName := SnakeCaseToPascalCase(col.Name) // Sanitize column name first to remove backticks before generating field name
safeName := writers.SanitizeStructTagValue(col.Name)
fieldName := SnakeCaseToPascalCase(safeName)
goType := typeMapper.SQLTypeToGoType(col.Type, col.NotNull) goType := typeMapper.SQLTypeToGoType(col.Type, col.NotNull)
bunTag := typeMapper.BuildBunTag(col, table) bunTag := typeMapper.BuildBunTag(col, table)
jsonTag := col.Name // Use column name for JSON tag // Use same sanitized name for JSON tag
jsonTag := safeName
return &FieldData{ return &FieldData{
Name: fieldName, Name: fieldName,
@@ -184,9 +196,28 @@ func formatComment(description, comment string) string {
return comment return comment
} }
// hasModelPrefix checks if a name already has "Model" prefix // resolveFieldNameCollision checks if a field name conflicts with generated method names
func hasModelPrefix(name string) bool { // and adds an underscore suffix if there's a collision
return len(name) >= 5 && name[:5] == "Model" func resolveFieldNameCollision(fieldName string) string {
// List of method names that are generated by the template
reservedNames := map[string]bool{
"TableName": true,
"TableNameOnly": true,
"SchemaName": true,
"GetID": true,
"GetIDStr": true,
"SetID": true,
"UpdateID": true,
"GetIDName": true,
"GetPrefix": true,
}
// Check if field name conflicts with a reserved method name
if reservedNames[fieldName] {
return fieldName + "_"
}
return fieldName
} }
// sortColumns sorts columns by sequence, then by name // sortColumns sorts columns by sequence, then by name

View File

@@ -5,6 +5,7 @@ import (
"strings" "strings"
"git.warky.dev/wdevs/relspecgo/pkg/models" "git.warky.dev/wdevs/relspecgo/pkg/models"
"git.warky.dev/wdevs/relspecgo/pkg/writers"
) )
// TypeMapper handles type conversions between SQL and Go types for Bun // TypeMapper handles type conversions between SQL and Go types for Bun
@@ -164,11 +165,14 @@ func (tm *TypeMapper) BuildBunTag(column *models.Column, table *models.Table) st
var parts []string var parts []string
// Column name comes first (no prefix) // Column name comes first (no prefix)
parts = append(parts, column.Name) // Sanitize to remove backticks which would break struct tag syntax
safeName := writers.SanitizeStructTagValue(column.Name)
parts = append(parts, safeName)
// Add type if specified // Add type if specified
if column.Type != "" { if column.Type != "" {
typeStr := column.Type // Sanitize type to remove backticks
typeStr := writers.SanitizeStructTagValue(column.Type)
if column.Length > 0 { if column.Length > 0 {
typeStr = fmt.Sprintf("%s(%d)", typeStr, column.Length) typeStr = fmt.Sprintf("%s(%d)", typeStr, column.Length)
} else if column.Precision > 0 { } else if column.Precision > 0 {
@@ -188,7 +192,9 @@ func (tm *TypeMapper) BuildBunTag(column *models.Column, table *models.Table) st
// Default value // Default value
if column.Default != nil { if column.Default != nil {
parts = append(parts, fmt.Sprintf("default:%v", column.Default)) // Sanitize default value to remove backticks
safeDefault := writers.SanitizeStructTagValue(fmt.Sprintf("%v", column.Default))
parts = append(parts, fmt.Sprintf("default:%s", safeDefault))
} }
// Nullable (Bun uses nullzero for nullable fields) // Nullable (Bun uses nullzero for nullable fields)
@@ -263,7 +269,7 @@ func (tm *TypeMapper) NeedsFmtImport(generateGetIDStr bool) bool {
// GetSQLTypesImport returns the import path for sql_types (ResolveSpec common) // GetSQLTypesImport returns the import path for sql_types (ResolveSpec common)
func (tm *TypeMapper) GetSQLTypesImport() string { func (tm *TypeMapper) GetSQLTypesImport() string {
return "github.com/bitechdev/ResolveSpec/pkg/common" return "github.com/bitechdev/ResolveSpec/pkg/spectypes"
} }
// GetBunImport returns the import path for Bun // GetBunImport returns the import path for Bun

View File

@@ -4,6 +4,7 @@ import (
"fmt" "fmt"
"go/format" "go/format"
"os" "os"
"os/exec"
"path/filepath" "path/filepath"
"strings" "strings"
@@ -124,7 +125,16 @@ func (w *Writer) writeSingleFile(db *models.Database) error {
} }
// Write output // Write output
return w.writeOutput(formatted) if err := w.writeOutput(formatted); err != nil {
return err
}
// Run go fmt on the output file
if w.options.OutputPath != "" {
w.runGoFmt(w.options.OutputPath)
}
return nil
} }
// writeMultiFile writes each table to a separate file // writeMultiFile writes each table to a separate file
@@ -207,13 +217,19 @@ func (w *Writer) writeMultiFile(db *models.Database) error {
} }
// Generate filename: sql_{schema}_{table}.go // Generate filename: sql_{schema}_{table}.go
filename := fmt.Sprintf("sql_%s_%s.go", schema.Name, table.Name) // Sanitize schema and table names to remove quotes, comments, and invalid characters
safeSchemaName := writers.SanitizeFilename(schema.Name)
safeTableName := writers.SanitizeFilename(table.Name)
filename := fmt.Sprintf("sql_%s_%s.go", safeSchemaName, safeTableName)
filepath := filepath.Join(w.options.OutputPath, filename) filepath := filepath.Join(w.options.OutputPath, filename)
// Write file // Write file
if err := os.WriteFile(filepath, []byte(formatted), 0644); err != nil { if err := os.WriteFile(filepath, []byte(formatted), 0644); err != nil {
return fmt.Errorf("failed to write file %s: %w", filename, err) return fmt.Errorf("failed to write file %s: %w", filename, err)
} }
// Run go fmt on the generated file
w.runGoFmt(filepath)
} }
} }
@@ -222,6 +238,9 @@ func (w *Writer) writeMultiFile(db *models.Database) error {
// addRelationshipFields adds relationship fields to the model based on foreign keys // addRelationshipFields adds relationship fields to the model based on foreign keys
func (w *Writer) addRelationshipFields(modelData *ModelData, table *models.Table, schema *models.Schema, db *models.Database) { func (w *Writer) addRelationshipFields(modelData *ModelData, table *models.Table, schema *models.Schema, db *models.Database) {
// Track used field names to detect duplicates
usedFieldNames := make(map[string]int)
// For each foreign key in this table, add a belongs-to/has-one relationship // For each foreign key in this table, add a belongs-to/has-one relationship
for _, constraint := range table.Constraints { for _, constraint := range table.Constraints {
if constraint.Type != models.ForeignKeyConstraint { if constraint.Type != models.ForeignKeyConstraint {
@@ -235,8 +254,9 @@ func (w *Writer) addRelationshipFields(modelData *ModelData, table *models.Table
} }
// Create relationship field (has-one in Bun, similar to belongs-to in GORM) // Create relationship field (has-one in Bun, similar to belongs-to in GORM)
refModelName := w.getModelName(constraint.ReferencedTable) refModelName := w.getModelName(constraint.ReferencedSchema, constraint.ReferencedTable)
fieldName := w.generateRelationshipFieldName(constraint.ReferencedTable) fieldName := w.generateHasOneFieldName(constraint)
fieldName = w.ensureUniqueFieldName(fieldName, usedFieldNames)
relationTag := w.typeMapper.BuildRelationshipTag(constraint, "has-one") relationTag := w.typeMapper.BuildRelationshipTag(constraint, "has-one")
modelData.AddRelationshipField(&FieldData{ modelData.AddRelationshipField(&FieldData{
@@ -263,8 +283,9 @@ func (w *Writer) addRelationshipFields(modelData *ModelData, table *models.Table
// Check if this constraint references our table // Check if this constraint references our table
if constraint.ReferencedTable == table.Name && constraint.ReferencedSchema == schema.Name { if constraint.ReferencedTable == table.Name && constraint.ReferencedSchema == schema.Name {
// Add has-many relationship // Add has-many relationship
otherModelName := w.getModelName(otherTable.Name) otherModelName := w.getModelName(otherSchema.Name, otherTable.Name)
fieldName := w.generateRelationshipFieldName(otherTable.Name) + "s" // Pluralize fieldName := w.generateHasManyFieldName(constraint, otherSchema.Name, otherTable.Name)
fieldName = w.ensureUniqueFieldName(fieldName, usedFieldNames)
relationTag := w.typeMapper.BuildRelationshipTag(constraint, "has-many") relationTag := w.typeMapper.BuildRelationshipTag(constraint, "has-many")
modelData.AddRelationshipField(&FieldData{ modelData.AddRelationshipField(&FieldData{
@@ -295,22 +316,77 @@ func (w *Writer) findTable(schemaName, tableName string, db *models.Database) *m
return nil return nil
} }
// getModelName generates the model name from a table name // getModelName generates the model name from schema and table name
func (w *Writer) getModelName(tableName string) string { func (w *Writer) getModelName(schemaName, tableName string) string {
singular := Singularize(tableName) singular := Singularize(tableName)
modelName := SnakeCaseToPascalCase(singular) tablePart := SnakeCaseToPascalCase(singular)
if !hasModelPrefix(modelName) { // Include schema name in model name
modelName = "Model" + modelName var modelName string
if schemaName != "" {
schemaPart := SnakeCaseToPascalCase(schemaName)
modelName = "Model" + schemaPart + tablePart
} else {
modelName = "Model" + tablePart
} }
return modelName return modelName
} }
// generateRelationshipFieldName generates a field name for a relationship // generateHasOneFieldName generates a field name for has-one relationships
func (w *Writer) generateRelationshipFieldName(tableName string) string { // Uses the foreign key column name for uniqueness
// Use just the prefix (3 letters) for relationship fields func (w *Writer) generateHasOneFieldName(constraint *models.Constraint) string {
return GeneratePrefix(tableName) // Use the foreign key column name to ensure uniqueness
// If there are multiple columns, use the first one
if len(constraint.Columns) > 0 {
columnName := constraint.Columns[0]
// Convert to PascalCase for proper Go field naming
// e.g., "rid_filepointer_request" -> "RelRIDFilepointerRequest"
return "Rel" + SnakeCaseToPascalCase(columnName)
}
// Fallback to table-based prefix if no columns defined
return "Rel" + GeneratePrefix(constraint.ReferencedTable)
}
// generateHasManyFieldName generates a field name for has-many relationships
// Uses the foreign key column name + source table name to avoid duplicates
func (w *Writer) generateHasManyFieldName(constraint *models.Constraint, sourceSchemaName, sourceTableName string) string {
// For has-many, we need to include the source table name to avoid duplicates
// e.g., multiple tables referencing the same column on this table
if len(constraint.Columns) > 0 {
columnName := constraint.Columns[0]
// Get the model name for the source table (pluralized)
sourceModelName := w.getModelName(sourceSchemaName, sourceTableName)
// Remove "Model" prefix if present
sourceModelName = strings.TrimPrefix(sourceModelName, "Model")
// Convert column to PascalCase and combine with source table
// e.g., "rid_api_provider" + "Login" -> "RelRIDAPIProviderLogins"
columnPart := SnakeCaseToPascalCase(columnName)
return "Rel" + columnPart + Pluralize(sourceModelName)
}
// Fallback to table-based naming
sourceModelName := w.getModelName(sourceSchemaName, sourceTableName)
sourceModelName = strings.TrimPrefix(sourceModelName, "Model")
return "Rel" + Pluralize(sourceModelName)
}
// ensureUniqueFieldName ensures a field name is unique by adding numeric suffixes if needed
func (w *Writer) ensureUniqueFieldName(fieldName string, usedNames map[string]int) string {
originalName := fieldName
count := usedNames[originalName]
if count > 0 {
// Name is already used, add numeric suffix
fieldName = fmt.Sprintf("%s%d", originalName, count+1)
}
// Increment the counter for this base name
usedNames[originalName]++
return fieldName
} }
// getPackageName returns the package name from options or defaults to "models" // getPackageName returns the package name from options or defaults to "models"
@@ -341,6 +417,15 @@ func (w *Writer) writeOutput(content string) error {
return nil return nil
} }
// runGoFmt runs go fmt on the specified file
func (w *Writer) runGoFmt(filepath string) {
cmd := exec.Command("gofmt", "-w", filepath)
if err := cmd.Run(); err != nil {
// Don't fail the whole operation if gofmt fails, just warn
fmt.Fprintf(os.Stderr, "Warning: failed to run gofmt on %s: %v\n", filepath, err)
}
}
// shouldUseMultiFile determines whether to use multi-file mode based on metadata or output path // shouldUseMultiFile determines whether to use multi-file mode based on metadata or output path
func (w *Writer) shouldUseMultiFile() bool { func (w *Writer) shouldUseMultiFile() bool {
// Check if multi_file is explicitly set in metadata // Check if multi_file is explicitly set in metadata

View File

@@ -66,7 +66,7 @@ func TestWriter_WriteTable(t *testing.T) {
// Verify key elements are present // Verify key elements are present
expectations := []string{ expectations := []string{
"package models", "package models",
"type ModelUser struct", "type ModelPublicUser struct",
"bun.BaseModel", "bun.BaseModel",
"table:public.users", "table:public.users",
"alias:users", "alias:users",
@@ -78,9 +78,9 @@ func TestWriter_WriteTable(t *testing.T) {
"resolvespec_common.SqlTime", "resolvespec_common.SqlTime",
"bun:\"id", "bun:\"id",
"bun:\"email", "bun:\"email",
"func (m ModelUser) TableName() string", "func (m ModelPublicUser) TableName() string",
"return \"public.users\"", "return \"public.users\"",
"func (m ModelUser) GetID() int64", "func (m ModelPublicUser) GetID() int64",
} }
for _, expected := range expectations { for _, expected := range expectations {
@@ -175,12 +175,378 @@ func TestWriter_WriteDatabase_MultiFile(t *testing.T) {
postsStr := string(postsContent) postsStr := string(postsContent)
// Verify relationship is present with Bun format // Verify relationship is present with Bun format
if !strings.Contains(postsStr, "USE") { // Should now be RelUserID (has-one) instead of USE
t.Errorf("Missing relationship field USE") if !strings.Contains(postsStr, "RelUserID") {
t.Errorf("Missing relationship field RelUserID (new naming convention)")
} }
if !strings.Contains(postsStr, "rel:has-one") { if !strings.Contains(postsStr, "rel:has-one") {
t.Errorf("Missing Bun relationship tag: %s", postsStr) t.Errorf("Missing Bun relationship tag: %s", postsStr)
} }
// Check users file contains has-many relationship
usersContent, err := os.ReadFile(filepath.Join(tmpDir, "sql_public_users.go"))
if err != nil {
t.Fatalf("Failed to read users file: %v", err)
}
usersStr := string(usersContent)
// Should have RelUserIDPublicPosts (has-many) field - includes schema prefix
if !strings.Contains(usersStr, "RelUserIDPublicPosts") {
t.Errorf("Missing has-many relationship field RelUserIDPublicPosts")
}
}
func TestWriter_MultipleReferencesToSameTable(t *testing.T) {
// Test scenario: api_event table with multiple foreign keys to filepointer table
db := models.InitDatabase("testdb")
schema := models.InitSchema("org")
// Filepointer table
filepointer := models.InitTable("filepointer", "org")
filepointer.Columns["id_filepointer"] = &models.Column{
Name: "id_filepointer",
Type: "bigserial",
NotNull: true,
IsPrimaryKey: true,
}
schema.Tables = append(schema.Tables, filepointer)
// API event table with two foreign keys to filepointer
apiEvent := models.InitTable("api_event", "org")
apiEvent.Columns["id_api_event"] = &models.Column{
Name: "id_api_event",
Type: "bigserial",
NotNull: true,
IsPrimaryKey: true,
}
apiEvent.Columns["rid_filepointer_request"] = &models.Column{
Name: "rid_filepointer_request",
Type: "bigint",
NotNull: false,
}
apiEvent.Columns["rid_filepointer_response"] = &models.Column{
Name: "rid_filepointer_response",
Type: "bigint",
NotNull: false,
}
// Add constraints
apiEvent.Constraints["fk_request"] = &models.Constraint{
Name: "fk_request",
Type: models.ForeignKeyConstraint,
Columns: []string{"rid_filepointer_request"},
ReferencedTable: "filepointer",
ReferencedSchema: "org",
ReferencedColumns: []string{"id_filepointer"},
}
apiEvent.Constraints["fk_response"] = &models.Constraint{
Name: "fk_response",
Type: models.ForeignKeyConstraint,
Columns: []string{"rid_filepointer_response"},
ReferencedTable: "filepointer",
ReferencedSchema: "org",
ReferencedColumns: []string{"id_filepointer"},
}
schema.Tables = append(schema.Tables, apiEvent)
db.Schemas = append(db.Schemas, schema)
// Create writer
tmpDir := t.TempDir()
opts := &writers.WriterOptions{
PackageName: "models",
OutputPath: tmpDir,
Metadata: map[string]interface{}{
"multi_file": true,
},
}
writer := NewWriter(opts)
err := writer.WriteDatabase(db)
if err != nil {
t.Fatalf("WriteDatabase failed: %v", err)
}
// Read the api_event file
apiEventContent, err := os.ReadFile(filepath.Join(tmpDir, "sql_org_api_event.go"))
if err != nil {
t.Fatalf("Failed to read api_event file: %v", err)
}
contentStr := string(apiEventContent)
// Verify both relationships have unique names based on column names
expectations := []struct {
fieldName string
tag string
}{
{"RelRIDFilepointerRequest", "join:rid_filepointer_request=id_filepointer"},
{"RelRIDFilepointerResponse", "join:rid_filepointer_response=id_filepointer"},
}
for _, exp := range expectations {
if !strings.Contains(contentStr, exp.fieldName) {
t.Errorf("Missing relationship field: %s\nGenerated:\n%s", exp.fieldName, contentStr)
}
if !strings.Contains(contentStr, exp.tag) {
t.Errorf("Missing relationship tag: %s\nGenerated:\n%s", exp.tag, contentStr)
}
}
// Verify NO duplicate field names (old behavior would create duplicate "FIL" fields)
if strings.Contains(contentStr, "FIL *ModelFilepointer") {
t.Errorf("Found old prefix-based naming (FIL), should use column-based naming")
}
// Also verify has-many relationships on filepointer table
filepointerContent, err := os.ReadFile(filepath.Join(tmpDir, "sql_org_filepointer.go"))
if err != nil {
t.Fatalf("Failed to read filepointer file: %v", err)
}
filepointerStr := string(filepointerContent)
// Should have two different has-many relationships with unique names
hasManyExpectations := []string{
"RelRIDFilepointerRequestOrgAPIEvents", // Has many via rid_filepointer_request
"RelRIDFilepointerResponseOrgAPIEvents", // Has many via rid_filepointer_response
}
for _, exp := range hasManyExpectations {
if !strings.Contains(filepointerStr, exp) {
t.Errorf("Missing has-many relationship field: %s\nGenerated:\n%s", exp, filepointerStr)
}
}
}
func TestWriter_MultipleHasManyRelationships(t *testing.T) {
// Test scenario: api_provider table referenced by multiple tables via rid_api_provider
db := models.InitDatabase("testdb")
schema := models.InitSchema("org")
// Owner table
owner := models.InitTable("owner", "org")
owner.Columns["id_owner"] = &models.Column{
Name: "id_owner",
Type: "bigserial",
NotNull: true,
IsPrimaryKey: true,
}
schema.Tables = append(schema.Tables, owner)
// API Provider table
apiProvider := models.InitTable("api_provider", "org")
apiProvider.Columns["id_api_provider"] = &models.Column{
Name: "id_api_provider",
Type: "bigserial",
NotNull: true,
IsPrimaryKey: true,
}
apiProvider.Columns["rid_owner"] = &models.Column{
Name: "rid_owner",
Type: "bigint",
NotNull: true,
}
apiProvider.Constraints["fk_owner"] = &models.Constraint{
Name: "fk_owner",
Type: models.ForeignKeyConstraint,
Columns: []string{"rid_owner"},
ReferencedTable: "owner",
ReferencedSchema: "org",
ReferencedColumns: []string{"id_owner"},
}
schema.Tables = append(schema.Tables, apiProvider)
// Login table
login := models.InitTable("login", "org")
login.Columns["id_login"] = &models.Column{
Name: "id_login",
Type: "bigserial",
NotNull: true,
IsPrimaryKey: true,
}
login.Columns["rid_api_provider"] = &models.Column{
Name: "rid_api_provider",
Type: "bigint",
NotNull: true,
}
login.Constraints["fk_api_provider"] = &models.Constraint{
Name: "fk_api_provider",
Type: models.ForeignKeyConstraint,
Columns: []string{"rid_api_provider"},
ReferencedTable: "api_provider",
ReferencedSchema: "org",
ReferencedColumns: []string{"id_api_provider"},
}
schema.Tables = append(schema.Tables, login)
// Filepointer table
filepointer := models.InitTable("filepointer", "org")
filepointer.Columns["id_filepointer"] = &models.Column{
Name: "id_filepointer",
Type: "bigserial",
NotNull: true,
IsPrimaryKey: true,
}
filepointer.Columns["rid_api_provider"] = &models.Column{
Name: "rid_api_provider",
Type: "bigint",
NotNull: true,
}
filepointer.Constraints["fk_api_provider"] = &models.Constraint{
Name: "fk_api_provider",
Type: models.ForeignKeyConstraint,
Columns: []string{"rid_api_provider"},
ReferencedTable: "api_provider",
ReferencedSchema: "org",
ReferencedColumns: []string{"id_api_provider"},
}
schema.Tables = append(schema.Tables, filepointer)
// API Event table
apiEvent := models.InitTable("api_event", "org")
apiEvent.Columns["id_api_event"] = &models.Column{
Name: "id_api_event",
Type: "bigserial",
NotNull: true,
IsPrimaryKey: true,
}
apiEvent.Columns["rid_api_provider"] = &models.Column{
Name: "rid_api_provider",
Type: "bigint",
NotNull: true,
}
apiEvent.Constraints["fk_api_provider"] = &models.Constraint{
Name: "fk_api_provider",
Type: models.ForeignKeyConstraint,
Columns: []string{"rid_api_provider"},
ReferencedTable: "api_provider",
ReferencedSchema: "org",
ReferencedColumns: []string{"id_api_provider"},
}
schema.Tables = append(schema.Tables, apiEvent)
db.Schemas = append(db.Schemas, schema)
// Create writer
tmpDir := t.TempDir()
opts := &writers.WriterOptions{
PackageName: "models",
OutputPath: tmpDir,
Metadata: map[string]interface{}{
"multi_file": true,
},
}
writer := NewWriter(opts)
err := writer.WriteDatabase(db)
if err != nil {
t.Fatalf("WriteDatabase failed: %v", err)
}
// Read the api_provider file
apiProviderContent, err := os.ReadFile(filepath.Join(tmpDir, "sql_org_api_provider.go"))
if err != nil {
t.Fatalf("Failed to read api_provider file: %v", err)
}
contentStr := string(apiProviderContent)
// Verify all has-many relationships have unique names
hasManyExpectations := []string{
"RelRIDAPIProviderOrgLogins", // Has many via Login
"RelRIDAPIProviderOrgFilepointers", // Has many via Filepointer
"RelRIDAPIProviderOrgAPIEvents", // Has many via APIEvent
"RelRIDOwner", // Has one via rid_owner
}
for _, exp := range hasManyExpectations {
if !strings.Contains(contentStr, exp) {
t.Errorf("Missing relationship field: %s\nGenerated:\n%s", exp, contentStr)
}
}
// Verify NO duplicate field names
// Count occurrences of "RelRIDAPIProvider" fields - should have 3 unique ones
count := strings.Count(contentStr, "RelRIDAPIProvider")
if count != 3 {
t.Errorf("Expected 3 RelRIDAPIProvider* fields, found %d\nGenerated:\n%s", count, contentStr)
}
// Verify no duplicate declarations (would cause compilation error)
duplicatePattern := "RelRIDAPIProviders []*Model"
if strings.Contains(contentStr, duplicatePattern) {
t.Errorf("Found duplicate field declaration pattern, fields should be unique")
}
}
func TestWriter_FieldNameCollision(t *testing.T) {
// Test scenario: table with columns that would conflict with generated method names
table := models.InitTable("audit_table", "audit")
table.Columns["id_audit_table"] = &models.Column{
Name: "id_audit_table",
Type: "smallint",
NotNull: true,
IsPrimaryKey: true,
Sequence: 1,
}
table.Columns["table_name"] = &models.Column{
Name: "table_name",
Type: "varchar",
Length: 100,
NotNull: true,
Sequence: 2,
}
table.Columns["table_schema"] = &models.Column{
Name: "table_schema",
Type: "varchar",
Length: 100,
NotNull: true,
Sequence: 3,
}
// Create writer
tmpDir := t.TempDir()
opts := &writers.WriterOptions{
PackageName: "models",
OutputPath: filepath.Join(tmpDir, "test.go"),
}
writer := NewWriter(opts)
err := writer.WriteTable(table)
if err != nil {
t.Fatalf("WriteTable failed: %v", err)
}
// Read the generated file
content, err := os.ReadFile(opts.OutputPath)
if err != nil {
t.Fatalf("Failed to read generated file: %v", err)
}
generated := string(content)
// Verify that TableName field was renamed to TableName_ to avoid collision
if !strings.Contains(generated, "TableName_") {
t.Errorf("Expected field 'TableName_' (with underscore) but not found\nGenerated:\n%s", generated)
}
// Verify the struct tag still references the correct database column
if !strings.Contains(generated, `bun:"table_name,`) {
t.Errorf("Expected bun tag to reference 'table_name' column\nGenerated:\n%s", generated)
}
// Verify the TableName() method still exists and doesn't conflict
if !strings.Contains(generated, "func (m ModelAuditAuditTable) TableName() string") {
t.Errorf("TableName() method should still be generated\nGenerated:\n%s", generated)
}
// Verify NO field named just "TableName" (without underscore)
if strings.Contains(generated, "TableName resolvespec_common") || strings.Contains(generated, "TableName string") {
t.Errorf("Field 'TableName' without underscore should not exist (would conflict with method)\nGenerated:\n%s", generated)
}
} }
func TestTypeMapper_SQLTypeToGoType_Bun(t *testing.T) { func TestTypeMapper_SQLTypeToGoType_Bun(t *testing.T) {

View File

@@ -196,7 +196,9 @@ func (w *Writer) writeTableFile(table *models.Table, schema *models.Schema, db *
} }
// Generate filename: {tableName}.ts // Generate filename: {tableName}.ts
filename := filepath.Join(w.options.OutputPath, table.Name+".ts") // Sanitize table name to remove quotes, comments, and invalid characters
safeTableName := writers.SanitizeFilename(table.Name)
filename := filepath.Join(w.options.OutputPath, safeTableName+".ts")
return os.WriteFile(filename, []byte(code), 0644) return os.WriteFile(filename, []byte(code), 0644)
} }

View File

@@ -4,6 +4,7 @@ import (
"sort" "sort"
"git.warky.dev/wdevs/relspecgo/pkg/models" "git.warky.dev/wdevs/relspecgo/pkg/models"
"git.warky.dev/wdevs/relspecgo/pkg/writers"
) )
// TemplateData represents the data passed to the template for code generation // TemplateData represents the data passed to the template for code generation
@@ -24,6 +25,7 @@ type ModelData struct {
Fields []*FieldData Fields []*FieldData
Config *MethodConfig Config *MethodConfig
PrimaryKeyField string // Name of the primary key field PrimaryKeyField string // Name of the primary key field
PrimaryKeyType string // Go type of the primary key field
IDColumnName string // Name of the ID column in database IDColumnName string // Name of the ID column in database
Prefix string // 3-letter prefix Prefix string // 3-letter prefix
} }
@@ -109,13 +111,17 @@ func NewModelData(table *models.Table, schema string, typeMapper *TypeMapper) *M
tableName = schema + "." + table.Name tableName = schema + "." + table.Name
} }
// Generate model name: singularize and convert to PascalCase // Generate model name: Model + Schema + Table (all PascalCase)
singularTable := Singularize(table.Name) singularTable := Singularize(table.Name)
modelName := SnakeCaseToPascalCase(singularTable) tablePart := SnakeCaseToPascalCase(singularTable)
// Add "Model" prefix if not already present // Include schema name in model name
if !hasModelPrefix(modelName) { var modelName string
modelName = "Model" + modelName if schema != "" {
schemaPart := SnakeCaseToPascalCase(schema)
modelName = "Model" + schemaPart + tablePart
} else {
modelName = "Model" + tablePart
} }
model := &ModelData{ model := &ModelData{
@@ -131,8 +137,11 @@ func NewModelData(table *models.Table, schema string, typeMapper *TypeMapper) *M
// Find primary key // Find primary key
for _, col := range table.Columns { for _, col := range table.Columns {
if col.IsPrimaryKey { if col.IsPrimaryKey {
model.PrimaryKeyField = SnakeCaseToPascalCase(col.Name) // Sanitize column name to remove backticks
model.IDColumnName = col.Name safeName := writers.SanitizeStructTagValue(col.Name)
model.PrimaryKeyField = SnakeCaseToPascalCase(safeName)
model.PrimaryKeyType = typeMapper.SQLTypeToGoType(col.Type, col.NotNull)
model.IDColumnName = safeName
break break
} }
} }
@@ -141,6 +150,8 @@ func NewModelData(table *models.Table, schema string, typeMapper *TypeMapper) *M
columns := sortColumns(table.Columns)
for _, col := range columns {
field := columnToField(col, table, typeMapper)
+// Check for name collision with generated methods and rename if needed
+field.Name = resolveFieldNameCollision(field.Name)
model.Fields = append(model.Fields, field)
}
@@ -149,10 +160,13 @@ func NewModelData(table *models.Table, schema string, typeMapper *TypeMapper) *M
// columnToField converts a models.Column to FieldData
func columnToField(col *models.Column, table *models.Table, typeMapper *TypeMapper) *FieldData {
-fieldName := SnakeCaseToPascalCase(col.Name)
+// Sanitize column name first to remove backticks before generating field name
+safeName := writers.SanitizeStructTagValue(col.Name)
+fieldName := SnakeCaseToPascalCase(safeName)
goType := typeMapper.SQLTypeToGoType(col.Type, col.NotNull)
gormTag := typeMapper.BuildGormTag(col, table)
-jsonTag := col.Name // Use column name for JSON tag
+// Use same sanitized name for JSON tag
+jsonTag := safeName
return &FieldData{
Name: fieldName,
@@ -179,9 +193,28 @@ func formatComment(description, comment string) string {
return comment
}
-// hasModelPrefix checks if a name already has "Model" prefix
-func hasModelPrefix(name string) bool {
-return len(name) >= 5 && name[:5] == "Model"
+// resolveFieldNameCollision checks if a field name conflicts with generated method names
+// and adds an underscore suffix if there's a collision
+func resolveFieldNameCollision(fieldName string) string {
// List of method names that are generated by the template
reservedNames := map[string]bool{
"TableName": true,
"TableNameOnly": true,
"SchemaName": true,
"GetID": true,
"GetIDStr": true,
"SetID": true,
"UpdateID": true,
"GetIDName": true,
"GetPrefix": true,
}
// Check if field name conflicts with a reserved method name
if reservedNames[fieldName] {
return fieldName + "_"
}
return fieldName
}
// sortColumns sorts columns by sequence, then by name // sortColumns sorts columns by sequence, then by name

View File

@@ -62,7 +62,7 @@ func (m {{.Name}}) SetID(newid int64) {
{{if and .Config.GenerateUpdateID .PrimaryKeyField}}
// UpdateID updates the primary key value
func (m *{{.Name}}) UpdateID(newid int64) {
-m.{{.PrimaryKeyField}} = int32(newid)
+m.{{.PrimaryKeyField}} = {{.PrimaryKeyType}}(newid)
}
{{end}}
{{if and .Config.GenerateGetIDName .IDColumnName}}

View File

@@ -5,6 +5,7 @@ import (
"strings" "strings"
"git.warky.dev/wdevs/relspecgo/pkg/models" "git.warky.dev/wdevs/relspecgo/pkg/models"
"git.warky.dev/wdevs/relspecgo/pkg/writers"
) )
// TypeMapper handles type conversions between SQL and Go types // TypeMapper handles type conversions between SQL and Go types
@@ -199,12 +200,15 @@ func (tm *TypeMapper) BuildGormTag(column *models.Column, table *models.Table) s
var parts []string
// Always include column name (lowercase as per user requirement)
-parts = append(parts, fmt.Sprintf("column:%s", column.Name))
+// Sanitize to remove backticks which would break struct tag syntax
+safeName := writers.SanitizeStructTagValue(column.Name)
+parts = append(parts, fmt.Sprintf("column:%s", safeName))
// Add type if specified
if column.Type != "" {
// Include length, precision, scale if present
-typeStr := column.Type
+// Sanitize type to remove backticks
+typeStr := writers.SanitizeStructTagValue(column.Type)
if column.Length > 0 {
typeStr = fmt.Sprintf("%s(%d)", typeStr, column.Length)
} else if column.Precision > 0 {
@@ -234,7 +238,9 @@ func (tm *TypeMapper) BuildGormTag(column *models.Column, table *models.Table) s
// Default value
if column.Default != nil {
-parts = append(parts, fmt.Sprintf("default:%v", column.Default))
+// Sanitize default value to remove backticks
+safeDefault := writers.SanitizeStructTagValue(fmt.Sprintf("%v", column.Default))
+parts = append(parts, fmt.Sprintf("default:%s", safeDefault))
}
// Check for unique constraint // Check for unique constraint
@@ -331,5 +337,5 @@ func (tm *TypeMapper) NeedsFmtImport(generateGetIDStr bool) bool {
// GetSQLTypesImport returns the import path for sql_types
func (tm *TypeMapper) GetSQLTypesImport() string {
-return "github.com/bitechdev/ResolveSpec/pkg/common/sql_types"
+return "github.com/bitechdev/ResolveSpec/pkg/spectypes"
}

View File

@@ -4,6 +4,7 @@ import (
"fmt" "fmt"
"go/format" "go/format"
"os" "os"
"os/exec"
"path/filepath" "path/filepath"
"strings" "strings"
@@ -121,7 +122,16 @@ func (w *Writer) writeSingleFile(db *models.Database) error {
} }
// Write output
-return w.writeOutput(formatted)
+if err := w.writeOutput(formatted); err != nil {
+return err
+}
+// Run go fmt on the output file
+if w.options.OutputPath != "" {
+w.runGoFmt(w.options.OutputPath)
+}
+return nil
}
// writeMultiFile writes each table to a separate file // writeMultiFile writes each table to a separate file
@@ -201,13 +211,19 @@ func (w *Writer) writeMultiFile(db *models.Database) error {
} }
// Generate filename: sql_{schema}_{table}.go
-filename := fmt.Sprintf("sql_%s_%s.go", schema.Name, table.Name)
+// Sanitize schema and table names to remove quotes, comments, and invalid characters
+safeSchemaName := writers.SanitizeFilename(schema.Name)
+safeTableName := writers.SanitizeFilename(table.Name)
+filename := fmt.Sprintf("sql_%s_%s.go", safeSchemaName, safeTableName)
filepath := filepath.Join(w.options.OutputPath, filename)
// Write file
if err := os.WriteFile(filepath, []byte(formatted), 0644); err != nil {
return fmt.Errorf("failed to write file %s: %w", filename, err)
}
+// Run go fmt on the generated file
+w.runGoFmt(filepath)
}
} }
@@ -216,6 +232,9 @@ func (w *Writer) writeMultiFile(db *models.Database) error {
// addRelationshipFields adds relationship fields to the model based on foreign keys // addRelationshipFields adds relationship fields to the model based on foreign keys
func (w *Writer) addRelationshipFields(modelData *ModelData, table *models.Table, schema *models.Schema, db *models.Database) { func (w *Writer) addRelationshipFields(modelData *ModelData, table *models.Table, schema *models.Schema, db *models.Database) {
// Track used field names to detect duplicates
usedFieldNames := make(map[string]int)
// For each foreign key in this table, add a belongs-to relationship // For each foreign key in this table, add a belongs-to relationship
for _, constraint := range table.Constraints { for _, constraint := range table.Constraints {
if constraint.Type != models.ForeignKeyConstraint { if constraint.Type != models.ForeignKeyConstraint {
@@ -229,8 +248,9 @@ func (w *Writer) addRelationshipFields(modelData *ModelData, table *models.Table
} }
// Create relationship field (belongs-to)
-refModelName := w.getModelName(constraint.ReferencedTable)
-fieldName := w.generateRelationshipFieldName(constraint.ReferencedTable)
+refModelName := w.getModelName(constraint.ReferencedSchema, constraint.ReferencedTable)
+fieldName := w.generateBelongsToFieldName(constraint)
+fieldName = w.ensureUniqueFieldName(fieldName, usedFieldNames)
relationTag := w.typeMapper.BuildRelationshipTag(constraint, false)
modelData.AddRelationshipField(&FieldData{ modelData.AddRelationshipField(&FieldData{
@@ -257,8 +277,9 @@ func (w *Writer) addRelationshipFields(modelData *ModelData, table *models.Table
// Check if this constraint references our table // Check if this constraint references our table
if constraint.ReferencedTable == table.Name && constraint.ReferencedSchema == schema.Name { if constraint.ReferencedTable == table.Name && constraint.ReferencedSchema == schema.Name {
// Add has-many relationship
-otherModelName := w.getModelName(otherTable.Name)
-fieldName := w.generateRelationshipFieldName(otherTable.Name) + "s" // Pluralize
+otherModelName := w.getModelName(otherSchema.Name, otherTable.Name)
+fieldName := w.generateHasManyFieldName(constraint, otherSchema.Name, otherTable.Name)
+fieldName = w.ensureUniqueFieldName(fieldName, usedFieldNames)
relationTag := w.typeMapper.BuildRelationshipTag(constraint, true)
modelData.AddRelationshipField(&FieldData{ modelData.AddRelationshipField(&FieldData{
@@ -289,22 +310,77 @@ func (w *Writer) findTable(schemaName, tableName string, db *models.Database) *m
return nil return nil
} }
-// getModelName generates the model name from a table name
-func (w *Writer) getModelName(tableName string) string {
+// getModelName generates the model name from schema and table name
+func (w *Writer) getModelName(schemaName, tableName string) string {
singular := Singularize(tableName)
-modelName := SnakeCaseToPascalCase(singular)
-if !hasModelPrefix(modelName) {
-modelName = "Model" + modelName
-}
+tablePart := SnakeCaseToPascalCase(singular)
+// Include schema name in model name
+var modelName string
+if schemaName != "" {
+schemaPart := SnakeCaseToPascalCase(schemaName)
+modelName = "Model" + schemaPart + tablePart
+} else {
+modelName = "Model" + tablePart
+}
return modelName
}
-// generateRelationshipFieldName generates a field name for a relationship
-func (w *Writer) generateRelationshipFieldName(tableName string) string {
-// Use just the prefix (3 letters) for relationship fields
-return GeneratePrefix(tableName)
+// generateBelongsToFieldName generates a field name for belongs-to relationships
+// Uses the foreign key column name for uniqueness
+func (w *Writer) generateBelongsToFieldName(constraint *models.Constraint) string {
+// Use the foreign key column name to ensure uniqueness
// If there are multiple columns, use the first one
if len(constraint.Columns) > 0 {
columnName := constraint.Columns[0]
// Convert to PascalCase for proper Go field naming
// e.g., "rid_filepointer_request" -> "RelRIDFilepointerRequest"
return "Rel" + SnakeCaseToPascalCase(columnName)
}
// Fallback to table-based prefix if no columns defined
return "Rel" + GeneratePrefix(constraint.ReferencedTable)
}
// generateHasManyFieldName generates a field name for has-many relationships
// Uses the foreign key column name + source table name to avoid duplicates
func (w *Writer) generateHasManyFieldName(constraint *models.Constraint, sourceSchemaName, sourceTableName string) string {
// For has-many, we need to include the source table name to avoid duplicates
// e.g., multiple tables referencing the same column on this table
if len(constraint.Columns) > 0 {
columnName := constraint.Columns[0]
// Get the model name for the source table (pluralized)
sourceModelName := w.getModelName(sourceSchemaName, sourceTableName)
// Remove "Model" prefix if present
sourceModelName = strings.TrimPrefix(sourceModelName, "Model")
// Convert column to PascalCase and combine with source table
// e.g., "rid_api_provider" + "Login" -> "RelRIDAPIProviderLogins"
columnPart := SnakeCaseToPascalCase(columnName)
return "Rel" + columnPart + Pluralize(sourceModelName)
}
// Fallback to table-based naming
sourceModelName := w.getModelName(sourceSchemaName, sourceTableName)
sourceModelName = strings.TrimPrefix(sourceModelName, "Model")
return "Rel" + Pluralize(sourceModelName)
}
// ensureUniqueFieldName ensures a field name is unique by adding numeric suffixes if needed
func (w *Writer) ensureUniqueFieldName(fieldName string, usedNames map[string]int) string {
originalName := fieldName
count := usedNames[originalName]
if count > 0 {
// Name is already used, add numeric suffix
fieldName = fmt.Sprintf("%s%d", originalName, count+1)
}
// Increment the counter for this base name
usedNames[originalName]++
return fieldName
}
// getPackageName returns the package name from options or defaults to "models" // getPackageName returns the package name from options or defaults to "models"
@@ -335,6 +411,15 @@ func (w *Writer) writeOutput(content string) error {
return nil return nil
} }
// runGoFmt runs go fmt on the specified file
func (w *Writer) runGoFmt(filepath string) {
cmd := exec.Command("gofmt", "-w", filepath)
if err := cmd.Run(); err != nil {
// Don't fail the whole operation if gofmt fails, just warn
fmt.Fprintf(os.Stderr, "Warning: failed to run gofmt on %s: %v\n", filepath, err)
}
}
// shouldUseMultiFile determines whether to use multi-file mode based on metadata or output path // shouldUseMultiFile determines whether to use multi-file mode based on metadata or output path
func (w *Writer) shouldUseMultiFile() bool { func (w *Writer) shouldUseMultiFile() bool {
// Check if multi_file is explicitly set in metadata // Check if multi_file is explicitly set in metadata

View File

@@ -66,7 +66,7 @@ func TestWriter_WriteTable(t *testing.T) {
// Verify key elements are present
expectations := []string{
"package models",
-"type ModelUser struct",
+"type ModelPublicUser struct",
"ID",
"int64",
"Email",
@@ -75,9 +75,9 @@
"time.Time",
"gorm:\"column:id",
"gorm:\"column:email",
-"func (m ModelUser) TableName() string",
+"func (m ModelPublicUser) TableName() string",
"return \"public.users\"",
-"func (m ModelUser) GetID() int64",
+"func (m ModelPublicUser) GetID() int64",
}
for _, expected := range expectations {
@@ -164,9 +164,437 @@ func TestWriter_WriteDatabase_MultiFile(t *testing.T) {
t.Fatalf("Failed to read posts file: %v", err) t.Fatalf("Failed to read posts file: %v", err)
} }
if !strings.Contains(string(postsContent), "USE *ModelUser") { postsStr := string(postsContent)
// Relationship field should be present
t.Logf("Posts content:\n%s", string(postsContent)) // Verify relationship is present with new naming convention
// Should now be RelUserID (belongs-to) instead of USE
if !strings.Contains(postsStr, "RelUserID") {
t.Errorf("Missing relationship field RelUserID (new naming convention)")
}
// Check users file contains has-many relationship
usersContent, err := os.ReadFile(filepath.Join(tmpDir, "sql_public_users.go"))
if err != nil {
t.Fatalf("Failed to read users file: %v", err)
}
usersStr := string(usersContent)
// Should have RelUserIDPublicPosts (has-many) field - includes schema prefix
if !strings.Contains(usersStr, "RelUserIDPublicPosts") {
t.Errorf("Missing has-many relationship field RelUserIDPublicPosts")
}
}
func TestWriter_MultipleReferencesToSameTable(t *testing.T) {
// Test scenario: api_event table with multiple foreign keys to filepointer table
db := models.InitDatabase("testdb")
schema := models.InitSchema("org")
// Filepointer table
filepointer := models.InitTable("filepointer", "org")
filepointer.Columns["id_filepointer"] = &models.Column{
Name: "id_filepointer",
Type: "bigserial",
NotNull: true,
IsPrimaryKey: true,
}
schema.Tables = append(schema.Tables, filepointer)
// API event table with two foreign keys to filepointer
apiEvent := models.InitTable("api_event", "org")
apiEvent.Columns["id_api_event"] = &models.Column{
Name: "id_api_event",
Type: "bigserial",
NotNull: true,
IsPrimaryKey: true,
}
apiEvent.Columns["rid_filepointer_request"] = &models.Column{
Name: "rid_filepointer_request",
Type: "bigint",
NotNull: false,
}
apiEvent.Columns["rid_filepointer_response"] = &models.Column{
Name: "rid_filepointer_response",
Type: "bigint",
NotNull: false,
}
// Add constraints
apiEvent.Constraints["fk_request"] = &models.Constraint{
Name: "fk_request",
Type: models.ForeignKeyConstraint,
Columns: []string{"rid_filepointer_request"},
ReferencedTable: "filepointer",
ReferencedSchema: "org",
ReferencedColumns: []string{"id_filepointer"},
}
apiEvent.Constraints["fk_response"] = &models.Constraint{
Name: "fk_response",
Type: models.ForeignKeyConstraint,
Columns: []string{"rid_filepointer_response"},
ReferencedTable: "filepointer",
ReferencedSchema: "org",
ReferencedColumns: []string{"id_filepointer"},
}
schema.Tables = append(schema.Tables, apiEvent)
db.Schemas = append(db.Schemas, schema)
// Create writer
tmpDir := t.TempDir()
opts := &writers.WriterOptions{
PackageName: "models",
OutputPath: tmpDir,
Metadata: map[string]interface{}{
"multi_file": true,
},
}
writer := NewWriter(opts)
err := writer.WriteDatabase(db)
if err != nil {
t.Fatalf("WriteDatabase failed: %v", err)
}
// Read the api_event file
apiEventContent, err := os.ReadFile(filepath.Join(tmpDir, "sql_org_api_event.go"))
if err != nil {
t.Fatalf("Failed to read api_event file: %v", err)
}
contentStr := string(apiEventContent)
// Verify both relationships have unique names based on column names
expectations := []struct {
fieldName string
tag string
}{
{"RelRIDFilepointerRequest", "foreignKey:RIDFilepointerRequest"},
{"RelRIDFilepointerResponse", "foreignKey:RIDFilepointerResponse"},
}
for _, exp := range expectations {
if !strings.Contains(contentStr, exp.fieldName) {
t.Errorf("Missing relationship field: %s\nGenerated:\n%s", exp.fieldName, contentStr)
}
if !strings.Contains(contentStr, exp.tag) {
t.Errorf("Missing relationship tag: %s\nGenerated:\n%s", exp.tag, contentStr)
}
}
// Verify NO duplicate field names (old behavior would create duplicate "FIL" fields)
if strings.Contains(contentStr, "FIL *ModelFilepointer") {
t.Errorf("Found old prefix-based naming (FIL), should use column-based naming")
}
// Also verify has-many relationships on filepointer table
filepointerContent, err := os.ReadFile(filepath.Join(tmpDir, "sql_org_filepointer.go"))
if err != nil {
t.Fatalf("Failed to read filepointer file: %v", err)
}
filepointerStr := string(filepointerContent)
// Should have two different has-many relationships with unique names
hasManyExpectations := []string{
"RelRIDFilepointerRequestOrgAPIEvents", // Has many via rid_filepointer_request
"RelRIDFilepointerResponseOrgAPIEvents", // Has many via rid_filepointer_response
}
for _, exp := range hasManyExpectations {
if !strings.Contains(filepointerStr, exp) {
t.Errorf("Missing has-many relationship field: %s\nGenerated:\n%s", exp, filepointerStr)
}
}
}
func TestWriter_MultipleHasManyRelationships(t *testing.T) {
// Test scenario: api_provider table referenced by multiple tables via rid_api_provider
db := models.InitDatabase("testdb")
schema := models.InitSchema("org")
// Owner table
owner := models.InitTable("owner", "org")
owner.Columns["id_owner"] = &models.Column{
Name: "id_owner",
Type: "bigserial",
NotNull: true,
IsPrimaryKey: true,
}
schema.Tables = append(schema.Tables, owner)
// API Provider table
apiProvider := models.InitTable("api_provider", "org")
apiProvider.Columns["id_api_provider"] = &models.Column{
Name: "id_api_provider",
Type: "bigserial",
NotNull: true,
IsPrimaryKey: true,
}
apiProvider.Columns["rid_owner"] = &models.Column{
Name: "rid_owner",
Type: "bigint",
NotNull: true,
}
apiProvider.Constraints["fk_owner"] = &models.Constraint{
Name: "fk_owner",
Type: models.ForeignKeyConstraint,
Columns: []string{"rid_owner"},
ReferencedTable: "owner",
ReferencedSchema: "org",
ReferencedColumns: []string{"id_owner"},
}
schema.Tables = append(schema.Tables, apiProvider)
// Login table
login := models.InitTable("login", "org")
login.Columns["id_login"] = &models.Column{
Name: "id_login",
Type: "bigserial",
NotNull: true,
IsPrimaryKey: true,
}
login.Columns["rid_api_provider"] = &models.Column{
Name: "rid_api_provider",
Type: "bigint",
NotNull: true,
}
login.Constraints["fk_api_provider"] = &models.Constraint{
Name: "fk_api_provider",
Type: models.ForeignKeyConstraint,
Columns: []string{"rid_api_provider"},
ReferencedTable: "api_provider",
ReferencedSchema: "org",
ReferencedColumns: []string{"id_api_provider"},
}
schema.Tables = append(schema.Tables, login)
// Filepointer table
filepointer := models.InitTable("filepointer", "org")
filepointer.Columns["id_filepointer"] = &models.Column{
Name: "id_filepointer",
Type: "bigserial",
NotNull: true,
IsPrimaryKey: true,
}
filepointer.Columns["rid_api_provider"] = &models.Column{
Name: "rid_api_provider",
Type: "bigint",
NotNull: true,
}
filepointer.Constraints["fk_api_provider"] = &models.Constraint{
Name: "fk_api_provider",
Type: models.ForeignKeyConstraint,
Columns: []string{"rid_api_provider"},
ReferencedTable: "api_provider",
ReferencedSchema: "org",
ReferencedColumns: []string{"id_api_provider"},
}
schema.Tables = append(schema.Tables, filepointer)
// API Event table
apiEvent := models.InitTable("api_event", "org")
apiEvent.Columns["id_api_event"] = &models.Column{
Name: "id_api_event",
Type: "bigserial",
NotNull: true,
IsPrimaryKey: true,
}
apiEvent.Columns["rid_api_provider"] = &models.Column{
Name: "rid_api_provider",
Type: "bigint",
NotNull: true,
}
apiEvent.Constraints["fk_api_provider"] = &models.Constraint{
Name: "fk_api_provider",
Type: models.ForeignKeyConstraint,
Columns: []string{"rid_api_provider"},
ReferencedTable: "api_provider",
ReferencedSchema: "org",
ReferencedColumns: []string{"id_api_provider"},
}
schema.Tables = append(schema.Tables, apiEvent)
db.Schemas = append(db.Schemas, schema)
// Create writer
tmpDir := t.TempDir()
opts := &writers.WriterOptions{
PackageName: "models",
OutputPath: tmpDir,
Metadata: map[string]interface{}{
"multi_file": true,
},
}
writer := NewWriter(opts)
err := writer.WriteDatabase(db)
if err != nil {
t.Fatalf("WriteDatabase failed: %v", err)
}
// Read the api_provider file
apiProviderContent, err := os.ReadFile(filepath.Join(tmpDir, "sql_org_api_provider.go"))
if err != nil {
t.Fatalf("Failed to read api_provider file: %v", err)
}
contentStr := string(apiProviderContent)
// Verify all has-many relationships have unique names
hasManyExpectations := []string{
"RelRIDAPIProviderOrgLogins", // Has many via Login
"RelRIDAPIProviderOrgFilepointers", // Has many via Filepointer
"RelRIDAPIProviderOrgAPIEvents", // Has many via APIEvent
"RelRIDOwner", // Belongs to via rid_owner
}
for _, exp := range hasManyExpectations {
if !strings.Contains(contentStr, exp) {
t.Errorf("Missing relationship field: %s\nGenerated:\n%s", exp, contentStr)
}
}
// Verify NO duplicate field names
// Count occurrences of "RelRIDAPIProvider" fields - should have 3 unique ones
count := strings.Count(contentStr, "RelRIDAPIProvider")
if count != 3 {
t.Errorf("Expected 3 RelRIDAPIProvider* fields, found %d\nGenerated:\n%s", count, contentStr)
}
// Verify no duplicate declarations (would cause compilation error)
duplicatePattern := "RelRIDAPIProviders []*Model"
if strings.Contains(contentStr, duplicatePattern) {
t.Errorf("Found duplicate field declaration pattern, fields should be unique")
}
}
func TestWriter_FieldNameCollision(t *testing.T) {
// Test scenario: table with columns that would conflict with generated method names
table := models.InitTable("audit_table", "audit")
table.Columns["id_audit_table"] = &models.Column{
Name: "id_audit_table",
Type: "smallint",
NotNull: true,
IsPrimaryKey: true,
Sequence: 1,
}
table.Columns["table_name"] = &models.Column{
Name: "table_name",
Type: "varchar",
Length: 100,
NotNull: true,
Sequence: 2,
}
table.Columns["table_schema"] = &models.Column{
Name: "table_schema",
Type: "varchar",
Length: 100,
NotNull: true,
Sequence: 3,
}
// Create writer
tmpDir := t.TempDir()
opts := &writers.WriterOptions{
PackageName: "models",
OutputPath: filepath.Join(tmpDir, "test.go"),
}
writer := NewWriter(opts)
err := writer.WriteTable(table)
if err != nil {
t.Fatalf("WriteTable failed: %v", err)
}
// Read the generated file
content, err := os.ReadFile(opts.OutputPath)
if err != nil {
t.Fatalf("Failed to read generated file: %v", err)
}
generated := string(content)
// Verify that TableName field was renamed to TableName_ to avoid collision
if !strings.Contains(generated, "TableName_") {
t.Errorf("Expected field 'TableName_' (with underscore) but not found\nGenerated:\n%s", generated)
}
// Verify the struct tag still references the correct database column
if !strings.Contains(generated, `gorm:"column:table_name;`) {
t.Errorf("Expected gorm tag to reference 'table_name' column\nGenerated:\n%s", generated)
}
// Verify the TableName() method still exists and doesn't conflict
if !strings.Contains(generated, "func (m ModelAuditAuditTable) TableName() string") {
t.Errorf("TableName() method should still be generated\nGenerated:\n%s", generated)
}
// Verify NO field named just "TableName" (without underscore)
if strings.Contains(generated, "TableName sql_types") || strings.Contains(generated, "TableName string") {
t.Errorf("Field 'TableName' without underscore should not exist (would conflict with method)\nGenerated:\n%s", generated)
}
}
func TestWriter_UpdateIDTypeSafety(t *testing.T) {
// Test scenario: tables with different primary key types
tests := []struct {
name string
pkType string
expectedPK string
castType string
}{
{"int32_pk", "int", "int32", "int32(newid)"},
{"int16_pk", "smallint", "int16", "int16(newid)"},
{"int64_pk", "bigint", "int64", "int64(newid)"},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
table := models.InitTable("test_table", "public")
table.Columns["id"] = &models.Column{
Name: "id",
Type: tt.pkType,
NotNull: true,
IsPrimaryKey: true,
}
tmpDir := t.TempDir()
opts := &writers.WriterOptions{
PackageName: "models",
OutputPath: filepath.Join(tmpDir, "test.go"),
}
writer := NewWriter(opts)
err := writer.WriteTable(table)
if err != nil {
t.Fatalf("WriteTable failed: %v", err)
}
content, err := os.ReadFile(opts.OutputPath)
if err != nil {
t.Fatalf("Failed to read generated file: %v", err)
}
generated := string(content)
// Verify UpdateID method has correct type cast
if !strings.Contains(generated, tt.castType) {
t.Errorf("Expected UpdateID to cast to %s\nGenerated:\n%s", tt.castType, generated)
}
// Verify no invalid int32(newid) for non-int32 types
if tt.expectedPK != "int32" && strings.Contains(generated, "int32(newid)") {
t.Errorf("UpdateID should not cast to int32 for %s type\nGenerated:\n%s", tt.pkType, generated)
}
// Verify UpdateID parameter is int64 (for consistency)
if !strings.Contains(generated, "UpdateID(newid int64)") {
t.Errorf("UpdateID should accept int64 parameter\nGenerated:\n%s", generated)
}
})
}
}

View File

@@ -0,0 +1,217 @@
# PostgreSQL Naming Conventions
Standardized naming rules for all database objects in RelSpec PostgreSQL output.
## Quick Reference
| Object Type | Prefix | Format | Example |
| ----------------- | ----------- | ---------------------------------- | ------------------------ |
| Primary Key | `pk_` | `pk_<schema>_<table>` | `pk_public_users` |
| Foreign Key | `fk_` | `fk_<table>_<referenced_table>` | `fk_posts_users` |
| Unique Constraint | `uk_` | `uk_<table>_<column>` | `uk_users_email` |
| Unique Index | `uidx_` | `uidx_<table>_<column>` | `uidx_users_email` |
| Regular Index | `idx_` | `idx_<table>_<column>` | `idx_posts_user_id` |
| Check Constraint | `chk_` | `chk_<table>_<constraint_purpose>` | `chk_users_age_positive` |
| Sequence | `identity_` | `identity_<table>_<column>` | `identity_users_id` |
| Trigger | `t_` | `t_<purpose>_<table>` | `t_audit_users` |
| Trigger Function | `tf_` | `tf_<purpose>_<table>` | `tf_audit_users` |
## Naming Rules by Object Type
### Primary Keys
**Pattern:** `pk_<schema>_<table>`
- Include schema name to avoid collisions across schemas
- Use lowercase, snake_case format
- Examples:
- `pk_public_users`
- `pk_audit_audit_log`
- `pk_staging_temp_data`
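
For illustration only (using the hypothetical `public.users` table from the examples above, not actual generated output), a named primary key following this pattern looks like:

```sql
-- Illustrative sketch: table and column names are examples
ALTER TABLE public.users
  ADD CONSTRAINT pk_public_users PRIMARY KEY (id);
```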
### Foreign Keys
**Pattern:** `fk_<table>_<referenced_table>`
- Reference the table containing the FK followed by the referenced table
- Use lowercase, snake_case format
- Do NOT include column names in standard FK constraints
- Examples:
- `fk_posts_users` (posts.user_id → users.id)
- `fk_comments_posts` (comments.post_id → posts.id)
- `fk_order_items_orders` (order_items.order_id → orders.id)
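
As a hedged sketch of the pattern (tables and columns are the hypothetical examples above):

```sql
-- Illustrative sketch: posts.user_id referencing users.id
ALTER TABLE public.posts
  ADD CONSTRAINT fk_posts_users
  FOREIGN KEY (user_id) REFERENCES public.users (id)
  ON DELETE NO ACTION ON UPDATE NO ACTION;
```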
### Unique Constraints
**Pattern:** `uk_<table>_<column>`
- Use `uk_` prefix strictly for database constraints (CONSTRAINT type)
- Include column name for clarity
- Examples:
- `uk_users_email`
- `uk_users_username`
- `uk_products_sku`
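
Illustrative DDL for this pattern (example table and column):

```sql
-- Illustrative sketch
ALTER TABLE public.users
  ADD CONSTRAINT uk_users_email UNIQUE (email);
```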
### Unique Indexes
**Pattern:** `uidx_<table>_<column>`
- Use `uidx_` prefix strictly for index type objects
- Distinguished from constraints for clarity and implementation flexibility
- Examples:
- `uidx_users_email`
- `uidx_sessions_token`
- `uidx_api_keys_key`
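
Illustrative DDL for this pattern (example table and column):

```sql
-- Illustrative sketch
CREATE UNIQUE INDEX IF NOT EXISTS uidx_users_email
  ON public.users USING btree (email);
```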
### Regular Indexes
**Pattern:** `idx_<table>_<column>`
- Standard indexes for query optimization
- Single column: `idx_<table>_<column>`
- Examples:
- `idx_posts_user_id`
- `idx_orders_created_at`
- `idx_users_status`
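
Illustrative DDL for this pattern (example table and column):

```sql
-- Illustrative sketch
CREATE INDEX IF NOT EXISTS idx_posts_user_id
  ON public.posts USING btree (user_id);
```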
### Check Constraints
**Pattern:** `chk_<table>_<constraint_purpose>`
- Describe the constraint validation purpose
- Use lowercase, snake_case for the purpose
- Examples:
- `chk_users_age_positive` (CHECK (age > 0))
- `chk_orders_quantity_positive` (CHECK (quantity > 0))
- `chk_products_price_valid` (CHECK (price >= 0))
- `chk_users_status_enum` (CHECK (status IN ('active', 'inactive')))
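
Illustrative DDL for this pattern (example table and condition):

```sql
-- Illustrative sketch
ALTER TABLE public.users
  ADD CONSTRAINT chk_users_age_positive CHECK (age > 0);
```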
### Sequences
**Pattern:** `identity_<table>_<column>`
- Used for SERIAL/IDENTITY columns
- Explicitly named for clarity and management
- Examples:
- `identity_users_id`
- `identity_posts_id`
- `identity_transactions_id`
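
A minimal sketch of an explicitly named sequence wired to its column (hypothetical table; actual generated DDL may differ):

```sql
-- Illustrative sketch: explicit sequence owned by the column it feeds
CREATE SEQUENCE IF NOT EXISTS public.identity_users_id OWNED BY public.users.id;
ALTER TABLE public.users
  ALTER COLUMN id SET DEFAULT nextval('public.identity_users_id');
```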
### Triggers
**Pattern:** `t_<purpose>_<table>`
- Include purpose before table name
- Lowercase, snake_case format
- Examples:
- `t_audit_users` (audit trigger on users table)
- `t_update_timestamp_posts` (timestamp update trigger on posts)
- `t_validate_orders` (validation trigger on orders)
### Trigger Functions
**Pattern:** `tf_<purpose>_<table>`
- Pair with trigger naming convention
- Use `tf_` prefix to distinguish from triggers themselves
- Examples:
- `tf_audit_users` (function for t_audit_users)
- `tf_update_timestamp_posts` (function for t_update_timestamp_posts)
- `tf_validate_orders` (function for t_validate_orders)
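
A minimal sketch of a trigger/function pair following the `t_`/`tf_` conventions (PostgreSQL 11+ syntax; the body is a placeholder, not the generated audit logic):

```sql
-- Illustrative sketch: audit trigger pair for a hypothetical public.users table
CREATE OR REPLACE FUNCTION public.tf_audit_users() RETURNS trigger AS $$
BEGIN
  -- audit logic goes here
  RETURN COALESCE(NEW, OLD);
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER t_audit_users
  AFTER INSERT OR UPDATE OR DELETE ON public.users
  FOR EACH ROW EXECUTE FUNCTION public.tf_audit_users();
```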
## Multi-Column Objects
### Composite Primary Keys
**Pattern:** `pk_<schema>_<table>`
- Same as single-column PKs
- Example: `pk_public_order_items` (composite key on order_id + item_id)
### Composite Unique Constraints
**Pattern:** `uk_<table>_<column1>_<column2>_[...]`
- Append all column names in order
- Examples:
- `uk_users_email_domain` (UNIQUE(email, domain))
- `uk_inventory_warehouse_sku` (UNIQUE(warehouse_id, sku))
### Composite Unique Indexes
**Pattern:** `uidx_<table>_<column1>_<column2>_[...]`
- Append all column names in order
- Examples:
- `uidx_users_first_name_last_name` (UNIQUE INDEX on first_name, last_name)
- `uidx_sessions_user_id_device_id` (UNIQUE INDEX on user_id, device_id)
### Composite Regular Indexes
**Pattern:** `idx_<table>_<column1>_<column2>_[...]`
- Append all column names in order
- List columns in typical query filter order
- Examples:
- `idx_orders_user_id_created_at` (filter by user, then sort by created_at)
- `idx_logs_level_timestamp` (filter by level, then by timestamp)
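
Illustrative DDL for composite objects, using the hypothetical tables named above; the prefixes stay the same and the column names are appended in order:

```sql
-- Illustrative sketch
ALTER TABLE public.order_items
  ADD CONSTRAINT pk_public_order_items PRIMARY KEY (order_id, item_id);
CREATE UNIQUE INDEX IF NOT EXISTS uidx_sessions_user_id_device_id
  ON public.sessions USING btree (user_id, device_id);
CREATE INDEX IF NOT EXISTS idx_orders_user_id_created_at
  ON public.orders USING btree (user_id, created_at);
```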
## Special Cases & Conventions
### Audit Trail Tables
- Audit table naming: `<original_table>_audit` or `audit_<original_table>`
- Audit indexes follow standard pattern: `idx_<audit_table>_<column>`
- Examples:
- Users table audit: `users_audit` with `idx_users_audit_tablename`, `idx_users_audit_changedate`
- Posts table audit: `posts_audit` with `idx_posts_audit_tablename`, `idx_posts_audit_changedate`
### Temporal/Versioning Tables
- Use suffix `_history` or `_versions` if needed
- Apply standard naming rules with the full table name
- Examples:
- `idx_users_history_user_id`
- `uk_posts_versions_version_number`
### Schema-Specific Objects
- Always qualify with schema when needed: `pk_<schema>_<table>`
- Multiple schemas allowed: `pk_public_users`, `pk_staging_users`
### Reserved Words & Special Names
- Avoid PostgreSQL reserved keywords in object names
- If column/table names conflict, use quoted identifiers in DDL
- Naming convention rules still apply to the logical name
### Generated/Anonymous Indexes
- If an index lacks explicit naming, default to: `idx_<schema>_<table>`
- Should be replaced with explicit names following standards
- Examples (to be renamed):
- `idx_public_users` → should be `idx_users_<column>`
## Implementation Notes
### Code Generation
- Names are always lowercase in generated SQL
- Underscore separators are required
### Migration Safety
- Do NOT rename objects after creation without explicit migration
- Names should be consistent across all schema versions
- Test generated DDL against PostgreSQL before deployment
### Testing
- Ensure consistency across all table and constraint generation
- Test with reserved words to verify escaping
## Related Documentation
- PostgreSQL Identifier Rules: https://www.postgresql.org/docs/current/sql-syntax-lexical.html#SQL-IDENTIFIERS
- Constraint Documentation: https://www.postgresql.org/docs/current/ddl-constraints.html
- Index Documentation: https://www.postgresql.org/docs/current/indexes.html

View File

@@ -8,6 +8,7 @@ import (
"strings" "strings"
"git.warky.dev/wdevs/relspecgo/pkg/models" "git.warky.dev/wdevs/relspecgo/pkg/models"
"git.warky.dev/wdevs/relspecgo/pkg/pgsql"
"git.warky.dev/wdevs/relspecgo/pkg/writers" "git.warky.dev/wdevs/relspecgo/pkg/writers"
) )
@@ -335,7 +336,7 @@ func (w *MigrationWriter) generateAlterTableScripts(schema *models.Schema, model
SchemaName: schema.Name,
TableName: modelTable.Name,
ColumnName: modelCol.Name,
-ColumnType: modelCol.Type,
+ColumnType: pgsql.ConvertSQLType(modelCol.Type),
Default: defaultVal,
NotNull: modelCol.NotNull,
})
@@ -359,7 +360,7 @@ func (w *MigrationWriter) generateAlterTableScripts(schema *models.Schema, model
SchemaName: schema.Name,
TableName: modelTable.Name,
ColumnName: modelCol.Name,
-NewType: modelCol.Type,
+NewType: pgsql.ConvertSQLType(modelCol.Type),
})
if err != nil {
return nil, err
@@ -427,9 +428,11 @@ func (w *MigrationWriter) generateIndexScripts(model *models.Schema, current *mo
for _, modelTable := range model.Tables {
currentTable := currentTables[strings.ToLower(modelTable.Name)]
-// Process primary keys first
+// Process primary keys first - check explicit constraints
+foundExplicitPK := false
for constraintName, constraint := range modelTable.Constraints {
if constraint.Type == models.PrimaryKeyConstraint {
+foundExplicitPK = true
shouldCreate := true
if currentTable != nil {
@@ -464,6 +467,53 @@ func (w *MigrationWriter) generateIndexScripts(model *models.Schema, current *mo
} }
} }
// If no explicit PK constraint, check for columns with IsPrimaryKey = true
if !foundExplicitPK {
pkColumns := []string{}
for _, col := range modelTable.Columns {
if col.IsPrimaryKey {
pkColumns = append(pkColumns, col.SQLName())
}
}
if len(pkColumns) > 0 {
sort.Strings(pkColumns)
constraintName := fmt.Sprintf("pk_%s_%s", model.SQLName(), modelTable.SQLName())
shouldCreate := true
if currentTable != nil {
// Check if a PK constraint already exists (by any name)
for _, constraint := range currentTable.Constraints {
if constraint.Type == models.PrimaryKeyConstraint {
shouldCreate = false
break
}
}
}
if shouldCreate {
sql, err := w.executor.ExecuteCreatePrimaryKey(CreatePrimaryKeyData{
SchemaName: model.Name,
TableName: modelTable.Name,
ConstraintName: constraintName,
Columns: strings.Join(pkColumns, ", "),
})
if err != nil {
return nil, err
}
script := MigrationScript{
ObjectName: fmt.Sprintf("%s.%s.%s", model.Name, modelTable.Name, constraintName),
ObjectType: "create primary key",
Schema: model.Name,
Priority: 160,
Sequence: len(scripts),
Body: sql,
}
scripts = append(scripts, script)
}
}
}
// Process indexes // Process indexes
for indexName, modelIndex := range modelTable.Indexes { for indexName, modelIndex := range modelTable.Indexes {
// Skip primary key indexes // Skip primary key indexes
@@ -703,7 +753,7 @@ func (w *MigrationWriter) generateAuditScripts(schema *models.Schema, auditConfi
} }
// Generate audit function
-funcName := fmt.Sprintf("ft_audit_%s", table.Name)
+funcName := fmt.Sprintf("tf_audit_%s", table.Name)
funcData := BuildAuditFunctionData(schema.Name, table, pk, config, auditSchema, auditConfig.UserFunction) funcData := BuildAuditFunctionData(schema.Name, table, pk, config, auditSchema, auditConfig.UserFunction)
funcSQL, err := w.executor.ExecuteAuditFunction(funcData) funcSQL, err := w.executor.ExecuteAuditFunction(funcData)

View File

@@ -121,7 +121,7 @@ func TestWriteMigration_WithAudit(t *testing.T) {
} }
// Verify audit function // Verify audit function
if !strings.Contains(output, "CREATE OR REPLACE FUNCTION public.ft_audit_users()") { if !strings.Contains(output, "CREATE OR REPLACE FUNCTION public.tf_audit_users()") {
t.Error("Migration missing audit function") t.Error("Migration missing audit function")
} }
@@ -177,7 +177,7 @@ func TestTemplateExecutor_AuditFunction(t *testing.T) {
data := AuditFunctionData{ data := AuditFunctionData{
SchemaName: "public", SchemaName: "public",
FunctionName: "ft_audit_users", FunctionName: "tf_audit_users",
TableName: "users", TableName: "users",
TablePrefix: "NULL", TablePrefix: "NULL",
PrimaryKey: "id", PrimaryKey: "id",
@@ -202,7 +202,7 @@ func TestTemplateExecutor_AuditFunction(t *testing.T) {
t.Logf("Generated SQL:\n%s", sql) t.Logf("Generated SQL:\n%s", sql)
if !strings.Contains(sql, "CREATE OR REPLACE FUNCTION public.ft_audit_users()") { if !strings.Contains(sql, "CREATE OR REPLACE FUNCTION public.tf_audit_users()") {
t.Error("SQL missing function definition") t.Error("SQL missing function definition")
} }
if !strings.Contains(sql, "IF TG_OP = 'INSERT'") { if !strings.Contains(sql, "IF TG_OP = 'INSERT'") {

View File

@@ -355,7 +355,7 @@ func BuildAuditFunctionData(
auditSchema string, auditSchema string,
userFunction string, userFunction string,
) AuditFunctionData { ) AuditFunctionData {
funcName := fmt.Sprintf("ft_audit_%s", table.Name) funcName := fmt.Sprintf("tf_audit_%s", table.Name)
// Build list of audited columns // Build list of audited columns
auditedColumns := make([]*models.Column, 0) auditedColumns := make([]*models.Column, 0)

View File

@@ -1,13 +1,19 @@
package pgsql
import (
+"context"
+"encoding/json"
"fmt"
"io"
"os"
"sort"
"strings"
+"time"
+"github.com/jackc/pgx/v5"
"git.warky.dev/wdevs/relspecgo/pkg/models"
+"git.warky.dev/wdevs/relspecgo/pkg/pgsql"
"git.warky.dev/wdevs/relspecgo/pkg/writers"
)
@@ -15,6 +21,38 @@ import (
type Writer struct {
options *writers.WriterOptions
writer io.Writer
+executionReport *ExecutionReport
}
// ExecutionReport tracks the execution status of SQL statements
type ExecutionReport struct {
TotalStatements int `json:"total_statements"`
ExecutedStatements int `json:"executed_statements"`
FailedStatements int `json:"failed_statements"`
Schemas []SchemaReport `json:"schemas"`
Errors []ExecutionError `json:"errors,omitempty"`
StartTime string `json:"start_time"`
EndTime string `json:"end_time"`
}
// SchemaReport tracks execution per schema
type SchemaReport struct {
Name string `json:"name"`
Tables []TableReport `json:"tables"`
}
// TableReport tracks execution per table
type TableReport struct {
Name string `json:"name"`
Created bool `json:"created"`
Error string `json:"error,omitempty"`
}
// ExecutionError represents a failed statement
type ExecutionError struct {
StatementNumber int `json:"statement_number"`
Statement string `json:"statement"`
Error string `json:"error"`
}
// NewWriter creates a new PostgreSQL SQL writer // NewWriter creates a new PostgreSQL SQL writer
@@ -26,6 +64,11 @@ func NewWriter(options *writers.WriterOptions) *Writer {
// WriteDatabase writes the entire database schema as SQL
func (w *Writer) WriteDatabase(db *models.Database) error {
+// Check if we should execute SQL directly on a database
+if connString, ok := w.options.Metadata["connection_string"].(string); ok && connString != "" {
+return w.executeDatabaseSQL(db, connString)
+}
var writer io.Writer
var file *os.File
var err error
@@ -127,12 +170,74 @@ func (w *Writer) GenerateSchemaStatements(schema *models.Schema) ([]string, erro
// Phase 4: Primary keys
for _, table := range schema.Tables {
-for _, constraint := range table.Constraints {
-if constraint.Type != models.PrimaryKeyConstraint {
-continue
-}
-stmt := fmt.Sprintf("ALTER TABLE %s.%s ADD CONSTRAINT %s PRIMARY KEY (%s)",
-schema.SQLName(), table.SQLName(), constraint.Name, strings.Join(constraint.Columns, ", "))
+// First check for explicit PrimaryKeyConstraint
+var pkConstraint *models.Constraint
+for _, constraint := range table.Constraints {
+if constraint.Type == models.PrimaryKeyConstraint {
+pkConstraint = constraint
+break
+}
+}
+
+var pkColumns []string
var pkColumns []string
var pkName string
if pkConstraint != nil {
pkColumns = pkConstraint.Columns
pkName = pkConstraint.Name
} else {
// No explicit constraint, check for columns with IsPrimaryKey = true
pkCols := []string{}
for _, col := range table.Columns {
if col.IsPrimaryKey {
pkCols = append(pkCols, col.SQLName())
}
}
if len(pkCols) > 0 {
// Sort for consistent output
sort.Strings(pkCols)
pkColumns = pkCols
pkName = fmt.Sprintf("pk_%s_%s", schema.SQLName(), table.SQLName())
}
}
if len(pkColumns) > 0 {
// Auto-generated primary key names to check for and drop
autoGenPKNames := []string{
fmt.Sprintf("%s_pkey", table.Name),
fmt.Sprintf("%s_%s_pkey", schema.Name, table.Name),
}
// Wrap in DO block to drop auto-generated PK and add our named PK
stmt := fmt.Sprintf("DO $$\nDECLARE\n"+
" auto_pk_name text;\n"+
"BEGIN\n"+
" -- Drop auto-generated primary key if it exists\n"+
" SELECT constraint_name INTO auto_pk_name\n"+
" FROM information_schema.table_constraints\n"+
" WHERE table_schema = '%s'\n"+
" AND table_name = '%s'\n"+
" AND constraint_type = 'PRIMARY KEY'\n"+
" AND constraint_name IN (%s);\n"+
"\n"+
" IF auto_pk_name IS NOT NULL THEN\n"+
" EXECUTE 'ALTER TABLE %s.%s DROP CONSTRAINT ' || quote_ident(auto_pk_name);\n"+
" END IF;\n"+
"\n"+
" -- Add named primary key if it doesn't exist\n"+
" IF NOT EXISTS (\n"+
" SELECT 1 FROM information_schema.table_constraints\n"+
" WHERE table_schema = '%s'\n"+
" AND table_name = '%s'\n"+
" AND constraint_name = '%s'\n"+
" ) THEN\n"+
" ALTER TABLE %s.%s ADD CONSTRAINT %s PRIMARY KEY (%s);\n"+
" END IF;\n"+
"END;\n$$",
schema.Name, table.Name, formatStringList(autoGenPKNames),
schema.SQLName(), table.SQLName(),
schema.Name, table.Name, pkName,
schema.SQLName(), table.SQLName(), pkName, strings.Join(pkColumns, ", "))
statements = append(statements, stmt)
}
}
@@ -155,13 +260,30 @@ func (w *Writer) GenerateSchemaStatements(schema *models.Schema) ([]string, erro
indexType = "btree" indexType = "btree"
} }
// Build column expressions with operator class support for GIN indexes
columnExprs := make([]string, 0, len(index.Columns))
for _, colName := range index.Columns {
colExpr := colName
if col, ok := table.Columns[colName]; ok {
// For GIN indexes on text columns, add operator class
if strings.EqualFold(indexType, "gin") && isTextType(col.Type) {
opClass := extractOperatorClass(index.Comment)
if opClass == "" {
opClass = "gin_trgm_ops"
}
colExpr = fmt.Sprintf("%s %s", colName, opClass)
}
}
columnExprs = append(columnExprs, colExpr)
}
whereClause := "" whereClause := ""
if index.Where != "" { if index.Where != "" {
whereClause = fmt.Sprintf(" WHERE %s", index.Where) whereClause = fmt.Sprintf(" WHERE %s", index.Where)
} }
stmt := fmt.Sprintf("CREATE %sINDEX IF NOT EXISTS %s ON %s.%s USING %s (%s)%s", stmt := fmt.Sprintf("CREATE %sINDEX IF NOT EXISTS %s ON %s.%s USING %s (%s)%s",
uniqueStr, index.Name, schema.SQLName(), table.SQLName(), indexType, strings.Join(index.Columns, ", "), whereClause) uniqueStr, index.Name, schema.SQLName(), table.SQLName(), indexType, strings.Join(columnExprs, ", "), whereClause)
statements = append(statements, stmt) statements = append(statements, stmt)
} }
} }
@@ -188,7 +310,18 @@ func (w *Writer) GenerateSchemaStatements(schema *models.Schema) ([]string, erro
onUpdate = "NO ACTION" onUpdate = "NO ACTION"
} }
stmt := fmt.Sprintf("ALTER TABLE %s.%s ADD CONSTRAINT %s FOREIGN KEY (%s) REFERENCES %s.%s(%s) ON DELETE %s ON UPDATE %s", // Wrap in DO block to check for existing constraint
stmt := fmt.Sprintf("DO $$\nBEGIN\n"+
" IF NOT EXISTS (\n"+
" SELECT 1 FROM information_schema.table_constraints\n"+
" WHERE table_schema = '%s'\n"+
" AND table_name = '%s'\n"+
" AND constraint_name = '%s'\n"+
" ) THEN\n"+
" ALTER TABLE %s.%s ADD CONSTRAINT %s FOREIGN KEY (%s) REFERENCES %s.%s(%s) ON DELETE %s ON UPDATE %s;\n"+
" END IF;\n"+
"END;\n$$",
schema.Name, table.Name, constraint.Name,
schema.SQLName(), table.SQLName(), constraint.Name,
strings.Join(constraint.Columns, ", "),
strings.ToLower(refSchema), strings.ToLower(constraint.ReferencedTable),
@@ -251,16 +384,28 @@ func (w *Writer) generateCreateTableStatement(schema *models.Schema, table *mode
func (w *Writer) generateColumnDefinition(col *models.Column) string {
parts := []string{col.SQLName()}
-// Type with length/precision
-typeStr := col.Type
+// Type with length/precision - convert to valid PostgreSQL type
+baseType := pgsql.ConvertSQLType(col.Type)
+typeStr := baseType
+
+// Only add size specifiers for types that support them
if col.Length > 0 && col.Precision == 0 {
-typeStr = fmt.Sprintf("%s(%d)", col.Type, col.Length)
-} else if col.Precision > 0 {
-if col.Scale > 0 {
-typeStr = fmt.Sprintf("%s(%d,%d)", col.Type, col.Precision, col.Scale)
-} else {
-typeStr = fmt.Sprintf("%s(%d)", col.Type, col.Precision)
-}
-}
+if supportsLength(baseType) {
+typeStr = fmt.Sprintf("%s(%d)", baseType, col.Length)
+} else if isTextTypeWithoutLength(baseType) {
+// Convert text with length to varchar
+typeStr = fmt.Sprintf("varchar(%d)", col.Length)
+}
+// For types that don't support length (integer, bigint, etc.), ignore the length
+} else if col.Precision > 0 {
+if supportsPrecision(baseType) {
+if col.Scale > 0 {
+typeStr = fmt.Sprintf("%s(%d,%d)", baseType, col.Precision, col.Scale)
+} else {
+typeStr = fmt.Sprintf("%s(%d)", baseType, col.Precision)
+}
+}
+// For types that don't support precision, ignore it
+}
parts = append(parts, typeStr)
@@ -273,12 +418,14 @@ func (w *Writer) generateColumnDefinition(col *models.Column) string {
if col.Default != nil {
switch v := col.Default.(type) {
case string:
-if strings.HasPrefix(v, "nextval") || strings.HasPrefix(v, "CURRENT_") || strings.Contains(v, "()") {
-parts = append(parts, fmt.Sprintf("DEFAULT %s", v))
-} else if v == "true" || v == "false" {
-parts = append(parts, fmt.Sprintf("DEFAULT %s", v))
+// Strip backticks - DBML uses them for SQL expressions but PostgreSQL doesn't
+cleanDefault := stripBackticks(v)
+if strings.HasPrefix(cleanDefault, "nextval") || strings.HasPrefix(cleanDefault, "CURRENT_") || strings.Contains(cleanDefault, "()") {
+parts = append(parts, fmt.Sprintf("DEFAULT %s", cleanDefault))
+} else if cleanDefault == "true" || cleanDefault == "false" {
+parts = append(parts, fmt.Sprintf("DEFAULT %s", cleanDefault))
} else {
-parts = append(parts, fmt.Sprintf("DEFAULT '%s'", escapeQuote(v)))
+parts = append(parts, fmt.Sprintf("DEFAULT '%s'", escapeQuote(cleanDefault)))
}
case bool:
parts = append(parts, fmt.Sprintf("DEFAULT %v", v))
@@ -405,13 +552,8 @@ func (w *Writer) writeCreateTables(schema *models.Schema) error {
columnDefs := make([]string, 0, len(columns))
for _, col := range columns {
-colDef := fmt.Sprintf(" %s %s", col.SQLName(), col.Type)
-// Add default value if present
-if col.Default != "" {
-colDef += fmt.Sprintf(" DEFAULT %s", col.Default)
-}
+// Use generateColumnDefinition to properly handle type, length, precision, and defaults
+colDef := " " + w.generateColumnDefinition(col)
columnDefs = append(columnDefs, colDef)
}
@@ -437,26 +579,58 @@ func (w *Writer) writePrimaryKeys(schema *models.Schema) error {
} }
} }
if pkConstraint == nil { var columnNames []string
// No explicit PK constraint, skip
continue
}
pkName := fmt.Sprintf("pk_%s_%s", schema.SQLName(), table.SQLName()) pkName := fmt.Sprintf("pk_%s_%s", schema.SQLName(), table.SQLName())
// Build column list if pkConstraint != nil {
columnNames := make([]string, 0, len(pkConstraint.Columns)) // Build column list from explicit constraint
columnNames = make([]string, 0, len(pkConstraint.Columns))
for _, colName := range pkConstraint.Columns { for _, colName := range pkConstraint.Columns {
if col, ok := table.Columns[colName]; ok { if col, ok := table.Columns[colName]; ok {
columnNames = append(columnNames, col.SQLName()) columnNames = append(columnNames, col.SQLName())
} }
} }
} else {
// No explicit PK constraint, check for columns with IsPrimaryKey = true
for _, col := range table.Columns {
if col.IsPrimaryKey {
columnNames = append(columnNames, col.SQLName())
}
}
// Sort for consistent output
sort.Strings(columnNames)
}
if len(columnNames) == 0 { if len(columnNames) == 0 {
continue continue
} }
fmt.Fprintf(w.writer, "DO $$\nBEGIN\n") // Auto-generated primary key names to check for and drop
autoGenPKNames := []string{
fmt.Sprintf("%s_pkey", table.Name),
fmt.Sprintf("%s_%s_pkey", schema.Name, table.Name),
}
fmt.Fprintf(w.writer, "DO $$\nDECLARE\n")
fmt.Fprintf(w.writer, " auto_pk_name text;\nBEGIN\n")
// Check for and drop auto-generated primary keys
fmt.Fprintf(w.writer, " -- Drop auto-generated primary key if it exists\n")
fmt.Fprintf(w.writer, " SELECT constraint_name INTO auto_pk_name\n")
fmt.Fprintf(w.writer, " FROM information_schema.table_constraints\n")
fmt.Fprintf(w.writer, " WHERE table_schema = '%s'\n", schema.Name)
fmt.Fprintf(w.writer, " AND table_name = '%s'\n", table.Name)
fmt.Fprintf(w.writer, " AND constraint_type = 'PRIMARY KEY'\n")
fmt.Fprintf(w.writer, " AND constraint_name IN (%s);\n", formatStringList(autoGenPKNames))
fmt.Fprintf(w.writer, "\n")
fmt.Fprintf(w.writer, " IF auto_pk_name IS NOT NULL THEN\n")
fmt.Fprintf(w.writer, " EXECUTE 'ALTER TABLE %s.%s DROP CONSTRAINT ' || quote_ident(auto_pk_name);\n",
schema.SQLName(), table.SQLName())
fmt.Fprintf(w.writer, " END IF;\n")
fmt.Fprintf(w.writer, "\n")
// Add our named primary key if it doesn't exist
fmt.Fprintf(w.writer, " -- Add named primary key if it doesn't exist\n")
fmt.Fprintf(w.writer, " IF NOT EXISTS (\n") fmt.Fprintf(w.writer, " IF NOT EXISTS (\n")
fmt.Fprintf(w.writer, " SELECT 1 FROM information_schema.table_constraints\n") fmt.Fprintf(w.writer, " SELECT 1 FROM information_schema.table_constraints\n")
fmt.Fprintf(w.writer, " WHERE table_schema = '%s'\n", schema.Name) fmt.Fprintf(w.writer, " WHERE table_schema = '%s'\n", schema.Name)
@@ -498,20 +672,30 @@ func (w *Writer) writeIndexes(schema *models.Schema) error {
if indexName == "" { if indexName == "" {
indexType := "idx" indexType := "idx"
if index.Unique { if index.Unique {
indexType = "uk" indexType = "uidx"
} }
indexName = fmt.Sprintf("%s_%s_%s", indexType, schema.SQLName(), table.SQLName()) columnSuffix := strings.Join(index.Columns, "_")
indexName = fmt.Sprintf("%s_%s_%s", indexType, table.SQLName(), strings.ToLower(columnSuffix))
} }
// Build column list // Build column list with operator class support for GIN indexes
columnNames := make([]string, 0, len(index.Columns)) columnExprs := make([]string, 0, len(index.Columns))
for _, colName := range index.Columns { for _, colName := range index.Columns {
if col, ok := table.Columns[colName]; ok { if col, ok := table.Columns[colName]; ok {
columnNames = append(columnNames, col.SQLName()) colExpr := col.SQLName()
// For GIN indexes on text columns, add operator class
if strings.EqualFold(index.Type, "gin") && isTextType(col.Type) {
opClass := extractOperatorClass(index.Comment)
if opClass == "" {
opClass = "gin_trgm_ops"
}
colExpr = fmt.Sprintf("%s %s", col.SQLName(), opClass)
}
columnExprs = append(columnExprs, colExpr)
} }
} }
if len(columnNames) == 0 { if len(columnExprs) == 0 {
continue continue
} }
@@ -520,10 +704,20 @@ func (w *Writer) writeIndexes(schema *models.Schema) error {
unique = "UNIQUE " unique = "UNIQUE "
} }
indexType := index.Type
if indexType == "" {
indexType = "btree"
}
whereClause := ""
if index.Where != "" {
whereClause = fmt.Sprintf(" WHERE %s", index.Where)
}
fmt.Fprintf(w.writer, "CREATE %sINDEX IF NOT EXISTS %s\n", fmt.Fprintf(w.writer, "CREATE %sINDEX IF NOT EXISTS %s\n",
unique, indexName) unique, indexName)
fmt.Fprintf(w.writer, " ON %s.%s USING btree (%s);\n\n", fmt.Fprintf(w.writer, " ON %s.%s USING %s (%s)%s;\n\n",
schema.SQLName(), table.SQLName(), strings.Join(columnNames, ", ")) schema.SQLName(), table.SQLName(), indexType, strings.Join(columnExprs, ", "), whereClause)
} }
} }
@@ -597,13 +791,6 @@ func (w *Writer) writeForeignKeys(schema *models.Schema) error {
onUpdate = strings.ToUpper(fkConstraint.OnUpdate) onUpdate = strings.ToUpper(fkConstraint.OnUpdate)
} }
fmt.Fprintf(w.writer, "ALTER TABLE %s.%s\n", schema.SQLName(), table.SQLName())
fmt.Fprintf(w.writer, " DROP CONSTRAINT IF EXISTS %s;\n", fkName)
fmt.Fprintf(w.writer, "\n")
fmt.Fprintf(w.writer, "ALTER TABLE %s.%s\n", schema.SQLName(), table.SQLName())
fmt.Fprintf(w.writer, " ADD CONSTRAINT %s\n", fkName)
fmt.Fprintf(w.writer, " FOREIGN KEY (%s)\n", strings.Join(sourceColumns, ", "))
// Use constraint's referenced schema/table or relationship's ToSchema/ToTable // Use constraint's referenced schema/table or relationship's ToSchema/ToTable
refSchema := fkConstraint.ReferencedSchema refSchema := fkConstraint.ReferencedSchema
if refSchema == "" { if refSchema == "" {
@@ -614,11 +801,24 @@ func (w *Writer) writeForeignKeys(schema *models.Schema) error {
refTable = rel.ToTable refTable = rel.ToTable
} }
// Use DO block to check if constraint exists before adding
fmt.Fprintf(w.writer, "DO $$\nBEGIN\n")
fmt.Fprintf(w.writer, " IF NOT EXISTS (\n")
fmt.Fprintf(w.writer, " SELECT 1 FROM information_schema.table_constraints\n")
fmt.Fprintf(w.writer, " WHERE table_schema = '%s'\n", schema.Name)
fmt.Fprintf(w.writer, " AND table_name = '%s'\n", table.Name)
fmt.Fprintf(w.writer, " AND constraint_name = '%s'\n", fkName)
fmt.Fprintf(w.writer, " ) THEN\n")
fmt.Fprintf(w.writer, " ALTER TABLE %s.%s\n", schema.SQLName(), table.SQLName())
fmt.Fprintf(w.writer, " ADD CONSTRAINT %s\n", fkName)
fmt.Fprintf(w.writer, " FOREIGN KEY (%s)\n", strings.Join(sourceColumns, ", "))
fmt.Fprintf(w.writer, " REFERENCES %s.%s (%s)\n", fmt.Fprintf(w.writer, " REFERENCES %s.%s (%s)\n",
refSchema, refTable, strings.Join(targetColumns, ", ")) refSchema, refTable, strings.Join(targetColumns, ", "))
fmt.Fprintf(w.writer, " ON DELETE %s\n", onDelete) fmt.Fprintf(w.writer, " ON DELETE %s\n", onDelete)
fmt.Fprintf(w.writer, " ON UPDATE %s\n", onUpdate) fmt.Fprintf(w.writer, " ON UPDATE %s\n", onUpdate)
fmt.Fprintf(w.writer, " DEFERRABLE;\n\n") fmt.Fprintf(w.writer, " DEFERRABLE;\n")
fmt.Fprintf(w.writer, " END IF;\n")
fmt.Fprintf(w.writer, "END;\n$$;\n\n")
}
}
@@ -718,11 +918,84 @@ func isIntegerType(colType string) bool {
return false
}
// isTextType checks if a column type is a text type (for GIN index operator class)
func isTextType(colType string) bool {
textTypes := []string{"text", "varchar", "character varying", "char", "character", "string"}
lowerType := strings.ToLower(colType)
for _, t := range textTypes {
if strings.HasPrefix(lowerType, t) {
return true
}
}
return false
}
// supportsLength checks if a PostgreSQL type supports length specification
func supportsLength(colType string) bool {
lengthTypes := []string{"varchar", "character varying", "char", "character", "bit", "bit varying", "varbit"}
lowerType := strings.ToLower(colType)
for _, t := range lengthTypes {
if lowerType == t || strings.HasPrefix(lowerType, t+"(") {
return true
}
}
return false
}
// supportsPrecision checks if a PostgreSQL type supports precision/scale specification
func supportsPrecision(colType string) bool {
precisionTypes := []string{"numeric", "decimal", "time", "timestamp", "timestamptz", "timestamp with time zone", "timestamp without time zone", "time with time zone", "time without time zone", "interval"}
lowerType := strings.ToLower(colType)
for _, t := range precisionTypes {
if lowerType == t || strings.HasPrefix(lowerType, t+"(") {
return true
}
}
return false
}
// isTextTypeWithoutLength checks if type is text (which should convert to varchar when length is specified)
func isTextTypeWithoutLength(colType string) bool {
return strings.EqualFold(colType, "text")
}
// formatStringList formats a list of strings as a SQL-safe comma-separated quoted list
func formatStringList(items []string) string {
quoted := make([]string, len(items))
for i, item := range items {
quoted[i] = fmt.Sprintf("'%s'", escapeQuote(item))
}
return strings.Join(quoted, ", ")
}
// extractOperatorClass extracts operator class from index comment/note
// Looks for common operator classes like gin_trgm_ops, gist_trgm_ops, etc.
func extractOperatorClass(comment string) string {
if comment == "" {
return ""
}
lowerComment := strings.ToLower(comment)
// Common GIN/GiST operator classes
opClasses := []string{"gin_trgm_ops", "gist_trgm_ops", "gin_bigm_ops", "jsonb_ops", "jsonb_path_ops", "array_ops"}
for _, op := range opClasses {
if strings.Contains(lowerComment, op) {
return op
}
}
return ""
}
// escapeQuote escapes single quotes in strings for SQL
func escapeQuote(s string) string {
return strings.ReplaceAll(s, "'", "''")
}
// stripBackticks removes backticks from SQL expressions
// DBML uses backticks for SQL expressions like `now()`, but PostgreSQL doesn't use backticks
func stripBackticks(s string) string {
return strings.ReplaceAll(s, "`", "")
}
// extractSequenceName extracts sequence name from nextval() expression
// Example: "nextval('public.users_id_seq'::regclass)" returns "users_id_seq"
func extractSequenceName(defaultExpr string) string {
@@ -745,3 +1018,195 @@ func extractSequenceName(defaultExpr string) string {
}
return fullName
}
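A small in-package test sketch for the helpers above (illustrative only; it assumes it lives in the package's _test.go file, and the expected values follow the behaviour documented in the comments):

```go
func TestHelperBehaviourSketch(t *testing.T) {
	// isTextType also accepts Go-style "string", so converted columns still get an operator class.
	if !isTextType("character varying(50)") || isTextType("integer") {
		t.Error("unexpected isTextType result")
	}
	// Length specifiers are only kept for types that actually support them.
	if !supportsLength("varchar") || supportsLength("bigint") {
		t.Error("unexpected supportsLength result")
	}
	// The operator class is lifted out of the index comment when one is mentioned.
	if got := extractOperatorClass("use gin_trgm_ops for trigram search"); got != "gin_trgm_ops" {
		t.Errorf("extractOperatorClass = %q, want gin_trgm_ops", got)
	}
	// Per the comment above, the schema prefix is stripped from nextval() expressions.
	if got := extractSequenceName("nextval('public.users_id_seq'::regclass)"); got != "users_id_seq" {
		t.Errorf("extractSequenceName = %q, want users_id_seq", got)
	}
}
```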
// executeDatabaseSQL executes SQL statements directly on a PostgreSQL database
func (w *Writer) executeDatabaseSQL(db *models.Database, connString string) error {
// Initialize execution report
w.executionReport = &ExecutionReport{
StartTime: getCurrentTimestamp(),
Schemas: make([]SchemaReport, 0),
Errors: make([]ExecutionError, 0),
}
// Generate SQL statements
statements, err := w.GenerateDatabaseStatements(db)
if err != nil {
return fmt.Errorf("failed to generate SQL statements: %w", err)
}
w.executionReport.TotalStatements = len(statements)
// Connect to database
ctx := context.Background()
conn, err := pgx.Connect(ctx, connString)
if err != nil {
return fmt.Errorf("failed to connect to database: %w", err)
}
defer conn.Close(ctx)
// Track schemas and tables
schemaMap := make(map[string]*SchemaReport)
currentSchema := ""
// Execute each statement
for i, stmt := range statements {
stmtTrimmed := strings.TrimSpace(stmt)
// Skip comments
if strings.HasPrefix(stmtTrimmed, "--") {
// Check if this is a schema comment to track schema changes
if strings.Contains(stmtTrimmed, "Schema:") {
parts := strings.Split(stmtTrimmed, "Schema:")
if len(parts) > 1 {
currentSchema = strings.TrimSpace(parts[1])
if _, exists := schemaMap[currentSchema]; !exists {
schemaReport := SchemaReport{
Name: currentSchema,
Tables: make([]TableReport, 0),
}
schemaMap[currentSchema] = &schemaReport
}
}
}
continue
}
// Skip empty statements
if stmtTrimmed == "" {
continue
}
fmt.Fprintf(os.Stderr, "Executing statement %d/%d...\n", i+1, len(statements))
_, execErr := conn.Exec(ctx, stmt)
if execErr != nil {
w.executionReport.FailedStatements++
execError := ExecutionError{
StatementNumber: i + 1,
Statement: truncateStatement(stmt),
Error: execErr.Error(),
}
w.executionReport.Errors = append(w.executionReport.Errors, execError)
// Track table creation failure
if strings.Contains(strings.ToUpper(stmtTrimmed), "CREATE TABLE") && currentSchema != "" {
tableName := extractTableNameFromCreate(stmtTrimmed)
if tableName != "" && schemaMap[currentSchema] != nil {
schemaMap[currentSchema].Tables = append(schemaMap[currentSchema].Tables, TableReport{
Name: tableName,
Created: false,
Error: execErr.Error(),
})
}
}
// Continue with next statement instead of failing completely
fmt.Fprintf(os.Stderr, "⚠ Warning: Statement %d failed: %v\n", i+1, execErr)
continue
}
w.executionReport.ExecutedStatements++
// Track successful table creation
if strings.Contains(strings.ToUpper(stmtTrimmed), "CREATE TABLE") && currentSchema != "" {
tableName := extractTableNameFromCreate(stmtTrimmed)
if tableName != "" && schemaMap[currentSchema] != nil {
schemaMap[currentSchema].Tables = append(schemaMap[currentSchema].Tables, TableReport{
Name: tableName,
Created: true,
})
}
}
}
// Convert schema map to slice
for _, schemaReport := range schemaMap {
w.executionReport.Schemas = append(w.executionReport.Schemas, *schemaReport)
}
w.executionReport.EndTime = getCurrentTimestamp()
// Write report if path is specified
if reportPath, ok := w.options.Metadata["report_path"].(string); ok && reportPath != "" {
if err := w.writeReport(reportPath); err != nil {
fmt.Fprintf(os.Stderr, "⚠ Warning: Failed to write report: %v\n", err)
} else {
fmt.Fprintf(os.Stderr, "✓ Report written to: %s\n", reportPath)
}
}
if w.executionReport.FailedStatements > 0 {
fmt.Fprintf(os.Stderr, "⚠ Completed with %d errors out of %d statements\n",
w.executionReport.FailedStatements, w.executionReport.TotalStatements)
} else {
fmt.Fprintf(os.Stderr, "✓ Successfully executed %d statements\n", w.executionReport.ExecutedStatements)
}
return nil
}
// writeReport writes the execution report to a JSON file
func (w *Writer) writeReport(reportPath string) error {
file, err := os.Create(reportPath)
if err != nil {
return fmt.Errorf("failed to create report file: %w", err)
}
defer file.Close()
encoder := json.NewEncoder(file)
encoder.SetIndent("", " ")
if err := encoder.Encode(w.executionReport); err != nil {
return fmt.Errorf("failed to encode report: %w", err)
}
return nil
}
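For reference, the report path checked in executeDatabaseSQL is read from WriterOptions.Metadata. A minimal in-package sketch (the file name is illustrative, and it assumes NewWriter returns *Writer as the methods above suggest):

```go
// Sketch only: builds a writer whose executeDatabaseSQL run will also write the
// JSON execution report, because "report_path" is present in the options Metadata.
func newWriterWithReport() *Writer {
	options := &writers.WriterOptions{
		Metadata: map[string]interface{}{
			"report_path": "pgsql_execution_report.json",
		},
	}
	return NewWriter(options)
}
```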
// extractTableNameFromCreate extracts table name from CREATE TABLE statement
func extractTableNameFromCreate(stmt string) string {
// Match: CREATE TABLE [IF NOT EXISTS] schema.table_name or table_name
upper := strings.ToUpper(stmt)
idx := strings.Index(upper, "CREATE TABLE")
if idx == -1 {
return ""
}
rest := strings.TrimSpace(stmt[idx+12:]) // Skip "CREATE TABLE"
// Skip "IF NOT EXISTS"
if strings.HasPrefix(strings.ToUpper(rest), "IF NOT EXISTS") {
rest = strings.TrimSpace(rest[13:])
}
// Get the table name (first token before '(' or whitespace)
tokens := strings.FieldsFunc(rest, func(r rune) bool {
return r == '(' || r == ' ' || r == '\n' || r == '\t'
})
if len(tokens) == 0 {
return ""
}
// Handle schema.table format
fullName := tokens[0]
parts := strings.Split(fullName, ".")
if len(parts) > 1 {
return parts[len(parts)-1]
}
return fullName
}
// truncateStatement truncates long SQL statements for error messages
func truncateStatement(stmt string) string {
const maxLen = 200
if len(stmt) <= maxLen {
return stmt
}
return stmt[:maxLen] + "..."
}
// getCurrentTimestamp returns the current timestamp in a readable format
func getCurrentTimestamp() string {
return time.Now().Format("2006-01-02 15:04:05")
}
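And a matching sketch for the two parsing helpers used by the report (again assumed to live in the package's _test.go file, with "strings" and "testing" already imported):

```go
func TestStatementHelpersSketch(t *testing.T) {
	// The schema qualifier and IF NOT EXISTS are both handled; only the bare table name comes back.
	if got := extractTableNameFromCreate("CREATE TABLE IF NOT EXISTS public.users (\n id integer\n)"); got != "users" {
		t.Errorf("extractTableNameFromCreate = %q, want users", got)
	}
	// Long statements are cut to 200 characters plus an ellipsis for error messages.
	long := strings.Repeat("SELECT 1; ", 50)
	if truncated := truncateStatement(long); len(truncated) != 203 {
		t.Errorf("truncateStatement returned %d characters, want 203", len(truncated))
	}
}
```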


@@ -45,11 +45,11 @@ func TestWriteDatabase(t *testing.T) {
// Add unique index
uniqueEmailIndex := &models.Index{
Name: "uidx_users_email",
Unique: true,
Columns: []string{"email"},
}
table.Indexes["uidx_users_email"] = uniqueEmailIndex
schema.Tables = append(schema.Tables, table)
db.Schemas = append(db.Schemas, schema)
@@ -241,3 +241,200 @@ func TestIsIntegerType(t *testing.T) {
}
}
}
func TestTypeConversion(t *testing.T) {
// Test that invalid Go types are converted to valid PostgreSQL types
db := models.InitDatabase("testdb")
schema := models.InitSchema("public")
// Create a test table with Go types instead of SQL types
table := models.InitTable("test_types", "public")
// Add columns with Go types (invalid for PostgreSQL)
stringCol := models.InitColumn("name", "test_types", "public")
stringCol.Type = "string" // Should be converted to "text"
table.Columns["name"] = stringCol
int64Col := models.InitColumn("big_id", "test_types", "public")
int64Col.Type = "int64" // Should be converted to "bigint"
table.Columns["big_id"] = int64Col
int16Col := models.InitColumn("small_id", "test_types", "public")
int16Col.Type = "int16" // Should be converted to "smallint"
table.Columns["small_id"] = int16Col
schema.Tables = append(schema.Tables, table)
db.Schemas = append(db.Schemas, schema)
// Create writer with output to buffer
var buf bytes.Buffer
options := &writers.WriterOptions{}
writer := NewWriter(options)
writer.writer = &buf
// Write the database
err := writer.WriteDatabase(db)
if err != nil {
t.Fatalf("WriteDatabase failed: %v", err)
}
output := buf.String()
// Print output for debugging
t.Logf("Generated SQL:\n%s", output)
// Verify that Go types were converted to PostgreSQL types
if strings.Contains(output, "string") {
t.Errorf("Output contains 'string' type - should be converted to 'text'\nFull output:\n%s", output)
}
if strings.Contains(output, "int64") {
t.Errorf("Output contains 'int64' type - should be converted to 'bigint'\nFull output:\n%s", output)
}
if strings.Contains(output, "int16") {
t.Errorf("Output contains 'int16' type - should be converted to 'smallint'\nFull output:\n%s", output)
}
// Verify correct PostgreSQL types are present
if !strings.Contains(output, "text") {
t.Errorf("Output missing 'text' type (converted from 'string')\nFull output:\n%s", output)
}
if !strings.Contains(output, "bigint") {
t.Errorf("Output missing 'bigint' type (converted from 'int64')\nFull output:\n%s", output)
}
if !strings.Contains(output, "smallint") {
t.Errorf("Output missing 'smallint' type (converted from 'int16')\nFull output:\n%s", output)
}
}
func TestPrimaryKeyExistenceCheck(t *testing.T) {
db := models.InitDatabase("testdb")
schema := models.InitSchema("public")
table := models.InitTable("products", "public")
idCol := models.InitColumn("id", "products", "public")
idCol.Type = "integer"
idCol.IsPrimaryKey = true
table.Columns["id"] = idCol
nameCol := models.InitColumn("name", "products", "public")
nameCol.Type = "text"
table.Columns["name"] = nameCol
schema.Tables = append(schema.Tables, table)
db.Schemas = append(db.Schemas, schema)
var buf bytes.Buffer
options := &writers.WriterOptions{}
writer := NewWriter(options)
writer.writer = &buf
err := writer.WriteDatabase(db)
if err != nil {
t.Fatalf("WriteDatabase failed: %v", err)
}
output := buf.String()
t.Logf("Generated SQL:\n%s", output)
// Verify our naming convention is used
if !strings.Contains(output, "pk_public_products") {
t.Errorf("Output missing expected primary key name 'pk_public_products'\nFull output:\n%s", output)
}
// Verify it drops auto-generated primary keys
if !strings.Contains(output, "products_pkey") || !strings.Contains(output, "DROP CONSTRAINT") {
t.Errorf("Output missing logic to drop auto-generated primary key\nFull output:\n%s", output)
}
// Verify it checks for our specific named constraint before adding it
if !strings.Contains(output, "constraint_name = 'pk_public_products'") {
t.Errorf("Output missing check for our named primary key constraint\nFull output:\n%s", output)
}
}
func TestColumnSizeSpecifiers(t *testing.T) {
db := models.InitDatabase("testdb")
schema := models.InitSchema("public")
table := models.InitTable("test_sizes", "public")
// Integer with invalid size specifier - should ignore size
integerCol := models.InitColumn("int_col", "test_sizes", "public")
integerCol.Type = "integer"
integerCol.Length = 32
table.Columns["int_col"] = integerCol
// Bigint with invalid size specifier - should ignore size
bigintCol := models.InitColumn("bigint_col", "test_sizes", "public")
bigintCol.Type = "bigint"
bigintCol.Length = 64
table.Columns["bigint_col"] = bigintCol
// Smallint with invalid size specifier - should ignore size
smallintCol := models.InitColumn("smallint_col", "test_sizes", "public")
smallintCol.Type = "smallint"
smallintCol.Length = 16
table.Columns["smallint_col"] = smallintCol
// Text with length - should convert to varchar
textCol := models.InitColumn("text_col", "test_sizes", "public")
textCol.Type = "text"
textCol.Length = 100
table.Columns["text_col"] = textCol
// Varchar with length - should keep varchar with length
varcharCol := models.InitColumn("varchar_col", "test_sizes", "public")
varcharCol.Type = "varchar"
varcharCol.Length = 50
table.Columns["varchar_col"] = varcharCol
// Decimal with precision and scale - should keep them
decimalCol := models.InitColumn("decimal_col", "test_sizes", "public")
decimalCol.Type = "decimal"
decimalCol.Precision = 19
decimalCol.Scale = 4
table.Columns["decimal_col"] = decimalCol
schema.Tables = append(schema.Tables, table)
db.Schemas = append(db.Schemas, schema)
var buf bytes.Buffer
options := &writers.WriterOptions{}
writer := NewWriter(options)
writer.writer = &buf
err := writer.WriteDatabase(db)
if err != nil {
t.Fatalf("WriteDatabase failed: %v", err)
}
output := buf.String()
t.Logf("Generated SQL:\n%s", output)
// Verify invalid size specifiers are NOT present
invalidPatterns := []string{
"integer(32)",
"bigint(64)",
"smallint(16)",
"text(100)",
}
for _, pattern := range invalidPatterns {
if strings.Contains(output, pattern) {
t.Errorf("Output contains invalid pattern '%s' - PostgreSQL doesn't support this\nFull output:\n%s", pattern, output)
}
}
// Verify valid patterns ARE present
validPatterns := []string{
"integer", // without size
"bigint", // without size
"smallint", // without size
"varchar(100)", // text converted to varchar with length
"varchar(50)", // varchar with length
"decimal(19,4)", // decimal with precision and scale
}
for _, pattern := range validPatterns {
if !strings.Contains(output, pattern) {
t.Errorf("Output missing expected pattern '%s'\nFull output:\n%s", pattern, output)
}
}
}


@@ -4,7 +4,7 @@ The SQL Executor Writer (`sqlexec`) executes SQL scripts from `models.Script` ob
## Features
- **Ordered Execution**: Scripts execute in Priority→Sequence→Name order
- **PostgreSQL Support**: Uses `pgx/v5` driver for robust PostgreSQL connectivity
- **Stop on Error**: Execution halts immediately on first error (default behavior)
- **Progress Reporting**: Prints execution status to stdout
@@ -103,19 +103,40 @@ Scripts are sorted and executed based on:
1. **Priority** (ascending): Lower priority values execute first
2. **Sequence** (ascending): Within same priority, lower sequence values execute first
3. **Name** (ascending): Within same priority and sequence, alphabetical order by name
### Example Execution Order
Given these scripts:
```
Script A: Priority=2, Sequence=1, Name="zebra"
Script B: Priority=1, Sequence=3, Name="script"
Script C: Priority=1, Sequence=1, Name="apple"
Script D: Priority=1, Sequence=1, Name="beta"
Script E: Priority=3, Sequence=1, Name="script"
```
Execution order: **C (apple) → D (beta) → B → A → E**
### Directory-based Sorting Example
Given these files:
```
1_001_create_schema.sql
1_001_create_users.sql ← Alphabetically before "drop_tables"
1_001_drop_tables.sql
1_002_add_indexes.sql
2_001_constraints.sql
```
Execution order (note alphabetical sorting at same priority/sequence):
```
1_001_create_schema.sql
1_001_create_users.sql
1_001_drop_tables.sql
1_002_add_indexes.sql
2_001_constraints.sql
```
## Output


@@ -86,20 +86,23 @@ func (w *Writer) WriteTable(table *models.Table) error {
return fmt.Errorf("WriteTable is not supported for SQL script execution") return fmt.Errorf("WriteTable is not supported for SQL script execution")
} }
// executeScripts executes scripts in Priority then Sequence order // executeScripts executes scripts in Priority, Sequence, then Name order
func (w *Writer) executeScripts(ctx context.Context, conn *pgx.Conn, scripts []*models.Script) error { func (w *Writer) executeScripts(ctx context.Context, conn *pgx.Conn, scripts []*models.Script) error {
if len(scripts) == 0 { if len(scripts) == 0 {
return nil return nil
} }
// Sort scripts by Priority (ascending) then Sequence (ascending) // Sort scripts by Priority (ascending), Sequence (ascending), then Name (ascending)
sortedScripts := make([]*models.Script, len(scripts)) sortedScripts := make([]*models.Script, len(scripts))
copy(sortedScripts, scripts) copy(sortedScripts, scripts)
sort.Slice(sortedScripts, func(i, j int) bool { sort.Slice(sortedScripts, func(i, j int) bool {
if sortedScripts[i].Priority != sortedScripts[j].Priority { if sortedScripts[i].Priority != sortedScripts[j].Priority {
return sortedScripts[i].Priority < sortedScripts[j].Priority return sortedScripts[i].Priority < sortedScripts[j].Priority
} }
if sortedScripts[i].Sequence != sortedScripts[j].Sequence {
return sortedScripts[i].Sequence < sortedScripts[j].Sequence return sortedScripts[i].Sequence < sortedScripts[j].Sequence
}
return sortedScripts[i].Name < sortedScripts[j].Name
}) })
// Execute each script in order // Execute each script in order


@@ -99,13 +99,13 @@ func TestWriter_WriteTable(t *testing.T) {
}
}
// TestScriptSorting verifies that scripts are sorted correctly by Priority, Sequence, then Name
func TestScriptSorting(t *testing.T) {
scripts := []*models.Script{
{Name: "z_script1", Priority: 2, Sequence: 1, SQL: "SELECT 1;"},
{Name: "script2", Priority: 1, Sequence: 3, SQL: "SELECT 2;"},
{Name: "a_script3", Priority: 1, Sequence: 1, SQL: "SELECT 3;"},
{Name: "b_script4", Priority: 1, Sequence: 1, SQL: "SELECT 4;"},
{Name: "script5", Priority: 3, Sequence: 1, SQL: "SELECT 5;"},
{Name: "script6", Priority: 2, Sequence: 2, SQL: "SELECT 6;"},
}
@@ -114,23 +114,33 @@ func TestScriptSorting(t *testing.T) {
sortedScripts := make([]*models.Script, len(scripts))
copy(sortedScripts, scripts)
// Sort by Priority, Sequence, then Name (matching executeScripts logic)
for i := 0; i < len(sortedScripts)-1; i++ {
for j := i + 1; j < len(sortedScripts); j++ {
si, sj := sortedScripts[i], sortedScripts[j]
// Compare by priority first
if si.Priority > sj.Priority {
sortedScripts[i], sortedScripts[j] = sortedScripts[j], sortedScripts[i]
} else if si.Priority == sj.Priority {
// If same priority, compare by sequence
if si.Sequence > sj.Sequence {
sortedScripts[i], sortedScripts[j] = sortedScripts[j], sortedScripts[i]
} else if si.Sequence == sj.Sequence {
// If same sequence, compare by name
if si.Name > sj.Name {
sortedScripts[i], sortedScripts[j] = sortedScripts[j], sortedScripts[i]
}
}
}
}
}
// Expected order after sorting (Priority -> Sequence -> Name)
expectedOrder := []string{
"a_script3", // Priority 1, Sequence 1, Name a_script3
"b_script4", // Priority 1, Sequence 1, Name b_script4
"script2", // Priority 1, Sequence 3
"z_script1", // Priority 2, Sequence 1
"script6", // Priority 2, Sequence 2
"script5", // Priority 3, Sequence 1
}
@@ -153,6 +163,13 @@ func TestScriptSorting(t *testing.T) {
t.Errorf("Sequence not ascending at position %d with same priority %d: %d > %d", t.Errorf("Sequence not ascending at position %d with same priority %d: %d > %d",
i, sortedScripts[i].Priority, sortedScripts[i].Sequence, sortedScripts[i+1].Sequence) i, sortedScripts[i].Priority, sortedScripts[i].Sequence, sortedScripts[i+1].Sequence)
} }
// Within same priority and sequence, names should be ascending
if sortedScripts[i].Priority == sortedScripts[i+1].Priority &&
sortedScripts[i].Sequence == sortedScripts[i+1].Sequence &&
sortedScripts[i].Name > sortedScripts[i+1].Name {
t.Errorf("Name not ascending at position %d with same priority/sequence: %s > %s",
i, sortedScripts[i].Name, sortedScripts[i+1].Name)
}
}
}


@@ -1,6 +1,9 @@
package writers
import (
"regexp"
"strings"
"git.warky.dev/wdevs/relspecgo/pkg/models"
)
@@ -28,3 +31,56 @@ type WriterOptions struct {
// Additional options can be added here as needed
Metadata map[string]interface{}
}
// SanitizeFilename removes quotes, comments, and invalid characters from identifiers
// to make them safe for use in filenames. This handles:
// - Double and single quotes: "table_name" or 'table_name' -> table_name
// - DBML comments: table [note: 'description'] -> table
// - Invalid filename characters: replaced with underscores
func SanitizeFilename(name string) string {
// Remove DBML/DCTX style comments in brackets (e.g., [note: 'description'])
commentRegex := regexp.MustCompile(`\s*\[.*?\]\s*`)
name = commentRegex.ReplaceAllString(name, "")
// Remove quotes (both single and double)
name = strings.ReplaceAll(name, `"`, "")
name = strings.ReplaceAll(name, `'`, "")
// Remove backticks (MySQL style identifiers)
name = strings.ReplaceAll(name, "`", "")
// Replace invalid filename characters with underscores
// Invalid chars: / \ : * ? " < > | and control characters
invalidChars := regexp.MustCompile(`[/\\:*?"<>|\x00-\x1f\x7f]`)
name = invalidChars.ReplaceAllString(name, "_")
// Trim whitespace and consecutive underscores
name = strings.TrimSpace(name)
name = regexp.MustCompile(`_+`).ReplaceAllString(name, "_")
name = strings.Trim(name, "_")
return name
}
// SanitizeStructTagValue sanitizes a value to be safely used inside Go struct tags.
// Go struct tags are delimited by backticks, so any backtick in the value would break the syntax.
// This function:
// - Removes DBML/DCTX comments in brackets
// - Removes all quotes (double, single, and backticks)
// - Returns a clean identifier safe for use in struct tags and field names
func SanitizeStructTagValue(value string) string {
// Remove DBML/DCTX style comments in brackets (e.g., [note: 'description'])
commentRegex := regexp.MustCompile(`\s*\[.*?\]\s*`)
value = commentRegex.ReplaceAllString(value, "")
// Trim whitespace
value = strings.TrimSpace(value)
// Remove all quotes: backticks, double quotes, and single quotes
// This ensures the value is clean for use as Go identifiers and struct tag values
value = strings.ReplaceAll(value, "`", "")
value = strings.ReplaceAll(value, `"`, "")
value = strings.ReplaceAll(value, `'`, "")
return value
}
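A short usage sketch for the two sanitizers (the inputs are illustrative, and the import path is assumed to be pkg/writers, mirroring pkg/models above):

```go
package main

import (
	"fmt"

	"git.warky.dev/wdevs/relspecgo/pkg/writers"
)

func main() {
	// Bracketed DBML notes and quotes are stripped; invalid filename characters become underscores.
	fmt.Println(writers.SanitizeFilename(`"users" [note: 'core table']`)) // users
	fmt.Println(writers.SanitizeFilename("order/items"))                  // order_items
	// Backticks would terminate a Go struct tag, so they are removed along with other quotes.
	fmt.Println(writers.SanitizeStructTagValue("`created_at`")) // created_at
}
```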

vendor/modules.txt

@@ -1,6 +1,92 @@
# 4d63.com/gocheckcompilerdirectives v1.3.0
## explicit; go 1.22.0
# 4d63.com/gochecknoglobals v0.2.2
## explicit; go 1.18
# github.com/4meepo/tagalign v1.4.2
## explicit; go 1.22.0
# github.com/Abirdcfly/dupword v0.1.3
## explicit; go 1.22.0
# github.com/Antonboom/errname v1.0.0
## explicit; go 1.22.1
# github.com/Antonboom/nilnil v1.0.1
## explicit; go 1.22.0
# github.com/Antonboom/testifylint v1.5.2
## explicit; go 1.22.1
# github.com/BurntSushi/toml v1.4.1-0.20240526193622-a339e1f7089c
## explicit; go 1.18
# github.com/Crocmagnon/fatcontext v0.7.1
## explicit; go 1.22.0
# github.com/Djarvur/go-err113 v0.0.0-20210108212216-aea10b59be24
## explicit; go 1.13
# github.com/GaijinEntertainment/go-exhaustruct/v3 v3.3.1
## explicit; go 1.23.0
# github.com/Masterminds/semver/v3 v3.3.0
## explicit; go 1.21
# github.com/OpenPeeDeeP/depguard/v2 v2.2.1
## explicit; go 1.23.0
# github.com/alecthomas/go-check-sumtype v0.3.1
## explicit; go 1.22.0
# github.com/alexkohler/nakedret/v2 v2.0.5
## explicit; go 1.21
# github.com/alexkohler/prealloc v1.0.0
## explicit; go 1.15
# github.com/alingse/asasalint v0.0.11
## explicit; go 1.18
# github.com/alingse/nilnesserr v0.1.2
## explicit; go 1.22.0
# github.com/ashanbrown/forbidigo v1.6.0
## explicit; go 1.13
# github.com/ashanbrown/makezero v1.2.0
## explicit; go 1.12
# github.com/beorn7/perks v1.0.1
## explicit; go 1.11
# github.com/bkielbasa/cyclop v1.2.3
## explicit; go 1.22.0
# github.com/blizzy78/varnamelen v0.8.0
## explicit; go 1.16
# github.com/bombsimon/wsl/v4 v4.5.0
## explicit; go 1.22
# github.com/breml/bidichk v0.3.2
## explicit; go 1.22.0
# github.com/breml/errchkjson v0.4.0
## explicit; go 1.22.0
# github.com/butuzov/ireturn v0.3.1
## explicit; go 1.18
# github.com/butuzov/mirror v1.3.0
## explicit; go 1.19
# github.com/catenacyber/perfsprint v0.8.2
## explicit; go 1.22.0
# github.com/ccojocar/zxcvbn-go v1.0.2
## explicit; go 1.20
# github.com/cespare/xxhash/v2 v2.3.0
## explicit; go 1.11
# github.com/charithe/durationcheck v0.0.10
## explicit; go 1.14
# github.com/chavacava/garif v0.1.0
## explicit; go 1.16
# github.com/ckaznocha/intrange v0.3.0
## explicit; go 1.22
# github.com/curioswitch/go-reassign v0.3.0
## explicit; go 1.21
# github.com/daixiang0/gci v0.13.5
## explicit; go 1.21
# github.com/davecgh/go-spew v1.1.1
## explicit
github.com/davecgh/go-spew/spew
# github.com/denis-tingaikin/go-header v0.5.0
## explicit; go 1.21
# github.com/ettle/strcase v0.2.0
## explicit; go 1.12
# github.com/fatih/color v1.18.0
## explicit; go 1.17
# github.com/fatih/structtag v1.2.0
## explicit; go 1.12
# github.com/firefart/nonamedreturns v1.0.5
## explicit; go 1.18
# github.com/fsnotify/fsnotify v1.5.4
## explicit; go 1.16
# github.com/fzipp/gocyclo v0.6.0
## explicit; go 1.18
# github.com/gdamore/encoding v1.0.1
## explicit; go 1.9
github.com/gdamore/encoding
@@ -44,9 +130,75 @@ github.com/gdamore/tcell/v2/terminfo/x/xfce
github.com/gdamore/tcell/v2/terminfo/x/xterm
github.com/gdamore/tcell/v2/terminfo/x/xterm_ghostty
github.com/gdamore/tcell/v2/terminfo/x/xterm_kitty
# github.com/ghostiam/protogetter v0.3.9
## explicit; go 1.22.0
# github.com/go-critic/go-critic v0.12.0
## explicit; go 1.22.0
# github.com/go-toolsmith/astcast v1.1.0
## explicit; go 1.16
# github.com/go-toolsmith/astcopy v1.1.0
## explicit; go 1.16
# github.com/go-toolsmith/astequal v1.2.0
## explicit; go 1.18
# github.com/go-toolsmith/astfmt v1.1.0
## explicit; go 1.16
# github.com/go-toolsmith/astp v1.1.0
## explicit; go 1.16
# github.com/go-toolsmith/strparse v1.1.0
## explicit; go 1.16
# github.com/go-toolsmith/typep v1.1.0
## explicit; go 1.16
# github.com/go-viper/mapstructure/v2 v2.2.1
## explicit; go 1.18
# github.com/go-xmlfmt/xmlfmt v1.1.3
## explicit
# github.com/gobwas/glob v0.2.3
## explicit
# github.com/gofrs/flock v0.12.1
## explicit; go 1.21.0
# github.com/golang/protobuf v1.5.3
## explicit; go 1.9
# github.com/golangci/dupl v0.0.0-20250308024227-f665c8d69b32
## explicit; go 1.22.0
# github.com/golangci/go-printf-func-name v0.1.0
## explicit; go 1.22.0
# github.com/golangci/gofmt v0.0.0-20250106114630-d62b90e6713d
## explicit; go 1.22.0
# github.com/golangci/golangci-lint v1.64.8
## explicit; go 1.23.0
# github.com/golangci/misspell v0.6.0
## explicit; go 1.21
# github.com/golangci/plugin-module-register v0.1.1
## explicit; go 1.21
# github.com/golangci/revgrep v0.8.0
## explicit; go 1.21
# github.com/golangci/unconvert v0.0.0-20240309020433-c5143eacb3ed
## explicit; go 1.20
# github.com/google/go-cmp v0.7.0
## explicit; go 1.21
# github.com/google/uuid v1.6.0
## explicit
github.com/google/uuid
# github.com/gordonklaus/ineffassign v0.1.0
## explicit; go 1.14
# github.com/gostaticanalysis/analysisutil v0.7.1
## explicit; go 1.16
# github.com/gostaticanalysis/comment v1.5.0
## explicit; go 1.22.9
# github.com/gostaticanalysis/forcetypeassert v0.2.0
## explicit; go 1.23.0
# github.com/gostaticanalysis/nilerr v0.1.1
## explicit; go 1.15
# github.com/hashicorp/go-immutable-radix/v2 v2.1.0
## explicit; go 1.18
# github.com/hashicorp/go-version v1.7.0
## explicit
# github.com/hashicorp/golang-lru/v2 v2.0.7
## explicit; go 1.18
# github.com/hashicorp/hcl v1.0.0
## explicit
# github.com/hexops/gotextdiff v1.0.3
## explicit; go 1.16
# github.com/inconshreveable/mousetrap v1.1.0 # github.com/inconshreveable/mousetrap v1.1.0
## explicit; go 1.18 ## explicit; go 1.18
github.com/inconshreveable/mousetrap github.com/inconshreveable/mousetrap
@@ -68,23 +220,115 @@ github.com/jackc/pgx/v5/pgconn/ctxwatch
github.com/jackc/pgx/v5/pgconn/internal/bgreader
github.com/jackc/pgx/v5/pgproto3
github.com/jackc/pgx/v5/pgtype
# github.com/jgautheron/goconst v1.7.1
## explicit; go 1.13
# github.com/jingyugao/rowserrcheck v1.1.1
## explicit; go 1.13
# github.com/jinzhu/inflection v1.0.0
## explicit
github.com/jinzhu/inflection
# github.com/jjti/go-spancheck v0.6.4
## explicit; go 1.22.1
# github.com/julz/importas v0.2.0
## explicit; go 1.20
# github.com/karamaru-alpha/copyloopvar v1.2.1
## explicit; go 1.21
# github.com/kisielk/errcheck v1.9.0
## explicit; go 1.22.0
# github.com/kkHAIKE/contextcheck v1.1.6
## explicit; go 1.23.0
# github.com/kr/pretty v0.3.1
## explicit; go 1.12
# github.com/kulti/thelper v0.6.3
## explicit; go 1.18
# github.com/kunwardeep/paralleltest v1.0.10
## explicit; go 1.17
# github.com/lasiar/canonicalheader v1.1.2
## explicit; go 1.22.0
# github.com/ldez/exptostd v0.4.2
## explicit; go 1.22.0
# github.com/ldez/gomoddirectives v0.6.1
## explicit; go 1.22.0
# github.com/ldez/grignotin v0.9.0
## explicit; go 1.22.0
# github.com/ldez/tagliatelle v0.7.1
## explicit; go 1.22.0
# github.com/ldez/usetesting v0.4.2
## explicit; go 1.22.0
# github.com/leonklingele/grouper v1.1.2
## explicit; go 1.18
# github.com/lucasb-eyer/go-colorful v1.2.0
## explicit; go 1.12
github.com/lucasb-eyer/go-colorful
# github.com/macabu/inamedparam v0.1.3
## explicit; go 1.20
# github.com/magiconair/properties v1.8.6
## explicit; go 1.13
# github.com/maratori/testableexamples v1.0.0
## explicit; go 1.19
# github.com/maratori/testpackage v1.1.1
## explicit; go 1.20
# github.com/matoous/godox v1.1.0
## explicit; go 1.18
# github.com/mattn/go-colorable v0.1.14
## explicit; go 1.18
# github.com/mattn/go-isatty v0.0.20
## explicit; go 1.15
# github.com/mattn/go-runewidth v0.0.16
## explicit; go 1.9
github.com/mattn/go-runewidth
# github.com/matttproud/golang_protobuf_extensions v1.0.1
## explicit
# github.com/mgechev/revive v1.7.0
## explicit; go 1.22.1
# github.com/mitchellh/go-homedir v1.1.0
## explicit
# github.com/mitchellh/mapstructure v1.5.0
## explicit; go 1.14
# github.com/moricho/tparallel v0.3.2
## explicit; go 1.20
# github.com/nakabonne/nestif v0.3.1
## explicit; go 1.15
# github.com/nishanths/exhaustive v0.12.0
## explicit; go 1.18
# github.com/nishanths/predeclared v0.2.2
## explicit; go 1.14
# github.com/nunnatsa/ginkgolinter v0.19.1
## explicit; go 1.23.0
# github.com/olekukonko/tablewriter v0.0.5
## explicit; go 1.12
# github.com/pelletier/go-toml v1.9.5
## explicit; go 1.12
# github.com/pelletier/go-toml/v2 v2.2.3
## explicit; go 1.21.0
# github.com/pmezard/go-difflib v1.0.0
## explicit
github.com/pmezard/go-difflib/difflib
# github.com/polyfloyd/go-errorlint v1.7.1
## explicit; go 1.22.0
# github.com/prometheus/client_golang v1.12.1
## explicit; go 1.13
# github.com/prometheus/client_model v0.2.0
## explicit; go 1.9
# github.com/prometheus/common v0.32.1
## explicit; go 1.13
# github.com/prometheus/procfs v0.7.3
## explicit; go 1.13
# github.com/puzpuzpuz/xsync/v3 v3.5.1
## explicit; go 1.18
github.com/puzpuzpuz/xsync/v3
# github.com/quasilyte/go-ruleguard v0.4.3-0.20240823090925-0fe6f58b47b1
## explicit; go 1.19
# github.com/quasilyte/go-ruleguard/dsl v0.3.22
## explicit; go 1.15
# github.com/quasilyte/gogrep v0.5.0
## explicit; go 1.16
# github.com/quasilyte/regex/syntax v0.0.0-20210819130434-b3f0c404a727
## explicit; go 1.14
# github.com/quasilyte/stdinfo v0.0.0-20220114132959-f7386bf02567
## explicit; go 1.17
# github.com/raeperd/recvcheck v0.2.0
## explicit; go 1.22.0
# github.com/rivo/tview v0.42.0
## explicit; go 1.18
github.com/rivo/tview
@@ -93,20 +337,76 @@ github.com/rivo/tview
github.com/rivo/uniseg
# github.com/rogpeppe/go-internal v1.14.1
## explicit; go 1.23
# github.com/ryancurrah/gomodguard v1.3.5
## explicit; go 1.22.0
# github.com/ryanrolds/sqlclosecheck v0.5.1
## explicit; go 1.20
# github.com/sanposhiho/wastedassign/v2 v2.1.0
## explicit; go 1.18
# github.com/santhosh-tekuri/jsonschema/v6 v6.0.1
## explicit; go 1.21
# github.com/sashamelentyev/interfacebloat v1.1.0
## explicit; go 1.18
# github.com/sashamelentyev/usestdlibvars v1.28.0
## explicit; go 1.20
# github.com/securego/gosec/v2 v2.22.2
## explicit; go 1.23.0
# github.com/sirupsen/logrus v1.9.3
## explicit; go 1.13
# github.com/sivchari/containedctx v1.0.3
## explicit; go 1.17
# github.com/sivchari/tenv v1.12.1
## explicit; go 1.22.0
# github.com/sonatard/noctx v0.1.0
## explicit; go 1.22.0
# github.com/sourcegraph/go-diff v0.7.0
## explicit; go 1.14
# github.com/spf13/afero v1.12.0
## explicit; go 1.21
# github.com/spf13/cast v1.5.0
## explicit; go 1.18
# github.com/spf13/cobra v1.10.2
## explicit; go 1.15
github.com/spf13/cobra
# github.com/spf13/jwalterweatherman v1.1.0
## explicit
# github.com/spf13/pflag v1.0.10
## explicit; go 1.12
github.com/spf13/pflag
# github.com/spf13/viper v1.12.0
## explicit; go 1.17
# github.com/ssgreg/nlreturn/v2 v2.2.1
## explicit; go 1.13
# github.com/stbenjam/no-sprintf-host-port v0.2.0
## explicit; go 1.18
# github.com/stretchr/objx v0.5.2
## explicit; go 1.20
# github.com/stretchr/testify v1.11.1
## explicit; go 1.17
github.com/stretchr/testify/assert
github.com/stretchr/testify/assert/yaml
github.com/stretchr/testify/require
# github.com/subosito/gotenv v1.4.1
## explicit; go 1.18
# github.com/tdakkota/asciicheck v0.4.1
## explicit; go 1.22.0
# github.com/tetafro/godot v1.5.0
## explicit; go 1.20
# github.com/timakin/bodyclose v0.0.0-20241017074812-ed6a65f985e3
## explicit; go 1.12
# github.com/timonwong/loggercheck v0.10.1
## explicit; go 1.22.0
# github.com/tmthrgd/go-hex v0.0.0-20190904060850-447a3041c3bc
## explicit
github.com/tmthrgd/go-hex
# github.com/tomarrell/wrapcheck/v2 v2.10.0
## explicit; go 1.21
# github.com/tommy-muehle/go-mnd/v2 v2.5.1
## explicit; go 1.12
# github.com/ultraware/funlen v0.2.0
## explicit; go 1.22.0
# github.com/ultraware/whitespace v0.2.0
## explicit; go 1.20
# github.com/uptrace/bun v1.2.16
## explicit; go 1.24.0
github.com/uptrace/bun
@@ -118,6 +418,10 @@ github.com/uptrace/bun/internal
github.com/uptrace/bun/internal/parser
github.com/uptrace/bun/internal/tagparser
github.com/uptrace/bun/schema
# github.com/uudashr/gocognit v1.2.0
## explicit; go 1.19
# github.com/uudashr/iface v1.3.1
## explicit; go 1.22.1
# github.com/vmihailenco/msgpack/v5 v5.4.1
## explicit; go 1.19
github.com/vmihailenco/msgpack/v5
@@ -127,9 +431,37 @@ github.com/vmihailenco/msgpack/v5/msgpcode
github.com/vmihailenco/tagparser/v2
github.com/vmihailenco/tagparser/v2/internal
github.com/vmihailenco/tagparser/v2/internal/parser
# github.com/xen0n/gosmopolitan v1.2.2
## explicit; go 1.19
# github.com/yagipy/maintidx v1.0.0
## explicit; go 1.17
# github.com/yeya24/promlinter v0.3.0
## explicit; go 1.20
# github.com/ykadowak/zerologlint v0.1.5
## explicit; go 1.19
# gitlab.com/bosi/decorder v0.4.2
## explicit; go 1.20
# go-simpler.org/musttag v0.13.0
## explicit; go 1.20
# go-simpler.org/sloglint v0.9.0
## explicit; go 1.22.0
# go.uber.org/atomic v1.7.0
## explicit; go 1.13
# go.uber.org/automaxprocs v1.6.0
## explicit; go 1.20
# go.uber.org/multierr v1.6.0
## explicit; go 1.12
# go.uber.org/zap v1.24.0
## explicit; go 1.19
# golang.org/x/crypto v0.41.0
## explicit; go 1.23.0
golang.org/x/crypto/pbkdf2
# golang.org/x/exp/typeparams v0.0.0-20250210185358-939b2ce775ac
## explicit; go 1.18
# golang.org/x/mod v0.26.0
## explicit; go 1.23.0
# golang.org/x/sync v0.16.0
## explicit; go 1.23.0
# golang.org/x/sys v0.38.0
## explicit; go 1.24.0
golang.org/x/sys/cpu
@@ -156,6 +488,20 @@ golang.org/x/text/transform
golang.org/x/text/unicode/bidi
golang.org/x/text/unicode/norm
golang.org/x/text/width
# golang.org/x/tools v0.35.0
## explicit; go 1.23.0
# google.golang.org/protobuf v1.36.5
## explicit; go 1.21
# gopkg.in/ini.v1 v1.67.0
## explicit
# gopkg.in/yaml.v2 v2.4.0
## explicit; go 1.15
# gopkg.in/yaml.v3 v3.0.1
## explicit
gopkg.in/yaml.v3
# honnef.co/go/tools v0.6.1
## explicit; go 1.23
# mvdan.cc/gofumpt v0.7.0
## explicit; go 1.22
# mvdan.cc/unparam v0.0.0-20240528143540-8a5130ca722f
## explicit; go 1.21