13 Commits

Author SHA1 Message Date
92dff99725 feat(writer): enhance type conversion for PostgreSQL compatibility and add tests
Some checks failed
CI / Test (1.24) (push) Successful in 26m32s
CI / Test (1.25) (push) Successful in 26m27s
CI / Build (push) Successful in 26m48s
CI / Lint (push) Successful in 26m33s
Integration Tests / Integration Tests (push) Failing after 26m51s
Release / Build and Release (push) Successful in 26m41s
2026-01-29 21:36:23 +02:00
283b568adb feat(pgsql): add execution reporting for SQL statements
All checks were successful
CI / Test (1.24) (push) Successful in 25m29s
CI / Test (1.25) (push) Successful in 25m13s
CI / Lint (push) Successful in 26m13s
CI / Build (push) Successful in 26m27s
Integration Tests / Integration Tests (push) Successful in 26m11s
Release / Build and Release (push) Successful in 25m8s
- Implemented ExecutionReport to track the execution status of SQL statements.
- Added SchemaReport and TableReport to monitor execution per schema and table.
- Enhanced WriteDatabase to execute SQL directly on a PostgreSQL database if a connection string is provided.
- Included error handling and logging for failed statements during execution.
- Added functionality to write execution reports to a JSON file.
- Introduced utility functions to extract table names from CREATE TABLE statements and truncate long SQL statements for error messages.
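The report types themselves are not shown in the diffs below. As a rough Go sketch of the structures the bullets describe (all field names here are illustrative assumptions, not the actual implementation):

    // Hypothetical shape of the execution report; names are assumptions.
    type TableReport struct {
        Name     string `json:"name"`
        Executed int    `json:"executed"`
        Failed   int    `json:"failed"`
        Error    string `json:"error,omitempty"`
    }

    type SchemaReport struct {
        Name   string        `json:"name"`
        Tables []TableReport `json:"tables"`
    }

    type ExecutionReport struct {
        TotalStatements  int            `json:"total_statements"`
        FailedStatements int            `json:"failed_statements"`
        Schemas          []SchemaReport `json:"schemas"`
    }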
2026-01-29 21:16:14 +02:00
122743ee43 feat(writer): 🎉 Improve primary key handling by checking for explicit constraints and columns
Some checks failed
CI / Test (1.25) (push) Successful in 26m17s
CI / Test (1.24) (push) Successful in 25m44s
CI / Lint (push) Successful in 26m43s
CI / Build (push) Failing after 27m1s
Release / Build and Release (push) Successful in 26m39s
Integration Tests / Integration Tests (push) Successful in 26m25s
2026-01-28 22:08:27 +02:00
91b6046b9b feat(writer): 🎉 Enhance PostgreSQL writer, fixed bugs found using origin
Some checks failed
CI / Test (1.24) (push) Failing after 24m5s
CI / Test (1.25) (push) Successful in 23m53s
CI / Build (push) Failing after 26m29s
CI / Lint (push) Successful in 26m12s
Integration Tests / Integration Tests (push) Successful in 26m20s
Release / Build and Release (push) Successful in 25m7s
2026-01-28 21:59:25 +02:00
6f55505444 feat(writer): 🎉 Enhance model name generation and formatting
All checks were successful
CI / Test (1.24) (push) Successful in 27m27s
CI / Test (1.25) (push) Successful in 27m17s
CI / Lint (push) Successful in 27m27s
CI / Build (push) Successful in 27m38s
Release / Build and Release (push) Successful in 27m24s
Integration Tests / Integration Tests (push) Successful in 27m16s
* Update model name generation to include schema name.
* Add gofmt execution after writing output files.
* Refactor relationship field naming to include schema.
* Update tests to reflect changes in model names and relationships.
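For illustration, the new naming scheme (this mirrors the getModelName change shown in the writer diff further down):

    // Model name is now "Model" + PascalCase(schema) + PascalCase(singular(table)),
    // so schema "public" and table "users" yield "ModelPublicUser".
    modelName := "Model" + SnakeCaseToPascalCase("public") + SnakeCaseToPascalCase(Singularize("users"))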
2026-01-10 18:28:41 +02:00
e0e7b64c69 feat(writer): 🎉 Resolve field name collisions with methods
All checks were successful
CI / Test (1.24) (push) Successful in 27m21s
CI / Test (1.25) (push) Successful in 27m12s
CI / Build (push) Successful in 27m37s
CI / Lint (push) Successful in 27m26s
Release / Build and Release (push) Successful in 27m25s
Integration Tests / Integration Tests (push) Successful in 27m20s
* Implement field name collision resolution in model generation.
* Add tests to verify renaming of fields that conflict with generated method names.
* Ensure primary key type safety in UpdateID method.
2026-01-10 17:54:33 +02:00
4181cb1fbd feat(writer): 🎉 Enhance relationship field naming and uniqueness
All checks were successful
CI / Test (1.24) (push) Successful in 27m15s
CI / Test (1.25) (push) Successful in 27m10s
CI / Build (push) Successful in 27m38s
CI / Lint (push) Successful in 27m25s
Release / Build and Release (push) Successful in 27m27s
Integration Tests / Integration Tests (push) Successful in 27m18s
* Update relationship field naming conventions for has-one and has-many relationships.
* Implement logic to ensure unique field names by tracking used names.
* Add tests to verify new naming conventions and uniqueness constraints.
2026-01-10 17:45:13 +02:00
120ffc6a5a feat(writer): 🎉 Update relationship field naming convention
All checks were successful
CI / Test (1.24) (push) Successful in 27m26s
CI / Test (1.25) (push) Successful in 27m14s
CI / Lint (push) Successful in 27m27s
CI / Build (push) Successful in 27m36s
Release / Build and Release (push) Successful in 27m22s
Integration Tests / Integration Tests (push) Successful in 27m17s
* Refactor generateRelationshipFieldName to use foreign key columns for unique naming.
* Add test for multiple references to the same table to ensure unique relationship field names.
* Update existing tests to reflect new naming convention.
2026-01-10 13:49:54 +02:00
b20ad35485 feat(writer): 🎉 Add sanitization for struct tag values
All checks were successful
CI / Test (1.24) (push) Successful in 27m25s
CI / Test (1.25) (push) Successful in 27m17s
CI / Build (push) Successful in 27m36s
CI / Lint (push) Successful in 27m23s
Release / Build and Release (push) Successful in 27m21s
Integration Tests / Integration Tests (push) Successful in 27m16s
* Implement SanitizeStructTagValue function to clean identifiers for struct tags.
* Update model data generation to use sanitized column names.
* Ensure safe handling of backticks in column names and types across writers.
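SanitizeStructTagValue itself is not included in the diffs below, only its call sites. A minimal sketch, assuming backtick and quote removal are the core requirements:

    // Sketch only: strips characters that would break a Go struct tag literal.
    // The real implementation may handle more cases.
    func SanitizeStructTagValue(s string) string {
        s = strings.ReplaceAll(s, "`", "")
        s = strings.ReplaceAll(s, `"`, "")
        return strings.TrimSpace(s)
    }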
2026-01-10 13:42:25 +02:00
f258f8baeb feat(writer): 🎉 Add filename sanitization for DBML identifiers
All checks were successful
CI / Test (1.24) (push) Successful in 27m23s
CI / Test (1.25) (push) Successful in 27m16s
CI / Build (push) Successful in 27m40s
CI / Lint (push) Successful in 27m29s
Release / Build and Release (push) Successful in 27m21s
Integration Tests / Integration Tests (push) Successful in 27m17s
* Implement SanitizeFilename function to clean identifiers
* Remove quotes, comments, and invalid characters from filenames
* Update filename generation in writers to use sanitized names
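SanitizeFilename is likewise only referenced, not shown. A plausible sketch based on the bullets above; the exact rules are an assumption:

    // Sketch only: drops quotes and bracketed DBML comments, then replaces
    // characters that are invalid in filenames.
    func SanitizeFilename(name string) string {
        name = regexp.MustCompile(`\[.*?\]`).ReplaceAllString(name, "") // e.g. [note: '...']
        name = strings.NewReplacer("\"", "", "'", "", "`", "").Replace(name)
        name = strings.TrimSpace(name)
        return strings.NewReplacer("/", "_", "\\", "_", " ", "_").Replace(name)
    }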
2026-01-10 13:32:33 +02:00
6388daba56 feat(reader): 🎉 Add support for multi-file DBML loading
All checks were successful
CI / Test (1.24) (push) Successful in 27m13s
CI / Test (1.25) (push) Successful in 27m5s
CI / Build (push) Successful in 27m16s
CI / Lint (push) Successful in 27m0s
Integration Tests / Integration Tests (push) Successful in 27m14s
Release / Build and Release (push) Successful in 25m52s
* Implement directory reading for DBML files.
* Merge schemas and tables from multiple files.
* Add tests for multi-file loading and merging behavior.
* Enhance file discovery and sorting logic.
2026-01-10 13:17:30 +02:00
f6c3f2b460 feat(bun): 🎉 Enhance nullability handling in column parsing
All checks were successful
CI / Test (1.24) (push) Successful in 27m40s
CI / Test (1.25) (push) Successful in 27m32s
CI / Lint (push) Successful in 27m46s
CI / Build (push) Successful in 27m56s
Integration Tests / Integration Tests (push) Successful in 27m40s
* Introduce explicit nullability markers in column tags.
* Update logic to infer nullability based on Go types when no markers are present.
* Ensure correct tags are generated for nullable and non-nullable fields.
2026-01-04 22:11:44 +02:00
156e655571 chore(ci): 🎉 Install PostgreSQL client for integration tests
Some checks failed
CI / Test (1.24) (push) Successful in 27m31s
CI / Lint (push) Successful in 27m52s
CI / Test (1.25) (push) Successful in 27m35s
CI / Build (push) Successful in 28m5s
Integration Tests / Integration Tests (push) Failing after 27m44s
2026-01-04 22:04:20 +02:00
28 changed files with 2720 additions and 167 deletions

View File

@@ -46,6 +46,11 @@ jobs:
  - name: Download dependencies
    run: go mod download
+ - name: Install PostgreSQL client
+   run: |
+     sudo apt-get update
+     sudo apt-get install -y postgresql-client
  - name: Initialize test database
    env:
      PGPASSWORD: relspec_test_password

View File

@@ -55,6 +55,7 @@ var (
  mergeSkipSequences bool
  mergeSkipTables string // Comma-separated table names to skip
  mergeVerbose bool
+ mergeReportPath string // Path to write merge report
)

var mergeCmd = &cobra.Command{
@@ -78,6 +79,12 @@ Examples:
  --source pgsql --source-conn "postgres://user:pass@localhost/source_db" \
  --output json --output-path combined.json

+ # Merge and execute on PostgreSQL database with report
+ relspec merge --target json --target-path base.json \
+   --source json --source-path additional.json \
+   --output pgsql --output-conn "postgres://user:pass@localhost/target_db" \
+   --merge-report merge-report.json

  # Merge DBML and YAML, skip relations
  relspec merge --target dbml --target-path schema.dbml \
    --source yaml --source-path tables.yaml \
@@ -115,6 +122,7 @@ func init() {
  mergeCmd.Flags().BoolVar(&mergeSkipSequences, "skip-sequences", false, "Skip sequences during merge")
  mergeCmd.Flags().StringVar(&mergeSkipTables, "skip-tables", "", "Comma-separated list of table names to skip during merge")
  mergeCmd.Flags().BoolVar(&mergeVerbose, "verbose", false, "Show verbose output")
+ mergeCmd.Flags().StringVar(&mergeReportPath, "merge-report", "", "Path to write merge report (JSON format)")
}

func runMerge(cmd *cobra.Command, args []string) error {
@@ -229,7 +237,7 @@ func runMerge(cmd *cobra.Command, args []string) error {
    fmt.Fprintf(os.Stderr, "  Path: %s\n", mergeOutputPath)
  }

- err = writeDatabaseForMerge(mergeOutputType, mergeOutputPath, "", targetDB, "Output")
+ err = writeDatabaseForMerge(mergeOutputType, mergeOutputPath, mergeOutputConn, targetDB, "Output")
  if err != nil {
    return fmt.Errorf("failed to write output: %w", err)
  }
@@ -376,7 +384,17 @@ func writeDatabaseForMerge(dbType, filePath, connString string, db *models.Datab
  }
  writer = wtypeorm.NewWriter(&writers.WriterOptions{OutputPath: filePath})
case "pgsql":
- writer = wpgsql.NewWriter(&writers.WriterOptions{OutputPath: filePath})
+ writerOpts := &writers.WriterOptions{OutputPath: filePath}
+ if connString != "" {
+   writerOpts.Metadata = map[string]interface{}{
+     "connection_string": connString,
+   }
+   // Add report path if merge report is enabled
+   if mergeReportPath != "" {
+     writerOpts.Metadata["report_path"] = mergeReportPath
+   }
+ }
+ writer = wpgsql.NewWriter(writerOpts)
default:
  return fmt.Errorf("%s: unsupported format '%s'", label, dbType)
}
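The consuming side in the pgsql writer is not part of this diff. Assuming WriterOptions.Metadata is a plain map[string]interface{}, the writer would read these options roughly like this:

    // Hypothetical reading side inside the pgsql writer.
    func connectionInfo(opts *writers.WriterOptions) (connString, reportPath string) {
        if opts == nil || opts.Metadata == nil {
            return "", ""
        }
        if v, ok := opts.Metadata["connection_string"].(string); ok {
            connString = v
        }
        if v, ok := opts.Metadata["report_path"].(string); ok {
            reportPath = v
        }
        return connString, reportPath
    }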

View File

@@ -4,31 +4,31 @@ import "strings"
var GoToStdTypes = map[string]string{
  "bool": "boolean",
- "int64": "integer",
+ "int64": "bigint",
  "int": "integer",
- "int8": "integer",
- "int16": "integer",
+ "int8": "smallint",
+ "int16": "smallint",
  "int32": "integer",
  "uint": "integer",
- "uint8": "integer",
- "uint16": "integer",
+ "uint8": "smallint",
+ "uint16": "smallint",
  "uint32": "integer",
- "uint64": "integer",
- "uintptr": "integer",
- "znullint64": "integer",
+ "uint64": "bigint",
+ "uintptr": "bigint",
+ "znullint64": "bigint",
  "znullint32": "integer",
- "znullbyte": "integer",
+ "znullbyte": "smallint",
  "float64": "double",
  "float32": "double",
  "complex64": "double",
  "complex128": "double",
  "customfloat64": "double",
- "string": "string",
- "Pointer": "integer",
+ "string": "text",
+ "Pointer": "bigint",
  "[]byte": "blob",
- "customdate": "string",
- "customtime": "string",
- "customtimestamp": "string",
+ "customdate": "date",
+ "customtime": "time",
+ "customtimestamp": "timestamp",
  "sqlfloat64": "double",
  "sqlfloat16": "double",
  "sqluuid": "uuid",
@@ -36,9 +36,9 @@ var GoToStdTypes = map[string]string{
"sqljson": "json", "sqljson": "json",
"sqlint64": "bigint", "sqlint64": "bigint",
"sqlint32": "integer", "sqlint32": "integer",
"sqlint16": "integer", "sqlint16": "smallint",
"sqlbool": "boolean", "sqlbool": "boolean",
"sqlstring": "string", "sqlstring": "text",
"nullablejsonb": "jsonb", "nullablejsonb": "jsonb",
"nullablejson": "json", "nullablejson": "json",
"nullableuuid": "uuid", "nullableuuid": "uuid",
@@ -67,7 +67,7 @@ var GoToPGSQLTypes = map[string]string{
"float32": "real", "float32": "real",
"complex64": "double precision", "complex64": "double precision",
"complex128": "double precision", "complex128": "double precision",
"customfloat64": "double precisio", "customfloat64": "double precision",
"string": "text", "string": "text",
"Pointer": "bigint", "Pointer": "bigint",
"[]byte": "bytea", "[]byte": "bytea",
@@ -81,9 +81,9 @@ var GoToPGSQLTypes = map[string]string{
"sqljson": "json", "sqljson": "json",
"sqlint64": "bigint", "sqlint64": "bigint",
"sqlint32": "integer", "sqlint32": "integer",
"sqlint16": "integer", "sqlint16": "smallint",
"sqlbool": "boolean", "sqlbool": "boolean",
"sqlstring": "string", "sqlstring": "text",
"nullablejsonb": "jsonb", "nullablejsonb": "jsonb",
"nullablejson": "json", "nullablejson": "json",
"nullableuuid": "uuid", "nullableuuid": "uuid",

View File

@@ -632,6 +632,9 @@ func (r *Reader) parseColumn(fieldName string, fieldType ast.Expr, tag string, s
  column.Name = parts[0]
  }

+ // Track if we found explicit nullability markers
+ hasExplicitNullableMarker := false

  // Parse tag attributes
  for _, part := range parts[1:] {
    kv := strings.SplitN(part, ":", 2)
@@ -649,6 +652,10 @@ func (r *Reader) parseColumn(fieldName string, fieldType ast.Expr, tag string, s
    column.IsPrimaryKey = true
  case "notnull":
    column.NotNull = true
+   hasExplicitNullableMarker = true
+ case "nullzero":
+   column.NotNull = false
+   hasExplicitNullableMarker = true
  case "autoincrement":
    column.AutoIncrement = true
  case "default":
@@ -664,17 +671,15 @@ func (r *Reader) parseColumn(fieldName string, fieldType ast.Expr, tag string, s
  // Determine if nullable based on Go type and bun tags
  // In Bun:
- // - nullzero tag means the field is nullable (can be NULL in DB)
- // - absence of nullzero means the field is NOT NULL
- // - primitive types (int64, bool, string) are NOT NULL by default
- column.NotNull = true
- if strings.Contains(bunTag, "nullzero") {
-   column.NotNull = false
- } else {
-   column.NotNull = !r.isNullableGoType(fieldType)
- }
+ // - explicit "notnull" tag means NOT NULL
+ // - explicit "nullzero" tag means nullable
+ // - absence of explicit markers: infer from Go type
+ if !hasExplicitNullableMarker {
+   // Infer from Go type if no explicit marker found
+   column.NotNull = !r.isNullableGoType(fieldType)
+ }
  // Primary keys are always NOT NULL
  if column.IsPrimaryKey {
    column.NotNull = true
  }
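The effect of the new precedence, illustrated with a hypothetical Bun model:

    type Example struct {
        ID    int64  `bun:"id,pk"`         // primary key: always NOT NULL
        Name  string `bun:"name,notnull"`  // explicit notnull marker wins
        Note  string `bun:"note,nullzero"` // explicit nullzero marker wins
        Score *int64 `bun:"score"`         // no marker: pointer type inferred nullable
        Count int64  `bun:"count"`         // no marker: value type inferred NOT NULL
    }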

View File

@@ -4,7 +4,9 @@ import (
"bufio" "bufio"
"fmt" "fmt"
"os" "os"
"path/filepath"
"regexp" "regexp"
"sort"
"strings" "strings"
"git.warky.dev/wdevs/relspecgo/pkg/models" "git.warky.dev/wdevs/relspecgo/pkg/models"
@@ -24,11 +26,23 @@ func NewReader(options *readers.ReaderOptions) *Reader {
}

// ReadDatabase reads and parses DBML input, returning a Database model
+ // If FilePath points to a directory, all .dbml files are loaded and merged
func (r *Reader) ReadDatabase() (*models.Database, error) {
  if r.options.FilePath == "" {
    return nil, fmt.Errorf("file path is required for DBML reader")
  }

+ // Check if path is a directory
+ info, err := os.Stat(r.options.FilePath)
+ if err != nil {
+   return nil, fmt.Errorf("failed to stat path: %w", err)
+ }
+ if info.IsDir() {
+   return r.readDirectoryDBML(r.options.FilePath)
+ }

+ // Single file - existing logic
  content, err := os.ReadFile(r.options.FilePath)
  if err != nil {
    return nil, fmt.Errorf("failed to read file: %w", err)
@@ -67,15 +81,301 @@ func (r *Reader) ReadTable() (*models.Table, error) {
  return schema.Tables[0], nil
}

+ // readDirectoryDBML processes all .dbml files in directory
+ // Returns merged Database model
+ func (r *Reader) readDirectoryDBML(dirPath string) (*models.Database, error) {
+   // Discover and sort DBML files
+   files, err := r.discoverDBMLFiles(dirPath)
+   if err != nil {
+     return nil, fmt.Errorf("failed to discover DBML files: %w", err)
+   }
+   // If no files found, return empty database
+   if len(files) == 0 {
+     db := models.InitDatabase("database")
+     if r.options.Metadata != nil {
+       if name, ok := r.options.Metadata["name"].(string); ok {
+         db.Name = name
+       }
+     }
+     return db, nil
+   }
+   // Initialize database (will be merged with files)
+   var db *models.Database
+   // Process each file in sorted order
+   for _, filePath := range files {
+     content, err := os.ReadFile(filePath)
+     if err != nil {
+       return nil, fmt.Errorf("failed to read file %s: %w", filePath, err)
+     }
+     fileDB, err := r.parseDBML(string(content))
+     if err != nil {
+       return nil, fmt.Errorf("failed to parse file %s: %w", filePath, err)
+     }
+     // First file initializes the database
+     if db == nil {
+       db = fileDB
+     } else {
+       // Subsequent files are merged
+       mergeDatabase(db, fileDB)
+     }
+   }
+   return db, nil
+ }

- // stripQuotes removes surrounding quotes from an identifier
+ // stripQuotes removes surrounding quotes and comments from an identifier
func stripQuotes(s string) string {
  s = strings.TrimSpace(s)

+ // Remove DBML comments in brackets (e.g., [note: 'description'])
+ // This handles inline comments like: "table_name" [note: 'comment']
+ commentRegex := regexp.MustCompile(`\s*\[.*?\]\s*`)
+ s = commentRegex.ReplaceAllString(s, "")
+ // Trim again after removing comments
+ s = strings.TrimSpace(s)

+ // Remove surrounding quotes (double or single)
  if len(s) >= 2 && ((s[0] == '"' && s[len(s)-1] == '"') || (s[0] == '\'' && s[len(s)-1] == '\'')) {
    return s[1 : len(s)-1]
  }
  return s
}
// parseFilePrefix extracts numeric prefix from filename
// Examples: "1_schema.dbml" -> (1, true), "tables.dbml" -> (0, false)
func parseFilePrefix(filename string) (int, bool) {
base := filepath.Base(filename)
re := regexp.MustCompile(`^(\d+)[_-]`)
matches := re.FindStringSubmatch(base)
if len(matches) > 1 {
var prefix int
_, err := fmt.Sscanf(matches[1], "%d", &prefix)
if err == nil {
return prefix, true
}
}
return 0, false
}
// hasCommentedRefs scans file content for commented-out Ref statements
// Returns true if file contains lines like: // Ref: table.col > other.col
func hasCommentedRefs(filePath string) (bool, error) {
content, err := os.ReadFile(filePath)
if err != nil {
return false, err
}
scanner := bufio.NewScanner(strings.NewReader(string(content)))
commentedRefRegex := regexp.MustCompile(`^\s*//.*Ref:\s+`)
for scanner.Scan() {
line := scanner.Text()
if commentedRefRegex.MatchString(line) {
return true, nil
}
}
return false, nil
}
// discoverDBMLFiles finds all .dbml files in directory and returns them sorted
func (r *Reader) discoverDBMLFiles(dirPath string) ([]string, error) {
pattern := filepath.Join(dirPath, "*.dbml")
files, err := filepath.Glob(pattern)
if err != nil {
return nil, fmt.Errorf("failed to glob .dbml files: %w", err)
}
return sortDBMLFiles(files), nil
}
// sortDBMLFiles sorts files by:
// 1. Files without commented refs (by numeric prefix, then alphabetically)
// 2. Files with commented refs (by numeric prefix, then alphabetically)
func sortDBMLFiles(files []string) []string {
// Create a slice to hold file info for sorting
type fileInfo struct {
path string
hasCommented bool
prefix int
hasPrefix bool
basename string
}
fileInfos := make([]fileInfo, 0, len(files))
for _, file := range files {
hasCommented, err := hasCommentedRefs(file)
if err != nil {
// If we can't read the file, treat it as not having commented refs
hasCommented = false
}
prefix, hasPrefix := parseFilePrefix(file)
basename := filepath.Base(file)
fileInfos = append(fileInfos, fileInfo{
path: file,
hasCommented: hasCommented,
prefix: prefix,
hasPrefix: hasPrefix,
basename: basename,
})
}
// Sort by: hasCommented (false first), hasPrefix (true first), prefix, basename
sort.Slice(fileInfos, func(i, j int) bool {
// First, sort by commented refs (files without commented refs come first)
if fileInfos[i].hasCommented != fileInfos[j].hasCommented {
return !fileInfos[i].hasCommented
}
// Then by presence of prefix (files with prefix come first)
if fileInfos[i].hasPrefix != fileInfos[j].hasPrefix {
return fileInfos[i].hasPrefix
}
// If both have prefix, sort by prefix value
if fileInfos[i].hasPrefix && fileInfos[j].hasPrefix {
if fileInfos[i].prefix != fileInfos[j].prefix {
return fileInfos[i].prefix < fileInfos[j].prefix
}
}
// Finally, sort alphabetically by basename
return fileInfos[i].basename < fileInfos[j].basename
})
// Extract sorted paths
sortedFiles := make([]string, len(fileInfos))
for i, info := range fileInfos {
sortedFiles[i] = info.path
}
return sortedFiles
}
// mergeTable combines two table definitions
// Merges: Columns (map), Constraints (map), Indexes (map), Relationships (map)
// Uses first non-empty Description
func mergeTable(baseTable, fileTable *models.Table) {
// Merge columns (map naturally merges - later keys overwrite)
for key, col := range fileTable.Columns {
baseTable.Columns[key] = col
}
// Merge constraints
for key, constraint := range fileTable.Constraints {
baseTable.Constraints[key] = constraint
}
// Merge indexes
for key, index := range fileTable.Indexes {
baseTable.Indexes[key] = index
}
// Merge relationships
for key, rel := range fileTable.Relationships {
baseTable.Relationships[key] = rel
}
// Use first non-empty description
if baseTable.Description == "" && fileTable.Description != "" {
baseTable.Description = fileTable.Description
}
// Merge metadata maps
if baseTable.Metadata == nil {
baseTable.Metadata = make(map[string]any)
}
for key, val := range fileTable.Metadata {
baseTable.Metadata[key] = val
}
}
// mergeSchema finds or creates schema and merges tables
func mergeSchema(baseDB *models.Database, fileSchema *models.Schema) {
// Find existing schema by name (normalize names by stripping quotes)
var existingSchema *models.Schema
fileSchemaName := stripQuotes(fileSchema.Name)
for _, schema := range baseDB.Schemas {
if stripQuotes(schema.Name) == fileSchemaName {
existingSchema = schema
break
}
}
// If schema doesn't exist, add it and return
if existingSchema == nil {
baseDB.Schemas = append(baseDB.Schemas, fileSchema)
return
}
// Merge tables from fileSchema into existingSchema
for _, fileTable := range fileSchema.Tables {
// Find existing table by name (normalize names by stripping quotes)
var existingTable *models.Table
fileTableName := stripQuotes(fileTable.Name)
for _, table := range existingSchema.Tables {
if stripQuotes(table.Name) == fileTableName {
existingTable = table
break
}
}
// If table doesn't exist, add it
if existingTable == nil {
existingSchema.Tables = append(existingSchema.Tables, fileTable)
} else {
// Table exists in both files - merge its properties
mergeTable(existingTable, fileTable)
}
}
// Merge other schema properties
existingSchema.Views = append(existingSchema.Views, fileSchema.Views...)
existingSchema.Sequences = append(existingSchema.Sequences, fileSchema.Sequences...)
existingSchema.Scripts = append(existingSchema.Scripts, fileSchema.Scripts...)
// Merge permissions
if existingSchema.Permissions == nil {
existingSchema.Permissions = make(map[string]string)
}
for key, val := range fileSchema.Permissions {
existingSchema.Permissions[key] = val
}
// Merge metadata
if existingSchema.Metadata == nil {
existingSchema.Metadata = make(map[string]any)
}
for key, val := range fileSchema.Metadata {
existingSchema.Metadata[key] = val
}
}
// mergeDatabase merges schemas from fileDB into baseDB
func mergeDatabase(baseDB, fileDB *models.Database) {
// Merge each schema from fileDB
for _, fileSchema := range fileDB.Schemas {
mergeSchema(baseDB, fileSchema)
}
// Merge domains
baseDB.Domains = append(baseDB.Domains, fileDB.Domains...)
// Use first non-empty description
if baseDB.Description == "" && fileDB.Description != "" {
baseDB.Description = fileDB.Description
}
}
// parseDBML parses DBML content and returns a Database model
func (r *Reader) parseDBML(content string) (*models.Database, error) {
  db := models.InitDatabase("database")
@@ -287,10 +587,10 @@ func (r *Reader) parseColumn(line, tableName, schemaName string) (*models.Column
  refOp := strings.TrimSpace(refStr)
  var isReverse bool
  if strings.HasPrefix(refOp, "<") {
-   isReverse = column.IsPrimaryKey // < on PK means "is referenced by" (reverse)
- } else if strings.HasPrefix(refOp, ">") {
-   isReverse = !column.IsPrimaryKey // > on FK means reverse
+   // < means "is referenced by" - only makes sense on PK columns
+   isReverse = column.IsPrimaryKey
  }
+ // > means "references" - always a forward FK, never reverse

  constraint = r.parseRef(refStr)
  if constraint != nil {
@@ -332,27 +632,31 @@ func (r *Reader) parseIndex(line, tableName, schemaName string) *models.Index {
  // Format: (columns) [attributes] OR columnname [attributes]
  var columns []string

- if strings.Contains(line, "(") && strings.Contains(line, ")") {
+ // Find the attributes section to avoid parsing parentheses in notes/attributes
+ attrStart := strings.Index(line, "[")
+ columnPart := line
+ if attrStart > 0 {
+   columnPart = line[:attrStart]
+ }
+ if strings.Contains(columnPart, "(") && strings.Contains(columnPart, ")") {
    // Multi-column format: (col1, col2) [attributes]
-   colStart := strings.Index(line, "(")
-   colEnd := strings.Index(line, ")")
+   colStart := strings.Index(columnPart, "(")
+   colEnd := strings.Index(columnPart, ")")
    if colStart >= colEnd {
      return nil
    }
-   columnsStr := line[colStart+1 : colEnd]
+   columnsStr := columnPart[colStart+1 : colEnd]
    for _, col := range strings.Split(columnsStr, ",") {
      columns = append(columns, stripQuotes(strings.TrimSpace(col)))
    }
- } else if strings.Contains(line, "[") {
+ } else if attrStart > 0 {
    // Single column format: columnname [attributes]
    // Extract column name before the bracket
-   idx := strings.Index(line, "[")
-   if idx > 0 {
-     colName := strings.TrimSpace(line[:idx])
-     if colName != "" {
-       columns = []string{stripQuotes(colName)}
-     }
-   }
+   colName := strings.TrimSpace(columnPart)
+   if colName != "" {
+     columns = []string{stripQuotes(colName)}
+   }
  }
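A test sketch for the behavior this fix enables; the Index.Columns field name is taken from the code above, the rest is hypothetical:

    func TestParseIndex_NoteWithParens(t *testing.T) {
        r := NewReader(&readers.ReaderOptions{})
        // Parentheses inside the note must not be parsed as a column list.
        idx := r.parseIndex("status [name: 'idx_status', note: 'active (default)']", "users", "public")
        if idx == nil || len(idx.Columns) != 1 || idx.Columns[0] != "status" {
            t.Fatalf("expected single column 'status', got %+v", idx)
        }
    }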

View File

@@ -1,6 +1,7 @@
package dbml

import (
+ "os"
  "path/filepath"
  "testing"
@@ -517,3 +518,286 @@ func TestGetForeignKeys(t *testing.T) {
t.Error("Expected foreign key constraint type") t.Error("Expected foreign key constraint type")
} }
} }
// Tests for multi-file directory loading
func TestReadDirectory_MultipleFiles(t *testing.T) {
opts := &readers.ReaderOptions{
FilePath: filepath.Join("..", "..", "..", "tests", "assets", "dbml", "multifile"),
}
reader := NewReader(opts)
db, err := reader.ReadDatabase()
if err != nil {
t.Fatalf("ReadDatabase() error = %v", err)
}
if db == nil {
t.Fatal("ReadDatabase() returned nil database")
}
// Should have public schema
if len(db.Schemas) == 0 {
t.Fatal("Expected at least one schema")
}
var publicSchema *models.Schema
for _, schema := range db.Schemas {
if schema.Name == "public" {
publicSchema = schema
break
}
}
if publicSchema == nil {
t.Fatal("Public schema not found")
}
// Should have 3 tables: users, posts, comments
if len(publicSchema.Tables) != 3 {
t.Fatalf("Expected 3 tables, got %d", len(publicSchema.Tables))
}
// Find tables
var usersTable, postsTable, commentsTable *models.Table
for _, table := range publicSchema.Tables {
switch table.Name {
case "users":
usersTable = table
case "posts":
postsTable = table
case "comments":
commentsTable = table
}
}
if usersTable == nil {
t.Fatal("Users table not found")
}
if postsTable == nil {
t.Fatal("Posts table not found")
}
if commentsTable == nil {
t.Fatal("Comments table not found")
}
// Verify users table has merged columns from 1_users.dbml and 3_add_columns.dbml
expectedUserColumns := []string{"id", "email", "name", "created_at"}
if len(usersTable.Columns) != len(expectedUserColumns) {
t.Errorf("Expected %d columns in users table, got %d", len(expectedUserColumns), len(usersTable.Columns))
}
for _, colName := range expectedUserColumns {
if _, exists := usersTable.Columns[colName]; !exists {
t.Errorf("Expected column '%s' in users table", colName)
}
}
// Verify posts table columns
expectedPostColumns := []string{"id", "user_id", "title", "content", "created_at"}
for _, colName := range expectedPostColumns {
if _, exists := postsTable.Columns[colName]; !exists {
t.Errorf("Expected column '%s' in posts table", colName)
}
}
}
func TestReadDirectory_TableMerging(t *testing.T) {
opts := &readers.ReaderOptions{
FilePath: filepath.Join("..", "..", "..", "tests", "assets", "dbml", "multifile"),
}
reader := NewReader(opts)
db, err := reader.ReadDatabase()
if err != nil {
t.Fatalf("ReadDatabase() error = %v", err)
}
// Find users table
var usersTable *models.Table
for _, schema := range db.Schemas {
for _, table := range schema.Tables {
if table.Name == "users" && schema.Name == "public" {
usersTable = table
break
}
}
}
if usersTable == nil {
t.Fatal("Users table not found")
}
// Verify columns from file 1 (id, email)
if _, exists := usersTable.Columns["id"]; !exists {
t.Error("Column 'id' from 1_users.dbml not found")
}
if _, exists := usersTable.Columns["email"]; !exists {
t.Error("Column 'email' from 1_users.dbml not found")
}
// Verify columns from file 3 (name, created_at)
if _, exists := usersTable.Columns["name"]; !exists {
t.Error("Column 'name' from 3_add_columns.dbml not found")
}
if _, exists := usersTable.Columns["created_at"]; !exists {
t.Error("Column 'created_at' from 3_add_columns.dbml not found")
}
// Verify column properties from file 1
emailCol := usersTable.Columns["email"]
if !emailCol.NotNull {
t.Error("Email column should be not null (from 1_users.dbml)")
}
if emailCol.Type != "varchar(255)" {
t.Errorf("Expected email type 'varchar(255)', got '%s'", emailCol.Type)
}
}
func TestReadDirectory_CommentedRefsLast(t *testing.T) {
// This test verifies that files with commented refs are processed last
// by checking that the file discovery returns them in the correct order
dirPath := filepath.Join("..", "..", "..", "tests", "assets", "dbml", "multifile")
opts := &readers.ReaderOptions{
FilePath: dirPath,
}
reader := NewReader(opts)
files, err := reader.discoverDBMLFiles(dirPath)
if err != nil {
t.Fatalf("discoverDBMLFiles() error = %v", err)
}
if len(files) < 2 {
t.Skip("Not enough files to test ordering")
}
// Check that 9_refs.dbml (which has commented refs) comes last
lastFile := filepath.Base(files[len(files)-1])
if lastFile != "9_refs.dbml" {
t.Errorf("Expected last file to be '9_refs.dbml' (has commented refs), got '%s'", lastFile)
}
// Check that numbered files without commented refs come first
firstFile := filepath.Base(files[0])
if firstFile != "1_users.dbml" {
t.Errorf("Expected first file to be '1_users.dbml', got '%s'", firstFile)
}
}
func TestReadDirectory_EmptyDirectory(t *testing.T) {
// Create a temporary empty directory
tmpDir := filepath.Join("..", "..", "..", "tests", "assets", "dbml", "empty_test_dir")
err := os.MkdirAll(tmpDir, 0755)
if err != nil {
t.Fatalf("Failed to create temp directory: %v", err)
}
defer os.RemoveAll(tmpDir)
opts := &readers.ReaderOptions{
FilePath: tmpDir,
}
reader := NewReader(opts)
db, err := reader.ReadDatabase()
if err != nil {
t.Fatalf("ReadDatabase() should not error on empty directory, got: %v", err)
}
if db == nil {
t.Fatal("ReadDatabase() returned nil database")
}
// Empty directory should return empty database
if len(db.Schemas) != 0 {
t.Errorf("Expected 0 schemas for empty directory, got %d", len(db.Schemas))
}
}
func TestReadDatabase_BackwardCompat(t *testing.T) {
// Test that single file loading still works
opts := &readers.ReaderOptions{
FilePath: filepath.Join("..", "..", "..", "tests", "assets", "dbml", "simple.dbml"),
}
reader := NewReader(opts)
db, err := reader.ReadDatabase()
if err != nil {
t.Fatalf("ReadDatabase() error = %v", err)
}
if db == nil {
t.Fatal("ReadDatabase() returned nil database")
}
if len(db.Schemas) == 0 {
t.Fatal("Expected at least one schema")
}
schema := db.Schemas[0]
if len(schema.Tables) != 1 {
t.Fatalf("Expected 1 table, got %d", len(schema.Tables))
}
table := schema.Tables[0]
if table.Name != "users" {
t.Errorf("Expected table name 'users', got '%s'", table.Name)
}
}
func TestParseFilePrefix(t *testing.T) {
tests := []struct {
filename string
wantPrefix int
wantHas bool
}{
{"1_schema.dbml", 1, true},
{"2_tables.dbml", 2, true},
{"10_relationships.dbml", 10, true},
{"99_data.dbml", 99, true},
{"schema.dbml", 0, false},
{"tables_no_prefix.dbml", 0, false},
{"/path/to/1_file.dbml", 1, true},
{"/path/to/file.dbml", 0, false},
{"1-file.dbml", 1, true},
{"2-another.dbml", 2, true},
}
for _, tt := range tests {
t.Run(tt.filename, func(t *testing.T) {
gotPrefix, gotHas := parseFilePrefix(tt.filename)
if gotPrefix != tt.wantPrefix {
t.Errorf("parseFilePrefix(%s) prefix = %d, want %d", tt.filename, gotPrefix, tt.wantPrefix)
}
if gotHas != tt.wantHas {
t.Errorf("parseFilePrefix(%s) hasPrefix = %v, want %v", tt.filename, gotHas, tt.wantHas)
}
})
}
}
func TestHasCommentedRefs(t *testing.T) {
// Test with the actual multifile test fixtures
tests := []struct {
filename string
wantHas bool
}{
{filepath.Join("..", "..", "..", "tests", "assets", "dbml", "multifile", "1_users.dbml"), false},
{filepath.Join("..", "..", "..", "tests", "assets", "dbml", "multifile", "2_posts.dbml"), false},
{filepath.Join("..", "..", "..", "tests", "assets", "dbml", "multifile", "3_add_columns.dbml"), false},
{filepath.Join("..", "..", "..", "tests", "assets", "dbml", "multifile", "9_refs.dbml"), true},
}
for _, tt := range tests {
t.Run(filepath.Base(tt.filename), func(t *testing.T) {
gotHas, err := hasCommentedRefs(tt.filename)
if err != nil {
t.Fatalf("hasCommentedRefs() error = %v", err)
}
if gotHas != tt.wantHas {
t.Errorf("hasCommentedRefs(%s) = %v, want %v", filepath.Base(tt.filename), gotHas, tt.wantHas)
}
})
}
}

View File

@@ -329,10 +329,10 @@ func (r *Reader) deriveRelationship(table *models.Table, fk *models.Constraint)
  relationshipName := fmt.Sprintf("%s_to_%s", table.Name, fk.ReferencedTable)
  relationship := models.InitRelationship(relationshipName, models.OneToMany)
- relationship.FromTable = fk.ReferencedTable
- relationship.FromSchema = fk.ReferencedSchema
- relationship.ToTable = table.Name
- relationship.ToSchema = table.Schema
+ relationship.FromTable = table.Name
+ relationship.FromSchema = table.Schema
+ relationship.ToTable = fk.ReferencedTable
+ relationship.ToSchema = fk.ReferencedSchema
  relationship.ForeignKey = fk.Name

  // Store constraint actions in properties

View File

@@ -328,12 +328,12 @@ func TestDeriveRelationship(t *testing.T) {
t.Errorf("Expected relationship type %s, got %s", models.OneToMany, rel.Type) t.Errorf("Expected relationship type %s, got %s", models.OneToMany, rel.Type)
} }
if rel.FromTable != "users" { if rel.FromTable != "orders" {
t.Errorf("Expected FromTable 'users', got '%s'", rel.FromTable) t.Errorf("Expected FromTable 'orders', got '%s'", rel.FromTable)
} }
if rel.ToTable != "orders" { if rel.ToTable != "users" {
t.Errorf("Expected ToTable 'orders', got '%s'", rel.ToTable) t.Errorf("Expected ToTable 'users', got '%s'", rel.ToTable)
} }
if rel.ForeignKey != "fk_orders_user_id" { if rel.ForeignKey != "fk_orders_user_id" {

View File

@@ -5,6 +5,7 @@ import (
"strings" "strings"
"git.warky.dev/wdevs/relspecgo/pkg/models" "git.warky.dev/wdevs/relspecgo/pkg/models"
"git.warky.dev/wdevs/relspecgo/pkg/writers"
) )
// TemplateData represents the data passed to the template for code generation // TemplateData represents the data passed to the template for code generation
@@ -111,13 +112,17 @@ func NewModelData(table *models.Table, schema string, typeMapper *TypeMapper) *M
    tableName = schema + "." + table.Name
  }

- // Generate model name: singularize and convert to PascalCase
+ // Generate model name: Model + Schema + Table (all PascalCase)
  singularTable := Singularize(table.Name)
- modelName := SnakeCaseToPascalCase(singularTable)
- // Add "Model" prefix if not already present
- if !hasModelPrefix(modelName) {
-   modelName = "Model" + modelName
- }
+ tablePart := SnakeCaseToPascalCase(singularTable)
+ // Include schema name in model name
+ var modelName string
+ if schema != "" {
+   schemaPart := SnakeCaseToPascalCase(schema)
+   modelName = "Model" + schemaPart + tablePart
+ } else {
+   modelName = "Model" + tablePart
+ }

  model := &ModelData{
@@ -133,8 +138,10 @@ func NewModelData(table *models.Table, schema string, typeMapper *TypeMapper) *M
  // Find primary key
  for _, col := range table.Columns {
    if col.IsPrimaryKey {
-     model.PrimaryKeyField = SnakeCaseToPascalCase(col.Name)
-     model.IDColumnName = col.Name
+     // Sanitize column name to remove backticks
+     safeName := writers.SanitizeStructTagValue(col.Name)
+     model.PrimaryKeyField = SnakeCaseToPascalCase(safeName)
+     model.IDColumnName = safeName
      // Check if PK type is a SQL type (contains resolvespec_common or sql_types)
      goType := typeMapper.SQLTypeToGoType(col.Type, col.NotNull)
      model.PrimaryKeyIsSQL = strings.Contains(goType, "resolvespec_common") || strings.Contains(goType, "sql_types")
@@ -146,6 +153,8 @@ func NewModelData(table *models.Table, schema string, typeMapper *TypeMapper) *M
  columns := sortColumns(table.Columns)
  for _, col := range columns {
    field := columnToField(col, table, typeMapper)
+   // Check for name collision with generated methods and rename if needed
+   field.Name = resolveFieldNameCollision(field.Name)
    model.Fields = append(model.Fields, field)
  }
@@ -154,10 +163,13 @@ func NewModelData(table *models.Table, schema string, typeMapper *TypeMapper) *M
// columnToField converts a models.Column to FieldData
func columnToField(col *models.Column, table *models.Table, typeMapper *TypeMapper) *FieldData {
- fieldName := SnakeCaseToPascalCase(col.Name)
+ // Sanitize column name first to remove backticks before generating field name
+ safeName := writers.SanitizeStructTagValue(col.Name)
+ fieldName := SnakeCaseToPascalCase(safeName)
  goType := typeMapper.SQLTypeToGoType(col.Type, col.NotNull)
  bunTag := typeMapper.BuildBunTag(col, table)
- jsonTag := col.Name // Use column name for JSON tag
+ // Use same sanitized name for JSON tag
+ jsonTag := safeName

  return &FieldData{
    Name: fieldName,
@@ -184,9 +196,28 @@ func formatComment(description, comment string) string {
  return comment
}

- // hasModelPrefix checks if a name already has "Model" prefix
- func hasModelPrefix(name string) bool {
-   return len(name) >= 5 && name[:5] == "Model"
- }
+ // resolveFieldNameCollision checks if a field name conflicts with generated method names
+ // and adds an underscore suffix if there's a collision
+ func resolveFieldNameCollision(fieldName string) string {
+   // List of method names that are generated by the template
+   reservedNames := map[string]bool{
+     "TableName":     true,
+     "TableNameOnly": true,
+     "SchemaName":    true,
+     "GetID":         true,
+     "GetIDStr":      true,
+     "SetID":         true,
+     "UpdateID":      true,
+     "GetIDName":     true,
+     "GetPrefix":     true,
+   }
+   // Check if field name conflicts with a reserved method name
+   if reservedNames[fieldName] {
+     return fieldName + "_"
+   }
+   return fieldName
+ }

// sortColumns sorts columns by sequence, then by name

View File

@@ -5,6 +5,7 @@ import (
"strings" "strings"
"git.warky.dev/wdevs/relspecgo/pkg/models" "git.warky.dev/wdevs/relspecgo/pkg/models"
"git.warky.dev/wdevs/relspecgo/pkg/writers"
) )
// TypeMapper handles type conversions between SQL and Go types for Bun // TypeMapper handles type conversions between SQL and Go types for Bun
@@ -164,11 +165,14 @@ func (tm *TypeMapper) BuildBunTag(column *models.Column, table *models.Table) st
  var parts []string

  // Column name comes first (no prefix)
- parts = append(parts, column.Name)
+ // Sanitize to remove backticks which would break struct tag syntax
+ safeName := writers.SanitizeStructTagValue(column.Name)
+ parts = append(parts, safeName)

  // Add type if specified
  if column.Type != "" {
-   typeStr := column.Type
+   // Sanitize type to remove backticks
+   typeStr := writers.SanitizeStructTagValue(column.Type)
    if column.Length > 0 {
      typeStr = fmt.Sprintf("%s(%d)", typeStr, column.Length)
    } else if column.Precision > 0 {
@@ -188,12 +192,17 @@ func (tm *TypeMapper) BuildBunTag(column *models.Column, table *models.Table) st
  // Default value
  if column.Default != nil {
-   parts = append(parts, fmt.Sprintf("default:%v", column.Default))
+   // Sanitize default value to remove backticks
+   safeDefault := writers.SanitizeStructTagValue(fmt.Sprintf("%v", column.Default))
+   parts = append(parts, fmt.Sprintf("default:%s", safeDefault))
  }

  // Nullable (Bun uses nullzero for nullable fields)
+ // and notnull tag for explicitly non-nullable fields
  if !column.NotNull && !column.IsPrimaryKey {
    parts = append(parts, "nullzero")
+ } else if column.NotNull && !column.IsPrimaryKey {
+   parts = append(parts, "notnull")
  }

  // Check for indexes (unique indexes should be added to tag)
@@ -260,7 +269,7 @@ func (tm *TypeMapper) NeedsFmtImport(generateGetIDStr bool) bool {
// GetSQLTypesImport returns the import path for sql_types (ResolveSpec common)
func (tm *TypeMapper) GetSQLTypesImport() string {
- return "github.com/bitechdev/ResolveSpec/pkg/common"
+ return "github.com/bitechdev/ResolveSpec/pkg/spectypes"
}

// GetBunImport returns the import path for Bun

View File

@@ -4,6 +4,7 @@ import (
"fmt" "fmt"
"go/format" "go/format"
"os" "os"
"os/exec"
"path/filepath" "path/filepath"
"strings" "strings"
@@ -124,7 +125,16 @@ func (w *Writer) writeSingleFile(db *models.Database) error {
  }

  // Write output
- return w.writeOutput(formatted)
+ if err := w.writeOutput(formatted); err != nil {
+   return err
+ }
+ // Run go fmt on the output file
+ if w.options.OutputPath != "" {
+   w.runGoFmt(w.options.OutputPath)
+ }
+ return nil
}

// writeMultiFile writes each table to a separate file
@@ -207,13 +217,19 @@ func (w *Writer) writeMultiFile(db *models.Database) error {
  }

  // Generate filename: sql_{schema}_{table}.go
- filename := fmt.Sprintf("sql_%s_%s.go", schema.Name, table.Name)
+ // Sanitize schema and table names to remove quotes, comments, and invalid characters
+ safeSchemaName := writers.SanitizeFilename(schema.Name)
+ safeTableName := writers.SanitizeFilename(table.Name)
+ filename := fmt.Sprintf("sql_%s_%s.go", safeSchemaName, safeTableName)
  filepath := filepath.Join(w.options.OutputPath, filename)

  // Write file
  if err := os.WriteFile(filepath, []byte(formatted), 0644); err != nil {
    return fmt.Errorf("failed to write file %s: %w", filename, err)
  }
+ // Run go fmt on the generated file
+ w.runGoFmt(filepath)
  }
}
@@ -222,6 +238,9 @@ func (w *Writer) writeMultiFile(db *models.Database) error {
// addRelationshipFields adds relationship fields to the model based on foreign keys
func (w *Writer) addRelationshipFields(modelData *ModelData, table *models.Table, schema *models.Schema, db *models.Database) {
+ // Track used field names to detect duplicates
+ usedFieldNames := make(map[string]int)
  // For each foreign key in this table, add a belongs-to/has-one relationship
  for _, constraint := range table.Constraints {
    if constraint.Type != models.ForeignKeyConstraint {
@@ -235,8 +254,9 @@ func (w *Writer) addRelationshipFields(modelData *ModelData, table *models.Table
  }

  // Create relationship field (has-one in Bun, similar to belongs-to in GORM)
- refModelName := w.getModelName(constraint.ReferencedTable)
- fieldName := w.generateRelationshipFieldName(constraint.ReferencedTable)
+ refModelName := w.getModelName(constraint.ReferencedSchema, constraint.ReferencedTable)
+ fieldName := w.generateHasOneFieldName(constraint)
+ fieldName = w.ensureUniqueFieldName(fieldName, usedFieldNames)
  relationTag := w.typeMapper.BuildRelationshipTag(constraint, "has-one")

  modelData.AddRelationshipField(&FieldData{
@@ -263,8 +283,9 @@ func (w *Writer) addRelationshipFields(modelData *ModelData, table *models.Table
  // Check if this constraint references our table
  if constraint.ReferencedTable == table.Name && constraint.ReferencedSchema == schema.Name {
    // Add has-many relationship
-   otherModelName := w.getModelName(otherTable.Name)
-   fieldName := w.generateRelationshipFieldName(otherTable.Name) + "s" // Pluralize
+   otherModelName := w.getModelName(otherSchema.Name, otherTable.Name)
+   fieldName := w.generateHasManyFieldName(constraint, otherSchema.Name, otherTable.Name)
+   fieldName = w.ensureUniqueFieldName(fieldName, usedFieldNames)
    relationTag := w.typeMapper.BuildRelationshipTag(constraint, "has-many")

    modelData.AddRelationshipField(&FieldData{
@@ -295,22 +316,77 @@ func (w *Writer) findTable(schemaName, tableName string, db *models.Database) *m
  return nil
}

- // getModelName generates the model name from a table name
- func (w *Writer) getModelName(tableName string) string {
-   singular := Singularize(tableName)
-   modelName := SnakeCaseToPascalCase(singular)
-   if !hasModelPrefix(modelName) {
-     modelName = "Model" + modelName
-   }
-   return modelName
- }
+ // getModelName generates the model name from schema and table name
+ func (w *Writer) getModelName(schemaName, tableName string) string {
+   singular := Singularize(tableName)
+   tablePart := SnakeCaseToPascalCase(singular)
+   // Include schema name in model name
+   var modelName string
+   if schemaName != "" {
+     schemaPart := SnakeCaseToPascalCase(schemaName)
+     modelName = "Model" + schemaPart + tablePart
+   } else {
+     modelName = "Model" + tablePart
+   }
+   return modelName
+ }

- // generateRelationshipFieldName generates a field name for a relationship
- func (w *Writer) generateRelationshipFieldName(tableName string) string {
-   // Use just the prefix (3 letters) for relationship fields
-   return GeneratePrefix(tableName)
- }
+ // generateHasOneFieldName generates a field name for has-one relationships
+ // Uses the foreign key column name for uniqueness
+ func (w *Writer) generateHasOneFieldName(constraint *models.Constraint) string {
+   // Use the foreign key column name to ensure uniqueness
// If there are multiple columns, use the first one
if len(constraint.Columns) > 0 {
columnName := constraint.Columns[0]
// Convert to PascalCase for proper Go field naming
// e.g., "rid_filepointer_request" -> "RelRIDFilepointerRequest"
return "Rel" + SnakeCaseToPascalCase(columnName)
}
// Fallback to table-based prefix if no columns defined
return "Rel" + GeneratePrefix(constraint.ReferencedTable)
}
// generateHasManyFieldName generates a field name for has-many relationships
// Uses the foreign key column name + source table name to avoid duplicates
func (w *Writer) generateHasManyFieldName(constraint *models.Constraint, sourceSchemaName, sourceTableName string) string {
// For has-many, we need to include the source table name to avoid duplicates
// e.g., multiple tables referencing the same column on this table
if len(constraint.Columns) > 0 {
columnName := constraint.Columns[0]
// Get the model name for the source table (pluralized)
sourceModelName := w.getModelName(sourceSchemaName, sourceTableName)
// Remove "Model" prefix if present
sourceModelName = strings.TrimPrefix(sourceModelName, "Model")
// Convert column to PascalCase and combine with source table
// e.g., "rid_api_provider" + "Login" -> "RelRIDAPIProviderLogins"
columnPart := SnakeCaseToPascalCase(columnName)
return "Rel" + columnPart + Pluralize(sourceModelName)
}
// Fallback to table-based naming
sourceModelName := w.getModelName(sourceSchemaName, sourceTableName)
sourceModelName = strings.TrimPrefix(sourceModelName, "Model")
return "Rel" + Pluralize(sourceModelName)
}
// ensureUniqueFieldName ensures a field name is unique by adding numeric suffixes if needed
func (w *Writer) ensureUniqueFieldName(fieldName string, usedNames map[string]int) string {
originalName := fieldName
count := usedNames[originalName]
if count > 0 {
// Name is already used, add numeric suffix
fieldName = fmt.Sprintf("%s%d", originalName, count+1)
}
// Increment the counter for this base name
usedNames[originalName]++
return fieldName
}
// getPackageName returns the package name from options or defaults to "models"
@@ -341,6 +417,15 @@ func (w *Writer) writeOutput(content string) error {
  return nil
}
// runGoFmt runs go fmt on the specified file
func (w *Writer) runGoFmt(filepath string) {
cmd := exec.Command("gofmt", "-w", filepath)
if err := cmd.Run(); err != nil {
// Don't fail the whole operation if gofmt fails, just warn
fmt.Fprintf(os.Stderr, "Warning: failed to run gofmt on %s: %v\n", filepath, err)
}
}
// shouldUseMultiFile determines whether to use multi-file mode based on metadata or output path
func (w *Writer) shouldUseMultiFile() bool {
  // Check if multi_file is explicitly set in metadata

View File

@@ -66,7 +66,7 @@ func TestWriter_WriteTable(t *testing.T) {
  // Verify key elements are present
  expectations := []string{
    "package models",
-   "type ModelUser struct",
+   "type ModelPublicUser struct",
    "bun.BaseModel",
    "table:public.users",
    "alias:users",
@@ -78,9 +78,9 @@ func TestWriter_WriteTable(t *testing.T) {
"resolvespec_common.SqlTime", "resolvespec_common.SqlTime",
"bun:\"id", "bun:\"id",
"bun:\"email", "bun:\"email",
"func (m ModelUser) TableName() string", "func (m ModelPublicUser) TableName() string",
"return \"public.users\"", "return \"public.users\"",
"func (m ModelUser) GetID() int64", "func (m ModelPublicUser) GetID() int64",
} }
for _, expected := range expectations { for _, expected := range expectations {
@@ -175,12 +175,378 @@ func TestWriter_WriteDatabase_MultiFile(t *testing.T) {
  postsStr := string(postsContent)

  // Verify relationship is present with Bun format
- if !strings.Contains(postsStr, "USE") {
-   t.Errorf("Missing relationship field USE")
+ // Should now be RelUserID (has-one) instead of USE
+ if !strings.Contains(postsStr, "RelUserID") {
+   t.Errorf("Missing relationship field RelUserID (new naming convention)")
  }
  if !strings.Contains(postsStr, "rel:has-one") {
    t.Errorf("Missing Bun relationship tag: %s", postsStr)
  }
// Check users file contains has-many relationship
usersContent, err := os.ReadFile(filepath.Join(tmpDir, "sql_public_users.go"))
if err != nil {
t.Fatalf("Failed to read users file: %v", err)
}
usersStr := string(usersContent)
// Should have RelUserIDPublicPosts (has-many) field - includes schema prefix
if !strings.Contains(usersStr, "RelUserIDPublicPosts") {
t.Errorf("Missing has-many relationship field RelUserIDPublicPosts")
}
}
func TestWriter_MultipleReferencesToSameTable(t *testing.T) {
// Test scenario: api_event table with multiple foreign keys to filepointer table
db := models.InitDatabase("testdb")
schema := models.InitSchema("org")
// Filepointer table
filepointer := models.InitTable("filepointer", "org")
filepointer.Columns["id_filepointer"] = &models.Column{
Name: "id_filepointer",
Type: "bigserial",
NotNull: true,
IsPrimaryKey: true,
}
schema.Tables = append(schema.Tables, filepointer)
// API event table with two foreign keys to filepointer
apiEvent := models.InitTable("api_event", "org")
apiEvent.Columns["id_api_event"] = &models.Column{
Name: "id_api_event",
Type: "bigserial",
NotNull: true,
IsPrimaryKey: true,
}
apiEvent.Columns["rid_filepointer_request"] = &models.Column{
Name: "rid_filepointer_request",
Type: "bigint",
NotNull: false,
}
apiEvent.Columns["rid_filepointer_response"] = &models.Column{
Name: "rid_filepointer_response",
Type: "bigint",
NotNull: false,
}
// Add constraints
apiEvent.Constraints["fk_request"] = &models.Constraint{
Name: "fk_request",
Type: models.ForeignKeyConstraint,
Columns: []string{"rid_filepointer_request"},
ReferencedTable: "filepointer",
ReferencedSchema: "org",
ReferencedColumns: []string{"id_filepointer"},
}
apiEvent.Constraints["fk_response"] = &models.Constraint{
Name: "fk_response",
Type: models.ForeignKeyConstraint,
Columns: []string{"rid_filepointer_response"},
ReferencedTable: "filepointer",
ReferencedSchema: "org",
ReferencedColumns: []string{"id_filepointer"},
}
schema.Tables = append(schema.Tables, apiEvent)
db.Schemas = append(db.Schemas, schema)
// Create writer
tmpDir := t.TempDir()
opts := &writers.WriterOptions{
PackageName: "models",
OutputPath: tmpDir,
Metadata: map[string]interface{}{
"multi_file": true,
},
}
writer := NewWriter(opts)
err := writer.WriteDatabase(db)
if err != nil {
t.Fatalf("WriteDatabase failed: %v", err)
}
// Read the api_event file
apiEventContent, err := os.ReadFile(filepath.Join(tmpDir, "sql_org_api_event.go"))
if err != nil {
t.Fatalf("Failed to read api_event file: %v", err)
}
contentStr := string(apiEventContent)
// Verify both relationships have unique names based on column names
expectations := []struct {
fieldName string
tag string
}{
{"RelRIDFilepointerRequest", "join:rid_filepointer_request=id_filepointer"},
{"RelRIDFilepointerResponse", "join:rid_filepointer_response=id_filepointer"},
}
for _, exp := range expectations {
if !strings.Contains(contentStr, exp.fieldName) {
t.Errorf("Missing relationship field: %s\nGenerated:\n%s", exp.fieldName, contentStr)
}
if !strings.Contains(contentStr, exp.tag) {
t.Errorf("Missing relationship tag: %s\nGenerated:\n%s", exp.tag, contentStr)
}
}
// Verify NO duplicate field names (old behavior would create duplicate "FIL" fields)
if strings.Contains(contentStr, "FIL *ModelFilepointer") {
t.Errorf("Found old prefix-based naming (FIL), should use column-based naming")
}
// Also verify has-many relationships on filepointer table
filepointerContent, err := os.ReadFile(filepath.Join(tmpDir, "sql_org_filepointer.go"))
if err != nil {
t.Fatalf("Failed to read filepointer file: %v", err)
}
filepointerStr := string(filepointerContent)
// Should have two different has-many relationships with unique names
hasManyExpectations := []string{
"RelRIDFilepointerRequestOrgAPIEvents", // Has many via rid_filepointer_request
"RelRIDFilepointerResponseOrgAPIEvents", // Has many via rid_filepointer_response
}
for _, exp := range hasManyExpectations {
if !strings.Contains(filepointerStr, exp) {
t.Errorf("Missing has-many relationship field: %s\nGenerated:\n%s", exp, filepointerStr)
}
}
}
func TestWriter_MultipleHasManyRelationships(t *testing.T) {
// Test scenario: api_provider table referenced by multiple tables via rid_api_provider
db := models.InitDatabase("testdb")
schema := models.InitSchema("org")
// Owner table
owner := models.InitTable("owner", "org")
owner.Columns["id_owner"] = &models.Column{
Name: "id_owner",
Type: "bigserial",
NotNull: true,
IsPrimaryKey: true,
}
schema.Tables = append(schema.Tables, owner)
// API Provider table
apiProvider := models.InitTable("api_provider", "org")
apiProvider.Columns["id_api_provider"] = &models.Column{
Name: "id_api_provider",
Type: "bigserial",
NotNull: true,
IsPrimaryKey: true,
}
apiProvider.Columns["rid_owner"] = &models.Column{
Name: "rid_owner",
Type: "bigint",
NotNull: true,
}
apiProvider.Constraints["fk_owner"] = &models.Constraint{
Name: "fk_owner",
Type: models.ForeignKeyConstraint,
Columns: []string{"rid_owner"},
ReferencedTable: "owner",
ReferencedSchema: "org",
ReferencedColumns: []string{"id_owner"},
}
schema.Tables = append(schema.Tables, apiProvider)
// Login table
login := models.InitTable("login", "org")
login.Columns["id_login"] = &models.Column{
Name: "id_login",
Type: "bigserial",
NotNull: true,
IsPrimaryKey: true,
}
login.Columns["rid_api_provider"] = &models.Column{
Name: "rid_api_provider",
Type: "bigint",
NotNull: true,
}
login.Constraints["fk_api_provider"] = &models.Constraint{
Name: "fk_api_provider",
Type: models.ForeignKeyConstraint,
Columns: []string{"rid_api_provider"},
ReferencedTable: "api_provider",
ReferencedSchema: "org",
ReferencedColumns: []string{"id_api_provider"},
}
schema.Tables = append(schema.Tables, login)
// Filepointer table
filepointer := models.InitTable("filepointer", "org")
filepointer.Columns["id_filepointer"] = &models.Column{
Name: "id_filepointer",
Type: "bigserial",
NotNull: true,
IsPrimaryKey: true,
}
filepointer.Columns["rid_api_provider"] = &models.Column{
Name: "rid_api_provider",
Type: "bigint",
NotNull: true,
}
filepointer.Constraints["fk_api_provider"] = &models.Constraint{
Name: "fk_api_provider",
Type: models.ForeignKeyConstraint,
Columns: []string{"rid_api_provider"},
ReferencedTable: "api_provider",
ReferencedSchema: "org",
ReferencedColumns: []string{"id_api_provider"},
}
schema.Tables = append(schema.Tables, filepointer)
// API Event table
apiEvent := models.InitTable("api_event", "org")
apiEvent.Columns["id_api_event"] = &models.Column{
Name: "id_api_event",
Type: "bigserial",
NotNull: true,
IsPrimaryKey: true,
}
apiEvent.Columns["rid_api_provider"] = &models.Column{
Name: "rid_api_provider",
Type: "bigint",
NotNull: true,
}
apiEvent.Constraints["fk_api_provider"] = &models.Constraint{
Name: "fk_api_provider",
Type: models.ForeignKeyConstraint,
Columns: []string{"rid_api_provider"},
ReferencedTable: "api_provider",
ReferencedSchema: "org",
ReferencedColumns: []string{"id_api_provider"},
}
schema.Tables = append(schema.Tables, apiEvent)
db.Schemas = append(db.Schemas, schema)
// Create writer
tmpDir := t.TempDir()
opts := &writers.WriterOptions{
PackageName: "models",
OutputPath: tmpDir,
Metadata: map[string]interface{}{
"multi_file": true,
},
}
writer := NewWriter(opts)
err := writer.WriteDatabase(db)
if err != nil {
t.Fatalf("WriteDatabase failed: %v", err)
}
// Read the api_provider file
apiProviderContent, err := os.ReadFile(filepath.Join(tmpDir, "sql_org_api_provider.go"))
if err != nil {
t.Fatalf("Failed to read api_provider file: %v", err)
}
contentStr := string(apiProviderContent)
// Verify all has-many relationships have unique names
hasManyExpectations := []string{
"RelRIDAPIProviderOrgLogins", // Has many via Login
"RelRIDAPIProviderOrgFilepointers", // Has many via Filepointer
"RelRIDAPIProviderOrgAPIEvents", // Has many via APIEvent
"RelRIDOwner", // Has one via rid_owner
}
for _, exp := range hasManyExpectations {
if !strings.Contains(contentStr, exp) {
t.Errorf("Missing relationship field: %s\nGenerated:\n%s", exp, contentStr)
}
}
// Verify NO duplicate field names
// Count occurrences of "RelRIDAPIProvider" fields - should have 3 unique ones
count := strings.Count(contentStr, "RelRIDAPIProvider")
if count != 3 {
t.Errorf("Expected 3 RelRIDAPIProvider* fields, found %d\nGenerated:\n%s", count, contentStr)
}
// Verify no duplicate declarations (would cause compilation error)
duplicatePattern := "RelRIDAPIProviders []*Model"
if strings.Contains(contentStr, duplicatePattern) {
t.Errorf("Found duplicate field declaration pattern, fields should be unique")
}
}
func TestWriter_FieldNameCollision(t *testing.T) {
// Test scenario: table with columns that would conflict with generated method names
table := models.InitTable("audit_table", "audit")
table.Columns["id_audit_table"] = &models.Column{
Name: "id_audit_table",
Type: "smallint",
NotNull: true,
IsPrimaryKey: true,
Sequence: 1,
}
table.Columns["table_name"] = &models.Column{
Name: "table_name",
Type: "varchar",
Length: 100,
NotNull: true,
Sequence: 2,
}
table.Columns["table_schema"] = &models.Column{
Name: "table_schema",
Type: "varchar",
Length: 100,
NotNull: true,
Sequence: 3,
}
// Create writer
tmpDir := t.TempDir()
opts := &writers.WriterOptions{
PackageName: "models",
OutputPath: filepath.Join(tmpDir, "test.go"),
}
writer := NewWriter(opts)
err := writer.WriteTable(table)
if err != nil {
t.Fatalf("WriteTable failed: %v", err)
}
// Read the generated file
content, err := os.ReadFile(opts.OutputPath)
if err != nil {
t.Fatalf("Failed to read generated file: %v", err)
}
generated := string(content)
// Verify that TableName field was renamed to TableName_ to avoid collision
if !strings.Contains(generated, "TableName_") {
t.Errorf("Expected field 'TableName_' (with underscore) but not found\nGenerated:\n%s", generated)
}
// Verify the struct tag still references the correct database column
if !strings.Contains(generated, `bun:"table_name,`) {
t.Errorf("Expected bun tag to reference 'table_name' column\nGenerated:\n%s", generated)
}
// Verify the TableName() method still exists and doesn't conflict
if !strings.Contains(generated, "func (m ModelAuditAuditTable) TableName() string") {
t.Errorf("TableName() method should still be generated\nGenerated:\n%s", generated)
}
// Verify NO field named just "TableName" (without underscore)
if strings.Contains(generated, "TableName resolvespec_common") || strings.Contains(generated, "TableName string") {
t.Errorf("Field 'TableName' without underscore should not exist (would conflict with method)\nGenerated:\n%s", generated)
}
}
func TestTypeMapper_SQLTypeToGoType_Bun(t *testing.T) {

View File

@@ -126,7 +126,15 @@ func (w *Writer) tableToDBML(t *models.Table) string {
attrs = append(attrs, "increment")
}
if column.Default != nil {
// Check if default value contains backticks (DBML expressions like `now()`)
defaultStr := fmt.Sprintf("%v", column.Default)
if strings.HasPrefix(defaultStr, "`") && strings.HasSuffix(defaultStr, "`") {
// Already an expression with backticks, use as-is
attrs = append(attrs, fmt.Sprintf("default: %s", defaultStr))
} else {
// Regular value, wrap in single quotes
attrs = append(attrs, fmt.Sprintf("default: '%v'", column.Default))
}
}
if len(attrs) > 0 {
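To make the two branches concrete, a self-contained sketch (hypothetical values) of how a backticked DBML expression and a plain literal would render:

package main

import (
	"fmt"
	"strings"
)

// renderDefault mirrors the branch above: backticked DBML expressions are
// emitted verbatim, anything else is wrapped in single quotes.
func renderDefault(def any) string {
	s := fmt.Sprintf("%v", def)
	if strings.HasPrefix(s, "`") && strings.HasSuffix(s, "`") {
		return fmt.Sprintf("default: %s", s)
	}
	return fmt.Sprintf("default: '%v'", def)
}

func main() {
	fmt.Println(renderDefault("`now()`")) // default: `now()`
	fmt.Println(renderDefault("active"))  // default: 'active'
}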

View File

@@ -196,7 +196,9 @@ func (w *Writer) writeTableFile(table *models.Table, schema *models.Schema, db *
}
// Generate filename: {tableName}.ts
// Sanitize table name to remove quotes, comments, and invalid characters
safeTableName := writers.SanitizeFilename(table.Name)
filename := filepath.Join(w.options.OutputPath, safeTableName+".ts")
return os.WriteFile(filename, []byte(code), 0644)
}

View File

@@ -4,6 +4,7 @@ import (
"sort" "sort"
"git.warky.dev/wdevs/relspecgo/pkg/models" "git.warky.dev/wdevs/relspecgo/pkg/models"
"git.warky.dev/wdevs/relspecgo/pkg/writers"
) )
// TemplateData represents the data passed to the template for code generation // TemplateData represents the data passed to the template for code generation
@@ -24,6 +25,7 @@ type ModelData struct {
Fields []*FieldData
Config *MethodConfig
PrimaryKeyField string // Name of the primary key field
PrimaryKeyType string // Go type of the primary key field
IDColumnName string // Name of the ID column in database
Prefix string // 3-letter prefix
}
@@ -109,13 +111,17 @@ func NewModelData(table *models.Table, schema string, typeMapper *TypeMapper) *M
tableName = schema + "." + table.Name
}
// Generate model name: Model + Schema + Table (all PascalCase)
singularTable := Singularize(table.Name)
tablePart := SnakeCaseToPascalCase(singularTable)
// Include schema name in model name
var modelName string
if schema != "" {
schemaPart := SnakeCaseToPascalCase(schema)
modelName = "Model" + schemaPart + tablePart
} else {
modelName = "Model" + tablePart
}
model := &ModelData{
@@ -131,8 +137,11 @@ func NewModelData(table *models.Table, schema string, typeMapper *TypeMapper) *M
// Find primary key
for _, col := range table.Columns {
if col.IsPrimaryKey {
// Sanitize column name to remove backticks
safeName := writers.SanitizeStructTagValue(col.Name)
model.PrimaryKeyField = SnakeCaseToPascalCase(safeName)
model.PrimaryKeyType = typeMapper.SQLTypeToGoType(col.Type, col.NotNull)
model.IDColumnName = safeName
break
}
}
@@ -141,6 +150,8 @@ func NewModelData(table *models.Table, schema string, typeMapper *TypeMapper) *M
columns := sortColumns(table.Columns)
for _, col := range columns {
field := columnToField(col, table, typeMapper)
// Check for name collision with generated methods and rename if needed
field.Name = resolveFieldNameCollision(field.Name)
model.Fields = append(model.Fields, field)
}
@@ -149,10 +160,13 @@ func NewModelData(table *models.Table, schema string, typeMapper *TypeMapper) *M
// columnToField converts a models.Column to FieldData
func columnToField(col *models.Column, table *models.Table, typeMapper *TypeMapper) *FieldData {
// Sanitize column name first to remove backticks before generating field name
safeName := writers.SanitizeStructTagValue(col.Name)
fieldName := SnakeCaseToPascalCase(safeName)
goType := typeMapper.SQLTypeToGoType(col.Type, col.NotNull)
gormTag := typeMapper.BuildGormTag(col, table)
// Use same sanitized name for JSON tag
jsonTag := safeName
return &FieldData{
Name: fieldName,
@@ -179,9 +193,28 @@ func formatComment(description, comment string) string {
return comment
}
// resolveFieldNameCollision checks if a field name conflicts with generated method names
// and adds an underscore suffix if there's a collision
func resolveFieldNameCollision(fieldName string) string {
// List of method names that are generated by the template
reservedNames := map[string]bool{
"TableName": true,
"TableNameOnly": true,
"SchemaName": true,
"GetID": true,
"GetIDStr": true,
"SetID": true,
"UpdateID": true,
"GetIDName": true,
"GetPrefix": true,
}
// Check if field name conflicts with a reserved method name
if reservedNames[fieldName] {
return fieldName + "_"
}
return fieldName
}
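Concretely (illustrative calls):

resolveFieldNameCollision("TableName") // "TableName_" (collides with the generated method)
resolveFieldNameCollision("Email")     // "Email" (no conflict, unchanged)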
// sortColumns sorts columns by sequence, then by name

View File

@@ -62,7 +62,7 @@ func (m {{.Name}}) SetID(newid int64) {
{{if and .Config.GenerateUpdateID .PrimaryKeyField}}
// UpdateID updates the primary key value
func (m *{{.Name}}) UpdateID(newid int64) {
m.{{.PrimaryKeyField}} = {{.PrimaryKeyType}}(newid)
}
{{end}}
{{if and .Config.GenerateGetIDName .IDColumnName}}
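With the cast driven by PrimaryKeyType, a smallint key now renders roughly as follows (illustrative output for a hypothetical ModelPublicTestTable whose ID maps to int16):

// UpdateID updates the primary key value
func (m *ModelPublicTestTable) UpdateID(newid int64) {
	m.ID = int16(newid)
}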

View File

@@ -5,6 +5,7 @@ import (
"strings" "strings"
"git.warky.dev/wdevs/relspecgo/pkg/models" "git.warky.dev/wdevs/relspecgo/pkg/models"
"git.warky.dev/wdevs/relspecgo/pkg/writers"
) )
// TypeMapper handles type conversions between SQL and Go types // TypeMapper handles type conversions between SQL and Go types
@@ -199,12 +200,15 @@ func (tm *TypeMapper) BuildGormTag(column *models.Column, table *models.Table) s
var parts []string
// Always include column name (lowercase as per user requirement)
// Sanitize to remove backticks which would break struct tag syntax
safeName := writers.SanitizeStructTagValue(column.Name)
parts = append(parts, fmt.Sprintf("column:%s", safeName))
// Add type if specified
if column.Type != "" {
// Include length, precision, scale if present
// Sanitize type to remove backticks
typeStr := writers.SanitizeStructTagValue(column.Type)
if column.Length > 0 {
typeStr = fmt.Sprintf("%s(%d)", typeStr, column.Length)
} else if column.Precision > 0 {
@@ -234,7 +238,9 @@ func (tm *TypeMapper) BuildGormTag(column *models.Column, table *models.Table) s
// Default value
if column.Default != nil {
// Sanitize default value to remove backticks
safeDefault := writers.SanitizeStructTagValue(fmt.Sprintf("%v", column.Default))
parts = append(parts, fmt.Sprintf("default:%s", safeDefault))
}
// Check for unique constraint
@@ -331,5 +337,5 @@ func (tm *TypeMapper) NeedsFmtImport(generateGetIDStr bool) bool {
// GetSQLTypesImport returns the import path for sql_types
func (tm *TypeMapper) GetSQLTypesImport() string {
return "github.com/bitechdev/ResolveSpec/pkg/spectypes"
}

View File

@@ -4,6 +4,7 @@ import (
"fmt" "fmt"
"go/format" "go/format"
"os" "os"
"os/exec"
"path/filepath" "path/filepath"
"strings" "strings"
@@ -121,7 +122,16 @@ func (w *Writer) writeSingleFile(db *models.Database) error {
}
// Write output
if err := w.writeOutput(formatted); err != nil {
return err
}
// Run go fmt on the output file
if w.options.OutputPath != "" {
w.runGoFmt(w.options.OutputPath)
}
return nil
}
// writeMultiFile writes each table to a separate file
@@ -201,13 +211,19 @@ func (w *Writer) writeMultiFile(db *models.Database) error {
}
// Generate filename: sql_{schema}_{table}.go
// Sanitize schema and table names to remove quotes, comments, and invalid characters
safeSchemaName := writers.SanitizeFilename(schema.Name)
safeTableName := writers.SanitizeFilename(table.Name)
filename := fmt.Sprintf("sql_%s_%s.go", safeSchemaName, safeTableName)
filepath := filepath.Join(w.options.OutputPath, filename)
// Write file
if err := os.WriteFile(filepath, []byte(formatted), 0644); err != nil {
return fmt.Errorf("failed to write file %s: %w", filename, err)
}
// Run go fmt on the generated file
w.runGoFmt(filepath)
}
}
@@ -216,6 +232,9 @@ func (w *Writer) writeMultiFile(db *models.Database) error {
// addRelationshipFields adds relationship fields to the model based on foreign keys
func (w *Writer) addRelationshipFields(modelData *ModelData, table *models.Table, schema *models.Schema, db *models.Database) {
// Track used field names to detect duplicates
usedFieldNames := make(map[string]int)
// For each foreign key in this table, add a belongs-to relationship
for _, constraint := range table.Constraints {
if constraint.Type != models.ForeignKeyConstraint {
@@ -229,8 +248,9 @@ func (w *Writer) addRelationshipFields(modelData *ModelData, table *models.Table
}
// Create relationship field (belongs-to)
refModelName := w.getModelName(constraint.ReferencedSchema, constraint.ReferencedTable)
fieldName := w.generateBelongsToFieldName(constraint)
fieldName = w.ensureUniqueFieldName(fieldName, usedFieldNames)
relationTag := w.typeMapper.BuildRelationshipTag(constraint, false)
modelData.AddRelationshipField(&FieldData{
@@ -257,8 +277,9 @@ func (w *Writer) addRelationshipFields(modelData *ModelData, table *models.Table
// Check if this constraint references our table
if constraint.ReferencedTable == table.Name && constraint.ReferencedSchema == schema.Name {
// Add has-many relationship
otherModelName := w.getModelName(otherSchema.Name, otherTable.Name)
fieldName := w.generateHasManyFieldName(constraint, otherSchema.Name, otherTable.Name)
fieldName = w.ensureUniqueFieldName(fieldName, usedFieldNames)
relationTag := w.typeMapper.BuildRelationshipTag(constraint, true)
modelData.AddRelationshipField(&FieldData{
@@ -289,22 +310,77 @@ func (w *Writer) findTable(schemaName, tableName string, db *models.Database) *m
return nil
}
// getModelName generates the model name from schema and table name
func (w *Writer) getModelName(schemaName, tableName string) string {
singular := Singularize(tableName)
tablePart := SnakeCaseToPascalCase(singular)
// Include schema name in model name
var modelName string
if schemaName != "" {
schemaPart := SnakeCaseToPascalCase(schemaName)
modelName = "Model" + schemaPart + tablePart
} else {
modelName = "Model" + tablePart
}
return modelName
}
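Illustrative mappings (assuming SnakeCaseToPascalCase upper-cases initialisms as the tests below expect):

w.getModelName("org", "api_event") // "ModelOrgAPIEvent"
w.getModelName("", "users")        // "ModelUser" (singularized, no schema part)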
// generateBelongsToFieldName generates a field name for belongs-to relationships
// Uses the foreign key column name for uniqueness
func (w *Writer) generateBelongsToFieldName(constraint *models.Constraint) string {
// Use the foreign key column name to ensure uniqueness
// If there are multiple columns, use the first one
if len(constraint.Columns) > 0 {
columnName := constraint.Columns[0]
// Convert to PascalCase for proper Go field naming
// e.g., "rid_filepointer_request" -> "RelRIDFilepointerRequest"
return "Rel" + SnakeCaseToPascalCase(columnName)
}
// Fallback to table-based prefix if no columns defined
return "Rel" + GeneratePrefix(constraint.ReferencedTable)
}
// generateHasManyFieldName generates a field name for has-many relationships
// Uses the foreign key column name + source table name to avoid duplicates
func (w *Writer) generateHasManyFieldName(constraint *models.Constraint, sourceSchemaName, sourceTableName string) string {
// For has-many, we need to include the source table name to avoid duplicates
// e.g., multiple tables referencing the same column on this table
if len(constraint.Columns) > 0 {
columnName := constraint.Columns[0]
// Get the model name for the source table (pluralized)
sourceModelName := w.getModelName(sourceSchemaName, sourceTableName)
// Remove "Model" prefix if present
sourceModelName = strings.TrimPrefix(sourceModelName, "Model")
// Convert column to PascalCase and combine with source table
// e.g., "rid_api_provider" + "Login" -> "RelRIDAPIProviderLogins"
columnPart := SnakeCaseToPascalCase(columnName)
return "Rel" + columnPart + Pluralize(sourceModelName)
}
// Fallback to table-based naming
sourceModelName := w.getModelName(sourceSchemaName, sourceTableName)
sourceModelName = strings.TrimPrefix(sourceModelName, "Model")
return "Rel" + Pluralize(sourceModelName)
}
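A worked example matching the fixtures in the tests below, with fk standing in for the constraint on org.login(rid_api_provider) referencing org.api_provider:

// On the referencing model (belongs-to):
w.generateBelongsToFieldName(fk) // "RelRIDAPIProvider"
// On the referenced model (has-many, source table org.login):
w.generateHasManyFieldName(fk, "org", "login") // "RelRIDAPIProviderOrgLogins"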
// ensureUniqueFieldName ensures a field name is unique by adding numeric suffixes if needed
func (w *Writer) ensureUniqueFieldName(fieldName string, usedNames map[string]int) string {
originalName := fieldName
count := usedNames[originalName]
if count > 0 {
// Name is already used, add numeric suffix
fieldName = fmt.Sprintf("%s%d", originalName, count+1)
}
// Increment the counter for this base name
usedNames[originalName]++
return fieldName
}
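Sketch of the dedup behaviour (illustrative):

used := make(map[string]int)
w.ensureUniqueFieldName("RelRIDOwner", used) // "RelRIDOwner"
w.ensureUniqueFieldName("RelRIDOwner", used) // "RelRIDOwner2"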
// getPackageName returns the package name from options or defaults to "models"
@@ -335,6 +411,15 @@ func (w *Writer) writeOutput(content string) error {
return nil
}
// runGoFmt runs go fmt on the specified file
func (w *Writer) runGoFmt(filepath string) {
cmd := exec.Command("gofmt", "-w", filepath)
if err := cmd.Run(); err != nil {
// Don't fail the whole operation if gofmt fails, just warn
fmt.Fprintf(os.Stderr, "Warning: failed to run gofmt on %s: %v\n", filepath, err)
}
}
// shouldUseMultiFile determines whether to use multi-file mode based on metadata or output path
func (w *Writer) shouldUseMultiFile() bool {
// Check if multi_file is explicitly set in metadata

View File

@@ -66,7 +66,7 @@ func TestWriter_WriteTable(t *testing.T) {
// Verify key elements are present
expectations := []string{
"package models",
"type ModelPublicUser struct",
"ID",
"int64",
"Email",
@@ -75,9 +75,9 @@ func TestWriter_WriteTable(t *testing.T) {
"time.Time",
"gorm:\"column:id",
"gorm:\"column:email",
"func (m ModelPublicUser) TableName() string",
"return \"public.users\"",
"func (m ModelPublicUser) GetID() int64",
}
for _, expected := range expectations {
@@ -164,9 +164,437 @@ func TestWriter_WriteDatabase_MultiFile(t *testing.T) {
t.Fatalf("Failed to read posts file: %v", err)
}
postsStr := string(postsContent)
// Verify relationship is present with new naming convention
// Should now be RelUserID (belongs-to) instead of USE
if !strings.Contains(postsStr, "RelUserID") {
t.Errorf("Missing relationship field RelUserID (new naming convention)")
}
// Check users file contains has-many relationship
usersContent, err := os.ReadFile(filepath.Join(tmpDir, "sql_public_users.go"))
if err != nil {
t.Fatalf("Failed to read users file: %v", err)
}
usersStr := string(usersContent)
// Should have RelUserIDPublicPosts (has-many) field - includes schema prefix
if !strings.Contains(usersStr, "RelUserIDPublicPosts") {
t.Errorf("Missing has-many relationship field RelUserIDPublicPosts")
}
}
func TestWriter_MultipleReferencesToSameTable(t *testing.T) {
// Test scenario: api_event table with multiple foreign keys to filepointer table
db := models.InitDatabase("testdb")
schema := models.InitSchema("org")
// Filepointer table
filepointer := models.InitTable("filepointer", "org")
filepointer.Columns["id_filepointer"] = &models.Column{
Name: "id_filepointer",
Type: "bigserial",
NotNull: true,
IsPrimaryKey: true,
}
schema.Tables = append(schema.Tables, filepointer)
// API event table with two foreign keys to filepointer
apiEvent := models.InitTable("api_event", "org")
apiEvent.Columns["id_api_event"] = &models.Column{
Name: "id_api_event",
Type: "bigserial",
NotNull: true,
IsPrimaryKey: true,
}
apiEvent.Columns["rid_filepointer_request"] = &models.Column{
Name: "rid_filepointer_request",
Type: "bigint",
NotNull: false,
}
apiEvent.Columns["rid_filepointer_response"] = &models.Column{
Name: "rid_filepointer_response",
Type: "bigint",
NotNull: false,
}
// Add constraints
apiEvent.Constraints["fk_request"] = &models.Constraint{
Name: "fk_request",
Type: models.ForeignKeyConstraint,
Columns: []string{"rid_filepointer_request"},
ReferencedTable: "filepointer",
ReferencedSchema: "org",
ReferencedColumns: []string{"id_filepointer"},
}
apiEvent.Constraints["fk_response"] = &models.Constraint{
Name: "fk_response",
Type: models.ForeignKeyConstraint,
Columns: []string{"rid_filepointer_response"},
ReferencedTable: "filepointer",
ReferencedSchema: "org",
ReferencedColumns: []string{"id_filepointer"},
}
schema.Tables = append(schema.Tables, apiEvent)
db.Schemas = append(db.Schemas, schema)
// Create writer
tmpDir := t.TempDir()
opts := &writers.WriterOptions{
PackageName: "models",
OutputPath: tmpDir,
Metadata: map[string]interface{}{
"multi_file": true,
},
}
writer := NewWriter(opts)
err := writer.WriteDatabase(db)
if err != nil {
t.Fatalf("WriteDatabase failed: %v", err)
}
// Read the api_event file
apiEventContent, err := os.ReadFile(filepath.Join(tmpDir, "sql_org_api_event.go"))
if err != nil {
t.Fatalf("Failed to read api_event file: %v", err)
}
contentStr := string(apiEventContent)
// Verify both relationships have unique names based on column names
expectations := []struct {
fieldName string
tag string
}{
{"RelRIDFilepointerRequest", "foreignKey:RIDFilepointerRequest"},
{"RelRIDFilepointerResponse", "foreignKey:RIDFilepointerResponse"},
}
for _, exp := range expectations {
if !strings.Contains(contentStr, exp.fieldName) {
t.Errorf("Missing relationship field: %s\nGenerated:\n%s", exp.fieldName, contentStr)
}
if !strings.Contains(contentStr, exp.tag) {
t.Errorf("Missing relationship tag: %s\nGenerated:\n%s", exp.tag, contentStr)
}
}
// Verify NO duplicate field names (old behavior would create duplicate "FIL" fields)
if strings.Contains(contentStr, "FIL *ModelFilepointer") {
t.Errorf("Found old prefix-based naming (FIL), should use column-based naming")
}
// Also verify has-many relationships on filepointer table
filepointerContent, err := os.ReadFile(filepath.Join(tmpDir, "sql_org_filepointer.go"))
if err != nil {
t.Fatalf("Failed to read filepointer file: %v", err)
}
filepointerStr := string(filepointerContent)
// Should have two different has-many relationships with unique names
hasManyExpectations := []string{
"RelRIDFilepointerRequestOrgAPIEvents", // Has many via rid_filepointer_request
"RelRIDFilepointerResponseOrgAPIEvents", // Has many via rid_filepointer_response
}
for _, exp := range hasManyExpectations {
if !strings.Contains(filepointerStr, exp) {
t.Errorf("Missing has-many relationship field: %s\nGenerated:\n%s", exp, filepointerStr)
}
}
}
func TestWriter_MultipleHasManyRelationships(t *testing.T) {
// Test scenario: api_provider table referenced by multiple tables via rid_api_provider
db := models.InitDatabase("testdb")
schema := models.InitSchema("org")
// Owner table
owner := models.InitTable("owner", "org")
owner.Columns["id_owner"] = &models.Column{
Name: "id_owner",
Type: "bigserial",
NotNull: true,
IsPrimaryKey: true,
}
schema.Tables = append(schema.Tables, owner)
// API Provider table
apiProvider := models.InitTable("api_provider", "org")
apiProvider.Columns["id_api_provider"] = &models.Column{
Name: "id_api_provider",
Type: "bigserial",
NotNull: true,
IsPrimaryKey: true,
}
apiProvider.Columns["rid_owner"] = &models.Column{
Name: "rid_owner",
Type: "bigint",
NotNull: true,
}
apiProvider.Constraints["fk_owner"] = &models.Constraint{
Name: "fk_owner",
Type: models.ForeignKeyConstraint,
Columns: []string{"rid_owner"},
ReferencedTable: "owner",
ReferencedSchema: "org",
ReferencedColumns: []string{"id_owner"},
}
schema.Tables = append(schema.Tables, apiProvider)
// Login table
login := models.InitTable("login", "org")
login.Columns["id_login"] = &models.Column{
Name: "id_login",
Type: "bigserial",
NotNull: true,
IsPrimaryKey: true,
}
login.Columns["rid_api_provider"] = &models.Column{
Name: "rid_api_provider",
Type: "bigint",
NotNull: true,
}
login.Constraints["fk_api_provider"] = &models.Constraint{
Name: "fk_api_provider",
Type: models.ForeignKeyConstraint,
Columns: []string{"rid_api_provider"},
ReferencedTable: "api_provider",
ReferencedSchema: "org",
ReferencedColumns: []string{"id_api_provider"},
}
schema.Tables = append(schema.Tables, login)
// Filepointer table
filepointer := models.InitTable("filepointer", "org")
filepointer.Columns["id_filepointer"] = &models.Column{
Name: "id_filepointer",
Type: "bigserial",
NotNull: true,
IsPrimaryKey: true,
}
filepointer.Columns["rid_api_provider"] = &models.Column{
Name: "rid_api_provider",
Type: "bigint",
NotNull: true,
}
filepointer.Constraints["fk_api_provider"] = &models.Constraint{
Name: "fk_api_provider",
Type: models.ForeignKeyConstraint,
Columns: []string{"rid_api_provider"},
ReferencedTable: "api_provider",
ReferencedSchema: "org",
ReferencedColumns: []string{"id_api_provider"},
}
schema.Tables = append(schema.Tables, filepointer)
// API Event table
apiEvent := models.InitTable("api_event", "org")
apiEvent.Columns["id_api_event"] = &models.Column{
Name: "id_api_event",
Type: "bigserial",
NotNull: true,
IsPrimaryKey: true,
}
apiEvent.Columns["rid_api_provider"] = &models.Column{
Name: "rid_api_provider",
Type: "bigint",
NotNull: true,
}
apiEvent.Constraints["fk_api_provider"] = &models.Constraint{
Name: "fk_api_provider",
Type: models.ForeignKeyConstraint,
Columns: []string{"rid_api_provider"},
ReferencedTable: "api_provider",
ReferencedSchema: "org",
ReferencedColumns: []string{"id_api_provider"},
}
schema.Tables = append(schema.Tables, apiEvent)
db.Schemas = append(db.Schemas, schema)
// Create writer
tmpDir := t.TempDir()
opts := &writers.WriterOptions{
PackageName: "models",
OutputPath: tmpDir,
Metadata: map[string]interface{}{
"multi_file": true,
},
}
writer := NewWriter(opts)
err := writer.WriteDatabase(db)
if err != nil {
t.Fatalf("WriteDatabase failed: %v", err)
}
// Read the api_provider file
apiProviderContent, err := os.ReadFile(filepath.Join(tmpDir, "sql_org_api_provider.go"))
if err != nil {
t.Fatalf("Failed to read api_provider file: %v", err)
}
contentStr := string(apiProviderContent)
// Verify all has-many relationships have unique names
hasManyExpectations := []string{
"RelRIDAPIProviderOrgLogins", // Has many via Login
"RelRIDAPIProviderOrgFilepointers", // Has many via Filepointer
"RelRIDAPIProviderOrgAPIEvents", // Has many via APIEvent
"RelRIDOwner", // Belongs to via rid_owner
}
for _, exp := range hasManyExpectations {
if !strings.Contains(contentStr, exp) {
t.Errorf("Missing relationship field: %s\nGenerated:\n%s", exp, contentStr)
}
}
// Verify NO duplicate field names
// Count occurrences of "RelRIDAPIProvider" fields - should have 3 unique ones
count := strings.Count(contentStr, "RelRIDAPIProvider")
if count != 3 {
t.Errorf("Expected 3 RelRIDAPIProvider* fields, found %d\nGenerated:\n%s", count, contentStr)
}
// Verify no duplicate declarations (would cause compilation error)
duplicatePattern := "RelRIDAPIProviders []*Model"
if strings.Contains(contentStr, duplicatePattern) {
t.Errorf("Found duplicate field declaration pattern, fields should be unique")
}
}
func TestWriter_FieldNameCollision(t *testing.T) {
// Test scenario: table with columns that would conflict with generated method names
table := models.InitTable("audit_table", "audit")
table.Columns["id_audit_table"] = &models.Column{
Name: "id_audit_table",
Type: "smallint",
NotNull: true,
IsPrimaryKey: true,
Sequence: 1,
}
table.Columns["table_name"] = &models.Column{
Name: "table_name",
Type: "varchar",
Length: 100,
NotNull: true,
Sequence: 2,
}
table.Columns["table_schema"] = &models.Column{
Name: "table_schema",
Type: "varchar",
Length: 100,
NotNull: true,
Sequence: 3,
}
// Create writer
tmpDir := t.TempDir()
opts := &writers.WriterOptions{
PackageName: "models",
OutputPath: filepath.Join(tmpDir, "test.go"),
}
writer := NewWriter(opts)
err := writer.WriteTable(table)
if err != nil {
t.Fatalf("WriteTable failed: %v", err)
}
// Read the generated file
content, err := os.ReadFile(opts.OutputPath)
if err != nil {
t.Fatalf("Failed to read generated file: %v", err)
}
generated := string(content)
// Verify that TableName field was renamed to TableName_ to avoid collision
if !strings.Contains(generated, "TableName_") {
t.Errorf("Expected field 'TableName_' (with underscore) but not found\nGenerated:\n%s", generated)
}
// Verify the struct tag still references the correct database column
if !strings.Contains(generated, `gorm:"column:table_name;`) {
t.Errorf("Expected gorm tag to reference 'table_name' column\nGenerated:\n%s", generated)
}
// Verify the TableName() method still exists and doesn't conflict
if !strings.Contains(generated, "func (m ModelAuditAuditTable) TableName() string") {
t.Errorf("TableName() method should still be generated\nGenerated:\n%s", generated)
}
// Verify NO field named just "TableName" (without underscore)
if strings.Contains(generated, "TableName sql_types") || strings.Contains(generated, "TableName string") {
t.Errorf("Field 'TableName' without underscore should not exist (would conflict with method)\nGenerated:\n%s", generated)
}
}
func TestWriter_UpdateIDTypeSafety(t *testing.T) {
// Test scenario: tables with different primary key types
tests := []struct {
name string
pkType string
expectedPK string
castType string
}{
{"int32_pk", "int", "int32", "int32(newid)"},
{"int16_pk", "smallint", "int16", "int16(newid)"},
{"int64_pk", "bigint", "int64", "int64(newid)"},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
table := models.InitTable("test_table", "public")
table.Columns["id"] = &models.Column{
Name: "id",
Type: tt.pkType,
NotNull: true,
IsPrimaryKey: true,
}
tmpDir := t.TempDir()
opts := &writers.WriterOptions{
PackageName: "models",
OutputPath: filepath.Join(tmpDir, "test.go"),
}
writer := NewWriter(opts)
err := writer.WriteTable(table)
if err != nil {
t.Fatalf("WriteTable failed: %v", err)
}
content, err := os.ReadFile(opts.OutputPath)
if err != nil {
t.Fatalf("Failed to read generated file: %v", err)
}
generated := string(content)
// Verify UpdateID method has correct type cast
if !strings.Contains(generated, tt.castType) {
t.Errorf("Expected UpdateID to cast to %s\nGenerated:\n%s", tt.castType, generated)
}
// Verify no invalid int32(newid) for non-int32 types
if tt.expectedPK != "int32" && strings.Contains(generated, "int32(newid)") {
t.Errorf("UpdateID should not cast to int32 for %s type\nGenerated:\n%s", tt.pkType, generated)
}
// Verify UpdateID parameter is int64 (for consistency)
if !strings.Contains(generated, "UpdateID(newid int64)") {
t.Errorf("UpdateID should accept int64 parameter\nGenerated:\n%s", generated)
}
})
}
}

View File

@@ -8,6 +8,7 @@ import (
"strings" "strings"
"git.warky.dev/wdevs/relspecgo/pkg/models" "git.warky.dev/wdevs/relspecgo/pkg/models"
"git.warky.dev/wdevs/relspecgo/pkg/pgsql"
"git.warky.dev/wdevs/relspecgo/pkg/writers" "git.warky.dev/wdevs/relspecgo/pkg/writers"
) )
@@ -335,7 +336,7 @@ func (w *MigrationWriter) generateAlterTableScripts(schema *models.Schema, model
SchemaName: schema.Name,
TableName: modelTable.Name,
ColumnName: modelCol.Name,
ColumnType: pgsql.ConvertSQLType(modelCol.Type),
Default: defaultVal,
NotNull: modelCol.NotNull,
})
@@ -359,7 +360,7 @@ func (w *MigrationWriter) generateAlterTableScripts(schema *models.Schema, model
SchemaName: schema.Name,
TableName: modelTable.Name,
ColumnName: modelCol.Name,
NewType: pgsql.ConvertSQLType(modelCol.Type),
})
if err != nil {
return nil, err
@@ -427,9 +428,11 @@ func (w *MigrationWriter) generateIndexScripts(model *models.Schema, current *mo
for _, modelTable := range model.Tables {
currentTable := currentTables[strings.ToLower(modelTable.Name)]
// Process primary keys first - check explicit constraints
foundExplicitPK := false
for constraintName, constraint := range modelTable.Constraints {
if constraint.Type == models.PrimaryKeyConstraint {
foundExplicitPK = true
shouldCreate := true
if currentTable != nil {
@@ -464,6 +467,53 @@ func (w *MigrationWriter) generateIndexScripts(model *models.Schema, current *mo
}
}
// If no explicit PK constraint, check for columns with IsPrimaryKey = true
if !foundExplicitPK {
pkColumns := []string{}
for _, col := range modelTable.Columns {
if col.IsPrimaryKey {
pkColumns = append(pkColumns, col.SQLName())
}
}
if len(pkColumns) > 0 {
sort.Strings(pkColumns)
constraintName := fmt.Sprintf("pk_%s_%s", strings.ToLower(model.Name), strings.ToLower(modelTable.Name))
shouldCreate := true
if currentTable != nil {
// Check if a PK constraint already exists (by any name)
for _, constraint := range currentTable.Constraints {
if constraint.Type == models.PrimaryKeyConstraint {
shouldCreate = false
break
}
}
}
if shouldCreate {
sql, err := w.executor.ExecuteCreatePrimaryKey(CreatePrimaryKeyData{
SchemaName: model.Name,
TableName: modelTable.Name,
ConstraintName: constraintName,
Columns: strings.Join(pkColumns, ", "),
})
if err != nil {
return nil, err
}
script := MigrationScript{
ObjectName: fmt.Sprintf("%s.%s.%s", model.Name, modelTable.Name, constraintName),
ObjectType: "create primary key",
Schema: model.Name,
Priority: 160,
Sequence: len(scripts),
Body: sql,
}
scripts = append(scripts, script)
}
}
}
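The exact SQL comes from the ExecuteCreatePrimaryKey template, which is not shown here, but for a hypothetical org.owner table whose id_owner column is only flagged IsPrimaryKey, the emitted script (priority 160) should be along the lines of:

ALTER TABLE org.owner ADD CONSTRAINT pk_org_owner PRIMARY KEY (id_owner);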
// Process indexes
for indexName, modelIndex := range modelTable.Indexes {
// Skip primary key indexes

View File

@@ -1,20 +1,58 @@
package pgsql
import (
"context"
"encoding/json"
"fmt"
"io"
"os"
"sort"
"strings"
"time"
"github.com/jackc/pgx/v5"
"git.warky.dev/wdevs/relspecgo/pkg/models"
"git.warky.dev/wdevs/relspecgo/pkg/pgsql"
"git.warky.dev/wdevs/relspecgo/pkg/writers"
)
// Writer implements the Writer interface for PostgreSQL SQL output
type Writer struct {
options *writers.WriterOptions
writer io.Writer
executionReport *ExecutionReport
}
// ExecutionReport tracks the execution status of SQL statements
type ExecutionReport struct {
TotalStatements int `json:"total_statements"`
ExecutedStatements int `json:"executed_statements"`
FailedStatements int `json:"failed_statements"`
Schemas []SchemaReport `json:"schemas"`
Errors []ExecutionError `json:"errors,omitempty"`
StartTime string `json:"start_time"`
EndTime string `json:"end_time"`
}
// SchemaReport tracks execution per schema
type SchemaReport struct {
Name string `json:"name"`
Tables []TableReport `json:"tables"`
}
// TableReport tracks execution per table
type TableReport struct {
Name string `json:"name"`
Created bool `json:"created"`
Error string `json:"error,omitempty"`
}
// ExecutionError represents a failed statement
type ExecutionError struct {
StatementNumber int `json:"statement_number"`
Statement string `json:"statement"`
Error string `json:"error"`
}
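Serialized, a report would look roughly like this (all values invented):

{
  "total_statements": 42,
  "executed_statements": 40,
  "failed_statements": 2,
  "schemas": [
    {"name": "org", "tables": [{"name": "owner", "created": true}]}
  ],
  "errors": [
    {"statement_number": 17, "statement": "CREATE TABLE ...", "error": "relation already exists"}
  ],
  "start_time": "2026-01-29T21:16:14+02:00",
  "end_time": "2026-01-29T21:16:15+02:00"
}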
// NewWriter creates a new PostgreSQL SQL writer
@@ -26,6 +64,11 @@ func NewWriter(options *writers.WriterOptions) *Writer {
// WriteDatabase writes the entire database schema as SQL
func (w *Writer) WriteDatabase(db *models.Database) error {
// Check if we should execute SQL directly on a database
if connString, ok := w.options.Metadata["connection_string"].(string); ok && connString != "" {
return w.executeDatabaseSQL(db, connString)
}
var writer io.Writer
var file *os.File
var err error
@@ -127,13 +170,35 @@ func (w *Writer) GenerateSchemaStatements(schema *models.Schema) ([]string, erro
// Phase 4: Primary keys
for _, table := range schema.Tables {
// First check for explicit PrimaryKeyConstraint
var pkConstraint *models.Constraint
for _, constraint := range table.Constraints {
if constraint.Type == models.PrimaryKeyConstraint {
pkConstraint = constraint
break
}
}
if pkConstraint != nil {
stmt := fmt.Sprintf("ALTER TABLE %s.%s ADD CONSTRAINT %s PRIMARY KEY (%s)",
schema.SQLName(), table.SQLName(), pkConstraint.Name, strings.Join(pkConstraint.Columns, ", "))
statements = append(statements, stmt)
} else {
// No explicit constraint, check for columns with IsPrimaryKey = true
pkColumns := []string{}
for _, col := range table.Columns {
if col.IsPrimaryKey {
pkColumns = append(pkColumns, col.SQLName())
}
}
if len(pkColumns) > 0 {
// Sort for consistent output
sort.Strings(pkColumns)
pkName := fmt.Sprintf("pk_%s_%s", schema.SQLName(), table.SQLName())
stmt := fmt.Sprintf("ALTER TABLE %s.%s ADD CONSTRAINT %s PRIMARY KEY (%s)",
schema.SQLName(), table.SQLName(), pkName, strings.Join(pkColumns, ", "))
statements = append(statements, stmt)
}
}
}
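For example, a table org.api_event_log with two flagged columns and no explicit constraint would produce (sample, names invented; columns sorted for deterministic output):

ALTER TABLE org.api_event_log ADD CONSTRAINT pk_org_api_event_log PRIMARY KEY (id_event, id_tenant)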
@@ -155,13 +220,30 @@ func (w *Writer) GenerateSchemaStatements(schema *models.Schema) ([]string, erro
indexType = "btree" indexType = "btree"
} }
// Build column expressions with operator class support for GIN indexes
columnExprs := make([]string, 0, len(index.Columns))
for _, colName := range index.Columns {
colExpr := colName
if col, ok := table.Columns[colName]; ok {
// For GIN indexes on text columns, add operator class
if strings.EqualFold(indexType, "gin") && isTextType(col.Type) {
opClass := extractOperatorClass(index.Comment)
if opClass == "" {
opClass = "gin_trgm_ops"
}
colExpr = fmt.Sprintf("%s %s", colName, opClass)
}
}
columnExprs = append(columnExprs, colExpr)
}
whereClause := "" whereClause := ""
if index.Where != "" { if index.Where != "" {
whereClause = fmt.Sprintf(" WHERE %s", index.Where) whereClause = fmt.Sprintf(" WHERE %s", index.Where)
} }
stmt := fmt.Sprintf("CREATE %sINDEX IF NOT EXISTS %s ON %s.%s USING %s (%s)%s", stmt := fmt.Sprintf("CREATE %sINDEX IF NOT EXISTS %s ON %s.%s USING %s (%s)%s",
uniqueStr, index.Name, schema.SQLName(), table.SQLName(), indexType, strings.Join(index.Columns, ", "), whereClause) uniqueStr, index.Name, schema.SQLName(), table.SQLName(), indexType, strings.Join(columnExprs, ", "), whereClause)
statements = append(statements, stmt) statements = append(statements, stmt)
} }
} }
@@ -251,15 +333,16 @@ func (w *Writer) generateCreateTableStatement(schema *models.Schema, table *mode
func (w *Writer) generateColumnDefinition(col *models.Column) string {
parts := []string{col.SQLName()}
// Type with length/precision - convert to valid PostgreSQL type
baseType := pgsql.ConvertSQLType(col.Type)
typeStr := baseType
if col.Length > 0 && col.Precision == 0 {
typeStr = fmt.Sprintf("%s(%d)", baseType, col.Length)
} else if col.Precision > 0 {
if col.Scale > 0 {
typeStr = fmt.Sprintf("%s(%d,%d)", baseType, col.Precision, col.Scale)
} else {
typeStr = fmt.Sprintf("%s(%d)", baseType, col.Precision)
}
}
parts = append(parts, typeStr)
@@ -273,12 +356,14 @@ func (w *Writer) generateColumnDefinition(col *models.Column) string {
if col.Default != nil {
switch v := col.Default.(type) {
case string:
// Strip backticks - DBML uses them for SQL expressions but PostgreSQL doesn't
cleanDefault := stripBackticks(v)
if strings.HasPrefix(cleanDefault, "nextval") || strings.HasPrefix(cleanDefault, "CURRENT_") || strings.Contains(cleanDefault, "()") {
parts = append(parts, fmt.Sprintf("DEFAULT %s", cleanDefault))
} else if cleanDefault == "true" || cleanDefault == "false" {
parts = append(parts, fmt.Sprintf("DEFAULT %s", cleanDefault))
} else {
parts = append(parts, fmt.Sprintf("DEFAULT '%s'", escapeQuote(cleanDefault)))
}
case bool:
parts = append(parts, fmt.Sprintf("DEFAULT %v", v))
@@ -405,11 +490,13 @@ func (w *Writer) writeCreateTables(schema *models.Schema) error {
columnDefs := make([]string, 0, len(columns))
for _, col := range columns {
colDef := fmt.Sprintf(" %s %s", col.SQLName(), pgsql.ConvertSQLType(col.Type))
// Add default value if present
if col.Default != nil && col.Default != "" {
// Strip backticks - DBML uses them for SQL expressions but PostgreSQL doesn't
defaultVal := fmt.Sprintf("%v", col.Default)
colDef += fmt.Sprintf(" DEFAULT %s", stripBackticks(defaultVal))
}
columnDefs = append(columnDefs, colDef)
@@ -437,19 +524,26 @@ func (w *Writer) writePrimaryKeys(schema *models.Schema) error {
}
}
var columnNames []string
pkName := fmt.Sprintf("pk_%s_%s", schema.SQLName(), table.SQLName())
if pkConstraint != nil {
// Build column list from explicit constraint
columnNames = make([]string, 0, len(pkConstraint.Columns))
for _, colName := range pkConstraint.Columns {
if col, ok := table.Columns[colName]; ok {
columnNames = append(columnNames, col.SQLName())
}
}
} else {
// No explicit PK constraint, check for columns with IsPrimaryKey = true
for _, col := range table.Columns {
if col.IsPrimaryKey {
columnNames = append(columnNames, col.SQLName())
}
}
// Sort for consistent output
sort.Strings(columnNames)
}
if len(columnNames) == 0 {
@@ -503,15 +597,24 @@ func (w *Writer) writeIndexes(schema *models.Schema) error {
indexName = fmt.Sprintf("%s_%s_%s", indexType, schema.SQLName(), table.SQLName())
}
// Build column list with operator class support for GIN indexes
columnExprs := make([]string, 0, len(index.Columns))
for _, colName := range index.Columns {
if col, ok := table.Columns[colName]; ok {
colExpr := col.SQLName()
// For GIN indexes on text columns, add operator class
if strings.EqualFold(index.Type, "gin") && isTextType(col.Type) {
opClass := extractOperatorClass(index.Comment)
if opClass == "" {
opClass = "gin_trgm_ops"
}
colExpr = fmt.Sprintf("%s %s", col.SQLName(), opClass)
}
columnExprs = append(columnExprs, colExpr)
}
}
if len(columnExprs) == 0 {
continue
}
@@ -520,10 +623,20 @@ func (w *Writer) writeIndexes(schema *models.Schema) error {
unique = "UNIQUE " unique = "UNIQUE "
} }
indexType := index.Type
if indexType == "" {
indexType = "btree"
}
whereClause := ""
if index.Where != "" {
whereClause = fmt.Sprintf(" WHERE %s", index.Where)
}
fmt.Fprintf(w.writer, "CREATE %sINDEX IF NOT EXISTS %s\n", fmt.Fprintf(w.writer, "CREATE %sINDEX IF NOT EXISTS %s\n",
unique, indexName) unique, indexName)
fmt.Fprintf(w.writer, " ON %s.%s USING btree (%s);\n\n", fmt.Fprintf(w.writer, " ON %s.%s USING %s (%s)%s;\n\n",
schema.SQLName(), table.SQLName(), strings.Join(columnNames, ", ")) schema.SQLName(), table.SQLName(), indexType, strings.Join(columnExprs, ", "), whereClause)
} }
} }
@@ -718,11 +831,46 @@ func isIntegerType(colType string) bool {
return false
}
// isTextType checks if a column type is a text type (for GIN index operator class)
func isTextType(colType string) bool {
textTypes := []string{"text", "varchar", "character varying", "char", "character", "string"}
lowerType := strings.ToLower(colType)
for _, t := range textTypes {
if strings.HasPrefix(lowerType, t) {
return true
}
}
return false
}
// extractOperatorClass extracts operator class from index comment/note
// Looks for common operator classes like gin_trgm_ops, gist_trgm_ops, etc.
func extractOperatorClass(comment string) string {
if comment == "" {
return ""
}
lowerComment := strings.ToLower(comment)
// Common GIN/GiST operator classes
opClasses := []string{"gin_trgm_ops", "gist_trgm_ops", "gin_bigm_ops", "jsonb_ops", "jsonb_path_ops", "array_ops"}
for _, op := range opClasses {
if strings.Contains(lowerComment, op) {
return op
}
}
return ""
}
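Combined with isTextType above, a GIN index on a varchar column whose note names no operator class falls back to gin_trgm_ops; the resulting statement would be (sample output, names invented; requires the pg_trgm extension):

CREATE INDEX IF NOT EXISTS idx_owner_name
 ON org.owner USING gin (name gin_trgm_ops);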
// escapeQuote escapes single quotes in strings for SQL
func escapeQuote(s string) string {
return strings.ReplaceAll(s, "'", "''")
}
// stripBackticks removes backticks from SQL expressions
// DBML uses backticks for SQL expressions like `now()`, but PostgreSQL doesn't use backticks
func stripBackticks(s string) string {
return strings.ReplaceAll(s, "`", "")
}
// extractSequenceName extracts sequence name from nextval() expression
// Example: "nextval('public.users_id_seq'::regclass)" returns "users_id_seq"
func extractSequenceName(defaultExpr string) string {
@@ -745,3 +893,195 @@ func extractSequenceName(defaultExpr string) string {
}
return fullName
}
// executeDatabaseSQL executes SQL statements directly on a PostgreSQL database
func (w *Writer) executeDatabaseSQL(db *models.Database, connString string) error {
// Initialize execution report
w.executionReport = &ExecutionReport{
StartTime: getCurrentTimestamp(),
Schemas: make([]SchemaReport, 0),
Errors: make([]ExecutionError, 0),
}
// Generate SQL statements
statements, err := w.GenerateDatabaseStatements(db)
if err != nil {
return fmt.Errorf("failed to generate SQL statements: %w", err)
}
w.executionReport.TotalStatements = len(statements)
// Connect to database
ctx := context.Background()
conn, err := pgx.Connect(ctx, connString)
if err != nil {
return fmt.Errorf("failed to connect to database: %w", err)
}
defer conn.Close(ctx)
// Track schemas and tables
schemaMap := make(map[string]*SchemaReport)
currentSchema := ""
// Execute each statement
for i, stmt := range statements {
stmtTrimmed := strings.TrimSpace(stmt)
// Skip comments
if strings.HasPrefix(stmtTrimmed, "--") {
// Check if this is a schema comment to track schema changes
if strings.Contains(stmtTrimmed, "Schema:") {
parts := strings.Split(stmtTrimmed, "Schema:")
if len(parts) > 1 {
currentSchema = strings.TrimSpace(parts[1])
if _, exists := schemaMap[currentSchema]; !exists {
schemaReport := SchemaReport{
Name: currentSchema,
Tables: make([]TableReport, 0),
}
schemaMap[currentSchema] = &schemaReport
}
}
}
continue
}
// Skip empty statements
if stmtTrimmed == "" {
continue
}
fmt.Fprintf(os.Stderr, "Executing statement %d/%d...\n", i+1, len(statements))
_, execErr := conn.Exec(ctx, stmt)
if execErr != nil {
w.executionReport.FailedStatements++
execError := ExecutionError{
StatementNumber: i + 1,
Statement: truncateStatement(stmt),
Error: execErr.Error(),
}
w.executionReport.Errors = append(w.executionReport.Errors, execError)
// Track table creation failure
if strings.Contains(strings.ToUpper(stmtTrimmed), "CREATE TABLE") && currentSchema != "" {
tableName := extractTableNameFromCreate(stmtTrimmed)
if tableName != "" && schemaMap[currentSchema] != nil {
schemaMap[currentSchema].Tables = append(schemaMap[currentSchema].Tables, TableReport{
Name: tableName,
Created: false,
Error: execErr.Error(),
})
}
}
// Continue with next statement instead of failing completely
fmt.Fprintf(os.Stderr, "⚠ Warning: Statement %d failed: %v\n", i+1, execErr)
continue
}
w.executionReport.ExecutedStatements++
// Track successful table creation
if strings.Contains(strings.ToUpper(stmtTrimmed), "CREATE TABLE") && currentSchema != "" {
tableName := extractTableNameFromCreate(stmtTrimmed)
if tableName != "" && schemaMap[currentSchema] != nil {
schemaMap[currentSchema].Tables = append(schemaMap[currentSchema].Tables, TableReport{
Name: tableName,
Created: true,
})
}
}
}
// Convert schema map to slice
for _, schemaReport := range schemaMap {
w.executionReport.Schemas = append(w.executionReport.Schemas, *schemaReport)
}
w.executionReport.EndTime = getCurrentTimestamp()
// Write report if path is specified
if reportPath, ok := w.options.Metadata["report_path"].(string); ok && reportPath != "" {
if err := w.writeReport(reportPath); err != nil {
fmt.Fprintf(os.Stderr, "⚠ Warning: Failed to write report: %v\n", err)
} else {
fmt.Fprintf(os.Stderr, "✓ Report written to: %s\n", reportPath)
}
}
if w.executionReport.FailedStatements > 0 {
fmt.Fprintf(os.Stderr, "⚠ Completed with %d errors out of %d statements\n",
w.executionReport.FailedStatements, w.executionReport.TotalStatements)
} else {
fmt.Fprintf(os.Stderr, "✓ Successfully executed %d statements\n", w.executionReport.ExecutedStatements)
}
return nil
}
// writeReport writes the execution report to a JSON file
func (w *Writer) writeReport(reportPath string) error {
file, err := os.Create(reportPath)
if err != nil {
return fmt.Errorf("failed to create report file: %w", err)
}
defer file.Close()
encoder := json.NewEncoder(file)
encoder.SetIndent("", " ")
if err := encoder.Encode(w.executionReport); err != nil {
return fmt.Errorf("failed to encode report: %w", err)
}
return nil
}
// extractTableNameFromCreate extracts table name from CREATE TABLE statement
func extractTableNameFromCreate(stmt string) string {
// Match: CREATE TABLE [IF NOT EXISTS] schema.table_name or table_name
upper := strings.ToUpper(stmt)
idx := strings.Index(upper, "CREATE TABLE")
if idx == -1 {
return ""
}
rest := strings.TrimSpace(stmt[idx+12:]) // Skip "CREATE TABLE"
// Skip "IF NOT EXISTS"
if strings.HasPrefix(strings.ToUpper(rest), "IF NOT EXISTS") {
rest = strings.TrimSpace(rest[13:])
}
// Get the table name (first token before '(' or whitespace)
tokens := strings.FieldsFunc(rest, func(r rune) bool {
return r == '(' || r == ' ' || r == '\n' || r == '\t'
})
if len(tokens) == 0 {
return ""
}
// Handle schema.table format
fullName := tokens[0]
parts := strings.Split(fullName, ".")
if len(parts) > 1 {
return parts[len(parts)-1]
}
return fullName
}
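// A hedged sketch (not part of this change) of the expected behaviour of
// extractTableNameFromCreate; the test name and sample statements are assumptions.
func TestExtractTableNameFromCreateSketch(t *testing.T) {
cases := map[string]string{
"CREATE TABLE users (id bigint)": "users",
"CREATE TABLE IF NOT EXISTS public.posts (id bigint)": "posts",
"INSERT INTO users VALUES (1)": "",
}
for stmt, want := range cases {
if got := extractTableNameFromCreate(stmt); got != want {
t.Errorf("extractTableNameFromCreate(%q) = %q, want %q", stmt, got, want)
}
}
}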
// truncateStatement truncates long SQL statements for error messages
func truncateStatement(stmt string) string {
const maxLen = 200
if len(stmt) <= maxLen {
return stmt
}
return stmt[:maxLen] + "..."
}
// getCurrentTimestamp returns the current timestamp in a readable format
func getCurrentTimestamp() string {
return time.Now().Format("2006-01-02 15:04:05")
}
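// A minimal usage sketch (an assumption, not code from this change): the report
// path is read from WriterOptions.Metadata["report_path"] above, so a caller
// could wire it roughly like this; how the connection string itself reaches the
// writer is not shown here.
//
// opts := &writers.WriterOptions{
//     Metadata: map[string]interface{}{"report_path": "schema-report.json"},
// }
// w := NewWriter(opts)
// // w.WriteDatabase(db) would then execute the SQL and write the JSON report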

View File

@@ -241,3 +241,67 @@ func TestIsIntegerType(t *testing.T) {
}
}
}
func TestTypeConversion(t *testing.T) {
// Test that invalid Go types are converted to valid PostgreSQL types
db := models.InitDatabase("testdb")
schema := models.InitSchema("public")
// Create a test table with Go types instead of SQL types
table := models.InitTable("test_types", "public")
// Add columns with Go types (invalid for PostgreSQL)
stringCol := models.InitColumn("name", "test_types", "public")
stringCol.Type = "string" // Should be converted to "text"
table.Columns["name"] = stringCol
int64Col := models.InitColumn("big_id", "test_types", "public")
int64Col.Type = "int64" // Should be converted to "bigint"
table.Columns["big_id"] = int64Col
int16Col := models.InitColumn("small_id", "test_types", "public")
int16Col.Type = "int16" // Should be converted to "smallint"
table.Columns["small_id"] = int16Col
schema.Tables = append(schema.Tables, table)
db.Schemas = append(db.Schemas, schema)
// Create writer with output to buffer
var buf bytes.Buffer
options := &writers.WriterOptions{}
writer := NewWriter(options)
writer.writer = &buf
// Write the database
err := writer.WriteDatabase(db)
if err != nil {
t.Fatalf("WriteDatabase failed: %v", err)
}
output := buf.String()
// Print output for debugging
t.Logf("Generated SQL:\n%s", output)
// Verify that Go types were converted to PostgreSQL types
if strings.Contains(output, "string") {
t.Errorf("Output contains 'string' type - should be converted to 'text'\nFull output:\n%s", output)
}
if strings.Contains(output, "int64") {
t.Errorf("Output contains 'int64' type - should be converted to 'bigint'\nFull output:\n%s", output)
}
if strings.Contains(output, "int16") {
t.Errorf("Output contains 'int16' type - should be converted to 'smallint'\nFull output:\n%s", output)
}
// Verify correct PostgreSQL types are present
if !strings.Contains(output, "text") {
t.Errorf("Output missing 'text' type (converted from 'string')\nFull output:\n%s", output)
}
if !strings.Contains(output, "bigint") {
t.Errorf("Output missing 'bigint' type (converted from 'int64')\nFull output:\n%s", output)
}
if !strings.Contains(output, "smallint") {
t.Errorf("Output missing 'smallint' type (converted from 'int16')\nFull output:\n%s", output)
}
}

View File

@@ -1,6 +1,9 @@
package writers
import (
"regexp"
"strings"
"git.warky.dev/wdevs/relspecgo/pkg/models" "git.warky.dev/wdevs/relspecgo/pkg/models"
) )
@@ -28,3 +31,56 @@ type WriterOptions struct {
// Additional options can be added here as needed
Metadata map[string]interface{}
}
// SanitizeFilename removes quotes, comments, and invalid characters from identifiers
// to make them safe for use in filenames. This handles:
// - Double and single quotes: "table_name" or 'table_name' -> table_name
// - DBML comments: table [note: 'description'] -> table
// - Invalid filename characters: replaced with underscores
func SanitizeFilename(name string) string {
// Remove DBML/DCTX style comments in brackets (e.g., [note: 'description'])
commentRegex := regexp.MustCompile(`\s*\[.*?\]\s*`)
name = commentRegex.ReplaceAllString(name, "")
// Remove quotes (both single and double)
name = strings.ReplaceAll(name, `"`, "")
name = strings.ReplaceAll(name, `'`, "")
// Remove backticks (MySQL style identifiers)
name = strings.ReplaceAll(name, "`", "")
// Replace invalid filename characters with underscores
// Invalid chars: / \ : * ? " < > | and control characters
invalidChars := regexp.MustCompile(`[/\\:*?"<>|\x00-\x1f\x7f]`)
name = invalidChars.ReplaceAllString(name, "_")
// Trim whitespace and consecutive underscores
name = strings.TrimSpace(name)
name = regexp.MustCompile(`_+`).ReplaceAllString(name, "_")
name = strings.Trim(name, "_")
return name
}
// SanitizeStructTagValue sanitizes a value to be safely used inside Go struct tags.
// Go struct tags are delimited by backticks, so any backtick in the value would break the syntax.
// This function:
// - Removes DBML/DCTX comments in brackets
// - Removes all quotes (double, single, and backticks)
// - Returns a clean identifier safe for use in struct tags and field names
func SanitizeStructTagValue(value string) string {
// Remove DBML/DCTX style comments in brackets (e.g., [note: 'description'])
commentRegex := regexp.MustCompile(`\s*\[.*?\]\s*`)
value = commentRegex.ReplaceAllString(value, "")
// Trim whitespace
value = strings.TrimSpace(value)
// Remove all quotes: backticks, double quotes, and single quotes
// This ensures the value is clean for use as Go identifiers and struct tag values
value = strings.ReplaceAll(value, "`", "")
value = strings.ReplaceAll(value, `"`, "")
value = strings.ReplaceAll(value, `'`, "")
return value
}
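// A hedged example (not part of this change) of the two sanitizers above; the
// inputs are made up, and the expected outputs follow from the documented rules.
func TestSanitizersSketch(t *testing.T) {
if got := SanitizeFilename(`"orders" [note: 'legacy table']`); got != "orders" {
t.Errorf("SanitizeFilename: want %q, got %q", "orders", got)
}
if got := SanitizeFilename("a/b:c"); got != "a_b_c" {
t.Errorf("SanitizeFilename: want %q, got %q", "a_b_c", got)
}
if got := SanitizeStructTagValue("`created_at` [note: 'timestamp']"); got != "created_at" {
t.Errorf("SanitizeStructTagValue: want %q, got %q", "created_at", got)
}
}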

View File

@@ -0,0 +1,5 @@
// First file - users table basic structure
Table public.users {
id bigint [pk, increment]
email varchar(255) [unique, not null]
}

View File

@@ -0,0 +1,8 @@
// Second file - posts table
Table public.posts {
id bigint [pk, increment]
user_id bigint [not null]
title varchar(200) [not null]
content text
created_at timestamp [not null]
}

View File

@@ -0,0 +1,5 @@
// Third file - adds more columns to users table (tests merging)
Table public.users {
name varchar(100)
created_at timestamp [not null]
}

View File

@@ -0,0 +1,10 @@
// File with commented-out refs - should load last
// Contains relationships that depend on earlier tables
// Ref: public.posts.user_id > public.users.id [ondelete: CASCADE]
Table public.comments {
id bigint [pk, increment]
post_id bigint [not null]
content text [not null]
}

vendor/modules.txt vendored
View File

@@ -1,6 +1,92 @@
# 4d63.com/gocheckcompilerdirectives v1.3.0
## explicit; go 1.22.0
# 4d63.com/gochecknoglobals v0.2.2
## explicit; go 1.18
# github.com/4meepo/tagalign v1.4.2
## explicit; go 1.22.0
# github.com/Abirdcfly/dupword v0.1.3
## explicit; go 1.22.0
# github.com/Antonboom/errname v1.0.0
## explicit; go 1.22.1
# github.com/Antonboom/nilnil v1.0.1
## explicit; go 1.22.0
# github.com/Antonboom/testifylint v1.5.2
## explicit; go 1.22.1
# github.com/BurntSushi/toml v1.4.1-0.20240526193622-a339e1f7089c
## explicit; go 1.18
# github.com/Crocmagnon/fatcontext v0.7.1
## explicit; go 1.22.0
# github.com/Djarvur/go-err113 v0.0.0-20210108212216-aea10b59be24
## explicit; go 1.13
# github.com/GaijinEntertainment/go-exhaustruct/v3 v3.3.1
## explicit; go 1.23.0
# github.com/Masterminds/semver/v3 v3.3.0
## explicit; go 1.21
# github.com/OpenPeeDeeP/depguard/v2 v2.2.1
## explicit; go 1.23.0
# github.com/alecthomas/go-check-sumtype v0.3.1
## explicit; go 1.22.0
# github.com/alexkohler/nakedret/v2 v2.0.5
## explicit; go 1.21
# github.com/alexkohler/prealloc v1.0.0
## explicit; go 1.15
# github.com/alingse/asasalint v0.0.11
## explicit; go 1.18
# github.com/alingse/nilnesserr v0.1.2
## explicit; go 1.22.0
# github.com/ashanbrown/forbidigo v1.6.0
## explicit; go 1.13
# github.com/ashanbrown/makezero v1.2.0
## explicit; go 1.12
# github.com/beorn7/perks v1.0.1
## explicit; go 1.11
# github.com/bkielbasa/cyclop v1.2.3
## explicit; go 1.22.0
# github.com/blizzy78/varnamelen v0.8.0
## explicit; go 1.16
# github.com/bombsimon/wsl/v4 v4.5.0
## explicit; go 1.22
# github.com/breml/bidichk v0.3.2
## explicit; go 1.22.0
# github.com/breml/errchkjson v0.4.0
## explicit; go 1.22.0
# github.com/butuzov/ireturn v0.3.1
## explicit; go 1.18
# github.com/butuzov/mirror v1.3.0
## explicit; go 1.19
# github.com/catenacyber/perfsprint v0.8.2
## explicit; go 1.22.0
# github.com/ccojocar/zxcvbn-go v1.0.2
## explicit; go 1.20
# github.com/cespare/xxhash/v2 v2.3.0
## explicit; go 1.11
# github.com/charithe/durationcheck v0.0.10
## explicit; go 1.14
# github.com/chavacava/garif v0.1.0
## explicit; go 1.16
# github.com/ckaznocha/intrange v0.3.0
## explicit; go 1.22
# github.com/curioswitch/go-reassign v0.3.0
## explicit; go 1.21
# github.com/daixiang0/gci v0.13.5
## explicit; go 1.21
# github.com/davecgh/go-spew v1.1.1
## explicit
github.com/davecgh/go-spew/spew
# github.com/denis-tingaikin/go-header v0.5.0
## explicit; go 1.21
# github.com/ettle/strcase v0.2.0
## explicit; go 1.12
# github.com/fatih/color v1.18.0
## explicit; go 1.17
# github.com/fatih/structtag v1.2.0
## explicit; go 1.12
# github.com/firefart/nonamedreturns v1.0.5
## explicit; go 1.18
# github.com/fsnotify/fsnotify v1.5.4
## explicit; go 1.16
# github.com/fzipp/gocyclo v0.6.0
## explicit; go 1.18
# github.com/gdamore/encoding v1.0.1
## explicit; go 1.9
github.com/gdamore/encoding
@@ -44,9 +130,75 @@ github.com/gdamore/tcell/v2/terminfo/x/xfce
github.com/gdamore/tcell/v2/terminfo/x/xterm
github.com/gdamore/tcell/v2/terminfo/x/xterm_ghostty
github.com/gdamore/tcell/v2/terminfo/x/xterm_kitty
# github.com/ghostiam/protogetter v0.3.9
## explicit; go 1.22.0
# github.com/go-critic/go-critic v0.12.0
## explicit; go 1.22.0
# github.com/go-toolsmith/astcast v1.1.0
## explicit; go 1.16
# github.com/go-toolsmith/astcopy v1.1.0
## explicit; go 1.16
# github.com/go-toolsmith/astequal v1.2.0
## explicit; go 1.18
# github.com/go-toolsmith/astfmt v1.1.0
## explicit; go 1.16
# github.com/go-toolsmith/astp v1.1.0
## explicit; go 1.16
# github.com/go-toolsmith/strparse v1.1.0
## explicit; go 1.16
# github.com/go-toolsmith/typep v1.1.0
## explicit; go 1.16
# github.com/go-viper/mapstructure/v2 v2.2.1
## explicit; go 1.18
# github.com/go-xmlfmt/xmlfmt v1.1.3
## explicit
# github.com/gobwas/glob v0.2.3
## explicit
# github.com/gofrs/flock v0.12.1
## explicit; go 1.21.0
# github.com/golang/protobuf v1.5.3
## explicit; go 1.9
# github.com/golangci/dupl v0.0.0-20250308024227-f665c8d69b32
## explicit; go 1.22.0
# github.com/golangci/go-printf-func-name v0.1.0
## explicit; go 1.22.0
# github.com/golangci/gofmt v0.0.0-20250106114630-d62b90e6713d
## explicit; go 1.22.0
# github.com/golangci/golangci-lint v1.64.8
## explicit; go 1.23.0
# github.com/golangci/misspell v0.6.0
## explicit; go 1.21
# github.com/golangci/plugin-module-register v0.1.1
## explicit; go 1.21
# github.com/golangci/revgrep v0.8.0
## explicit; go 1.21
# github.com/golangci/unconvert v0.0.0-20240309020433-c5143eacb3ed
## explicit; go 1.20
# github.com/google/go-cmp v0.7.0
## explicit; go 1.21
# github.com/google/uuid v1.6.0
## explicit
github.com/google/uuid
# github.com/gordonklaus/ineffassign v0.1.0
## explicit; go 1.14
# github.com/gostaticanalysis/analysisutil v0.7.1
## explicit; go 1.16
# github.com/gostaticanalysis/comment v1.5.0
## explicit; go 1.22.9
# github.com/gostaticanalysis/forcetypeassert v0.2.0
## explicit; go 1.23.0
# github.com/gostaticanalysis/nilerr v0.1.1
## explicit; go 1.15
# github.com/hashicorp/go-immutable-radix/v2 v2.1.0
## explicit; go 1.18
# github.com/hashicorp/go-version v1.7.0
## explicit
# github.com/hashicorp/golang-lru/v2 v2.0.7
## explicit; go 1.18
# github.com/hashicorp/hcl v1.0.0
## explicit
# github.com/hexops/gotextdiff v1.0.3
## explicit; go 1.16
# github.com/inconshreveable/mousetrap v1.1.0
## explicit; go 1.18
github.com/inconshreveable/mousetrap
@@ -68,23 +220,115 @@ github.com/jackc/pgx/v5/pgconn/ctxwatch
github.com/jackc/pgx/v5/pgconn/internal/bgreader
github.com/jackc/pgx/v5/pgproto3
github.com/jackc/pgx/v5/pgtype
# github.com/jgautheron/goconst v1.7.1
## explicit; go 1.13
# github.com/jingyugao/rowserrcheck v1.1.1
## explicit; go 1.13
# github.com/jinzhu/inflection v1.0.0
## explicit
github.com/jinzhu/inflection
# github.com/jjti/go-spancheck v0.6.4
## explicit; go 1.22.1
# github.com/julz/importas v0.2.0
## explicit; go 1.20
# github.com/karamaru-alpha/copyloopvar v1.2.1
## explicit; go 1.21
# github.com/kisielk/errcheck v1.9.0
## explicit; go 1.22.0
# github.com/kkHAIKE/contextcheck v1.1.6
## explicit; go 1.23.0
# github.com/kr/pretty v0.3.1
## explicit; go 1.12
# github.com/kulti/thelper v0.6.3
## explicit; go 1.18
# github.com/kunwardeep/paralleltest v1.0.10
## explicit; go 1.17
# github.com/lasiar/canonicalheader v1.1.2
## explicit; go 1.22.0
# github.com/ldez/exptostd v0.4.2
## explicit; go 1.22.0
# github.com/ldez/gomoddirectives v0.6.1
## explicit; go 1.22.0
# github.com/ldez/grignotin v0.9.0
## explicit; go 1.22.0
# github.com/ldez/tagliatelle v0.7.1
## explicit; go 1.22.0
# github.com/ldez/usetesting v0.4.2
## explicit; go 1.22.0
# github.com/leonklingele/grouper v1.1.2
## explicit; go 1.18
# github.com/lucasb-eyer/go-colorful v1.2.0
## explicit; go 1.12
github.com/lucasb-eyer/go-colorful
# github.com/macabu/inamedparam v0.1.3
## explicit; go 1.20
# github.com/magiconair/properties v1.8.6
## explicit; go 1.13
# github.com/maratori/testableexamples v1.0.0
## explicit; go 1.19
# github.com/maratori/testpackage v1.1.1
## explicit; go 1.20
# github.com/matoous/godox v1.1.0
## explicit; go 1.18
# github.com/mattn/go-colorable v0.1.14
## explicit; go 1.18
# github.com/mattn/go-isatty v0.0.20
## explicit; go 1.15
# github.com/mattn/go-runewidth v0.0.16
## explicit; go 1.9
github.com/mattn/go-runewidth
# github.com/matttproud/golang_protobuf_extensions v1.0.1
## explicit
# github.com/mgechev/revive v1.7.0
## explicit; go 1.22.1
# github.com/mitchellh/go-homedir v1.1.0
## explicit
# github.com/mitchellh/mapstructure v1.5.0
## explicit; go 1.14
# github.com/moricho/tparallel v0.3.2
## explicit; go 1.20
# github.com/nakabonne/nestif v0.3.1
## explicit; go 1.15
# github.com/nishanths/exhaustive v0.12.0
## explicit; go 1.18
# github.com/nishanths/predeclared v0.2.2
## explicit; go 1.14
# github.com/nunnatsa/ginkgolinter v0.19.1
## explicit; go 1.23.0
# github.com/olekukonko/tablewriter v0.0.5
## explicit; go 1.12
# github.com/pelletier/go-toml v1.9.5
## explicit; go 1.12
# github.com/pelletier/go-toml/v2 v2.2.3
## explicit; go 1.21.0
# github.com/pmezard/go-difflib v1.0.0
## explicit
github.com/pmezard/go-difflib/difflib
# github.com/polyfloyd/go-errorlint v1.7.1
## explicit; go 1.22.0
# github.com/prometheus/client_golang v1.12.1
## explicit; go 1.13
# github.com/prometheus/client_model v0.2.0
## explicit; go 1.9
# github.com/prometheus/common v0.32.1
## explicit; go 1.13
# github.com/prometheus/procfs v0.7.3
## explicit; go 1.13
# github.com/puzpuzpuz/xsync/v3 v3.5.1
## explicit; go 1.18
github.com/puzpuzpuz/xsync/v3
# github.com/quasilyte/go-ruleguard v0.4.3-0.20240823090925-0fe6f58b47b1
## explicit; go 1.19
# github.com/quasilyte/go-ruleguard/dsl v0.3.22
## explicit; go 1.15
# github.com/quasilyte/gogrep v0.5.0
## explicit; go 1.16
# github.com/quasilyte/regex/syntax v0.0.0-20210819130434-b3f0c404a727
## explicit; go 1.14
# github.com/quasilyte/stdinfo v0.0.0-20220114132959-f7386bf02567
## explicit; go 1.17
# github.com/raeperd/recvcheck v0.2.0
## explicit; go 1.22.0
# github.com/rivo/tview v0.42.0
## explicit; go 1.18
github.com/rivo/tview
@@ -93,20 +337,76 @@ github.com/rivo/tview
github.com/rivo/uniseg
# github.com/rogpeppe/go-internal v1.14.1
## explicit; go 1.23
# github.com/ryancurrah/gomodguard v1.3.5
## explicit; go 1.22.0
# github.com/ryanrolds/sqlclosecheck v0.5.1
## explicit; go 1.20
# github.com/sanposhiho/wastedassign/v2 v2.1.0
## explicit; go 1.18
# github.com/santhosh-tekuri/jsonschema/v6 v6.0.1
## explicit; go 1.21
# github.com/sashamelentyev/interfacebloat v1.1.0
## explicit; go 1.18
# github.com/sashamelentyev/usestdlibvars v1.28.0
## explicit; go 1.20
# github.com/securego/gosec/v2 v2.22.2
## explicit; go 1.23.0
# github.com/sirupsen/logrus v1.9.3
## explicit; go 1.13
# github.com/sivchari/containedctx v1.0.3
## explicit; go 1.17
# github.com/sivchari/tenv v1.12.1
## explicit; go 1.22.0
# github.com/sonatard/noctx v0.1.0
## explicit; go 1.22.0
# github.com/sourcegraph/go-diff v0.7.0
## explicit; go 1.14
# github.com/spf13/afero v1.12.0
## explicit; go 1.21
# github.com/spf13/cast v1.5.0
## explicit; go 1.18
# github.com/spf13/cobra v1.10.2
## explicit; go 1.15
github.com/spf13/cobra
# github.com/spf13/jwalterweatherman v1.1.0
## explicit
# github.com/spf13/pflag v1.0.10
## explicit; go 1.12
github.com/spf13/pflag
# github.com/spf13/viper v1.12.0
## explicit; go 1.17
# github.com/ssgreg/nlreturn/v2 v2.2.1
## explicit; go 1.13
# github.com/stbenjam/no-sprintf-host-port v0.2.0
## explicit; go 1.18
# github.com/stretchr/objx v0.5.2
## explicit; go 1.20
# github.com/stretchr/testify v1.11.1
## explicit; go 1.17
github.com/stretchr/testify/assert
github.com/stretchr/testify/assert/yaml
github.com/stretchr/testify/require
# github.com/subosito/gotenv v1.4.1
## explicit; go 1.18
# github.com/tdakkota/asciicheck v0.4.1
## explicit; go 1.22.0
# github.com/tetafro/godot v1.5.0
## explicit; go 1.20
# github.com/timakin/bodyclose v0.0.0-20241017074812-ed6a65f985e3
## explicit; go 1.12
# github.com/timonwong/loggercheck v0.10.1
## explicit; go 1.22.0
# github.com/tmthrgd/go-hex v0.0.0-20190904060850-447a3041c3bc
## explicit
github.com/tmthrgd/go-hex
# github.com/tomarrell/wrapcheck/v2 v2.10.0
## explicit; go 1.21
# github.com/tommy-muehle/go-mnd/v2 v2.5.1
## explicit; go 1.12
# github.com/ultraware/funlen v0.2.0
## explicit; go 1.22.0
# github.com/ultraware/whitespace v0.2.0
## explicit; go 1.20
# github.com/uptrace/bun v1.2.16
## explicit; go 1.24.0
github.com/uptrace/bun
@@ -118,6 +418,10 @@ github.com/uptrace/bun/internal
github.com/uptrace/bun/internal/parser
github.com/uptrace/bun/internal/tagparser
github.com/uptrace/bun/schema
# github.com/uudashr/gocognit v1.2.0
## explicit; go 1.19
# github.com/uudashr/iface v1.3.1
## explicit; go 1.22.1
# github.com/vmihailenco/msgpack/v5 v5.4.1
## explicit; go 1.19
github.com/vmihailenco/msgpack/v5
@@ -127,9 +431,37 @@ github.com/vmihailenco/msgpack/v5/msgpcode
github.com/vmihailenco/tagparser/v2
github.com/vmihailenco/tagparser/v2/internal
github.com/vmihailenco/tagparser/v2/internal/parser
# github.com/xen0n/gosmopolitan v1.2.2
## explicit; go 1.19
# github.com/yagipy/maintidx v1.0.0
## explicit; go 1.17
# github.com/yeya24/promlinter v0.3.0
## explicit; go 1.20
# github.com/ykadowak/zerologlint v0.1.5
## explicit; go 1.19
# gitlab.com/bosi/decorder v0.4.2
## explicit; go 1.20
# go-simpler.org/musttag v0.13.0
## explicit; go 1.20
# go-simpler.org/sloglint v0.9.0
## explicit; go 1.22.0
# go.uber.org/atomic v1.7.0
## explicit; go 1.13
# go.uber.org/automaxprocs v1.6.0
## explicit; go 1.20
# go.uber.org/multierr v1.6.0
## explicit; go 1.12
# go.uber.org/zap v1.24.0
## explicit; go 1.19
# golang.org/x/crypto v0.41.0
## explicit; go 1.23.0
golang.org/x/crypto/pbkdf2
# golang.org/x/exp/typeparams v0.0.0-20250210185358-939b2ce775ac
## explicit; go 1.18
# golang.org/x/mod v0.26.0
## explicit; go 1.23.0
# golang.org/x/sync v0.16.0
## explicit; go 1.23.0
# golang.org/x/sys v0.38.0
## explicit; go 1.24.0
golang.org/x/sys/cpu
@@ -156,6 +488,20 @@ golang.org/x/text/transform
golang.org/x/text/unicode/bidi
golang.org/x/text/unicode/norm
golang.org/x/text/width
# golang.org/x/tools v0.35.0
## explicit; go 1.23.0
# google.golang.org/protobuf v1.36.5
## explicit; go 1.21
# gopkg.in/ini.v1 v1.67.0
## explicit
# gopkg.in/yaml.v2 v2.4.0
## explicit; go 1.15
# gopkg.in/yaml.v3 v3.0.1
## explicit
gopkg.in/yaml.v3
# honnef.co/go/tools v0.6.1
## explicit; go 1.23
# mvdan.cc/gofumpt v0.7.0
## explicit; go 1.22
# mvdan.cc/unparam v0.0.0-20240528143540-8a5130ca722f
## explicit; go 1.21