Compare commits

...

12 Commits

Author SHA1 Message Date
Hein
d30fc24f55 chore(release): update package version to 1.0.48
All checks were successful
Release / pkg-deb (push) Successful in -32m6s
Release / test (push) Successful in -32m44s
Release / release (push) Successful in -32m5s
Release / pkg-aur (push) Successful in -32m38s
Release / pkg-rpm (push) Successful in -30m46s
2026-04-30 16:07:33 +02:00
Hein
16a489d0b8 style(pkg): align json and numeric type mappings 2026-04-30 16:07:16 +02:00
Hein
3524e86282 feat: add --types flag and stdlib nullable type support for bun/gorm writers
* Fix pgsql reader double-quoting defaults: normalizePostgresDefault strips
  surrounding SQL string literal quotes from column_default before storing,
  matching the convention used by every other reader.

* Add NullableTypes field to WriterOptions with NullableTypeResolveSpec
  (default) and NullableTypeStdlib constants.

* Both bun and gorm TypeMappers now accept a typeStyle parameter. stdlib
  mode produces sql.NullString/NullInt32/NullTime etc. for nullable scalars,
  plain Go slices for arrays, and time.Time for NOT NULL timestamps. Default
  resolvespec behaviour is unchanged.

* Add --types flag to convert and split commands.

* Update bun/README.md and gorm/README.md with side-by-side generated code
  examples, updated type mapping tables, and Writer Options documentation.
2026-04-30 16:00:54 +02:00
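As a rough sketch of what the `--types stdlib` mode described in the commit message above could emit for a bun model (illustrative only; the struct, column names, and tags are invented here, not actual generator output):

```go
package models

import (
	"database/sql"
	"time"

	"github.com/uptrace/bun"
)

// Hypothetical output: nullable scalars use database/sql null types,
// arrays stay plain Go slices, NOT NULL timestamps use time.Time.
type User struct {
	bun.BaseModel `bun:"table:users"`

	ID        int64          `bun:"id,pk"`
	Nickname  sql.NullString `bun:"nickname"`
	LastSeen  sql.NullTime   `bun:"last_seen"`
	Tags      []string       `bun:"tags,array"`
	CreatedAt time.Time      `bun:"created_at,notnull"`
}
```

With the default `--types resolvespec`, the generated types are unchanged from previous releases.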
Hein
1e54fdcd7f Merge branch 'master' of git.warky.dev:wdevs/relspecgo 2026-04-30 15:15:34 +02:00
fb104ea084 feat: PostgreSQL connections opened by relspec set application_name by default to relspecgo/<version>
All checks were successful
Release / test (push) Successful in -31m41s
Release / release (push) Successful in -28m47s
Release / pkg-aur (push) Successful in -32m40s
Release / pkg-deb (push) Successful in -32m25s
Release / pkg-rpm (push) Successful in -28m30s
2026-04-26 17:48:26 +02:00
837160b77a feat(pgsql): implement application_name handling in connection 2026-04-26 17:45:25 +02:00
ed7130bba8 refactor(pkg): canonicalize base types and adjust length handling
* Update base types to keep explicit modifier forms
* Modify length handling for vector types in tests
2026-04-26 17:35:15 +02:00
4ca1810d07 refactor(dctx): sort table columns and indexes for deterministic output
Some checks failed
Release / test (push) Failing after -31m18s
Release / release (push) Has been skipped
Release / pkg-aur (push) Has been skipped
Release / pkg-deb (push) Has been skipped
Release / pkg-rpm (push) Has been skipped
2026-04-26 12:50:39 +02:00
c0880cb076 feat(pkg): preserve PostgreSQL types in mapDataType function
Some checks failed
Release / test (push) Failing after -31m27s
Release / release (push) Has been skipped
Release / pkg-aur (push) Has been skipped
Release / pkg-deb (push) Has been skipped
Release / pkg-rpm (push) Has been skipped
* Add support for known PostgreSQL types and modifiers
* Implement canonicalization for PostgreSQL types
* Introduce unit tests for PostgreSQL type handling
2026-04-26 12:43:44 +02:00
988798998d test(drawdb): add test for converting column types with modifiers
* Implement tests to ensure explicit type modifiers are preserved during conversion.
* Validate behavior for varchar, numeric, and custom vector types.
2026-04-26 12:35:54 +02:00
535a91d4be feat(docs): add comprehensive story of RelSpecGo's development journey 2026-04-08 22:21:24 +02:00
Hein
3d9cc7ec58 .
All checks were successful
Release / Build and Release (push) Successful in -25m33s
2026-02-20 16:32:19 +02:00
42 changed files with 2113 additions and 415 deletions

.codex Normal file
@@ -42,6 +42,11 @@ relspec convert --from pgsql --from-conn "postgres://..." --to sqlite --to-path
relspec convert --from json --from-list "a.json,b.json" --to yaml --to-path merged.yaml
```
PostgreSQL connections opened by relspec set `application_name` by default to
`relspecgo/<version>` (with component suffixes internally, e.g. readers/writers).
If you need a custom value, provide `application_name` explicitly in the connection
string query parameters.
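For example (an illustrative command only; the connection details are placeholders), an explicit `application_name` in the query string takes precedence over the default:
```bash
relspec convert --from pgsql --from-conn "postgres://user:pass@localhost:5432/db?application_name=my-tool" --to yaml --to-path schema.yaml
```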
### `merge` — Additive schema merge (never modifies existing items)
```bash

Story.md Normal file
@@ -0,0 +1,219 @@
# From Scripts to RelSpec: What Years of Database Pain Taught Me
It started as a need.
A problem I've carried with me since my early PHP days.
Every project meant doing the same work again. Same patterns, same fixes—just in a different codebase.
It became frustrating fast.
I wanted something solid. Not another workaround.
## The Early Tools Phase
Like most things in development, it began small.
A simple PHP script.
Then a few Python scripts.
Just tools—nothing fancy. The goal was straightforward: generate code faster and remove repetitive work. I even experimented with Clarion templates at one point, trying to bend existing systems into something useful.
Then came SQL scripts.
Then PostgreSQL migration stored procedures.
Then small Go programs using templates.
Each step was solving a problem I had at the time. Nothing unified. Nothing polished. Just survival tools.
---
## Argitek: The First Real Attempt
Eventually, those scattered ideas turned into something more structured: Argitek.
Argitek powered a few real systems, including Powerbid. On paper, it sounded solid:
> “Argitek Next is a powerful code generation tool designed to streamline your development workflow.”
And technically, it worked.
It could generate code from predefined templates, adapt to different scenarios, and reduce repetitive work. But something was off.
It never felt *complete*.
Not something I could confidently release.
So I did what many developers do with almost-good-enough tools—I parked it.
---
## The Breaking Point: Database Migrations
Over the years, one problem kept coming back:
Database migrations.
Not the clean, theoretical kind. The real ones.
* PostgreSQL to ORM mismatches
* DBML to SQL hacks
* GORM inconsistencies
* Manual fixes after “automated” migrations failed
It was always messy. Always unpredictable. Always more work than expected.
By 2025, after a particularly tough year, I had accumulated enough of these problems to stop ignoring them.
---
## December 2025: RelSpecGo Begins
In December 2025, I bootstrapped something new:
**RelSpecGo**
It started simple:
* Initial LICENSE
* Basic configuration
* A direction
By late December:
* SQL writer implemented
* Diff command added
January 2026:
* Documentation
February 2026:
* Schema editor UI (focused on relationships)
* MSSQL DDL writer
* Template support with `--from-list`
---
## April 2026: A Real Tool Emerges
By April 2026, it became something I could finally stand behind.
RelSpecGo reached version **1.0.44**, with:
* Packaging for AUR, Debian, and RPM
* Updated documentation and README
* A full toolchain for:
* Convert
* Merge
* Inspect
* Diff
* Template
* Edit
Support includes:
* bun
* dbml
* drizzle
* gorm
* prisma
* mssql
* pgsql
* sqlite
Plus:
* TUI editor
* Template engine
* Bidirectional schema handling
👉 RelSpecGo: [https://git.warky.dev/wdevs/relspecgo](https://git.warky.dev/wdevs/relspecgo)
This wasn't just another generator anymore.
It became a system for managing *database truth*.
---
## Lessons Learned (The Hard Way)
This journey wasn't about tools. It was about understanding databases properly.
Here are the principles that stuck:
### 1. Data Loss Is Not Acceptable
Changing table structures should **never** result in lost data. If it does, the process is broken.
### 2. Minimal Beats Clever
The simpler the system, the easier it is to trust—and to fix.
### 3. Respect the Database
If you fight database rules, you will lose. Stay aligned with them.
### 4. Indexes and Keys Matter More Than You Think
Performance and correctness both depend on them. Ignore them at your own risk.
### 5. Version-Control Your Backend Logic
SQL scripts, functions, migrations—these must live in version control. No exceptions.
### 6. It's Not Migration—It's Adaptation
You're not just moving data. You're fixing inconsistencies and aligning systems.
### 7. Migrations Never Go as Planned
Always assume something will break. Plan for it.
### 8. One Source of Truth Is Non-Negotiable
Your database schema must have a single, authoritative definition.
### 9. ORM Mapping Is a First-Class Concern
Your application models must reflect the database correctly. Drift causes bugs.
### 10. Audit Trails Are Critical
If you can't track changes, you can't trust your system.
### 11. Manage Database Functions Properly
They are part of your system—not an afterthought.
### 12. If It's Hard to Understand, It's Too Complex
Clarity is a feature. Complexity is technical debt.
### 13. GUIDs Have Their Place
Especially when moving data across systems. They solve real problems.
### 14. But Simplicity Still Wins
Numbered primary keys are predictable, efficient, and easy to reason about.
### 15. JSON Is Power—Use It Carefully
It adds flexibility, but too much turns structure into chaos.
---
## Closing Thoughts
Looking back, this wasn't about building a tool.
It was about:
* Reducing friction
* Making systems predictable
* Respecting the database as the core of the system
RelSpecGo is just the current result of that journey.
Not the end.
Just the first version that feels *right*.

@@ -52,6 +52,7 @@ var (
convertPackageName string
convertSchemaFilter string
convertFlattenSchema bool
convertNullableTypes string
)
var convertCmd = &cobra.Command{
@@ -175,6 +176,7 @@ func init() {
convertCmd.Flags().StringVar(&convertPackageName, "package", "", "Package name (for code generation formats like gorm/bun)")
convertCmd.Flags().StringVar(&convertSchemaFilter, "schema", "", "Filter to a specific schema by name (required for formats like dctx that only support single schemas)")
convertCmd.Flags().BoolVar(&convertFlattenSchema, "flatten-schema", false, "Flatten schema.table names to schema_table (useful for databases like SQLite that do not support schemas)")
convertCmd.Flags().StringVar(&convertNullableTypes, "types", "", "Nullable type package for code-gen writers (bun/gorm): 'resolvespec' (default) or 'stdlib' (database/sql)")
err := convertCmd.MarkFlagRequired("from")
if err != nil {
@@ -241,7 +243,7 @@ func runConvert(cmd *cobra.Command, args []string) error {
fmt.Fprintf(os.Stderr, " Schema: %s\n", convertSchemaFilter)
}
if err := writeDatabase(db, convertTargetType, convertTargetPath, convertPackageName, convertSchemaFilter, convertFlattenSchema); err != nil {
if err := writeDatabase(db, convertTargetType, convertTargetPath, convertPackageName, convertSchemaFilter, convertFlattenSchema, convertNullableTypes); err != nil {
return fmt.Errorf("failed to write target: %w", err)
}
@@ -381,13 +383,14 @@ func readDatabaseForConvert(dbType, filePath, connString string) (*models.Databa
return db, nil
}
func writeDatabase(db *models.Database, dbType, outputPath, packageName, schemaFilter string, flattenSchema bool) error {
func writeDatabase(db *models.Database, dbType, outputPath, packageName, schemaFilter string, flattenSchema bool, nullableTypes string) error {
var writer writers.Writer
writerOpts := &writers.WriterOptions{
OutputPath: outputPath,
PackageName: packageName,
FlattenSchema: flattenSchema,
NullableTypes: nullableTypes,
}
switch strings.ToLower(dbType) {

@@ -22,6 +22,7 @@ var (
splitDatabaseName string
splitExcludeSchema string
splitExcludeTables string
splitNullableTypes string
)
var splitCmd = &cobra.Command{
@@ -110,6 +111,7 @@ func init() {
splitCmd.Flags().StringVar(&splitTables, "tables", "", "Comma-separated list of table names to include (case-insensitive)")
splitCmd.Flags().StringVar(&splitExcludeSchema, "exclude-schema", "", "Comma-separated list of schema names to exclude")
splitCmd.Flags().StringVar(&splitExcludeTables, "exclude-tables", "", "Comma-separated list of table names to exclude (case-insensitive)")
splitCmd.Flags().StringVar(&splitNullableTypes, "types", "", "Nullable type package for code-gen writers (bun/gorm): 'resolvespec' (default) or 'stdlib' (database/sql)")
err := splitCmd.MarkFlagRequired("from")
if err != nil {
@@ -185,6 +187,7 @@ func runSplit(cmd *cobra.Command, args []string) error {
splitPackageName,
"", // no schema filter for split
false, // no flatten-schema for split
splitNullableTypes,
)
if err != nil {
return fmt.Errorf("failed to write output: %w", err)

@@ -1,6 +1,6 @@
# Maintainer: Hein (Warky Devs) <hein@warky.dev>
pkgname=relspec
pkgver=1.0.44
pkgver=1.0.48
pkgrel=1
pkgdesc="RelSpec is a comprehensive database relations management tool that reads, transforms, and writes database table specifications across multiple formats and ORMs."
arch=('x86_64' 'aarch64')

@@ -1,5 +1,5 @@
Name: relspec
Version: 1.0.44
Version: 1.0.48
Release: 1%{?dist}
Summary: RelSpec is a comprehensive database relations management tool that reads, transforms, and writes database table specifications across multiple formats and ORMs.

pkg/pgsql/connection.go Normal file
@@ -0,0 +1,85 @@
package pgsql
import (
"context"
"fmt"
"runtime/debug"
"strings"
"github.com/jackc/pgx/v5"
)
const (
defaultApplicationPrefix = "relspecgo"
postgresIdentifierMaxLen = 63
)
// BuildApplicationName returns a PostgreSQL application_name in the form:
// relspecgo/<version>[:<component>]
func BuildApplicationName(component string) string {
appName := fmt.Sprintf("%s/%s", defaultApplicationPrefix, relspecVersion())
component = strings.TrimSpace(component)
if component != "" {
appName = appName + ":" + component
}
if len(appName) > postgresIdentifierMaxLen {
appName = appName[:postgresIdentifierMaxLen]
}
return appName
}
// ParseConfigWithApplicationName parses a connection string and applies a default
// application_name when one is not explicitly provided by the caller.
func ParseConfigWithApplicationName(connString, component string) (*pgx.ConnConfig, error) {
cfg, err := pgx.ParseConfig(connString)
if err != nil {
return nil, err
}
if cfg.RuntimeParams == nil {
cfg.RuntimeParams = map[string]string{}
}
if strings.TrimSpace(cfg.RuntimeParams["application_name"]) == "" {
cfg.RuntimeParams["application_name"] = BuildApplicationName(component)
}
return cfg, nil
}
// Connect establishes a PostgreSQL connection with a default relspec
// application_name when the caller does not provide one in the DSN.
func Connect(ctx context.Context, connString, component string) (*pgx.Conn, error) {
cfg, err := ParseConfigWithApplicationName(connString, component)
if err != nil {
return nil, err
}
return pgx.ConnectConfig(ctx, cfg)
}
func relspecVersion() string {
info, ok := debug.ReadBuildInfo()
if !ok {
return "dev"
}
version := strings.TrimSpace(info.Main.Version)
if version != "" && version != "(devel)" {
return version
}
for _, setting := range info.Settings {
if setting.Key == "vcs.revision" {
revision := strings.TrimSpace(setting.Value)
if len(revision) >= 7 {
return revision[:7]
}
if revision != "" {
return revision
}
}
}
return "dev"
}
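A minimal usage sketch of the connection helper above (the DSN and the "inspect" component label are placeholders chosen for illustration):

```go
package main

import (
	"context"
	"log"

	"git.warky.dev/wdevs/relspecgo/pkg/pgsql"
)

func main() {
	ctx := context.Background()
	// application_name defaults to relspecgo/<version>:inspect unless the DSN sets its own value.
	conn, err := pgsql.Connect(ctx, "postgres://user:pass@localhost:5432/db", "inspect")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close(ctx)
	log.Println("connected as", pgsql.BuildApplicationName("inspect"))
}
```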

@@ -0,0 +1,53 @@
package pgsql
import (
"strings"
"testing"
)
func TestBuildApplicationName_IncludesVersion(t *testing.T) {
got := BuildApplicationName("")
if !strings.HasPrefix(got, "relspecgo/") {
t.Fatalf("BuildApplicationName() = %q, expected prefix relspecgo/", got)
}
}
func TestBuildApplicationName_IncludesComponent(t *testing.T) {
got := BuildApplicationName("reader-pgsql")
if !strings.Contains(got, ":reader-pgsql") {
t.Fatalf("BuildApplicationName(component) = %q, expected component suffix", got)
}
}
func TestBuildApplicationName_RespectsPostgresLengthLimit(t *testing.T) {
got := BuildApplicationName(strings.Repeat("x", 200))
if len(got) > 63 {
t.Fatalf("BuildApplicationName() length = %d, expected <= 63", len(got))
}
}
func TestParseConfigWithApplicationName_AddsWhenMissing(t *testing.T) {
cfg, err := ParseConfigWithApplicationName("postgres://user:pass@localhost:5432/db", "reader-pgsql")
if err != nil {
t.Fatalf("ParseConfigWithApplicationName() error = %v", err)
}
appName := cfg.RuntimeParams["application_name"]
if appName == "" {
t.Fatal("expected application_name to be set")
}
if !strings.HasPrefix(appName, "relspecgo/") {
t.Fatalf("application_name = %q, expected relspecgo/<version> prefix", appName)
}
}
func TestParseConfigWithApplicationName_PreservesExplicitValue(t *testing.T) {
cfg, err := ParseConfigWithApplicationName("postgres://user:pass@localhost:5432/db?application_name=custom-app", "reader-pgsql")
if err != nil {
t.Fatalf("ParseConfigWithApplicationName() error = %v", err)
}
if got := cfg.RuntimeParams["application_name"]; got != "custom-app" {
t.Fatalf("application_name = %q, expected %q", got, "custom-app")
}
}

pkg/pgsql/types_registry.go Normal file
@@ -0,0 +1,250 @@
package pgsql
import (
"sort"
"strings"
)
// TypeSpec describes PostgreSQL type capabilities used by parsers/writers.
type TypeSpec struct {
SupportsLength bool
SupportsPrecision bool
}
var postgresBaseTypes = map[string]TypeSpec{
// Numeric types
"smallint": {},
"integer": {},
"bigint": {},
"decimal": {SupportsPrecision: true},
"numeric": {SupportsPrecision: true},
"real": {},
"double precision": {},
"smallserial": {},
"serial": {},
"bigserial": {},
"money": {},
// Character types
"char": {SupportsLength: true},
"character": {SupportsLength: true},
"varchar": {SupportsLength: true},
"character varying": {SupportsLength: true},
"text": {},
"name": {},
// Binary
"bytea": {},
// Date/time
"timestamp": {SupportsPrecision: true},
"timestamp without time zone": {SupportsPrecision: true},
"timestamp with time zone": {SupportsPrecision: true},
"time": {SupportsPrecision: true},
"time without time zone": {SupportsPrecision: true},
"time with time zone": {SupportsPrecision: true},
"date": {},
"interval": {SupportsPrecision: true},
// Boolean
"boolean": {},
// Geometric
"point": {},
"line": {},
"lseg": {},
"box": {},
"path": {},
"polygon": {},
"circle": {},
// Network
"cidr": {},
"inet": {},
"macaddr": {},
"macaddr8": {},
// Bit string
"bit": {SupportsLength: true},
"bit varying": {SupportsLength: true},
"varbit": {SupportsLength: true},
// Text search
"tsvector": {},
"tsquery": {},
// UUID/XML/JSON
"uuid": {},
"xml": {},
"json": {},
"jsonb": {},
// Range
"int4range": {},
"int8range": {},
"numrange": {},
"tsrange": {},
"tstzrange": {},
"daterange": {},
"int4multirange": {},
"int8multirange": {},
"nummultirange": {},
"tsmultirange": {},
"tstzmultirange": {},
"datemultirange": {},
// Object identifier
"oid": {},
"regclass": {},
"regproc": {},
"regtype": {},
// Pseudo-ish/common built-ins seen in schemas
"record": {},
"void": {},
// Common extensions
"citext": {},
"hstore": {},
"ltree": {},
"lquery": {},
"ltxtquery": {},
"vector": {}, // pgvector: keep explicit modifier form (vector(dim))
"halfvec": {}, // pgvector: keep explicit modifier form (halfvec(dim))
"sparsevec": {}, // pgvector: keep explicit modifier form (sparsevec(dim))
}
var postgresTypeAliases = map[string]string{
// Integer aliases
"int2": "smallint",
"int4": "integer",
"int8": "bigint",
"int": "integer",
// Serial aliases
"serial2": "smallserial",
"serial4": "serial",
"serial8": "bigserial",
// Character aliases
"bpchar": "char",
// Float aliases
"float4": "real",
"float8": "double precision",
"float": "double precision",
// Time aliases
"timestamptz": "timestamp with time zone",
"timetz": "time with time zone",
// Bit alias
"varbit": "bit varying",
// Boolean alias
"bool": "boolean",
}
// GetPostgresBaseTypes returns a sorted-ish stable list of registered base type names.
func GetPostgresBaseTypes() []string {
result := make([]string, 0, len(postgresBaseTypes))
for t := range postgresBaseTypes {
result = append(result, t)
}
sort.Strings(result)
return result
}
// GetPostgresTypes returns the registered PostgreSQL types.
// When includeArrays is true, each base type also includes an array variant ("type[]").
func GetPostgresTypes(includeArrays bool) []string {
base := GetPostgresBaseTypes()
if !includeArrays {
return base
}
result := make([]string, 0, len(base)*2)
result = append(result, base...)
for _, t := range base {
result = append(result, t+"[]")
}
return result
}
// ExtractBaseType returns the type without outer array suffixes and modifiers.
// Examples:
// - varchar(255) -> varchar
// - text[] -> text
// - numeric(10,2)[] -> numeric
func ExtractBaseType(sqlType string) string {
t := normalizeTypeToken(sqlType)
t = strings.TrimSpace(stripArraySuffixes(t))
if idx := strings.Index(t, "("); idx > 0 {
t = strings.TrimSpace(t[:idx])
}
return t
}
// ExtractBaseTypeLower is ExtractBaseType with lowercase normalization.
func ExtractBaseTypeLower(sqlType string) string {
return strings.ToLower(ExtractBaseType(sqlType))
}
// IsArrayType reports whether the SQL type has one or more [] suffixes.
func IsArrayType(sqlType string) bool {
t := normalizeTypeToken(sqlType)
return strings.HasSuffix(t, "[]")
}
// ElementType returns the underlying element type for array types.
// For non-array types, it returns the input unchanged.
func ElementType(sqlType string) string {
t := normalizeTypeToken(sqlType)
return stripArraySuffixes(t)
}
// CanonicalizeBaseType resolves aliases to canonical PostgreSQL type names.
func CanonicalizeBaseType(baseType string) string {
base := strings.ToLower(normalizeTypeToken(baseType))
if canonical, ok := postgresTypeAliases[base]; ok {
return canonical
}
return base
}
// IsKnownPostgresType reports whether a type (including array forms) exists in the registry.
func IsKnownPostgresType(sqlType string) bool {
base := CanonicalizeBaseType(ExtractBaseTypeLower(sqlType))
_, ok := postgresBaseTypes[base]
return ok
}
// SupportsLength reports if this SQL type accepts a single length/dimension modifier.
func SupportsLength(sqlType string) bool {
base := CanonicalizeBaseType(ExtractBaseTypeLower(sqlType))
spec, ok := postgresBaseTypes[base]
return ok && spec.SupportsLength
}
// SupportsPrecision reports if this SQL type accepts precision (and possibly scale).
func SupportsPrecision(sqlType string) bool {
base := CanonicalizeBaseType(ExtractBaseTypeLower(sqlType))
spec, ok := postgresBaseTypes[base]
return ok && spec.SupportsPrecision
}
// HasExplicitTypeModifier reports if the type already includes "(...)".
func HasExplicitTypeModifier(sqlType string) bool {
return strings.Contains(sqlType, "(")
}
func stripArraySuffixes(t string) string {
for strings.HasSuffix(t, "[]") {
t = strings.TrimSpace(strings.TrimSuffix(t, "[]"))
}
return t
}
func normalizeTypeToken(t string) string {
return strings.Join(strings.Fields(strings.TrimSpace(t)), " ")
}

@@ -0,0 +1,99 @@
package pgsql
import "testing"
func TestPostgresTypeRegistry_MasterListIncludesRequestedTypes(t *testing.T) {
required := []string{
"vector",
"integer",
"citext",
}
types := make(map[string]bool)
for _, typ := range GetPostgresTypes(true) {
types[typ] = true
}
for _, typ := range required {
if !types[typ] {
t.Fatalf("master type list missing %q", typ)
}
if !types[typ+"[]"] {
t.Fatalf("master type list missing array variant %q", typ+"[]")
}
}
}
func TestPostgresTypeRegistry_TypeParsingAndCapabilities(t *testing.T) {
tests := []struct {
input string
wantBase string
wantCanonicalBase string
wantArray bool
wantKnown bool
wantLength bool
wantPrecision bool
}{
{
input: "integer[]",
wantBase: "integer",
wantCanonicalBase: "integer",
wantArray: true,
wantKnown: true,
},
{
input: "citext[]",
wantBase: "citext",
wantCanonicalBase: "citext",
wantArray: true,
wantKnown: true,
},
{
input: "vector(1536)",
wantBase: "vector",
wantCanonicalBase: "vector",
wantKnown: true,
wantLength: false,
},
{
input: "numeric(10,2)",
wantBase: "numeric",
wantCanonicalBase: "numeric",
wantKnown: true,
wantPrecision: true,
},
{
input: "int4",
wantBase: "int4",
wantCanonicalBase: "integer",
wantKnown: true,
},
}
for _, tt := range tests {
t.Run(tt.input, func(t *testing.T) {
base := ExtractBaseTypeLower(tt.input)
if base != tt.wantBase {
t.Fatalf("ExtractBaseTypeLower(%q) = %q, want %q", tt.input, base, tt.wantBase)
}
canonical := CanonicalizeBaseType(base)
if canonical != tt.wantCanonicalBase {
t.Fatalf("CanonicalizeBaseType(%q) = %q, want %q", base, canonical, tt.wantCanonicalBase)
}
if IsArrayType(tt.input) != tt.wantArray {
t.Fatalf("IsArrayType(%q) = %v, want %v", tt.input, IsArrayType(tt.input), tt.wantArray)
}
if IsKnownPostgresType(tt.input) != tt.wantKnown {
t.Fatalf("IsKnownPostgresType(%q) = %v, want %v", tt.input, IsKnownPostgresType(tt.input), tt.wantKnown)
}
if SupportsLength(tt.input) != tt.wantLength {
t.Fatalf("SupportsLength(%q) = %v, want %v", tt.input, SupportsLength(tt.input), tt.wantLength)
}
if SupportsPrecision(tt.input) != tt.wantPrecision {
t.Fatalf("SupportsPrecision(%q) = %v, want %v", tt.input, SupportsPrecision(tt.input), tt.wantPrecision)
}
})
}
}

@@ -12,6 +12,7 @@ import (
"strings"
"git.warky.dev/wdevs/relspecgo/pkg/models"
"git.warky.dev/wdevs/relspecgo/pkg/pgsql"
"git.warky.dev/wdevs/relspecgo/pkg/readers"
)
@@ -700,16 +701,22 @@ func (r *Reader) extractBunTag(tag string) string {
// parseTypeWithLength parses a type string and extracts length if present
// e.g., "varchar(255)" returns ("varchar", 255)
func (r *Reader) parseTypeWithLength(typeStr string) (baseType string, length int) {
typeStr = strings.TrimSpace(typeStr)
baseType = typeStr
// Check for type with length: varchar(255), char(10), etc.
re := regexp.MustCompile(`^([a-zA-Z\s]+)\((\d+)\)$`)
matches := re.FindStringSubmatch(typeStr)
if len(matches) == 3 {
if _, err := fmt.Sscanf(matches[2], "%d", &length); err == nil {
baseType = strings.TrimSpace(matches[1])
return
rawBaseType := strings.TrimSpace(matches[1])
if pgsql.SupportsLength(rawBaseType) {
if _, err := fmt.Sscanf(matches[2], "%d", &length); err == nil {
baseType = pgsql.CanonicalizeBaseType(rawBaseType)
return
}
}
}
baseType = typeStr
return
}

@@ -71,8 +71,11 @@ func TestReader_ReadDatabase_Simple(t *testing.T) {
if !emailCol.NotNull {
t.Error("Column 'email' should be NOT NULL (explicit 'notnull' tag)")
}
if emailCol.Type != "varchar" || emailCol.Length != 255 {
t.Errorf("Expected email type 'varchar(255)', got '%s' with length %d", emailCol.Type, emailCol.Length)
if emailCol.Type != "varchar" && emailCol.Type != "varchar(255)" {
t.Errorf("Expected email type 'varchar' or 'varchar(255)', got '%s' with length %d", emailCol.Type, emailCol.Length)
}
if emailCol.Length != 255 {
t.Errorf("Expected email length 255, got %d", emailCol.Length)
}
// Verify name column - primitive string type should be NOT NULL by default in Bun
@@ -356,6 +359,33 @@ func TestReader_ReadDatabase_Complex(t *testing.T) {
}
}
func TestParseTypeWithLength_PreservesExplicitTypeModifiers(t *testing.T) {
reader := &Reader{}
tests := []struct {
input string
wantType string
wantLength int
}{
{"varchar(255)", "varchar", 255},
{"character varying(120)", "character varying", 120},
{"vector(1536)", "vector(1536)", 0},
{"numeric(10,2)", "numeric(10,2)", 0},
}
for _, tt := range tests {
t.Run(tt.input, func(t *testing.T) {
gotType, gotLength := reader.parseTypeWithLength(tt.input)
if gotType != tt.wantType {
t.Fatalf("parseTypeWithLength(%q) type = %q, want %q", tt.input, gotType, tt.wantType)
}
if gotLength != tt.wantLength {
t.Fatalf("parseTypeWithLength(%q) length = %d, want %d", tt.input, gotLength, tt.wantLength)
}
})
}
}
func TestReader_ReadSchema(t *testing.T) {
opts := &readers.ReaderOptions{
FilePath: filepath.Join("..", "..", "..", "tests", "assets", "bun", "simple.go"),
@@ -485,9 +515,9 @@ func TestReader_NullableTypes(t *testing.T) {
// Test all nullability scenarios
tests := []struct {
column string
notNull bool
reason string
}{
{"id", true, "primary key"},
{"user_id", true, "explicit notnull tag"},

@@ -567,110 +567,182 @@ func (r *Reader) parseDBML(content string) (*models.Database, error) {
// parseColumn parses a DBML column definition
func (r *Reader) parseColumn(line, tableName, schemaName string) (*models.Column, *models.Constraint) {
// Format: column_name type [attributes] // comment
parts := strings.Fields(line)
if len(parts) < 2 {
lineNoComment, inlineComment := splitInlineComment(line)
signature, attrs := splitColumnSignatureAndAttrs(lineNoComment)
columnName, columnType, ok := parseColumnSignature(signature)
if !ok {
return nil, nil
}
columnName := stripQuotes(parts[0])
columnType := stripQuotes(parts[1])
column := models.InitColumn(columnName, tableName, schemaName)
column.Type = columnType
var constraint *models.Constraint
// Parse attributes in brackets
if strings.Contains(line, "[") && strings.Contains(line, "]") {
attrStart := strings.Index(line, "[")
attrEnd := strings.Index(line, "]")
if attrStart < attrEnd {
attrs := line[attrStart+1 : attrEnd]
attrList := strings.Split(attrs, ",")
if attrs != "" {
attrList := strings.Split(attrs, ",")
for _, attr := range attrList {
attr = strings.TrimSpace(attr)
if strings.Contains(attr, "primary key") || attr == "pk" {
column.IsPrimaryKey = true
column.NotNull = true
} else if strings.Contains(attr, "not null") {
column.NotNull = true
} else if attr == "increment" {
column.AutoIncrement = true
} else if strings.HasPrefix(attr, "default:") {
defaultVal := strings.TrimSpace(strings.TrimPrefix(attr, "default:"))
column.Default = strings.Trim(defaultVal, "'\"")
} else if attr == "unique" {
// Create a unique constraint
// Clean table name by removing leading underscores to avoid double underscores
cleanTableName := strings.TrimLeft(tableName, "_")
uniqueConstraint := models.InitConstraint(
fmt.Sprintf("ukey_%s_%s", cleanTableName, columnName),
models.UniqueConstraint,
)
uniqueConstraint.Schema = schemaName
uniqueConstraint.Table = tableName
uniqueConstraint.Columns = []string{columnName}
// Store it to be added later
if constraint == nil {
constraint = uniqueConstraint
}
} else if strings.HasPrefix(attr, "note:") {
// Parse column note/comment
note := strings.TrimSpace(strings.TrimPrefix(attr, "note:"))
column.Comment = strings.Trim(note, "'\"")
} else if strings.HasPrefix(attr, "ref:") {
// Parse inline reference
// DBML semantics depend on context:
// - On FK column: ref: < target means "this FK references target"
// - On PK column: ref: < source means "source references this PK" (reverse notation)
refStr := strings.TrimSpace(strings.TrimPrefix(attr, "ref:"))
// Check relationship direction operator
refOp := strings.TrimSpace(refStr)
var isReverse bool
if strings.HasPrefix(refOp, "<") {
// < means "is referenced by" - only makes sense on PK columns
isReverse = column.IsPrimaryKey
}
// > means "references" - always a forward FK, never reverse
constraint = r.parseRef(refStr)
if constraint != nil {
if isReverse {
// Reverse: parsed ref is SOURCE, current column is TARGET
// Constraint should be ON the source table
constraint.Schema = constraint.ReferencedSchema
constraint.Table = constraint.ReferencedTable
constraint.Columns = constraint.ReferencedColumns
constraint.ReferencedSchema = schemaName
constraint.ReferencedTable = tableName
constraint.ReferencedColumns = []string{columnName}
} else {
// Forward: current column is SOURCE, parsed ref is TARGET
// Standard FK: constraint is ON current table
constraint.Schema = schemaName
constraint.Table = tableName
constraint.Columns = []string{columnName}
}
// Generate constraint name based on table and columns
constraint.Name = fmt.Sprintf("fk_%s_%s", constraint.Table, strings.Join(constraint.Columns, "_"))
} }
// Generate constraint name based on table and columns
constraint.Name = fmt.Sprintf("fk_%s_%s", constraint.Table, strings.Join(constraint.Columns, "_"))
} }
} }
} }
} }
// Parse inline comment
if strings.Contains(line, "//") {
commentStart := strings.Index(line, "//")
column.Comment = strings.TrimSpace(line[commentStart+2:])
if inlineComment != "" {
column.Comment = inlineComment
}
return column, constraint
}
func splitInlineComment(line string) (content string, inlineComment string) {
commentStart := strings.Index(line, "//")
if commentStart == -1 {
return line, ""
}
return strings.TrimSpace(line[:commentStart]), strings.TrimSpace(line[commentStart+2:])
}
func splitColumnSignatureAndAttrs(line string) (signature string, attrs string) {
trimmed := strings.TrimSpace(line)
if trimmed == "" || !strings.HasSuffix(trimmed, "]") {
return trimmed, ""
}
bracketDepth := 0
for i := len(trimmed) - 1; i >= 0; i-- {
switch trimmed[i] {
case ']':
bracketDepth++
case '[':
bracketDepth--
if bracketDepth == 0 {
// DBML attributes are a trailing [ ... ] block preceded by whitespace.
// This avoids confusing array types like text[] with attribute blocks.
if i > 0 && (trimmed[i-1] == ' ' || trimmed[i-1] == '\t') {
return strings.TrimSpace(trimmed[:i]), strings.TrimSpace(trimmed[i+1 : len(trimmed)-1])
}
}
}
}
return trimmed, ""
}
func parseColumnSignature(signature string) (columnName string, columnType string, ok bool) {
signature = strings.TrimSpace(signature)
if signature == "" {
return "", "", false
}
var splitAt int
if signature[0] == '"' || signature[0] == '\'' {
quote := signature[0]
splitAt = 1
for splitAt < len(signature) {
if signature[splitAt] == quote {
splitAt++
break
}
splitAt++
}
} else {
for splitAt < len(signature) && signature[splitAt] != ' ' && signature[splitAt] != '\t' {
splitAt++
}
}
if splitAt <= 0 || splitAt >= len(signature) {
return "", "", false
}
columnName = stripQuotes(strings.TrimSpace(signature[:splitAt]))
columnType = stripWrappingQuotes(strings.TrimSpace(signature[splitAt:]))
if columnName == "" || columnType == "" {
return "", "", false
}
return columnName, columnType, true
}
func stripWrappingQuotes(s string) string {
s = strings.TrimSpace(s)
if len(s) >= 2 && ((s[0] == '"' && s[len(s)-1] == '"') || (s[0] == '\'' && s[len(s)-1] == '\'')) {
return s[1 : len(s)-1]
}
return s
}
// parseIndex parses a DBML index definition
func (r *Reader) parseIndex(line, tableName, schemaName string) *models.Index {
// Format: (columns) [attributes] OR columnname [attributes]

@@ -839,6 +839,67 @@ func TestConstraintNaming(t *testing.T) {
}
}
func TestParseColumn_PostgresTypes(t *testing.T) {
reader := &Reader{}
tests := []struct {
name string
line string
wantName string
wantType string
wantNotNull bool
wantComment string
}{
{
name: "array type with attrs",
line: "tags text[] [not null]",
wantName: "tags",
wantType: "text[]",
wantNotNull: true,
},
{
name: "vector with dimension",
line: "embedding vector(1536)",
wantName: "embedding",
wantType: "vector(1536)",
},
{
name: "multi word timestamp type",
line: "published_at timestamp with time zone",
wantName: "published_at",
wantType: "timestamp with time zone",
},
{
name: "array type with inline comment",
line: "labels varchar(20)[] // column labels",
wantName: "labels",
wantType: "varchar(20)[]",
wantComment: "column labels",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
col, _ := reader.parseColumn(tt.line, "events", "public")
if col == nil {
t.Fatalf("parseColumn() returned nil column")
}
if col.Name != tt.wantName {
t.Errorf("column name = %q, want %q", col.Name, tt.wantName)
}
if col.Type != tt.wantType {
t.Errorf("column type = %q, want %q", col.Type, tt.wantType)
}
if col.NotNull != tt.wantNotNull {
t.Errorf("column not null = %v, want %v", col.NotNull, tt.wantNotNull)
}
if col.Comment != tt.wantComment {
t.Errorf("column comment = %q, want %q", col.Comment, tt.wantComment)
}
})
}
}
func getKeys[V any](m map[string]V) []string {
keys := make([]string, 0, len(m))
for k := range m {

@@ -7,6 +7,7 @@ import (
"strings"
"git.warky.dev/wdevs/relspecgo/pkg/models"
"git.warky.dev/wdevs/relspecgo/pkg/pgsql"
"git.warky.dev/wdevs/relspecgo/pkg/readers"
)
@@ -232,7 +233,19 @@ func (r *Reader) convertField(dctxField *models.DCTXField, tableName string) ([]
// mapDataType maps Clarion data types to SQL types
func (r *Reader) mapDataType(clarionType string, size int) (sqlType string, precision int) {
switch strings.ToUpper(clarionType) {
trimmedType := strings.TrimSpace(clarionType)
// Preserve known PostgreSQL types (including arrays and extension types)
// from DCTX input instead of coercing them to generic text.
if pgsql.IsKnownPostgresType(trimmedType) {
pgType := canonicalizePostgresType(trimmedType)
if !pgsql.HasExplicitTypeModifier(pgType) && size > 0 && pgsql.SupportsLength(pgType) {
return pgType, size
}
return pgType, 0
}
switch strings.ToUpper(trimmedType) {
case "LONG":
if size == 8 {
return "bigint", 0
@@ -306,6 +319,32 @@ func (r *Reader) mapDataType(clarionType string, size int) (sqlType string, prec
}
}
func canonicalizePostgresType(typeStr string) string {
t := strings.ToLower(strings.Join(strings.Fields(strings.TrimSpace(typeStr)), " "))
if t == "" {
return ""
}
// Handle array suffixes
arrayCount := 0
for strings.HasSuffix(t, "[]") {
arrayCount++
t = strings.TrimSpace(strings.TrimSuffix(t, "[]"))
}
// Handle optional type modifier
modifier := ""
if idx := strings.Index(t, "("); idx > 0 {
if end := strings.LastIndex(t, ")"); end > idx {
modifier = t[idx : end+1]
t = strings.TrimSpace(t[:idx])
}
}
base := pgsql.CanonicalizeBaseType(t)
return base + modifier + strings.Repeat("[]", arrayCount)
}
// processKeys processes DCTX keys and converts them to indexes and primary keys
func (r *Reader) processKeys(dctxTable *models.DCTXTable, table *models.Table, fieldGuidMap map[string]string) error {
for _, dctxKey := range dctxTable.Keys {

@@ -493,3 +493,55 @@ func TestRelationships(t *testing.T) {
}
}
}
func TestMapDataType_PostgresTypes(t *testing.T) {
reader := &Reader{}
tests := []struct {
name string
inputType string
size int
wantType string
wantLength int
}{
{
name: "integer array preserved",
inputType: "integer[]",
wantType: "integer[]",
},
{
name: "citext array preserved",
inputType: "citext[]",
wantType: "citext[]",
},
{
name: "vector modifier preserved",
inputType: "vector(1536)",
wantType: "vector(1536)",
},
{
name: "alias canonicalized in array",
inputType: "int4[]",
wantType: "integer[]",
},
{
name: "varchar length from size",
inputType: "varchar",
size: 120,
wantType: "varchar",
wantLength: 120,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
gotType, gotLength := reader.mapDataType(tt.inputType, tt.size)
if gotType != tt.wantType {
t.Fatalf("mapDataType(%q, %d) type = %q, want %q", tt.inputType, tt.size, gotType, tt.wantType)
}
if gotLength != tt.wantLength {
t.Fatalf("mapDataType(%q, %d) length = %d, want %d", tt.inputType, tt.size, gotLength, tt.wantLength)
}
})
}
}

@@ -8,6 +8,7 @@ import (
"strings"
"git.warky.dev/wdevs/relspecgo/pkg/models"
"git.warky.dev/wdevs/relspecgo/pkg/pgsql"
"git.warky.dev/wdevs/relspecgo/pkg/readers"
"git.warky.dev/wdevs/relspecgo/pkg/writers/drawdb"
)
@@ -231,30 +232,35 @@ func (r *Reader) convertToColumn(field *drawdb.DrawDBField, tableName, schemaNam
// Parse type and dimensions
typeStr := field.Type
typeStr = strings.TrimSpace(typeStr)
column.Type = typeStr
// Try to extract length/precision from type string like "varchar(255)" or "decimal(10,2)"
if strings.Contains(typeStr, "(") {
parts := strings.Split(typeStr, "(")
column.Type = parts[0]
baseType := strings.TrimSpace(parts[0])
if len(parts) > 1 {
dimensions := strings.TrimSuffix(parts[1], ")")
if strings.Contains(dimensions, ",") {
// Precision and scale (e.g., decimal(10,2))
dims := strings.Split(dimensions, ",")
if precision, err := strconv.Atoi(strings.TrimSpace(dims[0])); err == nil {
column.Precision = precision
}
if len(dims) > 1 {
if scale, err := strconv.Atoi(strings.TrimSpace(dims[1])); err == nil {
column.Scale = scale
// Precision and scale (e.g., decimal(10,2), numeric(10,2))
if pgsql.SupportsPrecision(baseType) {
dims := strings.Split(dimensions, ",")
if precision, err := strconv.Atoi(strings.TrimSpace(dims[0])); err == nil {
column.Precision = precision
}
if len(dims) > 1 {
if scale, err := strconv.Atoi(strings.TrimSpace(dims[1])); err == nil {
column.Scale = scale
}
}
}
} else {
// Just length (e.g., varchar(255))
if length, err := strconv.Atoi(dimensions); err == nil {
column.Length = length
if pgsql.SupportsLength(baseType) {
if length, err := strconv.Atoi(dimensions); err == nil {
column.Length = length
}
} }
}
}

@@ -6,6 +6,7 @@ import (
"git.warky.dev/wdevs/relspecgo/pkg/models"
"git.warky.dev/wdevs/relspecgo/pkg/readers"
"git.warky.dev/wdevs/relspecgo/pkg/writers/drawdb"
)
func TestReader_ReadDatabase_Simple(t *testing.T) {
@@ -288,6 +289,61 @@ func TestReader_ReadDatabase_Complex(t *testing.T) {
}
}
func TestConvertToColumn_PreservesExplicitTypeModifiers(t *testing.T) {
reader := &Reader{}
tests := []struct {
name string
fieldType string
wantType string
wantLength int
wantPrecision int
wantScale int
}{
{
name: "varchar with length",
fieldType: "varchar(255)",
wantType: "varchar(255)",
wantLength: 255,
},
{
name: "numeric precision/scale",
fieldType: "numeric(10,2)",
wantType: "numeric(10,2)",
wantPrecision: 10,
wantScale: 2,
},
{
name: "custom vector modifier",
fieldType: "vector(1536)",
wantType: "vector(1536)",
wantLength: 0,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
field := &drawdb.DrawDBField{
Name: tt.name,
Type: tt.fieldType,
}
col := reader.convertToColumn(field, "events", "public")
if col.Type != tt.wantType {
t.Fatalf("column type = %q, want %q", col.Type, tt.wantType)
}
if col.Length != tt.wantLength {
t.Fatalf("column length = %d, want %d", col.Length, tt.wantLength)
}
if col.Precision != tt.wantPrecision {
t.Fatalf("column precision = %d, want %d", col.Precision, tt.wantPrecision)
}
if col.Scale != tt.wantScale {
t.Fatalf("column scale = %d, want %d", col.Scale, tt.wantScale)
}
})
}
}
func TestReader_ReadSchema(t *testing.T) {
opts := &readers.ReaderOptions{
FilePath: filepath.Join("..", "..", "..", "tests", "assets", "drawdb", "simple.json"),

@@ -12,6 +12,7 @@ import (
"strings"
"git.warky.dev/wdevs/relspecgo/pkg/models"
"git.warky.dev/wdevs/relspecgo/pkg/pgsql"
"git.warky.dev/wdevs/relspecgo/pkg/readers"
)
@@ -676,19 +677,8 @@ func (r *Reader) extractTableFromGormTag(tag string) (tablename string, schemaNa
// deriveTableName derives a table name from struct name
func (r *Reader) deriveTableName(structName string) string {
// Remove "Model" prefix if present
name := strings.TrimPrefix(structName, "Model")
// Convert PascalCase to snake_case
var result strings.Builder
for i, r := range name {
if i > 0 && r >= 'A' && r <= 'Z' {
result.WriteRune('_')
}
result.WriteRune(r)
}
return strings.ToLower(result.String())
// Remove "Model" prefix if present, use the name as-is without transformation
return strings.TrimPrefix(structName, "Model")
}
// parseColumn parses a struct field into a Column model
@@ -784,11 +774,14 @@ func (r *Reader) extractGormTag(tag string) string {
// parseTypeWithLength parses a type string and extracts length if present
// e.g., "varchar(255)" returns ("varchar", 255)
func (r *Reader) parseTypeWithLength(typeStr string) (baseType string, length int) {
typeStr = strings.TrimSpace(typeStr)
baseType = typeStr
// Check for type with length: varchar(255), char(10), etc.
// Also handle precision/scale: numeric(10,2)
if strings.Contains(typeStr, "(") {
idx := strings.Index(typeStr, "(")
baseType = strings.TrimSpace(typeStr[:idx])
rawBaseType := strings.TrimSpace(typeStr[:idx])
// Extract numbers from parentheses
parens := typeStr[idx+1:]
@@ -796,14 +789,16 @@ func (r *Reader) parseTypeWithLength(typeStr string) (baseType string, length in
parens = parens[:endIdx]
}
// For now, just handle single number (length)
if !strings.Contains(parens, ",") {
// Only treat as "length" for text-ish SQL types.
// This avoids converting custom modifiers like vector(1536) into Length.
if pgsql.SupportsLength(rawBaseType) && !strings.Contains(parens, ",") {
if _, err := fmt.Sscanf(parens, "%d", &length); err == nil {
baseType = pgsql.CanonicalizeBaseType(rawBaseType)
return
}
}
}
baseType = typeStr
return
}

@@ -71,8 +71,11 @@ func TestReader_ReadDatabase_Simple(t *testing.T) {
if !emailCol.NotNull {
t.Error("Column 'email' should be NOT NULL (explicit 'not null' tag)")
}
if emailCol.Type != "varchar" || emailCol.Length != 255 {
t.Errorf("Expected email type 'varchar(255)', got '%s' with length %d", emailCol.Type, emailCol.Length)
if emailCol.Type != "varchar" && emailCol.Type != "varchar(255)" {
t.Errorf("Expected email type 'varchar' or 'varchar(255)', got '%s' with length %d", emailCol.Type, emailCol.Length)
}
if emailCol.Length != 255 {
t.Errorf("Expected email length 255, got %d", emailCol.Length)
}
// Verify name column - primitive string type should be NOT NULL by default
@@ -363,6 +366,33 @@ func TestReader_ReadDatabase_Complex(t *testing.T) {
}
}
func TestParseTypeWithLength_PreservesExplicitTypeModifiers(t *testing.T) {
reader := &Reader{}
tests := []struct {
input string
wantType string
wantLength int
}{
{"varchar(255)", "varchar", 255},
{"character varying(120)", "character varying", 120},
{"vector(1536)", "vector(1536)", 0},
{"numeric(10,2)", "numeric(10,2)", 0},
}
for _, tt := range tests {
t.Run(tt.input, func(t *testing.T) {
gotType, gotLength := reader.parseTypeWithLength(tt.input)
if gotType != tt.wantType {
t.Fatalf("parseTypeWithLength(%q) type = %q, want %q", tt.input, gotType, tt.wantType)
}
if gotLength != tt.wantLength {
t.Fatalf("parseTypeWithLength(%q) length = %d, want %d", tt.input, gotLength, tt.wantLength)
}
})
}
}
func TestReader_ReadSchema(t *testing.T) {
opts := &readers.ReaderOptions{
FilePath: filepath.Join("..", "..", "..", "tests", "assets", "gorm", "simple.go"),

@@ -89,6 +89,10 @@ postgres://user@localhost/mydb?sslmode=disable
postgres://user:pass@db.example.com:5432/production?sslmode=require
```
By default, relspec sets `application_name` to `relspecgo/<version>` for PostgreSQL
sessions so they are identifiable in `pg_stat_activity`. If you provide
`application_name` in the connection string, your explicit value is preserved.
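An explicit value in the connection string (shown here with a placeholder name) is left untouched:
```
postgres://user:pass@db.example.com:5432/production?application_name=my-service
```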
## Extracted Information
### Tables

@@ -206,8 +206,19 @@ func (r *Reader) queryColumns(schemaName string) (map[string]map[string]*models.
c.numeric_precision,
c.numeric_scale,
c.udt_name,
pg_catalog.format_type(a.atttypid, a.atttypmod) as formatted_data_type,
col_description((c.table_schema||'.'||c.table_name)::regclass, c.ordinal_position) as description
FROM information_schema.columns c
JOIN pg_catalog.pg_namespace n
ON n.nspname = c.table_schema
JOIN pg_catalog.pg_class cls
ON cls.relname = c.table_name
AND cls.relnamespace = n.oid
JOIN pg_catalog.pg_attribute a
ON a.attrelid = cls.oid
AND a.attname = c.column_name
AND a.attnum > 0
AND NOT a.attisdropped
WHERE c.table_schema = $1
ORDER BY c.table_schema, c.table_name, c.ordinal_position
`
@@ -221,12 +232,12 @@ func (r *Reader) queryColumns(schemaName string) (map[string]map[string]*models.
columnsMap := make(map[string]map[string]*models.Column)
for rows.Next() {
var schema, tableName, columnName, isNullable, dataType, udtName string
var schema, tableName, columnName, isNullable, dataType, udtName, formattedDataType string
var ordinalPosition int
var columnDefault, description *string
var charMaxLength, numPrecision, numScale *int
if err := rows.Scan(&schema, &tableName, &columnName, &ordinalPosition, &columnDefault, &isNullable, &dataType, &charMaxLength, &numPrecision, &numScale, &udtName, &description); err != nil {
if err := rows.Scan(&schema, &tableName, &columnName, &ordinalPosition, &columnDefault, &isNullable, &dataType, &charMaxLength, &numPrecision, &numScale, &udtName, &formattedDataType, &description); err != nil {
return nil, err
}
@@ -241,12 +252,12 @@ func (r *Reader) queryColumns(schemaName string) (map[string]map[string]*models.
column.AutoIncrement = true column.AutoIncrement = true
column.Default = defaultVal column.Default = defaultVal
} else { } else {
column.Default = defaultVal column.Default = normalizePostgresDefault(defaultVal)
} }
} }
// Map data type, preserving serial types when detected // Map data type, preserving serial types when detected
column.Type = r.mapDataType(dataType, udtName, hasNextval) column.Type = r.mapDataType(dataType, udtName, formattedDataType, hasNextval)
column.NotNull = (isNullable == "NO") column.NotNull = (isNullable == "NO")
column.Sequence = uint(ordinalPosition) column.Sequence = uint(ordinalPosition)
@@ -602,3 +613,30 @@ func (r *Reader) parseIndexDefinition(indexName, tableName, schema, indexDef str
return index, nil return index, nil
} }
// normalizePostgresDefault converts a raw PostgreSQL column_default expression into the
// unquoted string value that the model convention expects. PostgreSQL stores string
// literal defaults as 'value' or 'value'::type (e.g. '{}'::text[]), while every other
// reader stores the bare value so the writer can re-quote it correctly.
func normalizePostgresDefault(defaultVal string) string {
if !strings.HasPrefix(defaultVal, "'") {
return defaultVal
}
// Decode the SQL string literal: skip the leading quote, unescape '' → ', stop at
// the first unescaped closing quote (any trailing ::cast is ignored).
rest := defaultVal[1:]
var buf strings.Builder
for i := 0; i < len(rest); i++ {
if rest[i] == '\'' {
if i+1 < len(rest) && rest[i+1] == '\'' {
buf.WriteByte('\'')
i++
} else {
break
}
} else {
buf.WriteByte(rest[i])
}
}
return buf.String()
}
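A quick illustration of those decoding rules as a table test (a sketch; it assumes it sits in the
same package and test file as the reader, where `testing` is already imported):
```go
func TestNormalizePostgresDefault_Sketch(t *testing.T) {
	cases := map[string]string{
		"'{}'::text[]": "{}",    // string literal with a trailing ::cast
		"'it''s'":      "it's",  // doubled quote unescaped to a single quote
		"now()":        "now()", // non-literal defaults pass through unchanged
	}
	for in, want := range cases {
		if got := normalizePostgresDefault(in); got != want {
			t.Errorf("normalizePostgresDefault(%q) = %q, want %q", in, got, want)
		}
	}
}
```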

View File

@@ -244,7 +244,7 @@ func (r *Reader) ReadTable() (*models.Table, error) {
// connect establishes a connection to the PostgreSQL database
func (r *Reader) connect() error {
conn, err := pgsql.Connect(r.ctx, r.options.ConnectionString, "reader-pgsql")
if err != nil {
return err
}
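For context, here is a minimal sketch of how such a wrapper can apply the `application_name`
default described in the README above. It is an assumption about `pkg/pgsql`, not its actual
code, and it presumes pgx v5's `ParseConfig`/`ConnectConfig` API plus a hypothetical package
version constant:
```go
package pgsql

import (
	"context"

	"github.com/jackc/pgx/v5"
)

// version is a hypothetical placeholder for the relspecgo build version.
const version = "dev"

// Connect parses the connection string, defaults application_name to
// relspecgo/<version> when the caller has not set one, and then connects.
func Connect(ctx context.Context, connString, component string) (*pgx.Conn, error) {
	cfg, err := pgx.ParseConfig(connString)
	if err != nil {
		return nil, err
	}
	if _, ok := cfg.RuntimeParams["application_name"]; !ok {
		cfg.RuntimeParams["application_name"] = "relspecgo/" + version
	}
	_ = component // callers pass a label such as "reader-pgsql"; its exact use is not shown in this diff
	return pgx.ConnectConfig(ctx, cfg)
}
```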
@@ -259,12 +259,14 @@ func (r *Reader) close() {
}
}
// mapDataType maps PostgreSQL data types while preserving exact type text when available.
func (r *Reader) mapDataType(pgType, udtName, formattedType string, hasNextval bool) string {
normalizedPGType := strings.ToLower(strings.TrimSpace(pgType))
// If the column has a nextval default, it's likely a serial type
// Map to the appropriate serial type instead of the base integer type
if hasNextval {
switch normalizedPGType {
case "integer", "int", "int4":
return "serial"
case "bigint", "int8":
@@ -274,6 +276,17 @@ func (r *Reader) mapDataType(pgType, udtName string, hasNextval bool) string {
}
}
// Prefer the database-provided formatted type; this preserves arrays/custom
// types/modifiers like text[], vector(1536), numeric(10,2), etc.
if strings.TrimSpace(formattedType) != "" {
return formattedType
}
// information_schema reports arrays generically as "ARRAY" with udt_name like "_text".
if strings.EqualFold(pgType, "ARRAY") && strings.HasPrefix(udtName, "_") && len(udtName) > 1 {
return udtName[1:] + "[]"
}
// Map common PostgreSQL types
typeMap := map[string]string{
"integer": "integer",
@@ -320,7 +333,7 @@ func (r *Reader) mapDataType(pgType, udtName string, hasNextval bool) string {
}
// Try mapped type first
if mapped, exists := typeMap[normalizedPGType]; exists {
return mapped
}
@@ -329,8 +342,11 @@ func (r *Reader) mapDataType(pgType, udtName string, hasNextval bool) string {
return pgsql.GetSQLType(pgType)
}
// Return UDT name for custom types (including array fallback when needed)
if udtName != "" {
if strings.HasPrefix(udtName, "_") && len(udtName) > 1 {
return udtName[1:] + "[]"
}
return udtName
}

View File

@@ -173,35 +173,39 @@ func TestMapDataType(t *testing.T) {
reader := &Reader{}
tests := []struct {
pgType string
udtName string
formattedType string
expected string
}{
{"integer", "int4", "", "integer"},
{"bigint", "int8", "", "bigint"},
{"smallint", "int2", "", "smallint"},
{"character varying", "varchar", "", "varchar"},
{"text", "text", "", "text"},
{"boolean", "bool", "", "boolean"},
{"timestamp without time zone", "timestamp", "", "timestamp"},
{"timestamp with time zone", "timestamptz", "", "timestamptz"},
{"json", "json", "", "json"},
{"jsonb", "jsonb", "", "jsonb"},
{"uuid", "uuid", "", "uuid"},
{"numeric", "numeric", "", "numeric"},
{"real", "float4", "", "real"},
{"double precision", "float8", "", "double precision"},
{"date", "date", "", "date"},
{"time without time zone", "time", "", "time"},
{"bytea", "bytea", "", "bytea"},
{"unknown_type", "custom", "", "custom"}, // Should return UDT name
{"ARRAY", "_text", "", "text[]"},
{"USER-DEFINED", "vector", "vector(1536)", "vector(1536)"},
{"character varying", "varchar", "character varying(255)", "character varying(255)"},
}
for _, tt := range tests {
t.Run(tt.pgType, func(t *testing.T) {
result := reader.mapDataType(tt.pgType, tt.udtName, tt.formattedType, false)
if result != tt.expected {
t.Errorf("mapDataType(%s, %s, %s) = %s, expected %s", tt.pgType, tt.udtName, tt.formattedType, result, tt.expected)
}
})
}
@@ -218,9 +222,9 @@ func TestMapDataType(t *testing.T) {
for _, tt := range serialTests {
t.Run(tt.pgType+"_with_nextval", func(t *testing.T) {
result := reader.mapDataType(tt.pgType, "", "", true)
if result != tt.expected {
t.Errorf("mapDataType(%s, '', '', true) = %s, expected %s", tt.pgType, result, tt.expected)
}
})
}
@@ -230,63 +234,63 @@ func TestParseIndexDefinition(t *testing.T) {
reader := &Reader{}
tests := []struct {
name string
indexName string
tableName string
schema string
indexDef string
wantType string
wantUnique bool
wantColumns int
}{
{
name: "simple btree index",
indexName: "idx_users_email",
tableName: "users",
schema: "public",
indexDef: "CREATE INDEX idx_users_email ON public.users USING btree (email)",
wantType: "btree",
wantUnique: false,
wantColumns: 1,
},
{
name: "unique index",
indexName: "idx_users_username",
tableName: "users",
schema: "public",
indexDef: "CREATE UNIQUE INDEX idx_users_username ON public.users USING btree (username)",
wantType: "btree",
wantUnique: true,
wantColumns: 1,
},
{
name: "composite index",
indexName: "idx_users_name",
tableName: "users",
schema: "public",
indexDef: "CREATE INDEX idx_users_name ON public.users USING btree (first_name, last_name)",
wantType: "btree",
wantUnique: false,
wantColumns: 2,
},
{
name: "gin index",
indexName: "idx_posts_tags",
tableName: "posts",
schema: "public",
indexDef: "CREATE INDEX idx_posts_tags ON public.posts USING gin (tags)",
wantType: "gin",
wantUnique: false,
wantColumns: 1,
},
{
name: "partial index with where clause",
indexName: "idx_users_active",
tableName: "users",
schema: "public",
indexDef: "CREATE INDEX idx_users_active ON public.users USING btree (id) WHERE (active = true)",
wantType: "btree",
wantUnique: false,
wantColumns: 1,
},
}

View File

@@ -5,9 +5,11 @@ import (
"fmt" "fmt"
"os" "os"
"regexp" "regexp"
"strconv"
"strings" "strings"
"git.warky.dev/wdevs/relspecgo/pkg/models" "git.warky.dev/wdevs/relspecgo/pkg/models"
"git.warky.dev/wdevs/relspecgo/pkg/pgsql"
"git.warky.dev/wdevs/relspecgo/pkg/readers" "git.warky.dev/wdevs/relspecgo/pkg/readers"
) )
@@ -549,6 +551,41 @@ func (r *Reader) parseColumnOptions(decorator string, column *models.Column, tab
} }
} }
// Preserve explicit type modifiers from options where present.
// Example: @Column({ type: 'varchar', length: 255 }) -> varchar(255)
if column.Type != "" && !strings.Contains(column.Type, "(") {
lengthRegex := regexp.MustCompile(`length:\s*(\d+)`)
precisionRegex := regexp.MustCompile(`precision:\s*(\d+)`)
scaleRegex := regexp.MustCompile(`scale:\s*(\d+)`)
baseType := strings.ToLower(strings.TrimSpace(column.Type))
if pgsql.SupportsLength(baseType) {
if matches := lengthRegex.FindStringSubmatch(content); len(matches) == 2 {
if n, err := strconv.Atoi(matches[1]); err == nil && n > 0 {
column.Length = n
column.Type = fmt.Sprintf("%s(%d)", column.Type, n)
}
}
}
if pgsql.SupportsPrecision(baseType) {
if matches := precisionRegex.FindStringSubmatch(content); len(matches) == 2 {
if p, err := strconv.Atoi(matches[1]); err == nil && p > 0 {
column.Precision = p
if sm := scaleRegex.FindStringSubmatch(content); len(sm) == 2 {
if s, err := strconv.Atoi(sm[1]); err == nil && s >= 0 {
column.Scale = s
column.Type = fmt.Sprintf("%s(%d,%d)", column.Type, p, s)
}
} else {
column.Type = fmt.Sprintf("%s(%d)", column.Type, p)
}
}
}
}
}
if strings.Contains(content, "nullable: true") || strings.Contains(content, "nullable:true") {
column.NotNull = false
}

View File

@@ -0,0 +1,60 @@
package typeorm
import (
"testing"
"git.warky.dev/wdevs/relspecgo/pkg/models"
)
func TestParseColumnOptions_PreservesTypeModifiers(t *testing.T) {
reader := &Reader{}
table := models.InitTable("users", "public")
tests := []struct {
name string
decorator string
wantType string
wantLength int
wantPrecision int
wantScale int
}{
{
name: "varchar with length",
decorator: `@Column({ type: 'varchar', length: 255 })`,
wantType: "varchar(255)",
wantLength: 255,
},
{
name: "numeric with precision and scale",
decorator: `@Column({ type: 'numeric', precision: 10, scale: 2 })`,
wantType: "numeric(10,2)",
wantPrecision: 10,
wantScale: 2,
},
{
name: "custom type with explicit modifier is preserved",
decorator: `@Column({ type: 'vector(1536)' })`,
wantType: "vector(1536)",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
col := models.InitColumn("sample", table.Name, table.Schema)
reader.parseColumnOptions(tt.decorator, col, table)
if col.Type != tt.wantType {
t.Fatalf("column type = %q, want %q", col.Type, tt.wantType)
}
if col.Length != tt.wantLength {
t.Fatalf("column length = %d, want %d", col.Length, tt.wantLength)
}
if col.Precision != tt.wantPrecision {
t.Fatalf("column precision = %d, want %d", col.Precision, tt.wantPrecision)
}
if col.Scale != tt.wantScale {
t.Fatalf("column scale = %d, want %d", col.Scale, tt.wantScale)
}
})
}
}

View File

@@ -46,54 +46,67 @@ func main() {
### CLI Examples
```bash
# Generate Bun models from a DBML schema (default: resolvespec types)
relspec convert --from dbml --from-path schema.dbml \
--to bun --to-path models.go --package models
# Use standard library database/sql nullable types instead of resolvespec
relspec convert --from dbml --from-path schema.dbml \
--to bun --to-path models.go --package models \
--types stdlib
# Explicitly select resolvespec types (same as omitting --types)
relspec convert --from pgsql --from-conn "postgres://localhost/mydb" \
--to bun --to-path models.go --package models \
--types resolvespec
# Multi-file output (one file per table)
relspec convert --from json --from-path schema.json \
--to bun --to-path models/ --package models
```
## Generated Code Examples
### Default — resolvespec types (`--types resolvespec`)
```go
package models
import (
resolvespec_common "github.com/bitechdev/ResolveSpec/pkg/spectypes"
"github.com/uptrace/bun"
)
type User struct {
bun.BaseModel `bun:"table:users,alias:u"`
ID int64 `bun:"id,type:uuid,pk," json:"id"`
Username string `bun:"username,type:text,notnull," json:"username"`
Email resolvespec_common.SqlString `bun:"email,type:text,nullzero," json:"email"`
Tags resolvespec_common.SqlStringArray `bun:"tags,type:text[],default:'{}',notnull," json:"tags"`
CreatedAt resolvespec_common.SqlTimeStamp `bun:"created_at,type:timestamptz,default:now(),notnull," json:"created_at"`
}
```
### Standard library — `--types stdlib`
```go
package models
import (
"database/sql"
"time"
"github.com/uptrace/bun"
)
type User struct {
bun.BaseModel `bun:"table:users,alias:u"`
ID string `bun:"id,type:uuid,pk," json:"id"`
Username string `bun:"username,type:text,notnull," json:"username"`
Email sql.NullString `bun:"email,type:text,nullzero," json:"email"`
Tags []string `bun:"tags,type:text[],default:'{}',notnull," json:"tags"`
CreatedAt time.Time `bun:"created_at,type:timestamptz,default:now(),notnull," json:"created_at"`
}
```
@@ -111,19 +124,68 @@ type Post struct {
## Type Mapping
The nullable type package is selected with `--types` (or `WriterOptions.NullableTypes`).
| SQL Type | NOT NULL (both) | Nullable — resolvespec | Nullable — stdlib |
|---|---|---|---|
| `bigint` | `int64` | `SqlInt64` | `sql.NullInt64` |
| `integer` | `int32` | `SqlInt32` | `sql.NullInt32` |
| `smallint` | `int16` | `SqlInt16` | `sql.NullInt16` |
| `text`, `varchar` | `string` | `SqlString` | `sql.NullString` |
| `boolean` | `bool` | `SqlBool` | `sql.NullBool` |
| `timestamp`, `timestamptz` | `time.Time`* | `SqlTimeStamp` | `sql.NullTime` |
| `numeric`, `decimal` | `float64` | `SqlFloat64` | `sql.NullFloat64` |
| `uuid` | `string` | `SqlUUID` | `sql.NullString` |
| `jsonb` | `string` | `SqlJSONB` | `sql.NullString` |
| `text[]` | `SqlStringArray` | `SqlStringArray` | `[]string` |
| `integer[]` | `SqlInt32Array` | `SqlInt32Array` | `[]int32` |
| `uuid[]` | `SqlUUIDArray` | `SqlUUIDArray` | `[]string` |
| `vector` | `SqlVector` | `SqlVector` | `[]float32` |
\* In resolvespec mode, NOT NULL timestamps use `SqlTimeStamp` (not `time.Time`) unless the base type is a simple integer or boolean. In stdlib mode, NOT NULL timestamps use `time.Time`.
## Writer Options
### NullableTypes
Controls which Go package is used for nullable column types. Set via the `--types` CLI flag or `WriterOptions.NullableTypes`:
```go
// Use resolvespec types (default — omit NullableTypes or set to "resolvespec")
options := &writers.WriterOptions{
OutputPath: "models.go",
PackageName: "models",
NullableTypes: writers.NullableTypeResolveSpec,
}
// Use standard library database/sql types
options := &writers.WriterOptions{
OutputPath: "models.go",
PackageName: "models",
NullableTypes: writers.NullableTypeStdlib,
}
```
### Metadata Options
```go
options := &writers.WriterOptions{
OutputPath: "models.go",
PackageName: "models",
Metadata: map[string]any{
"multi_file": true, // Enable multi-file mode
"populate_refs": true, // Populate RefDatabase/RefSchema
"generate_get_id_str": true, // Generate GetIDStr() methods
},
}
```
## Notes
- Model names are derived from table names (singularized, PascalCase)
- Table aliases are auto-generated from table names
- Nullable columns use `resolvespec_common.SqlString`, `resolvespec_common.SqlTimeStamp`, etc. by default; pass `--types stdlib` to use `sql.NullString`, `sql.NullTime`, etc. instead
- Array columns use `resolvespec_common.SqlStringArray`, `resolvespec_common.SqlInt32Array`, etc. by default; `--types stdlib` produces plain Go slices (`[]string`, `[]int32`, …)
- Multi-file mode: one file per table named `sql_{schema}_{table}.go`
- Generated code is auto-formatted
- JSON tags are automatically added

View File

@@ -5,48 +5,55 @@ import (
"strings" "strings"
"git.warky.dev/wdevs/relspecgo/pkg/models" "git.warky.dev/wdevs/relspecgo/pkg/models"
"git.warky.dev/wdevs/relspecgo/pkg/pgsql"
"git.warky.dev/wdevs/relspecgo/pkg/writers" "git.warky.dev/wdevs/relspecgo/pkg/writers"
) )
// TypeMapper handles type conversions between SQL and Go types for Bun // TypeMapper handles type conversions between SQL and Go types for Bun
type TypeMapper struct { type TypeMapper struct {
// Package alias for sql_types import
sqlTypesAlias string sqlTypesAlias string
typeStyle string // writers.NullableTypeResolveSpec | writers.NullableTypeStdlib
} }
// NewTypeMapper creates a new TypeMapper with default settings // NewTypeMapper creates a new TypeMapper.
func NewTypeMapper() *TypeMapper { // typeStyle should be writers.NullableTypeResolveSpec or writers.NullableTypeStdlib;
// an empty string defaults to resolvespec.
func NewTypeMapper(typeStyle string) *TypeMapper {
if typeStyle == "" {
typeStyle = writers.NullableTypeResolveSpec
}
return &TypeMapper{ return &TypeMapper{
sqlTypesAlias: "resolvespec_common", sqlTypesAlias: "resolvespec_common",
typeStyle: typeStyle,
} }
} }
// SQLTypeToGoType converts a SQL type to its Go equivalent // SQLTypeToGoType converts a SQL type to its Go equivalent.
// Uses ResolveSpec common package types (all are nullable by default in Bun)
func (tm *TypeMapper) SQLTypeToGoType(sqlType string, notNull bool) string { func (tm *TypeMapper) SQLTypeToGoType(sqlType string, notNull bool) string {
// Normalize SQL type (lowercase, remove length/precision) // Array types are handled separately for both styles.
if pgsql.IsArrayType(sqlType) {
return tm.arrayGoType(tm.extractBaseType(sqlType))
}
baseType := tm.extractBaseType(sqlType) baseType := tm.extractBaseType(sqlType)
// For Bun, we typically use resolvespec_common types for most fields if tm.typeStyle == writers.NullableTypeStdlib {
// unless they're explicitly NOT NULL and we want to avoid null handling if notNull {
return tm.rawGoType(baseType)
}
return tm.stdlibNullableGoType(baseType)
}
// resolvespec (default): use base Go types only for simple NOT NULL fields.
if notNull && tm.isSimpleType(baseType) { if notNull && tm.isSimpleType(baseType) {
return tm.baseGoType(baseType) return tm.baseGoType(baseType)
} }
// Use resolvespec_common types for nullable fields
return tm.bunGoType(baseType) return tm.bunGoType(baseType)
} }
// extractBaseType extracts the base type from a SQL type string // extractBaseType extracts the base type from a SQL type string
func (tm *TypeMapper) extractBaseType(sqlType string) string { func (tm *TypeMapper) extractBaseType(sqlType string) string {
sqlType = strings.ToLower(strings.TrimSpace(sqlType)) return pgsql.CanonicalizeBaseType(pgsql.ExtractBaseTypeLower(sqlType))
// Remove everything after '('
if idx := strings.Index(sqlType, "("); idx > 0 {
sqlType = sqlType[:idx]
}
return sqlType
} }
// isSimpleType checks if a type should use base Go type when NOT NULL // isSimpleType checks if a type should use base Go type when NOT NULL
@@ -160,6 +167,9 @@ func (tm *TypeMapper) bunGoType(sqlType string) string {
// Other // Other
"money": tm.sqlTypesAlias + ".SqlFloat64", "money": tm.sqlTypesAlias + ".SqlFloat64",
// pgvector
"vector": tm.sqlTypesAlias + ".SqlVector",
} }
if goType, ok := typeMap[sqlType]; ok { if goType, ok := typeMap[sqlType]; ok {
@@ -170,6 +180,123 @@ func (tm *TypeMapper) bunGoType(sqlType string) string {
return tm.sqlTypesAlias + ".SqlString" return tm.sqlTypesAlias + ".SqlString"
} }
// arrayGoType returns the Go type for a PostgreSQL array column.
// The baseElemType is the canonical base type (e.g. "text", "integer").
func (tm *TypeMapper) arrayGoType(baseElemType string) string {
if tm.typeStyle == writers.NullableTypeStdlib {
return tm.stdlibArrayGoType(baseElemType)
}
typeMap := map[string]string{
"text": tm.sqlTypesAlias + ".SqlStringArray", "varchar": tm.sqlTypesAlias + ".SqlStringArray",
"char": tm.sqlTypesAlias + ".SqlStringArray", "character": tm.sqlTypesAlias + ".SqlStringArray",
"citext": tm.sqlTypesAlias + ".SqlStringArray", "bpchar": tm.sqlTypesAlias + ".SqlStringArray",
"inet": tm.sqlTypesAlias + ".SqlStringArray", "cidr": tm.sqlTypesAlias + ".SqlStringArray",
"macaddr": tm.sqlTypesAlias + ".SqlStringArray",
"json": tm.sqlTypesAlias + ".SqlStringArray", "jsonb": tm.sqlTypesAlias + ".SqlStringArray",
"integer": tm.sqlTypesAlias + ".SqlInt32Array", "int": tm.sqlTypesAlias + ".SqlInt32Array",
"int4": tm.sqlTypesAlias + ".SqlInt32Array", "serial": tm.sqlTypesAlias + ".SqlInt32Array",
"smallint": tm.sqlTypesAlias + ".SqlInt16Array", "int2": tm.sqlTypesAlias + ".SqlInt16Array",
"smallserial": tm.sqlTypesAlias + ".SqlInt16Array",
"bigint": tm.sqlTypesAlias + ".SqlInt64Array", "int8": tm.sqlTypesAlias + ".SqlInt64Array",
"bigserial": tm.sqlTypesAlias + ".SqlInt64Array",
"real": tm.sqlTypesAlias + ".SqlFloat32Array", "float4": tm.sqlTypesAlias + ".SqlFloat32Array",
"double precision": tm.sqlTypesAlias + ".SqlFloat64Array", "float8": tm.sqlTypesAlias + ".SqlFloat64Array",
"numeric": tm.sqlTypesAlias + ".SqlFloat64Array", "decimal": tm.sqlTypesAlias + ".SqlFloat64Array",
"money": tm.sqlTypesAlias + ".SqlFloat64Array",
"boolean": tm.sqlTypesAlias + ".SqlBoolArray", "bool": tm.sqlTypesAlias + ".SqlBoolArray",
"uuid": tm.sqlTypesAlias + ".SqlUUIDArray",
}
if goType, ok := typeMap[baseElemType]; ok {
return goType
}
return tm.sqlTypesAlias + ".SqlStringArray"
}
// rawGoType returns the plain Go type for a NOT NULL column in stdlib mode.
func (tm *TypeMapper) rawGoType(sqlType string) string {
typeMap := map[string]string{
"integer": "int32", "int": "int32", "int4": "int32", "serial": "int32",
"smallint": "int16", "int2": "int16", "smallserial": "int16",
"bigint": "int64", "int8": "int64", "bigserial": "int64",
"boolean": "bool", "bool": "bool",
"real": "float32", "float4": "float32",
"double precision": "float64", "float8": "float64",
"numeric": "float64", "decimal": "float64", "money": "float64",
"text": "string", "varchar": "string", "char": "string",
"character": "string", "citext": "string", "bpchar": "string",
"inet": "string", "cidr": "string", "macaddr": "string",
"uuid": "string", "json": "string", "jsonb": "string",
"timestamp": "time.Time",
"timestamp without time zone": "time.Time",
"timestamp with time zone": "time.Time",
"timestamptz": "time.Time",
"date": "time.Time",
"time": "time.Time",
"time without time zone": "time.Time",
"time with time zone": "time.Time",
"timetz": "time.Time",
"bytea": "[]byte",
"vector": "[]float32",
}
if goType, ok := typeMap[sqlType]; ok {
return goType
}
return "string"
}
// stdlibNullableGoType returns the database/sql nullable type for a column.
func (tm *TypeMapper) stdlibNullableGoType(sqlType string) string {
typeMap := map[string]string{
"integer": "sql.NullInt32", "int": "sql.NullInt32", "int4": "sql.NullInt32", "serial": "sql.NullInt32",
"smallint": "sql.NullInt16", "int2": "sql.NullInt16", "smallserial": "sql.NullInt16",
"bigint": "sql.NullInt64", "int8": "sql.NullInt64", "bigserial": "sql.NullInt64",
"boolean": "sql.NullBool", "bool": "sql.NullBool",
"real": "sql.NullFloat64", "float4": "sql.NullFloat64",
"double precision": "sql.NullFloat64", "float8": "sql.NullFloat64",
"numeric": "sql.NullFloat64", "decimal": "sql.NullFloat64", "money": "sql.NullFloat64",
"text": "sql.NullString", "varchar": "sql.NullString", "char": "sql.NullString",
"character": "sql.NullString", "citext": "sql.NullString", "bpchar": "sql.NullString",
"inet": "sql.NullString", "cidr": "sql.NullString", "macaddr": "sql.NullString",
"uuid": "sql.NullString", "json": "sql.NullString", "jsonb": "sql.NullString",
"timestamp": "sql.NullTime",
"timestamp without time zone": "sql.NullTime",
"timestamp with time zone": "sql.NullTime",
"timestamptz": "sql.NullTime",
"date": "sql.NullTime",
"time": "sql.NullTime",
"time without time zone": "sql.NullTime",
"time with time zone": "sql.NullTime",
"timetz": "sql.NullTime",
"bytea": "[]byte",
"vector": "[]float32",
}
if goType, ok := typeMap[sqlType]; ok {
return goType
}
return "sql.NullString"
}
// stdlibArrayGoType returns a plain Go slice type for array columns in stdlib mode.
func (tm *TypeMapper) stdlibArrayGoType(baseElemType string) string {
typeMap := map[string]string{
"text": "[]string", "varchar": "[]string", "char": "[]string",
"character": "[]string", "citext": "[]string", "bpchar": "[]string",
"inet": "[]string", "cidr": "[]string", "macaddr": "[]string",
"uuid": "[]string", "json": "[]string", "jsonb": "[]string",
"integer": "[]int32", "int": "[]int32", "int4": "[]int32", "serial": "[]int32",
"smallint": "[]int16", "int2": "[]int16", "smallserial": "[]int16",
"bigint": "[]int64", "int8": "[]int64", "bigserial": "[]int64",
"real": "[]float32", "float4": "[]float32",
"double precision": "[]float64", "float8": "[]float64",
"numeric": "[]float64", "decimal": "[]float64", "money": "[]float64",
"boolean": "[]bool", "bool": "[]bool",
}
if goType, ok := typeMap[baseElemType]; ok {
return goType
}
return "[]string"
}
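As a usage sketch of the two styles (assumed to live in the same package as the TypeMapper, so
the `writers` import and `testing` are already available; the expected values follow the mapping
tables in the README):
```go
func TestTypeMapper_Styles_Sketch(t *testing.T) {
	rs := NewTypeMapper("")                          // resolvespec (default)
	std := NewTypeMapper(writers.NullableTypeStdlib) // stdlib

	if got := rs.SQLTypeToGoType("text", false); got != "resolvespec_common.SqlString" {
		t.Errorf("resolvespec nullable text = %q", got)
	}
	if got := std.SQLTypeToGoType("text", false); got != "sql.NullString" {
		t.Errorf("stdlib nullable text = %q", got)
	}
	if got := std.SQLTypeToGoType("timestamptz", true); got != "time.Time" {
		t.Errorf("stdlib NOT NULL timestamptz = %q", got)
	}
	if got := std.SQLTypeToGoType("text[]", false); got != "[]string" {
		t.Errorf("stdlib text[] array = %q", got)
	}
}
```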
// BuildBunTag generates a complete Bun tag string for a column
// Bun format: bun:"column_name,type:type_name,pk,default:value"
func (tm *TypeMapper) BuildBunTag(column *models.Column, table *models.Table) string {
@@ -184,9 +311,10 @@ func (tm *TypeMapper) BuildBunTag(column *models.Column, table *models.Table) st
if column.Type != "" {
// Sanitize type to remove backticks
typeStr := writers.SanitizeStructTagValue(column.Type)
hasExplicitTypeModifier := pgsql.HasExplicitTypeModifier(typeStr)
if !hasExplicitTypeModifier && column.Length > 0 {
typeStr = fmt.Sprintf("%s(%d)", typeStr, column.Length)
} else if !hasExplicitTypeModifier && column.Precision > 0 {
if column.Scale > 0 {
typeStr = fmt.Sprintf("%s(%d,%d)", typeStr, column.Precision, column.Scale)
} else {
@@ -291,11 +419,20 @@ func (tm *TypeMapper) NeedsFmtImport(generateGetIDStr bool) bool {
return generateGetIDStr
}
// GetSQLTypesImport returns the import path for the ResolveSpec spectypes package.
func (tm *TypeMapper) GetSQLTypesImport() string {
return "github.com/bitechdev/ResolveSpec/pkg/spectypes"
}
// GetNullableTypeImportLine returns the full Go import line for the nullable type
// package (ready to pass to AddImport). Returns empty string when no import is needed.
func (tm *TypeMapper) GetNullableTypeImportLine() string {
if tm.typeStyle == writers.NullableTypeStdlib {
return "\"database/sql\""
}
return fmt.Sprintf("%s \"%s\"", tm.sqlTypesAlias, tm.GetSQLTypesImport())
}
// GetBunImport returns the import path for Bun
func (tm *TypeMapper) GetBunImport() string {
return "github.com/uptrace/bun"

View File

@@ -24,7 +24,7 @@ type Writer struct {
func NewWriter(options *writers.WriterOptions) *Writer {
w := &Writer{
options: options,
typeMapper: NewTypeMapper(options.NullableTypes),
config: LoadMethodConfigFromMetadata(options.Metadata),
}
@@ -80,8 +80,8 @@ func (w *Writer) writeSingleFile(db *models.Database) error {
// Add bun import (always needed)
templateData.AddImport(fmt.Sprintf("\"%s\"", w.typeMapper.GetBunImport()))
// Add nullable types import (resolvespec or stdlib depending on options)
templateData.AddImport(w.typeMapper.GetNullableTypeImportLine())
// Collect all models
for _, schema := range db.Schemas {
@@ -177,8 +177,8 @@ func (w *Writer) writeMultiFile(db *models.Database) error {
// Add bun import
templateData.AddImport(fmt.Sprintf("\"%s\"", w.typeMapper.GetBunImport()))
// Add nullable types import (resolvespec or stdlib depending on options)
templateData.AddImport(w.typeMapper.GetNullableTypeImportLine())
// Create model data
modelData := NewModelData(table, schema.Name, w.typeMapper, w.options.FlattenSchema)

View File

@@ -556,7 +556,7 @@ func TestWriter_FieldNameCollision(t *testing.T) {
}
func TestTypeMapper_SQLTypeToGoType_Bun(t *testing.T) {
mapper := NewTypeMapper("")
tests := []struct {
sqlType string
@@ -587,7 +587,7 @@ func TestTypeMapper_SQLTypeToGoType_Bun(t *testing.T) {
}
func TestTypeMapper_BuildBunTag(t *testing.T) {
mapper := NewTypeMapper("")
tests := []struct {
name string
@@ -698,3 +698,23 @@ func TestTypeMapper_BuildBunTag(t *testing.T) {
})
}
}
func TestTypeMapper_BuildBunTag_PreservesExplicitTypeModifiers(t *testing.T) {
mapper := NewTypeMapper("")
col := &models.Column{
Name: "embedding",
Type: "vector(1536)",
Length: 1536,
Precision: 0,
Scale: 0,
}
tag := mapper.BuildBunTag(col, nil)
if !strings.Contains(tag, "type:vector(1536),") {
t.Fatalf("expected explicit modifier to be preserved, got %q", tag)
}
if strings.Contains(tag, ")(") {
t.Fatalf("type modifier appears duplicated in %q", tag)
}
}

View File

@@ -4,6 +4,7 @@ import (
"encoding/xml" "encoding/xml"
"fmt" "fmt"
"os" "os"
"sort"
"strings" "strings"
"github.com/google/uuid" "github.com/google/uuid"
@@ -155,8 +156,15 @@ func (w *Writer) mapTableFields(table *models.Table) models.DCTXTable {
}, },
} }
columnNames := make([]string, 0, len(table.Columns))
for name := range table.Columns {
columnNames = append(columnNames, name)
}
sort.Strings(columnNames)
i := 0 i := 0
for _, column := range table.Columns { for _, colName := range columnNames {
column := table.Columns[colName]
dctxTable.Fields[i] = w.mapField(column) dctxTable.Fields[i] = w.mapField(column)
i++ i++
} }
@@ -165,12 +173,27 @@ func (w *Writer) mapTableFields(table *models.Table) models.DCTXTable {
} }
func (w *Writer) mapTableKeys(table *models.Table) []models.DCTXKey { func (w *Writer) mapTableKeys(table *models.Table) []models.DCTXKey {
keys := make([]models.DCTXKey, len(table.Indexes)) indexes := make([]*models.Index, 0, len(table.Indexes))
i := 0
for _, index := range table.Indexes { for _, index := range table.Indexes {
keys[i] = w.mapKey(index, table) indexes = append(indexes, index)
i++
} }
// Stable ordering for deterministic output and test reproducibility:
// primary keys first, then lexicographic by index name.
sort.Slice(indexes, func(i, j int) bool {
iPrimary := strings.HasSuffix(indexes[i].Name, "_pkey")
jPrimary := strings.HasSuffix(indexes[j].Name, "_pkey")
if iPrimary != jPrimary {
return iPrimary
}
return indexes[i].Name < indexes[j].Name
})
keys := make([]models.DCTXKey, len(indexes))
for i, index := range indexes {
keys[i] = w.mapKey(index, table)
}
return keys
}

View File

@@ -5,6 +5,7 @@ import (
"strings" "strings"
"git.warky.dev/wdevs/relspecgo/pkg/models" "git.warky.dev/wdevs/relspecgo/pkg/models"
"git.warky.dev/wdevs/relspecgo/pkg/pgsql"
) )
// TypeMapper handles SQL to Drizzle type conversions // TypeMapper handles SQL to Drizzle type conversions
@@ -18,7 +19,7 @@ func NewTypeMapper() *TypeMapper {
// SQLTypeToDrizzle converts SQL types to Drizzle column type functions
// Returns the Drizzle column constructor (e.g., "integer", "varchar", "text")
func (tm *TypeMapper) SQLTypeToDrizzle(sqlType string) string {
sqlTypeLower := pgsql.CanonicalizeBaseType(pgsql.ExtractBaseTypeLower(sqlType))
// PostgreSQL type mapping to Drizzle
typeMap := map[string]string{
@@ -87,13 +88,6 @@ func (tm *TypeMapper) SQLTypeToDrizzle(sqlType string) string {
return drizzleType
}
// Check for partial matches (e.g., "varchar(255)" -> "varchar")
for sqlPattern, drizzleType := range typeMap {
if strings.HasPrefix(sqlTypeLower, sqlPattern) {
return drizzleType
}
}
// Default to text for unknown types
return "text"
}

View File

@@ -48,22 +48,23 @@ func main() {
### CLI Examples
```bash
# Generate GORM models from a DBML schema (default: resolvespec types)
relspec convert --from dbml --from-path schema.dbml \
--to gorm --to-path models.go --package models
# Use standard library database/sql nullable types instead of resolvespec
relspec convert --from dbml --from-path schema.dbml \
--to gorm --to-path models.go --package models \
--types stdlib
# Explicitly select resolvespec types (same as omitting --types)
relspec convert --from pgsql --from-conn "postgres://localhost/mydb" \
--to gorm --to-path models.go --package models \
--types resolvespec
# Multi-file output (one file per table)
relspec convert --from json --from-path schema.json \
--to gorm --to-path models/ --package models
```
## Output Modes
@@ -86,58 +87,86 @@ relspec --input pgsql --conn "..." --output gorm --out-file models/
Files are named: `sql_{schema}_{table}.go`
## Generated Code Examples
### Default — resolvespec types (`--types resolvespec`)
```go
package models
import (
sql_types "github.com/bitechdev/ResolveSpec/pkg/spectypes"
)
type ModelUser struct {
ID string `gorm:"column:id;type:uuid;primaryKey" json:"id"`
Username string `gorm:"column:username;type:text;not null" json:"username"`
Email sql_types.SqlString `gorm:"column:email;type:text" json:"email,omitempty"`
Tags sql_types.SqlStringArray `gorm:"column:tags;type:text[];not null;default:'{}'" json:"tags"`
CreatedAt sql_types.SqlTimeStamp `gorm:"column:created_at;type:timestamptz;not null;default:now()" json:"created_at"`
}
func (ModelUser) TableName() string {
return "public.users"
}
```
### Standard library — `--types stdlib`
```go
package models
import (
"database/sql"
"time"
)
type ModelUser struct {
ID string `gorm:"column:id;type:uuid;primaryKey" json:"id"`
Username string `gorm:"column:username;type:text;not null" json:"username"`
Email sql.NullString `gorm:"column:email;type:text" json:"email,omitempty"`
Tags []string `gorm:"column:tags;type:text[];not null;default:'{}'" json:"tags"`
CreatedAt time.Time `gorm:"column:created_at;type:timestamptz;not null;default:now()" json:"created_at"`
}
func (ModelUser) TableName() string {
return "public.users"
}
```
## Writer Options
### NullableTypes
Controls which Go package is used for nullable column types. Set via the `--types` CLI flag or `WriterOptions.NullableTypes`:
```go
// Use resolvespec types (default — omit NullableTypes or set to "resolvespec")
options := &writers.WriterOptions{
OutputPath: "models.go",
PackageName: "models",
NullableTypes: writers.NullableTypeResolveSpec,
}
// Use standard library database/sql types
options := &writers.WriterOptions{
OutputPath: "models.go",
PackageName: "models",
NullableTypes: writers.NullableTypeStdlib,
}
```
### Metadata Options
Configure additional writer behavior using metadata in `WriterOptions`:
```go
options := &writers.WriterOptions{
OutputPath: "models.go",
PackageName: "models",
Metadata: map[string]any{
"multi_file": true, // Enable multi-file mode
"populate_refs": true, // Populate RefDatabase/RefSchema
"generate_get_id_str": true, // Generate GetIDStr() methods
},
}
@@ -145,18 +174,23 @@ options := &writers.WriterOptions{
## Type Mapping
The nullable type package is selected with `--types` (or `WriterOptions.NullableTypes`).
| SQL Type | NOT NULL — both | Nullable — resolvespec | Nullable — stdlib |
|---|---|---|---|
| `bigint` | `int64` | `SqlInt64` | `sql.NullInt64` |
| `integer` | `int32` | `SqlInt32` | `sql.NullInt32` |
| `smallint` | `int16` | `SqlInt16` | `sql.NullInt16` |
| `text`, `varchar` | `string` | `SqlString` | `sql.NullString` |
| `boolean` | `bool` | `SqlBool` | `sql.NullBool` |
| `timestamp`, `timestamptz` | `time.Time` | `SqlTimeStamp` | `sql.NullTime` |
| `numeric`, `decimal` | `float64` | `SqlFloat64` | `sql.NullFloat64` |
| `uuid` | `string` | `SqlUUID` | `sql.NullString` |
| `jsonb` | `string` | `SqlString` | `sql.NullString` |
| `text[]` | `SqlStringArray` | `SqlStringArray` | `[]string` |
| `integer[]` | `SqlInt32Array` | `SqlInt32Array` | `[]int32` |
| `uuid[]` | `SqlUUIDArray` | `SqlUUIDArray` | `[]string` |
| `vector` | `SqlVector` | `SqlVector` | `[]float32` |
## Relationship Generation
@@ -170,7 +204,8 @@ The writer automatically generates relationship fields:
## Notes
- Model names are prefixed with "Model" (e.g., `ModelUser`)
- Nullable columns use `sql_types.SqlString`, `sql_types.SqlInt64`, etc. by default; pass `--types stdlib` to use `sql.NullString`, `sql.NullInt64`, etc. instead
- Array columns use `sql_types.SqlStringArray`, `sql_types.SqlInt32Array`, etc. by default; `--types stdlib` produces plain Go slices (`[]string`, `[]int32`, …)
- Generated code is auto-formatted with `go fmt`
- JSON tags are automatically added
- Supports schema-qualified table names in `TableName()` method

View File

@@ -5,48 +5,56 @@ import (
"strings" "strings"
"git.warky.dev/wdevs/relspecgo/pkg/models" "git.warky.dev/wdevs/relspecgo/pkg/models"
"git.warky.dev/wdevs/relspecgo/pkg/pgsql"
"git.warky.dev/wdevs/relspecgo/pkg/writers" "git.warky.dev/wdevs/relspecgo/pkg/writers"
) )
// TypeMapper handles type conversions between SQL and Go types // TypeMapper handles type conversions between SQL and Go types
type TypeMapper struct { type TypeMapper struct {
// Package alias for sql_types import
sqlTypesAlias string sqlTypesAlias string
typeStyle string // writers.NullableTypeResolveSpec | writers.NullableTypeStdlib
} }
// NewTypeMapper creates a new TypeMapper with default settings // NewTypeMapper creates a new TypeMapper.
func NewTypeMapper() *TypeMapper { // typeStyle should be writers.NullableTypeResolveSpec or writers.NullableTypeStdlib;
// an empty string defaults to resolvespec.
func NewTypeMapper(typeStyle string) *TypeMapper {
if typeStyle == "" {
typeStyle = writers.NullableTypeResolveSpec
}
return &TypeMapper{ return &TypeMapper{
sqlTypesAlias: "sql_types", sqlTypesAlias: "sql_types",
typeStyle: typeStyle,
} }
} }
// SQLTypeToGoType converts a SQL type to its Go equivalent // SQLTypeToGoType converts a SQL type to its Go equivalent.
// Handles nullable types using ResolveSpec sql_types package
func (tm *TypeMapper) SQLTypeToGoType(sqlType string, notNull bool) string { func (tm *TypeMapper) SQLTypeToGoType(sqlType string, notNull bool) string {
// Normalize SQL type (lowercase, remove length/precision) // Array types are handled separately for both styles.
if pgsql.IsArrayType(sqlType) {
return tm.arrayGoType(tm.extractBaseType(sqlType))
}
baseType := tm.extractBaseType(sqlType) baseType := tm.extractBaseType(sqlType)
// If not null, use base Go types if tm.typeStyle == writers.NullableTypeStdlib {
if notNull {
return tm.rawGoType(baseType)
}
return tm.stdlibNullableGoType(baseType)
}
// resolvespec (default)
if notNull { if notNull {
return tm.baseGoType(baseType) return tm.baseGoType(baseType)
} }
// For nullable fields, use sql_types
return tm.nullableGoType(baseType) return tm.nullableGoType(baseType)
} }
// extractBaseType extracts the base type from a SQL type string // extractBaseType extracts the base type from a SQL type string
// Examples: varchar(100) → varchar, numeric(10,2) → numeric // Examples: varchar(100) → varchar, numeric(10,2) → numeric
func (tm *TypeMapper) extractBaseType(sqlType string) string { func (tm *TypeMapper) extractBaseType(sqlType string) string {
sqlType = strings.ToLower(strings.TrimSpace(sqlType)) return pgsql.CanonicalizeBaseType(pgsql.ExtractBaseTypeLower(sqlType))
// Remove everything after '('
if idx := strings.Index(sqlType, "("); idx > 0 {
sqlType = sqlType[:idx]
}
return sqlType
} }
// baseGoType returns the base Go type for a SQL type (not null) // baseGoType returns the base Go type for a SQL type (not null)
@@ -112,6 +120,9 @@ func (tm *TypeMapper) baseGoType(sqlType string) string {
// Other // Other
"money": "float64", "money": "float64",
// pgvector — always uses SqlVector even when NOT NULL
"vector": tm.sqlTypesAlias + ".SqlVector",
} }
if goType, ok := typeMap[sqlType]; ok { if goType, ok := typeMap[sqlType]; ok {
@@ -185,6 +196,9 @@ func (tm *TypeMapper) nullableGoType(sqlType string) string {
// Other // Other
"money": tm.sqlTypesAlias + ".SqlFloat64", "money": tm.sqlTypesAlias + ".SqlFloat64",
// pgvector
"vector": tm.sqlTypesAlias + ".SqlVector",
} }
if goType, ok := typeMap[sqlType]; ok { if goType, ok := typeMap[sqlType]; ok {
@@ -195,6 +209,123 @@ func (tm *TypeMapper) nullableGoType(sqlType string) string {
return tm.sqlTypesAlias + ".SqlString" return tm.sqlTypesAlias + ".SqlString"
} }
// arrayGoType returns the Go type for a PostgreSQL array column.
// The baseElemType is the canonical base type (e.g. "text", "integer").
func (tm *TypeMapper) arrayGoType(baseElemType string) string {
if tm.typeStyle == writers.NullableTypeStdlib {
return tm.stdlibArrayGoType(baseElemType)
}
typeMap := map[string]string{
"text": tm.sqlTypesAlias + ".SqlStringArray", "varchar": tm.sqlTypesAlias + ".SqlStringArray",
"char": tm.sqlTypesAlias + ".SqlStringArray", "character": tm.sqlTypesAlias + ".SqlStringArray",
"citext": tm.sqlTypesAlias + ".SqlStringArray", "bpchar": tm.sqlTypesAlias + ".SqlStringArray",
"inet": tm.sqlTypesAlias + ".SqlStringArray", "cidr": tm.sqlTypesAlias + ".SqlStringArray",
"macaddr": tm.sqlTypesAlias + ".SqlStringArray",
"json": tm.sqlTypesAlias + ".SqlStringArray", "jsonb": tm.sqlTypesAlias + ".SqlStringArray",
"integer": tm.sqlTypesAlias + ".SqlInt32Array", "int": tm.sqlTypesAlias + ".SqlInt32Array",
"int4": tm.sqlTypesAlias + ".SqlInt32Array", "serial": tm.sqlTypesAlias + ".SqlInt32Array",
"smallint": tm.sqlTypesAlias + ".SqlInt16Array", "int2": tm.sqlTypesAlias + ".SqlInt16Array",
"smallserial": tm.sqlTypesAlias + ".SqlInt16Array",
"bigint": tm.sqlTypesAlias + ".SqlInt64Array", "int8": tm.sqlTypesAlias + ".SqlInt64Array",
"bigserial": tm.sqlTypesAlias + ".SqlInt64Array",
"real": tm.sqlTypesAlias + ".SqlFloat32Array", "float4": tm.sqlTypesAlias + ".SqlFloat32Array",
"double precision": tm.sqlTypesAlias + ".SqlFloat64Array", "float8": tm.sqlTypesAlias + ".SqlFloat64Array",
"numeric": tm.sqlTypesAlias + ".SqlFloat64Array", "decimal": tm.sqlTypesAlias + ".SqlFloat64Array",
"money": tm.sqlTypesAlias + ".SqlFloat64Array",
"boolean": tm.sqlTypesAlias + ".SqlBoolArray", "bool": tm.sqlTypesAlias + ".SqlBoolArray",
"uuid": tm.sqlTypesAlias + ".SqlUUIDArray",
}
if goType, ok := typeMap[baseElemType]; ok {
return goType
}
return tm.sqlTypesAlias + ".SqlStringArray"
}
// rawGoType returns the plain Go type for a NOT NULL column in stdlib mode.
func (tm *TypeMapper) rawGoType(sqlType string) string {
typeMap := map[string]string{
"integer": "int32", "int": "int32", "int4": "int32", "serial": "int32",
"smallint": "int16", "int2": "int16", "smallserial": "int16",
"bigint": "int64", "int8": "int64", "bigserial": "int64",
"boolean": "bool", "bool": "bool",
"real": "float32", "float4": "float32",
"double precision": "float64", "float8": "float64",
"numeric": "float64", "decimal": "float64", "money": "float64",
"text": "string", "varchar": "string", "char": "string",
"character": "string", "citext": "string", "bpchar": "string",
"inet": "string", "cidr": "string", "macaddr": "string",
"uuid": "string", "json": "string", "jsonb": "string",
"timestamp": "time.Time",
"timestamp without time zone": "time.Time",
"timestamp with time zone": "time.Time",
"timestamptz": "time.Time",
"date": "time.Time",
"time": "time.Time",
"time without time zone": "time.Time",
"time with time zone": "time.Time",
"timetz": "time.Time",
"bytea": "[]byte",
"vector": "[]float32",
}
if goType, ok := typeMap[sqlType]; ok {
return goType
}
return "string"
}
// stdlibNullableGoType returns the database/sql nullable type for a column.
func (tm *TypeMapper) stdlibNullableGoType(sqlType string) string {
typeMap := map[string]string{
"integer": "sql.NullInt32", "int": "sql.NullInt32", "int4": "sql.NullInt32", "serial": "sql.NullInt32",
"smallint": "sql.NullInt16", "int2": "sql.NullInt16", "smallserial": "sql.NullInt16",
"bigint": "sql.NullInt64", "int8": "sql.NullInt64", "bigserial": "sql.NullInt64",
"boolean": "sql.NullBool", "bool": "sql.NullBool",
"real": "sql.NullFloat64", "float4": "sql.NullFloat64",
"double precision": "sql.NullFloat64", "float8": "sql.NullFloat64",
"numeric": "sql.NullFloat64", "decimal": "sql.NullFloat64", "money": "sql.NullFloat64",
"text": "sql.NullString", "varchar": "sql.NullString", "char": "sql.NullString",
"character": "sql.NullString", "citext": "sql.NullString", "bpchar": "sql.NullString",
"inet": "sql.NullString", "cidr": "sql.NullString", "macaddr": "sql.NullString",
"uuid": "sql.NullString", "json": "sql.NullString", "jsonb": "sql.NullString",
"timestamp": "sql.NullTime",
"timestamp without time zone": "sql.NullTime",
"timestamp with time zone": "sql.NullTime",
"timestamptz": "sql.NullTime",
"date": "sql.NullTime",
"time": "sql.NullTime",
"time without time zone": "sql.NullTime",
"time with time zone": "sql.NullTime",
"timetz": "sql.NullTime",
"bytea": "[]byte",
"vector": "[]float32",
}
if goType, ok := typeMap[sqlType]; ok {
return goType
}
return "sql.NullString"
}
// stdlibArrayGoType returns a plain Go slice type for array columns in stdlib mode.
func (tm *TypeMapper) stdlibArrayGoType(baseElemType string) string {
typeMap := map[string]string{
"text": "[]string", "varchar": "[]string", "char": "[]string",
"character": "[]string", "citext": "[]string", "bpchar": "[]string",
"inet": "[]string", "cidr": "[]string", "macaddr": "[]string",
"uuid": "[]string", "json": "[]string", "jsonb": "[]string",
"integer": "[]int32", "int": "[]int32", "int4": "[]int32", "serial": "[]int32",
"smallint": "[]int16", "int2": "[]int16", "smallserial": "[]int16",
"bigint": "[]int64", "int8": "[]int64", "bigserial": "[]int64",
"real": "[]float32", "float4": "[]float32",
"double precision": "[]float64", "float8": "[]float64",
"numeric": "[]float64", "decimal": "[]float64", "money": "[]float64",
"boolean": "[]bool", "bool": "[]bool",
}
if goType, ok := typeMap[baseElemType]; ok {
return goType
}
return "[]string"
}
// BuildGormTag generates a complete GORM tag string for a column
func (tm *TypeMapper) BuildGormTag(column *models.Column, table *models.Table) string {
var parts []string
@@ -209,9 +340,10 @@ func (tm *TypeMapper) BuildGormTag(column *models.Column, table *models.Table) s
// Include length, precision, scale if present
// Sanitize type to remove backticks
typeStr := writers.SanitizeStructTagValue(column.Type)
hasExplicitTypeModifier := pgsql.HasExplicitTypeModifier(typeStr)
if !hasExplicitTypeModifier && column.Length > 0 {
typeStr = fmt.Sprintf("%s(%d)", typeStr, column.Length)
} else if !hasExplicitTypeModifier && column.Precision > 0 {
if column.Scale > 0 {
typeStr = fmt.Sprintf("%s(%d,%d)", typeStr, column.Precision, column.Scale)
} else {
@@ -335,7 +467,16 @@ func (tm *TypeMapper) NeedsFmtImport(generateGetIDStr bool) bool {
return generateGetIDStr
}
// GetSQLTypesImport returns the import path for the ResolveSpec spectypes package.
func (tm *TypeMapper) GetSQLTypesImport() string {
return "github.com/bitechdev/ResolveSpec/pkg/spectypes"
}
// GetNullableTypeImportLine returns the full Go import line for the nullable type
// package (ready to pass to AddImport). Returns empty string when no import is needed.
func (tm *TypeMapper) GetNullableTypeImportLine() string {
if tm.typeStyle == writers.NullableTypeStdlib {
return "\"database/sql\""
}
return fmt.Sprintf("%s \"%s\"", tm.sqlTypesAlias, tm.GetSQLTypesImport())
}
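As a quick orientation for reviewers, here is a hedged sketch of what the two modes are expected to produce for a table with a nullable text column, a nullable integer column, and a text[] column. The struct and field names are invented for illustration; the spectypes type names come from the NullableType constant documentation later in this diff, the stdlib types come from the maps above, and the import lines mirror GetNullableTypeImportLine (the resolvespec alias is assumed to still default to sql_types). Struct tags are omitted.

package example

import (
	"database/sql"

	sql_types "github.com/bitechdev/ResolveSpec/pkg/spectypes"
)

// NullableTypes = "resolvespec" (default)
type EventResolveSpec struct {
	Title    sql_types.SqlString      // nullable text
	Priority sql_types.SqlInt32       // nullable integer
	Tags     sql_types.SqlStringArray // text[]
}

// NullableTypes = "stdlib"
type EventStdlib struct {
	Title    sql.NullString // nullable text
	Priority sql.NullInt32  // nullable integer
	Tags     []string       // text[] becomes a plain Go slice
}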

View File

@@ -24,7 +24,7 @@ type Writer struct {
func NewWriter(options *writers.WriterOptions) *Writer {
	w := &Writer{
		options: options,
-		typeMapper: NewTypeMapper(),
+		typeMapper: NewTypeMapper(options.NullableTypes),
		config: LoadMethodConfigFromMetadata(options.Metadata),
	}
@@ -77,8 +77,8 @@ func (w *Writer) writeSingleFile(db *models.Database) error {
	packageName := w.getPackageName()
	templateData := NewTemplateData(packageName, w.config)
-	// Add sql_types import (always needed for nullable types)
-	templateData.AddImport(fmt.Sprintf("sql_types \"%s\"", w.typeMapper.GetSQLTypesImport()))
+	// Add nullable types import (resolvespec or stdlib depending on options)
+	templateData.AddImport(w.typeMapper.GetNullableTypeImportLine())
	// Collect all models
	for _, schema := range db.Schemas {
@@ -171,8 +171,8 @@ func (w *Writer) writeMultiFile(db *models.Database) error {
	// Create template data for this single table
	templateData := NewTemplateData(packageName, w.config)
-	// Add sql_types import
-	templateData.AddImport(fmt.Sprintf("sql_types \"%s\"", w.typeMapper.GetSQLTypesImport()))
+	// Add nullable types import (resolvespec or stdlib depending on options)
+	templateData.AddImport(w.typeMapper.GetNullableTypeImportLine())
	// Create model data
	modelData := NewModelData(table, schema.Name, w.typeMapper, w.options.FlattenSchema)

View File

@@ -14,12 +14,12 @@ func TestWriter_WriteTable(t *testing.T) {
	// Create a simple table
	table := models.InitTable("users", "public")
	table.Columns["id"] = &models.Column{
		Name:          "id",
		Type:          "bigint",
		NotNull:       true,
		IsPrimaryKey:  true,
		AutoIncrement: true,
		Sequence:      1,
	}
	table.Columns["email"] = &models.Column{
		Name: "email",
@@ -444,10 +444,10 @@ func TestWriter_MultipleHasManyRelationships(t *testing.T) {
	// Verify all has-many relationships have unique names
	hasManyExpectations := []string{
		"RelRIDAPIProviderOrgLogins",       // Has many via Login
		"RelRIDAPIProviderOrgFilepointers", // Has many via Filepointer
		"RelRIDAPIProviderOrgAPIEvents",    // Has many via APIEvent
		"RelRIDOwner",                      // Belongs to via rid_owner
	}
	for _, exp := range hasManyExpectations {
@@ -643,7 +643,7 @@ func TestNameConverter_Pluralize(t *testing.T) {
	}
}
func TestTypeMapper_SQLTypeToGoType(t *testing.T) {
-	mapper := NewTypeMapper()
+	mapper := NewTypeMapper("")
	tests := []struct {
		sqlType string
@@ -669,3 +669,23 @@ func TestTypeMapper_SQLTypeToGoType(t *testing.T) {
		})
	}
}
func TestTypeMapper_BuildGormTag_PreservesExplicitTypeModifiers(t *testing.T) {
mapper := NewTypeMapper("")
col := &models.Column{
Name: "embedding",
Type: "vector(1536)",
Length: 1536,
Precision: 0,
Scale: 0,
}
tag := mapper.BuildGormTag(col, nil)
if !strings.Contains(tag, "type:vector(1536)") {
t.Fatalf("expected explicit modifier to be preserved, got %q", tag)
}
if strings.Contains(tag, ")(") {
t.Fatalf("type modifier appears duplicated in %q", tag)
}
}
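For context, the assertion above implies a generated field roughly along these lines. The Go field type shown is the stdlib-mode vector mapping ([]float32) from earlier in this diff; the field name and the column: part of the tag are assumptions about BuildGormTag's other output, not taken from the repo.

	// Illustrative only; everything besides "type:vector(1536)" is assumed.
	Embedding []float32 `gorm:"column:embedding;type:vector(1536)"`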

View File

@@ -4,6 +4,7 @@ import (
"strings" "strings"
"git.warky.dev/wdevs/relspecgo/pkg/models" "git.warky.dev/wdevs/relspecgo/pkg/models"
"git.warky.dev/wdevs/relspecgo/pkg/pgsql"
) )
func (w *Writer) sqlTypeToGraphQL(sqlType string, column *models.Column, table *models.Table, schema *models.Schema) string { func (w *Writer) sqlTypeToGraphQL(sqlType string, column *models.Column, table *models.Table, schema *models.Schema) string {
@@ -33,12 +34,11 @@ func (w *Writer) sqlTypeToGraphQL(sqlType string, column *models.Column, table *
	}
	// Standard type mappings
-	baseType := strings.Split(sqlType, "(")[0] // Remove length/precision
-	baseType = strings.TrimSpace(baseType)
+	baseType := pgsql.CanonicalizeBaseType(pgsql.ExtractBaseTypeLower(sqlType))
	// Handle array types
-	if strings.HasSuffix(baseType, "[]") {
-		elemType := strings.TrimSuffix(baseType, "[]")
+	if pgsql.IsArrayType(sqlType) {
+		elemType := pgsql.CanonicalizeBaseType(pgsql.ExtractBaseTypeLower(pgsql.ElementType(sqlType)))
		gqlType := w.mapBaseTypeToGraphQL(elemType)
		return "[" + gqlType + "]"
	}
@@ -108,8 +108,7 @@ func (w *Writer) sqlTypeToCustomScalar(sqlType string) string {
"date": "Date", "date": "Date",
} }
baseType := strings.Split(sqlType, "(")[0] baseType := pgsql.CanonicalizeBaseType(pgsql.ExtractBaseTypeLower(sqlType))
baseType = strings.TrimSpace(baseType)
if scalar, ok := scalarMap[baseType]; ok { if scalar, ok := scalarMap[baseType]; ok {
return scalar return scalar
@@ -132,8 +131,7 @@ func (w *Writer) isIntegerType(sqlType string) bool {
"smallserial": true, "smallserial": true,
} }
baseType := strings.Split(sqlType, "(")[0] baseType := pgsql.CanonicalizeBaseType(pgsql.ExtractBaseTypeLower(sqlType))
baseType = strings.TrimSpace(baseType)
return intTypes[baseType] return intTypes[baseType]
} }
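The pgsql helpers introduced here are repo-local; the following is only a rough sketch of how they presumably behave in the GraphQL mapping above. The return values are assumptions based on the function names and call sites, not verified against pkg/pgsql.

	// Sketch; expected values are assumptions, not verified against pkg/pgsql.
	base := pgsql.CanonicalizeBaseType(pgsql.ExtractBaseTypeLower("VARCHAR(50)")) // presumably "varchar" or its canonical spelling
	if pgsql.IsArrayType("integer[]") {
		elem := pgsql.ElementType("integer[]") // presumably "integer"
		_ = elem                               // the array branch above wraps the mapped type as "[Int]"
	}
	_ = base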

View File

@@ -10,8 +10,6 @@ import (
"strings" "strings"
"time" "time"
"github.com/jackc/pgx/v5"
"git.warky.dev/wdevs/relspecgo/pkg/models" "git.warky.dev/wdevs/relspecgo/pkg/models"
"git.warky.dev/wdevs/relspecgo/pkg/pgsql" "git.warky.dev/wdevs/relspecgo/pkg/pgsql"
"git.warky.dev/wdevs/relspecgo/pkg/writers" "git.warky.dev/wdevs/relspecgo/pkg/writers"
@@ -493,18 +491,19 @@ func (w *Writer) generateColumnDefinition(col *models.Column) string {
	// Type with length/precision - convert to valid PostgreSQL type
	baseType := pgsql.ConvertSQLType(col.Type)
	typeStr := baseType
+	hasExplicitTypeModifier := pgsql.HasExplicitTypeModifier(baseType)
	// Only add size specifiers for types that support them
-	if col.Length > 0 && col.Precision == 0 {
-		if supportsLength(baseType) {
+	if !hasExplicitTypeModifier && col.Length > 0 && col.Precision == 0 {
+		if pgsql.SupportsLength(baseType) {
			typeStr = fmt.Sprintf("%s(%d)", baseType, col.Length)
		} else if isTextTypeWithoutLength(baseType) {
			// Convert text with length to varchar
			typeStr = fmt.Sprintf("varchar(%d)", col.Length)
		}
		// For types that don't support length (integer, bigint, etc.), ignore the length
-	} else if col.Precision > 0 {
-		if supportsPrecision(baseType) {
+	} else if !hasExplicitTypeModifier && col.Precision > 0 {
+		if pgsql.SupportsPrecision(baseType) {
			if col.Scale > 0 {
				typeStr = fmt.Sprintf("%s(%d,%d)", baseType, col.Precision, col.Scale)
			} else {
@@ -1268,30 +1267,6 @@ func isTextType(colType string) bool {
	return false
}
// supportsLength checks if a PostgreSQL type supports length specification
func supportsLength(colType string) bool {
lengthTypes := []string{"varchar", "character varying", "char", "character", "bit", "bit varying", "varbit"}
lowerType := strings.ToLower(colType)
for _, t := range lengthTypes {
if lowerType == t || strings.HasPrefix(lowerType, t+"(") {
return true
}
}
return false
}
// supportsPrecision checks if a PostgreSQL type supports precision/scale specification
func supportsPrecision(colType string) bool {
precisionTypes := []string{"numeric", "decimal", "time", "timestamp", "timestamptz", "timestamp with time zone", "timestamp without time zone", "time with time zone", "time without time zone", "interval"}
lowerType := strings.ToLower(colType)
for _, t := range precisionTypes {
if lowerType == t || strings.HasPrefix(lowerType, t+"(") {
return true
}
}
return false
}
// isTextTypeWithoutLength checks if type is text (which should convert to varchar when length is specified)
func isTextTypeWithoutLength(colType string) bool {
	return strings.EqualFold(colType, "text")
@@ -1376,7 +1351,7 @@ func (w *Writer) executeDatabaseSQL(db *models.Database, connString string) erro
	// Connect to database
	ctx := context.Background()
-	conn, err := pgx.Connect(ctx, connString)
+	conn, err := pgsql.Connect(ctx, connString, "writer-pgsql")
	if err != nil {
		return fmt.Errorf("failed to connect to database: %w", err)
	}
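pgsql.Connect replaces the direct pgx.Connect calls in the writers; judging from the call sites, its third argument is a caller label. Below is a minimal sketch of what such a wrapper could look like, assuming it sets a default application_name through pgx's RuntimeParams. The actual helper in pkg/pgsql may compose the name differently.

package pgsql

import (
	"context"
	"fmt"

	"github.com/jackc/pgx/v5"
)

// Connect opens a pgx connection and tags it with an application_name so the
// caller shows up in pg_stat_activity. Sketch only; not the repo's implementation.
func Connect(ctx context.Context, connString, component string) (*pgx.Conn, error) {
	cfg, err := pgx.ParseConfig(connString)
	if err != nil {
		return nil, fmt.Errorf("parse connection string: %w", err)
	}
	// Only set a default; an application_name already present in connString wins.
	if _, ok := cfg.RuntimeParams["application_name"]; !ok {
		cfg.RuntimeParams["application_name"] = "relspecgo-" + component // naming scheme assumed
	}
	return pgx.ConnectConfig(ctx, cfg)
}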

View File

@@ -426,11 +426,11 @@ func TestWriteAllConstraintTypes(t *testing.T) {
	// Verify all constraint types are present
	expectedConstraints := map[string]string{
		"Primary Key":    "PRIMARY KEY",
		"Unique":         "ADD CONSTRAINT uq_order_number UNIQUE (order_number)",
		"Check (total)":  "ADD CONSTRAINT ck_total_positive CHECK (total > 0)",
		"Check (status)": "ADD CONSTRAINT ck_status_valid CHECK (status IN ('pending', 'completed', 'cancelled'))",
		"Foreign Key":    "FOREIGN KEY",
	}
	for name, expected := range expectedConstraints {
@@ -715,11 +715,11 @@ func TestColumnSizeSpecifiers(t *testing.T) {
	// Verify valid patterns ARE present
	validPatterns := []string{
		"integer",       // without size
		"bigint",        // without size
		"smallint",      // without size
		"varchar(100)",  // text converted to varchar with length
		"varchar(50)",   // varchar with length
		"decimal(19,4)", // decimal with precision and scale
	}
	for _, pattern := range validPatterns {
@@ -729,6 +729,56 @@ func TestColumnSizeSpecifiers(t *testing.T) {
	}
}
func TestGenerateColumnDefinition_PreservesExplicitTypeModifiers(t *testing.T) {
writer := NewWriter(&writers.WriterOptions{})
cases := []struct {
name string
colType string
length int
precision int
scale int
wantType string
}{
{
name: "character varying already includes length",
colType: "character varying(50)",
length: 50,
wantType: "character varying(50)",
},
{
name: "numeric already includes precision",
colType: "numeric(10,2)",
precision: 10,
scale: 2,
wantType: "numeric(10,2)",
},
{
name: "custom vector modifier preserved",
colType: "vector(1536)",
wantType: "vector(1536)",
},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
col := models.InitColumn("sample", "events", "public")
col.Type = tc.colType
col.Length = tc.length
col.Precision = tc.precision
col.Scale = tc.scale
def := writer.generateColumnDefinition(col)
if !strings.Contains(def, " "+tc.wantType+" ") && !strings.HasSuffix(def, " "+tc.wantType) {
t.Fatalf("generated definition %q does not contain expected type %q", def, tc.wantType)
}
if strings.Contains(def, ")(") {
t.Fatalf("generated definition %q appears to duplicate modifiers", def)
}
})
}
}
func TestGenerateAddColumnStatements(t *testing.T) {
	// Create a test database with tables that have new columns
	db := models.InitDatabase("testdb")

View File

@@ -8,6 +8,7 @@ import (
"github.com/jackc/pgx/v5" "github.com/jackc/pgx/v5"
"git.warky.dev/wdevs/relspecgo/pkg/models" "git.warky.dev/wdevs/relspecgo/pkg/models"
"git.warky.dev/wdevs/relspecgo/pkg/pgsql"
"git.warky.dev/wdevs/relspecgo/pkg/writers" "git.warky.dev/wdevs/relspecgo/pkg/writers"
) )
@@ -42,7 +43,7 @@ func (w *Writer) WriteDatabase(db *models.Database) error {
	// Connect to database
	ctx := context.Background()
-	conn, err := pgx.Connect(ctx, connString)
+	conn, err := pgsql.Connect(ctx, connString, "writer-sqlexec")
	if err != nil {
		return fmt.Errorf("failed to connect to database: %w", err)
	}
@@ -72,7 +73,7 @@ func (w *Writer) WriteSchema(schema *models.Schema) error {
	// Connect to database
	ctx := context.Background()
-	conn, err := pgx.Connect(ctx, connString)
+	conn, err := pgsql.Connect(ctx, connString, "writer-sqlexec")
	if err != nil {
		return fmt.Errorf("failed to connect to database: %w", err)
	}

View File

@@ -20,6 +20,18 @@ type Writer interface {
	WriteTable(table *models.Table) error
}
// NullableType constants control which Go package is used for nullable column types
// in code-generation writers (Bun, GORM).
const (
// NullableTypeResolveSpec uses github.com/bitechdev/ResolveSpec/pkg/spectypes
// (SqlString, SqlInt32, SqlVector, SqlStringArray, …). This is the default.
NullableTypeResolveSpec = "resolvespec"
// NullableTypeStdlib uses the standard library database/sql nullable types
// (sql.NullString, sql.NullInt32, …) and plain Go slices for arrays.
NullableTypeStdlib = "stdlib"
)
// WriterOptions contains common options for writers
type WriterOptions struct {
	// OutputPath is the path where the output should be written
@@ -33,6 +45,12 @@ type WriterOptions struct {
	// Useful for databases like SQLite that do not support schemas.
	FlattenSchema bool
// NullableTypes selects the Go type package used for nullable columns in
// code-generation writers (bun, gorm). Accepted values:
// "resolvespec" (default) — github.com/bitechdev/ResolveSpec/pkg/spectypes
// "stdlib" — database/sql (sql.NullString, sql.NullInt32, …)
NullableTypes string
	// Additional options can be added here as needed
	Metadata map[string]interface{}
}
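To make the new option concrete, a hedged usage sketch: the NewWriter signature is the one shown earlier in this diff, while the package alias, output path, and follow-up calls are illustrative assumptions.

	opts := &writers.WriterOptions{
		OutputPath:    "./models",                 // illustrative value
		NullableTypes: writers.NullableTypeStdlib, // default is writers.NullableTypeResolveSpec
	}
	w := gorm.NewWriter(opts) // assumes the GORM writer package is imported as gorm
	_ = w                     // then write via the Writer interface (WriteTable etc.)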