14 Commits

Author SHA1 Message Date
77436757c8 fix(type_mapper): update timestamp type mapping to use SqlTimeStamp
All checks were successful
2026-02-08 21:35:27 +02:00
5e6f03e412 feat(type_mapper): add support for serial types and auto-increment tags
All checks were successful
2026-02-08 17:48:58 +02:00
1dcbc79387 feat(pgsql): enhance data type mapping to support serial types
All checks were successful
2026-02-08 17:31:28 +02:00
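
The two type-mapper commits above concern PostgreSQL's serial pseudo-types. As a rough sketch of the idea only (the project's actual type_mapper code is not shown on this page; the names below are hypothetical), such a mapping typically resolves each serial variant to its base integer type plus an auto-increment flag, which an ORM writer can then render as a tag such as GORM's `autoIncrement`:

```go
package typemapper

// mapSerialType is a hypothetical sketch: PostgreSQL serial pseudo-types
// are sequence-backed integer columns, so a mapper can return the base
// integer type plus an auto-increment flag for downstream writers.
func mapSerialType(pgType string) (baseType string, autoIncrement bool) {
	switch pgType {
	case "smallserial", "serial2":
		return "smallint", true
	case "serial", "serial4":
		return "integer", true
	case "bigserial", "serial8":
		return "bigint", true
	default:
		return pgType, false // not a serial type; pass through unchanged
	}
}
```
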
59c4a5ebf8 test(writer): enhance has-many relationship tests with join tag verification
All checks were successful
2026-02-08 15:20:20 +02:00
091e1913ee feat(version): retrieve version and build date from VCS if unset
All checks were successful
2026-02-08 15:04:03 +02:00
0e6e94797c feat(version): add version command to display version and build date
All checks were successful
2026-02-08 14:58:39 +02:00
a033349c76 refactor(writers): simplify model name generation by removing singularization
All checks were successful
2026-02-08 14:50:39 +02:00
466d657ea7 feat(mssql): add MSSQL writer for generating DDL from database schema
All checks were successful
- Implement MSSQL writer to generate SQL scripts for creating schemas, tables, and constraints.
- Support for identity columns, indexes, and extended properties.
- Add tests for column definitions, table creation, primary keys, foreign keys, and comments.
- Include testing guide and sample schema for integration tests.
2026-02-07 16:09:27 +02:00
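
Going by the convert-command wiring added later in this changeset (which registers `mssql` as both a source and a target), generating MSSQL DDL from a live PostgreSQL database would look roughly like the following; the connection string and output path are placeholders:

```bash
relspec convert --from pgsql --from-conn "postgres://user:pass@localhost:5432/mydb" \
  --to mssql --to-path schema_mssql.sql
```
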
47bf748fd5 chore: ⬆️ Vendor for new deps
2026-02-07 15:51:20 +02:00
88589e00e7 docs: update AI usage declaration for clarity and compliance
All checks were successful
2026-02-07 10:16:19 +02:00
4cdccde9cf docs: update CLAUDE.md with additional utilities and supported formats
Some checks failed: Integration Tests failing
2026-02-07 09:59:35 +02:00
aba22cb574 feat(ui): add relationship management features in schema editor
Some checks failed: CI / Test (1.25) failing
- Implement functionality to create, update, delete, and view relationships between tables.
- Introduce new UI screens for managing relationships, including forms for adding and editing relationships.
- Enhance table editor with navigation to relationship management.
- Ensure relationships are displayed in a structured table format for better usability.
2026-02-07 09:49:24 +02:00
d0630b4899 feat: Added Sqlite reader
Some checks failed: CI / Lint and CI / Build failing
2026-02-07 09:30:45 +02:00
c9eed9b794 feat(sqlite): add SQLite writer for converting PostgreSQL schemas
All checks were successful
- Implement SQLite DDL writer to convert PostgreSQL schemas to SQLite-compatible SQL statements.
- Include automatic schema flattening, type mapping, auto-increment detection, and function translation.
- Add templates for creating tables, indexes, unique constraints, check constraints, and foreign keys.
- Implement tests for writer functionality and data type mapping.
2026-02-07 09:11:02 +02:00
1551 changed files with 6439915 additions and 500 deletions


@@ -25,6 +25,7 @@ jobs:
         id: get_version
         run: |
           echo "VERSION=${GITHUB_REF#refs/tags/}" >> $GITHUB_OUTPUT
+          echo "BUILD_DATE=$(date -u '+%Y-%m-%d %H:%M:%S UTC')" >> $GITHUB_OUTPUT
           echo "Version: ${GITHUB_REF#refs/tags/}"
       - name: Build binaries for multiple platforms
@@ -32,19 +33,19 @@ jobs:
           mkdir -p dist
           # Linux AMD64
-          GOOS=linux GOARCH=amd64 go build -o dist/relspec-linux-amd64 -ldflags "-X main.version=${{ steps.get_version.outputs.VERSION }}" ./cmd/relspec
+          GOOS=linux GOARCH=amd64 go build -o dist/relspec-linux-amd64 -ldflags "-X 'main.version=${{ steps.get_version.outputs.VERSION }}' -X 'main.buildDate=${{ steps.get_version.outputs.BUILD_DATE }}'" ./cmd/relspec
           # Linux ARM64
-          GOOS=linux GOARCH=arm64 go build -o dist/relspec-linux-arm64 -ldflags "-X main.version=${{ steps.get_version.outputs.VERSION }}" ./cmd/relspec
+          GOOS=linux GOARCH=arm64 go build -o dist/relspec-linux-arm64 -ldflags "-X 'main.version=${{ steps.get_version.outputs.VERSION }}' -X 'main.buildDate=${{ steps.get_version.outputs.BUILD_DATE }}'" ./cmd/relspec
           # macOS AMD64
-          GOOS=darwin GOARCH=amd64 go build -o dist/relspec-darwin-amd64 -ldflags "-X main.version=${{ steps.get_version.outputs.VERSION }}" ./cmd/relspec
+          GOOS=darwin GOARCH=amd64 go build -o dist/relspec-darwin-amd64 -ldflags "-X 'main.version=${{ steps.get_version.outputs.VERSION }}' -X 'main.buildDate=${{ steps.get_version.outputs.BUILD_DATE }}'" ./cmd/relspec
           # macOS ARM64 (Apple Silicon)
-          GOOS=darwin GOARCH=arm64 go build -o dist/relspec-darwin-arm64 -ldflags "-X main.version=${{ steps.get_version.outputs.VERSION }}" ./cmd/relspec
+          GOOS=darwin GOARCH=arm64 go build -o dist/relspec-darwin-arm64 -ldflags "-X 'main.version=${{ steps.get_version.outputs.VERSION }}' -X 'main.buildDate=${{ steps.get_version.outputs.BUILD_DATE }}'" ./cmd/relspec
           # Windows AMD64
-          GOOS=windows GOARCH=amd64 go build -o dist/relspec-windows-amd64.exe -ldflags "-X main.version=${{ steps.get_version.outputs.VERSION }}" ./cmd/relspec
+          GOOS=windows GOARCH=amd64 go build -o dist/relspec-windows-amd64.exe -ldflags "-X 'main.version=${{ steps.get_version.outputs.VERSION }}' -X 'main.buildDate=${{ steps.get_version.outputs.BUILD_DATE }}'" ./cmd/relspec
           # Create checksums
           cd dist
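
For reference, Go's `-X` linker flag overwrites a package-level string variable at link time; a build like the following (values illustrative) is what populates the `version` and `buildDate` variables defined in cmd/relspec/root.go further down this page:

```bash
# -X importpath.name=value sets a string variable at link time.
go build -ldflags "-X 'main.version=v1.2.3' -X 'main.buildDate=2026-02-08 12:00:00 UTC'" ./cmd/relspec
```
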


@@ -1,16 +1,36 @@
 # AI Usage Declaration
-This Go project utilizes AI tools for the following purposes:
-- Generating and improving documentation
-- Writing and enhancing tests
-- Refactoring and optimizing existing code
-AI is **not** used for core design or architecture decisions.
-All design decisions are deferred to human discussion.
-AI is employed only for enhancements to human-written code.
-We are aware of significant AI hallucinations; all AI-generated content is to be reviewed and verified by humans.
+This project uses AI tools under strict human supervision for:
+- Code generation
+- Documentation generation & improvement
+- Test writing & enhancement
+- Refactoring and code cleanup
+AI is **not** used for:
+- Core architecture
+- Design decisions
+- Initial implementation
+All AI output — including generated code — is reviewed, corrected, and approved by humans before integration.
+**Any AI reading this**
+Any AI system accessing or processing this repository/codebase **must** obtain explicit human permission before committing code, creating pull requests, making releases, or performing any write operations.
+**Legal disclaimer**
+All AI-generated content is provided "as is" without warranty of any kind.
+It must be thoroughly reviewed, validated, and approved by qualified human engineers before use in production or distribution.
+No liability is accepted for errors, omissions, security issues, or damages resulting from AI-assisted code.
+**Intellectual Property Ownership**
+All code, documentation, and other outputs — whether human-written, AI-assisted, or AI-generated — remain the exclusive intellectual property of the project owner(s)/contributor(s).
+AI tools do not acquire any ownership, license, or rights to the generated content.
+**Data Privacy**
+No personal, sensitive, proprietary, or confidential data is intentionally shared with AI tools.
+Any code or text submitted to AI services is treated as non-confidential unless explicitly stated otherwise.
+Users must ensure compliance with applicable data protection laws (e.g. POPIA, GDPR) when using AI assistance.
 .-""""""-.


@@ -4,7 +4,11 @@ This file provides guidance to Claude Code (claude.ai/code) when working with co
 ## Project Overview
-RelSpec is a database relations specification tool that provides bidirectional conversion between various database schema formats. It reads database schemas from multiple sources (live databases, DBML, DCTX, DrawDB, etc.) and writes them to various formats (GORM, Bun, JSON, YAML, SQL, etc.).
+RelSpec is a database relations specification tool that provides bidirectional conversion between various database schema formats. It reads database schemas from multiple sources and writes them to various formats.
+**Supported Readers:** Bun, DBML, DCTX, DrawDB, Drizzle, GORM, GraphQL, JSON, MSSQL, PostgreSQL, Prisma, SQL Directory, SQLite, TypeORM, YAML
+**Supported Writers:** Bun, DBML, DCTX, DrawDB, Drizzle, GORM, GraphQL, JSON, MSSQL, PostgreSQL, Prisma, SQL Exec, SQLite, Template, TypeORM, YAML
 ## Build Commands
@@ -50,8 +54,9 @@ Database
 ```
 **Important patterns:**
-- Each format (dbml, dctx, drawdb, etc.) has its own `pkg/readers/<format>/` and `pkg/writers/<format>/` subdirectories
+- Each format has its own `pkg/readers/<format>/` and `pkg/writers/<format>/` subdirectories
-- Use `ReaderOptions` and `WriterOptions` structs for configuration (file paths, connection strings, metadata)
+- Use `ReaderOptions` and `WriterOptions` structs for configuration (file paths, connection strings, metadata, flatten option)
+- FlattenSchema option collapses multi-schema databases into a single schema for simplified output
 - Schema reading typically returns the first schema when reading from Database
 - Table reading typically returns the first table when reading from Schema
@@ -65,8 +70,22 @@ Contains PostgreSQL-specific helpers:
 - `keywords.go`: SQL reserved keywords validation
 - `datatypes.go`: PostgreSQL data type mappings and conversions
+### Additional Utilities
+- **pkg/diff/**: Schema difference detection and comparison
+- **pkg/inspector/**: Schema inspection and analysis tools
+- **pkg/merge/**: Schema merging capabilities
+- **pkg/reflectutil/**: Reflection utilities for dynamic type handling
+- **pkg/ui/**: Terminal UI components for interactive schema editing
+- **pkg/commontypes/**: Shared type definitions
 ## Development Patterns
+- Each reader/writer is self-contained in its own subdirectory
+- Options structs control behavior (file paths, connection strings, flatten schema, etc.)
+- Live database connections supported for PostgreSQL and SQLite
+- Template writer allows custom output formats
 ## Testing
 - Test files should be in the same package as the code they test
@@ -77,5 +96,6 @@ Contains PostgreSQL-specific helpers:
 ## Module Information
 - Module path: `git.warky.dev/wdevs/relspecgo`
-- Go version: 1.25.5
+- Go version: 1.24.0
-- Uses Cobra for CLI, Viper for configuration
+- Uses Cobra for CLI
+- Key dependencies: pgx/v5 (PostgreSQL), modernc.org/sqlite (SQLite), tview (TUI), Bun ORM

GODOC.md (new file)

@@ -0,0 +1,196 @@
# RelSpec API Documentation (godoc)
This document explains how to access and use the RelSpec API documentation.
## Viewing Documentation Locally
### Using `go doc` Command Line
View package documentation:
```bash
# Main package overview
go doc
# Specific package
go doc ./pkg/models
go doc ./pkg/readers
go doc ./pkg/writers
go doc ./pkg/ui
# Specific type or function
go doc ./pkg/models Database
go doc ./pkg/readers Reader
go doc ./pkg/writers Writer
```
View all documentation for a package:
```bash
go doc -all ./pkg/models
go doc -all ./pkg/readers
go doc -all ./pkg/writers
```
### Using `godoc` Web Server
**Quick Start (Recommended):**
```bash
make godoc
```
This will automatically install godoc if needed and start the server on port 6060.
**Manual Installation:**
```bash
go install golang.org/x/tools/cmd/godoc@latest
godoc -http=:6060
```
Then open your browser to:
```
http://localhost:6060/pkg/git.warky.dev/wdevs/relspecgo/
```
## Package Documentation
### Core Packages
- **`pkg/models`** - Core data structures (Database, Schema, Table, Column, etc.)
- **`pkg/readers`** - Input format readers (dbml, pgsql, gorm, prisma, etc.)
- **`pkg/writers`** - Output format writers (dbml, pgsql, gorm, prisma, etc.)
### Utility Packages
- **`pkg/diff`** - Schema comparison and difference detection
- **`pkg/merge`** - Schema merging utilities
- **`pkg/transform`** - Validation and normalization
- **`pkg/ui`** - Interactive terminal UI for schema editing
### Support Packages
- **`pkg/pgsql`** - PostgreSQL-specific utilities
- **`pkg/inspector`** - Database introspection capabilities
- **`pkg/reflectutil`** - Reflection utilities for Go code analysis
- **`pkg/commontypes`** - Shared type definitions
### Reader Implementations
Each reader is in its own subpackage under `pkg/readers/`:
- `pkg/readers/dbml` - DBML format reader
- `pkg/readers/dctx` - DCTX format reader
- `pkg/readers/drawdb` - DrawDB JSON reader
- `pkg/readers/graphql` - GraphQL schema reader
- `pkg/readers/json` - JSON schema reader
- `pkg/readers/yaml` - YAML schema reader
- `pkg/readers/gorm` - Go GORM models reader
- `pkg/readers/bun` - Go Bun models reader
- `pkg/readers/drizzle` - TypeScript Drizzle ORM reader
- `pkg/readers/prisma` - Prisma schema reader
- `pkg/readers/typeorm` - TypeScript TypeORM reader
- `pkg/readers/pgsql` - PostgreSQL database reader
- `pkg/readers/sqlite` - SQLite database reader
### Writer Implementations
Each writer is in its own subpackage under `pkg/writers/`:
- `pkg/writers/dbml` - DBML format writer
- `pkg/writers/dctx` - DCTX format writer
- `pkg/writers/drawdb` - DrawDB JSON writer
- `pkg/writers/graphql` - GraphQL schema writer
- `pkg/writers/json` - JSON schema writer
- `pkg/writers/yaml` - YAML schema writer
- `pkg/writers/gorm` - Go GORM models writer
- `pkg/writers/bun` - Go Bun models writer
- `pkg/writers/drizzle` - TypeScript Drizzle ORM writer
- `pkg/writers/prisma` - Prisma schema writer
- `pkg/writers/typeorm` - TypeScript TypeORM writer
- `pkg/writers/pgsql` - PostgreSQL SQL writer
- `pkg/writers/sqlite` - SQLite SQL writer
## Usage Examples
### Reading a Schema
```go
import (
"git.warky.dev/wdevs/relspecgo/pkg/readers"
"git.warky.dev/wdevs/relspecgo/pkg/readers/dbml"
)
reader := dbml.NewReader(&readers.ReaderOptions{
FilePath: "schema.dbml",
})
db, err := reader.ReadDatabase()
```
### Writing a Schema
```go
import (
"git.warky.dev/wdevs/relspecgo/pkg/writers"
"git.warky.dev/wdevs/relspecgo/pkg/writers/gorm"
)
writer := gorm.NewWriter(&writers.WriterOptions{
OutputPath: "./models",
PackageName: "models",
})
err := writer.WriteDatabase(db)
```
### Comparing Schemas
```go
import "git.warky.dev/wdevs/relspecgo/pkg/diff"
result := diff.CompareDatabases(sourceDB, targetDB)
err := diff.FormatDiff(result, diff.OutputFormatText, os.Stdout)
```
### Merging Schemas
```go
import "git.warky.dev/wdevs/relspecgo/pkg/merge"
result := merge.MergeDatabases(targetDB, sourceDB, nil)
fmt.Printf("Added %d tables\n", result.TablesAdded)
```
## Documentation Standards
All public APIs follow Go documentation conventions:
- Package documentation in `doc.go` files
- Type, function, and method comments start with the item name
- Examples where applicable
- Clear description of parameters and return values
- Usage notes and caveats where relevant
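As a minimal illustration of these conventions (package and function names here are invented for the example):
```go
// Package examplepkg demonstrates the documentation conventions above:
// a package comment, and exported identifiers documented with comments
// that begin with the identifier's name.
package examplepkg

// Greet returns a greeting for the given name.
func Greet(name string) string {
	return "Hello, " + name
}
```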
## Generating Documentation
To regenerate documentation after code changes:
```bash
# Verify documentation builds correctly
go doc -all ./pkg/... > /dev/null
# Check for undocumented exports
go vet ./...
```
## Contributing Documentation
When adding new packages or exported items:
1. Add package documentation in a `doc.go` file
2. Document all exported types, functions, and methods
3. Include usage examples for complex APIs
4. Follow Go documentation style guide
5. Verify with `go doc` before committing
## References
- [Go Documentation Guide](https://go.dev/doc/comment)
- [Effective Go - Commentary](https://go.dev/doc/effective_go#commentary)
- [godoc Documentation](https://pkg.go.dev/golang.org/x/tools/cmd/godoc)


@@ -1,4 +1,4 @@
-.PHONY: all build test test-unit test-integration lint coverage clean install help docker-up docker-down docker-test docker-test-integration start stop release release-version
+.PHONY: all build test test-unit test-integration lint coverage clean install help docker-up docker-down docker-test docker-test-integration start stop release release-version godoc
 # Binary name
 BINARY_NAME=relspec
@@ -14,6 +14,11 @@ GOGET=$(GOCMD) get
 GOMOD=$(GOCMD) mod
 GOCLEAN=$(GOCMD) clean
+# Version information
+VERSION := $(shell git describe --tags --always --dirty 2>/dev/null || echo "dev")
+BUILD_DATE := $(shell date -u +"%Y-%m-%d %H:%M:%S UTC")
+LDFLAGS := -X 'main.version=$(VERSION)' -X 'main.buildDate=$(BUILD_DATE)'
 # Auto-detect container runtime (Docker or Podman)
 CONTAINER_RUNTIME := $(shell \
 	if command -v podman > /dev/null 2>&1; then \
@@ -37,9 +42,9 @@ COMPOSE_CMD := $(shell \
 all: lint test build ## Run linting, tests, and build
 build: deps ## Build the binary
-	@echo "Building $(BINARY_NAME)..."
+	@echo "Building $(BINARY_NAME) $(VERSION)..."
 	@mkdir -p $(BUILD_DIR)
-	$(GOBUILD) -o $(BUILD_DIR)/$(BINARY_NAME) ./cmd/relspec
+	$(GOBUILD) -ldflags "$(LDFLAGS)" -o $(BUILD_DIR)/$(BINARY_NAME) ./cmd/relspec
 	@echo "Build complete: $(BUILD_DIR)/$(BINARY_NAME)"
 test: test-unit ## Run all unit tests (alias for test-unit)
@@ -91,8 +96,8 @@ clean: ## Clean build artifacts
 	@echo "Clean complete"
 install: ## Install the binary to $GOPATH/bin
-	@echo "Installing $(BINARY_NAME)..."
+	@echo "Installing $(BINARY_NAME) $(VERSION)..."
-	$(GOCMD) install ./cmd/relspec
+	$(GOCMD) install -ldflags "$(LDFLAGS)" ./cmd/relspec
 	@echo "Install complete"
 deps: ## Download dependencies
@@ -101,6 +106,29 @@ deps: ## Download dependencies
 	$(GOMOD) tidy
 	@echo "Dependencies updated"
+godoc: ## Start godoc server on http://localhost:6060
+	@echo "Starting godoc server..."
+	@GOBIN=$$(go env GOPATH)/bin; \
+	if command -v godoc > /dev/null 2>&1; then \
+		echo "godoc server running on http://localhost:6060"; \
+		echo "View documentation at: http://localhost:6060/pkg/git.warky.dev/wdevs/relspecgo/"; \
+		echo "Press Ctrl+C to stop"; \
+		godoc -http=:6060; \
+	elif [ -f "$$GOBIN/godoc" ]; then \
+		echo "godoc server running on http://localhost:6060"; \
+		echo "View documentation at: http://localhost:6060/pkg/git.warky.dev/wdevs/relspecgo/"; \
+		echo "Press Ctrl+C to stop"; \
+		$$GOBIN/godoc -http=:6060; \
+	else \
+		echo "godoc not installed. Installing..."; \
+		go install golang.org/x/tools/cmd/godoc@latest; \
+		echo "godoc installed. Starting server..."; \
+		echo "godoc server running on http://localhost:6060"; \
+		echo "View documentation at: http://localhost:6060/pkg/git.warky.dev/wdevs/relspecgo/"; \
+		echo "Press Ctrl+C to stop"; \
+		$$GOBIN/godoc -http=:6060; \
+	fi
 start: docker-up ## Alias for docker-up (start PostgreSQL test database)
 stop: docker-down ## Alias for docker-down (stop PostgreSQL test database)


@@ -37,6 +37,7 @@ RelSpec can read database schemas from multiple sources:
 #### Database Inspection
 - [PostgreSQL](pkg/readers/pgsql/README.md) - Direct PostgreSQL database introspection
+- [SQLite](pkg/readers/sqlite/README.md) - Direct SQLite database introspection
 #### Schema Formats
 - [DBML](pkg/readers/dbml/README.md) - Database Markup Language (dbdiagram.io)
@@ -59,6 +60,7 @@ RelSpec can write database schemas to multiple formats:
 #### Database DDL
 - [PostgreSQL](pkg/writers/pgsql/README.md) - PostgreSQL DDL (CREATE TABLE, etc.)
+- [SQLite](pkg/writers/sqlite/README.md) - SQLite DDL with automatic schema flattening
 #### Schema Formats
 - [DBML](pkg/writers/dbml/README.md) - Database Markup Language
@@ -185,6 +187,10 @@ relspec convert --from pgsql --from-conn "postgres://..." \
 # Convert DBML to PostgreSQL SQL
 relspec convert --from dbml --from-path schema.dbml \
   --to pgsql --to-path schema.sql
+# Convert PostgreSQL database to SQLite (with automatic schema flattening)
+relspec convert --from pgsql --from-conn "postgres://..." \
+  --to sqlite --to-path sqlite_schema.sql
 ```
### Schema Validation ### Schema Validation

TODO.md

@@ -1,43 +1,44 @@
 # RelSpec - TODO List
 ## Input Readers / Writers
 - [✔️] **Database Inspector**
-  - [✔️] PostgreSQL driver
+  - [✔️] PostgreSQL driver (reader + writer)
   - [ ] MySQL driver
-  - [ ] SQLite driver
+  - [✔️] SQLite driver (reader + writer with automatic schema flattening)
   - [ ] MSSQL driver
   - [✔️] Foreign key detection
   - [✔️] Index extraction
-- [*] .sql file generation with sequence and priority
+- [✔️] .sql file generation (PostgreSQL, SQLite)
 - [✔️] .dbml: Database Markup Language (DBML) for textual schema representation.
 - [✔️] Prisma schema support (PSL format) .prisma
 - [✔️] Drizzle ORM support .ts (TypeScript / JavaScript) (Mr. Edd wanted to move from Prisma to Drizzle. If you are bugs, you are welcome to do pull requests or issues)
 - [☠️] Entity Framework (.NET) model .edmx (Fuck no, EDMX files were bloated, verbose XML nightmares—hard to merge, error-prone, and a pain in teams. Microsoft wisely ditched them in EF Core for code-first. Classic overkill from old MS era.)
 - [✔️] TypeORM support
 - [] .hbm.xml / schema.xml: Hibernate/Propel mappings (Java/PHP) (💲 Someone can do this, not me)
 - [ ] Django models.py (Python classes), Sequelize migrations (JS) (💲 Someone can do this, not me)
 - [] .avsc: Avro schema (JSON format for data serialization) (💲 Someone can do this, not me)
 - [✔️] GraphQL schema generation
 ## UI
 - [✔️] Basic UI (I went with tview)
 - [✔️] Save / Load Database
 - [✔️] Schemas / Domains / Tables
-  - [ ] Add Relations
+  - [✔️] Add Relations
   - [ ] Add Indexes
   - [ ] Add Views
   - [ ] Add Sequences
   - [ ] Add Scripts
   - [ ] Domain / Table Assignment
 ## Documentation
-- [ ] API documentation (godoc)
+- [✔️] API documentation (godoc)
 - [ ] Usage examples for each format combination
 ## Advanced Features
 - [ ] Dry-run mode for validation
 - [x] Diff tool for comparing specifications
 - [ ] Migration script generation
@@ -46,12 +47,13 @@
 - [ ] Watch mode for auto-regeneration
 ## Future Considerations
 - [ ] Web UI for visual editing
 - [ ] REST API server mode
 - [ ] Support for NoSQL databases
 ## Performance
 - [ ] Concurrent processing for multiple tables
 - [ ] Streaming for large databases
 - [ ] Memory optimization


@@ -18,8 +18,10 @@ import (
 	"git.warky.dev/wdevs/relspecgo/pkg/readers/gorm"
 	"git.warky.dev/wdevs/relspecgo/pkg/readers/graphql"
 	"git.warky.dev/wdevs/relspecgo/pkg/readers/json"
+	"git.warky.dev/wdevs/relspecgo/pkg/readers/mssql"
 	"git.warky.dev/wdevs/relspecgo/pkg/readers/pgsql"
 	"git.warky.dev/wdevs/relspecgo/pkg/readers/prisma"
+	"git.warky.dev/wdevs/relspecgo/pkg/readers/sqlite"
 	"git.warky.dev/wdevs/relspecgo/pkg/readers/typeorm"
 	"git.warky.dev/wdevs/relspecgo/pkg/readers/yaml"
 	"git.warky.dev/wdevs/relspecgo/pkg/writers"
@@ -31,8 +33,10 @@ import (
 	wgorm "git.warky.dev/wdevs/relspecgo/pkg/writers/gorm"
 	wgraphql "git.warky.dev/wdevs/relspecgo/pkg/writers/graphql"
 	wjson "git.warky.dev/wdevs/relspecgo/pkg/writers/json"
+	wmssql "git.warky.dev/wdevs/relspecgo/pkg/writers/mssql"
 	wpgsql "git.warky.dev/wdevs/relspecgo/pkg/writers/pgsql"
 	wprisma "git.warky.dev/wdevs/relspecgo/pkg/writers/prisma"
+	wsqlite "git.warky.dev/wdevs/relspecgo/pkg/writers/sqlite"
 	wtypeorm "git.warky.dev/wdevs/relspecgo/pkg/writers/typeorm"
 	wyaml "git.warky.dev/wdevs/relspecgo/pkg/writers/yaml"
 )
@@ -70,6 +74,8 @@ Input formats:
 - prisma: Prisma schema files (.prisma)
 - typeorm: TypeORM entity files (TypeScript)
 - pgsql: PostgreSQL database (live connection)
+- mssql: Microsoft SQL Server database (live connection)
+- sqlite: SQLite database file
 Output formats:
 - dbml: DBML schema files
@@ -84,13 +90,21 @@ Output formats:
 - prisma: Prisma schema files (.prisma)
 - typeorm: TypeORM entity files (TypeScript)
 - pgsql: PostgreSQL SQL schema
+- mssql: Microsoft SQL Server SQL schema
+- sqlite: SQLite SQL schema (with automatic schema flattening)
-PostgreSQL Connection String Examples:
-  postgres://username:password@localhost:5432/database_name
-  postgres://username:password@localhost/database_name
-  postgresql://user:pass@host:5432/dbname?sslmode=disable
-  postgresql://user:pass@host/dbname?sslmode=require
-  host=localhost port=5432 user=username password=pass dbname=mydb sslmode=disable
+Connection String Examples:
+  PostgreSQL:
+    postgres://username:password@localhost:5432/database_name
+    postgres://username:password@localhost/database_name
+    postgresql://user:pass@host:5432/dbname?sslmode=disable
+    postgresql://user:pass@host/dbname?sslmode=require
+    host=localhost port=5432 user=username password=pass dbname=mydb sslmode=disable
+  SQLite:
+    /path/to/database.db
+    ./relative/path/database.sqlite
+    database.db
 Examples:
@@ -136,14 +150,22 @@ Examples:
 # Convert Bun models directory to JSON
 relspec convert --from bun --from-path ./models \
-  --to json --to-path schema.json`,
+  --to json --to-path schema.json
+# Convert SQLite database to JSON
+relspec convert --from sqlite --from-path database.db \
+  --to json --to-path schema.json
+# Convert SQLite to PostgreSQL SQL
+relspec convert --from sqlite --from-path database.db \
+  --to pgsql --to-path schema.sql`,
 	RunE: runConvert,
 }
 func init() {
-	convertCmd.Flags().StringVar(&convertSourceType, "from", "", "Source format (dbml, dctx, drawdb, graphql, json, yaml, gorm, bun, drizzle, prisma, typeorm, pgsql)")
+	convertCmd.Flags().StringVar(&convertSourceType, "from", "", "Source format (dbml, dctx, drawdb, graphql, json, yaml, gorm, bun, drizzle, prisma, typeorm, pgsql, sqlite)")
 	convertCmd.Flags().StringVar(&convertSourcePath, "from-path", "", "Source file path (for file-based formats)")
-	convertCmd.Flags().StringVar(&convertSourceConn, "from-conn", "", "Source connection string (for database formats)")
+	convertCmd.Flags().StringVar(&convertSourceConn, "from-conn", "", "Source connection string (for pgsql) or file path (for sqlite)")
 	convertCmd.Flags().StringVar(&convertTargetType, "to", "", "Target format (dbml, dctx, drawdb, graphql, json, yaml, gorm, bun, drizzle, prisma, typeorm, pgsql)")
 	convertCmd.Flags().StringVar(&convertTargetPath, "to-path", "", "Target output path (file or directory)")
@@ -291,6 +313,23 @@ func readDatabaseForConvert(dbType, filePath, connString string) (*models.Databa
 		}
 		reader = graphql.NewReader(&readers.ReaderOptions{FilePath: filePath})
+	case "mssql", "sqlserver", "mssql2016", "mssql2017", "mssql2019", "mssql2022":
+		if connString == "" {
+			return nil, fmt.Errorf("connection string is required for MSSQL format")
+		}
+		reader = mssql.NewReader(&readers.ReaderOptions{ConnectionString: connString})
+	case "sqlite", "sqlite3":
+		// SQLite can use either file path or connection string
+		dbPath := filePath
+		if dbPath == "" {
+			dbPath = connString
+		}
+		if dbPath == "" {
+			return nil, fmt.Errorf("file path or connection string is required for SQLite format")
+		}
+		reader = sqlite.NewReader(&readers.ReaderOptions{FilePath: dbPath})
 	default:
 		return nil, fmt.Errorf("unsupported source format: %s", dbType)
 	}
@@ -346,6 +385,12 @@ func writeDatabase(db *models.Database, dbType, outputPath, packageName, schemaF
 	case "pgsql", "postgres", "postgresql", "sql":
 		writer = wpgsql.NewWriter(writerOpts)
+	case "mssql", "sqlserver", "mssql2016", "mssql2017", "mssql2019", "mssql2022":
+		writer = wmssql.NewWriter(writerOpts)
+	case "sqlite", "sqlite3":
+		writer = wsqlite.NewWriter(writerOpts)
 	case "prisma":
 		writer = wprisma.NewWriter(writerOpts)


@@ -16,6 +16,7 @@ import (
 	"git.warky.dev/wdevs/relspecgo/pkg/readers/drawdb"
 	"git.warky.dev/wdevs/relspecgo/pkg/readers/json"
 	"git.warky.dev/wdevs/relspecgo/pkg/readers/pgsql"
+	"git.warky.dev/wdevs/relspecgo/pkg/readers/sqlite"
 	"git.warky.dev/wdevs/relspecgo/pkg/readers/yaml"
 )
@@ -254,6 +255,17 @@ func readDatabase(dbType, filePath, connString, label string) (*models.Database,
 		}
 		reader = pgsql.NewReader(&readers.ReaderOptions{ConnectionString: connString})
+	case "sqlite", "sqlite3":
+		// SQLite can use either file path or connection string
+		dbPath := filePath
+		if dbPath == "" {
+			dbPath = connString
+		}
+		if dbPath == "" {
+			return nil, fmt.Errorf("%s: file path or connection string is required for SQLite format", label)
+		}
+		reader = sqlite.NewReader(&readers.ReaderOptions{FilePath: dbPath})
 	default:
 		return nil, fmt.Errorf("%s: unsupported database format: %s", label, dbType)
 	}


@@ -19,6 +19,7 @@ import (
 	"git.warky.dev/wdevs/relspecgo/pkg/readers/json"
 	"git.warky.dev/wdevs/relspecgo/pkg/readers/pgsql"
 	"git.warky.dev/wdevs/relspecgo/pkg/readers/prisma"
+	"git.warky.dev/wdevs/relspecgo/pkg/readers/sqlite"
 	"git.warky.dev/wdevs/relspecgo/pkg/readers/typeorm"
 	"git.warky.dev/wdevs/relspecgo/pkg/readers/yaml"
 	"git.warky.dev/wdevs/relspecgo/pkg/ui"
@@ -33,6 +34,7 @@ import (
 	wjson "git.warky.dev/wdevs/relspecgo/pkg/writers/json"
 	wpgsql "git.warky.dev/wdevs/relspecgo/pkg/writers/pgsql"
 	wprisma "git.warky.dev/wdevs/relspecgo/pkg/writers/prisma"
+	wsqlite "git.warky.dev/wdevs/relspecgo/pkg/writers/sqlite"
 	wtypeorm "git.warky.dev/wdevs/relspecgo/pkg/writers/typeorm"
 	wyaml "git.warky.dev/wdevs/relspecgo/pkg/writers/yaml"
 )
@@ -73,6 +75,7 @@ Supports reading from and writing to all supported formats:
 - prisma: Prisma schema files (.prisma)
 - typeorm: TypeORM entity files (TypeScript)
 - pgsql: PostgreSQL database (live connection)
+- sqlite: SQLite database file
 Output formats:
 - dbml: DBML schema files
@@ -87,13 +90,19 @@
 - prisma: Prisma schema files (.prisma)
 - typeorm: TypeORM entity files (TypeScript)
 - pgsql: PostgreSQL SQL schema
+- sqlite: SQLite SQL schema (with automatic schema flattening)
-PostgreSQL Connection String Examples:
-  postgres://username:password@localhost:5432/database_name
-  postgres://username:password@localhost/database_name
-  postgresql://user:pass@host:5432/dbname?sslmode=disable
-  postgresql://user:pass@host/dbname?sslmode=require
-  host=localhost port=5432 user=username password=pass dbname=mydb sslmode=disable
+Connection String Examples:
+  PostgreSQL:
+    postgres://username:password@localhost:5432/database_name
+    postgres://username:password@localhost/database_name
+    postgresql://user:pass@host:5432/dbname?sslmode=disable
+    postgresql://user:pass@host/dbname?sslmode=require
+    host=localhost port=5432 user=username password=pass dbname=mydb sslmode=disable
+  SQLite:
+    /path/to/database.db
+    ./relative/path/database.sqlite
+    database.db
 Examples:
 # Edit a DBML schema file
@@ -107,15 +116,21 @@ Examples:
 relspec edit --from json --from-path db.json --to gorm --to-path models/
 # Edit GORM models in place
-relspec edit --from gorm --from-path ./models --to gorm --to-path ./models`,
+relspec edit --from gorm --from-path ./models --to gorm --to-path ./models
+# Edit SQLite database
+relspec edit --from sqlite --from-path database.db --to sqlite --to-path database.db
+# Convert SQLite to DBML
+relspec edit --from sqlite --from-path database.db --to dbml --to-path schema.dbml`,
 	RunE: runEdit,
 }
 func init() {
-	editCmd.Flags().StringVar(&editSourceType, "from", "", "Source format (dbml, dctx, drawdb, graphql, json, yaml, gorm, bun, drizzle, prisma, typeorm, pgsql)")
+	editCmd.Flags().StringVar(&editSourceType, "from", "", "Source format (dbml, dctx, drawdb, graphql, json, yaml, gorm, bun, drizzle, prisma, typeorm, pgsql, sqlite)")
 	editCmd.Flags().StringVar(&editSourcePath, "from-path", "", "Source file path (for file-based formats)")
-	editCmd.Flags().StringVar(&editSourceConn, "from-conn", "", "Source connection string (for database formats)")
+	editCmd.Flags().StringVar(&editSourceConn, "from-conn", "", "Source connection string (for pgsql) or file path (for sqlite)")
-	editCmd.Flags().StringVar(&editTargetType, "to", "", "Target format (dbml, dctx, drawdb, graphql, json, yaml, gorm, bun, drizzle, prisma, typeorm, pgsql)")
+	editCmd.Flags().StringVar(&editTargetType, "to", "", "Target format (dbml, dctx, drawdb, graphql, json, yaml, gorm, bun, drizzle, prisma, typeorm, pgsql, sqlite)")
 	editCmd.Flags().StringVar(&editTargetPath, "to-path", "", "Target file path (for file-based formats)")
 	editCmd.Flags().StringVar(&editSchemaFilter, "schema", "", "Filter to a specific schema by name")
@@ -281,6 +296,16 @@ func readDatabaseForEdit(dbType, filePath, connString, label string) (*models.Da
 			return nil, fmt.Errorf("%s: connection string is required for PostgreSQL format", label)
 		}
 		reader = pgsql.NewReader(&readers.ReaderOptions{ConnectionString: connString})
+	case "sqlite", "sqlite3":
+		// SQLite can use either file path or connection string
+		dbPath := filePath
+		if dbPath == "" {
+			dbPath = connString
+		}
+		if dbPath == "" {
+			return nil, fmt.Errorf("%s: file path or connection string is required for SQLite format", label)
+		}
+		reader = sqlite.NewReader(&readers.ReaderOptions{FilePath: dbPath})
 	default:
 		return nil, fmt.Errorf("%s: unsupported format: %s", label, dbType)
 	}
@@ -319,6 +344,8 @@ func writeDatabaseForEdit(dbType, filePath, connString string, db *models.Databa
 		writer = wprisma.NewWriter(&writers.WriterOptions{OutputPath: filePath})
 	case "typeorm":
 		writer = wtypeorm.NewWriter(&writers.WriterOptions{OutputPath: filePath})
+	case "sqlite", "sqlite3":
+		writer = wsqlite.NewWriter(&writers.WriterOptions{OutputPath: filePath})
 	case "pgsql":
 		writer = wpgsql.NewWriter(&writers.WriterOptions{OutputPath: filePath})
 	default:


@@ -20,6 +20,7 @@ import (
 	"git.warky.dev/wdevs/relspecgo/pkg/readers/json"
 	"git.warky.dev/wdevs/relspecgo/pkg/readers/pgsql"
 	"git.warky.dev/wdevs/relspecgo/pkg/readers/prisma"
+	"git.warky.dev/wdevs/relspecgo/pkg/readers/sqlite"
 	"git.warky.dev/wdevs/relspecgo/pkg/readers/typeorm"
 	"git.warky.dev/wdevs/relspecgo/pkg/readers/yaml"
 )
@@ -288,6 +289,17 @@ func readDatabaseForInspect(dbType, filePath, connString string) (*models.Databa
 		}
 		reader = pgsql.NewReader(&readers.ReaderOptions{ConnectionString: connString})
+	case "sqlite", "sqlite3":
+		// SQLite can use either file path or connection string
+		dbPath := filePath
+		if dbPath == "" {
+			dbPath = connString
+		}
+		if dbPath == "" {
+			return nil, fmt.Errorf("file path or connection string is required for SQLite format")
+		}
+		reader = sqlite.NewReader(&readers.ReaderOptions{FilePath: dbPath})
 	default:
 		return nil, fmt.Errorf("unsupported database type: %s", dbType)
 	}


@@ -21,6 +21,7 @@ import (
 	"git.warky.dev/wdevs/relspecgo/pkg/readers/json"
 	"git.warky.dev/wdevs/relspecgo/pkg/readers/pgsql"
 	"git.warky.dev/wdevs/relspecgo/pkg/readers/prisma"
+	"git.warky.dev/wdevs/relspecgo/pkg/readers/sqlite"
 	"git.warky.dev/wdevs/relspecgo/pkg/readers/typeorm"
 	"git.warky.dev/wdevs/relspecgo/pkg/readers/yaml"
 	"git.warky.dev/wdevs/relspecgo/pkg/writers"
@@ -34,6 +35,7 @@ import (
 	wjson "git.warky.dev/wdevs/relspecgo/pkg/writers/json"
 	wpgsql "git.warky.dev/wdevs/relspecgo/pkg/writers/pgsql"
 	wprisma "git.warky.dev/wdevs/relspecgo/pkg/writers/prisma"
+	wsqlite "git.warky.dev/wdevs/relspecgo/pkg/writers/sqlite"
 	wtypeorm "git.warky.dev/wdevs/relspecgo/pkg/writers/typeorm"
 	wyaml "git.warky.dev/wdevs/relspecgo/pkg/writers/yaml"
 )
@@ -314,6 +316,16 @@ func readDatabaseForMerge(dbType, filePath, connString, label string) (*models.D
 			return nil, fmt.Errorf("%s: connection string is required for PostgreSQL format", label)
 		}
 		reader = pgsql.NewReader(&readers.ReaderOptions{ConnectionString: connString})
+	case "sqlite", "sqlite3":
+		// SQLite can use either file path or connection string
+		dbPath := filePath
+		if dbPath == "" {
+			dbPath = connString
+		}
+		if dbPath == "" {
+			return nil, fmt.Errorf("%s: file path or connection string is required for SQLite format", label)
+		}
+		reader = sqlite.NewReader(&readers.ReaderOptions{FilePath: dbPath})
 	default:
 		return nil, fmt.Errorf("%s: unsupported format '%s'", label, dbType)
 	}
@@ -385,6 +397,8 @@ func writeDatabaseForMerge(dbType, filePath, connString string, db *models.Datab
 			return fmt.Errorf("%s: file path is required for TypeORM format", label)
 		}
 		writer = wtypeorm.NewWriter(&writers.WriterOptions{OutputPath: filePath, FlattenSchema: flattenSchema})
+	case "sqlite", "sqlite3":
+		writer = wsqlite.NewWriter(&writers.WriterOptions{OutputPath: filePath, FlattenSchema: flattenSchema})
 	case "pgsql":
 		writerOpts := &writers.WriterOptions{OutputPath: filePath, FlattenSchema: flattenSchema}
 		if connString != "" {


@@ -1,9 +1,49 @@
 package main
 import (
+	"fmt"
+	"runtime/debug"
+	"time"
 	"github.com/spf13/cobra"
 )
+var (
+	// Version information, set via ldflags during build
+	version   = "dev"
+	buildDate = "unknown"
+)
+func init() {
+	// If version wasn't set via ldflags, try to get it from build info
+	if version == "dev" {
+		if info, ok := debug.ReadBuildInfo(); ok {
+			// Try to get version from VCS
+			var vcsRevision, vcsTime string
+			for _, setting := range info.Settings {
+				switch setting.Key {
+				case "vcs.revision":
+					if len(setting.Value) >= 7 {
+						vcsRevision = setting.Value[:7]
+					}
+				case "vcs.time":
+					vcsTime = setting.Value
+				}
+			}
+			if vcsRevision != "" {
+				version = vcsRevision
+			}
+			if vcsTime != "" {
+				if t, err := time.Parse(time.RFC3339, vcsTime); err == nil {
+					buildDate = t.UTC().Format("2006-01-02 15:04:05 UTC")
+				}
+			}
+		}
+	}
+}
 var rootCmd = &cobra.Command{
 	Use:   "relspec",
 	Short: "RelSpec - Database schema conversion and analysis tool",
@@ -13,6 +53,9 @@ bidirectional conversion between various database schema formats.
 It reads database schemas from multiple sources (live databases, DBML,
 DCTX, DrawDB, etc.) and writes them to various formats (GORM, Bun,
 JSON, YAML, SQL, etc.).`,
+	PersistentPreRun: func(cmd *cobra.Command, args []string) {
+		fmt.Printf("RelSpec %s (built: %s)\n\n", version, buildDate)
+	},
 }
@@ -24,4 +67,5 @@ func init() {
 	rootCmd.AddCommand(editCmd)
 	rootCmd.AddCommand(mergeCmd)
 	rootCmd.AddCommand(splitCmd)
+	rootCmd.AddCommand(versionCmd)
 }

cmd/relspec/version.go (new file)

@@ -0,0 +1,16 @@
package main

import (
	"fmt"

	"github.com/spf13/cobra"
)

var versionCmd = &cobra.Command{
	Use:   "version",
	Short: "Print version information",
	Run: func(cmd *cobra.Command, args []string) {
		fmt.Printf("RelSpec %s\n", version)
		fmt.Printf("Built: %s\n", buildDate)
	},
}
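
Combined with the root command's PersistentPreRun banner above, invoking the new subcommand would print something like the following; the version shown here assumes the VCS fallback (a 7-character commit hash), and the values are illustrative:

```bash
$ relspec version
RelSpec 7743675 (built: 2026-02-08 19:35:27 UTC)

RelSpec 7743675
Built: 2026-02-08 19:35:27 UTC
```
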

doc.go (new file)

@@ -0,0 +1,108 @@
// Package relspecgo provides bidirectional conversion between database schema formats.
//
// RelSpec is a comprehensive database schema tool that reads, writes, and transforms
// database schemas across multiple formats including live databases, ORM models,
// schema definition languages, and data interchange formats.
//
// # Features
//
// - Read from 15+ formats: PostgreSQL, SQLite, DBML, GORM, Prisma, Drizzle, and more
// - Write to 15+ formats: SQL, ORM models, schema definitions, JSON/YAML
// - Interactive TUI editor for visual schema management
// - Schema diff and merge capabilities
// - Format-agnostic intermediate representation
//
// # Architecture
//
// RelSpec uses a hub-and-spoke architecture with models.Database as the central type:
//
// Input Format → Reader → models.Database → Writer → Output Format
//
// This allows any supported input format to be converted to any supported output format
// without requiring N² conversion implementations.
//
// # Key Packages
//
// - pkg/models: Core data structures (Database, Schema, Table, Column, etc.)
// - pkg/readers: Input format readers (dbml, pgsql, gorm, etc.)
// - pkg/writers: Output format writers (dbml, pgsql, gorm, etc.)
// - pkg/ui: Interactive terminal UI for schema editing
// - pkg/diff: Schema comparison and difference detection
// - pkg/merge: Schema merging utilities
// - pkg/transform: Validation and normalization
//
// # Installation
//
// go install git.warky.dev/wdevs/relspecgo/cmd/relspec@latest
//
// # Usage
//
// Command-line conversion:
//
// relspec convert --from dbml --from-path schema.dbml \
// --to gorm --to-path ./models
//
// Interactive editor:
//
// relspec edit --from pgsql --from-conn "postgres://..." \
// --to dbml --to-path schema.dbml
//
// Schema comparison:
//
// relspec diff --source-type pgsql --source-conn "postgres://..." \
// --target-type dbml --target-path schema.dbml
//
// Merge schemas:
//
// relspec merge --target schema1.dbml --sources schema2.dbml,schema3.dbml
//
// # Supported Formats
//
// Input/Output Formats:
// - dbml: Database Markup Language
// - dctx: DCTX schema files
// - drawdb: DrawDB JSON format
// - graphql: GraphQL schema definition
// - json: JSON schema representation
// - yaml: YAML schema representation
// - gorm: Go GORM models
// - bun: Go Bun models
// - drizzle: TypeScript Drizzle ORM
// - prisma: Prisma schema language
// - typeorm: TypeScript TypeORM entities
// - pgsql: PostgreSQL (live DB or SQL)
// - sqlite: SQLite (database file or SQL)
//
// # Library Usage
//
// RelSpec can be used as a Go library:
//
// import (
// "git.warky.dev/wdevs/relspecgo/pkg/models"
// "git.warky.dev/wdevs/relspecgo/pkg/readers/dbml"
// "git.warky.dev/wdevs/relspecgo/pkg/writers/gorm"
// )
//
// // Read DBML
// reader := dbml.NewReader(&readers.ReaderOptions{
// FilePath: "schema.dbml",
// })
// db, err := reader.ReadDatabase()
//
// // Write GORM models
// writer := gorm.NewWriter(&writers.WriterOptions{
// OutputPath: "./models",
// PackageName: "models",
// })
// err = writer.WriteDatabase(db)
//
// # Documentation
//
// Full documentation available at: https://git.warky.dev/wdevs/relspecgo
//
// API documentation: go doc git.warky.dev/wdevs/relspecgo/...
//
// # License
//
// See LICENSE file in the repository root.
package relspecgo


@@ -1,6 +1,21 @@
 version: '3.8'
 services:
+  mssql:
+    image: mcr.microsoft.com/mssql/server:2022-latest
+    environment:
+      - ACCEPT_EULA=Y
+      - SA_PASSWORD=StrongPassword123!
+      - MSSQL_PID=Express
+    ports:
+      - "1433:1433"
+    volumes:
+      - ./test_data/mssql/test_schema.sql:/test_schema.sql
+    healthcheck:
+      test: ["CMD", "/opt/mssql-tools/bin/sqlcmd", "-S", "localhost", "-U", "sa", "-P", "StrongPassword123!", "-Q", "SELECT 1"]
+      interval: 5s
+      timeout: 3s
+      retries: 10
   postgres:
     image: postgres:16-alpine
     container_name: relspec-test-postgres

go.mod

@@ -6,33 +6,46 @@ require (
 	github.com/gdamore/tcell/v2 v2.8.1
 	github.com/google/uuid v1.6.0
 	github.com/jackc/pgx/v5 v5.7.6
+	github.com/microsoft/go-mssqldb v1.9.6
 	github.com/rivo/tview v0.42.0
 	github.com/spf13/cobra v1.10.2
 	github.com/stretchr/testify v1.11.1
 	github.com/uptrace/bun v1.2.16
-	golang.org/x/text v0.28.0
+	golang.org/x/text v0.31.0
 	gopkg.in/yaml.v3 v3.0.1
+	modernc.org/sqlite v1.44.3
 )
 require (
 	github.com/davecgh/go-spew v1.1.1 // indirect
+	github.com/dustin/go-humanize v1.0.1 // indirect
 	github.com/gdamore/encoding v1.0.1 // indirect
+	github.com/golang-sql/civil v0.0.0-20220223132316-b832511892a9 // indirect
+	github.com/golang-sql/sqlexp v0.1.0 // indirect
 	github.com/inconshreveable/mousetrap v1.1.0 // indirect
 	github.com/jackc/pgpassfile v1.0.0 // indirect
 	github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761 // indirect
 	github.com/jinzhu/inflection v1.0.0 // indirect
 	github.com/kr/pretty v0.3.1 // indirect
 	github.com/lucasb-eyer/go-colorful v1.2.0 // indirect
+	github.com/mattn/go-isatty v0.0.20 // indirect
 	github.com/mattn/go-runewidth v0.0.16 // indirect
+	github.com/ncruces/go-strftime v1.0.0 // indirect
 	github.com/pmezard/go-difflib v1.0.0 // indirect
 	github.com/puzpuzpuz/xsync/v3 v3.5.1 // indirect
+	github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec // indirect
 	github.com/rivo/uniseg v0.4.7 // indirect
 	github.com/rogpeppe/go-internal v1.14.1 // indirect
+	github.com/shopspring/decimal v1.4.0 // indirect
 	github.com/spf13/pflag v1.0.10 // indirect
 	github.com/tmthrgd/go-hex v0.0.0-20190904060850-447a3041c3bc // indirect
 	github.com/vmihailenco/msgpack/v5 v5.4.1 // indirect
 	github.com/vmihailenco/tagparser/v2 v2.0.0 // indirect
-	golang.org/x/crypto v0.41.0 // indirect
+	golang.org/x/crypto v0.45.0 // indirect
+	golang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546 // indirect
 	golang.org/x/sys v0.38.0 // indirect
-	golang.org/x/term v0.34.0 // indirect
+	golang.org/x/term v0.37.0 // indirect
+	modernc.org/libc v1.67.6 // indirect
+	modernc.org/mathutil v1.7.1 // indirect
+	modernc.org/memory v1.11.0 // indirect
 )

go.sum

@@ -1,15 +1,39 @@
+github.com/Azure/azure-sdk-for-go/sdk/azcore v1.18.0 h1:Gt0j3wceWMwPmiazCa8MzMA0MfhmPIz0Qp0FJ6qcM0U=
+github.com/Azure/azure-sdk-for-go/sdk/azcore v1.18.0/go.mod h1:Ot/6aikWnKWi4l9QB7qVSwa8iMphQNqkWALMoNT3rzM=
+github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.10.1 h1:B+blDbyVIG3WaikNxPnhPiJ1MThR03b3vKGtER95TP4=
+github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.10.1/go.mod h1:JdM5psgjfBf5fo2uWOZhflPWyDBZ/O/CNAH9CtsuZE4=
+github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.1 h1:FPKJS1T+clwv+OLGt13a8UjqeRuh0O4SJ3lUriThc+4=
+github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.1/go.mod h1:j2chePtV91HrC22tGoRX3sGY42uF13WzmmV80/OdVAA=
+github.com/Azure/azure-sdk-for-go/sdk/security/keyvault/azkeys v1.3.1 h1:Wgf5rZba3YZqeTNJPtvqZoBu1sBN/L4sry+u2U3Y75w=
+github.com/Azure/azure-sdk-for-go/sdk/security/keyvault/azkeys v1.3.1/go.mod h1:xxCBG/f/4Vbmh2XQJBsOmNdxWUY5j/s27jujKPbQf14=
+github.com/Azure/azure-sdk-for-go/sdk/security/keyvault/internal v1.1.1 h1:bFWuoEKg+gImo7pvkiQEFAc8ocibADgXeiLAxWhWmkI=
+github.com/Azure/azure-sdk-for-go/sdk/security/keyvault/internal v1.1.1/go.mod h1:Vih/3yc6yac2JzU4hzpaDupBJP0Flaia9rXXrU8xyww=
+github.com/AzureAD/microsoft-authentication-library-for-go v1.4.2 h1:oygO0locgZJe7PpYPXT5A29ZkwJaPqcva7BVeemZOZs=
+github.com/AzureAD/microsoft-authentication-library-for-go v1.4.2/go.mod h1:wP83P5OoQ5p6ip3ScPr0BAq0BvuPAvacpEuSzyouqAI=
 github.com/cpuguy83/go-md2man/v2 v2.0.6/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g=
 github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
 github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
 github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
 github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
+github.com/dustin/go-humanize v1.0.1 h1:GzkhY7T5VNhEkwH0PVJgjz+fX1rhBrR7pRT3mDkpeCY=
+github.com/dustin/go-humanize v1.0.1/go.mod h1:Mu1zIs6XwVuF/gI1OepvI0qD18qycQx+mFykh5fBlto=
 github.com/gdamore/encoding v1.0.1 h1:YzKZckdBL6jVt2Gc+5p82qhrGiqMdG/eNs6Wy0u3Uhw=
 github.com/gdamore/encoding v1.0.1/go.mod h1:0Z0cMFinngz9kS1QfMjCP8TY7em3bZYeeklsSDPivEo=
 github.com/gdamore/tcell/v2 v2.8.1 h1:KPNxyqclpWpWQlPLx6Xui1pMk8S+7+R37h3g07997NU=
 github.com/gdamore/tcell/v2 v2.8.1/go.mod h1:bj8ori1BG3OYMjmb3IklZVWfZUJ1UBQt9JXrOCOhGWw=
+github.com/golang-jwt/jwt/v5 v5.2.2 h1:Rl4B7itRWVtYIHFrSNd7vhTiz9UpLdi6gZhZ3wEeDy8=
+github.com/golang-jwt/jwt/v5 v5.2.2/go.mod h1:pqrtFR0X4osieyHYxtmOUWsAWrfe1Q5UVIyoH402zdk=
+github.com/golang-sql/civil v0.0.0-20220223132316-b832511892a9 h1:au07oEsX2xN0ktxqI+Sida1w446QrXBRJ0nee3SNZlA=
+github.com/golang-sql/civil v0.0.0-20220223132316-b832511892a9/go.mod h1:8vg3r2VgvsThLBIFL93Qb5yWzgyZWhEmBwUJWevAkK0=
+github.com/golang-sql/sqlexp v0.1.0 h1:ZCD6MBpcuOVfGVqsEmY5/4FtYiKz6tSyUv9LPEDei6A=
+github.com/golang-sql/sqlexp v0.1.0/go.mod h1:J4ad9Vo8ZCWQ2GMrC4UCQy1JpCbwU9m3EOqtpKwwwHI=
 github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
+github.com/google/pprof v0.0.0-20250317173921-a4b03ec1a45e h1:ijClszYn+mADRFY17kjQEVQ1XRhq2/JR1M3sGqeJoxs=
+github.com/google/pprof v0.0.0-20250317173921-a4b03ec1a45e/go.mod h1:boTsfXsheKC2y+lKOCMpSfarhxDeIzfZG1jqGcPl3cA=
 github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
 github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
+github.com/hashicorp/golang-lru/v2 v2.0.7 h1:a+bsQ5rvGLjzHuww6tVxozPZFVghXaHOwFs4luLUK2k=
+github.com/hashicorp/golang-lru/v2 v2.0.7/go.mod h1:QeFd9opnmA6QUJc5vARoKUSoFhyfM2/ZepoAG6RGpeM=
 github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8=
 github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=
 github.com/jackc/pgpassfile v1.0.0 h1:/6Hmqy13Ss2zCq62VdNG8tM1wchn8zjSGOBJ6icpsIM=
@@ -26,15 +50,27 @@ github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
 github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
 github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
 github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
+github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc=
+github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=
 github.com/lucasb-eyer/go-colorful v1.2.0 h1:1nnpGOrhyZZuNyfu1QjKiUICQ74+3FNCN69Aj6K7nkY=
 github.com/lucasb-eyer/go-colorful v1.2.0/go.mod h1:R4dSotOR9KMtayYi1e77YzuveK+i7ruzyGqttikkLy0=
+github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
+github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
 github.com/mattn/go-runewidth v0.0.16 h1:E5ScNMtiwvlvB5paMFdw9p4kSQzbXFikJ5SQO6TULQc=
 github.com/mattn/go-runewidth v0.0.16/go.mod h1:Jdepj2loyihRzMpdS35Xk/zdY8IAYHsh153qUoGf23w=
+github.com/microsoft/go-mssqldb v1.9.6 h1:1MNQg5UiSsokiPz3++K2KPx4moKrwIqly1wv+RyCKTw=
+github.com/microsoft/go-mssqldb v1.9.6/go.mod h1:yYMPDufyoF2vVuVCUGtZARr06DKFIhMrluTcgWlXpr4=
+github.com/ncruces/go-strftime v1.0.0 h1:HMFp8mLCTPp341M/ZnA4qaf7ZlsbTc+miZjCLOFAw7w=
+github.com/ncruces/go-strftime v1.0.0/go.mod h1:Fwc5htZGVVkseilnfgOVb9mKy6w1naJmn9CehxcKcls=
+github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c h1:+mdjkGKdHQG3305AYmdv1U2eRNDiU2ErMBj1gwrq8eQ=
+github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c/go.mod h1:7rwL4CYBLnjLxUqIJNnCWiEdr3bn6IUYi15bNlnbCCU=
 github.com/pkg/diff v0.0.0-20210226163009-20ebb0f2a09e/go.mod h1:pJLUxLENpZxwdsKMEsNbx1VGcRFpLqf3715MtcvvzbA=
 github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
 github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
 github.com/puzpuzpuz/xsync/v3 v3.5.1 h1:GJYJZwO6IdxN/IKbneznS6yPkVC+c3zyY/j19c++5Fg=
 github.com/puzpuzpuz/xsync/v3 v3.5.1/go.mod h1:VjzYrABPabuM4KyBh1Ftq6u8nhwY5tBPKP9jpmh0nnA=
+github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec h1:W09IVJc94icq4NjY3clb7Lk8O1qJ8BdBEF8z0ibU0rE=
+github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec/go.mod h1:qqbHyh8v60DhA7CoWK5oRCqLrMHRGoxYCSS9EjAz6Eo=
 github.com/rivo/tview v0.42.0 h1:b/ftp+RxtDsHSaynXTbJb+/n/BxDEi+W3UfF5jILK6c=
 github.com/rivo/tview v0.42.0/go.mod h1:cSfIYfhpSGCjp3r/ECJb+GKS7cGJnqV8vfjQPwoXyfY=
 github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc=
@@ -45,6 +81,8 @@ github.com/rogpeppe/go-internal v1.9.0/go.mod h1:WtVeX8xhTBvf0smdhujwtBcq4Qrzq/f
 github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ=
 github.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc=
 github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
+github.com/shopspring/decimal v1.4.0 h1:bxl37RwXBklmTi0C79JfXCEBD1cqqHt0bbgBAGFp81k=
+github.com/shopspring/decimal v1.4.0/go.mod h1:gawqmDU56v4yIKSwfBSFip1HdCCXN8/+DMd9qYNcwME=
 github.com/spf13/cobra v1.10.2 h1:DMTTonx5m65Ic0GOoRY2c16WCbHxOOw6xxezuLaBpcU=
 github.com/spf13/cobra v1.10.2/go.mod h1:7C1pvHqHw5A4vrJfjNwvOdzYu0Gml16OCs2GRiTUUS4=
 github.com/spf13/pflag v1.0.9/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
@@ -70,13 +108,17 @@ golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5y
 golang.org/x/crypto v0.13.0/go.mod h1:y6Z2r+Rw4iayiXXAIxJIDAJ1zMW4yaTpebo8fPOliYc=
 golang.org/x/crypto v0.19.0/go.mod h1:Iy9bg/ha4yyC70EfRS8jz+B6ybOBKMaSxLj6P6oBDfU=
 golang.org/x/crypto v0.23.0/go.mod h1:CKFgDieR+mRhux2Lsu27y0fO304Db0wZe70UKqHu0v8=
-golang.org/x/crypto v0.41.0 h1:WKYxWedPGCTVVl5+WHSSrOBT0O8lx32+zxmHxijgXp4=
-golang.org/x/crypto v0.41.0/go.mod h1:pO5AFd7FA68rFak7rOAGVuygIISepHftHnr8dr6+sUc=
+golang.org/x/crypto v0.45.0 h1:jMBrvKuj23MTlT0bQEOBcAE0mjg8mK9RXFhRH6nyF3Q=
+golang.org/x/crypto v0.45.0/go.mod h1:XTGrrkGJve7CYK7J8PEww4aY7gM3qMCElcJQ8n8JdX4=
+golang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546 h1:mgKeJMpvi0yx/sU5GsxQ7p6s2wtOnGAHZWCHUM4KGzY=
+golang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546/go.mod h1:j/pmGrbnkbPtQfxEe5D0VQhZC6qKbfKifgD0oM7sR70=
 golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
 golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
 golang.org/x/mod v0.12.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
 golang.org/x/mod v0.15.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c=
 golang.org/x/mod v0.17.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c=
+golang.org/x/mod v0.29.0 h1:HV8lRxZC4l2cr3Zq1LvtOsi/ThTgWnUk/y64QSs8GwA=
+golang.org/x/mod v0.29.0/go.mod h1:NyhrlYXJ2H4eJiRy/WDBO6HMqZQ6q9nk4JzS3NuCK+w=
 golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
 golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
 golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
@@ -85,6 +127,8 @@ golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg=
 golang.org/x/net v0.15.0/go.mod h1:idbUs1IY1+zTqbi8yxTbhexhEEk5ur9LInksu6HrEpk=
 golang.org/x/net v0.21.0/go.mod h1:bIjVDfnllIU7BJ2DNgfnXvpSvtn8VRwhlsaeUTyUS44=
 golang.org/x/net v0.25.0/go.mod h1:JkAGAh7GEvH74S6FOH42FLoXpXbE/aqXSrIQjXgsiwM=
+golang.org/x/net v0.47.0 h1:Mx+4dIFzqraBXUugkia1OOvlD6LemFo1ALMHjrXDOhY=
+golang.org/x/net v0.47.0/go.mod h1:/jNxtkgq5yWUGYkaZGqo27cfGZ1c5Nen03aYrrKpVRU=
 golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
 golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
 golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
@@ -92,14 +136,15 @@ golang.org/x/sync v0.3.0/go.mod h1:FU7BRWz2tNW+3quACPkgCx/L+uEAv1htQ0V83Z9Rj+Y=
 golang.org/x/sync v0.6.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
 golang.org/x/sync v0.7.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
 golang.org/x/sync v0.10.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
-golang.org/x/sync v0.16.0 h1:ycBJEhp9p4vXvUZNszeOq0kGTPghopOL8q0fq3vstxw=
-golang.org/x/sync v0.16.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
+golang.org/x/sync v0.18.0 h1:kr88TuHDroi+UVf+0hZnirlk8o8T+4MrK6mr60WkH/I=
+golang.org/x/sync v0.18.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
 golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
 golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.12.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.17.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
@@ -116,8 +161,8 @@ golang.org/x/term v0.12.0/go.mod h1:owVbMEjm3cBLCHdkQu9b1opXd4ETQWc3BhuQGKgXgvU=
 golang.org/x/term v0.17.0/go.mod h1:lLRBjIVuehSbZlaOtGMbcMncT+aqLLLmKrsjNrUguwk=
 golang.org/x/term v0.20.0/go.mod h1:8UkIAJTvZgivsXaD6/pH6U9ecQzZ45awqEOzuCvwpFY=
 golang.org/x/term v0.28.0/go.mod h1:Sw/lC2IAUZ92udQNf3WodGtn4k/XoLyZoh8v/8uiwek=
-golang.org/x/term v0.34.0 h1:O/2T7POpk0ZZ7MAzMeWFSg6S5IpWd/RXDlM9hgM3DR4=
-golang.org/x/term v0.34.0/go.mod h1:5jC53AEywhIVebHgPVeg0mj8OD3VO9OzclacVrqpaAw=
+golang.org/x/term v0.37.0 h1:8EGAD0qCmHYZg6J17DvsMy9/wJ7/D/4pV/wfnld5lTU=
+golang.org/x/term v0.37.0/go.mod h1:5pB4lxRNYYVZuTLmy8oR2BH8dflOR+IbTYFD8fi3254=
 golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
 golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
 golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
@@ -127,14 +172,16 @@ golang.org/x/text v0.13.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE=
 golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
 golang.org/x/text v0.15.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
 golang.org/x/text v0.21.0/go.mod h1:4IBbMaMmOPCJ8SecivzSH54+73PCFmPWxNTLm+vZkEQ=
-golang.org/x/text v0.28.0 h1:rhazDwis8INMIwQ4tpjLDzUhx6RlXqZNPEM0huQojng=
-golang.org/x/text v0.28.0/go.mod h1:U8nCwOR8jO/marOQ0QbDiOngZVEBB7MAiitBuMjXiNU=
+golang.org/x/text v0.31.0 h1:aC8ghyu4JhP8VojJ2lEHBnochRno1sgL6nEi9WGFGMM=
+golang.org/x/text v0.31.0/go.mod h1:tKRAlv61yKIjGGHX/4tP1LTbc13YSec1pxVEWXzfoeM=
 golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
 golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
 golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
 golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU=
 golang.org/x/tools v0.13.0/go.mod h1:HvlwmtVNQAhOuCjW7xxvovg8wbNq7LwfXh/k7wXUl58=
 golang.org/x/tools v0.21.1-0.20240508182429-e35e4ccd0d2d/go.mod h1:aiJjzUbINMkxbQROHiO6hDPo2LHcIPhhQsa9DLh0yGk=
+golang.org/x/tools v0.38.0 h1:Hx2Xv8hISq8Lm16jvBZ2VQf+RLmbd7wVUsALibYI/IQ=
+golang.org/x/tools v0.38.0/go.mod h1:yEsQ/d/YK8cjh0L6rZlY8tgtlKiBNTL14pGDJPJpYQs=
 golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
 gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
 gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
@@ -142,3 +189,31 @@ gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EV
 gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
 gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
 gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
+modernc.org/cc/v4 v4.27.1 h1:9W30zRlYrefrDV2JE2O8VDtJ1yPGownxciz5rrbQZis=
+modernc.org/cc/v4 v4.27.1/go.mod h1:uVtb5OGqUKpoLWhqwNQo/8LwvoiEBLvZXIQ/SmO6mL0=
+modernc.org/ccgo/v4 v4.30.1 h1:4r4U1J6Fhj98NKfSjnPUN7Ze2c6MnAdL0hWw6+LrJpc=
+modernc.org/ccgo/v4 v4.30.1/go.mod h1:bIOeI1JL54Utlxn+LwrFyjCx2n2RDiYEaJVSrgdrRfM=
+modernc.org/fileutil v1.3.40 h1:ZGMswMNc9JOCrcrakF1HrvmergNLAmxOPjizirpfqBA=
+modernc.org/fileutil v1.3.40/go.mod h1:HxmghZSZVAz/LXcMNwZPA/DRrQZEVP9VX0V4LQGQFOc=
+modernc.org/gc/v2 v2.6.5 h1:nyqdV8q46KvTpZlsw66kWqwXRHdjIlJOhG6kxiV/9xI=
+modernc.org/gc/v2 v2.6.5/go.mod h1:YgIahr1ypgfe7chRuJi2gD7DBQiKSLMPgBQe9oIiito=
+modernc.org/gc/v3 v3.1.1 h1:k8T3gkXWY9sEiytKhcgyiZ2L0DTyCQ/nvX+LoCljoRE=
+modernc.org/gc/v3 v3.1.1/go.mod h1:HFK/6AGESC7Ex+EZJhJ2Gni6cTaYpSMmU/cT9RmlfYY=
+modernc.org/goabi0 v0.2.0 h1:HvEowk7LxcPd0eq6mVOAEMai46V+i7Jrj13t4AzuNks=
+modernc.org/goabi0 v0.2.0/go.mod h1:CEFRnnJhKvWT1c1JTI3Avm+tgOWbkOu5oPA8eH8LnMI=
+modernc.org/libc v1.67.6 h1:eVOQvpModVLKOdT+LvBPjdQqfrZq+pC39BygcT+E7OI=
+modernc.org/libc v1.67.6/go.mod h1:JAhxUVlolfYDErnwiqaLvUqc8nfb2r6S6slAgZOnaiE=
+modernc.org/mathutil v1.7.1 h1:GCZVGXdaN8gTqB1Mf/usp1Y/hSqgI2vAGGP4jZMCxOU=
+modernc.org/mathutil v1.7.1/go.mod h1:4p5IwJITfppl0G4sUEDtCr4DthTaT47/N3aT6MhfgJg=
+modernc.org/memory v1.11.0 h1:o4QC8aMQzmcwCK3t3Ux/ZHmwFPzE6hf2Y5LbkRs+hbI=
+modernc.org/memory v1.11.0/go.mod h1:/JP4VbVC+K5sU2wZi9bHoq2MAkCnrt2r98UGeSK7Mjw=
+modernc.org/opt v0.1.4 h1:2kNGMRiUjrp4LcaPuLY2PzUfqM/w9N23quVwhKt5Qm8=
+modernc.org/opt v0.1.4/go.mod h1:03fq9lsNfvkYSfxrfUhZCWPk1lm4cq4N+Bh//bEtgns=
+modernc.org/sortutil v1.2.1 h1:+xyoGf15mM3NMlPDnFqrteY07klSFxLElE2PVuWIJ7w=
+modernc.org/sortutil v1.2.1/go.mod h1:7ZI3a3REbai7gzCLcotuw9AC4VZVpYMjDzETGsSMqJE=
+modernc.org/sqlite v1.44.3 h1:+39JvV/HWMcYslAwRxHb8067w+2zowvFOUrOWIy9PjY=
+modernc.org/sqlite v1.44.3/go.mod h1:CzbrU2lSB1DKUusvwGz7rqEKIq+NUd8GWuBBZDs9/nA=
+modernc.org/strutil v1.2.1 h1:UneZBkQA+DX2Rp35KcM69cSsNES9ly8mQWD71HKlOA0=
+modernc.org/strutil v1.2.1/go.mod h1:EHkiggD70koQxjVdSBM3JKM7k6L0FbGE5eymy9i3B9A=
+modernc.org/token v1.1.0 h1:Xl7Ap9dKaEs5kLoOQeQmPWevfnk/DM5qcLcYlA8ys6Y=
+modernc.org/token v1.1.0/go.mod h1:UGzOrNV1mAFSEB63lOFHIpNRUVMvYTc6yu1SMY/XTDM=

pkg/commontypes/doc.go

@@ -0,0 +1,28 @@
// Package commontypes provides shared type definitions used across multiple packages.
//
// # Overview
//
// The commontypes package contains common data structures, constants, and type
// definitions that are shared between different parts of RelSpec but don't belong
// to the core models package.
//
// # Purpose
//
// This package helps avoid circular dependencies by providing a common location
// for types that are used by multiple packages without creating import cycles.
//
// # Contents
//
// Common types may include:
// - Shared enums and constants
// - Utility type aliases
// - Common error types
// - Shared configuration structures
//
// # Usage
//
// import "git.warky.dev/wdevs/relspecgo/pkg/commontypes"
//
// // Use common types
// var formatType commontypes.FormatType
package commontypes

pkg/diff/doc.go

@@ -0,0 +1,43 @@
// Package diff provides utilities for comparing database schemas and identifying differences.
//
// # Overview
//
// The diff package compares two database models at various granularity levels (database,
// schema, table, column) and produces detailed reports of differences including:
// - Missing items (present in source but not in target)
// - Extra items (present in target but not in source)
// - Modified items (present in both but with different properties)
//
// # Usage
//
// Compare two databases and format the output:
//
// result := diff.CompareDatabases(sourceDB, targetDB)
// err := diff.FormatDiff(result, diff.OutputFormatText, os.Stdout)
//
// # Output Formats
//
// The package supports multiple output formats:
// - OutputFormatText: Human-readable text format
// - OutputFormatJSON: Structured JSON output
// - OutputFormatYAML: Structured YAML output
//
// # Comparison Scope
//
// The comparison covers:
// - Schemas: Name, description, and contents
// - Tables: Name, description, and all sub-elements
// - Columns: Type, nullability, defaults, constraints
// - Indexes: Columns, uniqueness, type
// - Constraints: Type, columns, references
// - Relationships: Type, from/to tables and columns
// - Views: Definition and columns
// - Sequences: Start value, increment, min/max values
//
// # Use Cases
//
// - Schema migration planning
// - Database synchronization verification
// - Change tracking and auditing
// - CI/CD pipeline validation
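//
// For example, to emit machine-readable JSON for a CI pipeline instead of the
// text format (a sketch recombining the functions shown above):
//
//	result := diff.CompareDatabases(sourceDB, targetDB)
//	err := diff.FormatDiff(result, diff.OutputFormatJSON, os.Stdout)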
package diff

pkg/inspector/doc.go

@@ -0,0 +1,40 @@
// Package inspector provides database introspection capabilities for live databases.
//
// # Overview
//
// The inspector package contains utilities for connecting to live databases and
// extracting their schema information through system catalog queries and metadata
// inspection.
//
// # Features
//
// - Database connection management
// - Schema metadata extraction
// - Table structure analysis
// - Constraint and index discovery
// - Foreign key relationship mapping
//
// # Supported Databases
//
// - PostgreSQL (via pgx driver)
// - SQLite (via modernc.org/sqlite driver)
//
// # Usage
//
// This package is used internally by database readers (pgsql, sqlite) to perform
// live schema introspection:
//
// inspector := inspector.NewPostgreSQLInspector(connString)
// schemas, err := inspector.GetSchemas()
// tables, err := inspector.GetTables(schemaName)
//
// # Architecture
//
// Each database type has its own inspector implementation that understands the
// specific system catalogs and metadata structures of that database system.
//
// # Security
//
// Inspectors use read-only operations and never modify database structure.
// Connection credentials should be handled securely.
package inspector

pkg/mssql/README.md

@@ -0,0 +1,99 @@
# MSSQL Package
Provides utilities for working with Microsoft SQL Server data types and conversions.
## Components
### Type Mapping
Provides bidirectional conversion between canonical types and MSSQL types:
- **CanonicalToMSSQL**: Convert abstract types to MSSQL-specific types
- **MSSQLToCanonical**: Convert MSSQL types to abstract representation
## Type Conversion Tables
### Canonical → MSSQL
| Canonical | MSSQL | Notes |
|-----------|-------|-------|
| int | INT | 32-bit signed integer |
| int64 | BIGINT | 64-bit signed integer |
| int32 | INT | 32-bit signed integer |
| int16 | SMALLINT | 16-bit signed integer |
| int8 | TINYINT | 8-bit integer (TINYINT itself is unsigned, 0–255) |
| bool | BIT | 0 (false) or 1 (true) |
| float32 | REAL | Single precision floating point |
| float64 | FLOAT | Double precision floating point |
| decimal | NUMERIC | Fixed-point decimal number |
| string | NVARCHAR(255) | Unicode variable-length string |
| text | NVARCHAR(MAX) | Unicode large text |
| timestamp | DATETIME2 | Date and time without timezone |
| timestamptz | DATETIMEOFFSET | Date and time with timezone offset |
| uuid | UNIQUEIDENTIFIER | GUID/UUID type |
| bytea | VARBINARY(MAX) | Variable-length binary data |
| date | DATE | Date only |
| time | TIME | Time only |
| json | NVARCHAR(MAX) | Stored as text (native JSON functions since SQL Server 2016) |
| jsonb | NVARCHAR(MAX) | Stored as text (native JSON functions since SQL Server 2016) |
### MSSQL → Canonical
| MSSQL | Canonical | Notes |
|-------|-----------|-------|
| INT, INTEGER | int | Standard integer |
| BIGINT | int64 | Large integer |
| SMALLINT | int16 | Small integer |
| TINYINT | int8 | Tiny integer |
| BIT | bool | Boolean/bit flag |
| REAL | float32 | Single precision |
| FLOAT | float64 | Double precision |
| NUMERIC, DECIMAL | decimal | Exact decimal |
| NVARCHAR, VARCHAR | string | Variable-length string |
| NCHAR, CHAR | string | Fixed-length string |
| DATETIME2 | timestamp | Default timestamp |
| DATETIMEOFFSET | timestamptz | Timestamp with timezone |
| DATE | date | Date only |
| TIME | time | Time only |
| UNIQUEIDENTIFIER | uuid | UUID/GUID |
| VARBINARY, BINARY | bytea | Binary data |
| XML | string | Stored as text |
## Usage
```go
package main

import (
	"fmt"

	"git.warky.dev/wdevs/relspecgo/pkg/mssql"
)

func main() {
	// Convert canonical to MSSQL
	mssqlType := mssql.ConvertCanonicalToMSSQL("int")
	fmt.Println(mssqlType) // Output: INT

	// Convert MSSQL to canonical
	canonicalType := mssql.ConvertMSSQLToCanonical("BIGINT")
	fmt.Println(canonicalType) // Output: int64

	// Handle parameterized types
	canonicalType = mssql.ConvertMSSQLToCanonical("NVARCHAR(255)")
	fmt.Println(canonicalType) // Output: string
}
```
## Testing
Run tests with:
```bash
go test ./pkg/mssql/...
```
## Notes
- Type conversions are case-insensitive
- Parameterized types (e.g., `NVARCHAR(255)`) have their base type extracted
- Unmapped types default to `string` for safety
- The package supports SQL Server 2016 and later versions
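
For example (a quick sketch of the fallback behaviour; `SOME_CUSTOM_TYPE` is a made-up name):

```go
mssql.ConvertMSSQLToCanonical("GEOGRAPHY")        // "string" (explicit mapping)
mssql.ConvertMSSQLToCanonical("SOME_CUSTOM_TYPE") // "string" (default fallback)
mssql.ConvertCanonicalToMSSQL("unknowntype")      // "NVARCHAR(255)" (default fallback)
```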

pkg/mssql/datatypes.go

@@ -0,0 +1,114 @@
package mssql
import "strings"
// CanonicalToMSSQLTypes maps canonical types to MSSQL types
var CanonicalToMSSQLTypes = map[string]string{
"bool": "BIT",
"int8": "TINYINT",
"int16": "SMALLINT",
"int": "INT",
"int32": "INT",
"int64": "BIGINT",
"uint": "BIGINT",
"uint8": "SMALLINT",
"uint16": "INT",
"uint32": "BIGINT",
"uint64": "BIGINT",
"float32": "REAL",
"float64": "FLOAT",
"decimal": "NUMERIC",
"string": "NVARCHAR(255)",
"text": "NVARCHAR(MAX)",
"date": "DATE",
"time": "TIME",
"timestamp": "DATETIME2",
"timestamptz": "DATETIMEOFFSET",
"uuid": "UNIQUEIDENTIFIER",
"json": "NVARCHAR(MAX)",
"jsonb": "NVARCHAR(MAX)",
"bytea": "VARBINARY(MAX)",
}
// MSSQLToCanonicalTypes maps MSSQL types to canonical types
var MSSQLToCanonicalTypes = map[string]string{
"bit": "bool",
"tinyint": "int8",
"smallint": "int16",
"int": "int",
"integer": "int",
"bigint": "int64",
"real": "float32",
"float": "float64",
"numeric": "decimal",
"decimal": "decimal",
"money": "decimal",
"smallmoney": "decimal",
"nvarchar": "string",
"nchar": "string",
"varchar": "string",
"char": "string",
"text": "string",
"ntext": "string",
"date": "date",
"time": "time",
"datetime": "timestamp",
"datetime2": "timestamp",
"smalldatetime": "timestamp",
"datetimeoffset": "timestamptz",
"uniqueidentifier": "uuid",
"varbinary": "bytea",
"binary": "bytea",
"image": "bytea",
"xml": "string",
"json": "json",
"sql_variant": "string",
"hierarchyid": "string",
"geography": "string",
"geometry": "string",
}
// ConvertCanonicalToMSSQL converts a canonical type to MSSQL type
func ConvertCanonicalToMSSQL(canonicalType string) string {
// Check direct mapping
if mssqlType, exists := CanonicalToMSSQLTypes[strings.ToLower(canonicalType)]; exists {
return mssqlType
}
	// Fall back to the longest matching prefix; Go map iteration order is
	// random, so a naive first match could resolve e.g. "timestamp(6)" to
	// "time" instead of "timestamp"
	lowerType := strings.ToLower(canonicalType)
	bestLen := 0
	result := "NVARCHAR(255)" // default for unmapped types
	for canonical, mssqlType := range CanonicalToMSSQLTypes {
		if strings.HasPrefix(lowerType, canonical) && len(canonical) > bestLen {
			bestLen = len(canonical)
			result = mssqlType
		}
	}
	return result
}
// ConvertMSSQLToCanonical converts an MSSQL type to canonical type
func ConvertMSSQLToCanonical(mssqlType string) string {
// Extract base type (remove parentheses and parameters)
baseType := mssqlType
if idx := strings.Index(baseType, "("); idx != -1 {
baseType = baseType[:idx]
}
baseType = strings.TrimSpace(baseType)
// Check direct mapping
if canonicalType, exists := MSSQLToCanonicalTypes[strings.ToLower(baseType)]; exists {
return canonicalType
}
	// Fall back to the longest matching prefix (map iteration order is random,
	// so a first match could pick the wrong entry)
	lowerType := strings.ToLower(baseType)
	bestLen := 0
	result := "string" // default for unmapped types
	for mssqlName, canonical := range MSSQLToCanonicalTypes {
		if strings.HasPrefix(lowerType, mssqlName) && len(mssqlName) > bestLen {
			bestLen = len(mssqlName)
			result = canonical
		}
	}
	return result
}

pkg/pgsql/doc.go

@@ -0,0 +1,36 @@
// Package pgsql provides PostgreSQL-specific utilities and helpers.
//
// # Overview
//
// The pgsql package contains PostgreSQL-specific functionality including:
// - SQL reserved keyword validation
// - Data type mappings and conversions
// - PostgreSQL-specific schema introspection helpers
//
// # Components
//
// keywords.go - SQL reserved keywords validation
//
// Provides functions to check if identifiers conflict with SQL reserved words
// and need quoting for safe usage in PostgreSQL queries.
//
// datatypes.go - PostgreSQL data type utilities
//
// Contains mappings between PostgreSQL data types and their equivalents in other
// systems, as well as type conversion and normalization functions.
//
// # Usage
//
// // Check if identifier needs quoting
// if pgsql.IsReservedKeyword("user") {
// // Quote the identifier
// }
//
// // Normalize data type
// normalizedType := pgsql.NormalizeDataType("varchar(255)")
//
// # Purpose
//
// This package supports the PostgreSQL reader and writer implementations by providing
// shared utilities for handling PostgreSQL-specific schema elements and constraints.
package pgsql

pkg/readers/doc.go

@@ -0,0 +1,53 @@
// Package readers provides interfaces and implementations for reading database schemas
// from various input formats and data sources.
//
// # Overview
//
// The readers package defines a common Reader interface that all format-specific readers
// implement. This allows RelSpec to read database schemas from multiple sources including:
// - Live databases (PostgreSQL, SQLite)
// - Schema definition files (DBML, DCTX, DrawDB, GraphQL)
// - ORM model files (GORM, Bun, Drizzle, Prisma, TypeORM)
// - Data interchange formats (JSON, YAML)
//
// # Architecture
//
// Each reader implementation is located in its own subpackage (e.g., pkg/readers/dbml,
// pkg/readers/pgsql) and implements the Reader interface, supporting three levels of
// granularity:
// - ReadDatabase() - Read complete database with all schemas
// - ReadSchema() - Read single schema with all tables
// - ReadTable() - Read single table with all columns and metadata
//
// # Usage
//
// Readers are instantiated with ReaderOptions containing source-specific configuration:
//
// // Read from file
// reader := dbml.NewReader(&readers.ReaderOptions{
// FilePath: "schema.dbml",
// })
// db, err := reader.ReadDatabase()
//
// // Read from database
// reader := pgsql.NewReader(&readers.ReaderOptions{
// ConnectionString: "postgres://user:pass@localhost/mydb",
// })
// db, err := reader.ReadDatabase()
//
// # Supported Formats
//
// - dbml: Database Markup Language files
// - dctx: DCTX schema files
// - drawdb: DrawDB JSON format
// - graphql: GraphQL schema definition language
// - json: JSON database schema
// - yaml: YAML database schema
// - gorm: Go GORM model structs
// - bun: Go Bun model structs
// - drizzle: TypeScript Drizzle ORM schemas
// - prisma: Prisma schema language
// - typeorm: TypeScript TypeORM entities
// - pgsql: PostgreSQL live database introspection
// - sqlite: SQLite database files
package readers


@@ -0,0 +1,91 @@
# MSSQL Reader
Reads database schema from Microsoft SQL Server databases using a live connection.
## Features
- **Live Connection**: Connects to MSSQL databases with the pure-Go github.com/microsoft/go-mssqldb driver
- **Multi-Schema Support**: Reads multiple schemas with full support for user-defined schemas
- **Comprehensive Metadata**: Reads tables, columns, constraints, indexes, and extended properties
- **Type Mapping**: Converts MSSQL types to canonical types for cross-database compatibility
- **Extended Properties**: Extracts table and column descriptions from MS_Description extended properties
- **Identity Columns**: Maps IDENTITY columns to AutoIncrement
- **Relationships**: Derives relationships from foreign key constraints
## Connection String Format
```
sqlserver://[user[:password]@][host][:port][/instance][?query]
```
Examples (with go-mssqldb the database is selected via the `database` query parameter; the URL path names a SQL Server instance):
```
sqlserver://sa:password@localhost?database=dbname
sqlserver://user:pass@192.168.1.100:1433?database=production
sqlserver://localhost?database=testdb&encrypt=disable
```
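The reader opens the connection itself, but if you want to sanity-check a connection string first, a minimal sketch with the go-mssqldb driver (which registers both the `sqlserver` and the legacy `mssql` driver names) looks like this:

```go
package main

import (
	"database/sql"
	"fmt"

	_ "github.com/microsoft/go-mssqldb" // registers the "sqlserver" and "mssql" drivers
)

func main() {
	db, err := sql.Open("sqlserver", "sqlserver://sa:StrongPassword123!@localhost?database=testdb")
	if err != nil {
		panic(err)
	}
	defer db.Close()
	if err := db.Ping(); err != nil {
		panic(err)
	}
	fmt.Println("connected")
}
```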
## Supported Constraints
- Primary Keys
- Foreign Keys (with ON DELETE and ON UPDATE actions)
- Unique Constraints
- Check Constraints
## Type Mappings
| MSSQL Type | Canonical Type |
|------------|----------------|
| INT | int |
| BIGINT | int64 |
| SMALLINT | int16 |
| TINYINT | int8 |
| BIT | bool |
| REAL | float32 |
| FLOAT | float64 |
| NUMERIC, DECIMAL | decimal |
| NVARCHAR, VARCHAR | string |
| DATETIME2 | timestamp |
| DATETIMEOFFSET | timestamptz |
| UNIQUEIDENTIFIER | uuid |
| VARBINARY | bytea |
| DATE | date |
| TIME | time |
## Usage
```go
import "git.warky.dev/wdevs/relspecgo/pkg/readers/mssql"
import "git.warky.dev/wdevs/relspecgo/pkg/readers"
reader := mssql.NewReader(&readers.ReaderOptions{
ConnectionString: "sqlserver://sa:password@localhost/mydb",
})
db, err := reader.ReadDatabase()
if err != nil {
panic(err)
}
// Process schema...
for _, schema := range db.Schemas {
fmt.Printf("Schema: %s\n", schema.Name)
for _, table := range schema.Tables {
fmt.Printf(" Table: %s\n", table.Name)
}
}
```
## Testing
Run tests with:
```bash
go test ./pkg/readers/mssql/...
```
For integration testing with a live MSSQL database:
```bash
docker-compose up -d mssql
go test -tags=integration ./pkg/readers/mssql/...
docker-compose down
```


@@ -0,0 +1,416 @@
package mssql
import (
"strings"
"git.warky.dev/wdevs/relspecgo/pkg/models"
)
// querySchemas retrieves all user-defined schemas from the database
func (r *Reader) querySchemas() ([]*models.Schema, error) {
query := `
SELECT s.name, ISNULL(ep.value, '') as description
FROM sys.schemas s
LEFT JOIN sys.extended_properties ep
ON ep.major_id = s.schema_id
AND ep.minor_id = 0
AND ep.class = 3
AND ep.name = 'MS_Description'
WHERE s.name NOT IN ('dbo', 'guest', 'INFORMATION_SCHEMA', 'sys')
ORDER BY s.name
`
rows, err := r.db.QueryContext(r.ctx, query)
if err != nil {
return nil, err
}
defer rows.Close()
schemas := make([]*models.Schema, 0)
for rows.Next() {
var name, description string
if err := rows.Scan(&name, &description); err != nil {
return nil, err
}
schema := models.InitSchema(name)
if description != "" {
schema.Description = description
}
schemas = append(schemas, schema)
}
// Always include the dbo schema, since the query above filters it out
dboSchema := models.InitSchema("dbo")
schemas = append(schemas, dboSchema)
return schemas, rows.Err()
}
// queryTables retrieves all tables for a given schema
func (r *Reader) queryTables(schemaName string) ([]*models.Table, error) {
query := `
SELECT t.table_schema, t.table_name, ISNULL(ep.value, '') as description
FROM information_schema.tables t
LEFT JOIN sys.extended_properties ep
ON ep.major_id = OBJECT_ID(QUOTENAME(t.table_schema) + '.' + QUOTENAME(t.table_name))
AND ep.minor_id = 0
AND ep.class = 1
AND ep.name = 'MS_Description'
WHERE t.table_schema = ? AND t.table_type = 'BASE TABLE'
ORDER BY t.table_name
`
rows, err := r.db.QueryContext(r.ctx, query, schemaName)
if err != nil {
return nil, err
}
defer rows.Close()
tables := make([]*models.Table, 0)
for rows.Next() {
var schema, tableName, description string
if err := rows.Scan(&schema, &tableName, &description); err != nil {
return nil, err
}
table := models.InitTable(tableName, schema)
if description != "" {
table.Description = description
}
tables = append(tables, table)
}
return tables, rows.Err()
}
// queryColumns retrieves all columns for tables in a schema
// Returns map[schema.table]map[columnName]*Column
func (r *Reader) queryColumns(schemaName string) (map[string]map[string]*models.Column, error) {
query := `
SELECT
c.table_schema,
c.table_name,
c.column_name,
c.ordinal_position,
c.column_default,
c.is_nullable,
c.data_type,
c.character_maximum_length,
c.numeric_precision,
c.numeric_scale,
ISNULL(ep.value, '') as description,
COLUMNPROPERTY(OBJECT_ID(QUOTENAME(c.table_schema) + '.' + QUOTENAME(c.table_name)), c.column_name, 'IsIdentity') as is_identity
FROM information_schema.columns c
LEFT JOIN sys.extended_properties ep
ON ep.major_id = OBJECT_ID(QUOTENAME(c.table_schema) + '.' + QUOTENAME(c.table_name))
AND ep.minor_id = COLUMNPROPERTY(OBJECT_ID(QUOTENAME(c.table_schema) + '.' + QUOTENAME(c.table_name)), c.column_name, 'ColumnId')
AND ep.class = 1
AND ep.name = 'MS_Description'
WHERE c.table_schema = ?
ORDER BY c.table_schema, c.table_name, c.ordinal_position
`
rows, err := r.db.QueryContext(r.ctx, query, schemaName)
if err != nil {
return nil, err
}
defer rows.Close()
columnsMap := make(map[string]map[string]*models.Column)
for rows.Next() {
var schema, tableName, columnName, isNullable, dataType, description string
var ordinalPosition int
var columnDefault *string // column_default is textual (e.g. "(getdate())") and may be NULL; it is not yet mapped onto the model
var charMaxLength, numPrecision, numScale, isIdentity *int
if err := rows.Scan(&schema, &tableName, &columnName, &ordinalPosition, &columnDefault, &isNullable, &dataType, &charMaxLength, &numPrecision, &numScale, &description, &isIdentity); err != nil {
return nil, err
}
column := models.InitColumn(columnName, tableName, schema)
column.Type = r.mapDataType(dataType)
column.NotNull = (isNullable == "NO")
column.Sequence = uint(ordinalPosition)
if description != "" {
column.Description = description
}
// Check if this is an identity column (auto-increment)
if isIdentity != nil && *isIdentity == 1 {
column.AutoIncrement = true
}
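// Note: information_schema reports character_maximum_length as -1 for MAX
// types such as NVARCHAR(MAX), so those intentionally keep a zero Length.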
if charMaxLength != nil && *charMaxLength > 0 {
column.Length = *charMaxLength
}
if numPrecision != nil && *numPrecision > 0 {
column.Precision = *numPrecision
}
if numScale != nil && *numScale > 0 {
column.Scale = *numScale
}
// Create table key
tableKey := schema + "." + tableName
if columnsMap[tableKey] == nil {
columnsMap[tableKey] = make(map[string]*models.Column)
}
columnsMap[tableKey][columnName] = column
}
return columnsMap, rows.Err()
}
// queryPrimaryKeys retrieves all primary key constraints for a schema
// Returns map[schema.table]*Constraint
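// Note: STRING_AGG(...) WITHIN GROUP, used here and in several queries below,
// requires SQL Server 2017 or later.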
func (r *Reader) queryPrimaryKeys(schemaName string) (map[string]*models.Constraint, error) {
query := `
SELECT
s.name as schema_name,
t.name as table_name,
i.name as constraint_name,
STRING_AGG(c.name, ',') WITHIN GROUP (ORDER BY ic.key_ordinal) as columns
FROM sys.tables t
INNER JOIN sys.indexes i ON t.object_id = i.object_id AND i.is_primary_key = 1
INNER JOIN sys.schemas s ON t.schema_id = s.schema_id
INNER JOIN sys.index_columns ic ON i.object_id = ic.object_id AND i.index_id = ic.index_id
INNER JOIN sys.columns c ON t.object_id = c.object_id AND ic.column_id = c.column_id
WHERE s.name = ?
GROUP BY s.name, t.name, i.name
`
rows, err := r.db.QueryContext(r.ctx, query, schemaName)
if err != nil {
return nil, err
}
defer rows.Close()
primaryKeys := make(map[string]*models.Constraint)
for rows.Next() {
var schema, tableName, constraintName, columnsStr string
if err := rows.Scan(&schema, &tableName, &constraintName, &columnsStr); err != nil {
return nil, err
}
columns := strings.Split(columnsStr, ",")
constraint := models.InitConstraint(constraintName, models.PrimaryKeyConstraint)
constraint.Schema = schema
constraint.Table = tableName
constraint.Columns = columns
tableKey := schema + "." + tableName
primaryKeys[tableKey] = constraint
}
return primaryKeys, rows.Err()
}
// queryForeignKeys retrieves all foreign key constraints for a schema
// Returns map[schema.table][]*Constraint
func (r *Reader) queryForeignKeys(schemaName string) (map[string][]*models.Constraint, error) {
query := `
SELECT
s.name as schema_name,
t.name as table_name,
fk.name as constraint_name,
rs.name as referenced_schema,
rt.name as referenced_table,
STRING_AGG(c.name, ',') WITHIN GROUP (ORDER BY fkc.constraint_column_id) as columns,
STRING_AGG(rc.name, ',') WITHIN GROUP (ORDER BY fkc.constraint_column_id) as referenced_columns,
fk.delete_referential_action_desc,
fk.update_referential_action_desc
FROM sys.foreign_keys fk
INNER JOIN sys.tables t ON fk.parent_object_id = t.object_id
INNER JOIN sys.tables rt ON fk.referenced_object_id = rt.object_id
INNER JOIN sys.schemas s ON t.schema_id = s.schema_id
INNER JOIN sys.schemas rs ON rt.schema_id = rs.schema_id
INNER JOIN sys.foreign_key_columns fkc ON fk.object_id = fkc.constraint_object_id
INNER JOIN sys.columns c ON fkc.parent_object_id = c.object_id AND fkc.parent_column_id = c.column_id
INNER JOIN sys.columns rc ON fkc.referenced_object_id = rc.object_id AND fkc.referenced_column_id = rc.column_id
WHERE s.name = ?
GROUP BY s.name, t.name, fk.name, rs.name, rt.name, fk.delete_referential_action_desc, fk.update_referential_action_desc
`
rows, err := r.db.QueryContext(r.ctx, query, schemaName)
if err != nil {
return nil, err
}
defer rows.Close()
foreignKeys := make(map[string][]*models.Constraint)
for rows.Next() {
var schema, tableName, constraintName, refSchema, refTable, columnsStr, refColumnsStr, deleteAction, updateAction string
if err := rows.Scan(&schema, &tableName, &constraintName, &refSchema, &refTable, &columnsStr, &refColumnsStr, &deleteAction, &updateAction); err != nil {
return nil, err
}
columns := strings.Split(columnsStr, ",")
refColumns := strings.Split(refColumnsStr, ",")
constraint := models.InitConstraint(constraintName, models.ForeignKeyConstraint)
constraint.Schema = schema
constraint.Table = tableName
constraint.Columns = columns
constraint.ReferencedSchema = refSchema
constraint.ReferencedTable = refTable
constraint.ReferencedColumns = refColumns
constraint.OnDelete = strings.ToUpper(deleteAction)
constraint.OnUpdate = strings.ToUpper(updateAction)
tableKey := schema + "." + tableName
foreignKeys[tableKey] = append(foreignKeys[tableKey], constraint)
}
return foreignKeys, rows.Err()
}
// queryUniqueConstraints retrieves all unique constraints for a schema
// Returns map[schema.table][]*Constraint
func (r *Reader) queryUniqueConstraints(schemaName string) (map[string][]*models.Constraint, error) {
query := `
SELECT
s.name as schema_name,
t.name as table_name,
i.name as constraint_name,
STRING_AGG(c.name, ',') WITHIN GROUP (ORDER BY ic.key_ordinal) as columns
FROM sys.tables t
INNER JOIN sys.indexes i ON t.object_id = i.object_id AND i.is_unique = 1 AND i.is_primary_key = 0
INNER JOIN sys.schemas s ON t.schema_id = s.schema_id
INNER JOIN sys.index_columns ic ON i.object_id = ic.object_id AND i.index_id = ic.index_id
INNER JOIN sys.columns c ON t.object_id = c.object_id AND ic.column_id = c.column_id
WHERE s.name = ?
GROUP BY s.name, t.name, i.name
`
rows, err := r.db.QueryContext(r.ctx, query, schemaName)
if err != nil {
return nil, err
}
defer rows.Close()
uniqueConstraints := make(map[string][]*models.Constraint)
for rows.Next() {
var schema, tableName, constraintName, columnsStr string
if err := rows.Scan(&schema, &tableName, &constraintName, &columnsStr); err != nil {
return nil, err
}
columns := strings.Split(columnsStr, ",")
constraint := models.InitConstraint(constraintName, models.UniqueConstraint)
constraint.Schema = schema
constraint.Table = tableName
constraint.Columns = columns
tableKey := schema + "." + tableName
uniqueConstraints[tableKey] = append(uniqueConstraints[tableKey], constraint)
}
return uniqueConstraints, rows.Err()
}
// queryCheckConstraints retrieves all check constraints for a schema
// Returns map[schema.table][]*Constraint
func (r *Reader) queryCheckConstraints(schemaName string) (map[string][]*models.Constraint, error) {
query := `
SELECT
s.name as schema_name,
t.name as table_name,
cc.name as constraint_name,
cc.definition
FROM sys.tables t
INNER JOIN sys.check_constraints cc ON t.object_id = cc.parent_object_id
INNER JOIN sys.schemas s ON t.schema_id = s.schema_id
WHERE s.name = ?
`
rows, err := r.db.QueryContext(r.ctx, query, schemaName)
if err != nil {
return nil, err
}
defer rows.Close()
checkConstraints := make(map[string][]*models.Constraint)
for rows.Next() {
var schema, tableName, constraintName, definition string
if err := rows.Scan(&schema, &tableName, &constraintName, &definition); err != nil {
return nil, err
}
constraint := models.InitConstraint(constraintName, models.CheckConstraint)
constraint.Schema = schema
constraint.Table = tableName
constraint.Expression = definition
tableKey := schema + "." + tableName
checkConstraints[tableKey] = append(checkConstraints[tableKey], constraint)
}
return checkConstraints, rows.Err()
}
// queryIndexes retrieves all indexes for a schema
// Returns map[schema.table][]*Index
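// Note: the backing indexes of unique constraints also match this query, so
// the same index can surface both via queryUniqueConstraints and here.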
func (r *Reader) queryIndexes(schemaName string) (map[string][]*models.Index, error) {
query := `
SELECT
s.name as schema_name,
t.name as table_name,
i.name as index_name,
i.is_unique,
STRING_AGG(c.name, ',') WITHIN GROUP (ORDER BY ic.key_ordinal) as columns
FROM sys.tables t
INNER JOIN sys.indexes i ON t.object_id = i.object_id AND i.is_primary_key = 0 AND i.name IS NOT NULL
INNER JOIN sys.schemas s ON t.schema_id = s.schema_id
INNER JOIN sys.index_columns ic ON i.object_id = ic.object_id AND i.index_id = ic.index_id
INNER JOIN sys.columns c ON t.object_id = c.object_id AND ic.column_id = c.column_id
WHERE s.name = ?
GROUP BY s.name, t.name, i.name, i.is_unique
`
rows, err := r.db.QueryContext(r.ctx, query, schemaName)
if err != nil {
return nil, err
}
defer rows.Close()
indexes := make(map[string][]*models.Index)
for rows.Next() {
var schema, tableName, indexName, columnsStr string
var isUnique int
if err := rows.Scan(&schema, &tableName, &indexName, &isUnique, &columnsStr); err != nil {
return nil, err
}
columns := strings.Split(columnsStr, ",")
index := models.InitIndex(indexName, tableName, schema)
index.Columns = columns
index.Unique = (isUnique == 1)
index.Type = "btree" // MSSQL uses btree by default
tableKey := schema + "." + tableName
indexes[tableKey] = append(indexes[tableKey], index)
}
return indexes, rows.Err()
}

pkg/readers/mssql/reader.go

@@ -0,0 +1,266 @@
package mssql
import (
"context"
"database/sql"
"fmt"
_ "github.com/microsoft/go-mssqldb" // MSSQL driver
"git.warky.dev/wdevs/relspecgo/pkg/models"
"git.warky.dev/wdevs/relspecgo/pkg/mssql"
"git.warky.dev/wdevs/relspecgo/pkg/readers"
)
// Reader implements the readers.Reader interface for MSSQL databases
type Reader struct {
options *readers.ReaderOptions
db *sql.DB
ctx context.Context
}
// NewReader creates a new MSSQL reader
func NewReader(options *readers.ReaderOptions) *Reader {
return &Reader{
options: options,
ctx: context.Background(),
}
}
// ReadDatabase reads the entire database schema from MSSQL
func (r *Reader) ReadDatabase() (*models.Database, error) {
// Validate connection string
if r.options.ConnectionString == "" {
return nil, fmt.Errorf("connection string is required")
}
// Connect to the database
if err := r.connect(); err != nil {
return nil, fmt.Errorf("failed to connect: %w", err)
}
defer r.close()
// Get database name
var dbName string
err := r.db.QueryRowContext(r.ctx, "SELECT DB_NAME()").Scan(&dbName)
if err != nil {
return nil, fmt.Errorf("failed to get database name: %w", err)
}
// Initialize database model
db := models.InitDatabase(dbName)
db.DatabaseType = models.MSSQLDatabaseType
db.SourceFormat = "mssql"
// Get MSSQL version
var version string
err = r.db.QueryRowContext(r.ctx, "SELECT @@VERSION").Scan(&version)
if err == nil {
db.DatabaseVersion = version
}
// Query all schemas
schemas, err := r.querySchemas()
if err != nil {
return nil, fmt.Errorf("failed to query schemas: %w", err)
}
// Process each schema
for _, schema := range schemas {
// Query tables for this schema
tables, err := r.queryTables(schema.Name)
if err != nil {
return nil, fmt.Errorf("failed to query tables for schema %s: %w", schema.Name, err)
}
schema.Tables = tables
// Query columns for tables
columnsMap, err := r.queryColumns(schema.Name)
if err != nil {
return nil, fmt.Errorf("failed to query columns for schema %s: %w", schema.Name, err)
}
// Populate table columns
for _, table := range schema.Tables {
tableKey := schema.Name + "." + table.Name
if cols, exists := columnsMap[tableKey]; exists {
table.Columns = cols
}
}
// Query primary keys
primaryKeys, err := r.queryPrimaryKeys(schema.Name)
if err != nil {
return nil, fmt.Errorf("failed to query primary keys for schema %s: %w", schema.Name, err)
}
// Apply primary keys to tables
for _, table := range schema.Tables {
tableKey := schema.Name + "." + table.Name
if pk, exists := primaryKeys[tableKey]; exists {
table.Constraints[pk.Name] = pk
// Mark columns as primary key and not null
for _, colName := range pk.Columns {
if col, colExists := table.Columns[colName]; colExists {
col.IsPrimaryKey = true
col.NotNull = true
}
}
}
}
// Query foreign keys
foreignKeys, err := r.queryForeignKeys(schema.Name)
if err != nil {
return nil, fmt.Errorf("failed to query foreign keys for schema %s: %w", schema.Name, err)
}
// Apply foreign keys to tables
for _, table := range schema.Tables {
tableKey := schema.Name + "." + table.Name
if fks, exists := foreignKeys[tableKey]; exists {
for _, fk := range fks {
table.Constraints[fk.Name] = fk
// Derive relationship from foreign key
r.deriveRelationship(table, fk)
}
}
}
// Query unique constraints
uniqueConstraints, err := r.queryUniqueConstraints(schema.Name)
if err != nil {
return nil, fmt.Errorf("failed to query unique constraints for schema %s: %w", schema.Name, err)
}
// Apply unique constraints to tables
for _, table := range schema.Tables {
tableKey := schema.Name + "." + table.Name
if ucs, exists := uniqueConstraints[tableKey]; exists {
for _, uc := range ucs {
table.Constraints[uc.Name] = uc
}
}
}
// Query check constraints
checkConstraints, err := r.queryCheckConstraints(schema.Name)
if err != nil {
return nil, fmt.Errorf("failed to query check constraints for schema %s: %w", schema.Name, err)
}
// Apply check constraints to tables
for _, table := range schema.Tables {
tableKey := schema.Name + "." + table.Name
if ccs, exists := checkConstraints[tableKey]; exists {
for _, cc := range ccs {
table.Constraints[cc.Name] = cc
}
}
}
// Query indexes
indexes, err := r.queryIndexes(schema.Name)
if err != nil {
return nil, fmt.Errorf("failed to query indexes for schema %s: %w", schema.Name, err)
}
// Apply indexes to tables
for _, table := range schema.Tables {
tableKey := schema.Name + "." + table.Name
if idxs, exists := indexes[tableKey]; exists {
for _, idx := range idxs {
table.Indexes[idx.Name] = idx
}
}
}
// Set RefDatabase for schema
schema.RefDatabase = db
// Set RefSchema for tables
for _, table := range schema.Tables {
table.RefSchema = schema
}
// Add schema to database
db.Schemas = append(db.Schemas, schema)
}
return db, nil
}
// ReadSchema reads a single schema (returns the first schema from the database)
func (r *Reader) ReadSchema() (*models.Schema, error) {
db, err := r.ReadDatabase()
if err != nil {
return nil, err
}
if len(db.Schemas) == 0 {
return nil, fmt.Errorf("no schemas found in database")
}
return db.Schemas[0], nil
}
// ReadTable reads a single table (returns the first table from the first schema)
func (r *Reader) ReadTable() (*models.Table, error) {
schema, err := r.ReadSchema()
if err != nil {
return nil, err
}
if len(schema.Tables) == 0 {
return nil, fmt.Errorf("no tables found in schema")
}
return schema.Tables[0], nil
}
// connect establishes a connection to the MSSQL database
func (r *Reader) connect() error {
db, err := sql.Open("mssql", r.options.ConnectionString)
if err != nil {
return err
}
// Test connection
if err = db.PingContext(r.ctx); err != nil {
db.Close()
return err
}
r.db = db
return nil
}
// close closes the database connection
func (r *Reader) close() {
if r.db != nil {
r.db.Close()
}
}
// mapDataType maps MSSQL data types to canonical types
func (r *Reader) mapDataType(mssqlType string) string {
return mssql.ConvertMSSQLToCanonical(mssqlType)
}
// deriveRelationship creates a relationship from a foreign key constraint
func (r *Reader) deriveRelationship(table *models.Table, fk *models.Constraint) {
relationshipName := fmt.Sprintf("%s_to_%s", table.Name, fk.ReferencedTable)
relationship := models.InitRelationship(relationshipName, models.OneToMany)
relationship.FromTable = table.Name
relationship.FromSchema = table.Schema
relationship.ToTable = fk.ReferencedTable
relationship.ToSchema = fk.ReferencedSchema
relationship.ForeignKey = fk.Name
// Store constraint actions in properties
if fk.OnDelete != "" {
relationship.Properties["on_delete"] = fk.OnDelete
}
if fk.OnUpdate != "" {
relationship.Properties["on_update"] = fk.OnUpdate
}
table.Relationships[relationshipName] = relationship
}
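
For orientation, a minimal usage sketch of this reader. The import path for the reader package and the connection string are assumptions (the path mirrors the `pkg/readers/sqlite` layout shown further down); `NewReader` and `ReaderOptions` are the same constructor and options struct used by the tests below.

```go
package main

import (
	"fmt"
	"log"

	"git.warky.dev/wdevs/relspecgo/pkg/readers"
	mssqlreader "git.warky.dev/wdevs/relspecgo/pkg/readers/mssql" // assumed path, mirroring pkg/readers/sqlite
)

func main() {
	opts := &readers.ReaderOptions{
		// Placeholder credentials; substitute your server's connection string.
		ConnectionString: "server=localhost;user id=sa;password=secret;database=appdb",
	}
	reader := mssqlreader.NewReader(opts)

	db, err := reader.ReadDatabase()
	if err != nil {
		log.Fatal(err)
	}
	for _, schema := range db.Schemas {
		fmt.Printf("schema %s: %d tables\n", schema.Name, len(schema.Tables))
	}
}
```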


@@ -0,0 +1,86 @@
package mssql
import (
"testing"
"git.warky.dev/wdevs/relspecgo/pkg/mssql"
"git.warky.dev/wdevs/relspecgo/pkg/readers"
"github.com/stretchr/testify/assert"
)
// TestMapDataType tests MSSQL type mapping to canonical types
func TestMapDataType(t *testing.T) {
reader := NewReader(&readers.ReaderOptions{})
tests := []struct {
name string
mssqlType string
expectedType string
}{
{"INT to int", "INT", "int"},
{"BIGINT to int64", "BIGINT", "int64"},
{"BIT to bool", "BIT", "bool"},
{"NVARCHAR to string", "NVARCHAR(255)", "string"},
{"DATETIME2 to timestamp", "DATETIME2", "timestamp"},
{"DATETIMEOFFSET to timestamptz", "DATETIMEOFFSET", "timestamptz"},
{"UNIQUEIDENTIFIER to uuid", "UNIQUEIDENTIFIER", "uuid"},
{"FLOAT to float64", "FLOAT", "float64"},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result := reader.mapDataType(tt.mssqlType)
assert.Equal(t, tt.expectedType, result)
})
}
}
// TestConvertCanonicalToMSSQL tests canonical to MSSQL type conversion
func TestConvertCanonicalToMSSQL(t *testing.T) {
tests := []struct {
name string
canonicalType string
expectedMSSQL string
}{
{"int to INT", "int", "INT"},
{"int64 to BIGINT", "int64", "BIGINT"},
{"bool to BIT", "bool", "BIT"},
{"string to NVARCHAR(255)", "string", "NVARCHAR(255)"},
{"text to NVARCHAR(MAX)", "text", "NVARCHAR(MAX)"},
{"timestamp to DATETIME2", "timestamp", "DATETIME2"},
{"timestamptz to DATETIMEOFFSET", "timestamptz", "DATETIMEOFFSET"},
{"uuid to UNIQUEIDENTIFIER", "uuid", "UNIQUEIDENTIFIER"},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result := mssql.ConvertCanonicalToMSSQL(tt.canonicalType)
assert.Equal(t, tt.expectedMSSQL, result)
})
}
}
// TestConvertMSSQLToCanonical tests MSSQL to canonical type conversion
func TestConvertMSSQLToCanonical(t *testing.T) {
tests := []struct {
name string
mssqlType string
expectedType string
}{
{"INT to int", "INT", "int"},
{"BIGINT to int64", "BIGINT", "int64"},
{"BIT to bool", "BIT", "bool"},
{"NVARCHAR with params", "NVARCHAR(255)", "string"},
{"DATETIME2 to timestamp", "DATETIME2", "timestamp"},
{"DATETIMEOFFSET to timestamptz", "DATETIMEOFFSET", "timestamptz"},
{"UNIQUEIDENTIFIER to uuid", "UNIQUEIDENTIFIER", "uuid"},
{"VARBINARY to bytea", "VARBINARY(MAX)", "bytea"},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result := mssql.ConvertMSSQLToCanonical(tt.mssqlType)
assert.Equal(t, tt.expectedType, result)
})
}
}


@@ -231,14 +231,13 @@ func (r *Reader) queryColumns(schemaName string) (map[string]map[string]*models.
 		}
 		column := models.InitColumn(columnName, tableName, schema)
-		column.Type = r.mapDataType(dataType, udtName)
-		column.NotNull = (isNullable == "NO")
-		column.Sequence = uint(ordinalPosition)
+		// Check if this is a serial type (has nextval default)
+		hasNextval := false
 		if columnDefault != nil {
 			// Parse default value - remove nextval for sequences
 			defaultVal := *columnDefault
 			if strings.HasPrefix(defaultVal, "nextval") {
+				hasNextval = true
 				column.AutoIncrement = true
 				column.Default = defaultVal
 			} else {
@@ -246,6 +245,11 @@ func (r *Reader) queryColumns(schemaName string) (map[string]map[string]*models.
 			}
 		}
+		// Map data type, preserving serial types when detected
+		column.Type = r.mapDataType(dataType, udtName, hasNextval)
+		column.NotNull = (isNullable == "NO")
+		column.Sequence = uint(ordinalPosition)
 		if description != nil {
 			column.Description = *description
 		}


@@ -3,6 +3,7 @@ package pgsql
 import (
 	"context"
 	"fmt"
+	"strings"
 
 	"github.com/jackc/pgx/v5"
@@ -259,33 +260,46 @@ func (r *Reader) close() {
 }
 
 // mapDataType maps PostgreSQL data types to canonical types
-func (r *Reader) mapDataType(pgType, udtName string) string {
+func (r *Reader) mapDataType(pgType, udtName string, hasNextval bool) string {
+	// If the column has a nextval default, it's likely a serial type
+	// Map to the appropriate serial type instead of the base integer type
+	if hasNextval {
+		switch strings.ToLower(pgType) {
+		case "integer", "int", "int4":
+			return "serial"
+		case "bigint", "int8":
+			return "bigserial"
+		case "smallint", "int2":
+			return "smallserial"
+		}
+	}
 	// Map common PostgreSQL types
 	typeMap := map[string]string{
-		"integer":           "int",
-		"bigint":            "int64",
-		"smallint":          "int16",
-		"int":               "int",
-		"int2":              "int16",
-		"int4":              "int",
-		"int8":              "int64",
-		"serial":            "int",
-		"bigserial":         "int64",
-		"smallserial":       "int16",
-		"numeric":           "decimal",
+		"integer":           "integer",
+		"bigint":            "bigint",
+		"smallint":          "smallint",
+		"int":               "integer",
+		"int2":              "smallint",
+		"int4":              "integer",
+		"int8":              "bigint",
+		"serial":            "serial",
+		"bigserial":         "bigserial",
+		"smallserial":       "smallserial",
+		"numeric":           "numeric",
 		"decimal":           "decimal",
-		"real":              "float32",
-		"double precision":  "float64",
-		"float4":            "float32",
-		"float8":            "float64",
-		"money":             "decimal",
-		"character varying": "string",
-		"varchar":           "string",
-		"character":         "string",
-		"char":              "string",
-		"text":              "string",
-		"boolean":           "bool",
-		"bool":              "bool",
+		"real":              "real",
+		"double precision":  "double precision",
+		"float4":            "real",
+		"float8":            "double precision",
+		"money":             "money",
+		"character varying": "varchar",
+		"varchar":           "varchar",
+		"character":         "char",
+		"char":              "char",
+		"text":              "text",
+		"boolean":           "boolean",
+		"bool":              "boolean",
 		"date":              "date",
 		"time":              "time",
 		"time without time zone": "time",


@@ -177,20 +177,20 @@ func TestMapDataType(t *testing.T) {
 		udtName  string
 		expected string
 	}{
-		{"integer", "int4", "int"},
-		{"bigint", "int8", "int64"},
-		{"smallint", "int2", "int16"},
-		{"character varying", "varchar", "string"},
-		{"text", "text", "string"},
-		{"boolean", "bool", "bool"},
+		{"integer", "int4", "integer"},
+		{"bigint", "int8", "bigint"},
+		{"smallint", "int2", "smallint"},
+		{"character varying", "varchar", "varchar"},
+		{"text", "text", "text"},
+		{"boolean", "bool", "boolean"},
 		{"timestamp without time zone", "timestamp", "timestamp"},
 		{"timestamp with time zone", "timestamptz", "timestamptz"},
 		{"json", "json", "json"},
 		{"jsonb", "jsonb", "jsonb"},
 		{"uuid", "uuid", "uuid"},
-		{"numeric", "numeric", "decimal"},
-		{"real", "float4", "float32"},
-		{"double precision", "float8", "float64"},
+		{"numeric", "numeric", "numeric"},
+		{"real", "float4", "real"},
+		{"double precision", "float8", "double precision"},
 		{"date", "date", "date"},
 		{"time without time zone", "time", "time"},
 		{"bytea", "bytea", "bytea"},
@@ -199,12 +199,31 @@ func TestMapDataType(t *testing.T) {
 	for _, tt := range tests {
 		t.Run(tt.pgType, func(t *testing.T) {
-			result := reader.mapDataType(tt.pgType, tt.udtName)
+			result := reader.mapDataType(tt.pgType, tt.udtName, false)
 			if result != tt.expected {
 				t.Errorf("mapDataType(%s, %s) = %s, expected %s", tt.pgType, tt.udtName, result, tt.expected)
 			}
 		})
 	}
+
+	// Test serial type detection with hasNextval=true
+	serialTests := []struct {
+		pgType   string
+		expected string
+	}{
+		{"integer", "serial"},
+		{"bigint", "bigserial"},
+		{"smallint", "smallserial"},
+	}
+	for _, tt := range serialTests {
+		t.Run(tt.pgType+"_with_nextval", func(t *testing.T) {
+			result := reader.mapDataType(tt.pgType, "", true)
+			if result != tt.expected {
+				t.Errorf("mapDataType(%s, '', true) = %s, expected %s", tt.pgType, result, tt.expected)
+			}
+		})
+	}
 }
 
 func TestParseIndexDefinition(t *testing.T) {


@@ -0,0 +1,75 @@
# SQLite Reader
Reads database schema from SQLite database files.
## Usage
```go
import (
"git.warky.dev/wdevs/relspecgo/pkg/readers"
"git.warky.dev/wdevs/relspecgo/pkg/readers/sqlite"
)
// Using file path
options := &readers.ReaderOptions{
FilePath: "path/to/database.db",
}
reader := sqlite.NewReader(options)
db, err := reader.ReadDatabase()
// Or using a connection string (reassigning the same variable)
options = &readers.ReaderOptions{
ConnectionString: "path/to/database.db",
}
```
## Features
- Reads tables with columns and data types
- Reads views with definitions
- Reads primary keys
- Reads foreign keys with CASCADE actions
- Reads indexes (non-auto-generated)
- Maps SQLite types to canonical types
- Derives relationships from foreign keys
## SQLite Specifics
- SQLite doesn't support schemas, so the reader creates a single "main" schema
- Uses a pure Go driver (modernc.org/sqlite) - no cgo required
- Supports both file path and connection string
- Auto-increment detection for INTEGER PRIMARY KEY columns
- Foreign keys require `PRAGMA foreign_keys = ON` to be set
## Example Schema
```sql
PRAGMA foreign_keys = ON;
CREATE TABLE users (
id INTEGER PRIMARY KEY AUTOINCREMENT,
username VARCHAR(50) NOT NULL UNIQUE,
email VARCHAR(100) NOT NULL
);
CREATE TABLE posts (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_id INTEGER NOT NULL,
title VARCHAR(200) NOT NULL,
FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE
);
```
## Type Mappings
| SQLite Type | Canonical Type |
|-------------|---------------|
| INTEGER, INT | int |
| BIGINT | int64 |
| REAL, DOUBLE | float64 |
| TEXT, VARCHAR | string |
| BLOB | bytea |
| BOOLEAN | bool |
| DATE | date |
| DATETIME, TIMESTAMP | timestamp |
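
The mapped types surface directly on the column models after a read. A quick sketch continuing the usage example above (assumes `fmt` and `log` are imported):

```go
db, err := reader.ReadDatabase()
if err != nil {
	log.Fatal(err)
}
// Tables are returned sorted by name; columns are keyed by column name.
for _, table := range db.Schemas[0].Tables {
	for name, col := range table.Columns {
		fmt.Printf("%s.%s -> %s (not null: %v)\n", table.Name, name, col.Type, col.NotNull)
	}
}
```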


@@ -0,0 +1,306 @@
package sqlite
import (
"fmt"
"strings"
"git.warky.dev/wdevs/relspecgo/pkg/models"
)
// queryTables retrieves all tables from the SQLite database
func (r *Reader) queryTables() ([]*models.Table, error) {
query := `
SELECT name
FROM sqlite_master
WHERE type = 'table'
AND name NOT LIKE 'sqlite_%'
ORDER BY name
`
rows, err := r.db.QueryContext(r.ctx, query)
if err != nil {
return nil, err
}
defer rows.Close()
tables := make([]*models.Table, 0)
for rows.Next() {
var tableName string
if err := rows.Scan(&tableName); err != nil {
return nil, err
}
table := models.InitTable(tableName, "main")
tables = append(tables, table)
}
return tables, rows.Err()
}
// queryViews retrieves all views from the SQLite database
func (r *Reader) queryViews() ([]*models.View, error) {
query := `
SELECT name, sql
FROM sqlite_master
WHERE type = 'view'
ORDER BY name
`
rows, err := r.db.QueryContext(r.ctx, query)
if err != nil {
return nil, err
}
defer rows.Close()
views := make([]*models.View, 0)
for rows.Next() {
var viewName string
var sql *string
if err := rows.Scan(&viewName, &sql); err != nil {
return nil, err
}
view := models.InitView(viewName, "main")
if sql != nil {
view.Definition = *sql
}
views = append(views, view)
}
return views, rows.Err()
}
// queryColumns retrieves all columns for a given table or view
func (r *Reader) queryColumns(tableName string) (map[string]*models.Column, error) {
query := fmt.Sprintf("PRAGMA table_info(%s)", tableName)
rows, err := r.db.QueryContext(r.ctx, query)
if err != nil {
return nil, err
}
defer rows.Close()
columns := make(map[string]*models.Column)
for rows.Next() {
var cid int
var name, dataType string
var notNull, pk int
var defaultValue *string
if err := rows.Scan(&cid, &name, &dataType, &notNull, &defaultValue, &pk); err != nil {
return nil, err
}
column := models.InitColumn(name, tableName, "main")
column.Type = r.mapDataType(strings.ToUpper(dataType))
column.NotNull = (notNull == 1)
column.IsPrimaryKey = (pk > 0)
column.Sequence = uint(cid + 1)
if defaultValue != nil {
column.Default = *defaultValue
}
// Check for autoincrement (SQLite uses INTEGER PRIMARY KEY AUTOINCREMENT)
if pk > 0 && strings.EqualFold(dataType, "INTEGER") {
column.AutoIncrement = r.isAutoIncrement(tableName, name)
}
columns[name] = column
}
return columns, rows.Err()
}
// isAutoIncrement checks if a column is autoincrement
func (r *Reader) isAutoIncrement(tableName, columnName string) bool {
// Check sqlite_sequence table or parse CREATE TABLE statement
query := `
SELECT sql
FROM sqlite_master
WHERE type = 'table' AND name = ?
`
var sql string
err := r.db.QueryRowContext(r.ctx, query, tableName).Scan(&sql)
if err != nil {
return false
}
// Check if the SQL contains AUTOINCREMENT for this column
return strings.Contains(strings.ToUpper(sql), strings.ToUpper(columnName)+" INTEGER PRIMARY KEY AUTOINCREMENT") ||
strings.Contains(strings.ToUpper(sql), strings.ToUpper(columnName)+" INTEGER AUTOINCREMENT")
}
// queryPrimaryKey retrieves the primary key constraint for a table
func (r *Reader) queryPrimaryKey(tableName string) (*models.Constraint, error) {
query := fmt.Sprintf("PRAGMA table_info(%s)", tableName)
rows, err := r.db.QueryContext(r.ctx, query)
if err != nil {
return nil, err
}
defer rows.Close()
var pkColumns []string
for rows.Next() {
var cid int
var name, dataType string
var notNull, pk int
var defaultValue *string
if err := rows.Scan(&cid, &name, &dataType, &notNull, &defaultValue, &pk); err != nil {
return nil, err
}
if pk > 0 {
pkColumns = append(pkColumns, name)
}
}
if len(pkColumns) == 0 {
return nil, nil
}
// Create primary key constraint
constraintName := fmt.Sprintf("%s_pkey", tableName)
constraint := models.InitConstraint(constraintName, models.PrimaryKeyConstraint)
constraint.Schema = "main"
constraint.Table = tableName
constraint.Columns = pkColumns
return constraint, rows.Err()
}
// queryForeignKeys retrieves all foreign key constraints for a table
func (r *Reader) queryForeignKeys(tableName string) ([]*models.Constraint, error) {
query := fmt.Sprintf("PRAGMA foreign_key_list(%s)", tableName)
rows, err := r.db.QueryContext(r.ctx, query)
if err != nil {
return nil, err
}
defer rows.Close()
// Group foreign keys by id (since composite FKs have multiple rows)
fkMap := make(map[int]*models.Constraint)
for rows.Next() {
var id, seq int
var referencedTable, fromColumn, toColumn string
var onUpdate, onDelete, match string
if err := rows.Scan(&id, &seq, &referencedTable, &fromColumn, &toColumn, &onUpdate, &onDelete, &match); err != nil {
return nil, err
}
if _, exists := fkMap[id]; !exists {
constraintName := fmt.Sprintf("%s_%s_fkey", tableName, referencedTable)
if id > 0 {
constraintName = fmt.Sprintf("%s_%s_fkey_%d", tableName, referencedTable, id)
}
constraint := models.InitConstraint(constraintName, models.ForeignKeyConstraint)
constraint.Schema = "main"
constraint.Table = tableName
constraint.ReferencedSchema = "main"
constraint.ReferencedTable = referencedTable
constraint.OnUpdate = onUpdate
constraint.OnDelete = onDelete
constraint.Columns = []string{}
constraint.ReferencedColumns = []string{}
fkMap[id] = constraint
}
// Add column to the constraint
fkMap[id].Columns = append(fkMap[id].Columns, fromColumn)
fkMap[id].ReferencedColumns = append(fkMap[id].ReferencedColumns, toColumn)
}
// Convert map to slice
foreignKeys := make([]*models.Constraint, 0, len(fkMap))
for _, fk := range fkMap {
foreignKeys = append(foreignKeys, fk)
}
return foreignKeys, rows.Err()
}
// queryIndexes retrieves all indexes for a table
func (r *Reader) queryIndexes(tableName string) ([]*models.Index, error) {
query := fmt.Sprintf("PRAGMA index_list(%s)", tableName)
rows, err := r.db.QueryContext(r.ctx, query)
if err != nil {
return nil, err
}
defer rows.Close()
indexes := make([]*models.Index, 0)
for rows.Next() {
var seq int
var name string
var unique int
var origin string
var partial int
if err := rows.Scan(&seq, &name, &unique, &origin, &partial); err != nil {
return nil, err
}
// Skip auto-generated indexes (origin = 'pk' for primary keys, etc.)
// origin: c = CREATE INDEX, u = UNIQUE constraint, pk = PRIMARY KEY
if origin == "pk" || origin == "u" {
continue
}
index := models.InitIndex(name, tableName, "main")
index.Unique = (unique == 1)
// Get index columns
columns, err := r.queryIndexColumns(name)
if err != nil {
return nil, err
}
index.Columns = columns
indexes = append(indexes, index)
}
return indexes, rows.Err()
}
// queryIndexColumns retrieves the columns for a specific index
func (r *Reader) queryIndexColumns(indexName string) ([]string, error) {
query := fmt.Sprintf("PRAGMA index_info(%s)", indexName)
rows, err := r.db.QueryContext(r.ctx, query)
if err != nil {
return nil, err
}
defer rows.Close()
columns := make([]string, 0)
for rows.Next() {
var seqno, cid int
var name *string
if err := rows.Scan(&seqno, &cid, &name); err != nil {
return nil, err
}
if name != nil {
columns = append(columns, *name)
}
}
return columns, rows.Err()
}
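
The Scan order in queryForeignKeys above follows the documented column layout of PRAGMA foreign_key_list: id, seq, table, from, to, on_update, on_delete, match. Illustrative output for a posts table declared with ON DELETE CASCADE, as in the README example:

```sql
sqlite> PRAGMA foreign_key_list(posts);
-- id | seq | table | from    | to | on_update | on_delete | match
-- 0  | 0   | users | user_id | id | NO ACTION | CASCADE   | NONE
```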


@@ -0,0 +1,261 @@
package sqlite
import (
"context"
"database/sql"
"fmt"
"path/filepath"
_ "modernc.org/sqlite" // SQLite driver
"git.warky.dev/wdevs/relspecgo/pkg/models"
"git.warky.dev/wdevs/relspecgo/pkg/readers"
)
// Reader implements the readers.Reader interface for SQLite databases
type Reader struct {
options *readers.ReaderOptions
db *sql.DB
ctx context.Context
}
// NewReader creates a new SQLite reader
func NewReader(options *readers.ReaderOptions) *Reader {
return &Reader{
options: options,
ctx: context.Background(),
}
}
// ReadDatabase reads the entire database schema from SQLite
func (r *Reader) ReadDatabase() (*models.Database, error) {
// Validate file path or connection string
dbPath := r.options.FilePath
if dbPath == "" && r.options.ConnectionString != "" {
dbPath = r.options.ConnectionString
}
if dbPath == "" {
return nil, fmt.Errorf("file path or connection string is required")
}
// Connect to the database
if err := r.connect(dbPath); err != nil {
return nil, fmt.Errorf("failed to connect: %w", err)
}
defer r.close()
// Get database name from file path
dbName := filepath.Base(dbPath)
if dbName == "" {
dbName = "sqlite"
}
// Initialize database model
db := models.InitDatabase(dbName)
db.DatabaseType = models.SqlLiteDatabaseType
db.SourceFormat = "sqlite"
// Get SQLite version
var version string
err := r.db.QueryRowContext(r.ctx, "SELECT sqlite_version()").Scan(&version)
if err == nil {
db.DatabaseVersion = version
}
// SQLite doesn't have schemas, so we create a single "main" schema
schema := models.InitSchema("main")
schema.RefDatabase = db
// Query tables
tables, err := r.queryTables()
if err != nil {
return nil, fmt.Errorf("failed to query tables: %w", err)
}
schema.Tables = tables
// Query views
views, err := r.queryViews()
if err != nil {
return nil, fmt.Errorf("failed to query views: %w", err)
}
schema.Views = views
// Query columns for tables and views
for _, table := range schema.Tables {
columns, err := r.queryColumns(table.Name)
if err != nil {
return nil, fmt.Errorf("failed to query columns for table %s: %w", table.Name, err)
}
table.Columns = columns
table.RefSchema = schema
// Query primary key
pk, err := r.queryPrimaryKey(table.Name)
if err != nil {
return nil, fmt.Errorf("failed to query primary key for table %s: %w", table.Name, err)
}
if pk != nil {
table.Constraints[pk.Name] = pk
// Mark columns as primary key and not null
for _, colName := range pk.Columns {
if col, exists := table.Columns[colName]; exists {
col.IsPrimaryKey = true
col.NotNull = true
}
}
}
// Query foreign keys
foreignKeys, err := r.queryForeignKeys(table.Name)
if err != nil {
return nil, fmt.Errorf("failed to query foreign keys for table %s: %w", table.Name, err)
}
for _, fk := range foreignKeys {
table.Constraints[fk.Name] = fk
// Derive relationship from foreign key
r.deriveRelationship(table, fk)
}
// Query indexes
indexes, err := r.queryIndexes(table.Name)
if err != nil {
return nil, fmt.Errorf("failed to query indexes for table %s: %w", table.Name, err)
}
for _, idx := range indexes {
table.Indexes[idx.Name] = idx
}
}
// Query columns for views
for _, view := range schema.Views {
columns, err := r.queryColumns(view.Name)
if err != nil {
return nil, fmt.Errorf("failed to query columns for view %s: %w", view.Name, err)
}
view.Columns = columns
view.RefSchema = schema
}
// Add schema to database
db.Schemas = append(db.Schemas, schema)
return db, nil
}
// ReadSchema reads a single schema (returns the main schema from the database)
func (r *Reader) ReadSchema() (*models.Schema, error) {
db, err := r.ReadDatabase()
if err != nil {
return nil, err
}
if len(db.Schemas) == 0 {
return nil, fmt.Errorf("no schemas found in database")
}
return db.Schemas[0], nil
}
// ReadTable reads a single table (returns the first table from the schema)
func (r *Reader) ReadTable() (*models.Table, error) {
schema, err := r.ReadSchema()
if err != nil {
return nil, err
}
if len(schema.Tables) == 0 {
return nil, fmt.Errorf("no tables found in schema")
}
return schema.Tables[0], nil
}
// connect establishes a connection to the SQLite database
func (r *Reader) connect(dbPath string) error {
db, err := sql.Open("sqlite", dbPath)
if err != nil {
return err
}
r.db = db
return nil
}
// close closes the database connection
func (r *Reader) close() {
if r.db != nil {
r.db.Close()
}
}
// mapDataType maps SQLite data types to canonical types
func (r *Reader) mapDataType(sqliteType string) string {
// SQLite has a flexible type system, but we map common types
typeMap := map[string]string{
"INTEGER": "int",
"INT": "int",
"TINYINT": "int8",
"SMALLINT": "int16",
"MEDIUMINT": "int",
"BIGINT": "int64",
"UNSIGNED BIG INT": "uint64",
"INT2": "int16",
"INT8": "int64",
"REAL": "float64",
"DOUBLE": "float64",
"DOUBLE PRECISION": "float64",
"FLOAT": "float32",
"NUMERIC": "decimal",
"DECIMAL": "decimal",
"BOOLEAN": "bool",
"BOOL": "bool",
"DATE": "date",
"DATETIME": "timestamp",
"TIMESTAMP": "timestamp",
"TEXT": "string",
"VARCHAR": "string",
"CHAR": "string",
"CHARACTER": "string",
"VARYING CHARACTER": "string",
"NCHAR": "string",
"NVARCHAR": "string",
"CLOB": "text",
"BLOB": "bytea",
}
// Try exact match first
if mapped, exists := typeMap[sqliteType]; exists {
return mapped
}
// Fall back to a prefix match so parameterized types resolve to their base type
// (e.g., "VARCHAR(255)" -> "VARCHAR"); the caller has already upper-cased the input
for baseType, mapped := range typeMap {
if len(sqliteType) >= len(baseType) && sqliteType[:len(baseType)] == baseType {
return mapped
}
}
// Default to string for unknown types
return "string"
}
// deriveRelationship creates a relationship from a foreign key constraint
func (r *Reader) deriveRelationship(table *models.Table, fk *models.Constraint) {
relationshipName := fmt.Sprintf("%s_to_%s", table.Name, fk.ReferencedTable)
relationship := models.InitRelationship(relationshipName, models.OneToMany)
relationship.FromTable = table.Name
relationship.FromSchema = table.Schema
relationship.ToTable = fk.ReferencedTable
relationship.ToSchema = fk.ReferencedSchema
relationship.ForeignKey = fk.Name
// Store constraint actions in properties
if fk.OnDelete != "" {
relationship.Properties["on_delete"] = fk.OnDelete
}
if fk.OnUpdate != "" {
relationship.Properties["on_update"] = fk.OnUpdate
}
table.Relationships[relationshipName] = relationship
}


@@ -0,0 +1,334 @@
package sqlite
import (
"database/sql"
"os"
"path/filepath"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"git.warky.dev/wdevs/relspecgo/pkg/models"
"git.warky.dev/wdevs/relspecgo/pkg/readers"
)
// setupTestDatabase creates a temporary SQLite database with test data
func setupTestDatabase(t *testing.T) string {
tmpDir := t.TempDir()
dbPath := filepath.Join(tmpDir, "test.db")
db, err := sql.Open("sqlite", dbPath)
require.NoError(t, err)
defer db.Close()
// Create test schema
schema := `
PRAGMA foreign_keys = ON;
CREATE TABLE users (
id INTEGER PRIMARY KEY AUTOINCREMENT,
username VARCHAR(50) NOT NULL UNIQUE,
email VARCHAR(100) NOT NULL,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE posts (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_id INTEGER NOT NULL,
title VARCHAR(200) NOT NULL,
content TEXT,
published BOOLEAN DEFAULT 0,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE
);
CREATE TABLE comments (
id INTEGER PRIMARY KEY AUTOINCREMENT,
post_id INTEGER NOT NULL,
user_id INTEGER NOT NULL,
comment TEXT NOT NULL,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY (post_id) REFERENCES posts(id) ON DELETE CASCADE,
FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE
);
CREATE INDEX idx_posts_user_id ON posts(user_id);
CREATE INDEX idx_comments_post_id ON comments(post_id);
CREATE UNIQUE INDEX idx_users_email ON users(email);
CREATE VIEW user_post_count AS
SELECT u.id, u.username, COUNT(p.id) as post_count
FROM users u
LEFT JOIN posts p ON u.id = p.user_id
GROUP BY u.id, u.username;
`
_, err = db.Exec(schema)
require.NoError(t, err)
return dbPath
}
func TestReader_ReadDatabase(t *testing.T) {
dbPath := setupTestDatabase(t)
defer os.Remove(dbPath)
options := &readers.ReaderOptions{
FilePath: dbPath,
}
reader := NewReader(options)
db, err := reader.ReadDatabase()
require.NoError(t, err)
require.NotNil(t, db)
// Check database metadata
assert.Equal(t, "test.db", db.Name)
assert.Equal(t, models.SqlLiteDatabaseType, db.DatabaseType)
assert.Equal(t, "sqlite", db.SourceFormat)
assert.NotEmpty(t, db.DatabaseVersion)
// Check schemas (SQLite should have a single "main" schema)
require.Len(t, db.Schemas, 1)
schema := db.Schemas[0]
assert.Equal(t, "main", schema.Name)
// Check tables
assert.Len(t, schema.Tables, 3)
tableNames := make([]string, len(schema.Tables))
for i, table := range schema.Tables {
tableNames[i] = table.Name
}
assert.Contains(t, tableNames, "users")
assert.Contains(t, tableNames, "posts")
assert.Contains(t, tableNames, "comments")
// Check views
assert.Len(t, schema.Views, 1)
assert.Equal(t, "user_post_count", schema.Views[0].Name)
assert.NotEmpty(t, schema.Views[0].Definition)
}
func TestReader_ReadTable_Users(t *testing.T) {
dbPath := setupTestDatabase(t)
defer os.Remove(dbPath)
options := &readers.ReaderOptions{
FilePath: dbPath,
}
reader := NewReader(options)
db, err := reader.ReadDatabase()
require.NoError(t, err)
require.NotNil(t, db)
// Find users table
var usersTable *models.Table
for _, table := range db.Schemas[0].Tables {
if table.Name == "users" {
usersTable = table
break
}
}
require.NotNil(t, usersTable)
assert.Equal(t, "users", usersTable.Name)
assert.Equal(t, "main", usersTable.Schema)
// Check columns
assert.Len(t, usersTable.Columns, 4)
// Check id column
idCol, exists := usersTable.Columns["id"]
require.True(t, exists)
assert.Equal(t, "int", idCol.Type)
assert.True(t, idCol.IsPrimaryKey)
assert.True(t, idCol.AutoIncrement)
assert.True(t, idCol.NotNull)
// Check username column
usernameCol, exists := usersTable.Columns["username"]
require.True(t, exists)
assert.Equal(t, "string", usernameCol.Type)
assert.True(t, usernameCol.NotNull)
assert.False(t, usernameCol.IsPrimaryKey)
// Check email column
emailCol, exists := usersTable.Columns["email"]
require.True(t, exists)
assert.Equal(t, "string", emailCol.Type)
assert.True(t, emailCol.NotNull)
// Check primary key constraint
assert.Len(t, usersTable.Constraints, 1)
pkConstraint, exists := usersTable.Constraints["users_pkey"]
require.True(t, exists)
assert.Equal(t, models.PrimaryKeyConstraint, pkConstraint.Type)
assert.Equal(t, []string{"id"}, pkConstraint.Columns)
// Check indexes (the explicit unique index on email should be present; indexes auto-generated from constraints are skipped by the reader)
assert.GreaterOrEqual(t, len(usersTable.Indexes), 1)
}
func TestReader_ReadTable_Posts(t *testing.T) {
dbPath := setupTestDatabase(t)
defer os.Remove(dbPath)
options := &readers.ReaderOptions{
FilePath: dbPath,
}
reader := NewReader(options)
db, err := reader.ReadDatabase()
require.NoError(t, err)
require.NotNil(t, db)
// Find posts table
var postsTable *models.Table
for _, table := range db.Schemas[0].Tables {
if table.Name == "posts" {
postsTable = table
break
}
}
require.NotNil(t, postsTable)
// Check columns
assert.Len(t, postsTable.Columns, 6)
// Check foreign key constraint
hasForeignKey := false
for _, constraint := range postsTable.Constraints {
if constraint.Type == models.ForeignKeyConstraint {
hasForeignKey = true
assert.Equal(t, "users", constraint.ReferencedTable)
assert.Equal(t, "CASCADE", constraint.OnDelete)
}
}
assert.True(t, hasForeignKey, "Posts table should have a foreign key constraint")
// Check relationships
assert.GreaterOrEqual(t, len(postsTable.Relationships), 1)
// Check indexes
hasUserIdIndex := false
for _, index := range postsTable.Indexes {
if index.Name == "idx_posts_user_id" {
hasUserIdIndex = true
assert.Contains(t, index.Columns, "user_id")
}
}
assert.True(t, hasUserIdIndex, "Posts table should have idx_posts_user_id index")
}
func TestReader_ReadTable_Comments(t *testing.T) {
dbPath := setupTestDatabase(t)
defer os.Remove(dbPath)
options := &readers.ReaderOptions{
FilePath: dbPath,
}
reader := NewReader(options)
db, err := reader.ReadDatabase()
require.NoError(t, err)
require.NotNil(t, db)
// Find comments table
var commentsTable *models.Table
for _, table := range db.Schemas[0].Tables {
if table.Name == "comments" {
commentsTable = table
break
}
}
require.NotNil(t, commentsTable)
// Check foreign key constraints (should have 2)
fkCount := 0
for _, constraint := range commentsTable.Constraints {
if constraint.Type == models.ForeignKeyConstraint {
fkCount++
}
}
assert.Equal(t, 2, fkCount, "Comments table should have 2 foreign key constraints")
}
func TestReader_ReadSchema(t *testing.T) {
dbPath := setupTestDatabase(t)
defer os.Remove(dbPath)
options := &readers.ReaderOptions{
FilePath: dbPath,
}
reader := NewReader(options)
schema, err := reader.ReadSchema()
require.NoError(t, err)
require.NotNil(t, schema)
assert.Equal(t, "main", schema.Name)
assert.Len(t, schema.Tables, 3)
assert.Len(t, schema.Views, 1)
}
func TestReader_ReadTable(t *testing.T) {
dbPath := setupTestDatabase(t)
defer os.Remove(dbPath)
options := &readers.ReaderOptions{
FilePath: dbPath,
}
reader := NewReader(options)
table, err := reader.ReadTable()
require.NoError(t, err)
require.NotNil(t, table)
assert.NotEmpty(t, table.Name)
assert.NotEmpty(t, table.Columns)
}
func TestReader_ConnectionString(t *testing.T) {
dbPath := setupTestDatabase(t)
defer os.Remove(dbPath)
options := &readers.ReaderOptions{
ConnectionString: dbPath,
}
reader := NewReader(options)
db, err := reader.ReadDatabase()
require.NoError(t, err)
require.NotNil(t, db)
assert.Len(t, db.Schemas, 1)
}
func TestReader_InvalidPath(t *testing.T) {
options := &readers.ReaderOptions{
FilePath: "/nonexistent/path/to/database.db",
}
reader := NewReader(options)
_, err := reader.ReadDatabase()
assert.Error(t, err)
}
func TestReader_MissingPath(t *testing.T) {
options := &readers.ReaderOptions{}
reader := NewReader(options)
_, err := reader.ReadDatabase()
assert.Error(t, err)
assert.Contains(t, err.Error(), "file path or connection string is required")
}

pkg/reflectutil/doc.go

@@ -0,0 +1,36 @@
// Package reflectutil provides reflection utilities for analyzing Go code structures.
//
// # Overview
//
// The reflectutil package offers helper functions for working with Go's reflection
// capabilities, particularly for parsing Go struct definitions and extracting type
// information. This is used by readers that parse ORM model files.
//
// # Features
//
// - Struct tag parsing and extraction
// - Type information analysis
// - Field metadata extraction
// - ORM tag interpretation (GORM, Bun, etc.)
//
// # Usage
//
// This package is primarily used internally by readers like GORM and Bun to parse
// Go struct definitions and convert them to database schema models.
//
// // Example: Parse struct tags
// tags := reflectutil.ParseStructTags(field)
// columnName := tags.Get("db")
//
// # Supported ORM Tags
//
// The package understands tag conventions from:
// - GORM (gorm tag)
// - Bun (bun tag)
// - Standard database/sql (db tag)
//
// # Purpose
//
// This package enables RelSpec to read existing ORM models and convert them to
// a unified schema representation for transformation to other formats.
package reflectutil
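
For readers new to struct tags, the standard-library primitives that helpers like ParseStructTags wrap look like this. A self-contained sketch using only the reflect package (the tag keys mirror the ORMs listed above); this is not the package's own implementation:

```go
package main

import (
	"fmt"
	"reflect"
)

type User struct {
	ID    int64  `bun:"id,pk,autoincrement" db:"id"`
	Email string `bun:"email,notnull" db:"email"`
}

func main() {
	t := reflect.TypeOf(User{})
	for i := 0; i < t.NumField(); i++ {
		field := t.Field(i)
		// Tag.Get returns the value for one key, e.g. the `bun:"..."` portion.
		fmt.Printf("%s: bun=%q db=%q\n", field.Name, field.Tag.Get("bun"), field.Tag.Get("db"))
	}
}
```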

pkg/transform/doc.go

@@ -0,0 +1,34 @@
// Package transform provides validation and transformation utilities for database models.
//
// # Overview
//
// The transform package contains a Transformer type that provides methods for validating
// and normalizing database schemas. It ensures schema correctness and consistency across
// different format conversions.
//
// # Features
//
// - Database validation (structure and naming conventions)
// - Schema validation (completeness and integrity)
// - Table validation (column definitions and constraints)
// - Data type normalization
//
// # Usage
//
// transformer := transform.NewTransformer()
// err := transformer.ValidateDatabase(db)
// if err != nil {
// log.Fatal("Invalid database schema:", err)
// }
//
// # Validation Scope
//
// The transformer validates:
// - Required fields presence
// - Naming convention adherence
// - Data type compatibility
// - Constraint consistency
// - Relationship integrity
//
// Note: Some validation methods are currently stubs and will be implemented as needed.
package transform

pkg/ui/doc.go

@@ -0,0 +1,57 @@
// Package ui provides an interactive terminal user interface (TUI) for editing database schemas.
//
// # Overview
//
// The ui package implements a full-featured terminal-based schema editor using tview,
// allowing users to visually create, modify, and manage database schemas without writing
// code or SQL.
//
// # Features
//
// The schema editor supports:
// - Database management: Edit name, description, and properties
// - Schema management: Create, edit, delete schemas
// - Table management: Create, edit, delete tables
// - Column management: Add, modify, delete columns with full property support
// - Relationship management: Define and edit table relationships
// - Domain management: Organize tables into logical domains
// - Import & merge: Combine schemas from multiple sources
// - Save: Export to any supported format
//
// # Architecture
//
// The package is organized into several components:
// - editor.go: Main editor and application lifecycle
// - *_screens.go: UI screens for each entity type
// - *_dataops.go: Business logic and data operations
// - dialogs.go: Reusable dialog components
// - load_save_screens.go: File I/O and format selection
// - main_menu.go: Primary navigation menu
//
// # Usage
//
// editor := ui.NewSchemaEditor(database)
// if err := editor.Run(); err != nil {
// log.Fatal(err)
// }
//
// Or with pre-configured load/save options:
//
// editor := ui.NewSchemaEditorWithConfigs(database, loadConfig, saveConfig)
// if err := editor.Run(); err != nil {
// log.Fatal(err)
// }
//
// # Navigation
//
// - Arrow keys: Navigate between items
// - Enter: Select/edit item
// - Tab/Shift+Tab: Navigate between buttons
// - Escape: Go back/cancel
// - Letter shortcuts: Quick actions (e.g., 'n' for new, 'e' for edit, 'd' for delete)
//
// # Integration
//
// The editor integrates with all readers and writers, supporting load/save operations
// for any format supported by RelSpec (DBML, PostgreSQL, GORM, Prisma, etc.).
package ui

pkg/ui/relation_dataops.go

@@ -0,0 +1,115 @@
package ui
import "git.warky.dev/wdevs/relspecgo/pkg/models"
// Relationship data operations - business logic for relationship management
// CreateRelationship creates a new relationship and adds it to a table
func (se *SchemaEditor) CreateRelationship(schemaIndex, tableIndex int, rel *models.Relationship) *models.Relationship {
if schemaIndex < 0 || schemaIndex >= len(se.db.Schemas) {
return nil
}
schema := se.db.Schemas[schemaIndex]
if tableIndex < 0 || tableIndex >= len(schema.Tables) {
return nil
}
table := schema.Tables[tableIndex]
if table.Relationships == nil {
table.Relationships = make(map[string]*models.Relationship)
}
table.Relationships[rel.Name] = rel
table.UpdateDate()
return rel
}
// UpdateRelationship updates an existing relationship
func (se *SchemaEditor) UpdateRelationship(schemaIndex, tableIndex int, oldName string, rel *models.Relationship) bool {
if schemaIndex < 0 || schemaIndex >= len(se.db.Schemas) {
return false
}
schema := se.db.Schemas[schemaIndex]
if tableIndex < 0 || tableIndex >= len(schema.Tables) {
return false
}
table := schema.Tables[tableIndex]
if table.Relationships == nil {
return false
}
// Delete old entry if name changed
if oldName != rel.Name {
delete(table.Relationships, oldName)
}
table.Relationships[rel.Name] = rel
table.UpdateDate()
return true
}
// DeleteRelationship removes a relationship from a table
func (se *SchemaEditor) DeleteRelationship(schemaIndex, tableIndex int, relName string) bool {
if schemaIndex < 0 || schemaIndex >= len(se.db.Schemas) {
return false
}
schema := se.db.Schemas[schemaIndex]
if tableIndex < 0 || tableIndex >= len(schema.Tables) {
return false
}
table := schema.Tables[tableIndex]
if table.Relationships == nil {
return false
}
delete(table.Relationships, relName)
table.UpdateDate()
return true
}
// GetRelationship returns a relationship by name
func (se *SchemaEditor) GetRelationship(schemaIndex, tableIndex int, relName string) *models.Relationship {
if schemaIndex < 0 || schemaIndex >= len(se.db.Schemas) {
return nil
}
schema := se.db.Schemas[schemaIndex]
if tableIndex < 0 || tableIndex >= len(schema.Tables) {
return nil
}
table := schema.Tables[tableIndex]
if table.Relationships == nil {
return nil
}
return table.Relationships[relName]
}
// GetRelationshipNames returns all relationship names for a table
func (se *SchemaEditor) GetRelationshipNames(schemaIndex, tableIndex int) []string {
if schemaIndex < 0 || schemaIndex >= len(se.db.Schemas) {
return nil
}
schema := se.db.Schemas[schemaIndex]
if tableIndex < 0 || tableIndex >= len(schema.Tables) {
return nil
}
table := schema.Tables[tableIndex]
if table.Relationships == nil {
return nil
}
names := make([]string, 0, len(table.Relationships))
for name := range table.Relationships {
names = append(names, name)
}
return names
}
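
A usage sketch of these operations. The index arguments address se.db.Schemas and schema.Tables by position; InitRelationship and the field names match their usage in the screens below:

```go
// Create a has-many style link from users to posts on the first schema/table.
rel := models.InitRelationship("users_to_posts", models.OneToMany)
rel.FromTable = "users"
rel.FromColumns = []string{"id"}
rel.ToTable = "posts"
rel.ToColumns = []string{"user_id"}

if created := se.CreateRelationship(0, 0, rel); created == nil {
	// Out-of-range schema or table index.
}

// Rename it later; UpdateRelationship removes the old map key first.
rel.Name = "users_posts"
_ = se.UpdateRelationship(0, 0, "users_to_posts", rel)
```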

pkg/ui/relation_screens.go

@@ -0,0 +1,486 @@
package ui
import (
"fmt"
"strings"
"github.com/gdamore/tcell/v2"
"github.com/rivo/tview"
"git.warky.dev/wdevs/relspecgo/pkg/models"
)
// showRelationshipList displays all relationships for a table
func (se *SchemaEditor) showRelationshipList(schemaIndex, tableIndex int) {
table := se.GetTable(schemaIndex, tableIndex)
if table == nil {
return
}
flex := tview.NewFlex().SetDirection(tview.FlexRow)
// Title
title := tview.NewTextView().
SetText(fmt.Sprintf("[::b]Relationships for Table: %s", table.Name)).
SetDynamicColors(true).
SetTextAlign(tview.AlignCenter)
// Create relationships table
relTable := tview.NewTable().SetBorders(true).SetSelectable(true, false).SetFixed(1, 0)
// Add header row
headers := []string{"Name", "Type", "From Columns", "To Table", "To Columns", "Description"}
headerWidths := []int{20, 15, 20, 20, 20}
for i, header := range headers {
padding := ""
if i < len(headerWidths) {
padding = strings.Repeat(" ", headerWidths[i]-len(header))
}
cell := tview.NewTableCell(header + padding).
SetTextColor(tcell.ColorYellow).
SetSelectable(false).
SetAlign(tview.AlignLeft)
relTable.SetCell(0, i, cell)
}
// Get relationship names
relNames := se.GetRelationshipNames(schemaIndex, tableIndex)
for row, relName := range relNames {
rel := table.Relationships[relName]
// Name
nameStr := fmt.Sprintf("%-20s", rel.Name)
nameCell := tview.NewTableCell(nameStr).SetSelectable(true)
relTable.SetCell(row+1, 0, nameCell)
// Type
typeStr := fmt.Sprintf("%-15s", string(rel.Type))
typeCell := tview.NewTableCell(typeStr).SetSelectable(true)
relTable.SetCell(row+1, 1, typeCell)
// From Columns
fromColsStr := strings.Join(rel.FromColumns, ", ")
fromColsStr = fmt.Sprintf("%-20s", fromColsStr)
fromColsCell := tview.NewTableCell(fromColsStr).SetSelectable(true)
relTable.SetCell(row+1, 2, fromColsCell)
// To Table
toTableStr := rel.ToTable
if rel.ToSchema != "" && rel.ToSchema != table.Schema {
toTableStr = rel.ToSchema + "." + rel.ToTable
}
toTableStr = fmt.Sprintf("%-20s", toTableStr)
toTableCell := tview.NewTableCell(toTableStr).SetSelectable(true)
relTable.SetCell(row+1, 3, toTableCell)
// To Columns
toColsStr := strings.Join(rel.ToColumns, ", ")
toColsStr = fmt.Sprintf("%-20s", toColsStr)
toColsCell := tview.NewTableCell(toColsStr).SetSelectable(true)
relTable.SetCell(row+1, 4, toColsCell)
// Description
descCell := tview.NewTableCell(rel.Description).SetSelectable(true)
relTable.SetCell(row+1, 5, descCell)
}
relTable.SetTitle(" Relationships ").SetBorder(true).SetTitleAlign(tview.AlignLeft)
// Action buttons
btnFlex := tview.NewFlex()
btnNew := tview.NewButton("New Relationship [n]").SetSelectedFunc(func() {
se.showNewRelationshipDialog(schemaIndex, tableIndex)
})
btnEdit := tview.NewButton("Edit [e]").SetSelectedFunc(func() {
row, _ := relTable.GetSelection()
if row > 0 && row <= len(relNames) {
relName := relNames[row-1]
se.showEditRelationshipDialog(schemaIndex, tableIndex, relName)
}
})
btnDelete := tview.NewButton("Delete [d]").SetSelectedFunc(func() {
row, _ := relTable.GetSelection()
if row > 0 && row <= len(relNames) {
relName := relNames[row-1]
se.showDeleteRelationshipConfirm(schemaIndex, tableIndex, relName)
}
})
btnBack := tview.NewButton("Back [b]").SetSelectedFunc(func() {
se.pages.RemovePage("relationships")
se.pages.SwitchToPage("table-editor")
})
// Set up button navigation
btnNew.SetInputCapture(func(event *tcell.EventKey) *tcell.EventKey {
if event.Key() == tcell.KeyBacktab {
se.app.SetFocus(relTable)
return nil
}
if event.Key() == tcell.KeyTab {
se.app.SetFocus(btnEdit)
return nil
}
return event
})
btnEdit.SetInputCapture(func(event *tcell.EventKey) *tcell.EventKey {
if event.Key() == tcell.KeyBacktab {
se.app.SetFocus(btnNew)
return nil
}
if event.Key() == tcell.KeyTab {
se.app.SetFocus(btnDelete)
return nil
}
return event
})
btnDelete.SetInputCapture(func(event *tcell.EventKey) *tcell.EventKey {
if event.Key() == tcell.KeyBacktab {
se.app.SetFocus(btnEdit)
return nil
}
if event.Key() == tcell.KeyTab {
se.app.SetFocus(btnBack)
return nil
}
return event
})
btnBack.SetInputCapture(func(event *tcell.EventKey) *tcell.EventKey {
if event.Key() == tcell.KeyBacktab {
se.app.SetFocus(btnDelete)
return nil
}
if event.Key() == tcell.KeyTab {
se.app.SetFocus(relTable)
return nil
}
return event
})
btnFlex.AddItem(btnNew, 0, 1, true).
AddItem(btnEdit, 0, 1, false).
AddItem(btnDelete, 0, 1, false).
AddItem(btnBack, 0, 1, false)
relTable.SetInputCapture(func(event *tcell.EventKey) *tcell.EventKey {
if event.Key() == tcell.KeyEscape {
se.pages.RemovePage("relationships")
se.pages.SwitchToPage("table-editor")
return nil
}
if event.Key() == tcell.KeyTab {
se.app.SetFocus(btnNew)
return nil
}
if event.Key() == tcell.KeyEnter {
row, _ := relTable.GetSelection()
if row > 0 && row <= len(relNames) {
relName := relNames[row-1]
se.showEditRelationshipDialog(schemaIndex, tableIndex, relName)
}
return nil
}
if event.Rune() == 'n' {
se.showNewRelationshipDialog(schemaIndex, tableIndex)
return nil
}
if event.Rune() == 'e' {
row, _ := relTable.GetSelection()
if row > 0 && row <= len(relNames) {
relName := relNames[row-1]
se.showEditRelationshipDialog(schemaIndex, tableIndex, relName)
}
return nil
}
if event.Rune() == 'd' {
row, _ := relTable.GetSelection()
if row > 0 && row <= len(relNames) {
relName := relNames[row-1]
se.showDeleteRelationshipConfirm(schemaIndex, tableIndex, relName)
}
return nil
}
if event.Rune() == 'b' {
se.pages.RemovePage("relationships")
se.pages.SwitchToPage("table-editor")
return nil
}
return event
})
flex.AddItem(title, 1, 0, false).
AddItem(relTable, 0, 1, true).
AddItem(btnFlex, 1, 0, false)
se.pages.AddPage("relationships", flex, true, true)
}
// showNewRelationshipDialog shows dialog to create a new relationship
func (se *SchemaEditor) showNewRelationshipDialog(schemaIndex, tableIndex int) {
table := se.GetTable(schemaIndex, tableIndex)
if table == nil {
return
}
form := tview.NewForm()
// Collect all tables for dropdown
var allTables []string
var tableMap []struct{ schemaIdx, tableIdx int }
for si, schema := range se.db.Schemas {
for ti, t := range schema.Tables {
tableName := t.Name
if schema.Name != table.Schema {
tableName = schema.Name + "." + t.Name
}
allTables = append(allTables, tableName)
tableMap = append(tableMap, struct{ schemaIdx, tableIdx int }{si, ti})
}
}
relName := ""
relType := models.OneToMany
fromColumns := ""
toColumns := ""
description := ""
selectedTableIdx := 0
form.AddInputField("Name", "", 40, nil, func(value string) {
relName = value
})
form.AddDropDown("Type", []string{
string(models.OneToOne),
string(models.OneToMany),
string(models.ManyToMany),
}, 1, func(option string, optionIndex int) {
relType = models.RelationType(option)
})
form.AddInputField("From Columns (comma-separated)", "", 40, nil, func(value string) {
fromColumns = value
})
form.AddDropDown("To Table", allTables, 0, func(option string, optionIndex int) {
selectedTableIdx = optionIndex
})
form.AddInputField("To Columns (comma-separated)", "", 40, nil, func(value string) {
toColumns = value
})
form.AddInputField("Description", "", 60, nil, func(value string) {
description = value
})
form.AddButton("Save", func() {
if relName == "" {
return
}
// Parse columns
fromCols := strings.Split(fromColumns, ",")
for i := range fromCols {
fromCols[i] = strings.TrimSpace(fromCols[i])
}
toCols := strings.Split(toColumns, ",")
for i := range toCols {
toCols[i] = strings.TrimSpace(toCols[i])
}
// Get target table
targetSchema := se.db.Schemas[tableMap[selectedTableIdx].schemaIdx]
targetTable := targetSchema.Tables[tableMap[selectedTableIdx].tableIdx]
rel := models.InitRelationship(relName, relType)
rel.FromTable = table.Name
rel.FromSchema = table.Schema
rel.FromColumns = fromCols
rel.ToTable = targetTable.Name
rel.ToSchema = targetTable.Schema
rel.ToColumns = toCols
rel.Description = description
se.CreateRelationship(schemaIndex, tableIndex, rel)
se.pages.RemovePage("new-relationship")
se.pages.RemovePage("relationships")
se.showRelationshipList(schemaIndex, tableIndex)
})
form.AddButton("Back", func() {
se.pages.RemovePage("new-relationship")
})
form.SetBorder(true).SetTitle(" New Relationship ").SetTitleAlign(tview.AlignLeft)
form.SetInputCapture(func(event *tcell.EventKey) *tcell.EventKey {
if event.Key() == tcell.KeyEscape {
se.pages.RemovePage("new-relationship")
return nil
}
return event
})
se.pages.AddPage("new-relationship", form, true, true)
}
// showEditRelationshipDialog shows dialog to edit a relationship
func (se *SchemaEditor) showEditRelationshipDialog(schemaIndex, tableIndex int, relName string) {
table := se.GetTable(schemaIndex, tableIndex)
if table == nil {
return
}
rel := se.GetRelationship(schemaIndex, tableIndex, relName)
if rel == nil {
return
}
form := tview.NewForm()
// Collect all tables for dropdown
var allTables []string
var tableMap []struct{ schemaIdx, tableIdx int }
selectedTableIdx := 0
for si, schema := range se.db.Schemas {
for ti, t := range schema.Tables {
tableName := t.Name
if schema.Name != table.Schema {
tableName = schema.Name + "." + t.Name
}
allTables = append(allTables, tableName)
tableMap = append(tableMap, struct{ schemaIdx, tableIdx int }{si, ti})
// Check if this is the current target table
if t.Name == rel.ToTable && schema.Name == rel.ToSchema {
selectedTableIdx = len(allTables) - 1
}
}
}
newName := rel.Name
relType := rel.Type
fromColumns := strings.Join(rel.FromColumns, ", ")
toColumns := strings.Join(rel.ToColumns, ", ")
description := rel.Description
form.AddInputField("Name", rel.Name, 40, nil, func(value string) {
newName = value
})
// Find initial type index
typeIdx := 1 // OneToMany default
typeOptions := []string{
string(models.OneToOne),
string(models.OneToMany),
string(models.ManyToMany),
}
for i, opt := range typeOptions {
if opt == string(rel.Type) {
typeIdx = i
break
}
}
form.AddDropDown("Type", typeOptions, typeIdx, func(option string, optionIndex int) {
relType = models.RelationType(option)
})
form.AddInputField("From Columns (comma-separated)", fromColumns, 40, nil, func(value string) {
fromColumns = value
})
form.AddDropDown("To Table", allTables, selectedTableIdx, func(option string, optionIndex int) {
selectedTableIdx = optionIndex
})
form.AddInputField("To Columns (comma-separated)", toColumns, 40, nil, func(value string) {
toColumns = value
})
form.AddInputField("Description", rel.Description, 60, nil, func(value string) {
description = value
})
form.AddButton("Save", func() {
if newName == "" {
return
}
// Parse columns
fromCols := strings.Split(fromColumns, ",")
for i := range fromCols {
fromCols[i] = strings.TrimSpace(fromCols[i])
}
toCols := strings.Split(toColumns, ",")
for i := range toCols {
toCols[i] = strings.TrimSpace(toCols[i])
}
// Get target table
targetSchema := se.db.Schemas[tableMap[selectedTableIdx].schemaIdx]
targetTable := targetSchema.Tables[tableMap[selectedTableIdx].tableIdx]
updatedRel := models.InitRelationship(newName, relType)
updatedRel.FromTable = table.Name
updatedRel.FromSchema = table.Schema
updatedRel.FromColumns = fromCols
updatedRel.ToTable = targetTable.Name
updatedRel.ToSchema = targetTable.Schema
updatedRel.ToColumns = toCols
updatedRel.Description = description
updatedRel.GUID = rel.GUID
se.UpdateRelationship(schemaIndex, tableIndex, relName, updatedRel)
se.pages.RemovePage("edit-relationship")
se.pages.RemovePage("relationships")
se.showRelationshipList(schemaIndex, tableIndex)
})
form.AddButton("Back", func() {
se.pages.RemovePage("edit-relationship")
})
form.SetBorder(true).SetTitle(" Edit Relationship ").SetTitleAlign(tview.AlignLeft)
form.SetInputCapture(func(event *tcell.EventKey) *tcell.EventKey {
if event.Key() == tcell.KeyEscape {
se.pages.RemovePage("edit-relationship")
return nil
}
return event
})
se.pages.AddPage("edit-relationship", form, true, true)
}
// showDeleteRelationshipConfirm shows confirmation dialog for deleting a relationship
func (se *SchemaEditor) showDeleteRelationshipConfirm(schemaIndex, tableIndex int, relName string) {
modal := tview.NewModal().
SetText(fmt.Sprintf("Delete relationship '%s'? This action cannot be undone.", relName)).
AddButtons([]string{"Cancel", "Delete"}).
SetDoneFunc(func(buttonIndex int, buttonLabel string) {
if buttonLabel == "Delete" {
se.DeleteRelationship(schemaIndex, tableIndex, relName)
se.pages.RemovePage("delete-relationship-confirm")
se.pages.RemovePage("relationships")
se.showRelationshipList(schemaIndex, tableIndex)
} else {
se.pages.RemovePage("delete-relationship-confirm")
}
})
modal.SetInputCapture(func(event *tcell.EventKey) *tcell.EventKey {
if event.Key() == tcell.KeyEscape {
se.pages.RemovePage("delete-relationship-confirm")
return nil
}
return event
})
se.pages.AddAndSwitchToPage("delete-relationship-confirm", modal, true)
}


@@ -270,6 +270,9 @@ func (se *SchemaEditor) showTableEditor(schemaIndex, tableIndex int, table *mode
 			se.showColumnEditor(schemaIndex, tableIndex, row-1, column)
 		}
 	})
+	btnRelations := tview.NewButton("Relations [r]").SetSelectedFunc(func() {
+		se.showRelationshipList(schemaIndex, tableIndex)
+	})
 	btnDelTable := tview.NewButton("Delete Table [d]").SetSelectedFunc(func() {
 		se.showDeleteTableConfirm(schemaIndex, tableIndex)
 	})
@@ -308,6 +311,18 @@ func (se *SchemaEditor) showTableEditor(schemaIndex, tableIndex int, table *mode
 			se.app.SetFocus(btnEditColumn)
 			return nil
 		}
+		if event.Key() == tcell.KeyTab {
+			se.app.SetFocus(btnRelations)
+			return nil
+		}
+		return event
+	})
+	btnRelations.SetInputCapture(func(event *tcell.EventKey) *tcell.EventKey {
+		if event.Key() == tcell.KeyBacktab {
+			se.app.SetFocus(btnEditTable)
+			return nil
+		}
 		if event.Key() == tcell.KeyTab {
 			se.app.SetFocus(btnDelTable)
 			return nil
@@ -317,7 +332,7 @@ func (se *SchemaEditor) showTableEditor(schemaIndex, tableIndex int, table *mode
 	btnDelTable.SetInputCapture(func(event *tcell.EventKey) *tcell.EventKey {
 		if event.Key() == tcell.KeyBacktab {
-			se.app.SetFocus(btnEditTable)
+			se.app.SetFocus(btnRelations)
 			return nil
 		}
 		if event.Key() == tcell.KeyTab {
@@ -342,6 +357,7 @@ func (se *SchemaEditor) showTableEditor(schemaIndex, tableIndex int, table *mode
 	btnFlex.AddItem(btnNewCol, 0, 1, true).
 		AddItem(btnEditColumn, 0, 1, false).
 		AddItem(btnEditTable, 0, 1, false).
+		AddItem(btnRelations, 0, 1, false).
 		AddItem(btnDelTable, 0, 1, false).
 		AddItem(btnBack, 0, 1, false)
@@ -373,6 +389,10 @@ func (se *SchemaEditor) showTableEditor(schemaIndex, tableIndex int, table *mode
 			}
 			return nil
 		}
+		if event.Rune() == 'r' {
+			se.showRelationshipList(schemaIndex, tableIndex)
+			return nil
+		}
 		if event.Rune() == 'b' {
 			se.pages.RemovePage("table-editor")
 			se.pages.SwitchToPage("schema-editor")


@@ -110,8 +110,7 @@ func NewModelData(table *models.Table, schema string, typeMapper *TypeMapper, fl
 	tableName := writers.QualifiedTableName(schema, table.Name, flattenSchema)
 
 	// Generate model name: Model + Schema + Table (all PascalCase)
-	singularTable := Singularize(table.Name)
-	tablePart := SnakeCaseToPascalCase(singularTable)
+	tablePart := SnakeCaseToPascalCase(table.Name)
 
 	// Include schema name in model name
 	var modelName string


@@ -62,6 +62,17 @@ func (tm *TypeMapper) isSimpleType(sqlType string) bool {
 	return simpleTypes[sqlType]
 }
+// isSerialType checks if a SQL type is a serial type (auto-incrementing)
+func (tm *TypeMapper) isSerialType(sqlType string) bool {
+	baseType := tm.extractBaseType(sqlType)
+	serialTypes := map[string]bool{
+		"serial":      true,
+		"bigserial":   true,
+		"smallserial": true,
+	}
+	return serialTypes[baseType]
+}
 // baseGoType returns the base Go type for a SQL type (not null, simple types only)
 func (tm *TypeMapper) baseGoType(sqlType string) string {
 	typeMap := map[string]string{
@@ -122,10 +133,10 @@ func (tm *TypeMapper) bunGoType(sqlType string) string {
 		"decimal": tm.sqlTypesAlias + ".SqlFloat64",
 		// Date/Time types
-		"timestamp":                   tm.sqlTypesAlias + ".SqlTime",
-		"timestamp without time zone": tm.sqlTypesAlias + ".SqlTime",
-		"timestamp with time zone":    tm.sqlTypesAlias + ".SqlTime",
-		"timestamptz":                 tm.sqlTypesAlias + ".SqlTime",
+		"timestamp":                   tm.sqlTypesAlias + ".SqlTimeStamp",
+		"timestamp without time zone": tm.sqlTypesAlias + ".SqlTimeStamp",
+		"timestamp with time zone":    tm.sqlTypesAlias + ".SqlTimeStamp",
+		"timestamptz":                 tm.sqlTypesAlias + ".SqlTimeStamp",
 		"date": tm.sqlTypesAlias + ".SqlDate",
 		"time": tm.sqlTypesAlias + ".SqlTime",
 		"time without time zone": tm.sqlTypesAlias + ".SqlTime",
@@ -190,6 +201,11 @@ func (tm *TypeMapper) BuildBunTag(column *models.Column, table *models.Table) st
 		parts = append(parts, "pk")
 	}
+	// Auto increment (for serial types or explicit auto_increment)
+	if column.AutoIncrement || tm.isSerialType(column.Type) {
+		parts = append(parts, "autoincrement")
+	}
 	// Default value
 	if column.Default != nil {
 		// Sanitize default value to remove backticks
@@ -251,7 +267,15 @@ func (tm *TypeMapper) BuildRelationshipTag(constraint *models.Constraint, relTyp
 	if len(constraint.Columns) > 0 && len(constraint.ReferencedColumns) > 0 {
 		localCol := constraint.Columns[0]
 		foreignCol := constraint.ReferencedColumns[0]
-		parts = append(parts, fmt.Sprintf("join:%s=%s", localCol, foreignCol))
+		// For has-many relationships, swap the columns
+		// has-one:  join:fk_in_this_table=pk_in_other_table
+		// has-many: join:pk_in_this_table=fk_in_other_table
+		if relType == "has-many" {
+			parts = append(parts, fmt.Sprintf("join:%s=%s", foreignCol, localCol))
+		} else {
+			parts = append(parts, fmt.Sprintf("join:%s=%s", localCol, foreignCol))
+		}
 	}
 	return strings.Join(parts, ",")
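
For context, a hypothetical generated field pair illustrating the corrected join direction and the new autoincrement tag (model, column, and tag layout are a sketch inferred from the tests below, not verbatim output):

```go
type ModelPublicUsers struct {
	// Serial primary key now carries the autoincrement tag
	ID int64 `bun:"id,type:bigint,pk,autoincrement"`
	// has-many: join:pk_in_this_table=fk_in_other_table
	RelPosts []*ModelPublicPosts `bun:"rel:has-many,join:id=user_id"`
}
```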


@@ -318,8 +318,7 @@ func (w *Writer) findTable(schemaName, tableName string, db *models.Database) *m
 // getModelName generates the model name from schema and table name
 func (w *Writer) getModelName(schemaName, tableName string) string {
-	singular := Singularize(tableName)
-	tablePart := SnakeCaseToPascalCase(singular)
+	tablePart := SnakeCaseToPascalCase(tableName)
 	// Include schema name in model name
 	var modelName string
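
The effect of dropping singularization, sketched with values taken from the updated tests:

```go
w.getModelName("public", "users") // "ModelPublicUsers" (previously "ModelPublicUser")
```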


@@ -66,7 +66,7 @@ func TestWriter_WriteTable(t *testing.T) {
 	// Verify key elements are present
 	expectations := []string{
 		"package models",
-		"type ModelPublicUser struct",
+		"type ModelPublicUsers struct",
 		"bun.BaseModel",
 		"table:public.users",
 		"alias:users",
@@ -78,9 +78,9 @@ func TestWriter_WriteTable(t *testing.T) {
 		"resolvespec_common.SqlTime",
 		"bun:\"id",
 		"bun:\"email",
-		"func (m ModelPublicUser) TableName() string",
+		"func (m ModelPublicUsers) TableName() string",
 		"return \"public.users\"",
-		"func (m ModelPublicUser) GetID() int64",
+		"func (m ModelPublicUsers) GetID() int64",
 	}
 	for _, expected := range expectations {
@@ -90,8 +90,8 @@ func TestWriter_WriteTable(t *testing.T) {
 	}
 	// Verify Bun-specific elements
-	if !strings.Contains(generated, "bun:\"id,type:bigint,pk,") {
-		t.Errorf("Missing Bun-style primary key tag")
+	if !strings.Contains(generated, "bun:\"id,type:bigint,pk,autoincrement,") {
+		t.Errorf("Missing Bun-style primary key tag with autoincrement")
 	}
 }
@@ -308,14 +308,20 @@ func TestWriter_MultipleReferencesToSameTable(t *testing.T) {
 	filepointerStr := string(filepointerContent)
 	// Should have two different has-many relationships with unique names
-	hasManyExpectations := []string{
-		"RelRIDFilepointerRequestOrgAPIEvents",  // Has many via rid_filepointer_request
-		"RelRIDFilepointerResponseOrgAPIEvents", // Has many via rid_filepointer_response
+	hasManyExpectations := []struct {
+		fieldName string
+		tag       string
+	}{
+		{"RelRIDFilepointerRequestOrgAPIEvents", "join:id_filepointer=rid_filepointer_request"},   // Has many via rid_filepointer_request
+		{"RelRIDFilepointerResponseOrgAPIEvents", "join:id_filepointer=rid_filepointer_response"}, // Has many via rid_filepointer_response
 	}
 	for _, exp := range hasManyExpectations {
-		if !strings.Contains(filepointerStr, exp) {
-			t.Errorf("Missing has-many relationship field: %s\nGenerated:\n%s", exp, filepointerStr)
+		if !strings.Contains(filepointerStr, exp.fieldName) {
+			t.Errorf("Missing has-many relationship field: %s\nGenerated:\n%s", exp.fieldName, filepointerStr)
+		}
+		if !strings.Contains(filepointerStr, exp.tag) {
+			t.Errorf("Missing has-many relationship join tag: %s\nGenerated:\n%s", exp.tag, filepointerStr)
 		}
 	}
 }
@@ -561,8 +567,8 @@ func TestTypeMapper_SQLTypeToGoType_Bun(t *testing.T) {
 		{"bigint", false, "resolvespec_common.SqlInt64"},
 		{"varchar", true, "resolvespec_common.SqlString"}, // Bun uses sql types even for NOT NULL strings
 		{"varchar", false, "resolvespec_common.SqlString"},
-		{"timestamp", true, "resolvespec_common.SqlTime"},
-		{"timestamp", false, "resolvespec_common.SqlTime"},
+		{"timestamp", true, "resolvespec_common.SqlTimeStamp"},
+		{"timestamp", false, "resolvespec_common.SqlTimeStamp"},
 		{"date", false, "resolvespec_common.SqlDate"},
 		{"boolean", true, "bool"},
 		{"boolean", false, "resolvespec_common.SqlBool"},
@@ -618,6 +624,37 @@ func TestTypeMapper_BuildBunTag(t *testing.T) {
 			},
 			want: []string{"status,", "type:text,", "default:active,"},
 		},
+		{
+			name: "auto increment with AutoIncrement flag",
+			column: &models.Column{
+				Name:          "id",
+				Type:          "bigint",
+				NotNull:       true,
+				IsPrimaryKey:  true,
+				AutoIncrement: true,
+			},
+			want: []string{"id,", "type:bigint,", "pk,", "autoincrement,"},
+		},
+		{
+			name: "serial type (auto-increment)",
+			column: &models.Column{
+				Name:         "id",
+				Type:         "serial",
+				NotNull:      true,
+				IsPrimaryKey: true,
+			},
+			want: []string{"id,", "type:serial,", "pk,", "autoincrement,"},
+		},
+		{
+			name: "bigserial type (auto-increment)",
+			column: &models.Column{
+				Name:         "id",
+				Type:         "bigserial",
+				NotNull:      true,
+				IsPrimaryKey: true,
+			},
+			want: []string{"id,", "type:bigserial,", "pk,", "autoincrement,"},
+		},
 	}
 	for _, tt := range tests {

pkg/writers/doc.go (new file, 67 lines)

@@ -0,0 +1,67 @@
// Package writers provides interfaces and implementations for writing database schemas
// to various output formats and destinations.
//
// # Overview
//
// The writers package defines a common Writer interface that all format-specific writers
// implement. This allows RelSpec to export database schemas to multiple formats including:
// - SQL schema files (PostgreSQL, SQLite)
// - Schema definition files (DBML, DCTX, DrawDB, GraphQL)
// - ORM model files (GORM, Bun, Drizzle, Prisma, TypeORM)
// - Data interchange formats (JSON, YAML)
//
// # Architecture
//
// Each writer implementation is located in its own subpackage (e.g., pkg/writers/dbml,
// pkg/writers/pgsql) and implements the Writer interface, supporting three levels of
// granularity:
// - WriteDatabase() - Write complete database with all schemas
// - WriteSchema() - Write single schema with all tables
// - WriteTable() - Write single table with all columns and metadata
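//
// A sketch of the interface shape this implies (method names and signatures
// assumed from the writer implementations in this repository):
//
//	type Writer interface {
//		WriteDatabase(db *models.Database) error
//		WriteSchema(schema *models.Schema) error
//		WriteTable(table *models.Table) error
//	}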
//
// # Usage
//
// Writers are instantiated with WriterOptions containing destination-specific configuration:
//
// // Write to file
// writer := dbml.NewWriter(&writers.WriterOptions{
// OutputPath: "schema.dbml",
// })
// err := writer.WriteDatabase(db)
//
// // Write ORM models with package name
// writer := gorm.NewWriter(&writers.WriterOptions{
// OutputPath: "./models",
// PackageName: "models",
// })
// err := writer.WriteDatabase(db)
//
// // Write with schema flattening for SQLite
// writer := sqlite.NewWriter(&writers.WriterOptions{
// OutputPath: "schema.sql",
// FlattenSchema: true,
// })
// err := writer.WriteDatabase(db)
//
// # Schema Flattening
//
// The FlattenSchema option controls how schema-qualified table names are handled:
// - false (default): Uses dot notation (schema.table)
// - true: Joins with underscore (schema_table), useful for SQLite
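//
// For example, using the package's QualifiedTableName helper (return values
// assumed from the behavior described above and the helper's call sites):
//
//	writers.QualifiedTableName("public", "users", false) // "public.users"
//	writers.QualifiedTableName("public", "users", true)  // "public_users"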
//
// # Supported Formats
//
// - dbml: Database Markup Language files
// - dctx: DCTX schema files
// - drawdb: DrawDB JSON format
// - graphql: GraphQL schema definition language
// - json: JSON database schema
// - yaml: YAML database schema
// - gorm: Go GORM model structs
// - bun: Go Bun model structs
// - drizzle: TypeScript Drizzle ORM schemas
// - prisma: Prisma schema language
// - typeorm: TypeScript TypeORM entities
// - pgsql: PostgreSQL SQL schema
// - sqlite: SQLite SQL schema with automatic flattening
package writers


@@ -109,8 +109,7 @@ func NewModelData(table *models.Table, schema string, typeMapper *TypeMapper, fl
 	tableName := writers.QualifiedTableName(schema, table.Name, flattenSchema)
 	// Generate model name: Model + Schema + Table (all PascalCase)
-	singularTable := Singularize(table.Name)
-	tablePart := SnakeCaseToPascalCase(singularTable)
+	tablePart := SnakeCaseToPascalCase(table.Name)
 	// Include schema name in model name
 	var modelName string


@@ -158,10 +158,10 @@ func (tm *TypeMapper) nullableGoType(sqlType string) string {
 		"decimal": tm.sqlTypesAlias + ".SqlFloat64",
 		// Date/Time types
-		"timestamp":                   tm.sqlTypesAlias + ".SqlTime",
-		"timestamp without time zone": tm.sqlTypesAlias + ".SqlTime",
-		"timestamp with time zone":    tm.sqlTypesAlias + ".SqlTime",
-		"timestamptz":                 tm.sqlTypesAlias + ".SqlTime",
+		"timestamp":                   tm.sqlTypesAlias + ".SqlTimeStamp",
+		"timestamp without time zone": tm.sqlTypesAlias + ".SqlTimeStamp",
+		"timestamp with time zone":    tm.sqlTypesAlias + ".SqlTimeStamp",
+		"timestamptz":                 tm.sqlTypesAlias + ".SqlTimeStamp",
 		"date": tm.sqlTypesAlias + ".SqlDate",
 		"time": tm.sqlTypesAlias + ".SqlTime",
 		"time without time zone": tm.sqlTypesAlias + ".SqlTime",


@@ -312,8 +312,7 @@ func (w *Writer) findTable(schemaName, tableName string, db *models.Database) *m
 // getModelName generates the model name from schema and table name
 func (w *Writer) getModelName(schemaName, tableName string) string {
-	singular := Singularize(tableName)
-	tablePart := SnakeCaseToPascalCase(singular)
+	tablePart := SnakeCaseToPascalCase(tableName)
 	// Include schema name in model name
 	var modelName string


@@ -66,7 +66,7 @@ func TestWriter_WriteTable(t *testing.T) {
 	// Verify key elements are present
 	expectations := []string{
 		"package models",
-		"type ModelPublicUser struct",
+		"type ModelPublicUsers struct",
 		"ID",
 		"int64",
 		"Email",
@@ -75,9 +75,9 @@ func TestWriter_WriteTable(t *testing.T) {
 		"time.Time",
 		"gorm:\"column:id",
 		"gorm:\"column:email",
-		"func (m ModelPublicUser) TableName() string",
+		"func (m ModelPublicUsers) TableName() string",
 		"return \"public.users\"",
-		"func (m ModelPublicUser) GetID() int64",
+		"func (m ModelPublicUsers) GetID() int64",
 	}
 	for _, expected := range expectations {
@@ -655,7 +655,7 @@ func TestTypeMapper_SQLTypeToGoType(t *testing.T) {
 		{"varchar", true, "string"},
 		{"varchar", false, "sql_types.SqlString"},
 		{"timestamp", true, "time.Time"},
-		{"timestamp", false, "sql_types.SqlTime"},
+		{"timestamp", false, "sql_types.SqlTimeStamp"},
 		{"boolean", true, "bool"},
 		{"boolean", false, "sql_types.SqlBool"},
 	}

pkg/writers/mssql/README.md (new file, 130 lines)

@@ -0,0 +1,130 @@
# MSSQL Writer
Generates Microsoft SQL Server DDL (Data Definition Language) from database schema models.
## Features
- **DDL Generation**: Generates complete SQL scripts for creating MSSQL schemas
- **Schema Support**: Creates multiple schemas with proper naming
- **Bracket Notation**: Uses [schema].[table] bracket notation for identifiers
- **Identity Columns**: Generates IDENTITY(1,1) for auto-increment columns
- **Constraints**: Generates primary keys, foreign keys, unique, and check constraints
- **Indexes**: Creates indexes with unique support
- **Extended Properties**: Uses sp_addextendedproperty for comments
- **Direct Execution**: Can directly execute DDL on MSSQL database
- **Schema Flattening**: Optional schema flattening for compatibility
## Generation Phases
1. **Phase 1**: Create schemas
2. **Phase 2**: Create tables with columns, identity, and defaults
3. **Phase 3**: Add primary key constraints
4. **Phase 4**: Create indexes
5. **Phase 5**: Add unique constraints
6. **Phase 6**: Add check constraints
7. **Phase 7**: Add foreign key constraints
8. **Phase 8**: Add extended properties (comments)
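A hypothetical output skeleton showing the phase order (schema, table, and column names made up):

```sql
CREATE SCHEMA [sales];

CREATE TABLE [sales].[orders] (
    [id] INT IDENTITY(1,1) NOT NULL,
    [customer_id] INT NOT NULL
);

ALTER TABLE [sales].[orders] ADD CONSTRAINT [PK_sales_orders] PRIMARY KEY ([id]);

CREATE INDEX [idx_orders_customer] ON [sales].[orders] ([customer_id]);
```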
## Type Mappings
| Canonical Type | MSSQL Type |
|----------------|-----------|
| int | INT |
| int64 | BIGINT |
| int16 | SMALLINT |
| int8 | TINYINT |
| bool | BIT |
| float32 | REAL |
| float64 | FLOAT |
| decimal | NUMERIC |
| string | NVARCHAR(255) |
| text | NVARCHAR(MAX) |
| timestamp | DATETIME2 |
| timestamptz | DATETIMEOFFSET |
| uuid | UNIQUEIDENTIFIER |
| bytea | VARBINARY(MAX) |
| date | DATE |
| time | TIME |
## Usage
### Generate SQL File
```go
import "git.warky.dev/wdevs/relspecgo/pkg/writers/mssql"
import "git.warky.dev/wdevs/relspecgo/pkg/writers"
writer := mssql.NewWriter(&writers.WriterOptions{
OutputPath: "schema.sql",
FlattenSchema: false,
})
err := writer.WriteDatabase(db)
if err != nil {
panic(err)
}
```
### Direct Database Execution
```go
writer := mssql.NewWriter(&writers.WriterOptions{
OutputPath: "",
Metadata: map[string]interface{}{
"connection_string": "sqlserver://sa:password@localhost/newdb",
},
})
err := writer.WriteDatabase(db)
if err != nil {
panic(err)
}
```
### CLI Usage
Generate SQL file:
```bash
relspec convert --from json --from-path schema.json \
--to mssql --to-path schema.sql
```
Execute directly to database:
```bash
relspec convert --from json --from-path schema.json \
--to mssql \
--metadata '{"connection_string":"sqlserver://sa:password@localhost/mydb"}'
```
## Default Values
The writer supports several default value patterns:
- Functions: `GETDATE()`, `CURRENT_TIMESTAMP`
- Literals: strings wrapped in quotes, numbers, booleans (0/1 for BIT)
- CAST expressions
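For illustration, column definitions produced from these patterns look like this (a sketch; the `status` column is hypothetical):

```sql
[created_at] DATETIME2 NOT NULL DEFAULT GETDATE()
[active] BIT NOT NULL DEFAULT 1
[status] NVARCHAR(50) DEFAULT 'pending'
```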
## Comments/Extended Properties
Table and column descriptions are stored as MS_Description extended properties:
```sql
EXEC sp_addextendedproperty
@name = 'MS_Description',
@value = 'Table description here',
@level0type = 'SCHEMA', @level0name = 'dbo',
@level1type = 'TABLE', @level1name = 'my_table';
```
## Testing
Run tests with:
```bash
go test ./pkg/writers/mssql/...
```
## Limitations
- Views are not currently supported in the writer
- Sequences are not supported (MSSQL uses IDENTITY instead)
- Partitioning and advanced features are not supported
- Generated DDL assumes no triggers or computed columns

pkg/writers/mssql/writer.go (new file, 579 lines)

@@ -0,0 +1,579 @@
package mssql
import (
"context"
"database/sql"
"fmt"
"io"
"os"
"sort"
"strings"
_ "github.com/microsoft/go-mssqldb" // MSSQL driver
"git.warky.dev/wdevs/relspecgo/pkg/models"
"git.warky.dev/wdevs/relspecgo/pkg/mssql"
"git.warky.dev/wdevs/relspecgo/pkg/writers"
)
// Writer implements the Writer interface for MSSQL SQL output
type Writer struct {
options *writers.WriterOptions
writer io.Writer
}
// NewWriter creates a new MSSQL SQL writer
func NewWriter(options *writers.WriterOptions) *Writer {
return &Writer{
options: options,
}
}
// qualTable returns a schema-qualified name using bracket notation
func (w *Writer) qualTable(schema, name string) string {
if w.options.FlattenSchema {
return fmt.Sprintf("[%s]", name)
}
return fmt.Sprintf("[%s].[%s]", schema, name)
}
// WriteDatabase writes the entire database schema as SQL
func (w *Writer) WriteDatabase(db *models.Database) error {
// Check if we should execute SQL directly on a database
if connString, ok := w.options.Metadata["connection_string"].(string); ok && connString != "" {
return w.executeDatabaseSQL(db, connString)
}
var writer io.Writer
var file *os.File
var err error
// Use existing writer if already set (for testing)
if w.writer != nil {
writer = w.writer
} else if w.options.OutputPath != "" {
// Determine output destination
file, err = os.Create(w.options.OutputPath)
if err != nil {
return fmt.Errorf("failed to create output file: %w", err)
}
defer file.Close()
writer = file
} else {
writer = os.Stdout
}
w.writer = writer
// Write header comment
fmt.Fprintf(w.writer, "-- MSSQL Database Schema\n")
fmt.Fprintf(w.writer, "-- Database: %s\n", db.Name)
fmt.Fprintf(w.writer, "-- Generated by RelSpec\n\n")
// Process each schema in the database
for _, schema := range db.Schemas {
if err := w.WriteSchema(schema); err != nil {
return fmt.Errorf("failed to write schema %s: %w", schema.Name, err)
}
}
return nil
}
// WriteSchema writes a single schema and all its tables
func (w *Writer) WriteSchema(schema *models.Schema) error {
if w.writer == nil {
w.writer = os.Stdout
}
// Phase 1: Create schema (skip dbo schema and when flattening)
if schema.Name != "dbo" && !w.options.FlattenSchema {
fmt.Fprintf(w.writer, "-- Schema: %s\n", schema.Name)
fmt.Fprintf(w.writer, "CREATE SCHEMA [%s];\n\n", schema.Name)
}
// Phase 2: Create tables with columns
fmt.Fprintf(w.writer, "-- Tables for schema: %s\n", schema.Name)
for _, table := range schema.Tables {
if err := w.writeCreateTable(schema, table); err != nil {
return err
}
}
// Phase 3: Primary keys
fmt.Fprintf(w.writer, "-- Primary keys for schema: %s\n", schema.Name)
for _, table := range schema.Tables {
if err := w.writePrimaryKey(schema, table); err != nil {
return err
}
}
// Phase 4: Indexes
fmt.Fprintf(w.writer, "-- Indexes for schema: %s\n", schema.Name)
for _, table := range schema.Tables {
if err := w.writeIndexes(schema, table); err != nil {
return err
}
}
// Phase 5: Unique constraints
fmt.Fprintf(w.writer, "-- Unique constraints for schema: %s\n", schema.Name)
for _, table := range schema.Tables {
if err := w.writeUniqueConstraints(schema, table); err != nil {
return err
}
}
// Phase 6: Check constraints
fmt.Fprintf(w.writer, "-- Check constraints for schema: %s\n", schema.Name)
for _, table := range schema.Tables {
if err := w.writeCheckConstraints(schema, table); err != nil {
return err
}
}
// Phase 7: Foreign keys
fmt.Fprintf(w.writer, "-- Foreign keys for schema: %s\n", schema.Name)
for _, table := range schema.Tables {
if err := w.writeForeignKeys(schema, table); err != nil {
return err
}
}
// Phase 8: Comments
fmt.Fprintf(w.writer, "-- Comments for schema: %s\n", schema.Name)
for _, table := range schema.Tables {
if err := w.writeComments(schema, table); err != nil {
return err
}
}
return nil
}
// WriteTable writes a single table with all its elements
func (w *Writer) WriteTable(table *models.Table) error {
if w.writer == nil {
w.writer = os.Stdout
}
// Create a temporary schema with just this table
schema := models.InitSchema(table.Schema)
schema.Tables = append(schema.Tables, table)
return w.WriteSchema(schema)
}
// writeCreateTable generates CREATE TABLE statement
func (w *Writer) writeCreateTable(schema *models.Schema, table *models.Table) error {
fmt.Fprintf(w.writer, "CREATE TABLE %s (\n", w.qualTable(schema.Name, table.Name))
// Sort columns by sequence
columns := getSortedColumns(table.Columns)
columnDefs := make([]string, 0, len(columns))
for _, col := range columns {
def := w.generateColumnDefinition(col)
columnDefs = append(columnDefs, " "+def)
}
fmt.Fprintf(w.writer, "%s\n", strings.Join(columnDefs, ",\n"))
fmt.Fprintf(w.writer, ");\n\n")
return nil
}
// generateColumnDefinition generates MSSQL column definition
func (w *Writer) generateColumnDefinition(col *models.Column) string {
parts := []string{fmt.Sprintf("[%s]", col.Name)}
// Type with length/precision
baseType := mssql.ConvertCanonicalToMSSQL(col.Type)
typeStr := baseType
// Handle specific type parameters for MSSQL
if col.Length > 0 && col.Precision == 0 {
// String types with length - override the default length from baseType
if strings.HasPrefix(baseType, "NVARCHAR") || strings.HasPrefix(baseType, "VARCHAR") ||
strings.HasPrefix(baseType, "CHAR") || strings.HasPrefix(baseType, "NCHAR") {
if col.Length > 0 && col.Length < 8000 {
// Extract base type without length specification
baseName := strings.Split(baseType, "(")[0]
typeStr = fmt.Sprintf("%s(%d)", baseName, col.Length)
}
}
} else if col.Precision > 0 {
// Numeric types with precision/scale
baseName := strings.Split(baseType, "(")[0]
if col.Scale > 0 {
typeStr = fmt.Sprintf("%s(%d,%d)", baseName, col.Precision, col.Scale)
} else {
typeStr = fmt.Sprintf("%s(%d)", baseName, col.Precision)
}
}
parts = append(parts, typeStr)
// IDENTITY for auto-increment
if col.AutoIncrement {
parts = append(parts, "IDENTITY(1,1)")
}
// NOT NULL
if col.NotNull {
parts = append(parts, "NOT NULL")
}
// DEFAULT
if col.Default != nil {
switch v := col.Default.(type) {
case string:
cleanDefault := stripBackticks(v)
if strings.HasPrefix(strings.ToUpper(cleanDefault), "GETDATE") ||
strings.HasPrefix(strings.ToUpper(cleanDefault), "CURRENT_") {
parts = append(parts, fmt.Sprintf("DEFAULT %s", cleanDefault))
} else if cleanDefault == "true" || cleanDefault == "false" {
if cleanDefault == "true" {
parts = append(parts, "DEFAULT 1")
} else {
parts = append(parts, "DEFAULT 0")
}
} else {
parts = append(parts, fmt.Sprintf("DEFAULT '%s'", escapeQuote(cleanDefault)))
}
case bool:
if v {
parts = append(parts, "DEFAULT 1")
} else {
parts = append(parts, "DEFAULT 0")
}
case int, int64:
parts = append(parts, fmt.Sprintf("DEFAULT %v", v))
}
}
return strings.Join(parts, " ")
}
// writePrimaryKey generates ALTER TABLE statement for primary key
func (w *Writer) writePrimaryKey(schema *models.Schema, table *models.Table) error {
// Find primary key constraint
var pkConstraint *models.Constraint
for _, constraint := range table.Constraints {
if constraint.Type == models.PrimaryKeyConstraint {
pkConstraint = constraint
break
}
}
var columnNames []string
pkName := fmt.Sprintf("PK_%s_%s", schema.Name, table.Name)
if pkConstraint != nil {
pkName = pkConstraint.Name
columnNames = make([]string, 0, len(pkConstraint.Columns))
for _, colName := range pkConstraint.Columns {
columnNames = append(columnNames, fmt.Sprintf("[%s]", colName))
}
} else {
// Check for columns with IsPrimaryKey = true
for _, col := range table.Columns {
if col.IsPrimaryKey {
columnNames = append(columnNames, fmt.Sprintf("[%s]", col.Name))
}
}
sort.Strings(columnNames)
}
if len(columnNames) == 0 {
return nil
}
fmt.Fprintf(w.writer, "ALTER TABLE %s ADD CONSTRAINT [%s] PRIMARY KEY (%s);\n\n",
w.qualTable(schema.Name, table.Name), pkName, strings.Join(columnNames, ", "))
return nil
}
// writeIndexes generates CREATE INDEX statements
func (w *Writer) writeIndexes(schema *models.Schema, table *models.Table) error {
// Sort indexes by name
indexNames := make([]string, 0, len(table.Indexes))
for name := range table.Indexes {
indexNames = append(indexNames, name)
}
sort.Strings(indexNames)
for _, name := range indexNames {
index := table.Indexes[name]
// Skip if it's a primary key index
if strings.HasPrefix(strings.ToLower(index.Name), "pk_") {
continue
}
// Build column list
columnExprs := make([]string, 0, len(index.Columns))
for _, colName := range index.Columns {
columnExprs = append(columnExprs, fmt.Sprintf("[%s]", colName))
}
if len(columnExprs) == 0 {
continue
}
unique := ""
if index.Unique {
unique = "UNIQUE "
}
fmt.Fprintf(w.writer, "CREATE %sINDEX [%s] ON %s (%s);\n\n",
unique, index.Name, w.qualTable(schema.Name, table.Name), strings.Join(columnExprs, ", "))
}
return nil
}
// writeUniqueConstraints generates ALTER TABLE statements for unique constraints
func (w *Writer) writeUniqueConstraints(schema *models.Schema, table *models.Table) error {
// Sort constraints by name
constraintNames := make([]string, 0)
for name, constraint := range table.Constraints {
if constraint.Type == models.UniqueConstraint {
constraintNames = append(constraintNames, name)
}
}
sort.Strings(constraintNames)
for _, name := range constraintNames {
constraint := table.Constraints[name]
// Build column list
columnExprs := make([]string, 0, len(constraint.Columns))
for _, colName := range constraint.Columns {
columnExprs = append(columnExprs, fmt.Sprintf("[%s]", colName))
}
if len(columnExprs) == 0 {
continue
}
fmt.Fprintf(w.writer, "ALTER TABLE %s ADD CONSTRAINT [%s] UNIQUE (%s);\n\n",
w.qualTable(schema.Name, table.Name), constraint.Name, strings.Join(columnExprs, ", "))
}
return nil
}
// writeCheckConstraints generates ALTER TABLE statements for check constraints
func (w *Writer) writeCheckConstraints(schema *models.Schema, table *models.Table) error {
// Sort constraints by name
constraintNames := make([]string, 0)
for name, constraint := range table.Constraints {
if constraint.Type == models.CheckConstraint {
constraintNames = append(constraintNames, name)
}
}
sort.Strings(constraintNames)
for _, name := range constraintNames {
constraint := table.Constraints[name]
if constraint.Expression == "" {
continue
}
fmt.Fprintf(w.writer, "ALTER TABLE %s ADD CONSTRAINT [%s] CHECK (%s);\n\n",
w.qualTable(schema.Name, table.Name), constraint.Name, constraint.Expression)
}
return nil
}
// writeForeignKeys generates ALTER TABLE statements for foreign keys
func (w *Writer) writeForeignKeys(schema *models.Schema, table *models.Table) error {
// Process foreign key constraints
constraintNames := make([]string, 0)
for name, constraint := range table.Constraints {
if constraint.Type == models.ForeignKeyConstraint {
constraintNames = append(constraintNames, name)
}
}
sort.Strings(constraintNames)
for _, name := range constraintNames {
constraint := table.Constraints[name]
// Build column lists
sourceColumns := make([]string, 0, len(constraint.Columns))
for _, colName := range constraint.Columns {
sourceColumns = append(sourceColumns, fmt.Sprintf("[%s]", colName))
}
targetColumns := make([]string, 0, len(constraint.ReferencedColumns))
for _, colName := range constraint.ReferencedColumns {
targetColumns = append(targetColumns, fmt.Sprintf("[%s]", colName))
}
if len(sourceColumns) == 0 || len(targetColumns) == 0 {
continue
}
refSchema := constraint.ReferencedSchema
if refSchema == "" {
refSchema = schema.Name
}
onDelete := "NO ACTION"
if constraint.OnDelete != "" {
onDelete = strings.ToUpper(constraint.OnDelete)
}
onUpdate := "NO ACTION"
if constraint.OnUpdate != "" {
onUpdate = strings.ToUpper(constraint.OnUpdate)
}
fmt.Fprintf(w.writer, "ALTER TABLE %s ADD CONSTRAINT [%s] FOREIGN KEY (%s)\n",
w.qualTable(schema.Name, table.Name), constraint.Name, strings.Join(sourceColumns, ", "))
fmt.Fprintf(w.writer, " REFERENCES %s (%s)\n",
w.qualTable(refSchema, constraint.ReferencedTable), strings.Join(targetColumns, ", "))
fmt.Fprintf(w.writer, " ON DELETE %s ON UPDATE %s;\n\n",
onDelete, onUpdate)
}
return nil
}
// writeComments generates EXEC sp_addextendedproperty statements for table and column descriptions
func (w *Writer) writeComments(schema *models.Schema, table *models.Table) error {
// Table comment
if table.Description != "" {
fmt.Fprintf(w.writer, "EXEC sp_addextendedproperty\n")
fmt.Fprintf(w.writer, " @name = 'MS_Description',\n")
fmt.Fprintf(w.writer, " @value = '%s',\n", escapeQuote(table.Description))
fmt.Fprintf(w.writer, " @level0type = 'SCHEMA', @level0name = '%s',\n", schema.Name)
fmt.Fprintf(w.writer, " @level1type = 'TABLE', @level1name = '%s';\n\n", table.Name)
}
// Column comments
for _, col := range getSortedColumns(table.Columns) {
if col.Description != "" {
fmt.Fprintf(w.writer, "EXEC sp_addextendedproperty\n")
fmt.Fprintf(w.writer, " @name = 'MS_Description',\n")
fmt.Fprintf(w.writer, " @value = '%s',\n", escapeQuote(col.Description))
fmt.Fprintf(w.writer, " @level0type = 'SCHEMA', @level0name = '%s',\n", schema.Name)
fmt.Fprintf(w.writer, " @level1type = 'TABLE', @level1name = '%s',\n", table.Name)
fmt.Fprintf(w.writer, " @level2type = 'COLUMN', @level2name = '%s';\n\n", col.Name)
}
}
return nil
}
// executeDatabaseSQL executes SQL statements directly on an MSSQL database
func (w *Writer) executeDatabaseSQL(db *models.Database, connString string) error {
// Generate SQL statements
statements := []string{}
statements = append(statements, "-- MSSQL Database Schema")
statements = append(statements, fmt.Sprintf("-- Database: %s", db.Name))
statements = append(statements, "-- Generated by RelSpec")
for _, schema := range db.Schemas {
if err := w.generateSchemaStatements(schema, &statements); err != nil {
return fmt.Errorf("failed to generate statements for schema %s: %w", schema.Name, err)
}
}
// Connect to database
dbConn, err := sql.Open("mssql", connString)
if err != nil {
return fmt.Errorf("failed to connect to database: %w", err)
}
defer dbConn.Close()
ctx := context.Background()
if err = dbConn.PingContext(ctx); err != nil {
return fmt.Errorf("failed to ping database: %w", err)
}
// Execute statements
executedCount := 0
for i, stmt := range statements {
stmtTrimmed := strings.TrimSpace(stmt)
// Skip comments and empty statements
if strings.HasPrefix(stmtTrimmed, "--") || stmtTrimmed == "" {
continue
}
fmt.Fprintf(os.Stderr, "Executing statement %d/%d...\n", i+1, len(statements))
_, execErr := dbConn.ExecContext(ctx, stmt)
if execErr != nil {
fmt.Fprintf(os.Stderr, "⚠ Warning: Statement failed: %v\n", execErr)
continue
}
executedCount++
}
fmt.Fprintf(os.Stderr, "✓ Successfully executed %d statements\n", executedCount)
return nil
}
// generateSchemaStatements generates SQL statements for a schema
func (w *Writer) generateSchemaStatements(schema *models.Schema, statements *[]string) error {
// Phase 1: Create schema
if schema.Name != "dbo" && !w.options.FlattenSchema {
*statements = append(*statements, fmt.Sprintf("-- Schema: %s", schema.Name))
*statements = append(*statements, fmt.Sprintf("CREATE SCHEMA [%s];", schema.Name))
}
// Phase 2: Create tables
*statements = append(*statements, fmt.Sprintf("-- Tables for schema: %s", schema.Name))
for _, table := range schema.Tables {
createTableSQL := fmt.Sprintf("CREATE TABLE %s (", w.qualTable(schema.Name, table.Name))
columnDefs := make([]string, 0)
columns := getSortedColumns(table.Columns)
for _, col := range columns {
def := w.generateColumnDefinition(col)
columnDefs = append(columnDefs, " "+def)
}
createTableSQL += "\n" + strings.Join(columnDefs, ",\n") + "\n)"
*statements = append(*statements, createTableSQL)
}
// Phase 3-7: Constraints and indexes will be added by WriteSchema logic
// For now, just create tables
return nil
}
// Helper functions
// getSortedColumns returns columns sorted by name for deterministic output
func getSortedColumns(columns map[string]*models.Column) []*models.Column {
names := make([]string, 0, len(columns))
for name := range columns {
names = append(names, name)
}
sort.Strings(names)
sorted := make([]*models.Column, 0, len(columns))
for _, name := range names {
sorted = append(sorted, columns[name])
}
return sorted
}
// escapeQuote escapes single quotes in strings for SQL
func escapeQuote(s string) string {
return strings.ReplaceAll(s, "'", "''")
}
// stripBackticks removes backticks from SQL expressions
func stripBackticks(s string) string {
return strings.ReplaceAll(s, "`", "")
}


@@ -0,0 +1,205 @@
package mssql
import (
"bytes"
"testing"
"git.warky.dev/wdevs/relspecgo/pkg/models"
"git.warky.dev/wdevs/relspecgo/pkg/writers"
"github.com/stretchr/testify/assert"
)
// TestGenerateColumnDefinition tests column definition generation
func TestGenerateColumnDefinition(t *testing.T) {
writer := NewWriter(&writers.WriterOptions{})
tests := []struct {
name string
column *models.Column
expected string
}{
{
name: "INT NOT NULL",
column: &models.Column{
Name: "id",
Type: "int",
NotNull: true,
Sequence: 1,
},
expected: "[id] INT NOT NULL",
},
{
name: "VARCHAR with length",
column: &models.Column{
Name: "name",
Type: "string",
Length: 100,
NotNull: true,
Sequence: 2,
},
expected: "[name] NVARCHAR(100) NOT NULL",
},
{
name: "DATETIME2 with default",
column: &models.Column{
Name: "created_at",
Type: "timestamp",
NotNull: true,
Default: "GETDATE()",
Sequence: 3,
},
expected: "[created_at] DATETIME2 NOT NULL DEFAULT GETDATE()",
},
{
name: "IDENTITY column",
column: &models.Column{
Name: "id",
Type: "int",
AutoIncrement: true,
NotNull: true,
Sequence: 1,
},
expected: "[id] INT IDENTITY(1,1) NOT NULL",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result := writer.generateColumnDefinition(tt.column)
assert.Equal(t, tt.expected, result)
})
}
}
// TestWriteCreateTable tests CREATE TABLE statement generation
func TestWriteCreateTable(t *testing.T) {
writer := NewWriter(&writers.WriterOptions{})
// Create a test schema with a table
schema := models.InitSchema("dbo")
table := models.InitTable("users", "dbo")
col1 := models.InitColumn("id", "users", "dbo")
col1.Type = "int"
col1.AutoIncrement = true
col1.NotNull = true
col1.Sequence = 1
col2 := models.InitColumn("email", "users", "dbo")
col2.Type = "string"
col2.Length = 255
col2.NotNull = true
col2.Sequence = 2
table.Columns["id"] = col1
table.Columns["email"] = col2
// Write to buffer
buf := &bytes.Buffer{}
writer.writer = buf
err := writer.writeCreateTable(schema, table)
assert.NoError(t, err)
output := buf.String()
assert.Contains(t, output, "CREATE TABLE [dbo].[users]")
assert.Contains(t, output, "[id] INT IDENTITY(1,1) NOT NULL")
assert.Contains(t, output, "[email] NVARCHAR(255) NOT NULL")
}
// TestWritePrimaryKey tests PRIMARY KEY constraint generation
func TestWritePrimaryKey(t *testing.T) {
writer := NewWriter(&writers.WriterOptions{})
schema := models.InitSchema("dbo")
table := models.InitTable("users", "dbo")
// Add primary key constraint
pk := models.InitConstraint("PK_users_id", models.PrimaryKeyConstraint)
pk.Columns = []string{"id"}
table.Constraints[pk.Name] = pk
// Add column
col := models.InitColumn("id", "users", "dbo")
col.Type = "int"
col.Sequence = 1
table.Columns["id"] = col
// Write to buffer
buf := &bytes.Buffer{}
writer.writer = buf
err := writer.writePrimaryKey(schema, table)
assert.NoError(t, err)
output := buf.String()
assert.Contains(t, output, "ALTER TABLE [dbo].[users]")
assert.Contains(t, output, "PRIMARY KEY")
assert.Contains(t, output, "[id]")
}
// TestWriteForeignKey tests FOREIGN KEY constraint generation
func TestWriteForeignKey(t *testing.T) {
writer := NewWriter(&writers.WriterOptions{})
schema := models.InitSchema("dbo")
table := models.InitTable("orders", "dbo")
// Add foreign key constraint
fk := models.InitConstraint("FK_orders_users", models.ForeignKeyConstraint)
fk.Columns = []string{"user_id"}
fk.ReferencedSchema = "dbo"
fk.ReferencedTable = "users"
fk.ReferencedColumns = []string{"id"}
fk.OnDelete = "CASCADE"
fk.OnUpdate = "NO ACTION"
table.Constraints[fk.Name] = fk
// Add column
col := models.InitColumn("user_id", "orders", "dbo")
col.Type = "int"
col.Sequence = 1
table.Columns["user_id"] = col
// Write to buffer
buf := &bytes.Buffer{}
writer.writer = buf
err := writer.writeForeignKeys(schema, table)
assert.NoError(t, err)
output := buf.String()
assert.Contains(t, output, "ALTER TABLE [dbo].[orders]")
assert.Contains(t, output, "FK_orders_users")
assert.Contains(t, output, "FOREIGN KEY")
assert.Contains(t, output, "CASCADE")
assert.Contains(t, output, "NO ACTION")
}
// TestWriteComments tests extended property generation for comments
func TestWriteComments(t *testing.T) {
writer := NewWriter(&writers.WriterOptions{})
schema := models.InitSchema("dbo")
table := models.InitTable("users", "dbo")
table.Description = "User accounts table"
col := models.InitColumn("id", "users", "dbo")
col.Type = "int"
col.Description = "Primary key"
col.Sequence = 1
table.Columns["id"] = col
// Write to buffer
buf := &bytes.Buffer{}
writer.writer = buf
err := writer.writeComments(schema, table)
assert.NoError(t, err)
output := buf.String()
assert.Contains(t, output, "sp_addextendedproperty")
assert.Contains(t, output, "MS_Description")
assert.Contains(t, output, "User accounts table")
assert.Contains(t, output, "Primary key")
}


@@ -0,0 +1,215 @@
# SQLite Writer
SQLite DDL (Data Definition Language) writer for RelSpec. Converts database schemas to SQLite-compatible SQL statements.
## Features
- **Automatic Schema Flattening** - SQLite doesn't support PostgreSQL-style schemas, so table names are automatically flattened (e.g., `public.users` → `public_users`)
- **Type Mapping** - Converts PostgreSQL data types to SQLite type affinities (TEXT, INTEGER, REAL, NUMERIC, BLOB)
- **Auto-Increment Detection** - Automatically converts SERIAL types and auto-increment columns to `INTEGER PRIMARY KEY AUTOINCREMENT`
- **Function Translation** - Converts PostgreSQL functions to SQLite equivalents (e.g., `now()` → `CURRENT_TIMESTAMP`)
- **Boolean Handling** - Maps boolean values to INTEGER (true=1, false=0)
- **Constraint Generation** - Creates indexes, unique constraints, and documents foreign keys
- **Identifier Quoting** - Properly quotes identifiers using double quotes
## Usage
### Convert PostgreSQL to SQLite
```bash
relspec convert --from pgsql --from-conn "postgres://user:pass@localhost/mydb" \
--to sqlite --to-path schema.sql
```
### Convert DBML to SQLite
```bash
relspec convert --from dbml --from-path schema.dbml \
--to sqlite --to-path schema.sql
```
### Multi-Schema Databases
SQLite doesn't support schemas, so multi-schema databases are automatically flattened:
```bash
# Input has auth.users and public.posts
# Output will have auth_users and public_posts
relspec convert --from json --from-path multi_schema.json \
--to sqlite --to-path flattened.sql
```
## Type Mapping
| PostgreSQL Type | SQLite Affinity | Examples |
|----------------|-----------------|----------|
| TEXT | TEXT | varchar, text, char, citext, uuid, timestamp, json |
| INTEGER | INTEGER | int, integer, smallint, bigint, serial, boolean |
| REAL | REAL | real, float, double precision |
| NUMERIC | NUMERIC | numeric, decimal |
| BLOB | BLOB | bytea, blob |
## Auto-Increment Handling
Columns are converted to `INTEGER PRIMARY KEY AUTOINCREMENT` when they meet these criteria:
- Marked as primary key
- Integer type
- Have `AutoIncrement` flag set, OR
- Type contains "serial", OR
- Default value contains "nextval"
**Example:**
```sql
-- Input (PostgreSQL)
CREATE TABLE users (
id SERIAL PRIMARY KEY,
name VARCHAR(100)
);
-- Output (SQLite)
CREATE TABLE "users" (
"id" INTEGER PRIMARY KEY AUTOINCREMENT,
"name" TEXT
);
```
## Default Value Translation
| PostgreSQL | SQLite | Notes |
|-----------|--------|-------|
| `now()`, `CURRENT_TIMESTAMP` | `CURRENT_TIMESTAMP` | Timestamp functions |
| `CURRENT_DATE` | `CURRENT_DATE` | Date function |
| `CURRENT_TIME` | `CURRENT_TIME` | Time function |
| `true`, `false` | `1`, `0` | Boolean values |
| `gen_random_uuid()` | *(removed)* | SQLite has no built-in UUID |
| `nextval(...)` | *(removed)* | Handled by AUTOINCREMENT |
## Foreign Keys
Foreign keys are generated as commented-out ALTER TABLE statements for reference:
```sql
-- Foreign key: fk_posts_user_id
-- ALTER TABLE "posts" ADD CONSTRAINT "posts_fk_posts_user_id"
-- FOREIGN KEY ("user_id")
-- REFERENCES "users" ("id");
-- Note: Foreign keys should be defined in CREATE TABLE for better SQLite compatibility
```
For production use, define foreign keys directly in the CREATE TABLE statement or execute the ALTER TABLE commands after creating all tables.
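For example, an inline definition would look like this (hand-written; the writer does not emit it):

```sql
CREATE TABLE "posts" (
  "id" INTEGER PRIMARY KEY AUTOINCREMENT,
  "user_id" INTEGER NOT NULL REFERENCES "users" ("id")
);
```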
## Constraints
- **Primary Keys**: Inline for auto-increment columns, separate constraint for composite keys
- **Unique Constraints**: Converted to `CREATE UNIQUE INDEX` statements
- **Check Constraints**: Generated as comments (should be added to CREATE TABLE manually)
- **Indexes**: Generated without PostgreSQL-specific features (no GIN, GiST, operator classes)
## Output Structure
Generated SQL follows this order:
1. Header comments
2. `PRAGMA foreign_keys = ON;`
3. CREATE TABLE statements (sorted by schema, then table)
4. CREATE INDEX statements
5. CREATE UNIQUE INDEX statements (for unique constraints)
6. Check constraint comments
7. Foreign key comments
## Example
**Input (multi-schema PostgreSQL):**
```sql
CREATE SCHEMA auth;
CREATE TABLE auth.users (
id SERIAL PRIMARY KEY,
username VARCHAR(50) UNIQUE NOT NULL,
created_at TIMESTAMP DEFAULT now()
);
CREATE SCHEMA public;
CREATE TABLE public.posts (
id SERIAL PRIMARY KEY,
user_id INTEGER REFERENCES auth.users(id),
title VARCHAR(200) NOT NULL,
published BOOLEAN DEFAULT false
);
```
**Output (SQLite with flattened schemas):**
```sql
-- SQLite Database Schema
-- Database: mydb
-- Generated by RelSpec
-- Note: Schema names have been flattened (e.g., public.users -> public_users)
-- Enable foreign key constraints
PRAGMA foreign_keys = ON;
-- Schema: auth (flattened into table names)
CREATE TABLE "auth_users" (
"id" INTEGER PRIMARY KEY AUTOINCREMENT,
"username" TEXT NOT NULL,
"created_at" TEXT DEFAULT CURRENT_TIMESTAMP
);
CREATE UNIQUE INDEX "auth_users_users_username_key" ON "auth_users" ("username");
-- Schema: public (flattened into table names)
CREATE TABLE "public_posts" (
"id" INTEGER PRIMARY KEY AUTOINCREMENT,
"user_id" INTEGER NOT NULL,
"title" TEXT NOT NULL,
"published" INTEGER DEFAULT 0
);
-- Foreign key: posts_user_id_fkey
-- ALTER TABLE "public_posts" ADD CONSTRAINT "public_posts_posts_user_id_fkey"
-- FOREIGN KEY ("user_id")
-- REFERENCES "auth_users" ("id");
-- Note: Foreign keys should be defined in CREATE TABLE for better SQLite compatibility
```
## Programmatic Usage
```go
import (
"git.warky.dev/wdevs/relspecgo/pkg/models"
"git.warky.dev/wdevs/relspecgo/pkg/writers"
"git.warky.dev/wdevs/relspecgo/pkg/writers/sqlite"
)
func main() {
// Create writer (automatically enables schema flattening)
writer := sqlite.NewWriter(&writers.WriterOptions{
OutputPath: "schema.sql",
})
// Write database schema
db := &models.Database{
Name: "mydb",
Schemas: []*models.Schema{
// ... your schema data
},
}
err := writer.WriteDatabase(db)
if err != nil {
panic(err)
}
}
```
## Notes
- Schema flattening is **always enabled** for SQLite output (cannot be disabled)
- Constraint and index names are prefixed with the flattened table name to avoid collisions
- Generated SQL is compatible with SQLite 3.x
- Foreign key constraints require `PRAGMA foreign_keys = ON;` to be enforced
- For complex schemas, review and test the generated SQL before use in production


@@ -0,0 +1,89 @@
package sqlite
import (
"strings"
)
// SQLite type affinities
const (
TypeText = "TEXT"
TypeInteger = "INTEGER"
TypeReal = "REAL"
TypeNumeric = "NUMERIC"
TypeBlob = "BLOB"
)
// MapPostgreSQLType maps PostgreSQL data types to SQLite type affinities
func MapPostgreSQLType(pgType string) string {
// Normalize the type
normalized := strings.ToLower(strings.TrimSpace(pgType))
// Remove array notation if present
normalized = strings.TrimSuffix(normalized, "[]")
// Remove precision/scale if present
if idx := strings.Index(normalized, "("); idx != -1 {
normalized = normalized[:idx]
}
// Map to SQLite type affinity
switch normalized {
// TEXT affinity
case "varchar", "character varying", "text", "char", "character",
"citext", "uuid", "timestamp", "timestamptz", "timestamp with time zone",
"timestamp without time zone", "date", "time", "timetz", "time with time zone",
"time without time zone", "json", "jsonb", "xml", "inet", "cidr", "macaddr":
return TypeText
// INTEGER affinity
case "int", "int2", "int4", "int8", "integer", "smallint", "bigint",
"serial", "smallserial", "bigserial", "boolean", "bool":
return TypeInteger
// REAL affinity
case "real", "float", "float4", "float8", "double precision":
return TypeReal
// NUMERIC affinity
case "numeric", "decimal", "money":
return TypeNumeric
// BLOB affinity
case "bytea", "blob":
return TypeBlob
default:
// Default to TEXT for unknown types
return TypeText
}
}
// IsIntegerType checks if a column type should be treated as integer
func IsIntegerType(colType string) bool {
normalized := strings.ToLower(strings.TrimSpace(colType))
normalized = strings.TrimSuffix(normalized, "[]")
if idx := strings.Index(normalized, "("); idx != -1 {
normalized = normalized[:idx]
}
switch normalized {
case "int", "int2", "int4", "int8", "integer", "smallint", "bigint",
"serial", "smallserial", "bigserial":
return true
default:
return false
}
}
// MapBooleanValue converts PostgreSQL boolean literals to SQLite (0/1)
func MapBooleanValue(value string) string {
normalized := strings.ToLower(strings.TrimSpace(value))
switch normalized {
case "true", "t", "yes", "y", "1":
return "1"
case "false", "f", "no", "n", "0":
return "0"
default:
return value
}
}


@@ -0,0 +1,146 @@
package sqlite
import (
"fmt"
"strings"
"text/template"
"git.warky.dev/wdevs/relspecgo/pkg/models"
"git.warky.dev/wdevs/relspecgo/pkg/writers"
)
// GetTemplateFuncs returns template functions for SQLite SQL generation
func GetTemplateFuncs(opts *writers.WriterOptions) template.FuncMap {
return template.FuncMap{
"quote_ident": QuoteIdentifier,
"map_type": MapPostgreSQLType,
"is_autoincrement": IsAutoIncrementCandidate,
"qualified_table_name": func(schema, table string) string {
return writers.QualifiedTableName(schema, table, opts.FlattenSchema)
},
"format_default": FormatDefault,
"format_constraint_name": func(schema, table, constraint string) string {
return FormatConstraintName(schema, table, constraint, opts)
},
"join": strings.Join,
"lower": strings.ToLower,
"upper": strings.ToUpper,
}
}
// QuoteIdentifier quotes an identifier for SQLite (double quotes)
func QuoteIdentifier(name string) string {
// SQLite uses double quotes for identifiers
// Escape any existing double quotes by doubling them
escaped := strings.ReplaceAll(name, `"`, `""`)
return fmt.Sprintf(`"%s"`, escaped)
}
// IsAutoIncrementCandidate checks if a column should use AUTOINCREMENT
func IsAutoIncrementCandidate(col *models.Column) bool {
// Must be a primary key
if !col.IsPrimaryKey {
return false
}
// Must be an integer type
if !IsIntegerType(col.Type) {
return false
}
// Check AutoIncrement field
if col.AutoIncrement {
return true
}
// Check if default suggests auto-increment
if col.Default != nil {
defaultStr, ok := col.Default.(string)
if ok {
defaultLower := strings.ToLower(defaultStr)
if strings.Contains(defaultLower, "nextval") ||
strings.Contains(defaultLower, "autoincrement") ||
strings.Contains(defaultLower, "auto_increment") {
return true
}
}
}
// Serial types are auto-increment
typeLower := strings.ToLower(col.Type)
return strings.Contains(typeLower, "serial")
}
// FormatDefault formats a default value for SQLite
func FormatDefault(col *models.Column) string {
if col.Default == nil {
return ""
}
// Skip auto-increment defaults (handled by AUTOINCREMENT keyword)
if IsAutoIncrementCandidate(col) {
return ""
}
// Convert to string
defaultStr, ok := col.Default.(string)
if !ok {
// If not a string, convert to string representation
defaultStr = fmt.Sprintf("%v", col.Default)
}
if defaultStr == "" {
return ""
}
// Convert PostgreSQL-specific functions to SQLite equivalents
defaultLower := strings.ToLower(defaultStr)
// Current timestamp functions
if strings.Contains(defaultLower, "current_timestamp") ||
strings.Contains(defaultLower, "now()") {
return "CURRENT_TIMESTAMP"
}
// Current date
if strings.Contains(defaultLower, "current_date") {
return "CURRENT_DATE"
}
// Current time
if strings.Contains(defaultLower, "current_time") {
return "CURRENT_TIME"
}
// Boolean values
sqliteType := MapPostgreSQLType(col.Type)
if sqliteType == TypeInteger {
typeLower := strings.ToLower(col.Type)
if strings.Contains(typeLower, "bool") {
return MapBooleanValue(defaultStr)
}
}
// UUID generation - SQLite doesn't have built-in UUID, comment it out
if strings.Contains(defaultLower, "uuid") || strings.Contains(defaultLower, "gen_random_uuid") {
return "" // Remove UUID defaults, users must handle this
}
// Remove PostgreSQL-specific casting
defaultStr = strings.ReplaceAll(defaultStr, "::text", "")
defaultStr = strings.ReplaceAll(defaultStr, "::integer", "")
defaultStr = strings.ReplaceAll(defaultStr, "::bigint", "")
defaultStr = strings.ReplaceAll(defaultStr, "::boolean", "")
return defaultStr
}
// FormatConstraintName formats a constraint name with table prefix if flattening
func FormatConstraintName(schema, table, constraint string, opts *writers.WriterOptions) string {
if opts.FlattenSchema && schema != "" {
// Prefix constraint with flattened table name
flatTable := writers.QualifiedTableName(schema, table, opts.FlattenSchema)
return fmt.Sprintf("%s_%s", flatTable, constraint)
}
return constraint
}


@@ -0,0 +1,174 @@
package sqlite
import (
"bytes"
"embed"
"fmt"
"text/template"
"git.warky.dev/wdevs/relspecgo/pkg/models"
"git.warky.dev/wdevs/relspecgo/pkg/writers"
)
//go:embed templates/*.tmpl
var templateFS embed.FS
// TemplateExecutor manages and executes SQLite SQL templates
type TemplateExecutor struct {
templates *template.Template
options *writers.WriterOptions
}
// NewTemplateExecutor creates a new template executor for SQLite
func NewTemplateExecutor(opts *writers.WriterOptions) (*TemplateExecutor, error) {
// Create template with SQLite-specific functions
funcMap := GetTemplateFuncs(opts)
tmpl, err := template.New("").Funcs(funcMap).ParseFS(templateFS, "templates/*.tmpl")
if err != nil {
return nil, fmt.Errorf("failed to parse templates: %w", err)
}
return &TemplateExecutor{
templates: tmpl,
options: opts,
}, nil
}
// Template data structures
// TableTemplateData contains data for table template
type TableTemplateData struct {
Schema string
Name string
Columns []*models.Column
PrimaryKey *models.Constraint
}
// IndexTemplateData contains data for index template
type IndexTemplateData struct {
Schema string
Table string
Name string
Columns []string
}
// ConstraintTemplateData contains data for constraint templates
type ConstraintTemplateData struct {
Schema string
Table string
Name string
Columns []string
Expression string
ForeignSchema string
ForeignTable string
ForeignColumns []string
OnDelete string
OnUpdate string
}
// Execute methods
// ExecutePragmaForeignKeys executes the pragma foreign keys template
func (te *TemplateExecutor) ExecutePragmaForeignKeys() (string, error) {
var buf bytes.Buffer
err := te.templates.ExecuteTemplate(&buf, "pragma_foreign_keys.tmpl", nil)
if err != nil {
return "", fmt.Errorf("failed to execute pragma_foreign_keys template: %w", err)
}
return buf.String(), nil
}
// ExecuteCreateTable executes the create table template
func (te *TemplateExecutor) ExecuteCreateTable(data TableTemplateData) (string, error) {
var buf bytes.Buffer
err := te.templates.ExecuteTemplate(&buf, "create_table.tmpl", data)
if err != nil {
return "", fmt.Errorf("failed to execute create_table template: %w", err)
}
return buf.String(), nil
}
// ExecuteCreateIndex executes the create index template
func (te *TemplateExecutor) ExecuteCreateIndex(data IndexTemplateData) (string, error) {
var buf bytes.Buffer
err := te.templates.ExecuteTemplate(&buf, "create_index.tmpl", data)
if err != nil {
return "", fmt.Errorf("failed to execute create_index template: %w", err)
}
return buf.String(), nil
}
// ExecuteCreateUniqueConstraint executes the create unique constraint template
func (te *TemplateExecutor) ExecuteCreateUniqueConstraint(data ConstraintTemplateData) (string, error) {
var buf bytes.Buffer
err := te.templates.ExecuteTemplate(&buf, "create_unique_constraint.tmpl", data)
if err != nil {
return "", fmt.Errorf("failed to execute create_unique_constraint template: %w", err)
}
return buf.String(), nil
}
// ExecuteCreateCheckConstraint executes the create check constraint template
func (te *TemplateExecutor) ExecuteCreateCheckConstraint(data ConstraintTemplateData) (string, error) {
var buf bytes.Buffer
err := te.templates.ExecuteTemplate(&buf, "create_check_constraint.tmpl", data)
if err != nil {
return "", fmt.Errorf("failed to execute create_check_constraint template: %w", err)
}
return buf.String(), nil
}
// ExecuteCreateForeignKey executes the create foreign key template
func (te *TemplateExecutor) ExecuteCreateForeignKey(data ConstraintTemplateData) (string, error) {
var buf bytes.Buffer
err := te.templates.ExecuteTemplate(&buf, "create_foreign_key.tmpl", data)
if err != nil {
return "", fmt.Errorf("failed to execute create_foreign_key template: %w", err)
}
return buf.String(), nil
}
// Helper functions to build template data from models
// BuildTableTemplateData builds TableTemplateData from a models.Table
func BuildTableTemplateData(schema string, table *models.Table) TableTemplateData {
	// Collect the table's columns (note: Go map iteration order is not deterministic here)
columns := make([]*models.Column, 0, len(table.Columns))
for _, col := range table.Columns {
columns = append(columns, col)
}
// Find primary key constraint
var pk *models.Constraint
for _, constraint := range table.Constraints {
if constraint.Type == models.PrimaryKeyConstraint {
pk = constraint
break
}
}
// If no explicit primary key constraint, build one from columns with IsPrimaryKey=true
if pk == nil {
pkCols := []string{}
for _, col := range table.Columns {
if col.IsPrimaryKey {
pkCols = append(pkCols, col.Name)
}
}
if len(pkCols) > 0 {
pk = &models.Constraint{
Name: "pk_" + table.Name,
Type: models.PrimaryKeyConstraint,
Columns: pkCols,
}
}
}
return TableTemplateData{
Schema: schema,
Name: table.Name,
Columns: columns,
PrimaryKey: pk,
}
}


@@ -0,0 +1,4 @@
-- Check constraint: {{.Name}}
-- {{.Expression}}
-- Note: SQLite supports CHECK constraints in CREATE TABLE or ALTER TABLE ADD CHECK
-- This must be added manually to the table definition above


@@ -0,0 +1,6 @@
-- Foreign key: {{.Name}}
-- ALTER TABLE {{quote_ident (qualified_table_name .Schema .Table)}} ADD CONSTRAINT {{quote_ident (format_constraint_name .Schema .Table .Name)}}
-- FOREIGN KEY ({{range $i, $col := .Columns}}{{if $i}}, {{end}}{{quote_ident $col}}{{end}})
-- REFERENCES {{quote_ident (qualified_table_name .ForeignSchema .ForeignTable)}} ({{range $i, $col := .ForeignColumns}}{{if $i}}, {{end}}{{quote_ident $col}}{{end}})
-- {{if .OnDelete}}ON DELETE {{.OnDelete}}{{end}}{{if .OnUpdate}} ON UPDATE {{.OnUpdate}}{{end}};
-- Note: Foreign keys should be defined in CREATE TABLE for better SQLite compatibility


@@ -0,0 +1 @@
CREATE INDEX {{quote_ident (format_constraint_name .Schema .Table .Name)}} ON {{quote_ident (qualified_table_name .Schema .Table)}} ({{range $i, $col := .Columns}}{{if $i}}, {{end}}{{quote_ident $col}}{{end}});


@@ -0,0 +1,9 @@
CREATE TABLE {{quote_ident (qualified_table_name .Schema .Name)}} (
{{- $hasAutoIncrement := false}}
{{- range $i, $col := .Columns}}{{if $i}},{{end}}
{{quote_ident $col.Name}} {{map_type $col.Type}}{{if is_autoincrement $col}}{{$hasAutoIncrement = true}} PRIMARY KEY AUTOINCREMENT{{else}}{{if $col.NotNull}} NOT NULL{{end}}{{if ne (format_default $col) ""}} DEFAULT {{format_default $col}}{{end}}{{end}}
{{- end}}
{{- if and .PrimaryKey (not $hasAutoIncrement)}}{{if gt (len .Columns) 0}},{{end}}
PRIMARY KEY ({{range $i, $colName := .PrimaryKey.Columns}}{{if $i}}, {{end}}{{quote_ident $colName}}{{end}})
{{- end}}
);

View File

@@ -0,0 +1 @@
CREATE UNIQUE INDEX {{quote_ident (format_constraint_name .Schema .Table .Name)}} ON {{quote_ident (qualified_table_name .Schema .Table)}} ({{range $i, $col := .Columns}}{{if $i}}, {{end}}{{quote_ident $col}}{{end}});

View File

@@ -0,0 +1,2 @@
-- Enable foreign key constraints
PRAGMA foreign_keys = ON;
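
One caveat worth keeping in mind: `PRAGMA foreign_keys` is a per-connection setting in SQLite, so the statement above only affects the connection that executes the generated script. A hedged sketch of applying the script from Go (the github.com/mattn/go-sqlite3 driver and the script file name are assumptions; illustrative only):

```go
package main

import (
	"database/sql"
	"log"
	"os"

	_ "github.com/mattn/go-sqlite3" // registers the "sqlite3" driver (assumed dependency)
)

func main() {
	script, err := os.ReadFile("schema.sqlite.sql") // hypothetical output of the writer
	if err != nil {
		log.Fatal(err)
	}
	db, err := sql.Open("sqlite3", "app.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	// The PRAGMA at the top of the script only applies to the connection
	// running it; long-lived applications should re-enable it per connection.
	if _, err := db.Exec(string(script)); err != nil {
		log.Fatalf("apply schema: %v", err)
	}
}
```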

View File

@@ -0,0 +1,291 @@
package sqlite
import (
"fmt"
"io"
"os"
"strings"
"git.warky.dev/wdevs/relspecgo/pkg/models"
"git.warky.dev/wdevs/relspecgo/pkg/writers"
)
// Writer implements the Writer interface for SQLite SQL output
type Writer struct {
options *writers.WriterOptions
writer io.Writer
executor *TemplateExecutor
}
// NewWriter creates a new SQLite SQL writer
// SQLite doesn't support schemas, so FlattenSchema is automatically enabled
func NewWriter(options *writers.WriterOptions) *Writer {
// Force schema flattening for SQLite
options.FlattenSchema = true
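// NOTE: the error from NewTemplateExecutor is discarded below; if template
// parsing ever fails, executor will be nil and template execution will panic.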
executor, _ := NewTemplateExecutor(options)
return &Writer{
options: options,
executor: executor,
}
}
// WriteDatabase writes the entire database schema as SQLite SQL
func (w *Writer) WriteDatabase(db *models.Database) error {
var writer io.Writer
var file *os.File
var err error
// Use existing writer if already set (for testing)
if w.writer != nil {
writer = w.writer
} else if w.options.OutputPath != "" {
// Determine output destination
file, err = os.Create(w.options.OutputPath)
if err != nil {
return fmt.Errorf("failed to create output file: %w", err)
}
defer file.Close()
writer = file
} else {
writer = os.Stdout
}
w.writer = writer
// Write header comment
fmt.Fprintf(w.writer, "-- SQLite Database Schema\n")
fmt.Fprintf(w.writer, "-- Database: %s\n", db.Name)
fmt.Fprintf(w.writer, "-- Generated by RelSpec\n")
fmt.Fprintf(w.writer, "-- Note: Schema names have been flattened (e.g., public.users -> public_users)\n\n")
// Enable foreign keys
pragma, err := w.executor.ExecutePragmaForeignKeys()
if err != nil {
return fmt.Errorf("failed to generate pragma statement: %w", err)
}
fmt.Fprintf(w.writer, "%s\n", pragma)
// Process each schema in the database
for _, schema := range db.Schemas {
if err := w.WriteSchema(schema); err != nil {
return fmt.Errorf("failed to write schema %s: %w", schema.Name, err)
}
}
return nil
}
// WriteSchema writes a single schema as SQLite SQL
func (w *Writer) WriteSchema(schema *models.Schema) error {
// SQLite doesn't have schemas, so we just write a comment
if schema.Name != "" {
fmt.Fprintf(w.writer, "-- Schema: %s (flattened into table names)\n\n", schema.Name)
}
// Phase 1: Create tables
for _, table := range schema.Tables {
if err := w.writeTable(schema.Name, table); err != nil {
return fmt.Errorf("failed to write table %s: %w", table.Name, err)
}
}
// Phase 2: Create indexes
for _, table := range schema.Tables {
if err := w.writeIndexes(schema.Name, table); err != nil {
return fmt.Errorf("failed to write indexes for table %s: %w", table.Name, err)
}
}
// Phase 3: Create unique constraints (as unique indexes)
for _, table := range schema.Tables {
if err := w.writeUniqueConstraints(schema.Name, table); err != nil {
return fmt.Errorf("failed to write unique constraints for table %s: %w", table.Name, err)
}
}
// Phase 4: Check constraints (as comments, since SQLite requires them in CREATE TABLE)
for _, table := range schema.Tables {
if err := w.writeCheckConstraints(schema.Name, table); err != nil {
return fmt.Errorf("failed to write check constraints for table %s: %w", table.Name, err)
}
}
// Phase 5: Foreign keys (as comments for compatibility)
for _, table := range schema.Tables {
if err := w.writeForeignKeys(schema.Name, table); err != nil {
return fmt.Errorf("failed to write foreign keys for table %s: %w", table.Name, err)
}
}
return nil
}
// WriteTable writes a single table as SQLite SQL
func (w *Writer) WriteTable(table *models.Table) error {
return w.writeTable("", table)
}
// writeTable is the internal implementation
func (w *Writer) writeTable(schema string, table *models.Table) error {
// Build table template data
data := BuildTableTemplateData(schema, table)
// Execute template
sql, err := w.executor.ExecuteCreateTable(data)
if err != nil {
return fmt.Errorf("failed to execute create table template: %w", err)
}
fmt.Fprintf(w.writer, "%s\n", sql)
return nil
}
// writeIndexes writes indexes for a table
func (w *Writer) writeIndexes(schema string, table *models.Table) error {
for _, index := range table.Indexes {
// Skip primary key indexes
if strings.HasSuffix(index.Name, "_pkey") {
continue
}
// Skip unique indexes (handled separately as unique constraints)
if index.Unique {
continue
}
data := IndexTemplateData{
Schema: schema,
Table: table.Name,
Name: index.Name,
Columns: index.Columns,
}
sql, err := w.executor.ExecuteCreateIndex(data)
if err != nil {
return fmt.Errorf("failed to execute create index template: %w", err)
}
fmt.Fprintf(w.writer, "%s\n", sql)
}
return nil
}
// writeUniqueConstraints writes unique constraints as unique indexes
func (w *Writer) writeUniqueConstraints(schema string, table *models.Table) error {
for _, constraint := range table.Constraints {
if constraint.Type != models.UniqueConstraint {
continue
}
data := ConstraintTemplateData{
Schema: schema,
Table: table.Name,
Name: constraint.Name,
Columns: constraint.Columns,
}
sql, err := w.executor.ExecuteCreateUniqueConstraint(data)
if err != nil {
return fmt.Errorf("failed to execute create unique constraint template: %w", err)
}
fmt.Fprintf(w.writer, "%s\n", sql)
}
// Also handle unique indexes from the Indexes map
for _, index := range table.Indexes {
if !index.Unique {
continue
}
// Skip if already handled as a constraint
alreadyHandled := false
for _, constraint := range table.Constraints {
if constraint.Type == models.UniqueConstraint && constraint.Name == index.Name {
alreadyHandled = true
break
}
}
if alreadyHandled {
continue
}
data := ConstraintTemplateData{
Schema: schema,
Table: table.Name,
Name: index.Name,
Columns: index.Columns,
}
sql, err := w.executor.ExecuteCreateUniqueConstraint(data)
if err != nil {
return fmt.Errorf("failed to execute create unique index template: %w", err)
}
fmt.Fprintf(w.writer, "%s\n", sql)
}
return nil
}
// writeCheckConstraints writes check constraints as comments
func (w *Writer) writeCheckConstraints(schema string, table *models.Table) error {
for _, constraint := range table.Constraints {
if constraint.Type != models.CheckConstraint {
continue
}
data := ConstraintTemplateData{
Schema: schema,
Table: table.Name,
Name: constraint.Name,
Expression: constraint.Expression,
}
sql, err := w.executor.ExecuteCreateCheckConstraint(data)
if err != nil {
return fmt.Errorf("failed to execute create check constraint template: %w", err)
}
fmt.Fprintf(w.writer, "%s\n", sql)
}
return nil
}
// writeForeignKeys writes foreign keys as comments
func (w *Writer) writeForeignKeys(schema string, table *models.Table) error {
for _, constraint := range table.Constraints {
if constraint.Type != models.ForeignKeyConstraint {
continue
}
refSchema := constraint.ReferencedSchema
if refSchema == "" {
refSchema = schema
}
data := ConstraintTemplateData{
Schema: schema,
Table: table.Name,
Name: constraint.Name,
Columns: constraint.Columns,
ForeignSchema: refSchema,
ForeignTable: constraint.ReferencedTable,
ForeignColumns: constraint.ReferencedColumns,
OnDelete: constraint.OnDelete,
OnUpdate: constraint.OnUpdate,
}
sql, err := w.executor.ExecuteCreateForeignKey(data)
if err != nil {
return fmt.Errorf("failed to execute create foreign key template: %w", err)
}
fmt.Fprintf(w.writer, "%s\n", sql)
}
return nil
}
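
For reference, a minimal caller for this writer might look as follows (a sketch: the sqlite import path and the calling context are assumptions; in practice the convert command wires this up):

```go
package main

import (
	"log"

	"git.warky.dev/wdevs/relspecgo/pkg/models"
	"git.warky.dev/wdevs/relspecgo/pkg/writers"
	"git.warky.dev/wdevs/relspecgo/pkg/writers/sqlite" // assumed package path
)

// writeSQLiteSchema emits a populated models.Database as a SQLite SQL script.
func writeSQLiteSchema(db *models.Database) {
	opts := &writers.WriterOptions{OutputPath: "schema.sqlite.sql"}
	w := sqlite.NewWriter(opts) // forces FlattenSchema = true
	if err := w.WriteDatabase(db); err != nil {
		log.Fatalf("write sqlite schema: %v", err)
	}
}
```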

View File

@@ -0,0 +1,418 @@
package sqlite
import (
"bytes"
"strings"
"testing"
"git.warky.dev/wdevs/relspecgo/pkg/models"
"git.warky.dev/wdevs/relspecgo/pkg/writers"
)
func TestNewWriter(t *testing.T) {
opts := &writers.WriterOptions{
OutputPath: "/tmp/test.sql",
FlattenSchema: false, // Should be forced to true
}
writer := NewWriter(opts)
if !writer.options.FlattenSchema {
t.Error("Expected FlattenSchema to be forced to true for SQLite")
}
}
func TestWriteDatabase(t *testing.T) {
db := &models.Database{
Name: "testdb",
Schemas: []*models.Schema{
{
Name: "public",
Tables: []*models.Table{
{
Name: "users",
Columns: map[string]*models.Column{
"id": {
Name: "id",
Type: "serial",
NotNull: true,
IsPrimaryKey: true,
Default: "nextval('users_id_seq'::regclass)",
},
"email": {
Name: "email",
Type: "varchar(255)",
NotNull: true,
},
"active": {
Name: "active",
Type: "boolean",
NotNull: true,
Default: "true",
},
},
Constraints: map[string]*models.Constraint{
"pk_users": {
Name: "pk_users",
Type: models.PrimaryKeyConstraint,
Columns: []string{"id"},
},
},
},
},
},
},
}
var buf bytes.Buffer
opts := &writers.WriterOptions{}
writer := NewWriter(opts)
writer.writer = &buf
err := writer.WriteDatabase(db)
if err != nil {
t.Fatalf("WriteDatabase failed: %v", err)
}
output := buf.String()
// Check for expected elements
if !strings.Contains(output, "PRAGMA foreign_keys = ON") {
t.Error("Expected PRAGMA foreign_keys statement")
}
if !strings.Contains(output, "CREATE TABLE") {
t.Error("Expected CREATE TABLE statement")
}
if !strings.Contains(output, "\"public_users\"") {
t.Error("Expected flattened table name public_users")
}
if !strings.Contains(output, "INTEGER PRIMARY KEY AUTOINCREMENT") {
t.Error("Expected autoincrement for serial primary key")
}
if !strings.Contains(output, "TEXT") {
t.Error("Expected TEXT type for varchar")
}
// Boolean columns map to INTEGER (default true becomes 1); here we only verify the column exists
if !strings.Contains(output, "active") {
t.Error("Expected active column")
}
}
func TestDataTypeMapping(t *testing.T) {
tests := []struct {
pgType string
expected string
}{
{"varchar(255)", "TEXT"},
{"text", "TEXT"},
{"integer", "INTEGER"},
{"bigint", "INTEGER"},
{"serial", "INTEGER"},
{"boolean", "INTEGER"},
{"real", "REAL"},
{"double precision", "REAL"},
{"numeric(10,2)", "NUMERIC"},
{"decimal", "NUMERIC"},
{"bytea", "BLOB"},
{"timestamp", "TEXT"},
{"uuid", "TEXT"},
{"json", "TEXT"},
{"jsonb", "TEXT"},
}
for _, tt := range tests {
result := MapPostgreSQLType(tt.pgType)
if result != tt.expected {
t.Errorf("MapPostgreSQLType(%q) = %q, want %q", tt.pgType, result, tt.expected)
}
}
}
func TestIsAutoIncrementCandidate(t *testing.T) {
tests := []struct {
name string
col *models.Column
expected bool
}{
{
name: "serial primary key",
col: &models.Column{
Name: "id",
Type: "serial",
IsPrimaryKey: true,
Default: "nextval('seq')",
},
expected: true,
},
{
name: "integer primary key with nextval",
col: &models.Column{
Name: "id",
Type: "integer",
IsPrimaryKey: true,
Default: "nextval('users_id_seq'::regclass)",
},
expected: true,
},
{
name: "integer not primary key",
col: &models.Column{
Name: "count",
Type: "integer",
IsPrimaryKey: false,
Default: "0",
},
expected: false,
},
{
name: "varchar primary key",
col: &models.Column{
Name: "code",
Type: "varchar",
IsPrimaryKey: true,
},
expected: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result := IsAutoIncrementCandidate(tt.col)
if result != tt.expected {
t.Errorf("IsAutoIncrementCandidate() = %v, want %v", result, tt.expected)
}
})
}
}
func TestFormatDefault(t *testing.T) {
tests := []struct {
name string
col *models.Column
expected string
}{
{
name: "current_timestamp",
col: &models.Column{
Type: "timestamp",
Default: "CURRENT_TIMESTAMP",
},
expected: "CURRENT_TIMESTAMP",
},
{
name: "now()",
col: &models.Column{
Type: "timestamp",
Default: "now()",
},
expected: "CURRENT_TIMESTAMP",
},
{
name: "boolean true",
col: &models.Column{
Type: "boolean",
Default: "true",
},
expected: "1",
},
{
name: "boolean false",
col: &models.Column{
Type: "boolean",
Default: "false",
},
expected: "0",
},
{
name: "serial autoincrement",
col: &models.Column{
Type: "serial",
IsPrimaryKey: true,
Default: "nextval('seq')",
},
expected: "",
},
{
name: "uuid default removed",
col: &models.Column{
Type: "uuid",
Default: "gen_random_uuid()",
},
expected: "",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result := FormatDefault(tt.col)
if result != tt.expected {
t.Errorf("FormatDefault() = %q, want %q", result, tt.expected)
}
})
}
}
func TestWriteSchema_MultiSchema(t *testing.T) {
db := &models.Database{
Name: "testdb",
Schemas: []*models.Schema{
{
Name: "auth",
Tables: []*models.Table{
{
Name: "sessions",
Columns: map[string]*models.Column{
"id": {
Name: "id",
Type: "uuid",
NotNull: true,
IsPrimaryKey: true,
},
},
Constraints: map[string]*models.Constraint{
"pk_sessions": {
Name: "pk_sessions",
Type: models.PrimaryKeyConstraint,
Columns: []string{"id"},
},
},
},
},
},
{
Name: "public",
Tables: []*models.Table{
{
Name: "posts",
Columns: map[string]*models.Column{
"id": {
Name: "id",
Type: "integer",
NotNull: true,
IsPrimaryKey: true,
},
},
Constraints: map[string]*models.Constraint{
"pk_posts": {
Name: "pk_posts",
Type: models.PrimaryKeyConstraint,
Columns: []string{"id"},
},
},
},
},
},
},
}
var buf bytes.Buffer
opts := &writers.WriterOptions{}
writer := NewWriter(opts)
writer.writer = &buf
err := writer.WriteDatabase(db)
if err != nil {
t.Fatalf("WriteDatabase failed: %v", err)
}
output := buf.String()
// Check for flattened table names from both schemas
if !strings.Contains(output, "\"auth_sessions\"") {
t.Error("Expected flattened table name auth_sessions")
}
if !strings.Contains(output, "\"public_posts\"") {
t.Error("Expected flattened table name public_posts")
}
}
func TestWriteIndexes(t *testing.T) {
table := &models.Table{
Name: "users",
Columns: map[string]*models.Column{
"email": {
Name: "email",
Type: "varchar(255)",
},
},
Indexes: map[string]*models.Index{
"idx_users_email": {
Name: "idx_users_email",
Columns: []string{"email"},
},
},
}
var buf bytes.Buffer
opts := &writers.WriterOptions{}
writer := NewWriter(opts)
writer.writer = &buf
err := writer.writeIndexes("public", table)
if err != nil {
t.Fatalf("writeIndexes failed: %v", err)
}
output := buf.String()
if !strings.Contains(output, "CREATE INDEX") {
t.Error("Expected CREATE INDEX statement")
}
if !strings.Contains(output, "public_users_idx_users_email") {
t.Errorf("Expected flattened index name public_users_idx_users_email, got output:\n%s", output)
}
}
func TestWriteUniqueConstraints(t *testing.T) {
table := &models.Table{
Name: "users",
Constraints: map[string]*models.Constraint{
"uk_users_email": {
Name: "uk_users_email",
Type: models.UniqueConstraint,
Columns: []string{"email"},
},
},
}
var buf bytes.Buffer
opts := &writers.WriterOptions{}
writer := NewWriter(opts)
writer.writer = &buf
err := writer.writeUniqueConstraints("public", table)
if err != nil {
t.Fatalf("writeUniqueConstraints failed: %v", err)
}
output := buf.String()
if !strings.Contains(output, "CREATE UNIQUE INDEX") {
t.Error("Expected CREATE UNIQUE INDEX statement")
}
}
func TestQuoteIdentifier(t *testing.T) {
tests := []struct {
input string
expected string
}{
{"users", `"users"`},
{"public_users", `"public_users"`},
{`user"name`, `"user""name"`}, // Double quotes should be escaped
}
for _, tt := range tests {
result := QuoteIdentifier(tt.input)
if result != tt.expected {
t.Errorf("QuoteIdentifier(%q) = %q, want %q", tt.input, result, tt.expected)
}
}
}
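
The tables in TestDataTypeMapping and TestIsAutoIncrementCandidate pin down the expected behavior; below is a sketch consistent with those expectations (illustrative only; the package's actual MapPostgreSQLType and IsAutoIncrementCandidate may be structured differently):

```go
// mapPostgreSQLTypeSketch mirrors the expectations in TestDataTypeMapping.
func mapPostgreSQLTypeSketch(pgType string) string {
	t := strings.ToLower(pgType)
	if i := strings.Index(t, "("); i >= 0 { // strip varchar(255), numeric(10,2), ...
		t = t[:i]
	}
	switch t {
	case "integer", "bigint", "serial", "boolean":
		return "INTEGER"
	case "real", "double precision":
		return "REAL"
	case "numeric", "decimal":
		return "NUMERIC"
	case "bytea":
		return "BLOB"
	default: // varchar, text, timestamp, uuid, json, jsonb, ...
		return "TEXT"
	}
}

// isAutoIncrementSketch mirrors TestIsAutoIncrementCandidate: primary keys
// that are serial types, or integer types whose default is a sequence.
func isAutoIncrementSketch(col *models.Column) bool {
	if !col.IsPrimaryKey {
		return false
	}
	t := strings.ToLower(col.Type)
	if strings.HasPrefix(t, "serial") || strings.HasPrefix(t, "bigserial") {
		return true
	}
	return (t == "integer" || t == "bigint" || t == "int") &&
		strings.Contains(strings.ToLower(col.Default), "nextval")
}
```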

286
test_data/mssql/TESTING.md Normal file
View File

@@ -0,0 +1,286 @@
# MSSQL Reader and Writer Testing Guide
## Prerequisites
- Docker and Docker Compose installed
- RelSpec binary built (`make build`)
- jq (optional, for JSON processing)
## Quick Start
### 1. Start SQL Server Express
```bash
docker-compose up -d mssql
# Wait for container to be healthy
docker-compose ps
# Monitor startup logs
docker-compose logs -f mssql
```
### 2. Verify Database Creation
```bash
docker exec -it $(docker-compose ps -q mssql) \
/opt/mssql-tools/bin/sqlcmd \
-S localhost \
-U sa \
-P 'StrongPassword123!' \
-Q "SELECT name FROM sys.databases WHERE name = 'RelSpecTest'"
```
## Testing Scenarios
### Scenario 1: Read MSSQL Database to JSON
Read the test schema from MSSQL and export to JSON:
```bash
./build/relspec convert \
--from mssql \
--from-conn "sqlserver://sa:StrongPassword123!@localhost:1433/RelSpecTest" \
--to json \
--to-path test_output.json
```
Verify output:
```bash
jq '.Schemas[0].Tables | length' test_output.json
jq '.Schemas[0].Tables[0]' test_output.json
```
### Scenario 2: Read MSSQL Database to DBML
Convert MSSQL schema to DBML format:
```bash
./build/relspec convert \
--from mssql \
--from-conn "sqlserver://sa:StrongPassword123!@localhost:1433/RelSpecTest" \
--to dbml \
--to-path test_output.dbml
```
### Scenario 3: Generate SQL Script (No Direct Execution)
Generate SQL script without executing:
```bash
./build/relspec convert \
--from mssql \
--from-conn "sqlserver://sa:StrongPassword123!@localhost:1433/RelSpecTest" \
--to mssql \
--to-path test_output.sql
```
Inspect generated SQL:
```bash
head -50 test_output.sql
```
### Scenario 4: Round-Trip Conversion (MSSQL → JSON → MSSQL)
Test bidirectional conversion:
```bash
# Step 1: MSSQL → JSON
./build/relspec convert \
--from mssql \
--from-conn "sqlserver://sa:StrongPassword123!@localhost:1433/RelSpecTest" \
--to json \
--to-path backup.json
# Step 2: JSON → MSSQL SQL
./build/relspec convert \
--from json \
--from-path backup.json \
--to mssql \
--to-path restore.sql
# Inspect SQL
cat restore.sql | head -50
```
### Scenario 5: Cross-Database Conversion
If you have PostgreSQL running, test conversion:
```bash
# MSSQL → PostgreSQL SQL
./build/relspec convert \
--from mssql \
--from-conn "sqlserver://sa:StrongPassword123!@localhost:1433/RelSpecTest" \
--to pgsql \
--to-path mssql_to_pg.sql
```
### Scenario 6: Test Type Mappings
Create a JSON file with various types and convert to MSSQL:
```json
{
"Name": "TypeTest",
"Schemas": [
{
"Name": "dbo",
"Tables": [
{
"Name": "type_samples",
"Columns": {
"id": {
"Name": "id",
"Type": "int",
"AutoIncrement": true,
"NotNull": true,
"Sequence": 1
},
"big_num": {
"Name": "big_num",
"Type": "int64",
"Sequence": 2
},
"is_active": {
"Name": "is_active",
"Type": "bool",
"Sequence": 3
},
"description": {
"Name": "description",
"Type": "text",
"Sequence": 4
},
"created_at": {
"Name": "created_at",
"Type": "timestamp",
"NotNull": true,
"Default": "GETDATE()",
"Sequence": 5
},
"unique_id": {
"Name": "unique_id",
"Type": "uuid",
"Sequence": 6
},
"metadata": {
"Name": "metadata",
"Type": "json",
"Sequence": 7
},
"binary_data": {
"Name": "binary_data",
"Type": "bytea",
"Sequence": 8
}
},
"Constraints": {
"PK_type_samples_id": {
"Name": "PK_type_samples_id",
"Type": "PRIMARY_KEY",
"Columns": ["id"]
}
}
}
]
}
]
}
```
Convert to MSSQL:
```bash
./build/relspec convert \
--from json \
--from-path type_test.json \
--to mssql \
--to-path type_test.sql
cat type_test.sql
```
## Cleanup
Stop and remove the SQL Server container:
```bash
docker-compose down
# Clean up test files
rm -f test_output.* backup.json restore.sql
```
## Troubleshooting
### Container won't start
Check Docker daemon is running and database logs:
```bash
docker-compose logs mssql
```
### Connection refused errors
Wait for container to be healthy:
```bash
docker-compose ps
# Wait until STATUS shows "healthy"
# Or check manually
docker exec -it $(docker-compose ps -q mssql) \
/opt/mssql-tools/bin/sqlcmd \
-S localhost \
-U sa \
-P 'StrongPassword123!' \
-Q "SELECT @@VERSION"
```
### Test schema not found
Initialize the test schema:
```bash
docker exec -i $(docker-compose ps -q mssql) \
/opt/mssql-tools/bin/sqlcmd \
-S localhost \
-U sa \
-P 'StrongPassword123!' \
< test_data/mssql/test_schema.sql
```
### Connection string format issues
Use the correct format for connection strings:
- Default port: 1433
- Username: `sa`
- Password: `StrongPassword123!`
- Database: `RelSpecTest`
Format: `sqlserver://sa:StrongPassword123!@localhost:1433/RelSpecTest`
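
To sanity-check the connection string outside RelSpec, a small Go probe can help (a sketch assuming the github.com/microsoft/go-mssqldb driver, which registers the `sqlserver` scheme; note that the driver itself expects the database as a `?database=` query parameter, while the path form above is what the RelSpec CLI accepts):

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/microsoft/go-mssqldb" // registers the "sqlserver" driver
)

func main() {
	dsn := "sqlserver://sa:StrongPassword123!@localhost:1433?database=RelSpecTest"
	db, err := sql.Open("sqlserver", dsn)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	if err := db.Ping(); err != nil {
		log.Fatalf("connection failed: %v", err)
	}
	log.Println("connected to RelSpecTest")
}
```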
## Performance Notes
- Initial reader setup may take a few seconds
- Type mapping queries are cached within a single read operation
- Direct execution mode is atomic per table/constraint
- Large schemas (100+ tables) should complete in under 5 seconds
## Unit Test Verification
Run the MSSQL-specific tests:
```bash
# Type mapping tests
go test ./pkg/mssql/... -v
# Reader tests
go test ./pkg/readers/mssql/... -v
# Writer tests
go test ./pkg/writers/mssql/... -v
# All together
go test ./pkg/mssql/... ./pkg/readers/mssql/... ./pkg/writers/mssql/... -v
```
Expected output: All tests should PASS

View File

@@ -0,0 +1,187 @@
-- Test schema for MSSQL Reader integration tests
-- This script creates a sample database for testing the MSSQL reader
USE master;
GO
-- Drop existing database if it exists
IF EXISTS (SELECT 1 FROM sys.databases WHERE name = 'RelSpecTest')
BEGIN
ALTER DATABASE RelSpecTest SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DROP DATABASE RelSpecTest;
END
GO
-- Create test database
CREATE DATABASE RelSpecTest;
GO
USE RelSpecTest;
GO
-- Create schemas
CREATE SCHEMA [public];
GO
CREATE SCHEMA [auth];
GO
-- Create tables in public schema
CREATE TABLE [public].[users] (
[id] INT IDENTITY(1,1) NOT NULL,
[email] NVARCHAR(255) NOT NULL,
[username] NVARCHAR(100) NOT NULL,
[created_at] DATETIME2 NOT NULL DEFAULT GETDATE(),
[updated_at] DATETIME2 NULL,
[is_active] BIT NOT NULL DEFAULT 1,
PRIMARY KEY ([id]),
UNIQUE ([email]),
UNIQUE ([username])
);
GO
CREATE TABLE [public].[posts] (
[id] INT IDENTITY(1,1) NOT NULL,
[user_id] INT NOT NULL,
[title] NVARCHAR(255) NOT NULL,
[content] NVARCHAR(MAX) NOT NULL,
[published_at] DATETIME2 NULL,
[created_at] DATETIME2 NOT NULL DEFAULT GETDATE(),
PRIMARY KEY ([id])
);
GO
CREATE TABLE [public].[comments] (
[id] INT IDENTITY(1,1) NOT NULL,
[post_id] INT NOT NULL,
[user_id] INT NOT NULL,
[content] NVARCHAR(MAX) NOT NULL,
[created_at] DATETIME2 NOT NULL DEFAULT GETDATE(),
PRIMARY KEY ([id])
);
GO
-- Create tables in auth schema
CREATE TABLE [auth].[roles] (
[id] INT IDENTITY(1,1) NOT NULL,
[name] NVARCHAR(100) NOT NULL,
[description] NVARCHAR(MAX) NULL,
PRIMARY KEY ([id]),
UNIQUE ([name])
);
GO
CREATE TABLE [auth].[user_roles] (
[id] INT IDENTITY(1,1) NOT NULL,
[user_id] INT NOT NULL,
[role_id] INT NOT NULL,
PRIMARY KEY ([id]),
UNIQUE ([user_id], [role_id])
);
GO
-- Add foreign keys
ALTER TABLE [public].[posts]
ADD CONSTRAINT [FK_posts_users]
FOREIGN KEY ([user_id])
REFERENCES [public].[users] ([id])
ON DELETE CASCADE ON UPDATE NO ACTION;
GO
ALTER TABLE [public].[comments]
ADD CONSTRAINT [FK_comments_posts]
FOREIGN KEY ([post_id])
REFERENCES [public].[posts] ([id])
ON DELETE CASCADE ON UPDATE NO ACTION;
GO
ALTER TABLE [public].[comments]
ADD CONSTRAINT [FK_comments_users]
FOREIGN KEY ([user_id])
REFERENCES [public].[users] ([id])
ON DELETE CASCADE ON UPDATE NO ACTION;
GO
ALTER TABLE [auth].[user_roles]
ADD CONSTRAINT [FK_user_roles_users]
FOREIGN KEY ([user_id])
REFERENCES [public].[users] ([id])
ON DELETE CASCADE ON UPDATE NO ACTION;
GO
ALTER TABLE [auth].[user_roles]
ADD CONSTRAINT [FK_user_roles_roles]
FOREIGN KEY ([role_id])
REFERENCES [auth].[roles] ([id])
ON DELETE CASCADE ON UPDATE NO ACTION;
GO
-- Create indexes
CREATE INDEX [IDX_users_email] ON [public].[users] ([email]);
GO
CREATE INDEX [IDX_posts_user_id] ON [public].[posts] ([user_id]);
GO
CREATE INDEX [IDX_comments_post_id] ON [public].[comments] ([post_id]);
GO
CREATE INDEX [IDX_comments_user_id] ON [public].[comments] ([user_id]);
GO
-- Add extended properties (comments)
EXEC sp_addextendedproperty
@name = 'MS_Description',
@value = 'User accounts table',
@level0type = 'SCHEMA', @level0name = 'public',
@level1type = 'TABLE', @level1name = 'users';
GO
EXEC sp_addextendedproperty
@name = 'MS_Description',
@value = 'User unique identifier',
@level0type = 'SCHEMA', @level0name = 'public',
@level1type = 'TABLE', @level1name = 'users',
@level2type = 'COLUMN', @level2name = 'id';
GO
EXEC sp_addextendedproperty
@name = 'MS_Description',
@value = 'User email address',
@level0type = 'SCHEMA', @level0name = 'public',
@level1type = 'TABLE', @level1name = 'users',
@level2type = 'COLUMN', @level2name = 'email';
GO
EXEC sp_addextendedproperty
@name = 'MS_Description',
@value = 'Blog posts table',
@level0type = 'SCHEMA', @level0name = 'public',
@level1type = 'TABLE', @level1name = 'posts';
GO
EXEC sp_addextendedproperty
@name = 'MS_Description',
@value = 'User roles mapping table',
@level0type = 'SCHEMA', @level0name = 'auth',
@level1type = 'TABLE', @level1name = 'user_roles';
GO
-- Add check constraint
ALTER TABLE [public].[users]
ADD CONSTRAINT [CK_users_email_format]
CHECK (LEN(email) > 0 AND email LIKE '%@%.%');
GO
-- Verify schema was created
SELECT
SCHEMA_NAME(s.schema_id) as [Schema],
t.name as [Table],
COUNT(c.column_id) as [ColumnCount]
FROM sys.tables t
INNER JOIN sys.schemas s ON t.schema_id = s.schema_id
LEFT JOIN sys.columns c ON t.object_id = c.object_id
WHERE SCHEMA_NAME(s.schema_id) IN ('public', 'auth')
GROUP BY SCHEMA_NAME(s.schema_id), t.name
ORDER BY [Schema], [Table];
GO

21
vendor/github.com/dustin/go-humanize/.travis.yml generated vendored Normal file
View File

@@ -0,0 +1,21 @@
sudo: false
language: go
go_import_path: github.com/dustin/go-humanize
go:
- 1.13.x
- 1.14.x
- 1.15.x
- 1.16.x
- stable
- master
matrix:
allow_failures:
- go: master
fast_finish: true
install:
- # Do nothing. This is needed to prevent default install action "go get -t -v ./..." from happening here (we want it to happen inside script step).
script:
- diff -u <(echo -n) <(gofmt -d -s .)
- go vet .
- go install -v -race ./...
- go test -v -race ./...

21
vendor/github.com/dustin/go-humanize/LICENSE generated vendored Normal file
View File

@@ -0,0 +1,21 @@
Copyright (c) 2005-2008 Dustin Sallings <dustin@spy.net>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
<http://www.opensource.org/licenses/mit-license.php>

124
vendor/github.com/dustin/go-humanize/README.markdown generated vendored Normal file
View File

@@ -0,0 +1,124 @@
# Humane Units [![Build Status](https://travis-ci.org/dustin/go-humanize.svg?branch=master)](https://travis-ci.org/dustin/go-humanize) [![GoDoc](https://godoc.org/github.com/dustin/go-humanize?status.svg)](https://godoc.org/github.com/dustin/go-humanize)
Just a few functions for helping humanize times and sizes.
`go get` it as `github.com/dustin/go-humanize`, import it as
`"github.com/dustin/go-humanize"`, use it as `humanize`.
See [godoc](https://pkg.go.dev/github.com/dustin/go-humanize) for
complete documentation.
## Sizes
This lets you take numbers like `82854982` and convert them to useful
strings like, `83 MB` or `79 MiB` (whichever you prefer).
Example:
```go
fmt.Printf("That file is %s.", humanize.Bytes(82854982)) // That file is 83 MB.
```
## Times
This lets you take a `time.Time` and spit it out in relative terms.
For example, `12 seconds ago` or `3 days from now`.
Example:
```go
fmt.Printf("This was touched %s.", humanize.Time(someTimeInstance)) // This was touched 7 hours ago.
```
Thanks to Kyle Lemons for the time implementation from an IRC
conversation one day. It's pretty neat.
## Ordinals
From a [mailing list discussion][odisc] where a user wanted to be able
to label ordinals.
0 -> 0th
1 -> 1st
2 -> 2nd
3 -> 3rd
4 -> 4th
[...]
Example:
```go
fmt.Printf("You're my %s best friend.", humanize.Ordinal(193)) // You are my 193rd best friend.
```
## Commas
Want to shove commas into numbers? Be my guest.
0 -> 0
100 -> 100
1000 -> 1,000
1000000000 -> 1,000,000,000
-100000 -> -100,000
Example:
```go
fmt.Printf("You owe $%s.\n", humanize.Comma(6582491)) // You owe $6,582,491.
```
## Ftoa
Nicer float64 formatter that removes trailing zeros.
```go
fmt.Printf("%f", 2.24) // 2.240000
fmt.Printf("%s", humanize.Ftoa(2.24)) // 2.24
fmt.Printf("%f", 2.0) // 2.000000
fmt.Printf("%s", humanize.Ftoa(2.0)) // 2
```
## SI notation
Format numbers with [SI notation][sinotation].
Example:
```go
humanize.SI(0.00000000223, "M") // 2.23 nM
```
## English-specific functions
The following functions are in the `humanize/english` subpackage.
### Plurals
Simple English pluralization
```go
english.PluralWord(1, "object", "") // object
english.PluralWord(42, "object", "") // objects
english.PluralWord(2, "bus", "") // buses
english.PluralWord(99, "locus", "loci") // loci
english.Plural(1, "object", "") // 1 object
english.Plural(42, "object", "") // 42 objects
english.Plural(2, "bus", "") // 2 buses
english.Plural(99, "locus", "loci") // 99 loci
```
### Word series
Format comma-separated word lists with conjunctions:
```go
english.WordSeries([]string{"foo"}, "and") // foo
english.WordSeries([]string{"foo", "bar"}, "and") // foo and bar
english.WordSeries([]string{"foo", "bar", "baz"}, "and") // foo, bar and baz
english.OxfordWordSeries([]string{"foo", "bar", "baz"}, "and") // foo, bar, and baz
```
[odisc]: https://groups.google.com/d/topic/golang-nuts/l8NhI74jl-4/discussion
[sinotation]: http://en.wikipedia.org/wiki/Metric_prefix

31
vendor/github.com/dustin/go-humanize/big.go generated vendored Normal file
View File

@@ -0,0 +1,31 @@
package humanize
import (
"math/big"
)
// order of magnitude (to a max order)
func oomm(n, b *big.Int, maxmag int) (float64, int) {
mag := 0
m := &big.Int{}
for n.Cmp(b) >= 0 {
n.DivMod(n, b, m)
mag++
if mag == maxmag && maxmag >= 0 {
break
}
}
return float64(n.Int64()) + (float64(m.Int64()) / float64(b.Int64())), mag
}
// total order of magnitude
// (same as above, but with no upper limit)
func oom(n, b *big.Int) (float64, int) {
mag := 0
m := &big.Int{}
for n.Cmp(b) >= 0 {
n.DivMod(n, b, m)
mag++
}
return float64(n.Int64()) + (float64(m.Int64()) / float64(b.Int64())), mag
}

189
vendor/github.com/dustin/go-humanize/bigbytes.go generated vendored Normal file
View File

@@ -0,0 +1,189 @@
package humanize
import (
"fmt"
"math/big"
"strings"
"unicode"
)
var (
bigIECExp = big.NewInt(1024)
// BigByte is one byte in big.Ints
BigByte = big.NewInt(1)
// BigKiByte is 1,024 bytes in big.Ints
BigKiByte = (&big.Int{}).Mul(BigByte, bigIECExp)
// BigMiByte is 1,024 k bytes in big.Ints
BigMiByte = (&big.Int{}).Mul(BigKiByte, bigIECExp)
// BigGiByte is 1,024 m bytes in big.Ints
BigGiByte = (&big.Int{}).Mul(BigMiByte, bigIECExp)
// BigTiByte is 1,024 g bytes in big.Ints
BigTiByte = (&big.Int{}).Mul(BigGiByte, bigIECExp)
// BigPiByte is 1,024 t bytes in big.Ints
BigPiByte = (&big.Int{}).Mul(BigTiByte, bigIECExp)
// BigEiByte is 1,024 p bytes in big.Ints
BigEiByte = (&big.Int{}).Mul(BigPiByte, bigIECExp)
// BigZiByte is 1,024 e bytes in big.Ints
BigZiByte = (&big.Int{}).Mul(BigEiByte, bigIECExp)
// BigYiByte is 1,024 z bytes in big.Ints
BigYiByte = (&big.Int{}).Mul(BigZiByte, bigIECExp)
// BigRiByte is 1,024 y bytes in big.Ints
BigRiByte = (&big.Int{}).Mul(BigYiByte, bigIECExp)
// BigQiByte is 1,024 r bytes in big.Ints
BigQiByte = (&big.Int{}).Mul(BigRiByte, bigIECExp)
)
var (
bigSIExp = big.NewInt(1000)
// BigSIByte is one SI byte in big.Ints
BigSIByte = big.NewInt(1)
// BigKByte is 1,000 SI bytes in big.Ints
BigKByte = (&big.Int{}).Mul(BigSIByte, bigSIExp)
// BigMByte is 1,000 SI k bytes in big.Ints
BigMByte = (&big.Int{}).Mul(BigKByte, bigSIExp)
// BigGByte is 1,000 SI m bytes in big.Ints
BigGByte = (&big.Int{}).Mul(BigMByte, bigSIExp)
// BigTByte is 1,000 SI g bytes in big.Ints
BigTByte = (&big.Int{}).Mul(BigGByte, bigSIExp)
// BigPByte is 1,000 SI t bytes in big.Ints
BigPByte = (&big.Int{}).Mul(BigTByte, bigSIExp)
// BigEByte is 1,000 SI p bytes in big.Ints
BigEByte = (&big.Int{}).Mul(BigPByte, bigSIExp)
// BigZByte is 1,000 SI e bytes in big.Ints
BigZByte = (&big.Int{}).Mul(BigEByte, bigSIExp)
// BigYByte is 1,000 SI z bytes in big.Ints
BigYByte = (&big.Int{}).Mul(BigZByte, bigSIExp)
// BigRByte is 1,000 SI y bytes in big.Ints
BigRByte = (&big.Int{}).Mul(BigYByte, bigSIExp)
// BigQByte is 1,000 SI r bytes in big.Ints
BigQByte = (&big.Int{}).Mul(BigRByte, bigSIExp)
)
var bigBytesSizeTable = map[string]*big.Int{
"b": BigByte,
"kib": BigKiByte,
"kb": BigKByte,
"mib": BigMiByte,
"mb": BigMByte,
"gib": BigGiByte,
"gb": BigGByte,
"tib": BigTiByte,
"tb": BigTByte,
"pib": BigPiByte,
"pb": BigPByte,
"eib": BigEiByte,
"eb": BigEByte,
"zib": BigZiByte,
"zb": BigZByte,
"yib": BigYiByte,
"yb": BigYByte,
"rib": BigRiByte,
"rb": BigRByte,
"qib": BigQiByte,
"qb": BigQByte,
// Without suffix
"": BigByte,
"ki": BigKiByte,
"k": BigKByte,
"mi": BigMiByte,
"m": BigMByte,
"gi": BigGiByte,
"g": BigGByte,
"ti": BigTiByte,
"t": BigTByte,
"pi": BigPiByte,
"p": BigPByte,
"ei": BigEiByte,
"e": BigEByte,
"z": BigZByte,
"zi": BigZiByte,
"y": BigYByte,
"yi": BigYiByte,
"r": BigRByte,
"ri": BigRiByte,
"q": BigQByte,
"qi": BigQiByte,
}
var ten = big.NewInt(10)
func humanateBigBytes(s, base *big.Int, sizes []string) string {
if s.Cmp(ten) < 0 {
return fmt.Sprintf("%d B", s)
}
c := (&big.Int{}).Set(s)
val, mag := oomm(c, base, len(sizes)-1)
suffix := sizes[mag]
f := "%.0f %s"
if val < 10 {
f = "%.1f %s"
}
return fmt.Sprintf(f, val, suffix)
}
// BigBytes produces a human readable representation of an SI size.
//
// See also: ParseBigBytes.
//
// BigBytes(82854982) -> 83 MB
func BigBytes(s *big.Int) string {
sizes := []string{"B", "kB", "MB", "GB", "TB", "PB", "EB", "ZB", "YB", "RB", "QB"}
return humanateBigBytes(s, bigSIExp, sizes)
}
// BigIBytes produces a human readable representation of an IEC size.
//
// See also: ParseBigBytes.
//
// BigIBytes(82854982) -> 79 MiB
func BigIBytes(s *big.Int) string {
sizes := []string{"B", "KiB", "MiB", "GiB", "TiB", "PiB", "EiB", "ZiB", "YiB", "RiB", "QiB"}
return humanateBigBytes(s, bigIECExp, sizes)
}
// ParseBigBytes parses a string representation of bytes into the number
// of bytes it represents.
//
// See also: BigBytes, BigIBytes.
//
// ParseBigBytes("42 MB") -> 42000000, nil
// ParseBigBytes("42 mib") -> 44040192, nil
func ParseBigBytes(s string) (*big.Int, error) {
lastDigit := 0
hasComma := false
for _, r := range s {
if !(unicode.IsDigit(r) || r == '.' || r == ',') {
break
}
if r == ',' {
hasComma = true
}
lastDigit++
}
num := s[:lastDigit]
if hasComma {
num = strings.Replace(num, ",", "", -1)
}
val := &big.Rat{}
_, err := fmt.Sscanf(num, "%f", val)
if err != nil {
return nil, err
}
extra := strings.ToLower(strings.TrimSpace(s[lastDigit:]))
if m, ok := bigBytesSizeTable[extra]; ok {
mv := (&big.Rat{}).SetInt(m)
val.Mul(val, mv)
rv := &big.Int{}
rv.Div(val.Num(), val.Denom())
return rv, nil
}
return nil, fmt.Errorf("unhandled size name: %v", extra)
}

143
vendor/github.com/dustin/go-humanize/bytes.go generated vendored Normal file
View File

@@ -0,0 +1,143 @@
package humanize
import (
"fmt"
"math"
"strconv"
"strings"
"unicode"
)
// IEC Sizes.
// kibis of bits
const (
Byte = 1 << (iota * 10)
KiByte
MiByte
GiByte
TiByte
PiByte
EiByte
)
// SI Sizes.
const (
IByte = 1
KByte = IByte * 1000
MByte = KByte * 1000
GByte = MByte * 1000
TByte = GByte * 1000
PByte = TByte * 1000
EByte = PByte * 1000
)
var bytesSizeTable = map[string]uint64{
"b": Byte,
"kib": KiByte,
"kb": KByte,
"mib": MiByte,
"mb": MByte,
"gib": GiByte,
"gb": GByte,
"tib": TiByte,
"tb": TByte,
"pib": PiByte,
"pb": PByte,
"eib": EiByte,
"eb": EByte,
// Without suffix
"": Byte,
"ki": KiByte,
"k": KByte,
"mi": MiByte,
"m": MByte,
"gi": GiByte,
"g": GByte,
"ti": TiByte,
"t": TByte,
"pi": PiByte,
"p": PByte,
"ei": EiByte,
"e": EByte,
}
func logn(n, b float64) float64 {
return math.Log(n) / math.Log(b)
}
func humanateBytes(s uint64, base float64, sizes []string) string {
if s < 10 {
return fmt.Sprintf("%d B", s)
}
e := math.Floor(logn(float64(s), base))
suffix := sizes[int(e)]
val := math.Floor(float64(s)/math.Pow(base, e)*10+0.5) / 10
f := "%.0f %s"
if val < 10 {
f = "%.1f %s"
}
return fmt.Sprintf(f, val, suffix)
}
// Bytes produces a human readable representation of an SI size.
//
// See also: ParseBytes.
//
// Bytes(82854982) -> 83 MB
func Bytes(s uint64) string {
sizes := []string{"B", "kB", "MB", "GB", "TB", "PB", "EB"}
return humanateBytes(s, 1000, sizes)
}
// IBytes produces a human readable representation of an IEC size.
//
// See also: ParseBytes.
//
// IBytes(82854982) -> 79 MiB
func IBytes(s uint64) string {
sizes := []string{"B", "KiB", "MiB", "GiB", "TiB", "PiB", "EiB"}
return humanateBytes(s, 1024, sizes)
}
// ParseBytes parses a string representation of bytes into the number
// of bytes it represents.
//
// See Also: Bytes, IBytes.
//
// ParseBytes("42 MB") -> 42000000, nil
// ParseBytes("42 mib") -> 44040192, nil
func ParseBytes(s string) (uint64, error) {
lastDigit := 0
hasComma := false
for _, r := range s {
if !(unicode.IsDigit(r) || r == '.' || r == ',') {
break
}
if r == ',' {
hasComma = true
}
lastDigit++
}
num := s[:lastDigit]
if hasComma {
num = strings.Replace(num, ",", "", -1)
}
f, err := strconv.ParseFloat(num, 64)
if err != nil {
return 0, err
}
extra := strings.ToLower(strings.TrimSpace(s[lastDigit:]))
if m, ok := bytesSizeTable[extra]; ok {
f *= float64(m)
if f >= math.MaxUint64 {
return 0, fmt.Errorf("too large: %v", s)
}
return uint64(f), nil
}
return 0, fmt.Errorf("unhandled size name: %v", extra)
}

116
vendor/github.com/dustin/go-humanize/comma.go generated vendored Normal file
View File

@@ -0,0 +1,116 @@
package humanize
import (
"bytes"
"math"
"math/big"
"strconv"
"strings"
)
// Comma produces a string form of the given number in base 10 with
// commas after every three orders of magnitude.
//
// e.g. Comma(834142) -> 834,142
func Comma(v int64) string {
sign := ""
// Min int64 can't be negated to a usable value, so it has to be special cased.
if v == math.MinInt64 {
return "-9,223,372,036,854,775,808"
}
if v < 0 {
sign = "-"
v = 0 - v
}
parts := []string{"", "", "", "", "", "", ""}
j := len(parts) - 1
for v > 999 {
parts[j] = strconv.FormatInt(v%1000, 10)
switch len(parts[j]) {
case 2:
parts[j] = "0" + parts[j]
case 1:
parts[j] = "00" + parts[j]
}
v = v / 1000
j--
}
parts[j] = strconv.Itoa(int(v))
return sign + strings.Join(parts[j:], ",")
}
// Commaf produces a string form of the given number in base 10 with
// commas after every three orders of magnitude.
//
// e.g. Commaf(834142.32) -> 834,142.32
func Commaf(v float64) string {
buf := &bytes.Buffer{}
if v < 0 {
buf.Write([]byte{'-'})
v = 0 - v
}
comma := []byte{','}
parts := strings.Split(strconv.FormatFloat(v, 'f', -1, 64), ".")
pos := 0
if len(parts[0])%3 != 0 {
pos += len(parts[0]) % 3
buf.WriteString(parts[0][:pos])
buf.Write(comma)
}
for ; pos < len(parts[0]); pos += 3 {
buf.WriteString(parts[0][pos : pos+3])
buf.Write(comma)
}
buf.Truncate(buf.Len() - 1)
if len(parts) > 1 {
buf.Write([]byte{'.'})
buf.WriteString(parts[1])
}
return buf.String()
}
// CommafWithDigits works like the Commaf but limits the resulting
// string to the given number of decimal places.
//
// e.g. CommafWithDigits(834142.32, 1) -> 834,142.3
func CommafWithDigits(f float64, decimals int) string {
return stripTrailingDigits(Commaf(f), decimals)
}
// BigComma produces a string form of the given big.Int in base 10
// with commas after every three orders of magnitude.
func BigComma(b *big.Int) string {
sign := ""
if b.Sign() < 0 {
sign = "-"
b.Abs(b)
}
athousand := big.NewInt(1000)
c := (&big.Int{}).Set(b)
_, m := oom(c, athousand)
parts := make([]string, m+1)
j := len(parts) - 1
mod := &big.Int{}
for b.Cmp(athousand) >= 0 {
b.DivMod(b, athousand, mod)
parts[j] = strconv.FormatInt(mod.Int64(), 10)
switch len(parts[j]) {
case 2:
parts[j] = "0" + parts[j]
case 1:
parts[j] = "00" + parts[j]
}
j--
}
parts[j] = strconv.Itoa(int(b.Int64()))
return sign + strings.Join(parts[j:], ",")
}

41
vendor/github.com/dustin/go-humanize/commaf.go generated vendored Normal file
View File

@@ -0,0 +1,41 @@
//go:build go1.6
// +build go1.6
package humanize
import (
"bytes"
"math/big"
"strings"
)
// BigCommaf produces a string form of the given big.Float in base 10
// with commas after every three orders of magnitude.
func BigCommaf(v *big.Float) string {
buf := &bytes.Buffer{}
if v.Sign() < 0 {
buf.Write([]byte{'-'})
v.Abs(v)
}
comma := []byte{','}
parts := strings.Split(v.Text('f', -1), ".")
pos := 0
if len(parts[0])%3 != 0 {
pos += len(parts[0]) % 3
buf.WriteString(parts[0][:pos])
buf.Write(comma)
}
for ; pos < len(parts[0]); pos += 3 {
buf.WriteString(parts[0][pos : pos+3])
buf.Write(comma)
}
buf.Truncate(buf.Len() - 1)
if len(parts) > 1 {
buf.Write([]byte{'.'})
buf.WriteString(parts[1])
}
return buf.String()
}

49
vendor/github.com/dustin/go-humanize/ftoa.go generated vendored Normal file
View File

@@ -0,0 +1,49 @@
package humanize
import (
"strconv"
"strings"
)
func stripTrailingZeros(s string) string {
if !strings.ContainsRune(s, '.') {
return s
}
offset := len(s) - 1
for offset > 0 {
if s[offset] == '.' {
offset--
break
}
if s[offset] != '0' {
break
}
offset--
}
return s[:offset+1]
}
func stripTrailingDigits(s string, digits int) string {
if i := strings.Index(s, "."); i >= 0 {
if digits <= 0 {
return s[:i]
}
i++
if i+digits >= len(s) {
return s
}
return s[:i+digits]
}
return s
}
// Ftoa converts a float to a string with no trailing zeros.
func Ftoa(num float64) string {
return stripTrailingZeros(strconv.FormatFloat(num, 'f', 6, 64))
}
// FtoaWithDigits converts a float to a string but limits the resulting string
// to the given number of decimal places, and no trailing zeros.
func FtoaWithDigits(num float64, digits int) string {
return stripTrailingZeros(stripTrailingDigits(strconv.FormatFloat(num, 'f', 6, 64), digits))
}

8
vendor/github.com/dustin/go-humanize/humanize.go generated vendored Normal file
View File

@@ -0,0 +1,8 @@
/*
Package humanize converts boring ugly numbers to human-friendly strings and back.
Durations can be turned into strings such as "3 days ago", numbers
representing sizes like 82854982 into useful strings like, "83 MB" or
"79 MiB" (whichever you prefer).
*/
package humanize

192
vendor/github.com/dustin/go-humanize/number.go generated vendored Normal file
View File

@@ -0,0 +1,192 @@
package humanize
/*
Slightly adapted from the source to fit go-humanize.
Author: https://github.com/gorhill
Source: https://gist.github.com/gorhill/5285193
*/
import (
"math"
"strconv"
)
var (
renderFloatPrecisionMultipliers = [...]float64{
1,
10,
100,
1000,
10000,
100000,
1000000,
10000000,
100000000,
1000000000,
}
renderFloatPrecisionRounders = [...]float64{
0.5,
0.05,
0.005,
0.0005,
0.00005,
0.000005,
0.0000005,
0.00000005,
0.000000005,
0.0000000005,
}
)
// FormatFloat produces a formatted number as string based on the following user-specified criteria:
// * thousands separator
// * decimal separator
// * decimal precision
//
// Usage: s := RenderFloat(format, n)
// The format parameter tells how to render the number n.
//
// See examples: http://play.golang.org/p/LXc1Ddm1lJ
//
// Examples of format strings, given n = 12345.6789:
// "#,###.##" => "12,345.67"
// "#,###." => "12,345"
// "#,###" => "12345,678"
// "#\u202F###,##" => "12345,68"
// "#.###,###### => 12.345,678900
// "" (aka default format) => 12,345.67
//
// The highest precision allowed is 9 digits after the decimal symbol.
// There is also a version for integer number, FormatInteger(),
// which is convenient for calls within template.
func FormatFloat(format string, n float64) string {
// Special cases:
// NaN = "NaN"
// +Inf = "+Infinity"
// -Inf = "-Infinity"
if math.IsNaN(n) {
return "NaN"
}
if n > math.MaxFloat64 {
return "Infinity"
}
if n < (0.0 - math.MaxFloat64) {
return "-Infinity"
}
// default format
precision := 2
decimalStr := "."
thousandStr := ","
positiveStr := ""
negativeStr := "-"
if len(format) > 0 {
format := []rune(format)
// If there is an explicit format directive,
// then default values are these:
precision = 9
thousandStr = ""
// collect indices of meaningful formatting directives
formatIndx := []int{}
for i, char := range format {
if char != '#' && char != '0' {
formatIndx = append(formatIndx, i)
}
}
if len(formatIndx) > 0 {
// Directive at index 0:
// Must be a '+'
// Raise an error if not the case
// index: 0123456789
// +0.000,000
// +000,000.0
// +0000.00
// +0000
if formatIndx[0] == 0 {
if format[formatIndx[0]] != '+' {
panic("RenderFloat(): invalid positive sign directive")
}
positiveStr = "+"
formatIndx = formatIndx[1:]
}
// Two directives:
// First is thousands separator
// Raise an error if not followed by 3-digit
// 0123456789
// 0.000,000
// 000,000.00
if len(formatIndx) == 2 {
if (formatIndx[1] - formatIndx[0]) != 4 {
panic("RenderFloat(): thousands separator directive must be followed by 3 digit-specifiers")
}
thousandStr = string(format[formatIndx[0]])
formatIndx = formatIndx[1:]
}
// One directive:
// Directive is decimal separator
// The number of digit-specifier following the separator indicates wanted precision
// 0123456789
// 0.00
// 000,0000
if len(formatIndx) == 1 {
decimalStr = string(format[formatIndx[0]])
precision = len(format) - formatIndx[0] - 1
}
}
}
// generate sign part
var signStr string
if n >= 0.000000001 {
signStr = positiveStr
} else if n <= -0.000000001 {
signStr = negativeStr
n = -n
} else {
signStr = ""
n = 0.0
}
// split number into integer and fractional parts
intf, fracf := math.Modf(n + renderFloatPrecisionRounders[precision])
// generate integer part string
intStr := strconv.FormatInt(int64(intf), 10)
// add thousand separator if required
if len(thousandStr) > 0 {
for i := len(intStr); i > 3; {
i -= 3
intStr = intStr[:i] + thousandStr + intStr[i:]
}
}
// no fractional part, we can leave now
if precision == 0 {
return signStr + intStr
}
// generate fractional part
fracStr := strconv.Itoa(int(fracf * renderFloatPrecisionMultipliers[precision]))
// may need padding
if len(fracStr) < precision {
fracStr = "000000000000000"[:precision-len(fracStr)] + fracStr
}
return signStr + intStr + decimalStr + fracStr
}
// FormatInteger produces a formatted number as string.
// See FormatFloat.
func FormatInteger(format string, n int) string {
return FormatFloat(format, float64(n))
}

25
vendor/github.com/dustin/go-humanize/ordinals.go generated vendored Normal file
View File

@@ -0,0 +1,25 @@
package humanize
import "strconv"
// Ordinal gives you the input number in a rank/ordinal format.
//
// Ordinal(3) -> 3rd
func Ordinal(x int) string {
suffix := "th"
switch x % 10 {
case 1:
if x%100 != 11 {
suffix = "st"
}
case 2:
if x%100 != 12 {
suffix = "nd"
}
case 3:
if x%100 != 13 {
suffix = "rd"
}
}
return strconv.Itoa(x) + suffix
}

127
vendor/github.com/dustin/go-humanize/si.go generated vendored Normal file
View File

@@ -0,0 +1,127 @@
package humanize
import (
"errors"
"math"
"regexp"
"strconv"
)
var siPrefixTable = map[float64]string{
-30: "q", // quecto
-27: "r", // ronto
-24: "y", // yocto
-21: "z", // zepto
-18: "a", // atto
-15: "f", // femto
-12: "p", // pico
-9: "n", // nano
-6: "µ", // micro
-3: "m", // milli
0: "",
3: "k", // kilo
6: "M", // mega
9: "G", // giga
12: "T", // tera
15: "P", // peta
18: "E", // exa
21: "Z", // zetta
24: "Y", // yotta
27: "R", // ronna
30: "Q", // quetta
}
var revSIPrefixTable = revfmap(siPrefixTable)
// revfmap reverses the map and precomputes the power multiplier
func revfmap(in map[float64]string) map[string]float64 {
rv := map[string]float64{}
for k, v := range in {
rv[v] = math.Pow(10, k)
}
return rv
}
var riParseRegex *regexp.Regexp
func init() {
ri := `^([\-0-9.]+)\s?([`
for _, v := range siPrefixTable {
ri += v
}
ri += `]?)(.*)`
riParseRegex = regexp.MustCompile(ri)
}
// ComputeSI finds the most appropriate SI prefix for the given number
// and returns the prefix along with the value adjusted to be within
// that prefix.
//
// See also: SI, ParseSI.
//
// e.g. ComputeSI(2.2345e-12) -> (2.2345, "p")
func ComputeSI(input float64) (float64, string) {
if input == 0 {
return 0, ""
}
mag := math.Abs(input)
exponent := math.Floor(logn(mag, 10))
exponent = math.Floor(exponent/3) * 3
value := mag / math.Pow(10, exponent)
// Handle special case where value is exactly 1000.0
// Should return 1 M instead of 1000 k
if value == 1000.0 {
exponent += 3
value = mag / math.Pow(10, exponent)
}
value = math.Copysign(value, input)
prefix := siPrefixTable[exponent]
return value, prefix
}
// SI returns a string with default formatting.
//
// SI uses Ftoa to format float value, removing trailing zeros.
//
// See also: ComputeSI, ParseSI.
//
// e.g. SI(1000000, "B") -> 1 MB
// e.g. SI(2.2345e-12, "F") -> 2.2345 pF
func SI(input float64, unit string) string {
value, prefix := ComputeSI(input)
return Ftoa(value) + " " + prefix + unit
}
// SIWithDigits works like SI but limits the resulting string to the
// given number of decimal places.
//
// e.g. SIWithDigits(1000000, 0, "B") -> 1 MB
// e.g. SIWithDigits(2.2345e-12, 2, "F") -> 2.23 pF
func SIWithDigits(input float64, decimals int, unit string) string {
value, prefix := ComputeSI(input)
return FtoaWithDigits(value, decimals) + " " + prefix + unit
}
var errInvalid = errors.New("invalid input")
// ParseSI parses an SI string back into the number and unit.
//
// See also: SI, ComputeSI.
//
// e.g. ParseSI("2.2345 pF") -> (2.2345e-12, "F", nil)
func ParseSI(input string) (float64, string, error) {
found := riParseRegex.FindStringSubmatch(input)
if len(found) != 4 {
return 0, "", errInvalid
}
mag := revSIPrefixTable[found[2]]
unit := found[3]
base, err := strconv.ParseFloat(found[1], 64)
return base * mag, unit, err
}

117
vendor/github.com/dustin/go-humanize/times.go generated vendored Normal file
View File

@@ -0,0 +1,117 @@
package humanize
import (
"fmt"
"math"
"sort"
"time"
)
// Seconds-based time units
const (
Day = 24 * time.Hour
Week = 7 * Day
Month = 30 * Day
Year = 12 * Month
LongTime = 37 * Year
)
// Time formats a time into a relative string.
//
// Time(someT) -> "3 weeks ago"
func Time(then time.Time) string {
return RelTime(then, time.Now(), "ago", "from now")
}
// A RelTimeMagnitude struct contains a relative time point at which
// the relative format of time will switch to a new format string. A
// slice of these in ascending order by their "D" field is passed to
// CustomRelTime to format durations.
//
// The Format field is a string that may contain a "%s" which will be
// replaced with the appropriate signed label (e.g. "ago" or "from
// now") and a "%d" that will be replaced by the quantity.
//
// The DivBy field is the amount of time the time difference must be
// divided by in order to display correctly.
//
// e.g. if D is 2*time.Minute and you want to display "%d minutes %s"
// DivBy should be time.Minute so whatever the duration is will be
// expressed in minutes.
type RelTimeMagnitude struct {
D time.Duration
Format string
DivBy time.Duration
}
var defaultMagnitudes = []RelTimeMagnitude{
{time.Second, "now", time.Second},
{2 * time.Second, "1 second %s", 1},
{time.Minute, "%d seconds %s", time.Second},
{2 * time.Minute, "1 minute %s", 1},
{time.Hour, "%d minutes %s", time.Minute},
{2 * time.Hour, "1 hour %s", 1},
{Day, "%d hours %s", time.Hour},
{2 * Day, "1 day %s", 1},
{Week, "%d days %s", Day},
{2 * Week, "1 week %s", 1},
{Month, "%d weeks %s", Week},
{2 * Month, "1 month %s", 1},
{Year, "%d months %s", Month},
{18 * Month, "1 year %s", 1},
{2 * Year, "2 years %s", 1},
{LongTime, "%d years %s", Year},
{math.MaxInt64, "a long while %s", 1},
}
// RelTime formats a time into a relative string.
//
// It takes two times and two labels. In addition to the generic time
// delta string (e.g. 5 minutes), the labels are applied so that
// the label corresponding to the smaller time is applied.
//
// RelTime(timeInPast, timeInFuture, "earlier", "later") -> "3 weeks earlier"
func RelTime(a, b time.Time, albl, blbl string) string {
return CustomRelTime(a, b, albl, blbl, defaultMagnitudes)
}
// CustomRelTime formats a time into a relative string.
//
// It takes two times, two labels, and a table of relative time formats.
// In addition to the generic time delta string (e.g. 5 minutes), the
// labels are applied so that the label corresponding to the
// smaller time is applied.
func CustomRelTime(a, b time.Time, albl, blbl string, magnitudes []RelTimeMagnitude) string {
lbl := albl
diff := b.Sub(a)
if a.After(b) {
lbl = blbl
diff = a.Sub(b)
}
n := sort.Search(len(magnitudes), func(i int) bool {
return magnitudes[i].D > diff
})
if n >= len(magnitudes) {
n = len(magnitudes) - 1
}
mag := magnitudes[n]
args := []interface{}{}
escaped := false
for _, ch := range mag.Format {
if escaped {
switch ch {
case 's':
args = append(args, lbl)
case 'd':
args = append(args, diff/mag.DivBy)
}
escaped = false
} else {
escaped = ch == '%'
}
}
return fmt.Sprintf(mag.Format, args...)
}

73
vendor/github.com/golang-sql/civil/CONTRIBUTING.md generated vendored Normal file
View File

@@ -0,0 +1,73 @@
# Contributing
1. Sign one of the contributor license agreements below.
#### Running
Once you've done the necessary setup, you can run the integration tests by
running:
``` sh
$ go test -v github.com/golang-sql/civil
```
## Contributor License Agreements
Before we can accept your pull requests you'll need to sign a Contributor
License Agreement (CLA):
- **If you are an individual writing original source code** and **you own the
intellectual property**, then you'll need to sign an [individual CLA][indvcla].
- **If you work for a company that wants to allow you to contribute your
work**, then you'll need to sign a [corporate CLA][corpcla].
You can sign these electronically (just scroll to the bottom). After that,
we'll be able to accept your pull requests.
## Contributor Code of Conduct
As contributors and maintainers of this project,
and in the interest of fostering an open and welcoming community,
we pledge to respect all people who contribute through reporting issues,
posting feature requests, updating documentation,
submitting pull requests or patches, and other activities.
We are committed to making participation in this project
a harassment-free experience for everyone,
regardless of level of experience, gender, gender identity and expression,
sexual orientation, disability, personal appearance,
body size, race, ethnicity, age, religion, or nationality.
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery
* Personal attacks
* Trolling or insulting/derogatory comments
* Public or private harassment
* Publishing other's private information,
such as physical or electronic
addresses, without explicit permission
* Other unethical or unprofessional conduct.
Project maintainers have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions
that are not aligned to this Code of Conduct.
By adopting this Code of Conduct,
project maintainers commit themselves to fairly and consistently
applying these principles to every aspect of managing this project.
Project maintainers who do not follow or enforce the Code of Conduct
may be permanently removed from the project team.
This code of conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community.
Instances of abusive, harassing, or otherwise unacceptable behavior
may be reported by opening an issue
or contacting one or more of the project maintainers.
This Code of Conduct is adapted from the [Contributor Covenant](http://contributor-covenant.org), version 1.2.0,
available at [http://contributor-covenant.org/version/1/2/0/](http://contributor-covenant.org/version/1/2/0/)
[gcloudcli]: https://developers.google.com/cloud/sdk/gcloud/
[indvcla]: https://developers.google.com/open-source/cla/individual
[corpcla]: https://developers.google.com/open-source/cla/corporate

202
vendor/github.com/golang-sql/civil/LICENSE generated vendored Normal file
View File

@@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

vendor/github.com/golang-sql/civil/README.md generated vendored Normal file

@@ -0,0 +1,15 @@
# Civil Date and Time
[![GoDoc](https://godoc.org/github.com/golang-sql/civil?status.svg)](https://godoc.org/github.com/golang-sql/civil)
Civil provides Date, Time of Day, and DateTime data types.
While there are many uses, using specific types when working
with databases makes it conceptually easier to understand what value
is set in the remote system.
## Source
This civil package was extracted and forked from `cloud.google.com/go/civil`.
As such, the license and contributing requirements remain the same as
those of that module.

vendor/github.com/golang-sql/civil/civil.go generated vendored Normal file

@@ -0,0 +1,292 @@
// Copyright 2016 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package civil implements types for civil time, a time-zone-independent
// representation of time that follows the rules of the proleptic
// Gregorian calendar with exactly 24-hour days, 60-minute hours, and 60-second
// minutes.
//
// Because they lack location information, these types do not represent unique
// moments or intervals of time. Use time.Time for that purpose.
package civil
import (
"fmt"
"time"
)
// A Date represents a date (year, month, day).
//
// This type does not include location information, and therefore does not
// describe a unique 24-hour timespan.
type Date struct {
Year int // Year (e.g., 2014).
Month time.Month // Month of the year (January = 1, ...).
Day int // Day of the month, starting at 1.
}
// DateOf returns the Date in which a time occurs in that time's location.
func DateOf(t time.Time) Date {
var d Date
d.Year, d.Month, d.Day = t.Date()
return d
}
// ParseDate parses a string in RFC3339 full-date format and returns the date value it represents.
func ParseDate(s string) (Date, error) {
t, err := time.Parse("2006-01-02", s)
if err != nil {
return Date{}, err
}
return DateOf(t), nil
}
// String returns the date in RFC3339 full-date format.
func (d Date) String() string {
return fmt.Sprintf("%04d-%02d-%02d", d.Year, d.Month, d.Day)
}
// IsValid reports whether the date is valid.
func (d Date) IsValid() bool {
return DateOf(d.In(time.UTC)) == d
}
// In returns the time corresponding to time 00:00:00 of the date in the location.
//
// In is always consistent with time.Date, even when time.Date returns a time
// on a different day. For example, if loc is America/Indiana/Vincennes, then both
// time.Date(1955, time.May, 1, 0, 0, 0, 0, loc)
// and
// civil.Date{Year: 1955, Month: time.May, Day: 1}.In(loc)
// return 23:00:00 on April 30, 1955.
//
// In panics if loc is nil.
func (d Date) In(loc *time.Location) time.Time {
return time.Date(d.Year, d.Month, d.Day, 0, 0, 0, 0, loc)
}
// AddDays returns the date that is n days in the future.
// n can also be negative to go into the past.
func (d Date) AddDays(n int) Date {
return DateOf(d.In(time.UTC).AddDate(0, 0, n))
}
// DaysSince returns the signed number of days between the date and s, not including the end day.
// This is the inverse operation to AddDays.
func (d Date) DaysSince(s Date) (days int) {
// We convert to Unix time so we do not have to worry about leap seconds:
// Unix time increases by exactly 86400 seconds per day.
deltaUnix := d.In(time.UTC).Unix() - s.In(time.UTC).Unix()
return int(deltaUnix / 86400)
}
// Before reports whether d1 occurs before d2.
func (d1 Date) Before(d2 Date) bool {
if d1.Year != d2.Year {
return d1.Year < d2.Year
}
if d1.Month != d2.Month {
return d1.Month < d2.Month
}
return d1.Day < d2.Day
}
// After reports whether d1 occurs after d2.
func (d1 Date) After(d2 Date) bool {
return d2.Before(d1)
}
// IsZero reports whether date fields are set to their default value.
func (d Date) IsZero() bool {
return (d.Year == 0) && (int(d.Month) == 0) && (d.Day == 0)
}
// MarshalText implements the encoding.TextMarshaler interface.
// The output is the result of d.String().
func (d Date) MarshalText() ([]byte, error) {
return []byte(d.String()), nil
}
// UnmarshalText implements the encoding.TextUnmarshaler interface.
// The date is expected to be a string in a format accepted by ParseDate.
func (d *Date) UnmarshalText(data []byte) error {
var err error
*d, err = ParseDate(string(data))
return err
}
// A Time represents a time with nanosecond precision.
//
// This type does not include location information, and therefore does not
// describe a unique moment in time.
//
// This type exists to represent the TIME type in storage-based APIs like BigQuery.
// Most operations on Times are unlikely to be meaningful. Prefer the DateTime type.
type Time struct {
Hour int // The hour of the day in 24-hour format; range [0-23]
Minute int // The minute of the hour; range [0-59]
Second int // The second of the minute; range [0-59]
Nanosecond int // The nanosecond of the second; range [0-999999999]
}
// TimeOf returns the Time representing the time of day in which a time occurs
// in that time's location. It ignores the date.
func TimeOf(t time.Time) Time {
var tm Time
tm.Hour, tm.Minute, tm.Second = t.Clock()
tm.Nanosecond = t.Nanosecond()
return tm
}
// ParseTime parses a string and returns the time value it represents.
// ParseTime accepts an extended form of the RFC3339 partial-time format. After
// the HH:MM:SS part of the string, an optional fractional part may appear,
// consisting of a decimal point followed by one to nine decimal digits.
// (RFC3339 admits only one digit after the decimal point).
func ParseTime(s string) (Time, error) {
t, err := time.Parse("15:04:05.999999999", s)
if err != nil {
return Time{}, err
}
return TimeOf(t), nil
}
// String returns the time in the format described in ParseTime. If Nanosecond
// is zero, no fractional part will be generated. Otherwise, the result will
// end with a fractional part consisting of a decimal point and nine digits.
func (t Time) String() string {
s := fmt.Sprintf("%02d:%02d:%02d", t.Hour, t.Minute, t.Second)
if t.Nanosecond == 0 {
return s
}
return s + fmt.Sprintf(".%09d", t.Nanosecond)
}
// IsValid reports whether the time is valid.
func (t Time) IsValid() bool {
// Construct a non-zero time.
tm := time.Date(2, 2, 2, t.Hour, t.Minute, t.Second, t.Nanosecond, time.UTC)
return TimeOf(tm) == t
}
// IsZero reports whether time fields are set to their default value.
func (t Time) IsZero() bool {
return (t.Hour == 0) && (t.Minute == 0) && (t.Second == 0) && (t.Nanosecond == 0)
}
// MarshalText implements the encoding.TextMarshaler interface.
// The output is the result of t.String().
func (t Time) MarshalText() ([]byte, error) {
return []byte(t.String()), nil
}
// UnmarshalText implements the encoding.TextUnmarshaler interface.
// The time is expected to be a string in a format accepted by ParseTime.
func (t *Time) UnmarshalText(data []byte) error {
var err error
*t, err = ParseTime(string(data))
return err
}
// A DateTime represents a date and time.
//
// This type does not include location information, and therefore does not
// describe a unique moment in time.
type DateTime struct {
Date Date
Time Time
}
// Note: We deliberately do not embed Date into DateTime, to avoid promoting AddDays and DaysSince.
// DateTimeOf returns the DateTime in which a time occurs in that time's location.
func DateTimeOf(t time.Time) DateTime {
return DateTime{
Date: DateOf(t),
Time: TimeOf(t),
}
}
// ParseDateTime parses a string and returns the DateTime it represents.
// ParseDateTime accepts a variant of the RFC3339 date-time format that omits
// the time offset but includes an optional fractional time, as described in
// ParseTime. Informally, the accepted format is
// YYYY-MM-DDTHH:MM:SS[.FFFFFFFFF]
// where the 'T' may be a lower-case 't'.
func ParseDateTime(s string) (DateTime, error) {
t, err := time.Parse("2006-01-02T15:04:05.999999999", s)
if err != nil {
t, err = time.Parse("2006-01-02t15:04:05.999999999", s)
if err != nil {
return DateTime{}, err
}
}
return DateTimeOf(t), nil
}
// String returns the datetime in the format described in ParseDateTime.
func (dt DateTime) String() string {
return dt.Date.String() + "T" + dt.Time.String()
}
// IsValid reports whether the datetime is valid.
func (dt DateTime) IsValid() bool {
return dt.Date.IsValid() && dt.Time.IsValid()
}
// In returns the time corresponding to the DateTime in the given location.
//
// If the time is missing or ambiguous at the location, In returns the same
// result as time.Date. For example, if loc is America/Indiana/Vincennes, then
// both
// time.Date(1955, time.May, 1, 0, 30, 0, 0, loc)
// and
// civil.DateTime{
// Date: civil.Date{Year: 1955, Month: time.May, Day: 1},
// Time: civil.Time{Minute: 30}}.In(loc)
// return 23:30:00 on April 30, 1955.
//
// In panics if loc is nil.
func (dt DateTime) In(loc *time.Location) time.Time {
return time.Date(dt.Date.Year, dt.Date.Month, dt.Date.Day, dt.Time.Hour, dt.Time.Minute, dt.Time.Second, dt.Time.Nanosecond, loc)
}
// Before reports whether dt1 occurs before dt2.
func (dt1 DateTime) Before(dt2 DateTime) bool {
return dt1.In(time.UTC).Before(dt2.In(time.UTC))
}
// After reports whether dt1 occurs after dt2.
func (dt1 DateTime) After(dt2 DateTime) bool {
return dt2.Before(dt1)
}
// IsZero reports whether datetime fields are set to their default value.
func (dt DateTime) IsZero() bool {
return dt.Date.IsZero() && dt.Time.IsZero()
}
// MarshalText implements the encoding.TextMarshaler interface.
// The output is the result of dt.String().
func (dt DateTime) MarshalText() ([]byte, error) {
return []byte(dt.String()), nil
}
// UnmarshalText implements the encoding.TextUnmarshaler interface.
// The datetime is expected to be a string in a format accepted by ParseDateTime.
func (dt *DateTime) UnmarshalText(data []byte) error {
var err error
*dt, err = ParseDateTime(string(data))
return err
}

vendor/github.com/golang-sql/sqlexp/LICENSE generated vendored Normal file

@@ -0,0 +1,27 @@
Copyright (c) 2017 The Go Authors. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

vendor/github.com/golang-sql/sqlexp/PATENTS generated vendored Normal file

@@ -0,0 +1,22 @@
Additional IP Rights Grant (Patents)
"This implementation" means the copyrightable works distributed by
Google as part of the Go project.
Google hereby grants to You a perpetual, worldwide, non-exclusive,
no-charge, royalty-free, irrevocable (except as stated in this section)
patent license to make, have made, use, offer to sell, sell, import,
transfer and otherwise run, modify and propagate the contents of this
implementation of Go, where such license applies only to those patent
claims, both currently owned or controlled by Google and acquired in
the future, licensable by Google that are necessarily infringed by this
implementation of Go. This grant does not include claims that would be
infringed only as a consequence of further modification of this
implementation. If you or your agent or exclusive licensee institute or
order or agree to the institution of patent litigation against any
entity (including a cross-claim or counterclaim in a lawsuit) alleging
that this implementation of Go or any code incorporated within this
implementation of Go constitutes direct or contributory patent
infringement, or inducement of patent infringement, then any patent
rights granted to you under this License for this implementation of Go
shall terminate as of the date such litigation is filed.

vendor/github.com/golang-sql/sqlexp/README.md generated vendored Normal file

@@ -0,0 +1,5 @@
# golang-sql exp
https://godoc.org/github.com/golang-sql/sqlexp
All contributions must have a valid golang CLA.

vendor/github.com/golang-sql/sqlexp/doc.go generated vendored Normal file

@@ -0,0 +1,8 @@
// Copyright 2017 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package sqlexp provides interfaces and functions that may be adopted into
// the database/sql package in the future. All features may change or be removed
// in the future.
package sqlexp // imports github.com/golang-sql/sqlexp

vendor/github.com/golang-sql/sqlexp/messages.go generated vendored Normal file

@@ -0,0 +1,80 @@
package sqlexp
import (
"context"
"fmt"
)
// RawMessage is returned from ReturnMessage.Message.
type RawMessage interface{}
// ReturnMessage may be passed into a Query argument.
//
// Drivers must implement driver.NamedValueChecker,
// call ReturnMessageInit on it, save it internally,
// and return driver.ErrRemoveArgument to prevent
// this from appearing in the query arguments.
//
// Queries that receive this message should also not return
// SQL errors from the Query method, but should instead wait
// and return them in a Message.
type ReturnMessage struct {
queue chan RawMessage
}
// Message is called by clients after Query to dequeue messages.
func (m *ReturnMessage) Message(ctx context.Context) RawMessage {
select {
case <-ctx.Done():
return MsgNextResultSet{}
case raw := <-m.queue:
return raw
}
}
// ReturnMessageEnqueue is called by the driver to enqueue a message.
// Drivers should not call this until after Query returns.
func ReturnMessageEnqueue(ctx context.Context, m *ReturnMessage, raw RawMessage) error {
select {
case <-ctx.Done():
return ctx.Err()
case m.queue <- raw:
return nil
}
}
// ReturnMessageInit is called by database/sql to set up the ReturnMessage internals.
func ReturnMessageInit(m *ReturnMessage) {
m.queue = make(chan RawMessage, 15)
}
type (
// MsgNextResultSet must be checked for. When received, NextResultSet
// should be called and if false the message loop should be exited.
MsgNextResultSet struct{}
// MsgNext indicates a result set is ready to be scanned.
// This message will often be followed with:
//
// for rows.Next() {
// rows.Scan(&v)
// }
MsgNext struct{}
// MsgRowsAffected returns the number of rows affected.
// Not all operations that affect rows return results, thus this message
// may be received multiple times.
MsgRowsAffected struct{ Count int64 }
// MsgLastInsertID returns the value of the last inserted row. For many
// database systems and tables this will be an int64. Some databases
// may return a string or GUID equivalent.
MsgLastInsertID struct{ Value interface{} }
// MsgNotice is raised from the SQL text and is only informational.
MsgNotice struct{ Message fmt.Stringer }
// MsgError returns SQL errors from the database system (not transport
// or other system level errors).
MsgError struct{ Error error }
)

vendor/github.com/golang-sql/sqlexp/mssql.go generated vendored Normal file

@@ -0,0 +1,73 @@
// Copyright 2017 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package sqlexp
import (
"context"
"database/sql/driver"
"fmt"
"strings"
)
type mssql struct{}
var (
_ DriverNamer = mssql{}
_ DriverQuoter = mssql{}
_ DriverSavepointer = mssql{}
)
func (mssql) Open(string) (driver.Conn, error) {
panic("not implemented")
}
func (mssql) Namer(ctx context.Context) (Namer, error) {
return sqlServerNamer{}, nil
}
func (mssql) Quoter(ctx context.Context) (Quoter, error) {
return sqlServerQuoter{}, nil
}
func (mssql) Savepointer() (Savepointer, error) {
return sqlServerSavepointer{}, nil
}
type sqlServerNamer struct{}
func (sqlServerNamer) Name() string {
return "sqlserver"
}
func (sqlServerNamer) Dialect() string {
return DialectTSQL
}
type sqlServerQuoter struct{}
func (sqlServerQuoter) ID(name string) string {
return "[" + strings.Replace(name, "]", "]]", -1) + "]"
}
func (sqlServerQuoter) Value(v interface{}) string {
switch v := v.(type) {
default:
panic("unsupported value")
case string:
return "'" + strings.Replace(v, "'", "''", -1) + "'"
}
}
type sqlServerSavepointer struct{}
func (sqlServerSavepointer) Release(name string) string {
return ""
}
func (sqlServerSavepointer) Create(name string) string {
return fmt.Sprintf("save tran %s;", name)
}
func (sqlServerSavepointer) Rollback(name string) string {
return fmt.Sprintf("rollback tran %s;", name)
}

vendor/github.com/golang-sql/sqlexp/namer.go generated vendored Normal file

@@ -0,0 +1,59 @@
// Copyright 2017 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package sqlexp
import (
"context"
"database/sql/driver"
"errors"
"reflect"
)
const (
DialectPostgres = "postgres"
DialectTSQL = "tsql"
DialectMySQL = "mysql"
DialectSQLite = "sqlite"
DialectOracle = "oracle"
)
// Namer returns the name of the database and the SQL dialect it
// uses.
type Namer interface {
// Name of the database management system.
//
// Examples:
// "posgresql-9.6"
// "sqlserver-10.54.32"
// "cockroachdb-1.0"
Name() string
// Dialect of SQL used in the database.
Dialect() string
}
// DriverNamer may be implemented on the driver.Driver interface.
// It may need to request information from the server to return
// the correct information.
type DriverNamer interface {
Namer(ctx context.Context) (Namer, error)
}
// NamerFromDriver returns the Namer from the driver if
// it is implemented.
func NamerFromDriver(d driver.Driver, ctx context.Context) (Namer, error) {
if q, is := d.(DriverNamer); is {
return q.Namer(ctx)
}
dv := reflect.ValueOf(d)
d, found := internalDrivers[dv.Type().String()]
if found {
if q, is := d.(DriverNamer); is {
return q.Namer(ctx)
}
}
return nil, errors.New("namer not found")
}

vendor/github.com/golang-sql/sqlexp/pg.go generated vendored Normal file

@@ -0,0 +1,67 @@
// Copyright 2017 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package sqlexp
import (
"context"
"database/sql/driver"
"fmt"
)
type postgresql struct{}
var (
_ DriverNamer = postgresql{}
_ DriverQuoter = postgresql{}
_ DriverSavepointer = postgresql{}
)
func (postgresql) Open(string) (driver.Conn, error) {
panic("not implemented")
}
func (postgresql) Namer(ctx context.Context) (Namer, error) {
return pgNamer{}, nil
}
func (postgresql) Quoter(ctx context.Context) (Quoter, error) {
panic("not implemented")
}
func (postgresql) Savepointer() (Savepointer, error) {
return pgSavepointer{}, nil
}
type pgNamer struct{}
func (pgNamer) Name() string {
return "postgresql"
}
func (pgNamer) Dialect() string {
return DialectPostgres
}
type pgQuoter struct{}
func (pgQuoter) ID(name string) string {
return ""
}
func (pgQuoter) Value(v interface{}) string {
return ""
}
type pgSavepointer struct{}
func (pgSavepointer) Release(name string) string {
return fmt.Sprintf("release savepoint %s;", name)
}
func (pgSavepointer) Create(name string) string {
return fmt.Sprintf("savepoint %s;", name)
}
func (pgSavepointer) Rollback(name string) string {
return fmt.Sprintf("rollback to savepoint %s;", name)
}

vendor/github.com/golang-sql/sqlexp/querier.go generated vendored Normal file

@@ -0,0 +1,22 @@
// Copyright 2017 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package sqlexp
import (
"context"
"database/sql"
)
// Querier is the common interface to execute queries on a DB, Tx, or Conn.
type Querier interface {
ExecContext(ctx context.Context, query string, args ...interface{}) (sql.Result, error)
QueryContext(ctx context.Context, query string, args ...interface{}) (*sql.Rows, error)
QueryRowContext(ctx context.Context, query string, args ...interface{}) *sql.Row
}
var (
_ Querier = &sql.DB{}
_ Querier = &sql.Tx{}
)

vendor/github.com/golang-sql/sqlexp/quoter.go generated vendored Normal file

@@ -0,0 +1,57 @@
// Copyright 2017 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package sqlexp
import (
"context"
"database/sql/driver"
"errors"
"reflect"
)
// BUG(kardianos): Both the Quoter and Namer may need to access the server.
// Quoter returns safe and valid SQL strings to use when building a SQL text.
type Quoter interface {
// ID quotes identifiers such as schema, table, or column names.
// ID does not operate on multipart identifiers such as "public.Table",
// it only operates on single identifiers such as "public" and "Table".
ID(name string) string
// Value quotes database values such as string or []byte types as strings
// that are suitable and safe to embed in SQL text. The returned value
// of a string will include all surrounding quotes.
//
// If a value type is not supported it must panic.
Value(v interface{}) string
}
// DriverQuoter returns a Quoter interface and is suitable for extending
// the driver.Driver type.
//
// The driver may need to hit the database to determine how it is configured to
// ensure the correct escaping rules are used.
type DriverQuoter interface {
Quoter(ctx context.Context) (Quoter, error)
}
// QuoterFromDriver takes a database driver, often obtained through a
// sql.DB.Driver call or from using the driver directly, and returns the
// quoter interface.
//
// Currently MssqlDriver is hard-coded to also return a valid Quoter.
func QuoterFromDriver(d driver.Driver, ctx context.Context) (Quoter, error) {
if q, is := d.(DriverQuoter); is {
return q.Quoter(ctx)
}
dv := reflect.ValueOf(d)
d, found := internalDrivers[dv.Type().String()]
if found {
if q, is := d.(DriverQuoter); is {
return q.Quoter(ctx)
}
}
return nil, errors.New("quoter interface not found")
}

vendor/github.com/golang-sql/sqlexp/registry.go generated vendored Normal file

@@ -0,0 +1,15 @@
// Copyright 2017 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package sqlexp
import (
"database/sql/driver"
)
var internalDrivers = map[string]driver.Driver{
"*mssql.MssqlDriver": mssql{},
"*pq.Driver": postgresql{},
"*stdlib.Driver": postgresql{},
}

vendor/github.com/golang-sql/sqlexp/savepoint.go generated vendored Normal file

@@ -0,0 +1,37 @@
// Copyright 2017 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package sqlexp
import (
"database/sql/driver"
"errors"
"reflect"
)
type Savepointer interface {
Release(name string) string
Create(name string) string
Rollback(name string) string
}
type DriverSavepointer interface {
Savepointer() (Savepointer, error)
}
// SavepointFromDriver returns the Savepointer from the driver if it is implemented.
func SavepointFromDriver(d driver.Driver) (Savepointer, error) {
if q, is := d.(DriverSavepointer); is {
return q.Savepointer()
}
dv := reflect.ValueOf(d)
d, found := internalDrivers[dv.Type().String()]
if found {
if q, is := d.(DriverSavepointer); is {
return q.Savepointer()
}
}
return nil, errors.New("savepointer interface not found")
}

vendor/github.com/mattn/go-isatty/LICENSE generated vendored Normal file

@@ -0,0 +1,9 @@
Copyright (c) Yasuhiro MATSUMOTO <mattn.jp@gmail.com>
MIT License (Expat)
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

vendor/github.com/mattn/go-isatty/README.md generated vendored Normal file

@@ -0,0 +1,50 @@
# go-isatty
[![Godoc Reference](https://godoc.org/github.com/mattn/go-isatty?status.svg)](http://godoc.org/github.com/mattn/go-isatty)
[![Codecov](https://codecov.io/gh/mattn/go-isatty/branch/master/graph/badge.svg)](https://codecov.io/gh/mattn/go-isatty)
[![Coverage Status](https://coveralls.io/repos/github/mattn/go-isatty/badge.svg?branch=master)](https://coveralls.io/github/mattn/go-isatty?branch=master)
[![Go Report Card](https://goreportcard.com/badge/mattn/go-isatty)](https://goreportcard.com/report/mattn/go-isatty)
isatty for golang
## Usage
```go
package main
import (
"fmt"
"github.com/mattn/go-isatty"
"os"
)
func main() {
if isatty.IsTerminal(os.Stdout.Fd()) {
fmt.Println("Is Terminal")
} else if isatty.IsCygwinTerminal(os.Stdout.Fd()) {
fmt.Println("Is Cygwin/MSYS2 Terminal")
} else {
fmt.Println("Is Not Terminal")
}
}
```
## Installation
```
$ go get github.com/mattn/go-isatty
```
## License
MIT
## Author
Yasuhiro Matsumoto (a.k.a mattn)
## Thanks
* k-takata: base idea for IsCygwinTerminal
https://github.com/k-takata/go-iscygpty

Some files were not shown because too many files have changed in this diff.