4 Commits

Author SHA1 Message Date
4cdccde9cf docs: update CLAUDE.md with additional utilities and supported formats
Some checks failed
2026-02-07 09:59:35 +02:00
aba22cb574 feat(ui): add relationship management features in schema editor
Some checks failed
- Implement functionality to create, update, delete, and view relationships between tables.
- Introduce new UI screens for managing relationships, including forms for adding and editing relationships.
- Enhance table editor with navigation to relationship management.
- Ensure relationships are displayed in a structured table format for better usability.
2026-02-07 09:49:24 +02:00
d0630b4899 feat: Added Sqlite reader
Some checks failed
2026-02-07 09:30:45 +02:00
c9eed9b794 feat(sqlite): add SQLite writer for converting PostgreSQL schemas
All checks were successful
- Implement SQLite DDL writer to convert PostgreSQL schemas to SQLite-compatible SQL statements.
- Include automatic schema flattening, type mapping, auto-increment detection, and function translation.
- Add templates for creating tables, indexes, unique constraints, check constraints, and foreign keys.
- Implement tests for writer functionality and data type mapping.
2026-02-07 09:11:02 +02:00
1399 changed files with 6353341 additions and 386 deletions


@@ -4,7 +4,11 @@ This file provides guidance to Claude Code (claude.ai/code) when working with co
## Project Overview
RelSpec is a database relations specification tool that provides bidirectional conversion between various database schema formats. It reads database schemas from multiple sources (live databases, DBML, DCTX, DrawDB, etc.) and writes them to various formats (GORM, Bun, JSON, YAML, SQL, etc.).
RelSpec is a database relations specification tool that provides bidirectional conversion between various database schema formats. It reads database schemas from multiple sources and writes them to various formats.
**Supported Readers:** Bun, DBML, DCTX, DrawDB, Drizzle, GORM, GraphQL, JSON, PostgreSQL, Prisma, SQL Directory, SQLite, TypeORM, YAML
**Supported Writers:** Bun, DBML, DCTX, DrawDB, Drizzle, GORM, GraphQL, JSON, PostgreSQL, Prisma, SQL Exec, SQLite, Template, TypeORM, YAML
## Build Commands
@@ -50,8 +54,9 @@ Database
```
**Important patterns:**
- Each format (dbml, dctx, drawdb, etc.) has its own `pkg/readers/<format>/` and `pkg/writers/<format>/` subdirectories
- Use `ReaderOptions` and `WriterOptions` structs for configuration (file paths, connection strings, metadata)
- Each format has its own `pkg/readers/<format>/` and `pkg/writers/<format>/` subdirectories
- Use `ReaderOptions` and `WriterOptions` structs for configuration (file paths, connection strings, metadata, flatten option)
- FlattenSchema option collapses multi-schema databases into a single schema for simplified output
- Schema reading typically returns the first schema when reading from Database
- Table reading typically returns the first table when reading from Schema
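The options pattern and FlattenSchema behavior described above can be sketched as follows. This is a minimal illustration with stand-in types: `Table`, `Schema`, and `flattenSchemas` here are simplified assumptions, not the real `pkg/models` definitions.

```go
package main

import "fmt"

// Table and Schema are simplified stand-ins for the pkg/models types,
// used only to illustrate what the FlattenSchema option does.
type Table struct{ Name string }

type Schema struct {
	Name   string
	Tables []Table
}

// flattenSchemas collapses a multi-schema database into a single schema,
// mirroring the FlattenSchema behavior described above.
func flattenSchemas(schemas []Schema) Schema {
	flat := Schema{Name: "main"}
	for _, s := range schemas {
		flat.Tables = append(flat.Tables, s.Tables...)
	}
	return flat
}

func main() {
	db := []Schema{
		{Name: "public", Tables: []Table{{Name: "users"}, {Name: "orders"}}},
		{Name: "audit", Tables: []Table{{Name: "events"}}},
	}
	flat := flattenSchemas(db)
	fmt.Println(flat.Name, len(flat.Tables)) // main 3
}
```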
@@ -65,8 +70,22 @@ Contains PostgreSQL-specific helpers:
- `keywords.go`: SQL reserved keywords validation
- `datatypes.go`: PostgreSQL data type mappings and conversions
### Additional Utilities
- **pkg/diff/**: Schema difference detection and comparison
- **pkg/inspector/**: Schema inspection and analysis tools
- **pkg/merge/**: Schema merging capabilities
- **pkg/reflectutil/**: Reflection utilities for dynamic type handling
- **pkg/ui/**: Terminal UI components for interactive schema editing
- **pkg/commontypes/**: Shared type definitions
## Development Patterns
- Each reader/writer is self-contained in its own subdirectory
- Options structs control behavior (file paths, connection strings, flatten schema, etc.)
- Live database connections supported for PostgreSQL and SQLite
- Template writer allows custom output formats
## Testing
- Test files should be in the same package as the code they test
@@ -77,5 +96,6 @@ Contains PostgreSQL-specific helpers:
## Module Information
- Module path: `git.warky.dev/wdevs/relspecgo`
- Go version: 1.25.5
- Uses Cobra for CLI, Viper for configuration
- Go version: 1.24.0
- Uses Cobra for CLI
- Key dependencies: pgx/v5 (PostgreSQL), modernc.org/sqlite (SQLite), tview (TUI), Bun ORM

GODOC.md Normal file

@@ -0,0 +1,196 @@
# RelSpec API Documentation (godoc)
This document explains how to access and use the RelSpec API documentation.
## Viewing Documentation Locally
### Using `go doc` Command Line
View package documentation:
```bash
# Main package overview
go doc
# Specific package
go doc ./pkg/models
go doc ./pkg/readers
go doc ./pkg/writers
go doc ./pkg/ui
# Specific type or function
go doc ./pkg/models Database
go doc ./pkg/readers Reader
go doc ./pkg/writers Writer
```
View all documentation for a package:
```bash
go doc -all ./pkg/models
go doc -all ./pkg/readers
go doc -all ./pkg/writers
```
### Using `godoc` Web Server
**Quick Start (Recommended):**
```bash
make godoc
```
This will automatically install godoc if needed and start the server on port 6060.
**Manual Installation:**
```bash
go install golang.org/x/tools/cmd/godoc@latest
godoc -http=:6060
```
Then open your browser to:
```
http://localhost:6060/pkg/git.warky.dev/wdevs/relspecgo/
```
## Package Documentation
### Core Packages
- **`pkg/models`** - Core data structures (Database, Schema, Table, Column, etc.)
- **`pkg/readers`** - Input format readers (dbml, pgsql, gorm, prisma, etc.)
- **`pkg/writers`** - Output format writers (dbml, pgsql, gorm, prisma, etc.)
### Utility Packages
- **`pkg/diff`** - Schema comparison and difference detection
- **`pkg/merge`** - Schema merging utilities
- **`pkg/transform`** - Validation and normalization
- **`pkg/ui`** - Interactive terminal UI for schema editing
### Support Packages
- **`pkg/pgsql`** - PostgreSQL-specific utilities
- **`pkg/inspector`** - Database introspection capabilities
- **`pkg/reflectutil`** - Reflection utilities for Go code analysis
- **`pkg/commontypes`** - Shared type definitions
### Reader Implementations
Each reader is in its own subpackage under `pkg/readers/`:
- `pkg/readers/dbml` - DBML format reader
- `pkg/readers/dctx` - DCTX format reader
- `pkg/readers/drawdb` - DrawDB JSON reader
- `pkg/readers/graphql` - GraphQL schema reader
- `pkg/readers/json` - JSON schema reader
- `pkg/readers/yaml` - YAML schema reader
- `pkg/readers/gorm` - Go GORM models reader
- `pkg/readers/bun` - Go Bun models reader
- `pkg/readers/drizzle` - TypeScript Drizzle ORM reader
- `pkg/readers/prisma` - Prisma schema reader
- `pkg/readers/typeorm` - TypeScript TypeORM reader
- `pkg/readers/pgsql` - PostgreSQL database reader
- `pkg/readers/sqlite` - SQLite database reader
### Writer Implementations
Each writer is in its own subpackage under `pkg/writers/`:
- `pkg/writers/dbml` - DBML format writer
- `pkg/writers/dctx` - DCTX format writer
- `pkg/writers/drawdb` - DrawDB JSON writer
- `pkg/writers/graphql` - GraphQL schema writer
- `pkg/writers/json` - JSON schema writer
- `pkg/writers/yaml` - YAML schema writer
- `pkg/writers/gorm` - Go GORM models writer
- `pkg/writers/bun` - Go Bun models writer
- `pkg/writers/drizzle` - TypeScript Drizzle ORM writer
- `pkg/writers/prisma` - Prisma schema writer
- `pkg/writers/typeorm` - TypeScript TypeORM writer
- `pkg/writers/pgsql` - PostgreSQL SQL writer
- `pkg/writers/sqlite` - SQLite SQL writer
## Usage Examples
### Reading a Schema
```go
import (
"git.warky.dev/wdevs/relspecgo/pkg/readers"
"git.warky.dev/wdevs/relspecgo/pkg/readers/dbml"
)
reader := dbml.NewReader(&readers.ReaderOptions{
	FilePath: "schema.dbml",
})
db, err := reader.ReadDatabase()
```
### Writing a Schema
```go
import (
"git.warky.dev/wdevs/relspecgo/pkg/writers"
"git.warky.dev/wdevs/relspecgo/pkg/writers/gorm"
)
writer := gorm.NewWriter(&writers.WriterOptions{
	OutputPath:  "./models",
	PackageName: "models",
})
err := writer.WriteDatabase(db)
```
### Comparing Schemas
```go
import "git.warky.dev/wdevs/relspecgo/pkg/diff"
result := diff.CompareDatabases(sourceDB, targetDB)
err := diff.FormatDiff(result, diff.OutputFormatText, os.Stdout)
```
### Merging Schemas
```go
import "git.warky.dev/wdevs/relspecgo/pkg/merge"
result := merge.MergeDatabases(targetDB, sourceDB, nil)
fmt.Printf("Added %d tables\n", result.TablesAdded)
```
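To make the diff and merge calls above more concrete, here is a minimal sketch of the core question a schema diff answers. `Database` and `missingTables` are hypothetical stand-ins for illustration, not the actual `pkg/diff` API.

```go
package main

import "fmt"

// Database is a hypothetical stand-in for models.Database; the real
// comparison logic lives in pkg/diff.
type Database struct{ Tables map[string]bool }

// missingTables returns the names of tables present in source but absent
// from target: the core question a schema diff answers.
func missingTables(source, target Database) []string {
	var missing []string
	for name := range source.Tables {
		if !target.Tables[name] {
			missing = append(missing, name)
		}
	}
	return missing
}

func main() {
	src := Database{Tables: map[string]bool{"users": true, "events": true}}
	dst := Database{Tables: map[string]bool{"users": true}}
	fmt.Println(missingTables(src, dst)) // [events]
}
```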
## Documentation Standards
All public APIs follow Go documentation conventions:
- Package documentation in `doc.go` files
- Type, function, and method comments start with the item name
- Examples where applicable
- Clear description of parameters and return values
- Usage notes and caveats where relevant
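As a concrete illustration of these conventions, here is a generic documented function (not from the RelSpec codebase): the comment starts with the item name and states behavior and edge cases.

```go
package main

import "fmt"

// Greet returns a greeting for name. If name is empty it falls back to a
// generic greeting, so the result is never empty.
func Greet(name string) string {
	if name == "" {
		return "Hello, world"
	}
	return "Hello, " + name
}

func main() {
	fmt.Println(Greet("RelSpec")) // Hello, RelSpec
}
```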
## Generating Documentation
To regenerate documentation after code changes:
```bash
# Verify documentation builds correctly
go doc -all ./pkg/... > /dev/null
# Run static analysis
go vet ./...
```
## Contributing Documentation
When adding new packages or exported items:
1. Add package documentation in a `doc.go` file
2. Document all exported types, functions, and methods
3. Include usage examples for complex APIs
4. Follow Go documentation style guide
5. Verify with `go doc` before committing
## References
- [Go Documentation Guide](https://go.dev/doc/comment)
- [Effective Go - Commentary](https://go.dev/doc/effective_go#commentary)
- [godoc Documentation](https://pkg.go.dev/golang.org/x/tools/cmd/godoc)


@@ -1,4 +1,4 @@
.PHONY: all build test test-unit test-integration lint coverage clean install help docker-up docker-down docker-test docker-test-integration start stop release release-version
.PHONY: all build test test-unit test-integration lint coverage clean install help docker-up docker-down docker-test docker-test-integration start stop release release-version godoc
# Binary name
BINARY_NAME=relspec
@@ -101,6 +101,29 @@ deps: ## Download dependencies
$(GOMOD) tidy
@echo "Dependencies updated"
godoc: ## Start godoc server on http://localhost:6060
@echo "Starting godoc server..."
@GOBIN=$$(go env GOPATH)/bin; \
if command -v godoc > /dev/null 2>&1; then \
echo "godoc server running on http://localhost:6060"; \
echo "View documentation at: http://localhost:6060/pkg/git.warky.dev/wdevs/relspecgo/"; \
echo "Press Ctrl+C to stop"; \
godoc -http=:6060; \
elif [ -f "$$GOBIN/godoc" ]; then \
echo "godoc server running on http://localhost:6060"; \
echo "View documentation at: http://localhost:6060/pkg/git.warky.dev/wdevs/relspecgo/"; \
echo "Press Ctrl+C to stop"; \
$$GOBIN/godoc -http=:6060; \
else \
echo "godoc not installed. Installing..."; \
go install golang.org/x/tools/cmd/godoc@latest; \
echo "godoc installed. Starting server..."; \
echo "godoc server running on http://localhost:6060"; \
echo "View documentation at: http://localhost:6060/pkg/git.warky.dev/wdevs/relspecgo/"; \
echo "Press Ctrl+C to stop"; \
$$GOBIN/godoc -http=:6060; \
fi
start: docker-up ## Alias for docker-up (start PostgreSQL test database)
stop: docker-down ## Alias for docker-down (stop PostgreSQL test database)


@@ -37,6 +37,7 @@ RelSpec can read database schemas from multiple sources:
#### Database Inspection
- [PostgreSQL](pkg/readers/pgsql/README.md) - Direct PostgreSQL database introspection
- [SQLite](pkg/readers/sqlite/README.md) - Direct SQLite database introspection
#### Schema Formats
- [DBML](pkg/readers/dbml/README.md) - Database Markup Language (dbdiagram.io)
@@ -59,6 +60,7 @@ RelSpec can write database schemas to multiple formats:
#### Database DDL
- [PostgreSQL](pkg/writers/pgsql/README.md) - PostgreSQL DDL (CREATE TABLE, etc.)
- [SQLite](pkg/writers/sqlite/README.md) - SQLite DDL with automatic schema flattening
#### Schema Formats
- [DBML](pkg/writers/dbml/README.md) - Database Markup Language
@@ -185,6 +187,10 @@ relspec convert --from pgsql --from-conn "postgres://..." \
# Convert DBML to PostgreSQL SQL
relspec convert --from dbml --from-path schema.dbml \
--to pgsql --to-path schema.sql
# Convert PostgreSQL database to SQLite (with automatic schema flattening)
relspec convert --from pgsql --from-conn "postgres://..." \
--to sqlite --to-path sqlite_schema.sql
```
### Schema Validation

TODO.md

@@ -1,43 +1,44 @@
# RelSpec - TODO List
## Input Readers / Writers
- [✔️] **Database Inspector**
- [✔️] PostgreSQL driver
- [✔️] PostgreSQL driver (reader + writer)
- [ ] MySQL driver
- [ ] SQLite driver
- [✔️] SQLite driver (reader + writer with automatic schema flattening)
- [ ] MSSQL driver
- [✔️] Foreign key detection
- [✔️] Index extraction
- [*] .sql file generation with sequence and priority
- [✔️] .sql file generation (PostgreSQL, SQLite)
- [✔️] .dbml: Database Markup Language (DBML) for textual schema representation.
- [✔️] Prisma schema support (PSL format) .prisma
- [✔️] Drizzle ORM support .ts (TypeScript / JavaScript) (Mr. Edd wanted to move from Prisma to Drizzle. If you find bugs, you are welcome to open pull requests or issues)
- [☠️] Entity Framework (.NET) model .edmx (Fuck no, EDMX files were bloated, verbose XML nightmares—hard to merge, error-prone, and a pain in teams. Microsoft wisely ditched them in EF Core for code-first. Classic overkill from old MS era.)
- [✔️] TypeORM support
- [ ] .hbm.xml / schema.xml: Hibernate/Propel mappings (Java/PHP) (💲 Someone can do this, not me)
- [ ] Django models.py (Python classes), Sequelize migrations (JS) (💲 Someone can do this, not me)
- [ ] .avsc: Avro schema (JSON format for data serialization) (💲 Someone can do this, not me)
- [✔️] GraphQL schema generation
## UI
- [✔️] Basic UI (I went with tview)
- [✔️] Save / Load Database
- [✔️] Schemas / Domains / Tables
- [ ] Add Relations
- [ ] Add Indexes
- [ ] Add Views
- [ ] Add Sequences
- [ ] Add Scripts
- [ ] Domain / Table Assignment
- [✔️] Add Relations
- [ ] Add Indexes
- [ ] Add Views
- [ ] Add Sequences
- [ ] Add Scripts
- [ ] Domain / Table Assignment
## Documentation
- [ ] API documentation (godoc)
- [✔️] API documentation (godoc)
- [ ] Usage examples for each format combination
## Advanced Features
- [ ] Dry-run mode for validation
- [x] Diff tool for comparing specifications
- [ ] Migration script generation
@@ -46,12 +47,13 @@
- [ ] Watch mode for auto-regeneration
## Future Considerations
- [ ] Web UI for visual editing
- [ ] REST API server mode
- [ ] Support for NoSQL databases
## Performance
- [ ] Concurrent processing for multiple tables
- [ ] Streaming for large databases
- [ ] Memory optimization


@@ -20,6 +20,7 @@ import (
"git.warky.dev/wdevs/relspecgo/pkg/readers/json"
"git.warky.dev/wdevs/relspecgo/pkg/readers/pgsql"
"git.warky.dev/wdevs/relspecgo/pkg/readers/prisma"
"git.warky.dev/wdevs/relspecgo/pkg/readers/sqlite"
"git.warky.dev/wdevs/relspecgo/pkg/readers/typeorm"
"git.warky.dev/wdevs/relspecgo/pkg/readers/yaml"
"git.warky.dev/wdevs/relspecgo/pkg/writers"
@@ -33,6 +34,7 @@ import (
wjson "git.warky.dev/wdevs/relspecgo/pkg/writers/json"
wpgsql "git.warky.dev/wdevs/relspecgo/pkg/writers/pgsql"
wprisma "git.warky.dev/wdevs/relspecgo/pkg/writers/prisma"
wsqlite "git.warky.dev/wdevs/relspecgo/pkg/writers/sqlite"
wtypeorm "git.warky.dev/wdevs/relspecgo/pkg/writers/typeorm"
wyaml "git.warky.dev/wdevs/relspecgo/pkg/writers/yaml"
)
@@ -70,6 +72,7 @@ Input formats:
- prisma: Prisma schema files (.prisma)
- typeorm: TypeORM entity files (TypeScript)
- pgsql: PostgreSQL database (live connection)
- sqlite: SQLite database file
Output formats:
- dbml: DBML schema files
@@ -84,13 +87,20 @@ Output formats:
- prisma: Prisma schema files (.prisma)
- typeorm: TypeORM entity files (TypeScript)
- pgsql: PostgreSQL SQL schema
- sqlite: SQLite SQL schema (with automatic schema flattening)
PostgreSQL Connection String Examples:
postgres://username:password@localhost:5432/database_name
postgres://username:password@localhost/database_name
postgresql://user:pass@host:5432/dbname?sslmode=disable
postgresql://user:pass@host/dbname?sslmode=require
host=localhost port=5432 user=username password=pass dbname=mydb sslmode=disable
Connection String Examples:
PostgreSQL:
postgres://username:password@localhost:5432/database_name
postgres://username:password@localhost/database_name
postgresql://user:pass@host:5432/dbname?sslmode=disable
postgresql://user:pass@host/dbname?sslmode=require
host=localhost port=5432 user=username password=pass dbname=mydb sslmode=disable
SQLite:
/path/to/database.db
./relative/path/database.sqlite
database.db
Examples:
@@ -136,14 +146,22 @@ Examples:
# Convert Bun models directory to JSON
relspec convert --from bun --from-path ./models \
--to json --to-path schema.json`,
--to json --to-path schema.json
# Convert SQLite database to JSON
relspec convert --from sqlite --from-path database.db \
--to json --to-path schema.json
# Convert SQLite to PostgreSQL SQL
relspec convert --from sqlite --from-path database.db \
--to pgsql --to-path schema.sql`,
RunE: runConvert,
}
func init() {
convertCmd.Flags().StringVar(&convertSourceType, "from", "", "Source format (dbml, dctx, drawdb, graphql, json, yaml, gorm, bun, drizzle, prisma, typeorm, pgsql)")
convertCmd.Flags().StringVar(&convertSourceType, "from", "", "Source format (dbml, dctx, drawdb, graphql, json, yaml, gorm, bun, drizzle, prisma, typeorm, pgsql, sqlite)")
convertCmd.Flags().StringVar(&convertSourcePath, "from-path", "", "Source file path (for file-based formats)")
convertCmd.Flags().StringVar(&convertSourceConn, "from-conn", "", "Source connection string (for database formats)")
convertCmd.Flags().StringVar(&convertSourceConn, "from-conn", "", "Source connection string (for pgsql) or file path (for sqlite)")
convertCmd.Flags().StringVar(&convertTargetType, "to", "", "Target format (dbml, dctx, drawdb, graphql, json, yaml, gorm, bun, drizzle, prisma, typeorm, pgsql)")
convertCmd.Flags().StringVar(&convertTargetPath, "to-path", "", "Target output path (file or directory)")
@@ -291,6 +309,17 @@ func readDatabaseForConvert(dbType, filePath, connString string) (*models.Databa
}
reader = graphql.NewReader(&readers.ReaderOptions{FilePath: filePath})
case "sqlite", "sqlite3":
// SQLite can use either file path or connection string
dbPath := filePath
if dbPath == "" {
dbPath = connString
}
if dbPath == "" {
return nil, fmt.Errorf("file path or connection string is required for SQLite format")
}
reader = sqlite.NewReader(&readers.ReaderOptions{FilePath: dbPath})
default:
return nil, fmt.Errorf("unsupported source format: %s", dbType)
}
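The path-or-connection-string fallback in this hunk recurs across several commands in this changeset. As a standalone sketch (`sqlitePath` is a hypothetical helper, not a function in the codebase):

```go
package main

import "fmt"

// sqlitePath sketches the fallback used by the SQLite cases in the diff:
// a SQLite "connection string" is just a file path, so either flag works.
func sqlitePath(filePath, connString string) (string, error) {
	if filePath != "" {
		return filePath, nil
	}
	if connString != "" {
		return connString, nil
	}
	return "", fmt.Errorf("file path or connection string is required for SQLite format")
}

func main() {
	path, err := sqlitePath("", "database.db")
	fmt.Println(path, err) // database.db <nil>
}
```

Factoring this into one helper would avoid repeating the same four-line fallback in each command's reader switch.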
@@ -346,6 +375,9 @@ func writeDatabase(db *models.Database, dbType, outputPath, packageName, schemaF
case "pgsql", "postgres", "postgresql", "sql":
writer = wpgsql.NewWriter(writerOpts)
case "sqlite", "sqlite3":
writer = wsqlite.NewWriter(writerOpts)
case "prisma":
writer = wprisma.NewWriter(writerOpts)


@@ -16,6 +16,7 @@ import (
"git.warky.dev/wdevs/relspecgo/pkg/readers/drawdb"
"git.warky.dev/wdevs/relspecgo/pkg/readers/json"
"git.warky.dev/wdevs/relspecgo/pkg/readers/pgsql"
"git.warky.dev/wdevs/relspecgo/pkg/readers/sqlite"
"git.warky.dev/wdevs/relspecgo/pkg/readers/yaml"
)
@@ -254,6 +255,17 @@ func readDatabase(dbType, filePath, connString, label string) (*models.Database,
}
reader = pgsql.NewReader(&readers.ReaderOptions{ConnectionString: connString})
case "sqlite", "sqlite3":
// SQLite can use either file path or connection string
dbPath := filePath
if dbPath == "" {
dbPath = connString
}
if dbPath == "" {
return nil, fmt.Errorf("%s: file path or connection string is required for SQLite format", label)
}
reader = sqlite.NewReader(&readers.ReaderOptions{FilePath: dbPath})
default:
return nil, fmt.Errorf("%s: unsupported database format: %s", label, dbType)
}


@@ -19,6 +19,7 @@ import (
"git.warky.dev/wdevs/relspecgo/pkg/readers/json"
"git.warky.dev/wdevs/relspecgo/pkg/readers/pgsql"
"git.warky.dev/wdevs/relspecgo/pkg/readers/prisma"
"git.warky.dev/wdevs/relspecgo/pkg/readers/sqlite"
"git.warky.dev/wdevs/relspecgo/pkg/readers/typeorm"
"git.warky.dev/wdevs/relspecgo/pkg/readers/yaml"
"git.warky.dev/wdevs/relspecgo/pkg/ui"
@@ -33,6 +34,7 @@ import (
wjson "git.warky.dev/wdevs/relspecgo/pkg/writers/json"
wpgsql "git.warky.dev/wdevs/relspecgo/pkg/writers/pgsql"
wprisma "git.warky.dev/wdevs/relspecgo/pkg/writers/prisma"
wsqlite "git.warky.dev/wdevs/relspecgo/pkg/writers/sqlite"
wtypeorm "git.warky.dev/wdevs/relspecgo/pkg/writers/typeorm"
wyaml "git.warky.dev/wdevs/relspecgo/pkg/writers/yaml"
)
@@ -73,6 +75,7 @@ Supports reading from and writing to all supported formats:
- prisma: Prisma schema files (.prisma)
- typeorm: TypeORM entity files (TypeScript)
- pgsql: PostgreSQL database (live connection)
- sqlite: SQLite database file
Output formats:
- dbml: DBML schema files
@@ -87,13 +90,19 @@ Supports reading from and writing to all supported formats:
- prisma: Prisma schema files (.prisma)
- typeorm: TypeORM entity files (TypeScript)
- pgsql: PostgreSQL SQL schema
- sqlite: SQLite SQL schema (with automatic schema flattening)
PostgreSQL Connection String Examples:
postgres://username:password@localhost:5432/database_name
postgres://username:password@localhost/database_name
postgresql://user:pass@host:5432/dbname?sslmode=disable
postgresql://user:pass@host/dbname?sslmode=require
host=localhost port=5432 user=username password=pass dbname=mydb sslmode=disable
Connection String Examples:
PostgreSQL:
postgres://username:password@localhost:5432/database_name
postgres://username:password@localhost/database_name
postgresql://user:pass@host:5432/dbname?sslmode=disable
postgresql://user:pass@host/dbname?sslmode=require
host=localhost port=5432 user=username password=pass dbname=mydb sslmode=disable
SQLite:
/path/to/database.db
./relative/path/database.sqlite
database.db
Examples:
# Edit a DBML schema file
@@ -107,15 +116,21 @@ Examples:
relspec edit --from json --from-path db.json --to gorm --to-path models/
# Edit GORM models in place
relspec edit --from gorm --from-path ./models --to gorm --to-path ./models`,
relspec edit --from gorm --from-path ./models --to gorm --to-path ./models
# Edit SQLite database
relspec edit --from sqlite --from-path database.db --to sqlite --to-path database.db
# Convert SQLite to DBML
relspec edit --from sqlite --from-path database.db --to dbml --to-path schema.dbml`,
RunE: runEdit,
}
func init() {
editCmd.Flags().StringVar(&editSourceType, "from", "", "Source format (dbml, dctx, drawdb, graphql, json, yaml, gorm, bun, drizzle, prisma, typeorm, pgsql)")
editCmd.Flags().StringVar(&editSourceType, "from", "", "Source format (dbml, dctx, drawdb, graphql, json, yaml, gorm, bun, drizzle, prisma, typeorm, pgsql, sqlite)")
editCmd.Flags().StringVar(&editSourcePath, "from-path", "", "Source file path (for file-based formats)")
editCmd.Flags().StringVar(&editSourceConn, "from-conn", "", "Source connection string (for database formats)")
editCmd.Flags().StringVar(&editTargetType, "to", "", "Target format (dbml, dctx, drawdb, graphql, json, yaml, gorm, bun, drizzle, prisma, typeorm, pgsql)")
editCmd.Flags().StringVar(&editSourceConn, "from-conn", "", "Source connection string (for pgsql) or file path (for sqlite)")
editCmd.Flags().StringVar(&editTargetType, "to", "", "Target format (dbml, dctx, drawdb, graphql, json, yaml, gorm, bun, drizzle, prisma, typeorm, pgsql, sqlite)")
editCmd.Flags().StringVar(&editTargetPath, "to-path", "", "Target file path (for file-based formats)")
editCmd.Flags().StringVar(&editSchemaFilter, "schema", "", "Filter to a specific schema by name")
@@ -281,6 +296,16 @@ func readDatabaseForEdit(dbType, filePath, connString, label string) (*models.Da
return nil, fmt.Errorf("%s: connection string is required for PostgreSQL format", label)
}
reader = pgsql.NewReader(&readers.ReaderOptions{ConnectionString: connString})
case "sqlite", "sqlite3":
// SQLite can use either file path or connection string
dbPath := filePath
if dbPath == "" {
dbPath = connString
}
if dbPath == "" {
return nil, fmt.Errorf("%s: file path or connection string is required for SQLite format", label)
}
reader = sqlite.NewReader(&readers.ReaderOptions{FilePath: dbPath})
default:
return nil, fmt.Errorf("%s: unsupported format: %s", label, dbType)
}
@@ -319,6 +344,8 @@ func writeDatabaseForEdit(dbType, filePath, connString string, db *models.Databa
writer = wprisma.NewWriter(&writers.WriterOptions{OutputPath: filePath})
case "typeorm":
writer = wtypeorm.NewWriter(&writers.WriterOptions{OutputPath: filePath})
case "sqlite", "sqlite3":
writer = wsqlite.NewWriter(&writers.WriterOptions{OutputPath: filePath})
case "pgsql":
writer = wpgsql.NewWriter(&writers.WriterOptions{OutputPath: filePath})
default:


@@ -20,6 +20,7 @@ import (
"git.warky.dev/wdevs/relspecgo/pkg/readers/json"
"git.warky.dev/wdevs/relspecgo/pkg/readers/pgsql"
"git.warky.dev/wdevs/relspecgo/pkg/readers/prisma"
"git.warky.dev/wdevs/relspecgo/pkg/readers/sqlite"
"git.warky.dev/wdevs/relspecgo/pkg/readers/typeorm"
"git.warky.dev/wdevs/relspecgo/pkg/readers/yaml"
)
@@ -288,6 +289,17 @@ func readDatabaseForInspect(dbType, filePath, connString string) (*models.Databa
}
reader = pgsql.NewReader(&readers.ReaderOptions{ConnectionString: connString})
case "sqlite", "sqlite3":
// SQLite can use either file path or connection string
dbPath := filePath
if dbPath == "" {
dbPath = connString
}
if dbPath == "" {
return nil, fmt.Errorf("file path or connection string is required for SQLite format")
}
reader = sqlite.NewReader(&readers.ReaderOptions{FilePath: dbPath})
default:
return nil, fmt.Errorf("unsupported database type: %s", dbType)
}


@@ -21,6 +21,7 @@ import (
"git.warky.dev/wdevs/relspecgo/pkg/readers/json"
"git.warky.dev/wdevs/relspecgo/pkg/readers/pgsql"
"git.warky.dev/wdevs/relspecgo/pkg/readers/prisma"
"git.warky.dev/wdevs/relspecgo/pkg/readers/sqlite"
"git.warky.dev/wdevs/relspecgo/pkg/readers/typeorm"
"git.warky.dev/wdevs/relspecgo/pkg/readers/yaml"
"git.warky.dev/wdevs/relspecgo/pkg/writers"
@@ -34,6 +35,7 @@ import (
wjson "git.warky.dev/wdevs/relspecgo/pkg/writers/json"
wpgsql "git.warky.dev/wdevs/relspecgo/pkg/writers/pgsql"
wprisma "git.warky.dev/wdevs/relspecgo/pkg/writers/prisma"
wsqlite "git.warky.dev/wdevs/relspecgo/pkg/writers/sqlite"
wtypeorm "git.warky.dev/wdevs/relspecgo/pkg/writers/typeorm"
wyaml "git.warky.dev/wdevs/relspecgo/pkg/writers/yaml"
)
@@ -314,6 +316,16 @@ func readDatabaseForMerge(dbType, filePath, connString, label string) (*models.D
return nil, fmt.Errorf("%s: connection string is required for PostgreSQL format", label)
}
reader = pgsql.NewReader(&readers.ReaderOptions{ConnectionString: connString})
case "sqlite", "sqlite3":
// SQLite can use either file path or connection string
dbPath := filePath
if dbPath == "" {
dbPath = connString
}
if dbPath == "" {
return nil, fmt.Errorf("%s: file path or connection string is required for SQLite format", label)
}
reader = sqlite.NewReader(&readers.ReaderOptions{FilePath: dbPath})
default:
return nil, fmt.Errorf("%s: unsupported format '%s'", label, dbType)
}
@@ -385,6 +397,8 @@ func writeDatabaseForMerge(dbType, filePath, connString string, db *models.Datab
return fmt.Errorf("%s: file path is required for TypeORM format", label)
}
writer = wtypeorm.NewWriter(&writers.WriterOptions{OutputPath: filePath, FlattenSchema: flattenSchema})
case "sqlite", "sqlite3":
writer = wsqlite.NewWriter(&writers.WriterOptions{OutputPath: filePath, FlattenSchema: flattenSchema})
case "pgsql":
writerOpts := &writers.WriterOptions{OutputPath: filePath, FlattenSchema: flattenSchema}
if connString != "" {

doc.go Normal file

@@ -0,0 +1,108 @@
// Package relspecgo provides bidirectional conversion between database schema formats.
//
// RelSpec is a comprehensive database schema tool that reads, writes, and transforms
// database schemas across multiple formats including live databases, ORM models,
// schema definition languages, and data interchange formats.
//
// # Features
//
// - Read from 15+ formats: PostgreSQL, SQLite, DBML, GORM, Prisma, Drizzle, and more
// - Write to 15+ formats: SQL, ORM models, schema definitions, JSON/YAML
// - Interactive TUI editor for visual schema management
// - Schema diff and merge capabilities
// - Format-agnostic intermediate representation
//
// # Architecture
//
// RelSpec uses a hub-and-spoke architecture with models.Database as the central type:
//
// Input Format → Reader → models.Database → Writer → Output Format
//
// This allows any supported input format to be converted to any supported output format
// without requiring N² conversion implementations.
//
// # Key Packages
//
// - pkg/models: Core data structures (Database, Schema, Table, Column, etc.)
// - pkg/readers: Input format readers (dbml, pgsql, gorm, etc.)
// - pkg/writers: Output format writers (dbml, pgsql, gorm, etc.)
// - pkg/ui: Interactive terminal UI for schema editing
// - pkg/diff: Schema comparison and difference detection
// - pkg/merge: Schema merging utilities
// - pkg/transform: Validation and normalization
//
// # Installation
//
// go install git.warky.dev/wdevs/relspecgo/cmd/relspec@latest
//
// # Usage
//
// Command-line conversion:
//
// relspec convert --from dbml --from-path schema.dbml \
// --to gorm --to-path ./models
//
// Interactive editor:
//
// relspec edit --from pgsql --from-conn "postgres://..." \
// --to dbml --to-path schema.dbml
//
// Schema comparison:
//
// relspec diff --source-type pgsql --source-conn "postgres://..." \
// --target-type dbml --target-path schema.dbml
//
// Merge schemas:
//
// relspec merge --target schema1.dbml --sources schema2.dbml,schema3.dbml
//
// # Supported Formats
//
// Input/Output Formats:
// - dbml: Database Markup Language
// - dctx: DCTX schema files
// - drawdb: DrawDB JSON format
// - graphql: GraphQL schema definition
// - json: JSON schema representation
// - yaml: YAML schema representation
// - gorm: Go GORM models
// - bun: Go Bun models
// - drizzle: TypeScript Drizzle ORM
// - prisma: Prisma schema language
// - typeorm: TypeScript TypeORM entities
// - pgsql: PostgreSQL (live DB or SQL)
// - sqlite: SQLite (database file or SQL)
//
// # Library Usage
//
// RelSpec can be used as a Go library:
//
// import (
// "git.warky.dev/wdevs/relspecgo/pkg/models"
// "git.warky.dev/wdevs/relspecgo/pkg/readers/dbml"
// "git.warky.dev/wdevs/relspecgo/pkg/writers/gorm"
// )
//
// // Read DBML
// reader := dbml.NewReader(&readers.ReaderOptions{
// FilePath: "schema.dbml",
// })
// db, err := reader.ReadDatabase()
//
// // Write GORM models
// writer := gorm.NewWriter(&writers.WriterOptions{
// OutputPath: "./models",
// PackageName: "models",
// })
// err = writer.WriteDatabase(db)
//
// # Documentation
//
// Full documentation available at: https://git.warky.dev/wdevs/relspecgo
//
// API documentation: go doc git.warky.dev/wdevs/relspecgo/...
//
// # License
//
// See LICENSE file in the repository root.
package relspecgo

go.mod

@@ -12,10 +12,12 @@ require (
github.com/uptrace/bun v1.2.16
golang.org/x/text v0.28.0
gopkg.in/yaml.v3 v3.0.1
modernc.org/sqlite v1.44.3
)
require (
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/dustin/go-humanize v1.0.1 // indirect
github.com/gdamore/encoding v1.0.1 // indirect
github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/jackc/pgpassfile v1.0.0 // indirect
@@ -23,9 +25,12 @@ require (
github.com/jinzhu/inflection v1.0.0 // indirect
github.com/kr/pretty v0.3.1 // indirect
github.com/lucasb-eyer/go-colorful v1.2.0 // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
github.com/mattn/go-runewidth v0.0.16 // indirect
github.com/ncruces/go-strftime v1.0.0 // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/puzpuzpuz/xsync/v3 v3.5.1 // indirect
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec // indirect
github.com/rivo/uniseg v0.4.7 // indirect
github.com/rogpeppe/go-internal v1.14.1 // indirect
github.com/spf13/pflag v1.0.10 // indirect
@@ -33,6 +38,10 @@ require (
github.com/vmihailenco/msgpack/v5 v5.4.1 // indirect
github.com/vmihailenco/tagparser/v2 v2.0.0 // indirect
golang.org/x/crypto v0.41.0 // indirect
golang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546 // indirect
golang.org/x/sys v0.38.0 // indirect
golang.org/x/term v0.34.0 // indirect
modernc.org/libc v1.67.6 // indirect
modernc.org/mathutil v1.7.1 // indirect
modernc.org/memory v1.11.0 // indirect
)

go.sum

@@ -3,13 +3,19 @@ github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ3
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dustin/go-humanize v1.0.1 h1:GzkhY7T5VNhEkwH0PVJgjz+fX1rhBrR7pRT3mDkpeCY=
github.com/dustin/go-humanize v1.0.1/go.mod h1:Mu1zIs6XwVuF/gI1OepvI0qD18qycQx+mFykh5fBlto=
github.com/gdamore/encoding v1.0.1 h1:YzKZckdBL6jVt2Gc+5p82qhrGiqMdG/eNs6Wy0u3Uhw=
github.com/gdamore/encoding v1.0.1/go.mod h1:0Z0cMFinngz9kS1QfMjCP8TY7em3bZYeeklsSDPivEo=
github.com/gdamore/tcell/v2 v2.8.1 h1:KPNxyqclpWpWQlPLx6Xui1pMk8S+7+R37h3g07997NU=
github.com/gdamore/tcell/v2 v2.8.1/go.mod h1:bj8ori1BG3OYMjmb3IklZVWfZUJ1UBQt9JXrOCOhGWw=
github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/pprof v0.0.0-20250317173921-a4b03ec1a45e h1:ijClszYn+mADRFY17kjQEVQ1XRhq2/JR1M3sGqeJoxs=
github.com/google/pprof v0.0.0-20250317173921-a4b03ec1a45e/go.mod h1:boTsfXsheKC2y+lKOCMpSfarhxDeIzfZG1jqGcPl3cA=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/hashicorp/golang-lru/v2 v2.0.7 h1:a+bsQ5rvGLjzHuww6tVxozPZFVghXaHOwFs4luLUK2k=
github.com/hashicorp/golang-lru/v2 v2.0.7/go.mod h1:QeFd9opnmA6QUJc5vARoKUSoFhyfM2/ZepoAG6RGpeM=
github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8=
github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=
github.com/jackc/pgpassfile v1.0.0 h1:/6Hmqy13Ss2zCq62VdNG8tM1wchn8zjSGOBJ6icpsIM=
@@ -28,13 +34,19 @@ github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/lucasb-eyer/go-colorful v1.2.0 h1:1nnpGOrhyZZuNyfu1QjKiUICQ74+3FNCN69Aj6K7nkY=
github.com/lucasb-eyer/go-colorful v1.2.0/go.mod h1:R4dSotOR9KMtayYi1e77YzuveK+i7ruzyGqttikkLy0=
github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/mattn/go-runewidth v0.0.16 h1:E5ScNMtiwvlvB5paMFdw9p4kSQzbXFikJ5SQO6TULQc=
github.com/mattn/go-runewidth v0.0.16/go.mod h1:Jdepj2loyihRzMpdS35Xk/zdY8IAYHsh153qUoGf23w=
github.com/ncruces/go-strftime v1.0.0 h1:HMFp8mLCTPp341M/ZnA4qaf7ZlsbTc+miZjCLOFAw7w=
github.com/ncruces/go-strftime v1.0.0/go.mod h1:Fwc5htZGVVkseilnfgOVb9mKy6w1naJmn9CehxcKcls=
github.com/pkg/diff v0.0.0-20210226163009-20ebb0f2a09e/go.mod h1:pJLUxLENpZxwdsKMEsNbx1VGcRFpLqf3715MtcvvzbA=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/puzpuzpuz/xsync/v3 v3.5.1 h1:GJYJZwO6IdxN/IKbneznS6yPkVC+c3zyY/j19c++5Fg=
github.com/puzpuzpuz/xsync/v3 v3.5.1/go.mod h1:VjzYrABPabuM4KyBh1Ftq6u8nhwY5tBPKP9jpmh0nnA=
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec h1:W09IVJc94icq4NjY3clb7Lk8O1qJ8BdBEF8z0ibU0rE=
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec/go.mod h1:qqbHyh8v60DhA7CoWK5oRCqLrMHRGoxYCSS9EjAz6Eo=
github.com/rivo/tview v0.42.0 h1:b/ftp+RxtDsHSaynXTbJb+/n/BxDEi+W3UfF5jILK6c=
github.com/rivo/tview v0.42.0/go.mod h1:cSfIYfhpSGCjp3r/ECJb+GKS7cGJnqV8vfjQPwoXyfY=
github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc=
@@ -72,11 +84,15 @@ golang.org/x/crypto v0.19.0/go.mod h1:Iy9bg/ha4yyC70EfRS8jz+B6ybOBKMaSxLj6P6oBDf
golang.org/x/crypto v0.23.0/go.mod h1:CKFgDieR+mRhux2Lsu27y0fO304Db0wZe70UKqHu0v8=
golang.org/x/crypto v0.41.0 h1:WKYxWedPGCTVVl5+WHSSrOBT0O8lx32+zxmHxijgXp4=
golang.org/x/crypto v0.41.0/go.mod h1:pO5AFd7FA68rFak7rOAGVuygIISepHftHnr8dr6+sUc=
golang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546 h1:mgKeJMpvi0yx/sU5GsxQ7p6s2wtOnGAHZWCHUM4KGzY=
golang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546/go.mod h1:j/pmGrbnkbPtQfxEe5D0VQhZC6qKbfKifgD0oM7sR70=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/mod v0.12.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/mod v0.15.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c=
golang.org/x/mod v0.17.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c=
golang.org/x/mod v0.29.0 h1:HV8lRxZC4l2cr3Zq1LvtOsi/ThTgWnUk/y64QSs8GwA=
golang.org/x/mod v0.29.0/go.mod h1:NyhrlYXJ2H4eJiRy/WDBO6HMqZQ6q9nk4JzS3NuCK+w=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
@@ -92,14 +108,15 @@ golang.org/x/sync v0.3.0/go.mod h1:FU7BRWz2tNW+3quACPkgCx/L+uEAv1htQ0V83Z9Rj+Y=
golang.org/x/sync v0.6.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.7.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.10.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.16.0 h1:ycBJEhp9p4vXvUZNszeOq0kGTPghopOL8q0fq3vstxw=
golang.org/x/sync v0.16.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
golang.org/x/sync v0.17.0 h1:l60nONMj9l5drqw6jlhIELNv9I0A4OFgRsG9k2oT9Ug=
golang.org/x/sync v0.17.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.12.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.17.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
@@ -135,6 +152,8 @@ golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc
golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU=
golang.org/x/tools v0.13.0/go.mod h1:HvlwmtVNQAhOuCjW7xxvovg8wbNq7LwfXh/k7wXUl58=
golang.org/x/tools v0.21.1-0.20240508182429-e35e4ccd0d2d/go.mod h1:aiJjzUbINMkxbQROHiO6hDPo2LHcIPhhQsa9DLh0yGk=
golang.org/x/tools v0.38.0 h1:Hx2Xv8hISq8Lm16jvBZ2VQf+RLmbd7wVUsALibYI/IQ=
golang.org/x/tools v0.38.0/go.mod h1:yEsQ/d/YK8cjh0L6rZlY8tgtlKiBNTL14pGDJPJpYQs=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
@@ -142,3 +161,31 @@ gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EV
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
modernc.org/cc/v4 v4.27.1 h1:9W30zRlYrefrDV2JE2O8VDtJ1yPGownxciz5rrbQZis=
modernc.org/cc/v4 v4.27.1/go.mod h1:uVtb5OGqUKpoLWhqwNQo/8LwvoiEBLvZXIQ/SmO6mL0=
modernc.org/ccgo/v4 v4.30.1 h1:4r4U1J6Fhj98NKfSjnPUN7Ze2c6MnAdL0hWw6+LrJpc=
modernc.org/ccgo/v4 v4.30.1/go.mod h1:bIOeI1JL54Utlxn+LwrFyjCx2n2RDiYEaJVSrgdrRfM=
modernc.org/fileutil v1.3.40 h1:ZGMswMNc9JOCrcrakF1HrvmergNLAmxOPjizirpfqBA=
modernc.org/fileutil v1.3.40/go.mod h1:HxmghZSZVAz/LXcMNwZPA/DRrQZEVP9VX0V4LQGQFOc=
modernc.org/gc/v2 v2.6.5 h1:nyqdV8q46KvTpZlsw66kWqwXRHdjIlJOhG6kxiV/9xI=
modernc.org/gc/v2 v2.6.5/go.mod h1:YgIahr1ypgfe7chRuJi2gD7DBQiKSLMPgBQe9oIiito=
modernc.org/gc/v3 v3.1.1 h1:k8T3gkXWY9sEiytKhcgyiZ2L0DTyCQ/nvX+LoCljoRE=
modernc.org/gc/v3 v3.1.1/go.mod h1:HFK/6AGESC7Ex+EZJhJ2Gni6cTaYpSMmU/cT9RmlfYY=
modernc.org/goabi0 v0.2.0 h1:HvEowk7LxcPd0eq6mVOAEMai46V+i7Jrj13t4AzuNks=
modernc.org/goabi0 v0.2.0/go.mod h1:CEFRnnJhKvWT1c1JTI3Avm+tgOWbkOu5oPA8eH8LnMI=
modernc.org/libc v1.67.6 h1:eVOQvpModVLKOdT+LvBPjdQqfrZq+pC39BygcT+E7OI=
modernc.org/libc v1.67.6/go.mod h1:JAhxUVlolfYDErnwiqaLvUqc8nfb2r6S6slAgZOnaiE=
modernc.org/mathutil v1.7.1 h1:GCZVGXdaN8gTqB1Mf/usp1Y/hSqgI2vAGGP4jZMCxOU=
modernc.org/mathutil v1.7.1/go.mod h1:4p5IwJITfppl0G4sUEDtCr4DthTaT47/N3aT6MhfgJg=
modernc.org/memory v1.11.0 h1:o4QC8aMQzmcwCK3t3Ux/ZHmwFPzE6hf2Y5LbkRs+hbI=
modernc.org/memory v1.11.0/go.mod h1:/JP4VbVC+K5sU2wZi9bHoq2MAkCnrt2r98UGeSK7Mjw=
modernc.org/opt v0.1.4 h1:2kNGMRiUjrp4LcaPuLY2PzUfqM/w9N23quVwhKt5Qm8=
modernc.org/opt v0.1.4/go.mod h1:03fq9lsNfvkYSfxrfUhZCWPk1lm4cq4N+Bh//bEtgns=
modernc.org/sortutil v1.2.1 h1:+xyoGf15mM3NMlPDnFqrteY07klSFxLElE2PVuWIJ7w=
modernc.org/sortutil v1.2.1/go.mod h1:7ZI3a3REbai7gzCLcotuw9AC4VZVpYMjDzETGsSMqJE=
modernc.org/sqlite v1.44.3 h1:+39JvV/HWMcYslAwRxHb8067w+2zowvFOUrOWIy9PjY=
modernc.org/sqlite v1.44.3/go.mod h1:CzbrU2lSB1DKUusvwGz7rqEKIq+NUd8GWuBBZDs9/nA=
modernc.org/strutil v1.2.1 h1:UneZBkQA+DX2Rp35KcM69cSsNES9ly8mQWD71HKlOA0=
modernc.org/strutil v1.2.1/go.mod h1:EHkiggD70koQxjVdSBM3JKM7k6L0FbGE5eymy9i3B9A=
modernc.org/token v1.1.0 h1:Xl7Ap9dKaEs5kLoOQeQmPWevfnk/DM5qcLcYlA8ys6Y=
modernc.org/token v1.1.0/go.mod h1:UGzOrNV1mAFSEB63lOFHIpNRUVMvYTc6yu1SMY/XTDM=

pkg/commontypes/doc.go

@@ -0,0 +1,28 @@
// Package commontypes provides shared type definitions used across multiple packages.
//
// # Overview
//
// The commontypes package contains common data structures, constants, and type
// definitions that are shared between different parts of RelSpec but don't belong
// to the core models package.
//
// # Purpose
//
// This package helps avoid circular dependencies by providing a common location
// for types that are used by multiple packages without creating import cycles.
//
// # Contents
//
// Common types may include:
// - Shared enums and constants
// - Utility type aliases
// - Common error types
// - Shared configuration structures
//
// # Usage
//
// import "git.warky.dev/wdevs/relspecgo/pkg/commontypes"
//
// // Use common types
// var formatType commontypes.FormatType
package commontypes

pkg/diff/doc.go

@@ -0,0 +1,43 @@
// Package diff provides utilities for comparing database schemas and identifying differences.
//
// # Overview
//
// The diff package compares two database models at various granularity levels (database,
// schema, table, column) and produces detailed reports of differences including:
// - Missing items (present in source but not in target)
// - Extra items (present in target but not in source)
// - Modified items (present in both but with different properties)
//
// # Usage
//
// Compare two databases and format the output:
//
// result := diff.CompareDatabases(sourceDB, targetDB)
// err := diff.FormatDiff(result, diff.OutputFormatText, os.Stdout)
//
// # Output Formats
//
// The package supports multiple output formats:
// - OutputFormatText: Human-readable text format
// - OutputFormatJSON: Structured JSON output
// - OutputFormatYAML: Structured YAML output
//
// # Comparison Scope
//
// The comparison covers:
// - Schemas: Name, description, and contents
// - Tables: Name, description, and all sub-elements
// - Columns: Type, nullability, defaults, constraints
// - Indexes: Columns, uniqueness, type
// - Constraints: Type, columns, references
// - Relationships: Type, from/to tables and columns
// - Views: Definition and columns
// - Sequences: Start value, increment, min/max values
//
// # Use Cases
//
// - Schema migration planning
// - Database synchronization verification
// - Change tracking and auditing
// - CI/CD pipeline validation
package diff
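The missing/extra/modified classification described above can be sketched in plain Go. The `ColumnDiff` type and `compareColumns` helper below are illustrative only, not the diff package's actual API, and operate on a simplified column-name-to-type map:

```go
package main

import (
	"fmt"
	"sort"
)

// ColumnDiff is a simplified, illustrative result type; the real diff
// package produces richer structures covering tables, indexes, and views.
type ColumnDiff struct {
	Missing  []string // present in source but not in target
	Extra    []string // present in target but not in source
	Modified []string // present in both, but with different types
}

// compareColumns mirrors the missing/extra/modified classification that
// the diff package applies at every granularity level.
func compareColumns(source, target map[string]string) ColumnDiff {
	var d ColumnDiff
	for name, srcType := range source {
		tgtType, ok := target[name]
		switch {
		case !ok:
			d.Missing = append(d.Missing, name)
		case srcType != tgtType:
			d.Modified = append(d.Modified, name)
		}
	}
	for name := range target {
		if _, ok := source[name]; !ok {
			d.Extra = append(d.Extra, name)
		}
	}
	// Sort for deterministic output (map iteration order is random).
	sort.Strings(d.Missing)
	sort.Strings(d.Extra)
	sort.Strings(d.Modified)
	return d
}

func main() {
	src := map[string]string{"id": "int", "email": "string", "age": "int"}
	tgt := map[string]string{"id": "int64", "email": "string", "nick": "string"}
	d := compareColumns(src, tgt)
	fmt.Println(d.Missing, d.Extra, d.Modified) // [age] [nick] [id]
}
```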

pkg/inspector/doc.go

@@ -0,0 +1,40 @@
// Package inspector provides database introspection capabilities for live databases.
//
// # Overview
//
// The inspector package contains utilities for connecting to live databases and
// extracting their schema information through system catalog queries and metadata
// inspection.
//
// # Features
//
// - Database connection management
// - Schema metadata extraction
// - Table structure analysis
// - Constraint and index discovery
// - Foreign key relationship mapping
//
// # Supported Databases
//
// - PostgreSQL (via pgx driver)
// - SQLite (via modernc.org/sqlite driver)
//
// # Usage
//
// This package is used internally by database readers (pgsql, sqlite) to perform
// live schema introspection:
//
// inspector := inspector.NewPostgreSQLInspector(connString)
// schemas, err := inspector.GetSchemas()
// tables, err := inspector.GetTables(schemaName)
//
// # Architecture
//
// Each database type has its own inspector implementation that understands the
// specific system catalogs and metadata structures of that database system.
//
// # Security
//
// Inspectors use read-only operations and never modify database structure.
// Connection credentials should be handled securely.
package inspector

pkg/pgsql/doc.go

@@ -0,0 +1,36 @@
// Package pgsql provides PostgreSQL-specific utilities and helpers.
//
// # Overview
//
// The pgsql package contains PostgreSQL-specific functionality including:
// - SQL reserved keyword validation
// - Data type mappings and conversions
// - PostgreSQL-specific schema introspection helpers
//
// # Components
//
// keywords.go - SQL reserved keywords validation
//
// Provides functions to check if identifiers conflict with SQL reserved words
// and need quoting for safe usage in PostgreSQL queries.
//
// datatypes.go - PostgreSQL data type utilities
//
// Contains mappings between PostgreSQL data types and their equivalents in other
// systems, as well as type conversion and normalization functions.
//
// # Usage
//
// // Check if identifier needs quoting
// if pgsql.IsReservedKeyword("user") {
// // Quote the identifier
// }
//
// // Normalize data type
// normalizedType := pgsql.NormalizeDataType("varchar(255)")
//
// # Purpose
//
// This package supports the PostgreSQL reader and writer implementations by providing
// shared utilities for handling PostgreSQL-specific schema elements and constraints.
package pgsql
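A minimal sketch of the keyword-check-and-quote pattern described above. The reserved set here is a tiny illustrative subset of PostgreSQL's reserved words, and `quoteIdentifier` is a hypothetical helper, not necessarily this package's exact API:

```go
package main

import (
	"fmt"
	"strings"
)

// reserved is a small illustrative subset; the real keywords.go covers the
// full PostgreSQL reserved-word list.
var reserved = map[string]bool{
	"user": true, "order": true, "table": true, "select": true,
}

// isReservedKeyword reports whether an identifier collides with a reserved word.
func isReservedKeyword(ident string) bool {
	return reserved[strings.ToLower(ident)]
}

// quoteIdentifier double-quotes an identifier when it needs quoting,
// doubling any embedded quotes per PostgreSQL rules.
func quoteIdentifier(ident string) string {
	if !isReservedKeyword(ident) {
		return ident
	}
	return `"` + strings.ReplaceAll(ident, `"`, `""`) + `"`
}

func main() {
	fmt.Println(quoteIdentifier("user"))  // "user"
	fmt.Println(quoteIdentifier("email")) // email
}
```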

pkg/readers/doc.go

@@ -0,0 +1,53 @@
// Package readers provides interfaces and implementations for reading database schemas
// from various input formats and data sources.
//
// # Overview
//
// The readers package defines a common Reader interface that all format-specific readers
// implement. This allows RelSpec to read database schemas from multiple sources including:
// - Live databases (PostgreSQL, SQLite)
// - Schema definition files (DBML, DCTX, DrawDB, GraphQL)
// - ORM model files (GORM, Bun, Drizzle, Prisma, TypeORM)
// - Data interchange formats (JSON, YAML)
//
// # Architecture
//
// Each reader implementation is located in its own subpackage (e.g., pkg/readers/dbml,
// pkg/readers/pgsql) and implements the Reader interface, supporting three levels of
// granularity:
// - ReadDatabase() - Read complete database with all schemas
// - ReadSchema() - Read single schema with all tables
// - ReadTable() - Read single table with all columns and metadata
//
// # Usage
//
// Readers are instantiated with ReaderOptions containing source-specific configuration:
//
// // Read from file
// reader := dbml.NewReader(&readers.ReaderOptions{
// FilePath: "schema.dbml",
// })
// db, err := reader.ReadDatabase()
//
// // Read from database
// reader := pgsql.NewReader(&readers.ReaderOptions{
// ConnectionString: "postgres://user:pass@localhost/mydb",
// })
// db, err := reader.ReadDatabase()
//
// # Supported Formats
//
// - dbml: Database Markup Language files
// - dctx: DCTX schema files
// - drawdb: DrawDB JSON format
// - graphql: GraphQL schema definition language
// - json: JSON database schema
// - yaml: YAML database schema
// - gorm: Go GORM model structs
// - bun: Go Bun model structs
// - drizzle: TypeScript Drizzle ORM schemas
// - prisma: Prisma schema language
// - typeorm: TypeScript TypeORM entities
// - pgsql: PostgreSQL live database introspection
// - sqlite: SQLite database files
package readers


@@ -0,0 +1,75 @@
# SQLite Reader
Reads database schema from SQLite database files.
## Usage
```go
import (
"git.warky.dev/wdevs/relspecgo/pkg/readers"
"git.warky.dev/wdevs/relspecgo/pkg/readers/sqlite"
)
// Using file path
options := &readers.ReaderOptions{
FilePath: "path/to/database.db",
}
reader := sqlite.NewReader(options)
db, err := reader.ReadDatabase()
// Or using connection string
options := &readers.ReaderOptions{
ConnectionString: "path/to/database.db",
}
```
## Features
- Reads tables with columns and data types
- Reads views with definitions
- Reads primary keys
- Reads foreign keys with CASCADE actions
- Reads indexes (non-auto-generated)
- Maps SQLite types to canonical types
- Derives relationships from foreign keys
## SQLite Specifics
- SQLite doesn't support schemas, so the reader creates a single "main" schema
- Uses pure Go driver (modernc.org/sqlite) - no CGo required
- Supports both file path and connection string
- Auto-increment detection for INTEGER PRIMARY KEY columns
- Foreign keys require `PRAGMA foreign_keys = ON` to be set
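The auto-increment detection mentioned above boils down to scanning the stored `CREATE TABLE` statement for the `AUTOINCREMENT` keyword; a minimal stand-alone sketch of that check (not the reader's exact implementation — quoted identifiers and unusual whitespace are out of scope):

```go
package main

import (
	"fmt"
	"strings"
)

// hasAutoIncrement reports whether the CREATE TABLE SQL declares the given
// column as INTEGER PRIMARY KEY AUTOINCREMENT.
func hasAutoIncrement(createSQL, column string) bool {
	upper := strings.ToUpper(createSQL)
	return strings.Contains(upper, strings.ToUpper(column)+" INTEGER PRIMARY KEY AUTOINCREMENT")
}

func main() {
	ddl := `CREATE TABLE users (
	id INTEGER PRIMARY KEY AUTOINCREMENT,
	username VARCHAR(50) NOT NULL UNIQUE
)`
	fmt.Println(hasAutoIncrement(ddl, "id"))       // true
	fmt.Println(hasAutoIncrement(ddl, "username")) // false
}
```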
## Example Schema
```sql
PRAGMA foreign_keys = ON;
CREATE TABLE users (
id INTEGER PRIMARY KEY AUTOINCREMENT,
username VARCHAR(50) NOT NULL UNIQUE,
email VARCHAR(100) NOT NULL
);
CREATE TABLE posts (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_id INTEGER NOT NULL,
title VARCHAR(200) NOT NULL,
FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE
);
```
## Type Mappings
| SQLite Type | Canonical Type |
|-------------|---------------|
| INTEGER, INT | int |
| BIGINT | int64 |
| REAL, DOUBLE | float64 |
| TEXT, VARCHAR | string |
| BLOB | bytea |
| BOOLEAN | bool |
| DATE | date |
| DATETIME, TIMESTAMP | timestamp |
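Because SQLite accepts parameterized declarations such as `VARCHAR(255)`, the mapping falls back to a prefix match on the base type when no exact match is found. A self-contained sketch of that lookup, using a subset of the table above (illustrative, not the reader's exact code):

```go
package main

import (
	"fmt"
	"strings"
)

// typeMap is a subset of the mappings in the table above.
var typeMap = map[string]string{
	"INTEGER": "int",
	"BIGINT":  "int64",
	"REAL":    "float64",
	"VARCHAR": "string",
	"TEXT":    "string",
	"BLOB":    "bytea",
}

// mapType tries an exact match first, then a prefix match so that
// "VARCHAR(255)" resolves via its base type "VARCHAR"; unknown types
// default to string.
func mapType(sqliteType string) string {
	t := strings.ToUpper(sqliteType)
	if mapped, ok := typeMap[t]; ok {
		return mapped
	}
	for base, mapped := range typeMap {
		if strings.HasPrefix(t, base) {
			return mapped
		}
	}
	return "string"
}

func main() {
	fmt.Println(mapType("varchar(255)")) // string
	fmt.Println(mapType("BIGINT"))       // int64
	fmt.Println(mapType("JSON"))         // string (unknown type)
}
```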


@@ -0,0 +1,306 @@
package sqlite
import (
"fmt"
"strings"
"git.warky.dev/wdevs/relspecgo/pkg/models"
)
// queryTables retrieves all tables from the SQLite database
func (r *Reader) queryTables() ([]*models.Table, error) {
query := `
SELECT name
FROM sqlite_master
WHERE type = 'table'
AND name NOT LIKE 'sqlite_%'
ORDER BY name
`
rows, err := r.db.QueryContext(r.ctx, query)
if err != nil {
return nil, err
}
defer rows.Close()
tables := make([]*models.Table, 0)
for rows.Next() {
var tableName string
if err := rows.Scan(&tableName); err != nil {
return nil, err
}
table := models.InitTable(tableName, "main")
tables = append(tables, table)
}
return tables, rows.Err()
}
// queryViews retrieves all views from the SQLite database
func (r *Reader) queryViews() ([]*models.View, error) {
query := `
SELECT name, sql
FROM sqlite_master
WHERE type = 'view'
ORDER BY name
`
rows, err := r.db.QueryContext(r.ctx, query)
if err != nil {
return nil, err
}
defer rows.Close()
views := make([]*models.View, 0)
for rows.Next() {
var viewName string
var sql *string
if err := rows.Scan(&viewName, &sql); err != nil {
return nil, err
}
view := models.InitView(viewName, "main")
if sql != nil {
view.Definition = *sql
}
views = append(views, view)
}
return views, rows.Err()
}
// queryColumns retrieves all columns for a given table or view
func (r *Reader) queryColumns(tableName string) (map[string]*models.Column, error) {
query := fmt.Sprintf("PRAGMA table_info(%s)", tableName)
rows, err := r.db.QueryContext(r.ctx, query)
if err != nil {
return nil, err
}
defer rows.Close()
columns := make(map[string]*models.Column)
for rows.Next() {
var cid int
var name, dataType string
var notNull, pk int
var defaultValue *string
if err := rows.Scan(&cid, &name, &dataType, &notNull, &defaultValue, &pk); err != nil {
return nil, err
}
column := models.InitColumn(name, tableName, "main")
column.Type = r.mapDataType(strings.ToUpper(dataType))
column.NotNull = (notNull == 1)
column.IsPrimaryKey = (pk > 0)
column.Sequence = uint(cid + 1)
if defaultValue != nil {
column.Default = *defaultValue
}
// Check for autoincrement (SQLite uses INTEGER PRIMARY KEY AUTOINCREMENT)
if pk > 0 && strings.EqualFold(dataType, "INTEGER") {
column.AutoIncrement = r.isAutoIncrement(tableName, name)
}
columns[name] = column
}
return columns, rows.Err()
}
// isAutoIncrement checks if a column is autoincrement
func (r *Reader) isAutoIncrement(tableName, columnName string) bool {
// Parse the stored CREATE TABLE statement for the AUTOINCREMENT keyword
query := `
SELECT sql
FROM sqlite_master
WHERE type = 'table' AND name = ?
`
var sql string
err := r.db.QueryRowContext(r.ctx, query, tableName).Scan(&sql)
if err != nil {
return false
}
// AUTOINCREMENT is only valid on INTEGER PRIMARY KEY columns, so look for
// the keyword after the column name in the CREATE TABLE SQL.
// Note: quoted identifiers and unusual whitespace are not handled.
return strings.Contains(strings.ToUpper(sql), strings.ToUpper(columnName)+" INTEGER PRIMARY KEY AUTOINCREMENT")
}
// queryPrimaryKey retrieves the primary key constraint for a table
func (r *Reader) queryPrimaryKey(tableName string) (*models.Constraint, error) {
query := fmt.Sprintf("PRAGMA table_info(%s)", tableName)
rows, err := r.db.QueryContext(r.ctx, query)
if err != nil {
return nil, err
}
defer rows.Close()
var pkColumns []string
for rows.Next() {
var cid int
var name, dataType string
var notNull, pk int
var defaultValue *string
if err := rows.Scan(&cid, &name, &dataType, &notNull, &defaultValue, &pk); err != nil {
return nil, err
}
if pk > 0 {
pkColumns = append(pkColumns, name)
}
}
if len(pkColumns) == 0 {
return nil, nil
}
// Create primary key constraint
constraintName := fmt.Sprintf("%s_pkey", tableName)
constraint := models.InitConstraint(constraintName, models.PrimaryKeyConstraint)
constraint.Schema = "main"
constraint.Table = tableName
constraint.Columns = pkColumns
return constraint, rows.Err()
}
// queryForeignKeys retrieves all foreign key constraints for a table
func (r *Reader) queryForeignKeys(tableName string) ([]*models.Constraint, error) {
query := fmt.Sprintf("PRAGMA foreign_key_list(%s)", tableName)
rows, err := r.db.QueryContext(r.ctx, query)
if err != nil {
return nil, err
}
defer rows.Close()
// Group foreign keys by id (since composite FKs have multiple rows)
fkMap := make(map[int]*models.Constraint)
for rows.Next() {
var id, seq int
var referencedTable, fromColumn, toColumn string
var onUpdate, onDelete, match string
if err := rows.Scan(&id, &seq, &referencedTable, &fromColumn, &toColumn, &onUpdate, &onDelete, &match); err != nil {
return nil, err
}
if _, exists := fkMap[id]; !exists {
constraintName := fmt.Sprintf("%s_%s_fkey", tableName, referencedTable)
if id > 0 {
constraintName = fmt.Sprintf("%s_%s_fkey_%d", tableName, referencedTable, id)
}
constraint := models.InitConstraint(constraintName, models.ForeignKeyConstraint)
constraint.Schema = "main"
constraint.Table = tableName
constraint.ReferencedSchema = "main"
constraint.ReferencedTable = referencedTable
constraint.OnUpdate = onUpdate
constraint.OnDelete = onDelete
constraint.Columns = []string{}
constraint.ReferencedColumns = []string{}
fkMap[id] = constraint
}
// Add column to the constraint
fkMap[id].Columns = append(fkMap[id].Columns, fromColumn)
fkMap[id].ReferencedColumns = append(fkMap[id].ReferencedColumns, toColumn)
}
// Convert map to slice
foreignKeys := make([]*models.Constraint, 0, len(fkMap))
for _, fk := range fkMap {
foreignKeys = append(foreignKeys, fk)
}
return foreignKeys, rows.Err()
}
// queryIndexes retrieves all indexes for a table
func (r *Reader) queryIndexes(tableName string) ([]*models.Index, error) {
query := fmt.Sprintf("PRAGMA index_list(%s)", tableName)
rows, err := r.db.QueryContext(r.ctx, query)
if err != nil {
return nil, err
}
defer rows.Close()
indexes := make([]*models.Index, 0)
for rows.Next() {
var seq int
var name string
var unique int
var origin string
var partial int
if err := rows.Scan(&seq, &name, &unique, &origin, &partial); err != nil {
return nil, err
}
// Skip auto-generated indexes (origin = 'pk' for primary keys, etc.)
// origin: c = CREATE INDEX, u = UNIQUE constraint, pk = PRIMARY KEY
if origin == "pk" || origin == "u" {
continue
}
index := models.InitIndex(name, tableName, "main")
index.Unique = (unique == 1)
// Get index columns
columns, err := r.queryIndexColumns(name)
if err != nil {
return nil, err
}
index.Columns = columns
indexes = append(indexes, index)
}
return indexes, rows.Err()
}
// queryIndexColumns retrieves the columns for a specific index
func (r *Reader) queryIndexColumns(indexName string) ([]string, error) {
query := fmt.Sprintf("PRAGMA index_info(%s)", indexName)
rows, err := r.db.QueryContext(r.ctx, query)
if err != nil {
return nil, err
}
defer rows.Close()
columns := make([]string, 0)
for rows.Next() {
var seqno, cid int
var name *string
if err := rows.Scan(&seqno, &cid, &name); err != nil {
return nil, err
}
if name != nil {
columns = append(columns, *name)
}
}
return columns, rows.Err()
}


@@ -0,0 +1,261 @@
package sqlite
import (
"context"
"database/sql"
"fmt"
"path/filepath"
_ "modernc.org/sqlite" // SQLite driver
"git.warky.dev/wdevs/relspecgo/pkg/models"
"git.warky.dev/wdevs/relspecgo/pkg/readers"
)
// Reader implements the readers.Reader interface for SQLite databases
type Reader struct {
options *readers.ReaderOptions
db *sql.DB
ctx context.Context
}
// NewReader creates a new SQLite reader
func NewReader(options *readers.ReaderOptions) *Reader {
return &Reader{
options: options,
ctx: context.Background(),
}
}
// ReadDatabase reads the entire database schema from SQLite
func (r *Reader) ReadDatabase() (*models.Database, error) {
// Validate file path or connection string
dbPath := r.options.FilePath
if dbPath == "" && r.options.ConnectionString != "" {
dbPath = r.options.ConnectionString
}
if dbPath == "" {
return nil, fmt.Errorf("file path or connection string is required")
}
// Connect to the database
if err := r.connect(dbPath); err != nil {
return nil, fmt.Errorf("failed to connect: %w", err)
}
defer r.close()
// Get database name from file path
dbName := filepath.Base(dbPath)
if dbName == "" {
dbName = "sqlite"
}
// Initialize database model
db := models.InitDatabase(dbName)
db.DatabaseType = models.SqlLiteDatabaseType
db.SourceFormat = "sqlite"
// Get SQLite version
var version string
err := r.db.QueryRowContext(r.ctx, "SELECT sqlite_version()").Scan(&version)
if err == nil {
db.DatabaseVersion = version
}
// SQLite doesn't have schemas, so we create a single "main" schema
schema := models.InitSchema("main")
schema.RefDatabase = db
// Query tables
tables, err := r.queryTables()
if err != nil {
return nil, fmt.Errorf("failed to query tables: %w", err)
}
schema.Tables = tables
// Query views
views, err := r.queryViews()
if err != nil {
return nil, fmt.Errorf("failed to query views: %w", err)
}
schema.Views = views
// Query columns for tables and views
for _, table := range schema.Tables {
columns, err := r.queryColumns(table.Name)
if err != nil {
return nil, fmt.Errorf("failed to query columns for table %s: %w", table.Name, err)
}
table.Columns = columns
table.RefSchema = schema
// Query primary key
pk, err := r.queryPrimaryKey(table.Name)
if err != nil {
return nil, fmt.Errorf("failed to query primary key for table %s: %w", table.Name, err)
}
if pk != nil {
table.Constraints[pk.Name] = pk
// Mark columns as primary key and not null
for _, colName := range pk.Columns {
if col, exists := table.Columns[colName]; exists {
col.IsPrimaryKey = true
col.NotNull = true
}
}
}
// Query foreign keys
foreignKeys, err := r.queryForeignKeys(table.Name)
if err != nil {
return nil, fmt.Errorf("failed to query foreign keys for table %s: %w", table.Name, err)
}
for _, fk := range foreignKeys {
table.Constraints[fk.Name] = fk
// Derive relationship from foreign key
r.deriveRelationship(table, fk)
}
// Query indexes
indexes, err := r.queryIndexes(table.Name)
if err != nil {
return nil, fmt.Errorf("failed to query indexes for table %s: %w", table.Name, err)
}
for _, idx := range indexes {
table.Indexes[idx.Name] = idx
}
}
// Query columns for views
for _, view := range schema.Views {
columns, err := r.queryColumns(view.Name)
if err != nil {
return nil, fmt.Errorf("failed to query columns for view %s: %w", view.Name, err)
}
view.Columns = columns
view.RefSchema = schema
}
// Add schema to database
db.Schemas = append(db.Schemas, schema)
return db, nil
}
// ReadSchema reads a single schema (returns the main schema from the database)
func (r *Reader) ReadSchema() (*models.Schema, error) {
db, err := r.ReadDatabase()
if err != nil {
return nil, err
}
if len(db.Schemas) == 0 {
return nil, fmt.Errorf("no schemas found in database")
}
return db.Schemas[0], nil
}
// ReadTable reads a single table (returns the first table from the schema)
func (r *Reader) ReadTable() (*models.Table, error) {
schema, err := r.ReadSchema()
if err != nil {
return nil, err
}
if len(schema.Tables) == 0 {
return nil, fmt.Errorf("no tables found in schema")
}
return schema.Tables[0], nil
}
// connect establishes a connection to the SQLite database
func (r *Reader) connect(dbPath string) error {
db, err := sql.Open("sqlite", dbPath)
if err != nil {
return err
}
r.db = db
return nil
}
// close closes the database connection
func (r *Reader) close() {
if r.db != nil {
r.db.Close()
}
}
// mapDataType maps SQLite data types to canonical types
func (r *Reader) mapDataType(sqliteType string) string {
// SQLite has a flexible type system, but we map common types
typeMap := map[string]string{
"INTEGER": "int",
"INT": "int",
"TINYINT": "int8",
"SMALLINT": "int16",
"MEDIUMINT": "int",
"BIGINT": "int64",
"UNSIGNED BIG INT": "uint64",
"INT2": "int16",
"INT8": "int64",
"REAL": "float64",
"DOUBLE": "float64",
"DOUBLE PRECISION": "float64",
"FLOAT": "float32",
"NUMERIC": "decimal",
"DECIMAL": "decimal",
"BOOLEAN": "bool",
"BOOL": "bool",
"DATE": "date",
"DATETIME": "timestamp",
"TIMESTAMP": "timestamp",
"TEXT": "string",
"VARCHAR": "string",
"CHAR": "string",
"CHARACTER": "string",
"VARYING CHARACTER": "string",
"NCHAR": "string",
"NVARCHAR": "string",
"CLOB": "text",
"BLOB": "bytea",
}
// Try exact match first
if mapped, exists := typeMap[sqliteType]; exists {
return mapped
}
// Normalize for a case-insensitive match and strip any length/precision
// suffix, e.g. "varchar(255)" -> "VARCHAR"
sqliteTypeUpper := strings.ToUpper(strings.TrimSpace(sqliteType))
if idx := strings.Index(sqliteTypeUpper, "("); idx != -1 {
sqliteTypeUpper = strings.TrimSpace(sqliteTypeUpper[:idx])
}
if mapped, exists := typeMap[sqliteTypeUpper]; exists {
return mapped
}
// Default to string for unknown types
return "string"
}
// deriveRelationship creates a relationship from a foreign key constraint
func (r *Reader) deriveRelationship(table *models.Table, fk *models.Constraint) {
relationshipName := fmt.Sprintf("%s_to_%s", table.Name, fk.ReferencedTable)
relationship := models.InitRelationship(relationshipName, models.OneToMany)
relationship.FromTable = table.Name
relationship.FromSchema = table.Schema
relationship.ToTable = fk.ReferencedTable
relationship.ToSchema = fk.ReferencedSchema
relationship.ForeignKey = fk.Name
// Store constraint actions in properties
if fk.OnDelete != "" {
relationship.Properties["on_delete"] = fk.OnDelete
}
if fk.OnUpdate != "" {
relationship.Properties["on_update"] = fk.OnUpdate
}
table.Relationships[relationshipName] = relationship
}


@@ -0,0 +1,334 @@
package sqlite
import (
"database/sql"
"os"
"path/filepath"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"git.warky.dev/wdevs/relspecgo/pkg/models"
"git.warky.dev/wdevs/relspecgo/pkg/readers"
)
// setupTestDatabase creates a temporary SQLite database with test data
func setupTestDatabase(t *testing.T) string {
tmpDir := t.TempDir()
dbPath := filepath.Join(tmpDir, "test.db")
db, err := sql.Open("sqlite", dbPath)
require.NoError(t, err)
defer db.Close()
// Create test schema
schema := `
PRAGMA foreign_keys = ON;
CREATE TABLE users (
id INTEGER PRIMARY KEY AUTOINCREMENT,
username VARCHAR(50) NOT NULL UNIQUE,
email VARCHAR(100) NOT NULL,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE posts (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_id INTEGER NOT NULL,
title VARCHAR(200) NOT NULL,
content TEXT,
published BOOLEAN DEFAULT 0,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE
);
CREATE TABLE comments (
id INTEGER PRIMARY KEY AUTOINCREMENT,
post_id INTEGER NOT NULL,
user_id INTEGER NOT NULL,
comment TEXT NOT NULL,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
FOREIGN KEY (post_id) REFERENCES posts(id) ON DELETE CASCADE,
FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE
);
CREATE INDEX idx_posts_user_id ON posts(user_id);
CREATE INDEX idx_comments_post_id ON comments(post_id);
CREATE UNIQUE INDEX idx_users_email ON users(email);
CREATE VIEW user_post_count AS
SELECT u.id, u.username, COUNT(p.id) as post_count
FROM users u
LEFT JOIN posts p ON u.id = p.user_id
GROUP BY u.id, u.username;
`
_, err = db.Exec(schema)
require.NoError(t, err)
return dbPath
}
func TestReader_ReadDatabase(t *testing.T) {
dbPath := setupTestDatabase(t)
defer os.Remove(dbPath)
options := &readers.ReaderOptions{
FilePath: dbPath,
}
reader := NewReader(options)
db, err := reader.ReadDatabase()
require.NoError(t, err)
require.NotNil(t, db)
// Check database metadata
assert.Equal(t, "test.db", db.Name)
assert.Equal(t, models.SqlLiteDatabaseType, db.DatabaseType)
assert.Equal(t, "sqlite", db.SourceFormat)
assert.NotEmpty(t, db.DatabaseVersion)
// Check schemas (SQLite should have a single "main" schema)
require.Len(t, db.Schemas, 1)
schema := db.Schemas[0]
assert.Equal(t, "main", schema.Name)
// Check tables
assert.Len(t, schema.Tables, 3)
tableNames := make([]string, len(schema.Tables))
for i, table := range schema.Tables {
tableNames[i] = table.Name
}
assert.Contains(t, tableNames, "users")
assert.Contains(t, tableNames, "posts")
assert.Contains(t, tableNames, "comments")
// Check views
assert.Len(t, schema.Views, 1)
assert.Equal(t, "user_post_count", schema.Views[0].Name)
assert.NotEmpty(t, schema.Views[0].Definition)
}
func TestReader_ReadTable_Users(t *testing.T) {
dbPath := setupTestDatabase(t)
defer os.Remove(dbPath)
options := &readers.ReaderOptions{
FilePath: dbPath,
}
reader := NewReader(options)
db, err := reader.ReadDatabase()
require.NoError(t, err)
require.NotNil(t, db)
// Find users table
var usersTable *models.Table
for _, table := range db.Schemas[0].Tables {
if table.Name == "users" {
usersTable = table
break
}
}
require.NotNil(t, usersTable)
assert.Equal(t, "users", usersTable.Name)
assert.Equal(t, "main", usersTable.Schema)
// Check columns
assert.Len(t, usersTable.Columns, 4)
// Check id column
idCol, exists := usersTable.Columns["id"]
require.True(t, exists)
assert.Equal(t, "int", idCol.Type)
assert.True(t, idCol.IsPrimaryKey)
assert.True(t, idCol.AutoIncrement)
assert.True(t, idCol.NotNull)
// Check username column
usernameCol, exists := usersTable.Columns["username"]
require.True(t, exists)
assert.Equal(t, "string", usernameCol.Type)
assert.True(t, usernameCol.NotNull)
assert.False(t, usernameCol.IsPrimaryKey)
// Check email column
emailCol, exists := usersTable.Columns["email"]
require.True(t, exists)
assert.Equal(t, "string", emailCol.Type)
assert.True(t, emailCol.NotNull)
// Check primary key constraint
assert.Len(t, usersTable.Constraints, 1)
pkConstraint, exists := usersTable.Constraints["users_pkey"]
require.True(t, exists)
assert.Equal(t, models.PrimaryKeyConstraint, pkConstraint.Type)
assert.Equal(t, []string{"id"}, pkConstraint.Columns)
// Check indexes (should have unique index on email and username)
assert.GreaterOrEqual(t, len(usersTable.Indexes), 1)
}
func TestReader_ReadTable_Posts(t *testing.T) {
dbPath := setupTestDatabase(t)
defer os.Remove(dbPath)
options := &readers.ReaderOptions{
FilePath: dbPath,
}
reader := NewReader(options)
db, err := reader.ReadDatabase()
require.NoError(t, err)
require.NotNil(t, db)
// Find posts table
var postsTable *models.Table
for _, table := range db.Schemas[0].Tables {
if table.Name == "posts" {
postsTable = table
break
}
}
require.NotNil(t, postsTable)
// Check columns
assert.Len(t, postsTable.Columns, 6)
// Check foreign key constraint
hasForeignKey := false
for _, constraint := range postsTable.Constraints {
if constraint.Type == models.ForeignKeyConstraint {
hasForeignKey = true
assert.Equal(t, "users", constraint.ReferencedTable)
assert.Equal(t, "CASCADE", constraint.OnDelete)
}
}
assert.True(t, hasForeignKey, "Posts table should have a foreign key constraint")
// Check relationships
assert.GreaterOrEqual(t, len(postsTable.Relationships), 1)
// Check indexes
hasUserIdIndex := false
for _, index := range postsTable.Indexes {
if index.Name == "idx_posts_user_id" {
hasUserIdIndex = true
assert.Contains(t, index.Columns, "user_id")
}
}
assert.True(t, hasUserIdIndex, "Posts table should have idx_posts_user_id index")
}
func TestReader_ReadTable_Comments(t *testing.T) {
dbPath := setupTestDatabase(t)
defer os.Remove(dbPath)
options := &readers.ReaderOptions{
FilePath: dbPath,
}
reader := NewReader(options)
db, err := reader.ReadDatabase()
require.NoError(t, err)
require.NotNil(t, db)
// Find comments table
var commentsTable *models.Table
for _, table := range db.Schemas[0].Tables {
if table.Name == "comments" {
commentsTable = table
break
}
}
require.NotNil(t, commentsTable)
// Check foreign key constraints (should have 2)
fkCount := 0
for _, constraint := range commentsTable.Constraints {
if constraint.Type == models.ForeignKeyConstraint {
fkCount++
}
}
assert.Equal(t, 2, fkCount, "Comments table should have 2 foreign key constraints")
}
func TestReader_ReadSchema(t *testing.T) {
dbPath := setupTestDatabase(t)
defer os.Remove(dbPath)
options := &readers.ReaderOptions{
FilePath: dbPath,
}
reader := NewReader(options)
schema, err := reader.ReadSchema()
require.NoError(t, err)
require.NotNil(t, schema)
assert.Equal(t, "main", schema.Name)
assert.Len(t, schema.Tables, 3)
assert.Len(t, schema.Views, 1)
}
func TestReader_ReadTable(t *testing.T) {
dbPath := setupTestDatabase(t)
defer os.Remove(dbPath)
options := &readers.ReaderOptions{
FilePath: dbPath,
}
reader := NewReader(options)
table, err := reader.ReadTable()
require.NoError(t, err)
require.NotNil(t, table)
assert.NotEmpty(t, table.Name)
assert.NotEmpty(t, table.Columns)
}
func TestReader_ConnectionString(t *testing.T) {
dbPath := setupTestDatabase(t)
defer os.Remove(dbPath)
options := &readers.ReaderOptions{
ConnectionString: dbPath,
}
reader := NewReader(options)
db, err := reader.ReadDatabase()
require.NoError(t, err)
require.NotNil(t, db)
assert.Len(t, db.Schemas, 1)
}
func TestReader_InvalidPath(t *testing.T) {
options := &readers.ReaderOptions{
FilePath: "/nonexistent/path/to/database.db",
}
reader := NewReader(options)
_, err := reader.ReadDatabase()
assert.Error(t, err)
}
func TestReader_MissingPath(t *testing.T) {
options := &readers.ReaderOptions{}
reader := NewReader(options)
_, err := reader.ReadDatabase()
assert.Error(t, err)
assert.Contains(t, err.Error(), "file path or connection string is required")
}

pkg/reflectutil/doc.go

@@ -0,0 +1,36 @@
// Package reflectutil provides reflection utilities for analyzing Go code structures.
//
// # Overview
//
// The reflectutil package offers helper functions for working with Go's reflection
// capabilities, particularly for parsing Go struct definitions and extracting type
// information. This is used by readers that parse ORM model files.
//
// # Features
//
// - Struct tag parsing and extraction
// - Type information analysis
// - Field metadata extraction
// - ORM tag interpretation (GORM, Bun, etc.)
//
// # Usage
//
// This package is primarily used internally by readers like GORM and Bun to parse
// Go struct definitions and convert them to database schema models.
//
// // Example: Parse struct tags
// tags := reflectutil.ParseStructTags(field)
// columnName := tags.Get("db")
//
// # Supported ORM Tags
//
// The package understands tag conventions from:
// - GORM (gorm tag)
// - Bun (bun tag)
// - Standard database/sql (db tag)
//
// # Purpose
//
// This package enables RelSpec to read existing ORM models and convert them to
// a unified schema representation for transformation to other formats.
package reflectutil

pkg/transform/doc.go

@@ -0,0 +1,34 @@
// Package transform provides validation and transformation utilities for database models.
//
// # Overview
//
// The transform package contains a Transformer type that provides methods for validating
// and normalizing database schemas. It ensures schema correctness and consistency across
// different format conversions.
//
// # Features
//
// - Database validation (structure and naming conventions)
// - Schema validation (completeness and integrity)
// - Table validation (column definitions and constraints)
// - Data type normalization
//
// # Usage
//
// transformer := transform.NewTransformer()
// err := transformer.ValidateDatabase(db)
// if err != nil {
// log.Fatal("Invalid database schema:", err)
// }
//
// # Validation Scope
//
// The transformer validates:
// - Required fields presence
// - Naming convention adherence
// - Data type compatibility
// - Constraint consistency
// - Relationship integrity
//
// Note: Some validation methods are currently stubs and will be implemented as needed.
package transform
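The kind of table validation described above can be sketched as follows; the `Table` and `Column` types are stand-ins for the real `models` package, and `validateTable` is illustrative rather than the Transformer's actual method:

```go
package main

import (
	"errors"
	"fmt"
)

// Minimal stand-ins for the real models package.
type Column struct{ Name, Type string }

type Table struct {
	Name    string
	Columns []Column
}

// validateTable checks the invariants the Transformer is documented to
// enforce: required names and complete column definitions.
func validateTable(t Table) error {
	if t.Name == "" {
		return errors.New("table name is required")
	}
	if len(t.Columns) == 0 {
		return fmt.Errorf("table %q has no columns", t.Name)
	}
	for _, c := range t.Columns {
		if c.Name == "" || c.Type == "" {
			return fmt.Errorf("table %q has an incomplete column definition", t.Name)
		}
	}
	return nil
}

func main() {
	ok := Table{Name: "users", Columns: []Column{{"id", "int"}}}
	fmt.Println(validateTable(ok))               // <nil>
	fmt.Println(validateTable(Table{Name: "x"})) // table "x" has no columns
}
```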

pkg/ui/doc.go

@@ -0,0 +1,57 @@
// Package ui provides an interactive terminal user interface (TUI) for editing database schemas.
//
// # Overview
//
// The ui package implements a full-featured terminal-based schema editor using tview,
// allowing users to visually create, modify, and manage database schemas without writing
// code or SQL.
//
// # Features
//
// The schema editor supports:
// - Database management: Edit name, description, and properties
// - Schema management: Create, edit, delete schemas
// - Table management: Create, edit, delete tables
// - Column management: Add, modify, delete columns with full property support
// - Relationship management: Define and edit table relationships
// - Domain management: Organize tables into logical domains
// - Import & merge: Combine schemas from multiple sources
// - Save: Export to any supported format
//
// # Architecture
//
// The package is organized into several components:
// - editor.go: Main editor and application lifecycle
// - *_screens.go: UI screens for each entity type
// - *_dataops.go: Business logic and data operations
// - dialogs.go: Reusable dialog components
// - load_save_screens.go: File I/O and format selection
// - main_menu.go: Primary navigation menu
//
// # Usage
//
// editor := ui.NewSchemaEditor(database)
// if err := editor.Run(); err != nil {
// log.Fatal(err)
// }
//
// Or with pre-configured load/save options:
//
// editor := ui.NewSchemaEditorWithConfigs(database, loadConfig, saveConfig)
// if err := editor.Run(); err != nil {
// log.Fatal(err)
// }
//
// # Navigation
//
// - Arrow keys: Navigate between items
// - Enter: Select/edit item
// - Tab/Shift+Tab: Navigate between buttons
// - Escape: Go back/cancel
// - Letter shortcuts: Quick actions (e.g., 'n' for new, 'e' for edit, 'd' for delete)
//
// # Integration
//
// The editor integrates with all readers and writers, supporting load/save operations
// for any format supported by RelSpec (DBML, PostgreSQL, GORM, Prisma, etc.).
package ui

pkg/ui/relation_dataops.go

@@ -0,0 +1,115 @@
package ui
import "git.warky.dev/wdevs/relspecgo/pkg/models"
// Relationship data operations - business logic for relationship management
// CreateRelationship creates a new relationship and adds it to a table
func (se *SchemaEditor) CreateRelationship(schemaIndex, tableIndex int, rel *models.Relationship) *models.Relationship {
if schemaIndex < 0 || schemaIndex >= len(se.db.Schemas) {
return nil
}
schema := se.db.Schemas[schemaIndex]
if tableIndex < 0 || tableIndex >= len(schema.Tables) {
return nil
}
table := schema.Tables[tableIndex]
if table.Relationships == nil {
table.Relationships = make(map[string]*models.Relationship)
}
table.Relationships[rel.Name] = rel
table.UpdateDate()
return rel
}
// UpdateRelationship updates an existing relationship
func (se *SchemaEditor) UpdateRelationship(schemaIndex, tableIndex int, oldName string, rel *models.Relationship) bool {
if schemaIndex < 0 || schemaIndex >= len(se.db.Schemas) {
return false
}
schema := se.db.Schemas[schemaIndex]
if tableIndex < 0 || tableIndex >= len(schema.Tables) {
return false
}
table := schema.Tables[tableIndex]
if table.Relationships == nil {
return false
}
// Delete old entry if name changed
if oldName != rel.Name {
delete(table.Relationships, oldName)
}
table.Relationships[rel.Name] = rel
table.UpdateDate()
return true
}
// DeleteRelationship removes a relationship from a table
func (se *SchemaEditor) DeleteRelationship(schemaIndex, tableIndex int, relName string) bool {
if schemaIndex < 0 || schemaIndex >= len(se.db.Schemas) {
return false
}
schema := se.db.Schemas[schemaIndex]
if tableIndex < 0 || tableIndex >= len(schema.Tables) {
return false
}
table := schema.Tables[tableIndex]
if table.Relationships == nil {
return false
}
delete(table.Relationships, relName)
table.UpdateDate()
return true
}
// GetRelationship returns a relationship by name
func (se *SchemaEditor) GetRelationship(schemaIndex, tableIndex int, relName string) *models.Relationship {
if schemaIndex < 0 || schemaIndex >= len(se.db.Schemas) {
return nil
}
schema := se.db.Schemas[schemaIndex]
if tableIndex < 0 || tableIndex >= len(schema.Tables) {
return nil
}
table := schema.Tables[tableIndex]
if table.Relationships == nil {
return nil
}
return table.Relationships[relName]
}
// GetRelationshipNames returns all relationship names for a table
func (se *SchemaEditor) GetRelationshipNames(schemaIndex, tableIndex int) []string {
if schemaIndex < 0 || schemaIndex >= len(se.db.Schemas) {
return nil
}
schema := se.db.Schemas[schemaIndex]
if tableIndex < 0 || tableIndex >= len(schema.Tables) {
return nil
}
table := schema.Tables[tableIndex]
if table.Relationships == nil {
return nil
}
names := make([]string, 0, len(table.Relationships))
for name := range table.Relationships {
names = append(names, name)
}
return names
}

pkg/ui/relation_screens.go

@@ -0,0 +1,486 @@
package ui
import (
"fmt"
"strings"
"github.com/gdamore/tcell/v2"
"github.com/rivo/tview"
"git.warky.dev/wdevs/relspecgo/pkg/models"
)
// showRelationshipList displays all relationships for a table
func (se *SchemaEditor) showRelationshipList(schemaIndex, tableIndex int) {
table := se.GetTable(schemaIndex, tableIndex)
if table == nil {
return
}
flex := tview.NewFlex().SetDirection(tview.FlexRow)
// Title
title := tview.NewTextView().
SetText(fmt.Sprintf("[::b]Relationships for Table: %s", table.Name)).
SetDynamicColors(true).
SetTextAlign(tview.AlignCenter)
// Create relationships table
relTable := tview.NewTable().SetBorders(true).SetSelectable(true, false).SetFixed(1, 0)
// Add header row
headers := []string{"Name", "Type", "From Columns", "To Table", "To Columns", "Description"}
headerWidths := []int{20, 15, 20, 20, 20}
for i, header := range headers {
padding := ""
if i < len(headerWidths) && headerWidths[i] > len(header) {
padding = strings.Repeat(" ", headerWidths[i]-len(header))
}
cell := tview.NewTableCell(header + padding).
SetTextColor(tcell.ColorYellow).
SetSelectable(false).
SetAlign(tview.AlignLeft)
relTable.SetCell(0, i, cell)
}
// Get relationship names
relNames := se.GetRelationshipNames(schemaIndex, tableIndex)
for row, relName := range relNames {
rel := table.Relationships[relName]
// Name
nameStr := fmt.Sprintf("%-20s", rel.Name)
nameCell := tview.NewTableCell(nameStr).SetSelectable(true)
relTable.SetCell(row+1, 0, nameCell)
// Type
typeStr := fmt.Sprintf("%-15s", string(rel.Type))
typeCell := tview.NewTableCell(typeStr).SetSelectable(true)
relTable.SetCell(row+1, 1, typeCell)
// From Columns
fromColsStr := strings.Join(rel.FromColumns, ", ")
fromColsStr = fmt.Sprintf("%-20s", fromColsStr)
fromColsCell := tview.NewTableCell(fromColsStr).SetSelectable(true)
relTable.SetCell(row+1, 2, fromColsCell)
// To Table
toTableStr := rel.ToTable
if rel.ToSchema != "" && rel.ToSchema != table.Schema {
toTableStr = rel.ToSchema + "." + rel.ToTable
}
toTableStr = fmt.Sprintf("%-20s", toTableStr)
toTableCell := tview.NewTableCell(toTableStr).SetSelectable(true)
relTable.SetCell(row+1, 3, toTableCell)
// To Columns
toColsStr := strings.Join(rel.ToColumns, ", ")
toColsStr = fmt.Sprintf("%-20s", toColsStr)
toColsCell := tview.NewTableCell(toColsStr).SetSelectable(true)
relTable.SetCell(row+1, 4, toColsCell)
// Description
descCell := tview.NewTableCell(rel.Description).SetSelectable(true)
relTable.SetCell(row+1, 5, descCell)
}
relTable.SetTitle(" Relationships ").SetBorder(true).SetTitleAlign(tview.AlignLeft)
// Action buttons
btnFlex := tview.NewFlex()
btnNew := tview.NewButton("New Relationship [n]").SetSelectedFunc(func() {
se.showNewRelationshipDialog(schemaIndex, tableIndex)
})
btnEdit := tview.NewButton("Edit [e]").SetSelectedFunc(func() {
row, _ := relTable.GetSelection()
if row > 0 && row <= len(relNames) {
relName := relNames[row-1]
se.showEditRelationshipDialog(schemaIndex, tableIndex, relName)
}
})
btnDelete := tview.NewButton("Delete [d]").SetSelectedFunc(func() {
row, _ := relTable.GetSelection()
if row > 0 && row <= len(relNames) {
relName := relNames[row-1]
se.showDeleteRelationshipConfirm(schemaIndex, tableIndex, relName)
}
})
btnBack := tview.NewButton("Back [b]").SetSelectedFunc(func() {
se.pages.RemovePage("relationships")
se.pages.SwitchToPage("table-editor")
})
// Set up button navigation
btnNew.SetInputCapture(func(event *tcell.EventKey) *tcell.EventKey {
if event.Key() == tcell.KeyBacktab {
se.app.SetFocus(relTable)
return nil
}
if event.Key() == tcell.KeyTab {
se.app.SetFocus(btnEdit)
return nil
}
return event
})
btnEdit.SetInputCapture(func(event *tcell.EventKey) *tcell.EventKey {
if event.Key() == tcell.KeyBacktab {
se.app.SetFocus(btnNew)
return nil
}
if event.Key() == tcell.KeyTab {
se.app.SetFocus(btnDelete)
return nil
}
return event
})
btnDelete.SetInputCapture(func(event *tcell.EventKey) *tcell.EventKey {
if event.Key() == tcell.KeyBacktab {
se.app.SetFocus(btnEdit)
return nil
}
if event.Key() == tcell.KeyTab {
se.app.SetFocus(btnBack)
return nil
}
return event
})
btnBack.SetInputCapture(func(event *tcell.EventKey) *tcell.EventKey {
if event.Key() == tcell.KeyBacktab {
se.app.SetFocus(btnDelete)
return nil
}
if event.Key() == tcell.KeyTab {
se.app.SetFocus(relTable)
return nil
}
return event
})
btnFlex.AddItem(btnNew, 0, 1, true).
AddItem(btnEdit, 0, 1, false).
AddItem(btnDelete, 0, 1, false).
AddItem(btnBack, 0, 1, false)
relTable.SetInputCapture(func(event *tcell.EventKey) *tcell.EventKey {
if event.Key() == tcell.KeyEscape {
se.pages.RemovePage("relationships")
se.pages.SwitchToPage("table-editor")
return nil
}
if event.Key() == tcell.KeyTab {
se.app.SetFocus(btnNew)
return nil
}
if event.Key() == tcell.KeyEnter {
row, _ := relTable.GetSelection()
if row > 0 && row <= len(relNames) {
relName := relNames[row-1]
se.showEditRelationshipDialog(schemaIndex, tableIndex, relName)
}
return nil
}
if event.Rune() == 'n' {
se.showNewRelationshipDialog(schemaIndex, tableIndex)
return nil
}
if event.Rune() == 'e' {
row, _ := relTable.GetSelection()
if row > 0 && row <= len(relNames) {
relName := relNames[row-1]
se.showEditRelationshipDialog(schemaIndex, tableIndex, relName)
}
return nil
}
if event.Rune() == 'd' {
row, _ := relTable.GetSelection()
if row > 0 && row <= len(relNames) {
relName := relNames[row-1]
se.showDeleteRelationshipConfirm(schemaIndex, tableIndex, relName)
}
return nil
}
if event.Rune() == 'b' {
se.pages.RemovePage("relationships")
se.pages.SwitchToPage("table-editor")
return nil
}
return event
})
flex.AddItem(title, 1, 0, false).
AddItem(relTable, 0, 1, true).
AddItem(btnFlex, 1, 0, false)
se.pages.AddPage("relationships", flex, true, true)
}
// showNewRelationshipDialog shows dialog to create a new relationship
func (se *SchemaEditor) showNewRelationshipDialog(schemaIndex, tableIndex int) {
table := se.GetTable(schemaIndex, tableIndex)
if table == nil {
return
}
form := tview.NewForm()
// Collect all tables for dropdown
var allTables []string
var tableMap []struct{ schemaIdx, tableIdx int }
for si, schema := range se.db.Schemas {
for ti, t := range schema.Tables {
tableName := t.Name
if schema.Name != table.Schema {
tableName = schema.Name + "." + t.Name
}
allTables = append(allTables, tableName)
tableMap = append(tableMap, struct{ schemaIdx, tableIdx int }{si, ti})
}
}
relName := ""
relType := models.OneToMany
fromColumns := ""
toColumns := ""
description := ""
selectedTableIdx := 0
form.AddInputField("Name", "", 40, nil, func(value string) {
relName = value
})
form.AddDropDown("Type", []string{
string(models.OneToOne),
string(models.OneToMany),
string(models.ManyToMany),
}, 1, func(option string, optionIndex int) {
relType = models.RelationType(option)
})
form.AddInputField("From Columns (comma-separated)", "", 40, nil, func(value string) {
fromColumns = value
})
form.AddDropDown("To Table", allTables, 0, func(option string, optionIndex int) {
selectedTableIdx = optionIndex
})
form.AddInputField("To Columns (comma-separated)", "", 40, nil, func(value string) {
toColumns = value
})
form.AddInputField("Description", "", 60, nil, func(value string) {
description = value
})
form.AddButton("Save", func() {
if relName == "" {
return
}
// Parse columns
fromCols := strings.Split(fromColumns, ",")
for i := range fromCols {
fromCols[i] = strings.TrimSpace(fromCols[i])
}
toCols := strings.Split(toColumns, ",")
for i := range toCols {
toCols[i] = strings.TrimSpace(toCols[i])
}
// Get target table
targetSchema := se.db.Schemas[tableMap[selectedTableIdx].schemaIdx]
targetTable := targetSchema.Tables[tableMap[selectedTableIdx].tableIdx]
rel := models.InitRelationship(relName, relType)
rel.FromTable = table.Name
rel.FromSchema = table.Schema
rel.FromColumns = fromCols
rel.ToTable = targetTable.Name
rel.ToSchema = targetTable.Schema
rel.ToColumns = toCols
rel.Description = description
se.CreateRelationship(schemaIndex, tableIndex, rel)
se.pages.RemovePage("new-relationship")
se.pages.RemovePage("relationships")
se.showRelationshipList(schemaIndex, tableIndex)
})
form.AddButton("Back", func() {
se.pages.RemovePage("new-relationship")
})
form.SetBorder(true).SetTitle(" New Relationship ").SetTitleAlign(tview.AlignLeft)
form.SetInputCapture(func(event *tcell.EventKey) *tcell.EventKey {
if event.Key() == tcell.KeyEscape {
se.pages.RemovePage("new-relationship")
return nil
}
return event
})
se.pages.AddPage("new-relationship", form, true, true)
}
// showEditRelationshipDialog shows dialog to edit a relationship
func (se *SchemaEditor) showEditRelationshipDialog(schemaIndex, tableIndex int, relName string) {
table := se.GetTable(schemaIndex, tableIndex)
if table == nil {
return
}
rel := se.GetRelationship(schemaIndex, tableIndex, relName)
if rel == nil {
return
}
form := tview.NewForm()
// Collect all tables for dropdown
var allTables []string
var tableMap []struct{ schemaIdx, tableIdx int }
selectedTableIdx := 0
for si, schema := range se.db.Schemas {
for ti, t := range schema.Tables {
tableName := t.Name
if schema.Name != table.Schema {
tableName = schema.Name + "." + t.Name
}
allTables = append(allTables, tableName)
tableMap = append(tableMap, struct{ schemaIdx, tableIdx int }{si, ti})
// Check if this is the current target table
if t.Name == rel.ToTable && schema.Name == rel.ToSchema {
selectedTableIdx = len(allTables) - 1
}
}
}
newName := rel.Name
relType := rel.Type
fromColumns := strings.Join(rel.FromColumns, ", ")
toColumns := strings.Join(rel.ToColumns, ", ")
description := rel.Description
form.AddInputField("Name", rel.Name, 40, nil, func(value string) {
newName = value
})
// Find initial type index
typeIdx := 1 // OneToMany default
typeOptions := []string{
string(models.OneToOne),
string(models.OneToMany),
string(models.ManyToMany),
}
for i, opt := range typeOptions {
if opt == string(rel.Type) {
typeIdx = i
break
}
}
form.AddDropDown("Type", typeOptions, typeIdx, func(option string, optionIndex int) {
relType = models.RelationType(option)
})
form.AddInputField("From Columns (comma-separated)", fromColumns, 40, nil, func(value string) {
fromColumns = value
})
form.AddDropDown("To Table", allTables, selectedTableIdx, func(option string, optionIndex int) {
selectedTableIdx = optionIndex
})
form.AddInputField("To Columns (comma-separated)", toColumns, 40, nil, func(value string) {
toColumns = value
})
form.AddInputField("Description", rel.Description, 60, nil, func(value string) {
description = value
})
form.AddButton("Save", func() {
if newName == "" {
return
}
// Parse columns
fromCols := strings.Split(fromColumns, ",")
for i := range fromCols {
fromCols[i] = strings.TrimSpace(fromCols[i])
}
toCols := strings.Split(toColumns, ",")
for i := range toCols {
toCols[i] = strings.TrimSpace(toCols[i])
}
// Get target table
targetSchema := se.db.Schemas[tableMap[selectedTableIdx].schemaIdx]
targetTable := targetSchema.Tables[tableMap[selectedTableIdx].tableIdx]
updatedRel := models.InitRelationship(newName, relType)
updatedRel.FromTable = table.Name
updatedRel.FromSchema = table.Schema
updatedRel.FromColumns = fromCols
updatedRel.ToTable = targetTable.Name
updatedRel.ToSchema = targetTable.Schema
updatedRel.ToColumns = toCols
updatedRel.Description = description
updatedRel.GUID = rel.GUID
se.UpdateRelationship(schemaIndex, tableIndex, relName, updatedRel)
se.pages.RemovePage("edit-relationship")
se.pages.RemovePage("relationships")
se.showRelationshipList(schemaIndex, tableIndex)
})
form.AddButton("Back", func() {
se.pages.RemovePage("edit-relationship")
})
form.SetBorder(true).SetTitle(" Edit Relationship ").SetTitleAlign(tview.AlignLeft)
form.SetInputCapture(func(event *tcell.EventKey) *tcell.EventKey {
if event.Key() == tcell.KeyEscape {
se.pages.RemovePage("edit-relationship")
return nil
}
return event
})
se.pages.AddPage("edit-relationship", form, true, true)
}
// showDeleteRelationshipConfirm shows confirmation dialog for deleting a relationship
func (se *SchemaEditor) showDeleteRelationshipConfirm(schemaIndex, tableIndex int, relName string) {
modal := tview.NewModal().
SetText(fmt.Sprintf("Delete relationship '%s'? This action cannot be undone.", relName)).
AddButtons([]string{"Cancel", "Delete"}).
SetDoneFunc(func(buttonIndex int, buttonLabel string) {
if buttonLabel == "Delete" {
se.DeleteRelationship(schemaIndex, tableIndex, relName)
se.pages.RemovePage("delete-relationship-confirm")
se.pages.RemovePage("relationships")
se.showRelationshipList(schemaIndex, tableIndex)
} else {
se.pages.RemovePage("delete-relationship-confirm")
}
})
modal.SetInputCapture(func(event *tcell.EventKey) *tcell.EventKey {
if event.Key() == tcell.KeyEscape {
se.pages.RemovePage("delete-relationship-confirm")
return nil
}
return event
})
se.pages.AddAndSwitchToPage("delete-relationship-confirm", modal, true)
}


@@ -270,6 +270,9 @@ func (se *SchemaEditor) showTableEditor(schemaIndex, tableIndex int, table *mode
se.showColumnEditor(schemaIndex, tableIndex, row-1, column)
}
})
btnRelations := tview.NewButton("Relations [r]").SetSelectedFunc(func() {
se.showRelationshipList(schemaIndex, tableIndex)
})
btnDelTable := tview.NewButton("Delete Table [d]").SetSelectedFunc(func() {
se.showDeleteTableConfirm(schemaIndex, tableIndex)
})
@@ -308,6 +311,18 @@ func (se *SchemaEditor) showTableEditor(schemaIndex, tableIndex int, table *mode
se.app.SetFocus(btnEditColumn)
return nil
}
if event.Key() == tcell.KeyTab {
se.app.SetFocus(btnRelations)
return nil
}
return event
})
btnRelations.SetInputCapture(func(event *tcell.EventKey) *tcell.EventKey {
if event.Key() == tcell.KeyBacktab {
se.app.SetFocus(btnEditTable)
return nil
}
if event.Key() == tcell.KeyTab {
se.app.SetFocus(btnDelTable)
return nil
@@ -317,7 +332,7 @@ func (se *SchemaEditor) showTableEditor(schemaIndex, tableIndex int, table *mode
btnDelTable.SetInputCapture(func(event *tcell.EventKey) *tcell.EventKey {
if event.Key() == tcell.KeyBacktab {
se.app.SetFocus(btnRelations)
return nil
}
if event.Key() == tcell.KeyTab {
@@ -342,6 +357,7 @@ func (se *SchemaEditor) showTableEditor(schemaIndex, tableIndex int, table *mode
btnFlex.AddItem(btnNewCol, 0, 1, true).
AddItem(btnEditColumn, 0, 1, false).
AddItem(btnEditTable, 0, 1, false).
AddItem(btnRelations, 0, 1, false).
AddItem(btnDelTable, 0, 1, false).
AddItem(btnBack, 0, 1, false)
@@ -373,6 +389,10 @@ func (se *SchemaEditor) showTableEditor(schemaIndex, tableIndex int, table *mode
}
return nil
}
if event.Rune() == 'r' {
se.showRelationshipList(schemaIndex, tableIndex)
return nil
}
if event.Rune() == 'b' {
se.pages.RemovePage("table-editor")
se.pages.SwitchToPage("schema-editor")

pkg/writers/doc.go

@@ -0,0 +1,67 @@
// Package writers provides interfaces and implementations for writing database schemas
// to various output formats and destinations.
//
// # Overview
//
// The writers package defines a common Writer interface that all format-specific writers
// implement. This allows RelSpec to export database schemas to multiple formats including:
// - SQL schema files (PostgreSQL, SQLite)
// - Schema definition files (DBML, DCTX, DrawDB, GraphQL)
// - ORM model files (GORM, Bun, Drizzle, Prisma, TypeORM)
// - Data interchange formats (JSON, YAML)
//
// # Architecture
//
// Each writer implementation is located in its own subpackage (e.g., pkg/writers/dbml,
// pkg/writers/pgsql) and implements the Writer interface, supporting three levels of
// granularity:
// - WriteDatabase() - Write complete database with all schemas
// - WriteSchema() - Write single schema with all tables
// - WriteTable() - Write single table with all columns and metadata
//
// # Usage
//
// Writers are instantiated with WriterOptions containing destination-specific configuration:
//
// // Write to file
// writer := dbml.NewWriter(&writers.WriterOptions{
// OutputPath: "schema.dbml",
// })
// err := writer.WriteDatabase(db)
//
// // Write ORM models with package name
// writer := gorm.NewWriter(&writers.WriterOptions{
// OutputPath: "./models",
// PackageName: "models",
// })
// err := writer.WriteDatabase(db)
//
// // Write with schema flattening for SQLite
// writer := sqlite.NewWriter(&writers.WriterOptions{
// OutputPath: "schema.sql",
// FlattenSchema: true,
// })
// err := writer.WriteDatabase(db)
//
// # Schema Flattening
//
// The FlattenSchema option controls how schema-qualified table names are handled:
// - false (default): Uses dot notation (schema.table)
// - true: Joins with underscore (schema_table), useful for SQLite
//
// # Supported Formats
//
// - dbml: Database Markup Language files
// - dctx: DCTX schema files
// - drawdb: DrawDB JSON format
// - graphql: GraphQL schema definition language
// - json: JSON database schema
// - yaml: YAML database schema
// - gorm: Go GORM model structs
// - bun: Go Bun model structs
// - drizzle: TypeScript Drizzle ORM schemas
// - prisma: Prisma schema language
// - typeorm: TypeScript TypeORM entities
// - pgsql: PostgreSQL SQL schema
// - sqlite: SQLite SQL schema with automatic flattening
package writers


@@ -0,0 +1,215 @@
# SQLite Writer
SQLite DDL (Data Definition Language) writer for RelSpec. Converts database schemas to SQLite-compatible SQL statements.
## Features
- **Automatic Schema Flattening** - SQLite doesn't support PostgreSQL-style schemas, so table names are automatically flattened (e.g., `public.users` → `public_users`)
- **Type Mapping** - Converts PostgreSQL data types to SQLite type affinities (TEXT, INTEGER, REAL, NUMERIC, BLOB)
- **Auto-Increment Detection** - Automatically converts SERIAL types and auto-increment columns to `INTEGER PRIMARY KEY AUTOINCREMENT`
- **Function Translation** - Converts PostgreSQL functions to SQLite equivalents (e.g., `now()` → `CURRENT_TIMESTAMP`)
- **Boolean Handling** - Maps boolean values to INTEGER (true=1, false=0)
- **Constraint Generation** - Creates indexes, unique constraints, and documents foreign keys
- **Identifier Quoting** - Properly quotes identifiers using double quotes
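The flattening rule from the first bullet is small enough to sketch. This is a standalone illustration, not the writer's actual API; `qualifiedTableName` is a hypothetical stand-in for the internal helper:

```go
package main

import "fmt"

// qualifiedTableName mirrors the flattening rule: with flattening on,
// schema and table are joined with an underscore; otherwise dot notation
// is used. An empty schema yields the bare table name.
// (Hypothetical stand-in for the writer's internal helper.)
func qualifiedTableName(schema, table string, flatten bool) string {
	if schema == "" {
		return table
	}
	if flatten {
		return schema + "_" + table
	}
	return schema + "." + table
}

func main() {
	fmt.Println(qualifiedTableName("public", "users", true))  // public_users
	fmt.Println(qualifiedTableName("public", "users", false)) // public.users
}
```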
## Usage
### Convert PostgreSQL to SQLite
```bash
relspec convert --from pgsql --from-conn "postgres://user:pass@localhost/mydb" \
--to sqlite --to-path schema.sql
```
### Convert DBML to SQLite
```bash
relspec convert --from dbml --from-path schema.dbml \
--to sqlite --to-path schema.sql
```
### Multi-Schema Databases
SQLite doesn't support schemas, so multi-schema databases are automatically flattened:
```bash
# Input has auth.users and public.posts
# Output will have auth_users and public_posts
relspec convert --from json --from-path multi_schema.json \
--to sqlite --to-path flattened.sql
```
## Type Mapping
| PostgreSQL Type | SQLite Affinity | Examples |
|----------------|-----------------|----------|
| TEXT | TEXT | varchar, text, char, citext, uuid, timestamp, json |
| INTEGER | INTEGER | int, integer, smallint, bigint, serial, boolean |
| REAL | REAL | real, float, double precision |
| NUMERIC | NUMERIC | numeric, decimal |
| BLOB | BLOB | bytea, blob |
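The table above boils down to a normalize-then-bucket switch. A condensed sketch (the writer's full implementation also covers arrays, precision suffixes, and more type aliases):

```go
package main

import (
	"fmt"
	"strings"
)

// mapAffinity sketches the mapping: lower-case the type, strip array
// notation and precision, then bucket by type family. Unknown types
// default to TEXT, matching SQLite's catch-all affinity.
func mapAffinity(pgType string) string {
	t := strings.ToLower(strings.TrimSpace(pgType))
	t = strings.TrimSuffix(t, "[]")
	if i := strings.Index(t, "("); i != -1 {
		t = t[:i]
	}
	switch t {
	case "int", "integer", "smallint", "bigint", "serial", "bigserial", "boolean", "bool":
		return "INTEGER"
	case "real", "float", "double precision":
		return "REAL"
	case "numeric", "decimal":
		return "NUMERIC"
	case "bytea", "blob":
		return "BLOB"
	default:
		return "TEXT" // varchar, text, uuid, timestamp, json, and unknowns
	}
}

func main() {
	fmt.Println(mapAffinity("varchar(255)")) // TEXT
	fmt.Println(mapAffinity("bigint[]"))     // INTEGER
}
```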
## Auto-Increment Handling
Columns are converted to `INTEGER PRIMARY KEY AUTOINCREMENT` when they meet all of the following:
- Marked as primary key
- Integer type
- At least one of: the `AutoIncrement` flag is set, the type contains "serial", or the default value contains "nextval"
**Example:**
```sql
-- Input (PostgreSQL)
CREATE TABLE users (
id SERIAL PRIMARY KEY,
name VARCHAR(100)
);
-- Output (SQLite)
CREATE TABLE "users" (
"id" INTEGER PRIMARY KEY AUTOINCREMENT,
"name" TEXT
);
```
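The detection rule can be expressed as a single predicate. This sketch takes plain values rather than the writer's column model, so the signature is illustrative only:

```go
package main

import (
	"fmt"
	"strings"
)

// isAutoIncrement sketches the rule: primary key AND integer-family type
// AND (explicit flag OR "serial" type OR "nextval" default).
func isAutoIncrement(isPK, flag bool, colType, defaultVal string) bool {
	t := strings.ToLower(colType)
	integerFamily := strings.Contains(t, "int") || strings.Contains(t, "serial")
	if !isPK || !integerFamily {
		return false
	}
	return flag ||
		strings.Contains(t, "serial") ||
		strings.Contains(strings.ToLower(defaultVal), "nextval")
}

func main() {
	fmt.Println(isAutoIncrement(true, false, "serial", ""))               // true
	fmt.Println(isAutoIncrement(true, false, "integer", "nextval('s')")) // true
	fmt.Println(isAutoIncrement(true, false, "varchar", ""))             // false
}
```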
## Default Value Translation
| PostgreSQL | SQLite | Notes |
|-----------|--------|-------|
| `now()`, `CURRENT_TIMESTAMP` | `CURRENT_TIMESTAMP` | Timestamp functions |
| `CURRENT_DATE` | `CURRENT_DATE` | Date function |
| `CURRENT_TIME` | `CURRENT_TIME` | Time function |
| `true`, `false` | `1`, `0` | Boolean values |
| `gen_random_uuid()` | *(removed)* | SQLite has no built-in UUID |
| `nextval(...)` | *(removed)* | Handled by AUTOINCREMENT |
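The translation table reduces to a few string checks. A minimal sketch, assuming the caller already knows whether the column is boolean (the real writer derives this from the column's PostgreSQL type):

```go
package main

import (
	"fmt"
	"strings"
)

// translateDefault sketches the table: timestamp functions map to
// CURRENT_TIMESTAMP, UUID/sequence defaults are dropped, booleans map
// to 0/1, and anything else passes through unchanged.
func translateDefault(def string, isBool bool) string {
	d := strings.ToLower(strings.TrimSpace(def))
	switch {
	case strings.Contains(d, "now()") || strings.Contains(d, "current_timestamp"):
		return "CURRENT_TIMESTAMP"
	case strings.Contains(d, "gen_random_uuid") || strings.Contains(d, "nextval"):
		return "" // no SQLite equivalent; dropped or handled by AUTOINCREMENT
	}
	if isBool {
		if d == "true" {
			return "1"
		}
		if d == "false" {
			return "0"
		}
	}
	return def
}

func main() {
	fmt.Println(translateDefault("now()", false)) // CURRENT_TIMESTAMP
	fmt.Println(translateDefault("true", true))   // 1
}
```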
## Foreign Keys
Foreign keys are generated as commented-out ALTER TABLE statements for reference:
```sql
-- Foreign key: fk_posts_user_id
-- ALTER TABLE "posts" ADD CONSTRAINT "posts_fk_posts_user_id"
-- FOREIGN KEY ("user_id")
-- REFERENCES "users" ("id");
-- Note: Foreign keys should be defined in CREATE TABLE for better SQLite compatibility
```
For production use, define foreign keys directly in the CREATE TABLE statement; SQLite's ALTER TABLE cannot add constraints, so the commented statements are for reference only.
## Constraints
- **Primary Keys**: Inline for auto-increment columns, separate constraint for composite keys
- **Unique Constraints**: Converted to `CREATE UNIQUE INDEX` statements
- **Check Constraints**: Generated as comments (should be added to CREATE TABLE manually)
- **Indexes**: Generated without PostgreSQL-specific features (no GIN, GiST, operator classes)
## Output Structure
Generated SQL follows this order:
1. Header comments
2. `PRAGMA foreign_keys = ON;`
3. CREATE TABLE statements (sorted by schema, then table)
4. CREATE INDEX statements
5. CREATE UNIQUE INDEX statements (for unique constraints)
6. Check constraint comments
7. Foreign key comments
## Example
**Input (multi-schema PostgreSQL):**
```sql
CREATE SCHEMA auth;
CREATE TABLE auth.users (
id SERIAL PRIMARY KEY,
username VARCHAR(50) UNIQUE NOT NULL,
created_at TIMESTAMP DEFAULT now()
);
CREATE SCHEMA public;
CREATE TABLE public.posts (
id SERIAL PRIMARY KEY,
user_id INTEGER REFERENCES auth.users(id),
title VARCHAR(200) NOT NULL,
published BOOLEAN DEFAULT false
);
```
**Output (SQLite with flattened schemas):**
```sql
-- SQLite Database Schema
-- Database: mydb
-- Generated by RelSpec
-- Note: Schema names have been flattened (e.g., public.users -> public_users)
-- Enable foreign key constraints
PRAGMA foreign_keys = ON;
-- Schema: auth (flattened into table names)
CREATE TABLE "auth_users" (
"id" INTEGER PRIMARY KEY AUTOINCREMENT,
"username" TEXT NOT NULL,
"created_at" TEXT DEFAULT CURRENT_TIMESTAMP
);
CREATE UNIQUE INDEX "auth_users_users_username_key" ON "auth_users" ("username");
-- Schema: public (flattened into table names)
CREATE TABLE "public_posts" (
"id" INTEGER PRIMARY KEY AUTOINCREMENT,
"user_id" INTEGER NOT NULL,
"title" TEXT NOT NULL,
"published" INTEGER DEFAULT 0
);
-- Foreign key: posts_user_id_fkey
-- ALTER TABLE "public_posts" ADD CONSTRAINT "public_posts_posts_user_id_fkey"
-- FOREIGN KEY ("user_id")
-- REFERENCES "auth_users" ("id");
-- Note: Foreign keys should be defined in CREATE TABLE for better SQLite compatibility
```
## Programmatic Usage
```go
import (
"git.warky.dev/wdevs/relspecgo/pkg/models"
"git.warky.dev/wdevs/relspecgo/pkg/writers"
"git.warky.dev/wdevs/relspecgo/pkg/writers/sqlite"
)
func main() {
// Create writer (automatically enables schema flattening)
writer := sqlite.NewWriter(&writers.WriterOptions{
OutputPath: "schema.sql",
})
// Write database schema
db := &models.Database{
Name: "mydb",
Schemas: []*models.Schema{
// ... your schema data
},
}
err := writer.WriteDatabase(db)
if err != nil {
panic(err)
}
}
```
## Notes
- Schema flattening is **always enabled** for SQLite output (cannot be disabled)
- Constraint and index names are prefixed with the flattened table name to avoid collisions
- Generated SQL is compatible with SQLite 3.x
- Foreign key constraints require `PRAGMA foreign_keys = ON;` to be enforced
- For complex schemas, review and test the generated SQL before use in production


@@ -0,0 +1,89 @@
package sqlite
import (
"strings"
)
// SQLite type affinities
const (
TypeText = "TEXT"
TypeInteger = "INTEGER"
TypeReal = "REAL"
TypeNumeric = "NUMERIC"
TypeBlob = "BLOB"
)
// MapPostgreSQLType maps PostgreSQL data types to SQLite type affinities
func MapPostgreSQLType(pgType string) string {
// Normalize the type
normalized := strings.ToLower(strings.TrimSpace(pgType))
// Remove array notation if present
normalized = strings.TrimSuffix(normalized, "[]")
// Remove precision/scale if present
if idx := strings.Index(normalized, "("); idx != -1 {
normalized = normalized[:idx]
}
// Map to SQLite type affinity
switch normalized {
// TEXT affinity
case "varchar", "character varying", "text", "char", "character",
"citext", "uuid", "timestamp", "timestamptz", "timestamp with time zone",
"timestamp without time zone", "date", "time", "timetz", "time with time zone",
"time without time zone", "json", "jsonb", "xml", "inet", "cidr", "macaddr":
return TypeText
// INTEGER affinity
case "int", "int2", "int4", "int8", "integer", "smallint", "bigint",
"serial", "smallserial", "bigserial", "boolean", "bool":
return TypeInteger
// REAL affinity
case "real", "float", "float4", "float8", "double precision":
return TypeReal
// NUMERIC affinity
case "numeric", "decimal", "money":
return TypeNumeric
// BLOB affinity
case "bytea", "blob":
return TypeBlob
default:
// Default to TEXT for unknown types
return TypeText
}
}
// IsIntegerType checks if a column type should be treated as integer
func IsIntegerType(colType string) bool {
normalized := strings.ToLower(strings.TrimSpace(colType))
normalized = strings.TrimSuffix(normalized, "[]")
if idx := strings.Index(normalized, "("); idx != -1 {
normalized = normalized[:idx]
}
switch normalized {
case "int", "int2", "int4", "int8", "integer", "smallint", "bigint",
"serial", "smallserial", "bigserial":
return true
default:
return false
}
}
// MapBooleanValue converts PostgreSQL boolean literals to SQLite (0/1)
func MapBooleanValue(value string) string {
normalized := strings.ToLower(strings.TrimSpace(value))
switch normalized {
case "true", "t", "yes", "y", "1":
return "1"
case "false", "f", "no", "n", "0":
return "0"
default:
return value
}
}


@@ -0,0 +1,146 @@
package sqlite
import (
"fmt"
"strings"
"text/template"
"git.warky.dev/wdevs/relspecgo/pkg/models"
"git.warky.dev/wdevs/relspecgo/pkg/writers"
)
// GetTemplateFuncs returns template functions for SQLite SQL generation
func GetTemplateFuncs(opts *writers.WriterOptions) template.FuncMap {
return template.FuncMap{
"quote_ident": QuoteIdentifier,
"map_type": MapPostgreSQLType,
"is_autoincrement": IsAutoIncrementCandidate,
"qualified_table_name": func(schema, table string) string {
return writers.QualifiedTableName(schema, table, opts.FlattenSchema)
},
"format_default": FormatDefault,
"format_constraint_name": func(schema, table, constraint string) string {
return FormatConstraintName(schema, table, constraint, opts)
},
"join": strings.Join,
"lower": strings.ToLower,
"upper": strings.ToUpper,
}
}
// QuoteIdentifier quotes an identifier for SQLite (double quotes)
func QuoteIdentifier(name string) string {
// SQLite uses double quotes for identifiers
// Escape any existing double quotes by doubling them
escaped := strings.ReplaceAll(name, `"`, `""`)
return fmt.Sprintf(`"%s"`, escaped)
}
// IsAutoIncrementCandidate checks if a column should use AUTOINCREMENT
func IsAutoIncrementCandidate(col *models.Column) bool {
// Must be a primary key
if !col.IsPrimaryKey {
return false
}
// Must be an integer type
if !IsIntegerType(col.Type) {
return false
}
// Check AutoIncrement field
if col.AutoIncrement {
return true
}
// Check if default suggests auto-increment
if col.Default != nil {
defaultStr, ok := col.Default.(string)
if ok {
defaultLower := strings.ToLower(defaultStr)
if strings.Contains(defaultLower, "nextval") ||
strings.Contains(defaultLower, "autoincrement") ||
strings.Contains(defaultLower, "auto_increment") {
return true
}
}
}
// Serial types are auto-increment
typeLower := strings.ToLower(col.Type)
return strings.Contains(typeLower, "serial")
}
// FormatDefault formats a default value for SQLite
func FormatDefault(col *models.Column) string {
if col.Default == nil {
return ""
}
// Skip auto-increment defaults (handled by AUTOINCREMENT keyword)
if IsAutoIncrementCandidate(col) {
return ""
}
// Convert to string
defaultStr, ok := col.Default.(string)
if !ok {
// If not a string, convert to string representation
defaultStr = fmt.Sprintf("%v", col.Default)
}
if defaultStr == "" {
return ""
}
// Convert PostgreSQL-specific functions to SQLite equivalents
defaultLower := strings.ToLower(defaultStr)
// Current timestamp functions
if strings.Contains(defaultLower, "current_timestamp") ||
strings.Contains(defaultLower, "now()") {
return "CURRENT_TIMESTAMP"
}
// Current date
if strings.Contains(defaultLower, "current_date") {
return "CURRENT_DATE"
}
// Current time
if strings.Contains(defaultLower, "current_time") {
return "CURRENT_TIME"
}
// Boolean values
sqliteType := MapPostgreSQLType(col.Type)
if sqliteType == TypeInteger {
typeLower := strings.ToLower(col.Type)
if strings.Contains(typeLower, "bool") {
return MapBooleanValue(defaultStr)
}
}
// UUID generation - SQLite doesn't have built-in UUID, comment it out
if strings.Contains(defaultLower, "uuid") || strings.Contains(defaultLower, "gen_random_uuid") {
return "" // Remove UUID defaults, users must handle this
}
// Remove PostgreSQL-specific casting
defaultStr = strings.ReplaceAll(defaultStr, "::text", "")
defaultStr = strings.ReplaceAll(defaultStr, "::integer", "")
defaultStr = strings.ReplaceAll(defaultStr, "::bigint", "")
defaultStr = strings.ReplaceAll(defaultStr, "::boolean", "")
return defaultStr
}
// FormatConstraintName formats a constraint name with table prefix if flattening
func FormatConstraintName(schema, table, constraint string, opts *writers.WriterOptions) string {
if opts.FlattenSchema && schema != "" {
// Prefix constraint with flattened table name
flatTable := writers.QualifiedTableName(schema, table, opts.FlattenSchema)
return fmt.Sprintf("%s_%s", flatTable, constraint)
}
return constraint
}


@@ -0,0 +1,174 @@
package sqlite
import (
"bytes"
"embed"
"fmt"
"text/template"
"git.warky.dev/wdevs/relspecgo/pkg/models"
"git.warky.dev/wdevs/relspecgo/pkg/writers"
)
//go:embed templates/*.tmpl
var templateFS embed.FS
// TemplateExecutor manages and executes SQLite SQL templates
type TemplateExecutor struct {
templates *template.Template
options *writers.WriterOptions
}
// NewTemplateExecutor creates a new template executor for SQLite
func NewTemplateExecutor(opts *writers.WriterOptions) (*TemplateExecutor, error) {
// Create template with SQLite-specific functions
funcMap := GetTemplateFuncs(opts)
tmpl, err := template.New("").Funcs(funcMap).ParseFS(templateFS, "templates/*.tmpl")
if err != nil {
return nil, fmt.Errorf("failed to parse templates: %w", err)
}
return &TemplateExecutor{
templates: tmpl,
options: opts,
}, nil
}
// Template data structures
// TableTemplateData contains data for table template
type TableTemplateData struct {
Schema string
Name string
Columns []*models.Column
PrimaryKey *models.Constraint
}
// IndexTemplateData contains data for index template
type IndexTemplateData struct {
Schema string
Table string
Name string
Columns []string
}
// ConstraintTemplateData contains data for constraint templates
type ConstraintTemplateData struct {
Schema string
Table string
Name string
Columns []string
Expression string
ForeignSchema string
ForeignTable string
ForeignColumns []string
OnDelete string
OnUpdate string
}
// Execute methods
// ExecutePragmaForeignKeys executes the pragma foreign keys template
func (te *TemplateExecutor) ExecutePragmaForeignKeys() (string, error) {
var buf bytes.Buffer
err := te.templates.ExecuteTemplate(&buf, "pragma_foreign_keys.tmpl", nil)
if err != nil {
return "", fmt.Errorf("failed to execute pragma_foreign_keys template: %w", err)
}
return buf.String(), nil
}
// ExecuteCreateTable executes the create table template
func (te *TemplateExecutor) ExecuteCreateTable(data TableTemplateData) (string, error) {
var buf bytes.Buffer
err := te.templates.ExecuteTemplate(&buf, "create_table.tmpl", data)
if err != nil {
return "", fmt.Errorf("failed to execute create_table template: %w", err)
}
return buf.String(), nil
}
// ExecuteCreateIndex executes the create index template
func (te *TemplateExecutor) ExecuteCreateIndex(data IndexTemplateData) (string, error) {
var buf bytes.Buffer
err := te.templates.ExecuteTemplate(&buf, "create_index.tmpl", data)
if err != nil {
return "", fmt.Errorf("failed to execute create_index template: %w", err)
}
return buf.String(), nil
}
// ExecuteCreateUniqueConstraint executes the create unique constraint template
func (te *TemplateExecutor) ExecuteCreateUniqueConstraint(data ConstraintTemplateData) (string, error) {
var buf bytes.Buffer
err := te.templates.ExecuteTemplate(&buf, "create_unique_constraint.tmpl", data)
if err != nil {
return "", fmt.Errorf("failed to execute create_unique_constraint template: %w", err)
}
return buf.String(), nil
}
// ExecuteCreateCheckConstraint executes the create check constraint template
func (te *TemplateExecutor) ExecuteCreateCheckConstraint(data ConstraintTemplateData) (string, error) {
var buf bytes.Buffer
err := te.templates.ExecuteTemplate(&buf, "create_check_constraint.tmpl", data)
if err != nil {
return "", fmt.Errorf("failed to execute create_check_constraint template: %w", err)
}
return buf.String(), nil
}
// ExecuteCreateForeignKey executes the create foreign key template
func (te *TemplateExecutor) ExecuteCreateForeignKey(data ConstraintTemplateData) (string, error) {
var buf bytes.Buffer
err := te.templates.ExecuteTemplate(&buf, "create_foreign_key.tmpl", data)
if err != nil {
return "", fmt.Errorf("failed to execute create_foreign_key template: %w", err)
}
return buf.String(), nil
}
// Helper functions to build template data from models
// BuildTableTemplateData builds TableTemplateData from a models.Table
func BuildTableTemplateData(schema string, table *models.Table) TableTemplateData {
// Get sorted columns
columns := make([]*models.Column, 0, len(table.Columns))
for _, col := range table.Columns {
columns = append(columns, col)
}
// Find primary key constraint
var pk *models.Constraint
for _, constraint := range table.Constraints {
if constraint.Type == models.PrimaryKeyConstraint {
pk = constraint
break
}
}
// If no explicit primary key constraint, build one from columns with IsPrimaryKey=true
if pk == nil {
pkCols := []string{}
for _, col := range table.Columns {
if col.IsPrimaryKey {
pkCols = append(pkCols, col.Name)
}
}
if len(pkCols) > 0 {
pk = &models.Constraint{
Name: "pk_" + table.Name,
Type: models.PrimaryKeyConstraint,
Columns: pkCols,
}
}
}
return TableTemplateData{
Schema: schema,
Name: table.Name,
Columns: columns,
PrimaryKey: pk,
}
}


@@ -0,0 +1,4 @@
-- Check constraint: {{.Name}}
-- {{.Expression}}
-- Note: SQLite supports CHECK constraints only in CREATE TABLE (ALTER TABLE cannot add constraints)
-- This must be added manually to the table definition above


@@ -0,0 +1,6 @@
-- Foreign key: {{.Name}}
-- ALTER TABLE {{quote_ident (qualified_table_name .Schema .Table)}} ADD CONSTRAINT {{quote_ident (format_constraint_name .Schema .Table .Name)}}
-- FOREIGN KEY ({{range $i, $col := .Columns}}{{if $i}}, {{end}}{{quote_ident $col}}{{end}})
-- REFERENCES {{quote_ident (qualified_table_name .ForeignSchema .ForeignTable)}} ({{range $i, $col := .ForeignColumns}}{{if $i}}, {{end}}{{quote_ident $col}}{{end}})
-- {{if .OnDelete}}ON DELETE {{.OnDelete}}{{end}}{{if .OnUpdate}} ON UPDATE {{.OnUpdate}}{{end}};
-- Note: Foreign keys should be defined in CREATE TABLE for better SQLite compatibility


@@ -0,0 +1 @@
CREATE INDEX {{quote_ident (format_constraint_name .Schema .Table .Name)}} ON {{quote_ident (qualified_table_name .Schema .Table)}} ({{range $i, $col := .Columns}}{{if $i}}, {{end}}{{quote_ident $col}}{{end}});


@@ -0,0 +1,9 @@
CREATE TABLE {{quote_ident (qualified_table_name .Schema .Name)}} (
{{- $hasAutoIncrement := false}}
{{- range $i, $col := .Columns}}{{if $i}},{{end}}
{{quote_ident $col.Name}} {{map_type $col.Type}}{{if is_autoincrement $col}}{{$hasAutoIncrement = true}} PRIMARY KEY AUTOINCREMENT{{else}}{{if $col.NotNull}} NOT NULL{{end}}{{if ne (format_default $col) ""}} DEFAULT {{format_default $col}}{{end}}{{end}}
{{- end}}
{{- if and .PrimaryKey (not $hasAutoIncrement)}}{{if gt (len .Columns) 0}},{{end}}
PRIMARY KEY ({{range $i, $colName := .PrimaryKey.Columns}}{{if $i}}, {{end}}{{quote_ident $colName}}{{end}})
{{- end}}
);


@@ -0,0 +1 @@
CREATE UNIQUE INDEX {{quote_ident (format_constraint_name .Schema .Table .Name)}} ON {{quote_ident (qualified_table_name .Schema .Table)}} ({{range $i, $col := .Columns}}{{if $i}}, {{end}}{{quote_ident $col}}{{end}});


@@ -0,0 +1,2 @@
-- Enable foreign key constraints
PRAGMA foreign_keys = ON;


@@ -0,0 +1,291 @@
package sqlite
import (
"fmt"
"io"
"os"
"strings"
"git.warky.dev/wdevs/relspecgo/pkg/models"
"git.warky.dev/wdevs/relspecgo/pkg/writers"
)
// Writer implements the Writer interface for SQLite SQL output
type Writer struct {
options *writers.WriterOptions
writer io.Writer
executor *TemplateExecutor
}
// NewWriter creates a new SQLite SQL writer
// SQLite doesn't support schemas, so FlattenSchema is automatically enabled
func NewWriter(options *writers.WriterOptions) *Writer {
// Force schema flattening for SQLite
options.FlattenSchema = true
executor, _ := NewTemplateExecutor(options)
return &Writer{
options: options,
executor: executor,
}
}
// WriteDatabase writes the entire database schema as SQLite SQL
func (w *Writer) WriteDatabase(db *models.Database) error {
var writer io.Writer
var file *os.File
var err error
// Use existing writer if already set (for testing)
if w.writer != nil {
writer = w.writer
} else if w.options.OutputPath != "" {
// Determine output destination
file, err = os.Create(w.options.OutputPath)
if err != nil {
return fmt.Errorf("failed to create output file: %w", err)
}
defer file.Close()
writer = file
} else {
writer = os.Stdout
}
w.writer = writer
// Write header comment
fmt.Fprintf(w.writer, "-- SQLite Database Schema\n")
fmt.Fprintf(w.writer, "-- Database: %s\n", db.Name)
fmt.Fprintf(w.writer, "-- Generated by RelSpec\n")
fmt.Fprintf(w.writer, "-- Note: Schema names have been flattened (e.g., public.users -> public_users)\n\n")
// Enable foreign keys
pragma, err := w.executor.ExecutePragmaForeignKeys()
if err != nil {
return fmt.Errorf("failed to generate pragma statement: %w", err)
}
fmt.Fprintf(w.writer, "%s\n", pragma)
// Process each schema in the database
for _, schema := range db.Schemas {
if err := w.WriteSchema(schema); err != nil {
return fmt.Errorf("failed to write schema %s: %w", schema.Name, err)
}
}
return nil
}
// WriteSchema writes a single schema as SQLite SQL
func (w *Writer) WriteSchema(schema *models.Schema) error {
// SQLite doesn't have schemas, so we just write a comment
if schema.Name != "" {
fmt.Fprintf(w.writer, "-- Schema: %s (flattened into table names)\n\n", schema.Name)
}
// Phase 1: Create tables
for _, table := range schema.Tables {
if err := w.writeTable(schema.Name, table); err != nil {
return fmt.Errorf("failed to write table %s: %w", table.Name, err)
}
}
// Phase 2: Create indexes
for _, table := range schema.Tables {
if err := w.writeIndexes(schema.Name, table); err != nil {
return fmt.Errorf("failed to write indexes for table %s: %w", table.Name, err)
}
}
// Phase 3: Create unique constraints (as unique indexes)
for _, table := range schema.Tables {
if err := w.writeUniqueConstraints(schema.Name, table); err != nil {
return fmt.Errorf("failed to write unique constraints for table %s: %w", table.Name, err)
}
}
// Phase 4: Check constraints (as comments, since SQLite requires them in CREATE TABLE)
for _, table := range schema.Tables {
if err := w.writeCheckConstraints(schema.Name, table); err != nil {
return fmt.Errorf("failed to write check constraints for table %s: %w", table.Name, err)
}
}
// Phase 5: Foreign keys (as comments for compatibility)
for _, table := range schema.Tables {
if err := w.writeForeignKeys(schema.Name, table); err != nil {
return fmt.Errorf("failed to write foreign keys for table %s: %w", table.Name, err)
}
}
return nil
}
// WriteTable writes a single table as SQLite SQL
func (w *Writer) WriteTable(table *models.Table) error {
return w.writeTable("", table)
}
// writeTable is the internal implementation
func (w *Writer) writeTable(schema string, table *models.Table) error {
// Build table template data
data := BuildTableTemplateData(schema, table)
// Execute template
sql, err := w.executor.ExecuteCreateTable(data)
if err != nil {
return fmt.Errorf("failed to execute create table template: %w", err)
}
fmt.Fprintf(w.writer, "%s\n", sql)
return nil
}
// writeIndexes writes indexes for a table
func (w *Writer) writeIndexes(schema string, table *models.Table) error {
for _, index := range table.Indexes {
// Skip primary key indexes
if strings.HasSuffix(index.Name, "_pkey") {
continue
}
// Skip unique indexes (handled separately as unique constraints)
if index.Unique {
continue
}
data := IndexTemplateData{
Schema: schema,
Table: table.Name,
Name: index.Name,
Columns: index.Columns,
}
sql, err := w.executor.ExecuteCreateIndex(data)
if err != nil {
return fmt.Errorf("failed to execute create index template: %w", err)
}
fmt.Fprintf(w.writer, "%s\n", sql)
}
return nil
}
// writeUniqueConstraints writes unique constraints as unique indexes
func (w *Writer) writeUniqueConstraints(schema string, table *models.Table) error {
for _, constraint := range table.Constraints {
if constraint.Type != models.UniqueConstraint {
continue
}
data := ConstraintTemplateData{
Schema: schema,
Table: table.Name,
Name: constraint.Name,
Columns: constraint.Columns,
}
sql, err := w.executor.ExecuteCreateUniqueConstraint(data)
if err != nil {
return fmt.Errorf("failed to execute create unique constraint template: %w", err)
}
fmt.Fprintf(w.writer, "%s\n", sql)
}
// Also handle unique indexes from the Indexes map
for _, index := range table.Indexes {
if !index.Unique {
continue
}
// Skip if already handled as a constraint
alreadyHandled := false
for _, constraint := range table.Constraints {
if constraint.Type == models.UniqueConstraint && constraint.Name == index.Name {
alreadyHandled = true
break
}
}
if alreadyHandled {
continue
}
data := ConstraintTemplateData{
Schema: schema,
Table: table.Name,
Name: index.Name,
Columns: index.Columns,
}
sql, err := w.executor.ExecuteCreateUniqueConstraint(data)
if err != nil {
return fmt.Errorf("failed to execute create unique index template: %w", err)
}
fmt.Fprintf(w.writer, "%s\n", sql)
}
return nil
}
// writeCheckConstraints writes check constraints as comments
func (w *Writer) writeCheckConstraints(schema string, table *models.Table) error {
for _, constraint := range table.Constraints {
if constraint.Type != models.CheckConstraint {
continue
}
data := ConstraintTemplateData{
Schema: schema,
Table: table.Name,
Name: constraint.Name,
Expression: constraint.Expression,
}
sql, err := w.executor.ExecuteCreateCheckConstraint(data)
if err != nil {
return fmt.Errorf("failed to execute create check constraint template: %w", err)
}
fmt.Fprintf(w.writer, "%s\n", sql)
}
return nil
}
// writeForeignKeys writes foreign keys as comments
func (w *Writer) writeForeignKeys(schema string, table *models.Table) error {
for _, constraint := range table.Constraints {
if constraint.Type != models.ForeignKeyConstraint {
continue
}
refSchema := constraint.ReferencedSchema
if refSchema == "" {
refSchema = schema
}
data := ConstraintTemplateData{
Schema: schema,
Table: table.Name,
Name: constraint.Name,
Columns: constraint.Columns,
ForeignSchema: refSchema,
ForeignTable: constraint.ReferencedTable,
ForeignColumns: constraint.ReferencedColumns,
OnDelete: constraint.OnDelete,
OnUpdate: constraint.OnUpdate,
}
sql, err := w.executor.ExecuteCreateForeignKey(data)
if err != nil {
return fmt.Errorf("failed to execute create foreign key template: %w", err)
}
fmt.Fprintf(w.writer, "%s\n", sql)
}
return nil
}


@@ -0,0 +1,418 @@
package sqlite
import (
"bytes"
"strings"
"testing"
"git.warky.dev/wdevs/relspecgo/pkg/models"
"git.warky.dev/wdevs/relspecgo/pkg/writers"
)
func TestNewWriter(t *testing.T) {
opts := &writers.WriterOptions{
OutputPath: "/tmp/test.sql",
FlattenSchema: false, // Should be forced to true
}
writer := NewWriter(opts)
if !writer.options.FlattenSchema {
t.Error("Expected FlattenSchema to be forced to true for SQLite")
}
}
func TestWriteDatabase(t *testing.T) {
db := &models.Database{
Name: "testdb",
Schemas: []*models.Schema{
{
Name: "public",
Tables: []*models.Table{
{
Name: "users",
Columns: map[string]*models.Column{
"id": {
Name: "id",
Type: "serial",
NotNull: true,
IsPrimaryKey: true,
Default: "nextval('users_id_seq'::regclass)",
},
"email": {
Name: "email",
Type: "varchar(255)",
NotNull: true,
},
"active": {
Name: "active",
Type: "boolean",
NotNull: true,
Default: "true",
},
},
Constraints: map[string]*models.Constraint{
"pk_users": {
Name: "pk_users",
Type: models.PrimaryKeyConstraint,
Columns: []string{"id"},
},
},
},
},
},
},
}
var buf bytes.Buffer
opts := &writers.WriterOptions{}
writer := NewWriter(opts)
writer.writer = &buf
err := writer.WriteDatabase(db)
if err != nil {
t.Fatalf("WriteDatabase failed: %v", err)
}
output := buf.String()
// Check for expected elements
if !strings.Contains(output, "PRAGMA foreign_keys = ON") {
t.Error("Expected PRAGMA foreign_keys statement")
}
if !strings.Contains(output, "CREATE TABLE") {
t.Error("Expected CREATE TABLE statement")
}
if !strings.Contains(output, "\"public_users\"") {
t.Error("Expected flattened table name public_users")
}
if !strings.Contains(output, "INTEGER PRIMARY KEY AUTOINCREMENT") {
t.Error("Expected autoincrement for serial primary key")
}
if !strings.Contains(output, "TEXT") {
t.Error("Expected TEXT type for varchar")
}
// Boolean column should be present (mapped to INTEGER, default true -> 1)
if !strings.Contains(output, "active") {
t.Error("Expected active column")
}
}
func TestDataTypeMapping(t *testing.T) {
tests := []struct {
pgType string
expected string
}{
{"varchar(255)", "TEXT"},
{"text", "TEXT"},
{"integer", "INTEGER"},
{"bigint", "INTEGER"},
{"serial", "INTEGER"},
{"boolean", "INTEGER"},
{"real", "REAL"},
{"double precision", "REAL"},
{"numeric(10,2)", "NUMERIC"},
{"decimal", "NUMERIC"},
{"bytea", "BLOB"},
{"timestamp", "TEXT"},
{"uuid", "TEXT"},
{"json", "TEXT"},
{"jsonb", "TEXT"},
}
for _, tt := range tests {
result := MapPostgreSQLType(tt.pgType)
if result != tt.expected {
t.Errorf("MapPostgreSQLType(%q) = %q, want %q", tt.pgType, result, tt.expected)
}
}
}
func TestIsAutoIncrementCandidate(t *testing.T) {
tests := []struct {
name string
col *models.Column
expected bool
}{
{
name: "serial primary key",
col: &models.Column{
Name: "id",
Type: "serial",
IsPrimaryKey: true,
Default: "nextval('seq')",
},
expected: true,
},
{
name: "integer primary key with nextval",
col: &models.Column{
Name: "id",
Type: "integer",
IsPrimaryKey: true,
Default: "nextval('users_id_seq'::regclass)",
},
expected: true,
},
{
name: "integer not primary key",
col: &models.Column{
Name: "count",
Type: "integer",
IsPrimaryKey: false,
Default: "0",
},
expected: false,
},
{
name: "varchar primary key",
col: &models.Column{
Name: "code",
Type: "varchar",
IsPrimaryKey: true,
},
expected: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result := IsAutoIncrementCandidate(tt.col)
if result != tt.expected {
t.Errorf("IsAutoIncrementCandidate() = %v, want %v", result, tt.expected)
}
})
}
}
func TestFormatDefault(t *testing.T) {
tests := []struct {
name string
col *models.Column
expected string
}{
{
name: "current_timestamp",
col: &models.Column{
Type: "timestamp",
Default: "CURRENT_TIMESTAMP",
},
expected: "CURRENT_TIMESTAMP",
},
{
name: "now()",
col: &models.Column{
Type: "timestamp",
Default: "now()",
},
expected: "CURRENT_TIMESTAMP",
},
{
name: "boolean true",
col: &models.Column{
Type: "boolean",
Default: "true",
},
expected: "1",
},
{
name: "boolean false",
col: &models.Column{
Type: "boolean",
Default: "false",
},
expected: "0",
},
{
name: "serial autoincrement",
col: &models.Column{
Type: "serial",
IsPrimaryKey: true,
Default: "nextval('seq')",
},
expected: "",
},
{
name: "uuid default removed",
col: &models.Column{
Type: "uuid",
Default: "gen_random_uuid()",
},
expected: "",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result := FormatDefault(tt.col)
if result != tt.expected {
t.Errorf("FormatDefault() = %q, want %q", result, tt.expected)
}
})
}
}
func TestWriteSchema_MultiSchema(t *testing.T) {
db := &models.Database{
Name: "testdb",
Schemas: []*models.Schema{
{
Name: "auth",
Tables: []*models.Table{
{
Name: "sessions",
Columns: map[string]*models.Column{
"id": {
Name: "id",
Type: "uuid",
NotNull: true,
IsPrimaryKey: true,
},
},
Constraints: map[string]*models.Constraint{
"pk_sessions": {
Name: "pk_sessions",
Type: models.PrimaryKeyConstraint,
Columns: []string{"id"},
},
},
},
},
},
{
Name: "public",
Tables: []*models.Table{
{
Name: "posts",
Columns: map[string]*models.Column{
"id": {
Name: "id",
Type: "integer",
NotNull: true,
IsPrimaryKey: true,
},
},
Constraints: map[string]*models.Constraint{
"pk_posts": {
Name: "pk_posts",
Type: models.PrimaryKeyConstraint,
Columns: []string{"id"},
},
},
},
},
},
},
}
var buf bytes.Buffer
opts := &writers.WriterOptions{}
writer := NewWriter(opts)
writer.writer = &buf
err := writer.WriteDatabase(db)
if err != nil {
t.Fatalf("WriteDatabase failed: %v", err)
}
output := buf.String()
// Check for flattened table names from both schemas
if !strings.Contains(output, "\"auth_sessions\"") {
t.Error("Expected flattened table name auth_sessions")
}
if !strings.Contains(output, "\"public_posts\"") {
t.Error("Expected flattened table name public_posts")
}
}
func TestWriteIndexes(t *testing.T) {
table := &models.Table{
Name: "users",
Columns: map[string]*models.Column{
"email": {
Name: "email",
Type: "varchar(255)",
},
},
Indexes: map[string]*models.Index{
"idx_users_email": {
Name: "idx_users_email",
Columns: []string{"email"},
},
},
}
var buf bytes.Buffer
opts := &writers.WriterOptions{}
writer := NewWriter(opts)
writer.writer = &buf
err := writer.writeIndexes("public", table)
if err != nil {
t.Fatalf("writeIndexes failed: %v", err)
}
output := buf.String()
if !strings.Contains(output, "CREATE INDEX") {
t.Error("Expected CREATE INDEX statement")
}
if !strings.Contains(output, "public_users_idx_users_email") {
t.Errorf("Expected flattened index name public_users_idx_users_email, got output:\n%s", output)
}
}
func TestWriteUniqueConstraints(t *testing.T) {
table := &models.Table{
Name: "users",
Constraints: map[string]*models.Constraint{
"uk_users_email": {
Name: "uk_users_email",
Type: models.UniqueConstraint,
Columns: []string{"email"},
},
},
}
var buf bytes.Buffer
opts := &writers.WriterOptions{}
writer := NewWriter(opts)
writer.writer = &buf
err := writer.writeUniqueConstraints("public", table)
if err != nil {
t.Fatalf("writeUniqueConstraints failed: %v", err)
}
output := buf.String()
if !strings.Contains(output, "CREATE UNIQUE INDEX") {
t.Error("Expected CREATE UNIQUE INDEX statement")
}
}
func TestQuoteIdentifier(t *testing.T) {
tests := []struct {
input string
expected string
}{
{"users", `"users"`},
{"public_users", `"public_users"`},
{`user"name`, `"user""name"`}, // Double quotes should be escaped
}
for _, tt := range tests {
result := QuoteIdentifier(tt.input)
if result != tt.expected {
t.Errorf("QuoteIdentifier(%q) = %q, want %q", tt.input, result, tt.expected)
}
}
}

21
vendor/github.com/dustin/go-humanize/.travis.yml generated vendored Normal file

@@ -0,0 +1,21 @@
sudo: false
language: go
go_import_path: github.com/dustin/go-humanize
go:
- 1.13.x
- 1.14.x
- 1.15.x
- 1.16.x
- stable
- master
matrix:
allow_failures:
- go: master
fast_finish: true
install:
- # Do nothing. This is needed to prevent default install action "go get -t -v ./..." from happening here (we want it to happen inside script step).
script:
- diff -u <(echo -n) <(gofmt -d -s .)
- go vet .
- go install -v -race ./...
- go test -v -race ./...

21
vendor/github.com/dustin/go-humanize/LICENSE generated vendored Normal file

@@ -0,0 +1,21 @@
Copyright (c) 2005-2008 Dustin Sallings <dustin@spy.net>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
<http://www.opensource.org/licenses/mit-license.php>

124
vendor/github.com/dustin/go-humanize/README.markdown generated vendored Normal file

@@ -0,0 +1,124 @@
# Humane Units [![Build Status](https://travis-ci.org/dustin/go-humanize.svg?branch=master)](https://travis-ci.org/dustin/go-humanize) [![GoDoc](https://godoc.org/github.com/dustin/go-humanize?status.svg)](https://godoc.org/github.com/dustin/go-humanize)
Just a few functions for helping humanize times and sizes.
`go get` it as `github.com/dustin/go-humanize`, import it as
`"github.com/dustin/go-humanize"`, use it as `humanize`.
See [godoc](https://pkg.go.dev/github.com/dustin/go-humanize) for
complete documentation.
## Sizes
This lets you take numbers like `82854982` and convert them to useful
strings like, `83 MB` or `79 MiB` (whichever you prefer).
Example:
```go
fmt.Printf("That file is %s.", humanize.Bytes(82854982)) // That file is 83 MB.
```
## Times
This lets you take a `time.Time` and spit it out in relative terms.
For example, `12 seconds ago` or `3 days from now`.
Example:
```go
fmt.Printf("This was touched %s.", humanize.Time(someTimeInstance)) // This was touched 7 hours ago.
```
Thanks to Kyle Lemons for the time implementation from an IRC
conversation one day. It's pretty neat.
## Ordinals
From a [mailing list discussion][odisc] where a user wanted to be able
to label ordinals.
0 -> 0th
1 -> 1st
2 -> 2nd
3 -> 3rd
4 -> 4th
[...]
Example:
```go
fmt.Printf("You're my %s best friend.", humanize.Ordinal(193)) // You are my 193rd best friend.
```
## Commas
Want to shove commas into numbers? Be my guest.
0 -> 0
100 -> 100
1000 -> 1,000
1000000000 -> 1,000,000,000
-100000 -> -100,000
Example:
```go
fmt.Printf("You owe $%s.\n", humanize.Comma(6582491)) // You owe $6,582,491.
```
## Ftoa
Nicer float64 formatter that removes trailing zeros.
```go
fmt.Printf("%f", 2.24) // 2.240000
fmt.Printf("%s", humanize.Ftoa(2.24)) // 2.24
fmt.Printf("%f", 2.0) // 2.000000
fmt.Printf("%s", humanize.Ftoa(2.0)) // 2
```
## SI notation
Format numbers with [SI notation][sinotation].
Example:
```go
humanize.SI(0.00000000223, "M") // 2.23 nM
```
## English-specific functions
The following functions are in the `humanize/english` subpackage.
### Plurals
Simple English pluralization
```go
english.PluralWord(1, "object", "") // object
english.PluralWord(42, "object", "") // objects
english.PluralWord(2, "bus", "") // buses
english.PluralWord(99, "locus", "loci") // loci
english.Plural(1, "object", "") // 1 object
english.Plural(42, "object", "") // 42 objects
english.Plural(2, "bus", "") // 2 buses
english.Plural(99, "locus", "loci") // 99 loci
```
### Word series
Format comma-separated word lists with conjunctions:
```go
english.WordSeries([]string{"foo"}, "and") // foo
english.WordSeries([]string{"foo", "bar"}, "and") // foo and bar
english.WordSeries([]string{"foo", "bar", "baz"}, "and") // foo, bar and baz
english.OxfordWordSeries([]string{"foo", "bar", "baz"}, "and") // foo, bar, and baz
```
[odisc]: https://groups.google.com/d/topic/golang-nuts/l8NhI74jl-4/discussion
[sinotation]: http://en.wikipedia.org/wiki/Metric_prefix

31
vendor/github.com/dustin/go-humanize/big.go generated vendored Normal file

@@ -0,0 +1,31 @@
package humanize
import (
"math/big"
)
// order of magnitude (to a max order)
func oomm(n, b *big.Int, maxmag int) (float64, int) {
mag := 0
m := &big.Int{}
for n.Cmp(b) >= 0 {
n.DivMod(n, b, m)
mag++
if mag == maxmag && maxmag >= 0 {
break
}
}
return float64(n.Int64()) + (float64(m.Int64()) / float64(b.Int64())), mag
}
// total order of magnitude
// (same as above, but with no upper limit)
func oom(n, b *big.Int) (float64, int) {
mag := 0
m := &big.Int{}
for n.Cmp(b) >= 0 {
n.DivMod(n, b, m)
mag++
}
return float64(n.Int64()) + (float64(m.Int64()) / float64(b.Int64())), mag
}

189
vendor/github.com/dustin/go-humanize/bigbytes.go generated vendored Normal file

@@ -0,0 +1,189 @@
package humanize
import (
"fmt"
"math/big"
"strings"
"unicode"
)
var (
bigIECExp = big.NewInt(1024)
// BigByte is one byte in bit.Ints
BigByte = big.NewInt(1)
// BigKiByte is 1,024 bytes in bit.Ints
BigKiByte = (&big.Int{}).Mul(BigByte, bigIECExp)
// BigMiByte is 1,024 k bytes in bit.Ints
BigMiByte = (&big.Int{}).Mul(BigKiByte, bigIECExp)
// BigGiByte is 1,024 m bytes in bit.Ints
BigGiByte = (&big.Int{}).Mul(BigMiByte, bigIECExp)
// BigTiByte is 1,024 g bytes in bit.Ints
BigTiByte = (&big.Int{}).Mul(BigGiByte, bigIECExp)
// BigPiByte is 1,024 t bytes in bit.Ints
BigPiByte = (&big.Int{}).Mul(BigTiByte, bigIECExp)
// BigEiByte is 1,024 p bytes in bit.Ints
BigEiByte = (&big.Int{}).Mul(BigPiByte, bigIECExp)
// BigZiByte is 1,024 e bytes in bit.Ints
BigZiByte = (&big.Int{}).Mul(BigEiByte, bigIECExp)
// BigYiByte is 1,024 z bytes in bit.Ints
BigYiByte = (&big.Int{}).Mul(BigZiByte, bigIECExp)
// BigRiByte is 1,024 y bytes in bit.Ints
BigRiByte = (&big.Int{}).Mul(BigYiByte, bigIECExp)
// BigQiByte is 1,024 r bytes in bit.Ints
BigQiByte = (&big.Int{}).Mul(BigRiByte, bigIECExp)
)
var (
bigSIExp = big.NewInt(1000)
// BigSIByte is one SI byte in big.Ints
BigSIByte = big.NewInt(1)
// BigKByte is 1,000 SI bytes in big.Ints
BigKByte = (&big.Int{}).Mul(BigSIByte, bigSIExp)
// BigMByte is 1,000 SI k bytes in big.Ints
BigMByte = (&big.Int{}).Mul(BigKByte, bigSIExp)
// BigGByte is 1,000 SI m bytes in big.Ints
BigGByte = (&big.Int{}).Mul(BigMByte, bigSIExp)
// BigTByte is 1,000 SI g bytes in big.Ints
BigTByte = (&big.Int{}).Mul(BigGByte, bigSIExp)
// BigPByte is 1,000 SI t bytes in big.Ints
BigPByte = (&big.Int{}).Mul(BigTByte, bigSIExp)
// BigEByte is 1,000 SI p bytes in big.Ints
BigEByte = (&big.Int{}).Mul(BigPByte, bigSIExp)
// BigZByte is 1,000 SI e bytes in big.Ints
BigZByte = (&big.Int{}).Mul(BigEByte, bigSIExp)
// BigYByte is 1,000 SI z bytes in big.Ints
BigYByte = (&big.Int{}).Mul(BigZByte, bigSIExp)
// BigRByte is 1,000 SI y bytes in big.Ints
BigRByte = (&big.Int{}).Mul(BigYByte, bigSIExp)
// BigQByte is 1,000 SI r bytes in big.Ints
BigQByte = (&big.Int{}).Mul(BigRByte, bigSIExp)
)
var bigBytesSizeTable = map[string]*big.Int{
"b": BigByte,
"kib": BigKiByte,
"kb": BigKByte,
"mib": BigMiByte,
"mb": BigMByte,
"gib": BigGiByte,
"gb": BigGByte,
"tib": BigTiByte,
"tb": BigTByte,
"pib": BigPiByte,
"pb": BigPByte,
"eib": BigEiByte,
"eb": BigEByte,
"zib": BigZiByte,
"zb": BigZByte,
"yib": BigYiByte,
"yb": BigYByte,
"rib": BigRiByte,
"rb": BigRByte,
"qib": BigQiByte,
"qb": BigQByte,
// Without suffix
"": BigByte,
"ki": BigKiByte,
"k": BigKByte,
"mi": BigMiByte,
"m": BigMByte,
"gi": BigGiByte,
"g": BigGByte,
"ti": BigTiByte,
"t": BigTByte,
"pi": BigPiByte,
"p": BigPByte,
"ei": BigEiByte,
"e": BigEByte,
"z": BigZByte,
"zi": BigZiByte,
"y": BigYByte,
"yi": BigYiByte,
"r": BigRByte,
"ri": BigRiByte,
"q": BigQByte,
"qi": BigQiByte,
}
var ten = big.NewInt(10)
func humanateBigBytes(s, base *big.Int, sizes []string) string {
if s.Cmp(ten) < 0 {
return fmt.Sprintf("%d B", s)
}
c := (&big.Int{}).Set(s)
val, mag := oomm(c, base, len(sizes)-1)
suffix := sizes[mag]
f := "%.0f %s"
if val < 10 {
f = "%.1f %s"
}
return fmt.Sprintf(f, val, suffix)
}
// BigBytes produces a human readable representation of an SI size.
//
// See also: ParseBigBytes.
//
// BigBytes(82854982) -> 83 MB
func BigBytes(s *big.Int) string {
sizes := []string{"B", "kB", "MB", "GB", "TB", "PB", "EB", "ZB", "YB", "RB", "QB"}
return humanateBigBytes(s, bigSIExp, sizes)
}
// BigIBytes produces a human readable representation of an IEC size.
//
// See also: ParseBigBytes.
//
// BigIBytes(82854982) -> 79 MiB
func BigIBytes(s *big.Int) string {
sizes := []string{"B", "KiB", "MiB", "GiB", "TiB", "PiB", "EiB", "ZiB", "YiB", "RiB", "QiB"}
return humanateBigBytes(s, bigIECExp, sizes)
}
// ParseBigBytes parses a string representation of bytes into the number
// of bytes it represents.
//
// See also: BigBytes, BigIBytes.
//
// ParseBigBytes("42 MB") -> 42000000, nil
// ParseBigBytes("42 mib") -> 44040192, nil
func ParseBigBytes(s string) (*big.Int, error) {
lastDigit := 0
hasComma := false
for _, r := range s {
if !(unicode.IsDigit(r) || r == '.' || r == ',') {
break
}
if r == ',' {
hasComma = true
}
lastDigit++
}
num := s[:lastDigit]
if hasComma {
num = strings.Replace(num, ",", "", -1)
}
val := &big.Rat{}
_, err := fmt.Sscanf(num, "%f", val)
if err != nil {
return nil, err
}
extra := strings.ToLower(strings.TrimSpace(s[lastDigit:]))
if m, ok := bigBytesSizeTable[extra]; ok {
mv := (&big.Rat{}).SetInt(m)
val.Mul(val, mv)
rv := &big.Int{}
rv.Div(val.Num(), val.Denom())
return rv, nil
}
return nil, fmt.Errorf("unhandled size name: %v", extra)
}

143
vendor/github.com/dustin/go-humanize/bytes.go generated vendored Normal file

@@ -0,0 +1,143 @@
package humanize
import (
"fmt"
"math"
"strconv"
"strings"
"unicode"
)
// IEC Sizes.
// kibis of bits
const (
Byte = 1 << (iota * 10)
KiByte
MiByte
GiByte
TiByte
PiByte
EiByte
)
// SI Sizes.
const (
IByte = 1
KByte = IByte * 1000
MByte = KByte * 1000
GByte = MByte * 1000
TByte = GByte * 1000
PByte = TByte * 1000
EByte = PByte * 1000
)
var bytesSizeTable = map[string]uint64{
"b": Byte,
"kib": KiByte,
"kb": KByte,
"mib": MiByte,
"mb": MByte,
"gib": GiByte,
"gb": GByte,
"tib": TiByte,
"tb": TByte,
"pib": PiByte,
"pb": PByte,
"eib": EiByte,
"eb": EByte,
// Without suffix
"": Byte,
"ki": KiByte,
"k": KByte,
"mi": MiByte,
"m": MByte,
"gi": GiByte,
"g": GByte,
"ti": TiByte,
"t": TByte,
"pi": PiByte,
"p": PByte,
"ei": EiByte,
"e": EByte,
}
func logn(n, b float64) float64 {
return math.Log(n) / math.Log(b)
}
func humanateBytes(s uint64, base float64, sizes []string) string {
if s < 10 {
return fmt.Sprintf("%d B", s)
}
e := math.Floor(logn(float64(s), base))
suffix := sizes[int(e)]
val := math.Floor(float64(s)/math.Pow(base, e)*10+0.5) / 10
f := "%.0f %s"
if val < 10 {
f = "%.1f %s"
}
return fmt.Sprintf(f, val, suffix)
}
// Bytes produces a human readable representation of an SI size.
//
// See also: ParseBytes.
//
// Bytes(82854982) -> 83 MB
func Bytes(s uint64) string {
sizes := []string{"B", "kB", "MB", "GB", "TB", "PB", "EB"}
return humanateBytes(s, 1000, sizes)
}
// IBytes produces a human readable representation of an IEC size.
//
// See also: ParseBytes.
//
// IBytes(82854982) -> 79 MiB
func IBytes(s uint64) string {
sizes := []string{"B", "KiB", "MiB", "GiB", "TiB", "PiB", "EiB"}
return humanateBytes(s, 1024, sizes)
}
// ParseBytes parses a string representation of bytes into the number
// of bytes it represents.
//
// See Also: Bytes, IBytes.
//
// ParseBytes("42 MB") -> 42000000, nil
// ParseBytes("42 mib") -> 44040192, nil
func ParseBytes(s string) (uint64, error) {
lastDigit := 0
hasComma := false
for _, r := range s {
if !(unicode.IsDigit(r) || r == '.' || r == ',') {
break
}
if r == ',' {
hasComma = true
}
lastDigit++
}
num := s[:lastDigit]
if hasComma {
num = strings.Replace(num, ",", "", -1)
}
f, err := strconv.ParseFloat(num, 64)
if err != nil {
return 0, err
}
extra := strings.ToLower(strings.TrimSpace(s[lastDigit:]))
if m, ok := bytesSizeTable[extra]; ok {
f *= float64(m)
if f >= math.MaxUint64 {
return 0, fmt.Errorf("too large: %v", s)
}
return uint64(f), nil
}
return 0, fmt.Errorf("unhandled size name: %v", extra)
}

116
vendor/github.com/dustin/go-humanize/comma.go generated vendored Normal file

@@ -0,0 +1,116 @@
package humanize
import (
"bytes"
"math"
"math/big"
"strconv"
"strings"
)
// Comma produces a string form of the given number in base 10 with
// commas after every three orders of magnitude.
//
// e.g. Comma(834142) -> 834,142
func Comma(v int64) string {
sign := ""
// Min int64 can't be negated to a usable value, so it has to be special cased.
if v == math.MinInt64 {
return "-9,223,372,036,854,775,808"
}
if v < 0 {
sign = "-"
v = 0 - v
}
parts := []string{"", "", "", "", "", "", ""}
j := len(parts) - 1
for v > 999 {
parts[j] = strconv.FormatInt(v%1000, 10)
switch len(parts[j]) {
case 2:
parts[j] = "0" + parts[j]
case 1:
parts[j] = "00" + parts[j]
}
v = v / 1000
j--
}
parts[j] = strconv.Itoa(int(v))
return sign + strings.Join(parts[j:], ",")
}
// Commaf produces a string form of the given number in base 10 with
// commas after every three orders of magnitude.
//
// e.g. Commaf(834142.32) -> 834,142.32
func Commaf(v float64) string {
buf := &bytes.Buffer{}
if v < 0 {
buf.Write([]byte{'-'})
v = 0 - v
}
comma := []byte{','}
parts := strings.Split(strconv.FormatFloat(v, 'f', -1, 64), ".")
pos := 0
if len(parts[0])%3 != 0 {
pos += len(parts[0]) % 3
buf.WriteString(parts[0][:pos])
buf.Write(comma)
}
for ; pos < len(parts[0]); pos += 3 {
buf.WriteString(parts[0][pos : pos+3])
buf.Write(comma)
}
buf.Truncate(buf.Len() - 1)
if len(parts) > 1 {
buf.Write([]byte{'.'})
buf.WriteString(parts[1])
}
return buf.String()
}
// CommafWithDigits works like the Commaf but limits the resulting
// string to the given number of decimal places.
//
// e.g. CommafWithDigits(834142.32, 1) -> 834,142.3
func CommafWithDigits(f float64, decimals int) string {
return stripTrailingDigits(Commaf(f), decimals)
}
// BigComma produces a string form of the given big.Int in base 10
// with commas after every three orders of magnitude.
func BigComma(b *big.Int) string {
sign := ""
if b.Sign() < 0 {
sign = "-"
b.Abs(b)
}
athousand := big.NewInt(1000)
c := (&big.Int{}).Set(b)
_, m := oom(c, athousand)
parts := make([]string, m+1)
j := len(parts) - 1
mod := &big.Int{}
for b.Cmp(athousand) >= 0 {
b.DivMod(b, athousand, mod)
parts[j] = strconv.FormatInt(mod.Int64(), 10)
switch len(parts[j]) {
case 2:
parts[j] = "0" + parts[j]
case 1:
parts[j] = "00" + parts[j]
}
j--
}
parts[j] = strconv.Itoa(int(b.Int64()))
return sign + strings.Join(parts[j:], ",")
}

41
vendor/github.com/dustin/go-humanize/commaf.go generated vendored Normal file

@@ -0,0 +1,41 @@
//go:build go1.6
// +build go1.6
package humanize
import (
"bytes"
"math/big"
"strings"
)
// BigCommaf produces a string form of the given big.Float in base 10
// with commas after every three orders of magnitude.
func BigCommaf(v *big.Float) string {
buf := &bytes.Buffer{}
if v.Sign() < 0 {
buf.Write([]byte{'-'})
v.Abs(v)
}
comma := []byte{','}
parts := strings.Split(v.Text('f', -1), ".")
pos := 0
if len(parts[0])%3 != 0 {
pos += len(parts[0]) % 3
buf.WriteString(parts[0][:pos])
buf.Write(comma)
}
for ; pos < len(parts[0]); pos += 3 {
buf.WriteString(parts[0][pos : pos+3])
buf.Write(comma)
}
buf.Truncate(buf.Len() - 1)
if len(parts) > 1 {
buf.Write([]byte{'.'})
buf.WriteString(parts[1])
}
return buf.String()
}

49
vendor/github.com/dustin/go-humanize/ftoa.go generated vendored Normal file

@@ -0,0 +1,49 @@
package humanize
import (
"strconv"
"strings"
)
func stripTrailingZeros(s string) string {
if !strings.ContainsRune(s, '.') {
return s
}
offset := len(s) - 1
for offset > 0 {
if s[offset] == '.' {
offset--
break
}
if s[offset] != '0' {
break
}
offset--
}
return s[:offset+1]
}
func stripTrailingDigits(s string, digits int) string {
if i := strings.Index(s, "."); i >= 0 {
if digits <= 0 {
return s[:i]
}
i++
if i+digits >= len(s) {
return s
}
return s[:i+digits]
}
return s
}
// Ftoa converts a float to a string with no trailing zeros.
func Ftoa(num float64) string {
return stripTrailingZeros(strconv.FormatFloat(num, 'f', 6, 64))
}
// FtoaWithDigits converts a float to a string but limits the resulting string
// to the given number of decimal places, and no trailing zeros.
func FtoaWithDigits(num float64, digits int) string {
return stripTrailingZeros(stripTrailingDigits(strconv.FormatFloat(num, 'f', 6, 64), digits))
}

8
vendor/github.com/dustin/go-humanize/humanize.go generated vendored Normal file

@@ -0,0 +1,8 @@
/*
Package humanize converts boring ugly numbers to human-friendly strings and back.
Durations can be turned into strings such as "3 days ago", numbers
representing sizes like 82854982 into useful strings like, "83 MB" or
"79 MiB" (whichever you prefer).
*/
package humanize

192
vendor/github.com/dustin/go-humanize/number.go generated vendored Normal file

@@ -0,0 +1,192 @@
package humanize
/*
Slightly adapted from the source to fit go-humanize.
Author: https://github.com/gorhill
Source: https://gist.github.com/gorhill/5285193
*/
import (
"math"
"strconv"
)
var (
renderFloatPrecisionMultipliers = [...]float64{
1,
10,
100,
1000,
10000,
100000,
1000000,
10000000,
100000000,
1000000000,
}
renderFloatPrecisionRounders = [...]float64{
0.5,
0.05,
0.005,
0.0005,
0.00005,
0.000005,
0.0000005,
0.00000005,
0.000000005,
0.0000000005,
}
)
// FormatFloat produces a formatted number as string based on the following user-specified criteria:
// * thousands separator
// * decimal separator
// * decimal precision
//
// Usage: s := RenderFloat(format, n)
// The format parameter tells how to render the number n.
//
// See examples: http://play.golang.org/p/LXc1Ddm1lJ
//
// Examples of format strings, given n = 12345.6789:
// "#,###.##" => "12,345.67"
// "#,###." => "12,345"
// "#,###" => "12345,678"
// "#\u202F###,##" => "12345,68"
// "#.###,###### => 12.345,678900
// "" (aka default format) => 12,345.67
//
// The highest precision allowed is 9 digits after the decimal symbol.
// There is also a version for integer number, FormatInteger(),
// which is convenient for calls within template.
func FormatFloat(format string, n float64) string {
// Special cases:
// NaN = "NaN"
// +Inf = "+Infinity"
// -Inf = "-Infinity"
if math.IsNaN(n) {
return "NaN"
}
if n > math.MaxFloat64 {
return "Infinity"
}
if n < (0.0 - math.MaxFloat64) {
return "-Infinity"
}
// default format
precision := 2
decimalStr := "."
thousandStr := ","
positiveStr := ""
negativeStr := "-"
if len(format) > 0 {
format := []rune(format)
// If there is an explicit format directive,
// then default values are these:
precision = 9
thousandStr = ""
// collect indices of meaningful formatting directives
formatIndx := []int{}
for i, char := range format {
if char != '#' && char != '0' {
formatIndx = append(formatIndx, i)
}
}
if len(formatIndx) > 0 {
// Directive at index 0:
// Must be a '+'
// Raise an error if not the case
// index: 0123456789
// +0.000,000
// +000,000.0
// +0000.00
// +0000
if formatIndx[0] == 0 {
if format[formatIndx[0]] != '+' {
panic("RenderFloat(): invalid positive sign directive")
}
positiveStr = "+"
formatIndx = formatIndx[1:]
}
// Two directives:
// First is thousands separator
// Raise an error if not followed by 3-digit
// 0123456789
// 0.000,000
// 000,000.00
if len(formatIndx) == 2 {
if (formatIndx[1] - formatIndx[0]) != 4 {
panic("RenderFloat(): thousands separator directive must be followed by 3 digit-specifiers")
}
thousandStr = string(format[formatIndx[0]])
formatIndx = formatIndx[1:]
}
// One directive:
// Directive is decimal separator
// The number of digit-specifier following the separator indicates wanted precision
// 0123456789
// 0.00
// 000,0000
if len(formatIndx) == 1 {
decimalStr = string(format[formatIndx[0]])
precision = len(format) - formatIndx[0] - 1
}
}
}
// generate sign part
var signStr string
if n >= 0.000000001 {
signStr = positiveStr
} else if n <= -0.000000001 {
signStr = negativeStr
n = -n
} else {
signStr = ""
n = 0.0
}
// split number into integer and fractional parts
intf, fracf := math.Modf(n + renderFloatPrecisionRounders[precision])
// generate integer part string
intStr := strconv.FormatInt(int64(intf), 10)
// add thousand separator if required
if len(thousandStr) > 0 {
for i := len(intStr); i > 3; {
i -= 3
intStr = intStr[:i] + thousandStr + intStr[i:]
}
}
// no fractional part, we can leave now
if precision == 0 {
return signStr + intStr
}
// generate fractional part
fracStr := strconv.Itoa(int(fracf * renderFloatPrecisionMultipliers[precision]))
// may need padding
if len(fracStr) < precision {
fracStr = "000000000000000"[:precision-len(fracStr)] + fracStr
}
return signStr + intStr + decimalStr + fracStr
}
// FormatInteger produces a formatted number as string.
// See FormatFloat.
func FormatInteger(format string, n int) string {
return FormatFloat(format, float64(n))
}

25
vendor/github.com/dustin/go-humanize/ordinals.go generated vendored Normal file

@@ -0,0 +1,25 @@
package humanize
import "strconv"
// Ordinal gives you the input number in a rank/ordinal format.
//
// Ordinal(3) -> 3rd
func Ordinal(x int) string {
suffix := "th"
switch x % 10 {
case 1:
if x%100 != 11 {
suffix = "st"
}
case 2:
if x%100 != 12 {
suffix = "nd"
}
case 3:
if x%100 != 13 {
suffix = "rd"
}
}
return strconv.Itoa(x) + suffix
}

127
vendor/github.com/dustin/go-humanize/si.go generated vendored Normal file

@@ -0,0 +1,127 @@
package humanize
import (
"errors"
"math"
"regexp"
"strconv"
)
var siPrefixTable = map[float64]string{
-30: "q", // quecto
-27: "r", // ronto
-24: "y", // yocto
-21: "z", // zepto
-18: "a", // atto
-15: "f", // femto
-12: "p", // pico
-9: "n", // nano
-6: "µ", // micro
-3: "m", // milli
0: "",
3: "k", // kilo
6: "M", // mega
9: "G", // giga
12: "T", // tera
15: "P", // peta
18: "E", // exa
21: "Z", // zetta
24: "Y", // yotta
27: "R", // ronna
30: "Q", // quetta
}
var revSIPrefixTable = revfmap(siPrefixTable)
// revfmap reverses the map and precomputes the power multiplier
func revfmap(in map[float64]string) map[string]float64 {
rv := map[string]float64{}
for k, v := range in {
rv[v] = math.Pow(10, k)
}
return rv
}
var riParseRegex *regexp.Regexp
func init() {
ri := `^([\-0-9.]+)\s?([`
for _, v := range siPrefixTable {
ri += v
}
ri += `]?)(.*)`
riParseRegex = regexp.MustCompile(ri)
}
// ComputeSI finds the most appropriate SI prefix for the given number
// and returns the prefix along with the value adjusted to be within
// that prefix.
//
// See also: SI, ParseSI.
//
// e.g. ComputeSI(2.2345e-12) -> (2.2345, "p")
func ComputeSI(input float64) (float64, string) {
if input == 0 {
return 0, ""
}
mag := math.Abs(input)
exponent := math.Floor(logn(mag, 10))
exponent = math.Floor(exponent/3) * 3
value := mag / math.Pow(10, exponent)
// Handle special case where value is exactly 1000.0
// Should return 1 M instead of 1000 k
if value == 1000.0 {
exponent += 3
value = mag / math.Pow(10, exponent)
}
value = math.Copysign(value, input)
prefix := siPrefixTable[exponent]
return value, prefix
}
// SI returns a string with default formatting.
//
// SI uses Ftoa to format float value, removing trailing zeros.
//
// See also: ComputeSI, ParseSI.
//
// e.g. SI(1000000, "B") -> 1 MB
// e.g. SI(2.2345e-12, "F") -> 2.2345 pF
func SI(input float64, unit string) string {
value, prefix := ComputeSI(input)
return Ftoa(value) + " " + prefix + unit
}
// SIWithDigits works like SI but limits the resulting string to the
// given number of decimal places.
//
// e.g. SIWithDigits(1000000, 0, "B") -> 1 MB
// e.g. SIWithDigits(2.2345e-12, 2, "F") -> 2.23 pF
func SIWithDigits(input float64, decimals int, unit string) string {
value, prefix := ComputeSI(input)
return FtoaWithDigits(value, decimals) + " " + prefix + unit
}
var errInvalid = errors.New("invalid input")
// ParseSI parses an SI string back into the number and unit.
//
// See also: SI, ComputeSI.
//
// e.g. ParseSI("2.2345 pF") -> (2.2345e-12, "F", nil)
func ParseSI(input string) (float64, string, error) {
found := riParseRegex.FindStringSubmatch(input)
if len(found) != 4 {
return 0, "", errInvalid
}
mag := revSIPrefixTable[found[2]]
unit := found[3]
base, err := strconv.ParseFloat(found[1], 64)
return base * mag, unit, err
}

117
vendor/github.com/dustin/go-humanize/times.go generated vendored Normal file

@@ -0,0 +1,117 @@
package humanize
import (
"fmt"
"math"
"sort"
"time"
)
// Seconds-based time units
const (
Day = 24 * time.Hour
Week = 7 * Day
Month = 30 * Day
Year = 12 * Month
LongTime = 37 * Year
)
// Time formats a time into a relative string.
//
// Time(someT) -> "3 weeks ago"
func Time(then time.Time) string {
return RelTime(then, time.Now(), "ago", "from now")
}
// A RelTimeMagnitude struct contains a relative time point at which
// the relative format of time will switch to a new format string. A
// slice of these in ascending order by their "D" field is passed to
// CustomRelTime to format durations.
//
// The Format field is a string that may contain a "%s" which will be
// replaced with the appropriate signed label (e.g. "ago" or "from
// now") and a "%d" that will be replaced by the quantity.
//
// The DivBy field is the amount of time the time difference must be
// divided by in order to display correctly.
//
// e.g. if D is 2*time.Minute and you want to display "%d minutes %s"
// DivBy should be time.Minute so whatever the duration is will be
// expressed in minutes.
type RelTimeMagnitude struct {
D time.Duration
Format string
DivBy time.Duration
}
var defaultMagnitudes = []RelTimeMagnitude{
{time.Second, "now", time.Second},
{2 * time.Second, "1 second %s", 1},
{time.Minute, "%d seconds %s", time.Second},
{2 * time.Minute, "1 minute %s", 1},
{time.Hour, "%d minutes %s", time.Minute},
{2 * time.Hour, "1 hour %s", 1},
{Day, "%d hours %s", time.Hour},
{2 * Day, "1 day %s", 1},
{Week, "%d days %s", Day},
{2 * Week, "1 week %s", 1},
{Month, "%d weeks %s", Week},
{2 * Month, "1 month %s", 1},
{Year, "%d months %s", Month},
{18 * Month, "1 year %s", 1},
{2 * Year, "2 years %s", 1},
{LongTime, "%d years %s", Year},
{math.MaxInt64, "a long while %s", 1},
}
// RelTime formats a time into a relative string.
//
// It takes two times and two labels. In addition to the generic time
// delta string (e.g. 5 minutes), the labels are used applied so that
// the label corresponding to the smaller time is applied.
//
// RelTime(timeInPast, timeInFuture, "earlier", "later") -> "3 weeks earlier"
func RelTime(a, b time.Time, albl, blbl string) string {
return CustomRelTime(a, b, albl, blbl, defaultMagnitudes)
}
// CustomRelTime formats a time into a relative string.
//
// It takes two times two labels and a table of relative time formats.
// In addition to the generic time delta string (e.g. 5 minutes), the
// labels are used applied so that the label corresponding to the
// smaller time is applied.
func CustomRelTime(a, b time.Time, albl, blbl string, magnitudes []RelTimeMagnitude) string {
lbl := albl
diff := b.Sub(a)
if a.After(b) {
lbl = blbl
diff = a.Sub(b)
}
n := sort.Search(len(magnitudes), func(i int) bool {
return magnitudes[i].D > diff
})
if n >= len(magnitudes) {
n = len(magnitudes) - 1
}
mag := magnitudes[n]
args := []interface{}{}
escaped := false
for _, ch := range mag.Format {
if escaped {
switch ch {
case 's':
args = append(args, lbl)
case 'd':
args = append(args, diff/mag.DivBy)
}
escaped = false
} else {
escaped = ch == '%'
}
}
return fmt.Sprintf(mag.Format, args...)
}

9
vendor/github.com/mattn/go-isatty/LICENSE generated vendored Normal file
View File

@@ -0,0 +1,9 @@
Copyright (c) Yasuhiro MATSUMOTO <mattn.jp@gmail.com>
MIT License (Expat)
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

50
vendor/github.com/mattn/go-isatty/README.md generated vendored Normal file

@@ -0,0 +1,50 @@
# go-isatty
[![Godoc Reference](https://godoc.org/github.com/mattn/go-isatty?status.svg)](http://godoc.org/github.com/mattn/go-isatty)
[![Codecov](https://codecov.io/gh/mattn/go-isatty/branch/master/graph/badge.svg)](https://codecov.io/gh/mattn/go-isatty)
[![Coverage Status](https://coveralls.io/repos/github/mattn/go-isatty/badge.svg?branch=master)](https://coveralls.io/github/mattn/go-isatty?branch=master)
[![Go Report Card](https://goreportcard.com/badge/mattn/go-isatty)](https://goreportcard.com/report/mattn/go-isatty)
isatty for golang
## Usage
```go
package main
import (
"fmt"
"github.com/mattn/go-isatty"
"os"
)
func main() {
if isatty.IsTerminal(os.Stdout.Fd()) {
fmt.Println("Is Terminal")
} else if isatty.IsCygwinTerminal(os.Stdout.Fd()) {
fmt.Println("Is Cygwin/MSYS2 Terminal")
} else {
fmt.Println("Is Not Terminal")
}
}
```
## Installation
```
$ go get github.com/mattn/go-isatty
```
## License
MIT
## Author
Yasuhiro Matsumoto (a.k.a mattn)
## Thanks
* k-takata: base idea for IsCygwinTerminal
https://github.com/k-takata/go-iscygpty

2
vendor/github.com/mattn/go-isatty/doc.go generated vendored Normal file

@@ -0,0 +1,2 @@
// Package isatty implements an interface to isatty.
package isatty

12
vendor/github.com/mattn/go-isatty/go.test.sh generated vendored Normal file

@@ -0,0 +1,12 @@
#!/usr/bin/env bash
set -e
echo "" > coverage.txt
for d in $(go list ./... | grep -v vendor); do
go test -race -coverprofile=profile.out -covermode=atomic "$d"
if [ -f profile.out ]; then
cat profile.out >> coverage.txt
rm profile.out
fi
done

20
vendor/github.com/mattn/go-isatty/isatty_bsd.go generated vendored Normal file

@@ -0,0 +1,20 @@
//go:build (darwin || freebsd || openbsd || netbsd || dragonfly || hurd) && !appengine && !tinygo
// +build darwin freebsd openbsd netbsd dragonfly hurd
// +build !appengine
// +build !tinygo
package isatty
import "golang.org/x/sys/unix"
// IsTerminal returns true if the file descriptor is a terminal.
func IsTerminal(fd uintptr) bool {
_, err := unix.IoctlGetTermios(int(fd), unix.TIOCGETA)
return err == nil
}
// IsCygwinTerminal returns true if the file descriptor is a cygwin or msys2
// terminal. It is always false on this platform.
func IsCygwinTerminal(fd uintptr) bool {
return false
}

17
vendor/github.com/mattn/go-isatty/isatty_others.go generated vendored Normal file

@@ -0,0 +1,17 @@
//go:build (appengine || js || nacl || tinygo || wasm) && !windows
// +build appengine js nacl tinygo wasm
// +build !windows
package isatty
// IsTerminal returns true if the file descriptor is a terminal, which
// is always false on js and App Engine classic, a sandboxed PaaS.
func IsTerminal(fd uintptr) bool {
return false
}
// IsCygwinTerminal returns true if the file descriptor is a cygwin or msys2
// terminal. It is always false on this platform.
func IsCygwinTerminal(fd uintptr) bool {
return false
}

23
vendor/github.com/mattn/go-isatty/isatty_plan9.go generated vendored Normal file

@@ -0,0 +1,23 @@
//go:build plan9
// +build plan9
package isatty
import (
"syscall"
)
// IsTerminal returns true if the given file descriptor is a terminal.
func IsTerminal(fd uintptr) bool {
path, err := syscall.Fd2path(int(fd))
if err != nil {
return false
}
return path == "/dev/cons" || path == "/mnt/term/dev/cons"
}
// IsCygwinTerminal returns true if the file descriptor is a cygwin or msys2
// terminal. It is always false on this platform.
func IsCygwinTerminal(fd uintptr) bool {
return false
}

21
vendor/github.com/mattn/go-isatty/isatty_solaris.go generated vendored Normal file

@@ -0,0 +1,21 @@
//go:build solaris && !appengine
// +build solaris,!appengine
package isatty
import (
"golang.org/x/sys/unix"
)
// IsTerminal returns true if the given file descriptor is a terminal.
// see: https://src.illumos.org/source/xref/illumos-gate/usr/src/lib/libc/port/gen/isatty.c
func IsTerminal(fd uintptr) bool {
_, err := unix.IoctlGetTermio(int(fd), unix.TCGETA)
return err == nil
}
// IsCygwinTerminal returns true if the file descriptor is a cygwin or msys2
// terminal. It is always false on this platform.
func IsCygwinTerminal(fd uintptr) bool {
return false
}

20
vendor/github.com/mattn/go-isatty/isatty_tcgets.go generated vendored Normal file

@@ -0,0 +1,20 @@
//go:build (linux || aix || zos) && !appengine && !tinygo
// +build linux aix zos
// +build !appengine
// +build !tinygo
package isatty
import "golang.org/x/sys/unix"
// IsTerminal returns true if the file descriptor is a terminal.
func IsTerminal(fd uintptr) bool {
_, err := unix.IoctlGetTermios(int(fd), unix.TCGETS)
return err == nil
}
// IsCygwinTerminal returns true if the file descriptor is a cygwin or msys2
// terminal. It is always false on this platform.
func IsCygwinTerminal(fd uintptr) bool {
return false
}

125
vendor/github.com/mattn/go-isatty/isatty_windows.go generated vendored Normal file

@@ -0,0 +1,125 @@
//go:build windows && !appengine
// +build windows,!appengine
package isatty
import (
"errors"
"strings"
"syscall"
"unicode/utf16"
"unsafe"
)
const (
objectNameInfo uintptr = 1
fileNameInfo = 2
fileTypePipe = 3
)
var (
kernel32 = syscall.NewLazyDLL("kernel32.dll")
ntdll = syscall.NewLazyDLL("ntdll.dll")
procGetConsoleMode = kernel32.NewProc("GetConsoleMode")
procGetFileInformationByHandleEx = kernel32.NewProc("GetFileInformationByHandleEx")
procGetFileType = kernel32.NewProc("GetFileType")
procNtQueryObject = ntdll.NewProc("NtQueryObject")
)
func init() {
// Check if GetFileInformationByHandleEx is available.
if procGetFileInformationByHandleEx.Find() != nil {
procGetFileInformationByHandleEx = nil
}
}
// IsTerminal returns true if the file descriptor is a terminal.
func IsTerminal(fd uintptr) bool {
var st uint32
r, _, e := syscall.Syscall(procGetConsoleMode.Addr(), 2, fd, uintptr(unsafe.Pointer(&st)), 0)
return r != 0 && e == 0
}
// isCygwinPipeName reports whether the pipe name is one used by a Cygwin/MSYS2 pty.
// A Cygwin/MSYS2 PTY has a name like:
// \{cygwin,msys}-XXXXXXXXXXXXXXXX-ptyN-{from,to}-master
func isCygwinPipeName(name string) bool {
token := strings.Split(name, "-")
if len(token) < 5 {
return false
}
if token[0] != `\msys` &&
token[0] != `\cygwin` &&
token[0] != `\Device\NamedPipe\msys` &&
token[0] != `\Device\NamedPipe\cygwin` {
return false
}
if token[1] == "" {
return false
}
if !strings.HasPrefix(token[2], "pty") {
return false
}
if token[3] != `from` && token[3] != `to` {
return false
}
if token[4] != "master" {
return false
}
return true
}
// getFileNameByHandle uses the undocumented ntdll NtQueryObject call to get the
// full file name from a file handle. GetFileInformationByHandleEx is not
// available before Windows Vista, so this is a fallback for Windows XP; it also
// works on Windows Vista through 10.
// See https://stackoverflow.com/a/18792477 for details.
func getFileNameByHandle(fd uintptr) (string, error) {
if procNtQueryObject == nil {
return "", errors.New("ntdll.dll: NtQueryObject not supported")
}
var buf [4 + syscall.MAX_PATH]uint16
var result int
r, _, e := syscall.Syscall6(procNtQueryObject.Addr(), 5,
fd, objectNameInfo, uintptr(unsafe.Pointer(&buf)), uintptr(2*len(buf)), uintptr(unsafe.Pointer(&result)), 0)
if r != 0 {
return "", e
}
return string(utf16.Decode(buf[4 : 4+buf[0]/2])), nil
}
// IsCygwinTerminal returns true if the file descriptor is a cygwin or msys2
// terminal.
func IsCygwinTerminal(fd uintptr) bool {
if procGetFileInformationByHandleEx == nil {
name, err := getFileNameByHandle(fd)
if err != nil {
return false
}
return isCygwinPipeName(name)
}
// Cygwin/msys's pty is a pipe.
ft, _, e := syscall.Syscall(procGetFileType.Addr(), 1, fd, 0, 0)
if ft != fileTypePipe || e != 0 {
return false
}
var buf [2 + syscall.MAX_PATH]uint16
r, _, e := syscall.Syscall6(procGetFileInformationByHandleEx.Addr(),
4, fd, fileNameInfo, uintptr(unsafe.Pointer(&buf)),
uintptr(len(buf)*2), 0, 0)
if r == 0 || e != 0 {
return false
}
l := *(*uint32)(unsafe.Pointer(&buf))
return isCygwinPipeName(string(utf16.Decode(buf[2 : 2+l/2])))
}

15
vendor/github.com/ncruces/go-strftime/.gitignore generated vendored Normal file

@@ -0,0 +1,15 @@
# Binaries for programs and plugins
*.exe
*.exe~
*.dll
*.so
*.dylib
# Test binary, built with `go test -c`
*.test
# Output of the go coverage tool, specifically when used with LiteIDE
*.out
# Dependency directories (remove the comment below to include it)
# vendor/

21
vendor/github.com/ncruces/go-strftime/LICENSE generated vendored Normal file

@@ -0,0 +1,21 @@
MIT License
Copyright (c) 2022 Nuno Cruces
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

5
vendor/github.com/ncruces/go-strftime/README.md generated vendored Normal file

@@ -0,0 +1,5 @@
# `strftime`/`strptime` compatible time formatting and parsing for Go
[![Go Reference](https://pkg.go.dev/badge/image)](https://pkg.go.dev/github.com/ncruces/go-strftime)
[![Go Report](https://goreportcard.com/badge/github.com/ncruces/go-strftime)](https://goreportcard.com/report/github.com/ncruces/go-strftime)
[![Go Coverage](https://github.com/ncruces/go-strftime/wiki/coverage.svg)](https://raw.githack.com/wiki/ncruces/go-strftime/coverage.html)

107
vendor/github.com/ncruces/go-strftime/parser.go generated vendored Normal file

@@ -0,0 +1,107 @@
package strftime
import "unicode/utf8"
type parser struct {
format func(spec, flag byte) error
literal func(byte) error
}
func (p *parser) parse(fmt string) error {
const (
initial = iota
percent
flagged
modified
)
var flag, modifier byte
var err error
state := initial
start := 0
for i, b := range []byte(fmt) {
switch state {
default:
if b == '%' {
state = percent
start = i
continue
}
err = p.literal(b)
case percent:
if b == '-' || b == ':' {
state = flagged
flag = b
continue
}
if b == 'E' || b == 'O' {
state = modified
modifier = b
flag = 0
continue
}
err = p.format(b, 0)
state = initial
case flagged:
if b == 'E' || b == 'O' {
state = modified
modifier = b
continue
}
err = p.format(b, flag)
state = initial
case modified:
if okModifier(modifier, b) {
err = p.format(b, flag)
} else {
err = p.literals(fmt[start : i+1])
}
state = initial
}
if err != nil {
if err, ok := err.(formatError); ok {
err.setDirective(fmt, start, i)
return err
}
return err
}
}
if state != initial {
return p.literals(fmt[start:])
}
return nil
}
func (p *parser) literals(literal string) error {
for _, b := range []byte(literal) {
if err := p.literal(b); err != nil {
return err
}
}
return nil
}
type literalErr string
func (e literalErr) Error() string {
return "strftime: unsupported literal: " + string(e)
}
type formatError struct {
message string
directive string
}
func (e formatError) Error() string {
return "strftime: unsupported directive: " + e.directive + " " + e.message
}
func (e *formatError) setDirective(str string, i, j int) {
_, n := utf8.DecodeRuneInString(str[j:])
e.directive = str[i : j+n]
}

96
vendor/github.com/ncruces/go-strftime/pkg.go generated vendored Normal file

@@ -0,0 +1,96 @@
/*
Package strftime provides strftime/strptime compatible time formatting and parsing.
The following formatting specifiers are available:
Date (Year, Month, Day):
%Y - Year with century (can be negative, 4 digits at least)
-0001, 0000, 1995, 2009, 14292, etc.
%C - year / 100 (round down, 20 in 2009)
%y - year % 100 (00..99)
%m - Month of the year, zero-padded (01..12)
%-m no-padded (1..12)
%B - Full month name (January)
%b - Abbreviated month name (Jan)
%h - Equivalent to %b
%d - Day of the month, zero-padded (01..31)
%-d no-padded (1..31)
%e - Day of the month, blank-padded ( 1..31)
%j - Day of the year (001..366)
%-j no-padded (1..366)
Time (Hour, Minute, Second, Subsecond):
%H - Hour of the day, 24-hour clock, zero-padded (00..23)
%-H no-padded (0..23)
%k - Hour of the day, 24-hour clock, blank-padded ( 0..23)
%I - Hour of the day, 12-hour clock, zero-padded (01..12)
%-I no-padded (1..12)
%l - Hour of the day, 12-hour clock, blank-padded ( 1..12)
%P - Meridian indicator, lowercase (am or pm)
%p - Meridian indicator, uppercase (AM or PM)
%M - Minute of the hour (00..59)
%-M no-padded (0..59)
%S - Second of the minute (00..60)
%-S no-padded (0..60)
%L - Millisecond of the second (000..999)
%f - Microsecond of the second (000000..999999)
%N - Nanosecond of the second (000000000..999999999)
Time zone:
%z - Time zone as hour and minute offset from UTC (e.g. +0900)
%:z - hour and minute offset from UTC with a colon (e.g. +09:00)
%Z - Time zone abbreviation (e.g. MST)
Weekday:
%A - Full weekday name (Sunday)
%a - Abbreviated weekday name (Sun)
%u - Day of the week (Monday is 1, 1..7)
%w - Day of the week (Sunday is 0, 0..6)
ISO 8601 week-based year and week number:
Week 1 of YYYY starts with a Monday and includes YYYY-01-04.
The days in the year before the first week are in the last week of
the previous year.
%G - Week-based year
%g - Last 2 digits of the week-based year (00..99)
%V - Week number of the week-based year (01..53)
%-V no-padded (1..53)
Week number:
Week 1 of YYYY starts with a Sunday or Monday (according to %U or %W).
The days in the year before the first week are in week 0.
%U - Week number of the year. The week starts with Sunday. (00..53)
%-U no-padded (0..53)
%W - Week number of the year. The week starts with Monday. (00..53)
%-W no-padded (0..53)
Seconds since the Unix Epoch:
%s - Number of seconds since 1970-01-01 00:00:00 UTC.
%Q - Number of milliseconds since 1970-01-01 00:00:00 UTC.
Literal string:
%n - Newline character (\n)
%t - Tab character (\t)
%% - Literal % character
Combination:
%c - date and time (%a %b %e %T %Y)
%D - Date (%m/%d/%y)
%F - ISO 8601 date format (%Y-%m-%d)
%v - VMS date (%e-%b-%Y)
%x - Same as %D
%X - Same as %T
%r - 12-hour time (%I:%M:%S %p)
%R - 24-hour time (%H:%M)
%T - 24-hour time (%H:%M:%S)
%+ - date(1) (%a %b %e %H:%M:%S %Z %Y)
The modifiers “E” and “O” are ignored.
*/
package strftime

241
vendor/github.com/ncruces/go-strftime/specifiers.go generated vendored Normal file

@@ -0,0 +1,241 @@
package strftime
import "strings"
// https://strftime.org/
func goLayout(spec, flag byte, parsing bool) string {
switch spec {
default:
return ""
case 'B':
return "January"
case 'b', 'h':
return "Jan"
case 'm':
if flag == '-' || parsing {
return "1"
}
return "01"
case 'A':
return "Monday"
case 'a':
return "Mon"
case 'e':
return "_2"
case 'd':
if flag == '-' || parsing {
return "2"
}
return "02"
case 'j':
if flag == '-' {
if parsing {
return "__2"
}
return ""
}
return "002"
case 'I':
if flag == '-' || parsing {
return "3"
}
return "03"
case 'H':
if flag == '-' && !parsing {
return ""
}
return "15"
case 'M':
if flag == '-' || parsing {
return "4"
}
return "04"
case 'S':
if flag == '-' || parsing {
return "5"
}
return "05"
case 'y':
return "06"
case 'Y':
return "2006"
case 'p':
return "PM"
case 'P':
return "pm"
case 'Z':
return "MST"
case 'z':
if flag == ':' {
if parsing {
return "Z07:00"
}
return "-07:00"
}
if parsing {
return "Z0700"
}
return "-0700"
case '+':
if parsing {
return "Mon Jan _2 15:4:5 MST 2006"
}
return "Mon Jan _2 15:04:05 MST 2006"
case 'c':
if parsing {
return "Mon Jan _2 15:4:5 2006"
}
return "Mon Jan _2 15:04:05 2006"
case 'v':
return "_2-Jan-2006"
case 'F':
if parsing {
return "2006-1-2"
}
return "2006-01-02"
case 'D', 'x':
if parsing {
return "1/2/06"
}
return "01/02/06"
case 'r':
if parsing {
return "3:4:5 PM"
}
return "03:04:05 PM"
case 'T', 'X':
if parsing {
return "15:4:5"
}
return "15:04:05"
case 'R':
if parsing {
return "15:4"
}
return "15:04"
case '%':
return "%"
case 't':
return "\t"
case 'n':
return "\n"
}
}
// https://nsdateformatter.com/
func uts35Pattern(spec, flag byte) string {
switch spec {
default:
return ""
case 'B':
return "MMMM"
case 'b', 'h':
return "MMM"
case 'm':
if flag == '-' {
return "M"
}
return "MM"
case 'A':
return "EEEE"
case 'a':
return "E"
case 'd':
if flag == '-' {
return "d"
}
return "dd"
case 'j':
if flag == '-' {
return "D"
}
return "DDD"
case 'I':
if flag == '-' {
return "h"
}
return "hh"
case 'H':
if flag == '-' {
return "H"
}
return "HH"
case 'M':
if flag == '-' {
return "m"
}
return "mm"
case 'S':
if flag == '-' {
return "s"
}
return "ss"
case 'y':
return "yy"
case 'Y':
return "yyyy"
case 'g':
return "YY"
case 'G':
return "YYYY"
case 'V':
if flag == '-' {
return "w"
}
return "ww"
case 'p':
return "a"
case 'Z':
return "zzz"
case 'z':
if flag == ':' {
return "xxx"
}
return "xx"
case 'L':
return "SSS"
case 'f':
return "SSSSSS"
case 'N':
return "SSSSSSSSS"
case '+':
return "E MMM d HH:mm:ss zzz yyyy"
case 'c':
return "E MMM d HH:mm:ss yyyy"
case 'v':
return "d-MMM-yyyy"
case 'F':
return "yyyy-MM-dd"
case 'D', 'x':
return "MM/dd/yy"
case 'r':
return "hh:mm:ss a"
case 'T', 'X':
return "HH:mm:ss"
case 'R':
return "HH:mm"
case '%':
return "%"
case 't':
return "\t"
case 'n':
return "\n"
}
}
// http://man.he.net/man3/strftime
func okModifier(mod, spec byte) bool {
if mod == 'E' {
return strings.Contains("cCxXyY", string(spec))
}
if mod == 'O' {
return strings.Contains("deHImMSuUVwWy", string(spec))
}
return false
}

346
vendor/github.com/ncruces/go-strftime/strftime.go generated vendored Normal file

@@ -0,0 +1,346 @@
package strftime
import (
"bytes"
"strconv"
"time"
)
// Format returns a textual representation of the time value
// formatted according to the strftime format specification.
func Format(fmt string, t time.Time) string {
buf := buffer(fmt)
return string(AppendFormat(buf, fmt, t))
}
// AppendFormat is like Format, but appends the textual representation
// to dst and returns the extended buffer.
func AppendFormat(dst []byte, fmt string, t time.Time) []byte {
var parser parser
parser.literal = func(b byte) error {
dst = append(dst, b)
return nil
}
parser.format = func(spec, flag byte) error {
switch spec {
case 'A':
dst = append(dst, t.Weekday().String()...)
return nil
case 'a':
dst = append(dst, t.Weekday().String()[:3]...)
return nil
case 'B':
dst = append(dst, t.Month().String()...)
return nil
case 'b', 'h':
dst = append(dst, t.Month().String()[:3]...)
return nil
case 'm':
dst = appendInt2(dst, int(t.Month()), flag)
return nil
case 'd':
dst = appendInt2(dst, int(t.Day()), flag)
return nil
case 'e':
dst = appendInt2(dst, int(t.Day()), ' ')
return nil
case 'I':
dst = append12Hour(dst, t, flag)
return nil
case 'l':
dst = append12Hour(dst, t, ' ')
return nil
case 'H':
dst = appendInt2(dst, t.Hour(), flag)
return nil
case 'k':
dst = appendInt2(dst, t.Hour(), ' ')
return nil
case 'M':
dst = appendInt2(dst, t.Minute(), flag)
return nil
case 'S':
dst = appendInt2(dst, t.Second(), flag)
return nil
case 'L':
dst = append(dst, t.Format(".000")[1:]...)
return nil
case 'f':
dst = append(dst, t.Format(".000000")[1:]...)
return nil
case 'N':
dst = append(dst, t.Format(".000000000")[1:]...)
return nil
case 'y':
dst = t.AppendFormat(dst, "06")
return nil
case 'Y':
dst = t.AppendFormat(dst, "2006")
return nil
case 'C':
dst = t.AppendFormat(dst, "2006")
dst = dst[:len(dst)-2]
return nil
case 'U':
dst = appendWeekNumber(dst, t, flag, true)
return nil
case 'W':
dst = appendWeekNumber(dst, t, flag, false)
return nil
case 'V':
_, w := t.ISOWeek()
dst = appendInt2(dst, w, flag)
return nil
case 'g':
y, _ := t.ISOWeek()
dst = year(y).AppendFormat(dst, "06")
return nil
case 'G':
y, _ := t.ISOWeek()
dst = year(y).AppendFormat(dst, "2006")
return nil
case 's':
dst = strconv.AppendInt(dst, t.Unix(), 10)
return nil
case 'Q':
dst = strconv.AppendInt(dst, t.UnixMilli(), 10)
return nil
case 'w':
w := t.Weekday()
dst = appendInt1(dst, int(w))
return nil
case 'u':
if w := t.Weekday(); w == 0 {
dst = append(dst, '7')
} else {
dst = appendInt1(dst, int(w))
}
return nil
case 'j':
if flag == '-' {
dst = strconv.AppendInt(dst, int64(t.YearDay()), 10)
} else {
dst = t.AppendFormat(dst, "002")
}
return nil
}
if layout := goLayout(spec, flag, false); layout != "" {
dst = t.AppendFormat(dst, layout)
return nil
}
dst = append(dst, '%')
if flag != 0 {
dst = append(dst, flag)
}
dst = append(dst, spec)
return nil
}
parser.parse(fmt)
return dst
}
// Parse converts a textual representation of time to the time value it represents
// according to the strptime format specification.
//
// The following specifiers are not supported for parsing:
//
// %g %k %l %s %u %w %C %G %Q %U %V %W
//
// You must also avoid digits and these letter sequences
// in fmt literals:
//
// Jan Mon MST PM pm
func Parse(fmt, value string) (time.Time, error) {
pattern, err := layout(fmt, true)
if err != nil {
return time.Time{}, err
}
return time.Parse(pattern, value)
}
// Layout converts a strftime format specification
// to a Go time pattern specification.
//
// The following specifiers are not supported by Go patterns:
//
// %f %g %k %l %s %u %w %C %G %L %N %Q %U %V %W
//
// You must also avoid digits and these letter sequences
// in fmt literals:
//
// Jan Mon MST PM pm
func Layout(fmt string) (string, error) {
return layout(fmt, false)
}
func layout(fmt string, parsing bool) (string, error) {
dst := buffer(fmt)
var parser parser
parser.literal = func(b byte) error {
if '0' <= b && b <= '9' {
return literalErr(b)
}
dst = append(dst, b)
if b == 'M' || b == 'T' || b == 'm' || b == 'n' {
switch {
case bytes.HasSuffix(dst, []byte("Jan")):
return literalErr("Jan")
case bytes.HasSuffix(dst, []byte("Mon")):
return literalErr("Mon")
case bytes.HasSuffix(dst, []byte("MST")):
return literalErr("MST")
case bytes.HasSuffix(dst, []byte("PM")):
return literalErr("PM")
case bytes.HasSuffix(dst, []byte("pm")):
return literalErr("pm")
}
}
return nil
}
parser.format = func(spec, flag byte) error {
if layout := goLayout(spec, flag, parsing); layout != "" {
dst = append(dst, layout...)
return nil
}
switch spec {
default:
return formatError{}
case 'L', 'f', 'N':
if bytes.HasSuffix(dst, []byte(".")) || bytes.HasSuffix(dst, []byte(",")) {
switch spec {
default:
dst = append(dst, "000"...)
case 'f':
dst = append(dst, "000000"...)
case 'N':
dst = append(dst, "000000000"...)
}
return nil
}
return formatError{message: "must follow '.' or ','"}
}
}
if err := parser.parse(fmt); err != nil {
return "", err
}
return string(dst), nil
}
// UTS35 converts a strftime format specification
// to a Unicode Technical Standard #35 Date Format Pattern.
//
// The following specifiers are not supported by UTS35:
//
// %e %k %l %u %w %C %P %U %W
func UTS35(fmt string) (string, error) {
const quote = '\''
var quoted bool
dst := buffer(fmt)
var parser parser
parser.literal = func(b byte) error {
if b == quote {
dst = append(dst, quote, quote)
return nil
}
if !quoted && ('a' <= b && b <= 'z' || 'A' <= b && b <= 'Z') {
dst = append(dst, quote)
quoted = true
}
dst = append(dst, b)
return nil
}
parser.format = func(spec, flag byte) error {
if quoted {
dst = append(dst, quote)
quoted = false
}
if pattern := uts35Pattern(spec, flag); pattern != "" {
dst = append(dst, pattern...)
return nil
}
return formatError{}
}
if err := parser.parse(fmt); err != nil {
return "", err
}
if quoted {
dst = append(dst, quote)
}
return string(dst), nil
}
func buffer(format string) (buf []byte) {
const bufSize = 64
max := len(format) + 10
if max < bufSize {
var b [bufSize]byte
buf = b[:0]
} else {
buf = make([]byte, 0, max)
}
return
}
func year(y int) time.Time {
return time.Date(y, time.January, 1, 0, 0, 0, 0, time.UTC)
}
func appendWeekNumber(dst []byte, t time.Time, flag byte, sunday bool) []byte {
offset := int(t.Weekday())
if sunday {
offset = 6 - offset
} else if offset != 0 {
offset = 7 - offset
}
return appendInt2(dst, (t.YearDay()+offset)/7, flag)
}
func append12Hour(dst []byte, t time.Time, flag byte) []byte {
h := t.Hour()
if h == 0 {
h = 12
} else if h > 12 {
h -= 12
}
return appendInt2(dst, h, flag)
}
func appendInt1(dst []byte, i int) []byte {
return append(dst, byte('0'+i))
}
func appendInt2(dst []byte, i int, flag byte) []byte {
if flag == 0 || i >= 10 {
return append(dst, smallsString[i*2:i*2+2]...)
}
if flag == ' ' {
dst = append(dst, flag)
}
return appendInt1(dst, i)
}
const smallsString = "" +
"00010203040506070809" +
"10111213141516171819" +
"20212223242526272829" +
"30313233343536373839" +
"40414243444546474849" +
"50515253545556575859" +
"60616263646566676869" +
"70717273747576777879" +
"80818283848586878889" +
"90919293949596979899"

27
vendor/github.com/remyoudompheng/bigfft/LICENSE generated vendored Normal file

@@ -0,0 +1,27 @@
Copyright (c) 2012 The Go Authors. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

54
vendor/github.com/remyoudompheng/bigfft/README generated vendored Normal file

@@ -0,0 +1,54 @@
This library is a toy proof-of-concept implementation of the
well-known Schönhage–Strassen method for multiplying integers.
It is not expected to have a real-life use case outside number
theory computations, nor is it expected to be used in any production
system.
If you are using it in your project, you may want to carefully
examine the actual requirement or problem you are trying to solve.
# Comparison with the standard library and GMP
Benchmarking math/big vs. bigfft
Number size old ns/op new ns/op delta
1kb 1599 1640 +2.56%
10kb 61533 62170 +1.04%
50kb 833693 831051 -0.32%
100kb 2567995 2693864 +4.90%
1Mb 105237800 28446400 -72.97%
5Mb 1272947000 168554600 -86.76%
10Mb 3834354000 405120200 -89.43%
20Mb 11514488000 845081600 -92.66%
50Mb 49199945000 2893950000 -94.12%
100Mb 147599836000 5921594000 -95.99%
Benchmarking GMP vs bigfft
Number size GMP ns/op Go ns/op delta
1kb 536 1500 +179.85%
10kb 26669 50777 +90.40%
50kb 252270 658534 +161.04%
100kb 686813 2127534 +209.77%
1Mb 12100000 22391830 +85.06%
5Mb 111731843 133550600 +19.53%
10Mb 212314000 318595800 +50.06%
20Mb 490196000 671512800 +36.99%
50Mb 1280000000 2451476000 +91.52%
100Mb 2673000000 5228991000 +95.62%
Benchmarks were run on a Core 2 Quad Q8200 (2.33GHz).
FFT is enabled when input numbers are over 200kbits.
Scanning large decimal number from strings.
(math/big [n^2 complexity] vs bigfft [n^1.6 complexity], Core i5-4590)
Digits old ns/op new ns/op delta
1e3 9995 10876 +8.81%
1e4 175356 243806 +39.03%
1e5 9427422 6780545 -28.08%
1e6 1776707489 144867502 -91.85%
2e6 6865499995 346540778 -94.95%
5e6 42641034189 1069878799 -97.49%
10e6 151975273589 2693328580 -98.23%

33
vendor/github.com/remyoudompheng/bigfft/arith_decl.go generated vendored Normal file

@@ -0,0 +1,33 @@
// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package bigfft
import (
"math/big"
_ "unsafe"
)
type Word = big.Word
//go:linkname addVV math/big.addVV
func addVV(z, x, y []Word) (c Word)
//go:linkname subVV math/big.subVV
func subVV(z, x, y []Word) (c Word)
//go:linkname addVW math/big.addVW
func addVW(z, x []Word, y Word) (c Word)
//go:linkname subVW math/big.subVW
func subVW(z, x []Word, y Word) (c Word)
//go:linkname shlVU math/big.shlVU
func shlVU(z, x []Word, s uint) (c Word)
//go:linkname mulAddVWW math/big.mulAddVWW
func mulAddVWW(z, x []Word, y, r Word) (c Word)
//go:linkname addMulVVW math/big.addMulVVW
func addMulVVW(z, x []Word, y Word) (c Word)

216
vendor/github.com/remyoudompheng/bigfft/fermat.go generated vendored Normal file

@@ -0,0 +1,216 @@
package bigfft
import (
"math/big"
)
// Arithmetic modulo 2^n+1.
// A fermat of length w+1 represents a number modulo 2^(w*_W) + 1. The last
// word is zero or one. A number has at most two representatives satisfying the
// 0-1 last word constraint.
type fermat nat
func (n fermat) String() string { return nat(n).String() }
func (z fermat) norm() {
n := len(z) - 1
c := z[n]
if c == 0 {
return
}
if z[0] >= c {
z[n] = 0
z[0] -= c
return
}
// z[0] < z[n].
subVW(z, z, c) // Subtract c
if c > 1 {
z[n] -= c - 1
c = 1
}
// Add back c.
if z[n] == 1 {
z[n] = 0
return
} else {
addVW(z, z, 1)
}
}
// Shift computes (x << k) mod (2^n+1).
func (z fermat) Shift(x fermat, k int) {
if len(z) != len(x) {
panic("len(z) != len(x) in Shift")
}
n := len(x) - 1
// Shift by n*_W is taking the opposite.
k %= 2 * n * _W
if k < 0 {
k += 2 * n * _W
}
neg := false
if k >= n*_W {
k -= n * _W
neg = true
}
kw, kb := k/_W, k%_W
z[n] = 1 // Add (-1)
if !neg {
for i := 0; i < kw; i++ {
z[i] = 0
}
// Shift left by kw words.
// x = a·2^(n-k) + b
// x<<k = (b<<k) - a
copy(z[kw:], x[:n-kw])
b := subVV(z[:kw+1], z[:kw+1], x[n-kw:])
if z[kw+1] > 0 {
z[kw+1] -= b
} else {
subVW(z[kw+1:], z[kw+1:], b)
}
} else {
for i := kw + 1; i < n; i++ {
z[i] = 0
}
// Shift left and negate, by kw words.
copy(z[:kw+1], x[n-kw:n+1]) // z_low = x_high
b := subVV(z[kw:n], z[kw:n], x[:n-kw]) // z_high -= x_low
z[n] -= b
}
// Add back 1.
if z[n] > 0 {
z[n]--
} else if z[0] < ^big.Word(0) {
z[0]++
} else {
addVW(z, z, 1)
}
// Shift left by kb bits
shlVU(z, z, uint(kb))
z.norm()
}
// ShiftHalf shifts x left by k/2 bits. Shifting by 1/2 bit
// is multiplication by sqrt(2) mod 2^n+1 which is 2^(3n/4) - 2^(n/4).
// A temporary buffer must be provided in tmp.
func (z fermat) ShiftHalf(x fermat, k int, tmp fermat) {
n := len(z) - 1
if k%2 == 0 {
z.Shift(x, k/2)
return
}
u := (k - 1) / 2
a := u + (3*_W/4)*n
b := u + (_W/4)*n
z.Shift(x, a)
tmp.Shift(x, b)
z.Sub(z, tmp)
}
// Add computes addition mod 2^n+1.
func (z fermat) Add(x, y fermat) fermat {
if len(z) != len(x) {
panic("Add: len(z) != len(x)")
}
addVV(z, x, y) // there cannot be a carry here.
z.norm()
return z
}
// Sub computes subtraction mod 2^n+1.
func (z fermat) Sub(x, y fermat) fermat {
if len(z) != len(x) {
panic("Sub: len(z) != len(x)")
}
n := len(y) - 1
b := subVV(z[:n], x[:n], y[:n])
b += y[n]
// If b > 0, we need to subtract b<<n, which is the same as adding b.
z[n] = x[n]
if z[0] <= ^big.Word(0)-b {
z[0] += b
} else {
addVW(z, z, b)
}
z.norm()
return z
}
func (z fermat) Mul(x, y fermat) fermat {
if len(x) != len(y) {
panic("Mul: len(x) != len(y)")
}
n := len(x) - 1
if n < 30 {
z = z[:2*n+2]
basicMul(z, x, y)
z = z[:2*n+1]
} else {
var xi, yi, zi big.Int
xi.SetBits(x)
yi.SetBits(y)
zi.SetBits(z)
zb := zi.Mul(&xi, &yi).Bits()
if len(zb) <= n {
// Short product.
copy(z, zb)
for i := len(zb); i < len(z); i++ {
z[i] = 0
}
return z
}
z = zb
}
// len(z) is at most 2n+1.
if len(z) > 2*n+1 {
panic("len(z) > 2n+1")
}
// We now have
// z = z[:n] + 1<<(n*W) * z[n:2n+1]
// which normalizes to:
// z = z[:n] - z[n:2n] + z[2n]
c1 := big.Word(0)
if len(z) > 2*n {
c1 = addVW(z[:n], z[:n], z[2*n])
}
c2 := big.Word(0)
if len(z) >= 2*n {
c2 = subVV(z[:n], z[:n], z[n:2*n])
} else {
m := len(z) - n
c2 = subVV(z[:m], z[:m], z[n:])
c2 = subVW(z[m:n], z[m:n], c2)
}
// Restore carries.
// Subtracting z[n] -= c2 is the same
// as z[0] += c2
z = z[:n+1]
z[n] = c1
c := addVW(z, z, c2)
if c != 0 {
panic("impossible")
}
z.norm()
return z
}
// copied from math/big
//
// basicMul multiplies x and y and leaves the result in z.
// The (non-normalized) result is placed in z[0 : len(x) + len(y)].
func basicMul(z, x, y fermat) {
// initialize z
for i := 0; i < len(z); i++ {
z[i] = 0
}
for i, d := range y {
if d != 0 {
z[len(x)+i] = addMulVVW(z[i:i+len(x)], x, d)
}
}
}

370
vendor/github.com/remyoudompheng/bigfft/fft.go generated vendored Normal file

@@ -0,0 +1,370 @@
// Package bigfft implements multiplication of big.Int using FFT.
//
// The implementation is based on the Schönhage-Strassen method
// using integer FFT modulo 2^n+1.
package bigfft
import (
"math/big"
"unsafe"
)
const _W = int(unsafe.Sizeof(big.Word(0)) * 8)
type nat []big.Word
func (n nat) String() string {
v := new(big.Int)
v.SetBits(n)
return v.String()
}
// fftThreshold is the size (in words) above which FFT is used over
// Karatsuba from math/big.
//
// TestCalibrate seems to indicate a threshold of 60kbits on 32-bit
// arches and 110kbits on 64-bit arches.
var fftThreshold = 1800
// Mul computes the product x*y and returns z.
// It can be used instead of the Mul method of
// *big.Int from math/big package.
func Mul(x, y *big.Int) *big.Int {
xwords := len(x.Bits())
ywords := len(y.Bits())
if xwords > fftThreshold && ywords > fftThreshold {
return mulFFT(x, y)
}
return new(big.Int).Mul(x, y)
}
func mulFFT(x, y *big.Int) *big.Int {
var xb, yb nat = x.Bits(), y.Bits()
zb := fftmul(xb, yb)
z := new(big.Int)
z.SetBits(zb)
if x.Sign()*y.Sign() < 0 {
z.Neg(z)
}
return z
}
// A FFT size of K=1<<k is adequate when K is about 2*sqrt(N) where
// N = x.Bitlen() + y.Bitlen().
func fftmul(x, y nat) nat {
k, m := fftSize(x, y)
xp := polyFromNat(x, k, m)
yp := polyFromNat(y, k, m)
rp := xp.Mul(&yp)
return rp.Int()
}
// fftSizeThreshold[i] is the maximal size (in bits) where we should use
// fft size i.
var fftSizeThreshold = [...]int64{0, 0, 0,
4 << 10, 8 << 10, 16 << 10, // 5
32 << 10, 64 << 10, 1 << 18, 1 << 20, 3 << 20, // 10
8 << 20, 30 << 20, 100 << 20, 300 << 20, 600 << 20,
}
// fftSize returns the FFT length k and the number m of words
// per chunk, such that m << k is larger than the number of words
// in x*y.
func fftSize(x, y nat) (k uint, m int) {
words := len(x) + len(y)
bits := int64(words) * int64(_W)
k = uint(len(fftSizeThreshold))
for i := range fftSizeThreshold {
if fftSizeThreshold[i] > bits {
k = uint(i)
break
}
}
// The 1<<k chunks of m words must have N bits so that
// 2^N-1 is larger than x*y. That is, m<<k > words
m = words>>k + 1
return
}
// valueSize returns the length (in words) to use for polynomial
// coefficients, to compute a correct product of polynomials P*Q
// where deg(P*Q) < K (== 1<<k) and where coefficients of P and Q are
// less than b^m (== 1 << (m*_W)).
// The chosen length (in bits) must be a multiple of 1 << (k-extra).
func valueSize(k uint, m int, extra uint) int {
// The coefficients of P*Q are less than b^(2m)*K
// so we need W * valueSize >= 2*m*W+K
n := 2*m*_W + int(k) // necessary bits
K := 1 << (k - extra)
if K < _W {
K = _W
}
n = ((n / K) + 1) * K // round to a multiple of K
return n / _W
}
// poly represents an integer via a polynomial in Z[x]/(x^K+1)
// where K is the FFT length and b^m is the computation basis 1<<(m*_W).
// If P = a[0] + a[1] x + ... a[n] x^(K-1), the associated natural number
// is P(b^m).
type poly struct {
k uint // k is such that K = 1<<k.
m int // the m such that P(b^m) is the original number.
a []nat // a slice of at most K m-word coefficients.
}
// polyFromNat slices the number x into a polynomial
// with 1<<k coefficients made of m words.
func polyFromNat(x nat, k uint, m int) poly {
p := poly{k: k, m: m}
length := len(x)/m + 1
p.a = make([]nat, length)
for i := range p.a {
if len(x) < m {
p.a[i] = make(nat, m)
copy(p.a[i], x)
break
}
p.a[i] = x[:m]
x = x[m:]
}
return p
}
// Int evaluates back a poly to its integer value.
func (p *poly) Int() nat {
length := len(p.a)*p.m + 1
if na := len(p.a); na > 0 {
length += len(p.a[na-1])
}
n := make(nat, length)
m := p.m
np := n
for i := range p.a {
l := len(p.a[i])
c := addVV(np[:l], np[:l], p.a[i])
if np[l] < ^big.Word(0) {
np[l] += c
} else {
addVW(np[l:], np[l:], c)
}
np = np[m:]
}
n = trim(n)
return n
}
func trim(n nat) nat {
for i := range n {
if n[len(n)-1-i] != 0 {
return n[:len(n)-i]
}
}
return nil
}
// Mul multiplies p and q modulo X^K-1, where K = 1<<p.k.
// The product is done via a Fourier transform.
func (p *poly) Mul(q *poly) poly {
// extra=2 because:
// * some power of 2 is a K-th root of unity when n is a multiple of K/2.
// * 2 itself is a square (see fermat.ShiftHalf)
n := valueSize(p.k, p.m, 2)
pv, qv := p.Transform(n), q.Transform(n)
rv := pv.Mul(&qv)
r := rv.InvTransform()
r.m = p.m
return r
}
// A polValues represents the value of a poly at the powers of a
// K-th root of unity θ=2^(l/2) in Z/(b^n+1)Z, where b^n = 2^(K/4*l).
type polValues struct {
k uint // k is such that K = 1<<k.
n int // the length of coefficients, n*_W a multiple of K/4.
values []fermat // a slice of K (n+1)-word values
}
// Transform evaluates p at θ^i for i = 0...K-1, where
// θ is a K-th primitive root of unity in Z/(b^n+1)Z.
func (p *poly) Transform(n int) polValues {
k := p.k
inputbits := make([]big.Word, (n+1)<<k)
input := make([]fermat, 1<<k)
// Now compute p(θ^i) for i = 0 ... K-1
valbits := make([]big.Word, (n+1)<<k)
values := make([]fermat, 1<<k)
for i := range values {
input[i] = inputbits[i*(n+1) : (i+1)*(n+1)]
if i < len(p.a) {
copy(input[i], p.a[i])
}
values[i] = fermat(valbits[i*(n+1) : (i+1)*(n+1)])
}
fourier(values, input, false, n, k)
return polValues{k, n, values}
}
// InvTransform reconstructs p (modulo X^K - 1) from its
// values at θ^i for i = 0..K-1.
func (v *polValues) InvTransform() poly {
k, n := v.k, v.n
// Perform an inverse Fourier transform to recover p.
pbits := make([]big.Word, (n+1)<<k)
p := make([]fermat, 1<<k)
for i := range p {
p[i] = fermat(pbits[i*(n+1) : (i+1)*(n+1)])
}
fourier(p, v.values, true, n, k)
// Divide by K, and untwist q to recover p.
u := make(fermat, n+1)
a := make([]nat, 1<<k)
for i := range p {
u.Shift(p[i], -int(k))
copy(p[i], u)
a[i] = nat(p[i])
}
return poly{k: k, m: 0, a: a}
}
// NTransform evaluates p at θω^i for i = 0...K-1, where
// θ is a (2K)-th primitive root of unity in Z/(b^n+1)Z
// and ω = θ².
func (p *poly) NTransform(n int) polValues {
k := p.k
if len(p.a) >= 1<<k {
panic("Transform: len(p.a) >= 1<<k")
}
// θ is represented as a shift.
θshift := (n * _W) >> k
// p(x) = a_0 + a_1 x + ... + a_{K-1} x^(K-1)
// p(θx) = q(x) where
// q(x) = a_0 + θa_1 x + ... + θ^(K-1) a_{K-1} x^(K-1)
//
// Twist p by θ to obtain q.
tbits := make([]big.Word, (n+1)<<k)
twisted := make([]fermat, 1<<k)
src := make(fermat, n+1)
for i := range twisted {
twisted[i] = fermat(tbits[i*(n+1) : (i+1)*(n+1)])
if i < len(p.a) {
for i := range src {
src[i] = 0
}
copy(src, p.a[i])
twisted[i].Shift(src, θshift*i)
}
}
// Now compute q(ω^i) for i = 0 ... K-1
valbits := make([]big.Word, (n+1)<<k)
values := make([]fermat, 1<<k)
for i := range values {
values[i] = fermat(valbits[i*(n+1) : (i+1)*(n+1)])
}
fourier(values, twisted, false, n, k)
return polValues{k, n, values}
}
// InvNTransform reconstructs a polynomial from its values at
// roots of x^K+1. The m field of the returned polynomial
// is unspecified.
func (v *polValues) InvNTransform() poly {
k := v.k
n := v.n
θshift := (n * _W) >> k
// Perform an inverse Fourier transform to recover q.
qbits := make([]big.Word, (n+1)<<k)
q := make([]fermat, 1<<k)
for i := range q {
q[i] = fermat(qbits[i*(n+1) : (i+1)*(n+1)])
}
fourier(q, v.values, true, n, k)
// Divide by K, and untwist q to recover p.
u := make(fermat, n+1)
a := make([]nat, 1<<k)
for i := range q {
u.Shift(q[i], -int(k)-i*θshift)
copy(q[i], u)
a[i] = nat(q[i])
}
return poly{k: k, m: 0, a: a}
}
// fourier performs an unnormalized Fourier transform
// of src, a length 1<<k vector of numbers modulo b^n+1
// where b = 1<<_W.
func fourier(dst []fermat, src []fermat, backward bool, n int, k uint) {
var rec func(dst, src []fermat, size uint)
tmp := make(fermat, n+1) // pre-allocate temporary variables.
tmp2 := make(fermat, n+1) // pre-allocate temporary variables.
// The recursion function of the FFT.
// The root of unity used in the transform is ω=1<<(ω2shift/2).
// The source array may use shifted indices (i.e. the i-th
// element is src[i << idxShift]).
rec = func(dst, src []fermat, size uint) {
idxShift := k - size
ω2shift := (4 * n * _W) >> size
if backward {
ω2shift = -ω2shift
}
// Easy cases.
if len(src[0]) != n+1 || len(dst[0]) != n+1 {
panic("len(src[0]) != n+1 || len(dst[0]) != n+1")
}
switch size {
case 0:
copy(dst[0], src[0])
return
case 1:
dst[0].Add(src[0], src[1<<idxShift]) // dst[0] = src[0] + src[1]
dst[1].Sub(src[0], src[1<<idxShift]) // dst[1] = src[0] - src[1]
return
}
// Let P(x) = src[0] + src[1<<idxShift] * x + ... + src[K-1 << idxShift] * x^(K-1)
// Then P(x) = Q1(x²) + x*Q2(x²)
// where Q1's coefficients are src with indices shifted by 1
// where Q2's coefficients are src[1<<idxShift:] with indices shifted by 1
// Split destination vectors in halves.
dst1 := dst[:1<<(size-1)]
dst2 := dst[1<<(size-1):]
// Transform Q1 and Q2 in the halves.
rec(dst1, src, size-1)
rec(dst2, src[1<<idxShift:], size-1)
// Reconstruct P's transform from transforms of Q1 and Q2.
// dst[i] is dst1[i] + ω^i * dst2[i]
// dst[i + 1<<(k-1)] is dst1[i] + ω^(i+K/2) * dst2[i]
//
for i := range dst1 {
tmp.ShiftHalf(dst2[i], i*ω2shift, tmp2) // ω^i * dst2[i]
dst2[i].Sub(dst1[i], tmp)
dst1[i].Add(dst1[i], tmp)
}
}
rec(dst, src, k)
}
// Mul returns the pointwise product of p and q.
func (p *polValues) Mul(q *polValues) (r polValues) {
n := p.n
r.k, r.n = p.k, p.n
r.values = make([]fermat, len(p.values))
bits := make([]big.Word, len(p.values)*(n+1))
buf := make(fermat, 8*n)
for i := range r.values {
r.values[i] = bits[i*(n+1) : (i+1)*(n+1)]
z := buf.Mul(p.values[i], q.values[i])
copy(r.values[i], z)
}
return
}

70
vendor/github.com/remyoudompheng/bigfft/scan.go generated vendored Normal file

@@ -0,0 +1,70 @@
package bigfft
import (
"math/big"
)
// FromDecimalString converts the base 10 string
// representation of a natural (non-negative) number
// into a *big.Int.
// Its asymptotic complexity is less than quadratic.
func FromDecimalString(s string) *big.Int {
var sc scanner
z := new(big.Int)
sc.scan(z, s)
return z
}
type scanner struct {
// powers[i] is 10^(2^i * quadraticScanThreshold).
powers []*big.Int
}
func (s *scanner) chunkSize(size int) (int, *big.Int) {
if size <= quadraticScanThreshold {
panic("size <= quadraticScanThreshold")
}
pow := uint(0)
for n := size; n > quadraticScanThreshold; n /= 2 {
pow++
}
// threshold * 2^(pow-1) <= size < threshold * 2^pow
return quadraticScanThreshold << (pow - 1), s.power(pow - 1)
}
func (s *scanner) power(k uint) *big.Int {
for i := len(s.powers); i <= int(k); i++ {
z := new(big.Int)
if i == 0 {
if quadraticScanThreshold%14 != 0 {
panic("quadraticScanThreshold % 14 != 0")
}
z.Exp(big.NewInt(1e14), big.NewInt(quadraticScanThreshold/14), nil)
} else {
z.Mul(s.powers[i-1], s.powers[i-1])
}
s.powers = append(s.powers, z)
}
return s.powers[k]
}
func (s *scanner) scan(z *big.Int, str string) {
if len(str) <= quadraticScanThreshold {
z.SetString(str, 10)
return
}
sz, pow := s.chunkSize(len(str))
// Scan the left half.
s.scan(z, str[:len(str)-sz])
// FIXME: reuse temporaries.
left := Mul(z, pow)
// Scan the right half
s.scan(z, str[len(str)-sz:])
z.Add(z, left)
}
// quadraticScanThreshold is the number of digits
// below which big.Int.SetString is more efficient
// than subquadratic algorithms.
// 1232 digits fit in 4096 bits.
const quadraticScanThreshold = 1232

27
vendor/golang.org/x/exp/LICENSE generated vendored Normal file

@@ -0,0 +1,27 @@
Copyright 2009 The Go Authors.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google LLC nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

22
vendor/golang.org/x/exp/PATENTS generated vendored Normal file

@@ -0,0 +1,22 @@
Additional IP Rights Grant (Patents)
"This implementation" means the copyrightable works distributed by
Google as part of the Go project.
Google hereby grants to You a perpetual, worldwide, non-exclusive,
no-charge, royalty-free, irrevocable (except as stated in this section)
patent license to make, have made, use, offer to sell, sell, import,
transfer and otherwise run, modify and propagate the contents of this
implementation of Go, where such license applies only to those patent
claims, both currently owned or controlled by Google and acquired in
the future, licensable by Google that are necessarily infringed by this
implementation of Go. This grant does not include claims that would be
infringed only as a consequence of further modification of this
implementation. If you or your agent or exclusive licensee institute or
order or agree to the institution of patent litigation against any
entity (including a cross-claim or counterclaim in a lawsuit) alleging
that this implementation of Go or any code incorporated within this
implementation of Go constitutes direct or contributory patent
infringement, or inducement of patent infringement, then any patent
rights granted to you under this License for this implementation of Go
shall terminate as of the date such litigation is filed.

54
vendor/golang.org/x/exp/constraints/constraints.go generated vendored Normal file

@@ -0,0 +1,54 @@
// Copyright 2021 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package constraints defines a set of useful constraints to be used
// with type parameters.
package constraints
import "cmp"
// Signed is a constraint that permits any signed integer type.
// If future releases of Go add new predeclared signed integer types,
// this constraint will be modified to include them.
type Signed interface {
~int | ~int8 | ~int16 | ~int32 | ~int64
}
// Unsigned is a constraint that permits any unsigned integer type.
// If future releases of Go add new predeclared unsigned integer types,
// this constraint will be modified to include them.
type Unsigned interface {
~uint | ~uint8 | ~uint16 | ~uint32 | ~uint64 | ~uintptr
}
// Integer is a constraint that permits any integer type.
// If future releases of Go add new predeclared integer types,
// this constraint will be modified to include them.
type Integer interface {
Signed | Unsigned
}
// Float is a constraint that permits any floating-point type.
// If future releases of Go add new predeclared floating-point types,
// this constraint will be modified to include them.
type Float interface {
~float32 | ~float64
}
// Complex is a constraint that permits any complex numeric type.
// If future releases of Go add new predeclared complex numeric types,
// this constraint will be modified to include them.
type Complex interface {
~complex64 | ~complex128
}
// Ordered is a constraint that permits any ordered type: any type
// that supports the operators < <= >= >.
// If future releases of Go add new ordered types,
// this constraint will be modified to include them.
//
// This type is redundant since Go 1.21 introduced [cmp.Ordered].
//
//go:fix inline
type Ordered = cmp.Ordered

6
vendor/modernc.org/libc/.gitignore generated vendored Normal file

@@ -0,0 +1,6 @@
*.gz
*.zip
go.work
go.sum
musl-*
COPYRIGHT-MUSL

22
vendor/modernc.org/libc/AUTHORS generated vendored Normal file

@@ -0,0 +1,22 @@
# This file lists authors for copyright purposes. This file is distinct from
# the CONTRIBUTORS files. See the latter for an explanation.
#
# Names should be added to this file as:
# Name or Organization <email address>
#
# The email address is not required for organizations.
#
# Please keep the list sorted.
Dan Kortschak <dan@kortschak.io>
Dan Peterson <danp@danp.net>
David Leadbeater <dgl@dgl.cx>
Fabrice Colliot <f.colliot@gmail.com>
Jan Mercl <0xjnml@gmail.com>
Jason DeBettencourt <jasond17@gmail.com>
Jasper Siepkes <jasper@siepkes.nl>
Koichi Shiraishi <zchee.io@gmail.com>
Marius Orcsik <marius@federated.id>
Patricio Whittingslow <graded.sp@gmail.com>
Scot C Bontrager <scot@indievisible.org>
Steffen Butzer <steffen(dot)butzer@outlook.com>

26
vendor/modernc.org/libc/CONTRIBUTORS generated vendored Normal file

@@ -0,0 +1,26 @@
# This file lists people who contributed code to this repository. The AUTHORS
# file lists the copyright holders; this file lists people.
#
# Names should be added to this file like so:
# Name <email address>
#
# Please keep the list sorted.
Bjørn Wiegell <bj.wiegell@gmail.com>
Dan Kortschak <dan@kortschak.io>
Dan Peterson <danp@danp.net>
David Leadbeater <dgl@dgl.cx>
Fabrice Colliot <f.colliot@gmail.com>
Jaap Aarts <jaap.aarts1@gmail.com>
Jan Mercl <0xjnml@gmail.com>
Jason DeBettencourt <jasond17@gmail.com>
Jasper Siepkes <jasper@siepkes.nl>
Koichi Shiraishi <zchee.io@gmail.com>
Leonardo Taccari <leot@NetBSD.org>
Marius Orcsik <marius@federated.id>
Patricio Whittingslow <graded.sp@gmail.com>
Roman Khafizianov <roman@any.org>
Scot C Bontrager <scot@indievisible.org>
Steffen Butzer <steffen(dot)butzer@outlook.com>
W. Michael Petullo <mike@flyn.org>
ZHU Zijia <piggynl@outlook.com>

27
vendor/modernc.org/libc/LICENSE generated vendored Normal file

@@ -0,0 +1,27 @@
Copyright (c) 2017 The Libc Authors. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the names of the authors nor the names of the
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

305
vendor/modernc.org/libc/LICENSE-3RD-PARTY.md generated vendored Normal file

@@ -0,0 +1,305 @@
# Third-Party Software Notices
This repository contains code and assets acquired from third-party sources.
While the main project is licensed under the BSD-3 License, the components
listed below are subject to their own specific license terms and copyright
notices.
The following is a list of third-party software included in this repository,
their locations, and their respective licenses.
----
## Go
* **URL:** https://github.com/golang/go
----
Copyright (c) 2009 The Go Authors. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
----
## musl libc
* **URL:** https://musl.libc.org/
----
musl as a whole is licensed under the following standard MIT license:
----------------------------------------------------------------------
Copyright © 2005-2020 Rich Felker, et al.
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
----------------------------------------------------------------------
Authors/contributors include:
A. Wilcox
Ada Worcester
Alex Dowad
Alex Suykov
Alexander Monakov
Andre McCurdy
Andrew Kelley
Anthony G. Basile
Aric Belsito
Arvid Picciani
Bartosz Brachaczek
Benjamin Peterson
Bobby Bingham
Boris Brezillon
Brent Cook
Chris Spiegel
Clément Vasseur
Daniel Micay
Daniel Sabogal
Daurnimator
David Carlier
David Edelsohn
Denys Vlasenko
Dmitry Ivanov
Dmitry V. Levin
Drew DeVault
Emil Renner Berthing
Fangrui Song
Felix Fietkau
Felix Janda
Gianluca Anzolin
Hauke Mehrtens
He X
Hiltjo Posthuma
Isaac Dunham
Jaydeep Patil
Jens Gustedt
Jeremy Huntwork
Jo-Philipp Wich
Joakim Sindholt
John Spencer
Julien Ramseier
Justin Cormack
Kaarle Ritvanen
Khem Raj
Kylie McClain
Leah Neukirchen
Luca Barbato
Luka Perkov
M Farkas-Dyck (Strake)
Mahesh Bodapati
Markus Wichmann
Masanori Ogino
Michael Clark
Michael Forney
Mikhail Kremnyov
Natanael Copa
Nicholas J. Kain
orc
Pascal Cuoq
Patrick Oppenlander
Petr Hosek
Petr Skocik
Pierre Carrier
Reini Urban
Rich Felker
Richard Pennington
Ryan Fairfax
Samuel Holland
Segev Finer
Shiz
sin
Solar Designer
Stefan Kristiansson
Stefan O'Rear
Szabolcs Nagy
Timo Teräs
Trutz Behn
Valentin Ochs
Will Dietz
William Haddon
William Pitcock
Portions of this software are derived from third-party works licensed
under terms compatible with the above MIT license:
The TRE regular expression implementation (src/regex/reg* and
src/regex/tre*) is Copyright © 2001-2008 Ville Laurikari and licensed
under a 2-clause BSD license (license text in the source files). The
included version has been heavily modified by Rich Felker in 2012, in
the interests of size, simplicity, and namespace cleanliness.
Much of the math library code (src/math/* and src/complex/*) is
Copyright © 1993,2004 Sun Microsystems or
Copyright © 2003-2011 David Schultz or
Copyright © 2003-2009 Steven G. Kargl or
Copyright © 2003-2009 Bruce D. Evans or
Copyright © 2008 Stephen L. Moshier or
Copyright © 2017-2018 Arm Limited
and labelled as such in comments in the individual source files. All
have been licensed under extremely permissive terms.
The ARM memcpy code (src/string/arm/memcpy.S) is Copyright © 2008
The Android Open Source Project and is licensed under a two-clause BSD
license. It was taken from Bionic libc, used on Android.
The AArch64 memcpy and memset code (src/string/aarch64/*) are
Copyright © 1999-2019, Arm Limited.
The implementation of DES for crypt (src/crypt/crypt_des.c) is
Copyright © 1994 David Burren. It is licensed under a BSD license.
The implementation of blowfish crypt (src/crypt/crypt_blowfish.c) was
originally written by Solar Designer and placed into the public
domain. The code also comes with a fallback permissive license for use
in jurisdictions that may not recognize the public domain.
The smoothsort implementation (src/stdlib/qsort.c) is Copyright © 2011
Valentin Ochs and is licensed under an MIT-style license.
The x86_64 port was written by Nicholas J. Kain and is licensed under
the standard MIT terms.
The mips and microblaze ports were originally written by Richard
Pennington for use in the ellcc project. The original code was adapted
by Rich Felker for build system and code conventions during upstream
integration. It is licensed under the standard MIT terms.
The mips64 port was contributed by Imagination Technologies and is
licensed under the standard MIT terms.
The powerpc port was also originally written by Richard Pennington,
and later supplemented and integrated by John Spencer. It is licensed
under the standard MIT terms.
All other files which have no copyright comments are original works
produced specifically for use as part of this library, written either
by Rich Felker, the main author of the library, or by one or more
contributors listed above. Details on authorship of individual files
can be found in the git version control history of the project. The
omission of copyright and license comments in each file is in the
interest of source tree size.
In addition, permission is hereby granted for all public header files
(include/* and arch/*/bits/*) and crt files intended to be linked into
applications (crt/*, ldso/dlstart.c, and arch/*/crt_arch.h) to omit
the copyright notice and permission notice otherwise required by the
license, and to use these files without any requirement of
attribution. These files include substantial contributions from:
Bobby Bingham
John Spencer
Nicholas J. Kain
Rich Felker
Richard Pennington
Stefan Kristiansson
Szabolcs Nagy
all of whom have explicitly granted such permission.
This file previously contained text expressing a belief that most of
the files covered by the above exception were sufficiently trivial not
to be subject to copyright, resulting in confusion over whether it
negated the permissions granted in the license. In the spirit of
permissive licensing, and of not having licensing issues being an
obstacle to adoption, that text has been removed.
----
## go-netdb
* **URL:** https://github.com/dominikh/go-netdb
----
Copyright (c) 2012 Dominik Honnef
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
----
## NixOS/nixpkgs
* **URL:** https://github.com/NixOS/nixpkgs
----
Copyright (c) 2003-2025 Eelco Dolstra and the Nixpkgs/NixOS contributors
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

119
vendor/modernc.org/libc/Makefile generated vendored Normal file

@@ -0,0 +1,119 @@
# Copyright 2024 The Libc Authors. All rights reserved.
# Use of this source code is governed by a BSD-style
# license that can be found in the LICENSE file.
.PHONY: all build_all_targets check clean download edit editor generate dev membrk-test test work xtest short-test xlibc libc-test surface vet
SHELL=/bin/bash -o pipefail
DIR = /tmp/libc
TAR = musl-7ada6dde6f9dc6a2836c3d92c2f762d35fd229e0.tar.gz
URL = https://git.musl-libc.org/cgit/musl/snapshot/$(TAR)
all: editor
golint 2>&1
staticcheck 2>&1
build_all_targets: vet
./build_all_targets.sh
echo done
clean:
rm -f log-* cpu.test mem.test *.out
git clean -fd
find testdata/nsz.repo.hu/ -name \*.go -delete
make -C testdata/nsz.repo.hu/libc-test/ cleanall
go clean
check:
staticcheck 2>&1 | grep -v U1000
download:
@if [ ! -f $(TAR) ]; then wget $(URL) ; fi
edit:
@if [ -f "Session.vim" ]; then gvim -S & else gvim -p Makefile go.mod builder.json & fi
editor:
# gofmt -l -s -w *.go
go test -c -o /dev/null
go build -o /dev/null -v generator*.go
go vet 2>&1 | grep -n 'asm_' || true
generate: download
mkdir -p $(DIR) || true
rm -rf $(DIR)/*
GO_GENERATE_DIR=$(DIR) go run generator*.go
go build -v
go test -v -short -count=1 ./...
git status
dev: download
mkdir -p $(DIR) || true
rm -rf $(DIR)/*
echo -n > /tmp/ccgo.log
GO_GENERATE_DIR=$(DIR) GO_GENERATE_DEV=1 go run -tags=ccgo.dmesg,ccgo.assert generator*.go
go build -v
go test -v -short -count=1 ./...
git status
membrk-test:
echo -n > /tmp/ccgo.log
touch log-test
cp log-test log-test0
go test -v -timeout 24h -count=1 -tags=libc.membrk 2>&1 | tee log-test
grep -a 'TRC\|TODO\|ERRORF\|FAIL' log-test || true 2>&1 | tee -a log-test
test:
go test -v -timeout 24h -count=1
short-test:
echo -n > /tmp/ccgo.log
touch log-test
cp log-test log-test0
go test -v -timeout 24h -count=1 -short 2>&1 | tee log-test
grep -a 'TRC\|TODO\|ERRORF\|FAIL' log-test || true 2>&1 | tee -a log-test
xlibc:
echo -n > /tmp/ccgo.log
touch log-test
cp log-test log-test0
go test -v -timeout 24h -count=1 -tags=ccgo.dmesg,ccgo.assert 2>&1 -run TestLibc | tee log-test
grep -a 'TRC\|TODO\|ERRORF\|FAIL' log-test || true 2>&1 | tee -a log-test
xpthread:
echo -n > /tmp/ccgo.log
touch log-test
cp log-test log-test0
go test -v -timeout 24h -count=1 2>&1 -run TestLibc -re pthread | tee log-test
grep -a 'TRC\|TODO\|ERRORF\|FAIL' log-test || true 2>&1 | tee -a log-test
libc-test:
echo -n > /tmp/ccgo.log
touch log-test
cp log-test log-test0
go test -v -timeout 24h -count=1 2>&1 -run TestLibc | tee log-test
# grep -a 'TRC\|TODO\|ERRORF\|FAIL' log-test || true 2>&1 | tee -a log-test
grep -o 'undefined: \<.*\>' log-test | sort -u
xtest:
echo -n > /tmp/ccgo.log
touch log-test
cp log-test log-test0
go test -v -timeout 24h -count=1 -tags=ccgo.dmesg,ccgo.assert 2>&1 | tee log-test
grep -a 'TRC\|TODO\|ERRORF\|FAIL' log-test || true 2>&1 | tee -a log-test
work:
rm -f go.work*
go work init
go work use .
go work use ../ccgo/v4
go work use ../ccgo/v3
go work use ../cc/v4
surface:
surface > surface.new
surface surface.old surface.new > log-todo-surface || true
vet:
go vet 2>&1 | grep abi0 | grep -v 'misuse' || true

9
vendor/modernc.org/libc/README.md generated vendored Normal file

@@ -0,0 +1,9 @@
# libc
[![LiberaPay](https://liberapay.com/assets/widgets/donate.svg)](https://liberapay.com/jnml/donate)
[![receives](https://img.shields.io/liberapay/receives/jnml.svg?logo=liberapay)](https://liberapay.com/jnml/donate)
[![patrons](https://img.shields.io/liberapay/patrons/jnml.svg?logo=liberapay)](https://liberapay.com/jnml/donate)
[![Go Reference](https://pkg.go.dev/badge/modernc.org/libc.svg)](https://pkg.go.dev/modernc.org/libc)
Package libc is a partial reimplementation of C libc in pure Go.

5762
vendor/modernc.org/libc/abi0_linux_amd64.go generated vendored Normal file

File diff suppressed because it is too large

28747
vendor/modernc.org/libc/abi0_linux_amd64.s generated vendored Normal file

File diff suppressed because it is too large

109
vendor/modernc.org/libc/aliases.go generated vendored Normal file

@@ -0,0 +1,109 @@
// Copyright 2024 The Libc Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build linux && (amd64 || arm64 || loong64 || ppc64le || s390x || riscv64 || 386 || arm)
package libc // import "modernc.org/libc"
func X__vm_wait(tls *TLS) {}
// static volatile int *const dummy_lockptr = 0;
//
// weak_alias(dummy_lockptr, __atexit_lockptr);
// weak_alias(dummy_lockptr, __bump_lockptr);
// weak_alias(dummy_lockptr, __sem_open_lockptr);
var X__atexit_lockptr int32
var X__bump_lockptr int32
var X__sem_open_lockptr int32
// static int dummy(int fd)
//
// {
// return fd;
// }
//
// weak_alias(dummy, __aio_close);
func X__aio_close(tls *TLS, fd int32) int32 {
return fd
}
func Xtzset(tls *TLS) {
___tzset(tls)
}
type DIR = TDIR
const DT_DETACHED = _DT_DETACHED
const DT_EXITING = _DT_EXITING
const DT_JOINABLE = _DT_JOINABLE
type FILE = TFILE
type HEADER = THEADER
func Xfcntl64(tls *TLS, fd int32, cmd int32, va uintptr) (r int32) {
return Xfcntl(tls, fd, cmd, va)
}
func Xfopen64(tls *TLS, filename uintptr, mode uintptr) (r uintptr) {
return Xfopen(tls, filename, mode)
}
func Xfstat64(tls *TLS, fd int32, st uintptr) (r int32) {
return Xfstat(tls, fd, st)
}
func Xftruncate64(tls *TLS, fd int32, length Toff_t) (r int32) {
return Xftruncate(tls, fd, length)
}
func Xgetrlimit64(tls *TLS, resource int32, rlim uintptr) (r int32) {
return Xgetrlimit(tls, resource, rlim)
}
func Xlseek64(tls *TLS, fd int32, offset Toff_t, whence int32) (r Toff_t) {
return Xlseek(tls, fd, offset, whence)
}
func Xlstat64(tls *TLS, path uintptr, buf uintptr) (r int32) {
return Xlstat(tls, path, buf)
}
func Xmkstemp64(tls *TLS, template uintptr) (r int32) {
return Xmkstemp(tls, template)
}
func Xmkstemps64(tls *TLS, template uintptr, len1 int32) (r int32) {
return Xmkstemps(tls, template, len1)
}
func Xmmap64(tls *TLS, start uintptr, len1 Tsize_t, prot int32, flags int32, fd int32, off Toff_t) (r uintptr) {
return Xmmap(tls, start, len1, prot, flags, fd, off)
}
func Xopen64(tls *TLS, filename uintptr, flags int32, va uintptr) (r int32) {
return Xopen(tls, filename, flags, va)
}
func Xreaddir64(tls *TLS, dir uintptr) (r uintptr) {
return Xreaddir(tls, dir)
}
func Xsetrlimit64(tls *TLS, resource int32, rlim uintptr) (r int32) {
return Xsetrlimit(tls, resource, rlim)
}
func Xstat64(tls *TLS, path uintptr, buf uintptr) (r int32) {
return Xstat(tls, path, buf)
}
func Xpthread_setcancelstate(tls *TLS, new int32, old uintptr) int32 {
return _pthread_setcancelstate(tls, new, old)
}
func Xpthread_sigmask(tls *TLS, now int32, set, old uintptr) int32 {
return _pthread_sigmask(tls, now, set, old)
}

77
vendor/modernc.org/libc/asm_386.s generated vendored Normal file

@@ -0,0 +1,77 @@
#include "textflag.h"
// static inline void a_or_64(volatile uint64_t *p, uint64_t v)
TEXT ·a_or_64(SB),NOSPLIT,$0
MOVL p+0(FP), BX
MOVL v+4(FP), AX
LOCK
ORL AX, 0(BX)
MOVL v+8(FP), AX
LOCK
ORL AX, 4(BX)
RET
// static inline void a_and_64(volatile uint64_t *p, uint64_t v)
TEXT ·a_and_64(SB),NOSPLIT,$0
MOVL p+0(FP), BX
MOVL v+4(FP), AX
LOCK
ANDL AX, 0(BX)
MOVL v+8(FP), AX
LOCK
ANDL AX, 4(BX)
RET
// static inline int a_cas(volatile int *p, int t, int s)
TEXT ·a_cas(SB),NOSPLIT,$0
MOVL p+0(FP), BX
MOVL t+4(FP), AX
MOVL s+8(FP), CX
LOCK
CMPXCHGL CX, 0(BX)
MOVL AX, ret+12(FP)
RET
// static inline void a_barrier()
TEXT ·a_barrier(SB),NOSPLIT,$0
MFENCE
RET
// #define a_crash a_crash
// static inline void a_crash()
// {
// __asm__ __volatile__( "hlt" : : : "memory" );
// }
TEXT ·a_crash(SB),NOSPLIT,$0
HLT
// static inline void *a_cas_p(volatile void *p, void *t, void *s)
TEXT ·a_cas_p(SB),NOSPLIT,$0
MOVL p+0(FP), BX
MOVL t+4(FP), AX
MOVL s+8(FP), CX
LOCK
CMPXCHGL CX, 0(BX)
MOVL AX, ret+12(FP)
RET
// static inline void a_or(volatile int *p, int v)
TEXT ·a_or(SB),NOSPLIT,$0
MOVL p+0(FP), BX
MOVL v+4(FP), AX
LOCK
ORL AX, 0(BX)
RET
// static inline int a_fetch_add(volatile int *p, int v)
TEXT ·a_fetch_add(SB),NOSPLIT,$0
MOVL p+0(FP), BX
MOVL v+4(FP), AX
LOCK
XADDL AX, 0(BX)
RET
// static inline void a_spin()
TEXT ·a_spin(SB),NOSPLIT,$0
PAUSE
RET
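The i386 assembly above implements musl's `a_cas` with `LOCK CMPXCHG`. As a point of comparison, the same compare-and-swap contract can be sketched in pure Go with `sync/atomic`; the `cas` helper below is illustrative only and not part of the vendored package:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// cas mirrors musl's a_cas(p, t, s): atomically replace *p with s
// only if *p == t, and return the value observed before the attempt.
func cas(p *int32, t, s int32) int32 {
	for {
		old := atomic.LoadInt32(p)
		if old != t {
			return old // value differed; report what we saw
		}
		if atomic.CompareAndSwapInt32(p, t, s) {
			return t // swap landed; the old value was t
		}
		// lost a race between Load and CAS; retry
	}
}

func main() {
	v := int32(1)
	fmt.Println(cas(&v, 1, 2), v) // swap succeeds: prints 1 2
	fmt.Println(cas(&v, 1, 3), v) // swap fails:    prints 2 2
}
```

The retry loop is needed because Go exposes CAS as a boolean success flag rather than returning the prior value, as the C interface does.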

105
vendor/modernc.org/libc/atomic.go generated vendored Normal file

@@ -0,0 +1,105 @@
// Copyright 2024 The Libc Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build linux && (amd64 || arm64 || loong64 || ppc64le || s390x || riscv64 || 386 || arm)
package libc // import "modernc.org/libc"
import (
"math"
mbits "math/bits"
"sync/atomic"
"unsafe"
)
func a_store_8(addr uintptr, val int8) int8 {
*(*int8)(unsafe.Pointer(addr)) = val
return val
}
func a_load_8(addr uintptr) (val int8) {
return *(*int8)(unsafe.Pointer(addr))
}
func a_load_16(addr uintptr) (val int16) {
if addr&1 != 0 {
panic("unaligned atomic access")
}
return *(*int16)(unsafe.Pointer(addr))
}
func a_store_16(addr uintptr, val uint16) {
if addr&1 != 0 {
panic("unaligned atomic access")
}
*(*uint16)(unsafe.Pointer(addr)) = val
}
// static inline int a_ctz_64(uint64_t x)
func _a_ctz_64(tls *TLS, x uint64) int32 {
return int32(mbits.TrailingZeros64(x))
}
func AtomicAddFloat32(addr *float32, delta float32) (new float32) {
v := AtomicLoadFloat32(addr) + delta
AtomicStoreFloat32(addr, v)
return v
}
func AtomicLoadFloat32(addr *float32) (val float32) {
return math.Float32frombits(atomic.LoadUint32((*uint32)(unsafe.Pointer(addr))))
}
func AtomicStoreFloat32(addr *float32, val float32) {
atomic.StoreUint32((*uint32)(unsafe.Pointer(addr)), math.Float32bits(val))
}
func AtomicAddFloat64(addr *float64, delta float64) (new float64) {
v := AtomicLoadFloat64(addr) + delta
AtomicStoreFloat64(addr, v)
return v
}
func AtomicLoadFloat64(addr *float64) (val float64) {
return math.Float64frombits(atomic.LoadUint64((*uint64)(unsafe.Pointer(addr))))
}
func AtomicStoreFloat64(addr *float64, val float64) {
atomic.StoreUint64((*uint64)(unsafe.Pointer(addr)), math.Float64bits(val))
}
func AtomicAddInt32(addr *int32, delta int32) (new int32) { return atomic.AddInt32(addr, delta) }
func AtomicAddInt64(addr *int64, delta int64) (new int64) { return atomic.AddInt64(addr, delta) }
func AtomicAddUint32(addr *uint32, delta uint32) (new uint32) { return atomic.AddUint32(addr, delta) }
func AtomicAddUint64(addr *uint64, delta uint64) (new uint64) { return atomic.AddUint64(addr, delta) }
func AtomicAddUintptr(addr *uintptr, delta uintptr) (new uintptr) {
return atomic.AddUintptr(addr, delta)
}
func AtomicLoadInt32(addr *int32) (val int32) { return atomic.LoadInt32(addr) }
func AtomicLoadInt64(addr *int64) (val int64) { return atomic.LoadInt64(addr) }
func AtomicLoadUint32(addr *uint32) (val uint32) { return atomic.LoadUint32(addr) }
func AtomicLoadUint64(addr *uint64) (val uint64) { return atomic.LoadUint64(addr) }
func AtomicLoadUintptr(addr *uintptr) (val uintptr) { return atomic.LoadUintptr(addr) }
func AtomicStoreInt32(addr *int32, val int32) { atomic.StoreInt32(addr, val) }
func AtomicStoreUint32(addr *uint32, val uint32) { atomic.StoreUint32(addr, val) }
func AtomicStoreUint64(addr *uint64, val uint64) { atomic.StoreUint64(addr, val) }
func AtomicStoreUintptr(addr *uintptr, val uintptr) { atomic.StoreUintptr(addr, val) }
func AtomicStoreInt64(addr *int64, val int64) { atomic.StoreInt64(addr, val) }
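Note that `AtomicAddFloat32`/`AtomicAddFloat64` above perform a separate atomic load and atomic store, so the read-modify-write as a whole is not atomic under contention. A hedged sketch of a lock-free alternative using a CAS retry loop over the float's bit pattern (the `casAddFloat64` name is illustrative, not part of the package):

```go
package main

import (
	"fmt"
	"math"
	"sync/atomic"
	"unsafe"
)

// casAddFloat64 adds delta to *addr atomically by retrying a
// compare-and-swap on the float's uint64 bit representation.
func casAddFloat64(addr *float64, delta float64) float64 {
	p := (*uint64)(unsafe.Pointer(addr))
	for {
		old := atomic.LoadUint64(p)
		sum := math.Float64frombits(old) + delta
		if atomic.CompareAndSwapUint64(p, old, math.Float64bits(sum)) {
			return sum
		}
		// another writer intervened; reload and retry
	}
}

func main() {
	x := 1.5
	fmt.Println(casAddFloat64(&x, 2.25)) // prints 3.75
}
```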

16
vendor/modernc.org/libc/atomic32.go generated vendored Normal file

@@ -0,0 +1,16 @@
// Copyright 2024 The Libc Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build linux && (386 || arm)
package libc // import "modernc.org/libc"
import (
mbits "math/bits"
)
// static inline int a_ctz_l(unsigned long x)
func _a_ctz_l(tls *TLS, x ulong) int32 {
return int32(mbits.TrailingZeros32(x))
}

16
vendor/modernc.org/libc/atomic64.go generated vendored Normal file

@@ -0,0 +1,16 @@
// Copyright 2024 The Libc Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build linux && (amd64 || arm64 || loong64 || ppc64le || s390x || riscv64)
package libc // import "modernc.org/libc"
import (
mbits "math/bits"
)
// static inline int a_ctz_l(unsigned long x)
func _a_ctz_l(tls *TLS, x ulong) int32 {
return int32(mbits.TrailingZeros64(x))
}

77
vendor/modernc.org/libc/build_all_targets.sh generated vendored Normal file

@@ -0,0 +1,77 @@
set -e
for tag in none libc.dmesg libc.membrk libc.memgrind libc.strace libc.memexpvar
do
echo "-tags=$tag"
echo "GOOS=darwin GOARCH=amd64"
GOOS=darwin GOARCH=amd64 go build -tags=$tag -v ./...
GOOS=darwin GOARCH=amd64 go test -tags=$tag -c -o /dev/null
echo "GOOS=darwin GOARCH=arm64"
GOOS=darwin GOARCH=arm64 go build -tags=$tag -v ./...
GOOS=darwin GOARCH=arm64 go test -tags=$tag -c -o /dev/null
#TODO echo "GOOS=freebsd GOARCH=386"
#TODO GOOS=freebsd GOARCH=386 go build -tags=$tag -v ./...
#TODO GOOS=freebsd GOARCH=386 go test -tags=$tag -c -o /dev/null
echo "GOOS=freebsd GOARCH=amd64"
GOOS=freebsd GOARCH=amd64 go build -tags=$tag -v ./...
GOOS=freebsd GOARCH=amd64 go test -tags=$tag -c -o /dev/null
echo "GOOS=freebsd GOARCH=arm64"
GOOS=freebsd GOARCH=arm64 go build -tags=$tag -v ./...
GOOS=freebsd GOARCH=arm64 go test -tags=$tag -c -o /dev/null
echo "GOOS=illumos GOARCH=amd64"
GOOS=illumos GOARCH=amd64 go build -tags=$tag -v ./...
GOOS=illumos GOARCH=amd64 go test -tags=$tag -c -o /dev/null
#TODO echo "GOOS=freebsd GOARCH=arm"
#TODO GOOS=freebsd GOARCH=arm go build -tags=$tag -v ./...
#TODO GOOS=freebsd GOARCH=arm go test -tags=$tag -c -o /dev/null
echo "GOOS=linux GOARCH=386"
GOOS=linux GOARCH=386 go build -tags=$tag -v ./...
GOOS=linux GOARCH=386 go test -tags=$tag -c -o /dev/null
echo "GOOS=linux GOARCH=amd64"
GOOS=linux GOARCH=amd64 go build -tags=$tag -v ./...
GOOS=linux GOARCH=amd64 go test -tags=$tag -c -o /dev/null
echo "GOOS=linux GOARCH=arm"
GOOS=linux GOARCH=arm go build -tags=$tag -v ./...
GOOS=linux GOARCH=arm go test -tags=$tag -c -o /dev/null
echo "GOOS=linux GOARCH=arm64"
GOOS=linux GOARCH=arm64 go build -tags=$tag -v ./...
GOOS=linux GOARCH=arm64 go test -tags=$tag -c -o /dev/null
echo "GOOS=linux GOARCH=loong64"
GOOS=linux GOARCH=loong64 go build -tags=$tag -v ./...
GOOS=linux GOARCH=loong64 go test -tags=$tag -c -o /dev/null
# echo "GOOS=linux GOARCH=mips64le"
# GOOS=linux GOARCH=mips64le go build -tags=$tag -v ./...
# GOOS=linux GOARCH=mips64le go test -tags=$tag -c -o /dev/null
echo "GOOS=linux GOARCH=ppc64le"
GOOS=linux GOARCH=ppc64le go build -tags=$tag -v ./...
GOOS=linux GOARCH=ppc64le go test -tags=$tag -c -o /dev/null
echo "GOOS=linux GOARCH=riscv64"
GOOS=linux GOARCH=riscv64 go build -tags=$tag -v ./...
GOOS=linux GOARCH=riscv64 go test -tags=$tag -c -o /dev/null
echo "GOOS=linux GOARCH=s390x"
GOOS=linux GOARCH=s390x go build -tags=$tag -v ./...
GOOS=linux GOARCH=s390x go test -tags=$tag -c -o /dev/null
echo "GOOS=netbsd GOARCH=amd64"
GOOS=netbsd GOARCH=amd64 go build -tags=$tag -v ./...
GOOS=netbsd GOARCH=amd64 go test -tags=$tag -c -o /dev/null
echo "GOOS=netbsd GOARCH=arm"
GOOS=netbsd GOARCH=arm go build -tags=$tag -v ./...
GOOS=netbsd GOARCH=arm go test -tags=$tag -c -o /dev/null
echo "GOOS=openbsd GOARCH=386"
GOOS=openbsd GOARCH=386 go build -tags=$tag -v ./...
GOOS=openbsd GOARCH=386 go test -tags=$tag -c -o /dev/null
echo "GOOS=openbsd GOARCH=amd64"
GOOS=openbsd GOARCH=amd64 go build -tags=$tag -v ./...
GOOS=openbsd GOARCH=amd64 go test -tags=$tag -c -o /dev/null
echo "GOOS=openbsd GOARCH=arm64"
GOOS=openbsd GOARCH=arm64 go build -tags=$tag -v ./...
GOOS=openbsd GOARCH=arm64 go test -tags=$tag -c -o /dev/null
echo "GOOS=windows GOARCH=386"
GOOS=windows GOARCH=386 go build -tags=$tag -v ./...
GOOS=windows GOARCH=386 go test -tags=$tag -c -o /dev/null
echo "GOOS=windows GOARCH=amd64"
GOOS=windows GOARCH=amd64 go build -tags=$tag -v ./...
GOOS=windows GOARCH=amd64 go test -tags=$tag -c -o /dev/null
echo "GOOS=windows GOARCH=arm64"
GOOS=windows GOARCH=arm64 go build -tags=$tag -v ./...
GOOS=windows GOARCH=arm64 go test -tags=$tag -c -o /dev/null
done

9
vendor/modernc.org/libc/builder.json generated vendored Normal file
View File

@@ -0,0 +1,9 @@
{
"autogen": "linux/(amd64|arm64|loong64|ppc64le|s390x|riscv64|386|arm$)",
"autotag": "darwin/(amd64|arm64)|freebsd/(amd64|arm64)|linux/(386|amd64|arm$|arm64|loong64|ppc64le|riscv64|s390x)|netbsd/amd64|openbsd/(amd64|arm64)|windows/(amd64|arm64|386)",
"autoupdate": "linux/amd64",
"download": [
{"re": "linux/(amd64|arm64|loong64|ppc64le|s390x|riscv64|386|arm$)", "files": ["https://git.musl-libc.org/cgit/musl/snapshot/musl-7ada6dde6f9dc6a2836c3d92c2f762d35fd229e0.tar.gz"]}
],
"test": "darwin/(amd64|arm64)|freebsd/(amd64|arm64)|linux/(386|amd64|arm$|arm64|loong64|ppc64le|riscv64|s390x)|netbsd/amd64|openbsd/(amd64|arm64)|windows/(amd64|arm64|386)"
}

443
vendor/modernc.org/libc/builtin.go generated vendored Normal file

@@ -0,0 +1,443 @@
// Copyright 2024 The Libc Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build linux && (amd64 || arm64 || loong64 || ppc64le || s390x || riscv64 || 386 || arm)
package libc // import "modernc.org/libc"
import (
"fmt"
"math"
mbits "math/bits"
"os"
"unsafe"
"modernc.org/mathutil"
)
func X__builtin_inff(tls *TLS) float32 {
return float32(math.Inf(1))
}
func X__builtin_nanf(tls *TLS, s uintptr) float32 {
return float32(math.NaN())
}
func X__builtin_printf(tls *TLS, fmt uintptr, va uintptr) (r int32) {
return Xprintf(tls, fmt, va)
}
func X__builtin_round(tls *TLS, x float64) (r float64) {
return Xround(tls, x)
}
func X__builtin_lround(tls *TLS, x float64) (r long) {
return Xlround(tls, x)
}
func X__builtin_roundf(tls *TLS, x float32) (r float32) {
return Xroundf(tls, x)
}
func X__builtin_expect(t *TLS, exp, c long) long {
return exp
}
func X__builtin_bzero(t *TLS, s uintptr, n Tsize_t) {
Xbzero(t, s, n)
}
func X__builtin_abort(t *TLS) {
Xabort(t)
}
func X__builtin_abs(t *TLS, j int32) int32 {
return Xabs(t, j)
}
func X__builtin_ctz(t *TLS, n uint32) int32 {
return int32(mbits.TrailingZeros32(n))
}
func X__builtin_clz(t *TLS, n uint32) int32 {
return int32(mbits.LeadingZeros32(n))
}
func X__builtin_clzll(t *TLS, n uint64) int32 {
return int32(mbits.LeadingZeros64(n))
}
func X__builtin_constant_p_impl() { panic(todo("internal error: should never be called")) }
func X__builtin_copysign(t *TLS, x, y float64) float64 {
return Xcopysign(t, x, y)
}
func X__builtin_copysignf(t *TLS, x, y float32) float32 {
return Xcopysignf(t, x, y)
}
func X__builtin_copysignl(t *TLS, x, y float64) float64 {
return Xcopysign(t, x, y)
}
func X__builtin_exit(t *TLS, status int32) {
Xexit(t, status)
}
func X__builtin_fabs(t *TLS, x float64) float64 {
return Xfabs(t, x)
}
func X__builtin_fabsf(t *TLS, x float32) float32 {
return Xfabsf(t, x)
}
func X__builtin_fabsl(t *TLS, x float64) float64 {
return Xfabsl(t, x)
}
func X__builtin_free(t *TLS, ptr uintptr) {
Xfree(t, ptr)
}
func X__builtin_getentropy(t *TLS, buf uintptr, n Tsize_t) int32 {
return Xgetentropy(t, buf, n)
}
func X__builtin_huge_val(t *TLS) float64 {
return math.Inf(1)
}
func X__builtin_huge_valf(t *TLS) float32 {
return float32(math.Inf(1))
}
func X__builtin_inf(t *TLS) float64 {
return math.Inf(1)
}
func X__builtin_infl(t *TLS) float64 {
return math.Inf(1)
}
func X__builtin_malloc(t *TLS, size Tsize_t) uintptr {
return Xmalloc(t, size)
}
func X__builtin_memcmp(t *TLS, s1, s2 uintptr, n Tsize_t) int32 {
return Xmemcmp(t, s1, s2, n)
}
func X__builtin_nan(t *TLS, s uintptr) float64 {
return math.NaN()
}
func X__builtin_nanl(t *TLS, s uintptr) float64 {
return math.NaN()
}
func X__builtin_prefetch(t *TLS, addr, args uintptr) {
}
func X__builtin_strchr(t *TLS, s uintptr, c int32) uintptr {
return Xstrchr(t, s, c)
}
func X__builtin_strcmp(t *TLS, s1, s2 uintptr) int32 {
return Xstrcmp(t, s1, s2)
}
func X__builtin_strcpy(t *TLS, dest, src uintptr) uintptr {
return Xstrcpy(t, dest, src)
}
func X__builtin_strlen(t *TLS, s uintptr) Tsize_t {
return Xstrlen(t, s)
}
func X__builtin_trap(t *TLS) {
Xabort(t)
}
func X__builtin_popcount(t *TLS, x uint32) int32 {
return int32(mbits.OnesCount32(x))
}
// char * __builtin___strcpy_chk (char *dest, const char *src, size_t os);
func X__builtin___strcpy_chk(t *TLS, dest, src uintptr, os Tsize_t) uintptr {
return Xstrcpy(t, dest, src)
}
func X__builtin_mmap(t *TLS, addr uintptr, length Tsize_t, prot, flags, fd int32, offset Toff_t) uintptr {
return Xmmap(t, addr, length, prot, flags, fd, offset)
}
// uint16_t __builtin_bswap16 (uint16_t x)
func X__builtin_bswap16(t *TLS, x uint16) uint16 {
return x<<8 |
x>>8
}
// uint32_t __builtin_bswap32 (uint32_t x)
func X__builtin_bswap32(t *TLS, x uint32) uint32 {
return x<<24 |
x&0xff00<<8 |
x&0xff0000>>8 |
x>>24
}
// uint64_t __builtin_bswap64 (uint64_t x)
func X__builtin_bswap64(t *TLS, x uint64) uint64 {
return x<<56 |
x&0xff00<<40 |
x&0xff0000<<24 |
x&0xff000000<<8 |
x&0xff00000000>>8 |
x&0xff0000000000>>24 |
x&0xff000000000000>>40 |
x>>56
}
// bool __builtin_add_overflow (type1 a, type2 b, type3 *res)
func X__builtin_add_overflowInt64(t *TLS, a, b int64, res uintptr) int32 {
r, ovf := mathutil.AddOverflowInt64(a, b)
*(*int64)(unsafe.Pointer(res)) = r
return Bool32(ovf)
}
// bool __builtin_add_overflow (type1 a, type2 b, type3 *res)
func X__builtin_add_overflowUint32(t *TLS, a, b uint32, res uintptr) int32 {
r := a + b
*(*uint32)(unsafe.Pointer(res)) = r
return Bool32(r < a)
}
// bool __builtin_add_overflow (type1 a, type2 b, type3 *res)
func X__builtin_add_overflowUint64(t *TLS, a, b uint64, res uintptr) int32 {
r := a + b
*(*uint64)(unsafe.Pointer(res)) = r
return Bool32(r < a)
}
// bool __builtin_sub_overflow (type1 a, type2 b, type3 *res)
func X__builtin_sub_overflowInt64(t *TLS, a, b int64, res uintptr) int32 {
r, ovf := mathutil.SubOverflowInt64(a, b)
*(*int64)(unsafe.Pointer(res)) = r
return Bool32(ovf)
}
// bool __builtin_mul_overflow (type1 a, type2 b, type3 *res)
func X__builtin_mul_overflowInt64(t *TLS, a, b int64, res uintptr) int32 {
r, ovf := mathutil.MulOverflowInt64(a, b)
*(*int64)(unsafe.Pointer(res)) = r
return Bool32(ovf)
}
// bool __builtin_mul_overflow (type1 a, type2 b, type3 *res)
func X__builtin_mul_overflowUint64(t *TLS, a, b uint64, res uintptr) int32 {
hi, lo := mbits.Mul64(a, b)
*(*uint64)(unsafe.Pointer(res)) = lo
return Bool32(hi != 0)
}
// bool __builtin_mul_overflow (type1 a, type2 b, type3 *res)
func X__builtin_mul_overflowUint128(t *TLS, a, b Uint128, res uintptr) int32 {
r, ovf := a.mulOvf(b)
*(*Uint128)(unsafe.Pointer(res)) = r
return Bool32(ovf)
}
func X__builtin_unreachable(t *TLS) {
fmt.Fprintf(os.Stderr, "unreachable\n")
os.Stderr.Sync()
Xexit(t, 1)
}
func X__builtin_snprintf(t *TLS, str uintptr, size Tsize_t, format, args uintptr) int32 {
return Xsnprintf(t, str, size, format, args)
}
func X__builtin_sprintf(t *TLS, str, format, args uintptr) (r int32) {
return Xsprintf(t, str, format, args)
}
func X__builtin_memcpy(t *TLS, dest, src uintptr, n Tsize_t) (r uintptr) {
return Xmemcpy(t, dest, src, n)
}
// void * __builtin___memcpy_chk (void *dest, const void *src, size_t n, size_t os);
func X__builtin___memcpy_chk(t *TLS, dest, src uintptr, n, os Tsize_t) (r uintptr) {
if os != ^Tsize_t(0) && n < os {
Xabort(t)
}
return Xmemcpy(t, dest, src, n)
}
func X__builtin_memset(t *TLS, s uintptr, c int32, n Tsize_t) uintptr {
return Xmemset(t, s, c, n)
}
// void * __builtin___memset_chk (void *s, int c, size_t n, size_t os);
func X__builtin___memset_chk(t *TLS, s uintptr, c int32, n, os Tsize_t) uintptr {
if os < n {
Xabort(t)
}
return Xmemset(t, s, c, n)
}
// size_t __builtin_object_size (const void * ptr, int type)
func X__builtin_object_size(t *TLS, p uintptr, typ int32) Tsize_t {
switch typ {
case 0, 1:
return ^Tsize_t(0)
default:
return 0
}
}
// int __builtin___sprintf_chk (char *s, int flag, size_t os, const char *fmt, ...);
func X__builtin___sprintf_chk(t *TLS, s uintptr, flag int32, os Tsize_t, format, args uintptr) (r int32) {
return Xsprintf(t, s, format, args)
}
func X__builtin_vsnprintf(t *TLS, str uintptr, size Tsize_t, format, va uintptr) int32 {
return Xvsnprintf(t, str, size, format, va)
}
// int __builtin___snprintf_chk(char * str, size_t maxlen, int flag, size_t os, const char * format, ...);
func X__builtin___snprintf_chk(t *TLS, str uintptr, maxlen Tsize_t, flag int32, os Tsize_t, format, args uintptr) (r int32) {
if os != ^Tsize_t(0) && maxlen > os {
Xabort(t)
}
return Xsnprintf(t, str, maxlen, format, args)
}
// int __builtin___vsnprintf_chk (char *s, size_t maxlen, int flag, size_t os, const char *fmt, va_list ap);
func X__builtin___vsnprintf_chk(t *TLS, str uintptr, maxlen Tsize_t, flag int32, os Tsize_t, format, args uintptr) (r int32) {
if os != ^Tsize_t(0) && maxlen > os {
Xabort(t)
}
return Xsnprintf(t, str, maxlen, format, args)
}
func Xisnan(t *TLS, x float64) int32 {
return X__builtin_isnan(t, x)
}
func X__isnan(t *TLS, x float64) int32 {
return X__builtin_isnan(t, x)
}
func X__builtin_isnan(t *TLS, x float64) int32 {
return Bool32(math.IsNaN(x))
}
func Xisnanf(t *TLS, arg float32) int32 {
return X__builtin_isnanf(t, arg)
}
func X__isnanf(t *TLS, arg float32) int32 {
return X__builtin_isnanf(t, arg)
}
func X__builtin_isnanf(t *TLS, x float32) int32 {
return Bool32(math.IsNaN(float64(x)))
}
func Xisnanl(t *TLS, arg float64) int32 {
return X__builtin_isnanl(t, arg)
}
func X__isnanl(t *TLS, arg float64) int32 {
return X__builtin_isnanl(t, arg)
}
func X__builtin_isnanl(t *TLS, x float64) int32 {
return Bool32(math.IsNaN(x))
}
func X__builtin_llabs(tls *TLS, a int64) int64 {
return Xllabs(tls, a)
}
func X__builtin_log2(t *TLS, x float64) float64 {
return Xlog2(t, x)
}
func X__builtin___strncpy_chk(t *TLS, dest, src uintptr, n, os Tsize_t) (r uintptr) {
if n != ^Tsize_t(0) && os < n {
Xabort(t)
}
return Xstrncpy(t, dest, src, n)
}
func X__builtin___strcat_chk(t *TLS, dest, src uintptr, os Tsize_t) (r uintptr) {
return Xstrcat(t, dest, src)
}
func X__builtin___memmove_chk(t *TLS, dest, src uintptr, n, os Tsize_t) uintptr {
if os != ^Tsize_t(0) && os < n {
Xabort(t)
}
return Xmemmove(t, dest, src, n)
}
func X__builtin_isunordered(t *TLS, a, b float64) int32 {
return Bool32(math.IsNaN(a) || math.IsNaN(b))
}
func X__builtin_ffs(tls *TLS, i int32) (r int32) {
return Xffs(tls, i)
}
func X__builtin_rintf(tls *TLS, x float32) (r float32) {
return Xrintf(tls, x)
}
func X__builtin_lrintf(tls *TLS, x float32) (r long) {
return Xlrintf(tls, x)
}
func X__builtin_lrint(tls *TLS, x float64) (r long) {
return Xlrint(tls, x)
}
// double __builtin_fma(double x, double y, double z);
func X__builtin_fma(tls *TLS, x, y, z float64) (r float64) {
return math.FMA(x, y, z)
}
func X__builtin_alloca(tls *TLS, size Tsize_t) uintptr {
return Xalloca(tls, size)
}
func X__builtin_isprint(tls *TLS, c int32) (r int32) {
return Xisprint(tls, c)
}
func X__builtin_isblank(tls *TLS, c int32) (r int32) {
return Xisblank(tls, c)
}
func X__builtin_trunc(tls *TLS, x float64) (r float64) {
return Xtrunc(tls, x)
}
func X__builtin_hypot(tls *TLS, x float64, y float64) (r float64) {
return Xhypot(tls, x, y)
}
func X__builtin_fmax(tls *TLS, x float64, y float64) (r float64) {
return Xfmax(tls, x, y)
}
func X__builtin_fmin(tls *TLS, x float64, y float64) (r float64) {
return Xfmin(tls, x, y)
}
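The unsigned add-overflow helpers above (`X__builtin_add_overflowUint32`/`Uint64`) rely on wraparound arithmetic: for unsigned addition, the sum overflowed exactly when the result is smaller than either operand. A minimal standalone version of that check, mirroring the vendored logic (the `addOverflowUint64` name is illustrative):

```go
package main

import "fmt"

// addOverflowUint64 returns the wrapped sum and whether it overflowed.
// With unsigned wraparound, overflow occurred iff r < a (equivalently r < b).
func addOverflowUint64(a, b uint64) (uint64, bool) {
	r := a + b
	return r, r < a
}

func main() {
	fmt.Println(addOverflowUint64(^uint64(0), 1)) // max+1 wraps: prints 0 true
	fmt.Println(addOverflowUint64(2, 3))          // no wrap:    prints 5 false
}
```

The signed variants cannot use this trick (signed overflow does not wrap predictably in C), which is why the package delegates those to `mathutil.AddOverflowInt64` and friends.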

24
vendor/modernc.org/libc/builtin32.go generated vendored Normal file

@@ -0,0 +1,24 @@
// Copyright 2024 The Libc Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build linux && (386 || arm)
package libc // import "modernc.org/libc"
import (
mbits "math/bits"
)
func X__builtin_ctzl(tls *TLS, x ulong) int32 {
return int32(mbits.TrailingZeros32(x))
}
func X__builtin_clzl(t *TLS, n ulong) int32 {
return int32(mbits.LeadingZeros32(n))
}
// int __builtin_popcountl (unsigned long x)
func X__builtin_popcountl(t *TLS, x ulong) int32 {
return int32(mbits.OnesCount32(x))
}

24
vendor/modernc.org/libc/builtin64.go generated vendored Normal file

@@ -0,0 +1,24 @@
// Copyright 2024 The Libc Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build linux && (amd64 || arm64 || loong64 || ppc64le || s390x || riscv64)
package libc // import "modernc.org/libc"
import (
mbits "math/bits"
)
func X__builtin_ctzl(tls *TLS, x ulong) int32 {
return int32(mbits.TrailingZeros64(x))
}
func X__builtin_clzl(t *TLS, n ulong) int32 {
return int32(mbits.LeadingZeros64(n))
}
// int __builtin_popcountl (unsigned long x)
func X__builtin_popcountl(t *TLS, x ulong) int32 {
return int32(mbits.OnesCount64(x))
}

Some files were not shown because too many files have changed in this diff