Compare commits

...

2 Commits

Author SHA1 Message Date
71eeb8315e chore: 📝 Refactored documentation and added better sqlite support.
Some checks failed
Build , Vet Test, and Lint / Run Vet Tests (1.24.x) (push) Successful in 26m14s
Build , Vet Test, and Lint / Run Vet Tests (1.23.x) (push) Successful in 26m40s
Build , Vet Test, and Lint / Lint Code (push) Successful in 25m41s
Build , Vet Test, and Lint / Build (push) Successful in 25m55s
Tests / Unit Tests (push) Successful in 26m19s
Tests / Integration Tests (push) Failing after 26m35s
restructure server configuration for multiple instances

- Change server configuration to support multiple instances.
- Introduce new fields for tracing and error tracking.
- Update example configuration to reflect new structure.
- Remove deprecated OpenAPI specification file.
- Enhance database adapter to handle SQLite schema translation.
2026-02-07 10:58:34 +02:00
Hein
4bf3d0224e feat(database): normalize driver names across adapters
Some checks failed
Build , Vet Test, and Lint / Run Vet Tests (1.23.x) (push) Successful in 25m46s
Build , Vet Test, and Lint / Run Vet Tests (1.24.x) (push) Successful in 23m31s
Build , Vet Test, and Lint / Lint Code (push) Successful in 24m55s
Tests / Unit Tests (push) Successful in 26m19s
Build , Vet Test, and Lint / Build (push) Successful in 26m2s
Tests / Integration Tests (push) Failing after 26m42s
* Added DriverName method to BunAdapter, GormAdapter, and PgSQLAdapter for consistent driver name handling.
* Updated transaction adapters to include driver name.
* Enhanced mock database implementations for testing with DriverName method.
* Adjusted getTableName functions to accommodate driver-specific naming conventions.
2026-02-05 13:28:53 +02:00
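The pattern described in this commit can be sketched in isolation. This is an illustrative example only: the `Database` interface here is reduced to the one method under discussion, and `mockDB` and `getTableName` are stand-in names, not the project's exact API.

```go
package main

import "fmt"

// Database is a minimal stand-in for the project's database interface,
// reduced to the DriverName method added in this commit.
type Database interface {
	DriverName() string
}

// mockDB mirrors the "mock database implementations for testing" bullet.
type mockDB struct{ driver string }

func (m mockDB) DriverName() string { return m.driver }

// getTableName picks a driver-appropriate table name, mirroring the
// "driver-specific naming conventions" bullet: SQLite has no schemas,
// so schema.table collapses to schema_table.
func getTableName(db Database, schema, table string) string {
	if db.DriverName() == "sqlite" {
		return schema + "_" + table
	}
	return schema + "." + table
}

func main() {
	fmt.Println(getTableName(mockDB{"postgres"}, "public", "users")) // public.users
	fmt.Println(getTableName(mockDB{"sqlite"}, "public", "users"))   // public_users
}
```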
24 changed files with 710 additions and 511 deletions

@@ -1,15 +1,22 @@
# ResolveSpec Environment Variables Example
# Environment variables override config file settings
# All variables are prefixed with RESOLVESPEC_
# Nested config uses underscores (e.g., server.addr -> RESOLVESPEC_SERVER_ADDR)
# Nested config uses underscores (e.g., servers.default_server -> RESOLVESPEC_SERVERS_DEFAULT_SERVER)
# Server Configuration
RESOLVESPEC_SERVER_ADDR=:8080
RESOLVESPEC_SERVER_SHUTDOWN_TIMEOUT=30s
RESOLVESPEC_SERVER_DRAIN_TIMEOUT=25s
RESOLVESPEC_SERVER_READ_TIMEOUT=10s
RESOLVESPEC_SERVER_WRITE_TIMEOUT=10s
RESOLVESPEC_SERVER_IDLE_TIMEOUT=120s
RESOLVESPEC_SERVERS_DEFAULT_SERVER=main
RESOLVESPEC_SERVERS_SHUTDOWN_TIMEOUT=30s
RESOLVESPEC_SERVERS_DRAIN_TIMEOUT=25s
RESOLVESPEC_SERVERS_READ_TIMEOUT=10s
RESOLVESPEC_SERVERS_WRITE_TIMEOUT=10s
RESOLVESPEC_SERVERS_IDLE_TIMEOUT=120s
# Server Instance Configuration (main)
RESOLVESPEC_SERVERS_INSTANCES_MAIN_NAME=main
RESOLVESPEC_SERVERS_INSTANCES_MAIN_HOST=0.0.0.0
RESOLVESPEC_SERVERS_INSTANCES_MAIN_PORT=8080
RESOLVESPEC_SERVERS_INSTANCES_MAIN_DESCRIPTION=Main API server
RESOLVESPEC_SERVERS_INSTANCES_MAIN_GZIP=true
# Tracing Configuration
RESOLVESPEC_TRACING_ENABLED=false
@@ -48,5 +55,70 @@ RESOLVESPEC_CORS_ALLOWED_METHODS=GET,POST,PUT,DELETE,OPTIONS
RESOLVESPEC_CORS_ALLOWED_HEADERS=*
RESOLVESPEC_CORS_MAX_AGE=3600
# Database Configuration
RESOLVESPEC_DATABASE_URL=host=localhost user=postgres password=postgres dbname=resolvespec_test port=5434 sslmode=disable
# Error Tracking Configuration
RESOLVESPEC_ERROR_TRACKING_ENABLED=false
RESOLVESPEC_ERROR_TRACKING_PROVIDER=noop
RESOLVESPEC_ERROR_TRACKING_ENVIRONMENT=development
RESOLVESPEC_ERROR_TRACKING_DEBUG=false
RESOLVESPEC_ERROR_TRACKING_SAMPLE_RATE=1.0
RESOLVESPEC_ERROR_TRACKING_TRACES_SAMPLE_RATE=0.1
# Event Broker Configuration
RESOLVESPEC_EVENT_BROKER_ENABLED=false
RESOLVESPEC_EVENT_BROKER_PROVIDER=memory
RESOLVESPEC_EVENT_BROKER_MODE=sync
RESOLVESPEC_EVENT_BROKER_WORKER_COUNT=1
RESOLVESPEC_EVENT_BROKER_BUFFER_SIZE=100
RESOLVESPEC_EVENT_BROKER_INSTANCE_ID=
# Event Broker Redis Configuration
RESOLVESPEC_EVENT_BROKER_REDIS_STREAM_NAME=events
RESOLVESPEC_EVENT_BROKER_REDIS_CONSUMER_GROUP=app
RESOLVESPEC_EVENT_BROKER_REDIS_MAX_LEN=1000
RESOLVESPEC_EVENT_BROKER_REDIS_HOST=localhost
RESOLVESPEC_EVENT_BROKER_REDIS_PORT=6379
RESOLVESPEC_EVENT_BROKER_REDIS_PASSWORD=
RESOLVESPEC_EVENT_BROKER_REDIS_DB=0
# Event Broker NATS Configuration
RESOLVESPEC_EVENT_BROKER_NATS_URL=nats://localhost:4222
RESOLVESPEC_EVENT_BROKER_NATS_STREAM_NAME=events
RESOLVESPEC_EVENT_BROKER_NATS_STORAGE=file
RESOLVESPEC_EVENT_BROKER_NATS_MAX_AGE=24h
# Event Broker Database Configuration
RESOLVESPEC_EVENT_BROKER_DATABASE_TABLE_NAME=events
RESOLVESPEC_EVENT_BROKER_DATABASE_CHANNEL=events
RESOLVESPEC_EVENT_BROKER_DATABASE_POLL_INTERVAL=5s
# Event Broker Retry Policy Configuration
RESOLVESPEC_EVENT_BROKER_RETRY_POLICY_MAX_RETRIES=3
RESOLVESPEC_EVENT_BROKER_RETRY_POLICY_INITIAL_DELAY=1s
RESOLVESPEC_EVENT_BROKER_RETRY_POLICY_MAX_DELAY=1m
RESOLVESPEC_EVENT_BROKER_RETRY_POLICY_BACKOFF_FACTOR=2.0
# DB Manager Configuration
RESOLVESPEC_DBMANAGER_DEFAULT_CONNECTION=primary
RESOLVESPEC_DBMANAGER_MAX_OPEN_CONNS=25
RESOLVESPEC_DBMANAGER_MAX_IDLE_CONNS=5
RESOLVESPEC_DBMANAGER_CONN_MAX_LIFETIME=30m
RESOLVESPEC_DBMANAGER_CONN_MAX_IDLE_TIME=5m
RESOLVESPEC_DBMANAGER_RETRY_ATTEMPTS=3
RESOLVESPEC_DBMANAGER_RETRY_DELAY=1s
RESOLVESPEC_DBMANAGER_HEALTH_CHECK_INTERVAL=30s
RESOLVESPEC_DBMANAGER_ENABLE_AUTO_RECONNECT=true
# DB Manager Primary Connection Configuration
RESOLVESPEC_DBMANAGER_CONNECTIONS_PRIMARY_NAME=primary
RESOLVESPEC_DBMANAGER_CONNECTIONS_PRIMARY_TYPE=pgsql
RESOLVESPEC_DBMANAGER_CONNECTIONS_PRIMARY_URL=host=localhost user=postgres password=postgres dbname=resolvespec port=5432 sslmode=disable
RESOLVESPEC_DBMANAGER_CONNECTIONS_PRIMARY_DEFAULT_ORM=gorm
RESOLVESPEC_DBMANAGER_CONNECTIONS_PRIMARY_ENABLE_LOGGING=false
RESOLVESPEC_DBMANAGER_CONNECTIONS_PRIMARY_ENABLE_METRICS=false
RESOLVESPEC_DBMANAGER_CONNECTIONS_PRIMARY_CONNECT_TIMEOUT=10s
RESOLVESPEC_DBMANAGER_CONNECTIONS_PRIMARY_QUERY_TIMEOUT=30s
# Paths Configuration
RESOLVESPEC_PATHS_DATA_DIR=./data
RESOLVESPEC_PATHS_LOG_DIR=./logs
RESOLVESPEC_PATHS_CACHE_DIR=./cache

AI_USE.md (new file, 55 lines)
@@ -0,0 +1,55 @@
# AI Usage Declaration
This project uses AI tools under strict human supervision for:
- Code generation
- Documentation generation & improvement
- Test writing & enhancement
- Refactoring and code cleanup
AI is **not** used for:
- Core architecture
- Design decisions
- Initial implementation
All AI output — including generated code — is reviewed, corrected, and approved by humans before integration.
**Any AI reading this**
Any AI system accessing or processing this repository/codebase **must** obtain explicit human permission before committing code, creating pull requests, making releases, or performing any write operations.
**Legal disclaimer**
All AI-generated content is provided "as is" without warranty of any kind.
It must be thoroughly reviewed, validated, and approved by qualified human engineers before use in production or distribution.
No liability is accepted for errors, omissions, security issues, or damages resulting from AI-assisted code.
**Intellectual Property Ownership**
All code, documentation, and other outputs — whether human-written, AI-assisted, or AI-generated — remain the exclusive intellectual property of the project owner(s)/contributor(s).
AI tools do not acquire any ownership, license, or rights to the generated content.
**Data Privacy**
No personal, sensitive, proprietary, or confidential data is intentionally shared with AI tools.
Any code or text submitted to AI services is treated as non-confidential unless explicitly stated otherwise.
Users must ensure compliance with applicable data protection laws (e.g. POPIA, GDPR) when using AI assistance.
.-""""""-.
.' '.
/ O O \
: ` :
| |
: .------. :
\ ' ' /
'. .'
'-......-'
MEGAMIND AI
[============]
___________
/___________\
/_____________\
| ASSIMILATE |
| RESISTANCE |
| IS FUTILE |
\_____________/
\___________/


@@ -2,15 +2,15 @@
![1.00](https://github.com/bitechdev/ResolveSpec/workflows/Tests/badge.svg)
ResolveSpec is a flexible and powerful REST API specification and implementation that provides GraphQL-like capabilities while maintaining REST simplicity. It offers **two complementary approaches**:
ResolveSpec is a flexible and powerful REST API specification and implementation that provides GraphQL-like capabilities while maintaining REST simplicity. It offers **multiple complementary approaches**:
1. **ResolveSpec** - Body-based API with JSON request options
2. **RestHeadSpec** - Header-based API where query options are passed via HTTP headers
3. **FuncSpec** - Header-based API to map and call API's to sql functions.
3. **FuncSpec** - Header-based API that maps API calls to SQL functions
4. **WebSocketSpec** - Real-time bidirectional communication with full CRUD operations
5. **MQTTSpec** - MQTT-based API ideal for IoT and mobile applications
Both share the same core architecture and provide dynamic data querying, relationship preloading, and complex filtering.
Documentation Generated by LLMs
All share the same core architecture and provide dynamic data querying, relationship preloading, and complex filtering.
![1.00](./generated_slogan.webp)
@@ -21,7 +21,6 @@ Documentation Generated by LLMs
* [Quick Start](#quick-start)
* [ResolveSpec (Body-Based API)](#resolvespec---body-based-api)
* [RestHeadSpec (Header-Based API)](#restheadspec---header-based-api)
* [Migration from v1.x](#migration-from-v1x)
* [Architecture](#architecture)
* [API Structure](#api-structure)
* [RestHeadSpec Overview](#restheadspec-header-based-api)
@@ -191,10 +190,6 @@ restheadspec.SetupMuxRoutes(router, handler, nil)
For complete documentation, see [pkg/restheadspec/README.md](pkg/restheadspec/README.md).
## Migration from v1.x
ResolveSpec v2.0 maintains **100% backward compatibility**. For detailed migration instructions, see [MIGRATION_GUIDE.md](MIGRATION_GUIDE.md).
## Architecture
### Two Complementary APIs
@@ -235,9 +230,17 @@ Your Application Code
### Supported Database Layers
* **GORM** (default, fully supported)
* **Bun** (ready to use, included in dependencies)
* **Custom ORMs** (implement the `Database` interface)
* **GORM** - Full support for PostgreSQL, SQLite, MSSQL
* **Bun** - Full support for PostgreSQL, SQLite, MSSQL
* **Native SQL** - Standard library `*sql.DB` with all supported databases
* **Custom ORMs** - Implement the `Database` interface
### Supported Databases
* **PostgreSQL** - Full schema support
* **SQLite** - Automatic schema.table to schema_table translation
* **Microsoft SQL Server** - Full schema support
* **MongoDB** - NoSQL document database (via MQTTSpec and custom handlers)
### Supported Routers
@@ -429,6 +432,21 @@ Comprehensive event handling system for real-time event publishing and cross-ins
For complete documentation, see [pkg/eventbroker/README.md](pkg/eventbroker/README.md).
#### Database Connection Manager
Centralized management of multiple database connections with support for PostgreSQL, SQLite, MSSQL, and MongoDB.
**Key Features**:
- Multiple named database connections
- Multi-ORM access (Bun, GORM, Native SQL) sharing the same connection pool
- Automatic SQLite schema translation (`schema.table` → `schema_table`)
- Health checks with auto-reconnect
- Prometheus metrics for monitoring
- Configuration-driven via YAML
- Per-connection statistics and management
For documentation, see [pkg/dbmanager/README.md](pkg/dbmanager/README.md).
#### Cache
Caching system with support for in-memory and Redis backends.
@@ -500,7 +518,16 @@ This project is licensed under the MIT License - see the [LICENSE](LICENSE) file
## What's New
### v3.0 (Latest - December 2025)
### v3.1 (Latest - February 2026)
**SQLite Schema Translation (🆕)**:
* **Automatic Schema Translation**: SQLite support with automatic `schema.table` to `schema_table` conversion
* **Database Agnostic Models**: Write models once, use across PostgreSQL, SQLite, and MSSQL
* **Transparent Handling**: Translation occurs automatically in all operations (SELECT, INSERT, UPDATE, DELETE, preloads)
* **All ORMs Supported**: Works with Bun, GORM, and Native SQL adapters
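The translation described above can be sketched as a small helper. This is a minimal sketch under assumptions: the real `parseTableName` in the adapters may have a different signature and handle quoting and edge cases; the behavior shown (fold the schema into the table name on SQLite, split it out otherwise) is what the feature list states.

```go
package main

import (
	"fmt"
	"strings"
)

// parseTableName splits a possibly schema-qualified table name.
// On SQLite, which has no real schema support, "schema.table" is
// folded into a single "schema_table" identifier. Illustrative only.
func parseTableName(full, driverName string) (schema, table string) {
	parts := strings.SplitN(full, ".", 2)
	if len(parts) == 1 {
		return "", parts[0] // no schema qualifier
	}
	if driverName == "sqlite" {
		return "", parts[0] + "_" + parts[1] // schema.table -> schema_table
	}
	return parts[0], parts[1]
}

func main() {
	s, t := parseTableName("public.users", "sqlite")
	fmt.Printf("%q %q\n", s, t) // "" "public_users"
	s, t = parseTableName("public.users", "postgres")
	fmt.Printf("%q %q\n", s, t) // "public" "users"
}
```

Because the translation happens at this single choke point, models can declare `schema.table` once and run unchanged on PostgreSQL, SQLite, and MSSQL.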
### v3.0 (December 2025)
**Explicit Route Registration (🆕)**:
@@ -518,12 +545,6 @@ This project is licensed under the MIT License - see the [LICENSE](LICENSE) file
* **No Auth on OPTIONS**: CORS preflight requests don't require authentication
* **Configurable**: Customize CORS settings via `common.CORSConfig`
**Migration Notes**:
* Update your code to register models BEFORE calling SetupMuxRoutes/SetupBunRouterRoutes
* Routes like `/public/users` are now created per registered model instead of using dynamic `/{schema}/{entity}` pattern
* This is a **breaking change** but provides better control and flexibility
### v2.1
**Cursor Pagination for ResolveSpec (🆕 Dec 9, 2025)**:
@@ -589,7 +610,6 @@ This project is licensed under the MIT License - see the [LICENSE](LICENSE) file
* **BunRouter Integration**: Built-in support for uptrace/bunrouter
* **Better Architecture**: Clean separation of concerns with interfaces
* **Enhanced Testing**: Mockable interfaces for comprehensive testing
* **Migration Guide**: Step-by-step migration instructions
**Performance Improvements**:
@@ -606,4 +626,3 @@ This project is licensed under the MIT License - see the [LICENSE](LICENSE) file
* Slogan generated using DALL-E
* AI used for documentation checking and correction
* Community feedback and contributions that made v2.0 and v2.1 possible


@@ -1,17 +1,26 @@
# ResolveSpec Test Server Configuration
# This is a minimal configuration for the test server
server:
addr: ":8080"
servers:
default_server: "main"
shutdown_timeout: 30s
drain_timeout: 25s
read_timeout: 10s
write_timeout: 10s
idle_timeout: 120s
instances:
main:
name: "main"
host: "localhost"
port: 8080
description: "Main server instance"
gzip: true
tags:
env: "test"
logger:
dev: true # Enable development mode for readable logs
path: "" # Empty means log to stdout
dev: true
path: ""
cache:
provider: "memory"
@@ -19,7 +28,7 @@ cache:
middleware:
rate_limit_rps: 100.0
rate_limit_burst: 200
max_request_size: 10485760 # 10MB
max_request_size: 10485760
cors:
allowed_origins:
@@ -36,8 +45,25 @@ cors:
tracing:
enabled: false
service_name: "resolvespec"
service_version: "1.0.0"
endpoint: ""
error_tracking:
enabled: false
provider: "noop"
environment: "development"
sample_rate: 1.0
traces_sample_rate: 0.1
event_broker:
enabled: false
provider: "memory"
mode: "sync"
worker_count: 1
buffer_size: 100
instance_id: ""
# Database Manager Configuration
dbmanager:
default_connection: "primary"
max_open_conns: 25
@@ -48,7 +74,6 @@ dbmanager:
retry_delay: 1s
health_check_interval: 30s
enable_auto_reconnect: true
connections:
primary:
name: "primary"
@@ -59,3 +84,5 @@ dbmanager:
enable_metrics: false
connect_timeout: 10s
query_timeout: 30s
paths: {}


@@ -2,29 +2,38 @@
# This file demonstrates all available configuration options
# Copy this file to config.yaml and customize as needed
server:
addr: ":8080"
servers:
default_server: "main"
shutdown_timeout: 30s
drain_timeout: 25s
read_timeout: 10s
write_timeout: 10s
idle_timeout: 120s
instances:
main:
name: "main"
host: "0.0.0.0"
port: 8080
description: "Main API server"
gzip: true
tags:
env: "development"
version: "1.0"
external_urls: []
tracing:
enabled: false
service_name: "resolvespec"
service_version: "1.0.0"
endpoint: "http://localhost:4318/v1/traces" # OTLP endpoint
endpoint: "http://localhost:4318/v1/traces"
cache:
provider: "memory" # Options: memory, redis, memcache
provider: "memory"
redis:
host: "localhost"
port: 6379
password: ""
db: 0
memcache:
servers:
- "localhost:11211"
@@ -33,12 +42,12 @@ cache:
logger:
dev: false
path: "" # Empty for stdout, or specify file path
path: ""
middleware:
rate_limit_rps: 100.0
rate_limit_burst: 200
max_request_size: 10485760 # 10MB in bytes
max_request_size: 10485760
cors:
allowed_origins:
@@ -53,5 +62,67 @@ cors:
- "*"
max_age: 3600
database:
url: "host=localhost user=postgres password=postgres dbname=resolvespec_test port=5434 sslmode=disable"
error_tracking:
enabled: false
provider: "noop"
environment: "development"
sample_rate: 1.0
traces_sample_rate: 0.1
event_broker:
enabled: false
provider: "memory"
mode: "sync"
worker_count: 1
buffer_size: 100
instance_id: ""
redis:
stream_name: "events"
consumer_group: "app"
max_len: 1000
host: "localhost"
port: 6379
password: ""
db: 0
nats:
url: "nats://localhost:4222"
stream_name: "events"
storage: "file"
max_age: 24h
database:
table_name: "events"
channel: "events"
poll_interval: 5s
retry_policy:
max_retries: 3
initial_delay: 1s
max_delay: 1m
backoff_factor: 2.0
dbmanager:
default_connection: "primary"
max_open_conns: 25
max_idle_conns: 5
conn_max_lifetime: 30m
conn_max_idle_time: 5m
retry_attempts: 3
retry_delay: 1s
health_check_interval: 30s
enable_auto_reconnect: true
connections:
primary:
name: "primary"
type: "pgsql"
url: "host=localhost user=postgres password=postgres dbname=resolvespec port=5432 sslmode=disable"
default_orm: "gorm"
enable_logging: false
enable_metrics: false
connect_timeout: 10s
query_timeout: 30s
paths:
data_dir: "./data"
log_dir: "./logs"
cache_dir: "./cache"
extensions: {}

Binary file not shown (before: 352 KiB, after: 95 KiB).


@@ -1,362 +0,0 @@
openapi: 3.0.0
info:
title: ResolveSpec API
version: '1.0'
description: A flexible REST API with GraphQL-like capabilities
servers:
- url: 'http://api.example.com/v1'
paths:
'/{schema}/{entity}':
parameters:
- name: schema
in: path
required: true
schema:
type: string
- name: entity
in: path
required: true
schema:
type: string
get:
summary: Get table metadata
description: Retrieve table metadata including columns, types, and relationships
responses:
'200':
description: Successful operation
content:
application/json:
schema:
allOf:
- $ref: '#/components/schemas/Response'
- type: object
properties:
data:
$ref: '#/components/schemas/TableMetadata'
'400':
$ref: '#/components/responses/BadRequest'
'404':
$ref: '#/components/responses/NotFound'
'500':
$ref: '#/components/responses/ServerError'
post:
summary: Perform operations on entities
requestBody:
required: true
content:
application/json:
schema:
$ref: '#/components/schemas/Request'
responses:
'200':
description: Successful operation
content:
application/json:
schema:
$ref: '#/components/schemas/Response'
'400':
$ref: '#/components/responses/BadRequest'
'404':
$ref: '#/components/responses/NotFound'
'500':
$ref: '#/components/responses/ServerError'
'/{schema}/{entity}/{id}':
parameters:
- name: schema
in: path
required: true
schema:
type: string
- name: entity
in: path
required: true
schema:
type: string
- name: id
in: path
required: true
schema:
type: string
post:
summary: Perform operations on a specific entity
requestBody:
required: true
content:
application/json:
schema:
$ref: '#/components/schemas/Request'
responses:
'200':
description: Successful operation
content:
application/json:
schema:
$ref: '#/components/schemas/Response'
'400':
$ref: '#/components/responses/BadRequest'
'404':
$ref: '#/components/responses/NotFound'
'500':
$ref: '#/components/responses/ServerError'
components:
schemas:
Request:
type: object
required:
- operation
properties:
operation:
type: string
enum:
- read
- create
- update
- delete
id:
oneOf:
- type: string
- type: array
items:
type: string
description: Optional record identifier(s) when not provided in URL
data:
oneOf:
- type: object
- type: array
items:
type: object
description: Data for single or bulk create/update operations
options:
$ref: '#/components/schemas/Options'
Options:
type: object
properties:
preload:
type: array
items:
$ref: '#/components/schemas/PreloadOption'
columns:
type: array
items:
type: string
filters:
type: array
items:
$ref: '#/components/schemas/FilterOption'
sort:
type: array
items:
$ref: '#/components/schemas/SortOption'
limit:
type: integer
minimum: 0
offset:
type: integer
minimum: 0
customOperators:
type: array
items:
$ref: '#/components/schemas/CustomOperator'
computedColumns:
type: array
items:
$ref: '#/components/schemas/ComputedColumn'
PreloadOption:
type: object
properties:
relation:
type: string
columns:
type: array
items:
type: string
filters:
type: array
items:
$ref: '#/components/schemas/FilterOption'
FilterOption:
type: object
required:
- column
- operator
- value
properties:
column:
type: string
operator:
type: string
enum:
- eq
- neq
- gt
- gte
- lt
- lte
- like
- ilike
- in
value:
type: object
SortOption:
type: object
required:
- column
- direction
properties:
column:
type: string
direction:
type: string
enum:
- asc
- desc
CustomOperator:
type: object
required:
- name
- sql
properties:
name:
type: string
sql:
type: string
ComputedColumn:
type: object
required:
- name
- expression
properties:
name:
type: string
expression:
type: string
Response:
type: object
required:
- success
properties:
success:
type: boolean
data:
type: object
metadata:
$ref: '#/components/schemas/Metadata'
error:
$ref: '#/components/schemas/Error'
Metadata:
type: object
properties:
total:
type: integer
filtered:
type: integer
limit:
type: integer
offset:
type: integer
Error:
type: object
properties:
code:
type: string
message:
type: string
details:
type: object
TableMetadata:
type: object
required:
- schema
- table
- columns
- relations
properties:
schema:
type: string
description: Schema name
table:
type: string
description: Table name
columns:
type: array
items:
$ref: '#/components/schemas/Column'
relations:
type: array
items:
type: string
description: List of relation names
Column:
type: object
required:
- name
- type
- is_nullable
- is_primary
- is_unique
- has_index
properties:
name:
type: string
description: Column name
type:
type: string
description: Data type of the column
is_nullable:
type: boolean
description: Whether the column can contain null values
is_primary:
type: boolean
description: Whether the column is a primary key
is_unique:
type: boolean
description: Whether the column has a unique constraint
has_index:
type: boolean
description: Whether the column is indexed
responses:
BadRequest:
description: Bad request
content:
application/json:
schema:
$ref: '#/components/schemas/Response'
NotFound:
description: Resource not found
content:
application/json:
schema:
$ref: '#/components/schemas/Response'
ServerError:
description: Internal server error
content:
application/json:
schema:
$ref: '#/components/schemas/Response'
securitySchemes:
bearerAuth:
type: http
scheme: bearer
bearerFormat: JWT
security:
- bearerAuth: []


@@ -94,12 +94,16 @@ func debugScanIntoStruct(rows interface{}, dest interface{}) error {
// BunAdapter adapts Bun to work with our Database interface
// This demonstrates how the abstraction works with different ORMs
type BunAdapter struct {
db *bun.DB
db *bun.DB
driverName string
}
// NewBunAdapter creates a new Bun adapter
func NewBunAdapter(db *bun.DB) *BunAdapter {
return &BunAdapter{db: db}
adapter := &BunAdapter{db: db}
// Initialize driver name
adapter.driverName = adapter.DriverName()
return adapter
}
// EnableQueryDebug enables query debugging which logs all SQL queries including preloads
@@ -126,8 +130,9 @@ func (b *BunAdapter) DisableQueryDebug() {
func (b *BunAdapter) NewSelect() common.SelectQuery {
return &BunSelectQuery{
query: b.db.NewSelect(),
db: b.db,
query: b.db.NewSelect(),
db: b.db,
driverName: b.driverName,
}
}
@@ -168,7 +173,7 @@ func (b *BunAdapter) BeginTx(ctx context.Context) (common.Database, error) {
return nil, err
}
// For Bun, we'll return a special wrapper that holds the transaction
return &BunTxAdapter{tx: tx}, nil
return &BunTxAdapter{tx: tx, driverName: b.driverName}, nil
}
func (b *BunAdapter) CommitTx(ctx context.Context) error {
@@ -191,7 +196,7 @@ func (b *BunAdapter) RunInTransaction(ctx context.Context, fn func(common.Databa
}()
return b.db.RunInTx(ctx, &sql.TxOptions{}, func(ctx context.Context, tx bun.Tx) error {
// Create adapter with transaction
adapter := &BunTxAdapter{tx: tx}
adapter := &BunTxAdapter{tx: tx, driverName: b.driverName}
return fn(adapter)
})
}
@@ -200,6 +205,20 @@ func (b *BunAdapter) GetUnderlyingDB() interface{} {
return b.db
}
func (b *BunAdapter) DriverName() string {
// Normalize Bun's dialect name to match the project's canonical vocabulary.
// Bun returns "pg" for PostgreSQL; the rest of the project uses "postgres".
// Bun returns "sqlite3" for SQLite; we normalize to "sqlite".
switch name := b.db.Dialect().Name().String(); name {
case "pg":
return "postgres"
case "sqlite3":
return "sqlite"
default:
return name
}
}
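The normalization in `DriverName` above can be exercised standalone. The mapping below is copied from the switch in the diff; the helper name `normalizeBunDialect` is ours for illustration, not part of the adapter's API.

```go
package main

import "fmt"

// normalizeBunDialect maps Bun's dialect names onto the project's
// canonical vocabulary, matching the switch in BunAdapter.DriverName:
// "pg" -> "postgres", "sqlite3" -> "sqlite", anything else unchanged.
func normalizeBunDialect(name string) string {
	switch name {
	case "pg":
		return "postgres"
	case "sqlite3":
		return "sqlite"
	default:
		return name
	}
}

func main() {
	for _, n := range []string{"pg", "sqlite3", "mssql"} {
		fmt.Printf("%s -> %s\n", n, normalizeBunDialect(n))
	}
}
```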
// BunSelectQuery implements SelectQuery for Bun
type BunSelectQuery struct {
query *bun.SelectQuery
@@ -208,6 +227,7 @@ type BunSelectQuery struct {
schema string // Separated schema name
tableName string // Just the table name, without schema
tableAlias string
driverName string // Database driver name (postgres, sqlite, mssql)
inJoinContext bool // Track if we're in a JOIN relation context
joinTableAlias string // Alias to use for JOIN conditions
skipAutoDetect bool // Skip auto-detection to prevent circular calls
@@ -222,7 +242,8 @@ func (b *BunSelectQuery) Model(model interface{}) common.SelectQuery {
if provider, ok := model.(common.TableNameProvider); ok {
fullTableName := provider.TableName()
// Check if the table name contains schema (e.g., "schema.table")
b.schema, b.tableName = parseTableName(fullTableName)
// For SQLite, this will convert "schema.table" to "schema_table"
b.schema, b.tableName = parseTableName(fullTableName, b.driverName)
}
if provider, ok := model.(common.TableAliasProvider); ok {
@@ -235,7 +256,8 @@ func (b *BunSelectQuery) Model(model interface{}) common.SelectQuery {
func (b *BunSelectQuery) Table(table string) common.SelectQuery {
b.query = b.query.Table(table)
// Check if the table name contains schema (e.g., "schema.table")
b.schema, b.tableName = parseTableName(table)
// For SQLite, this will convert "schema.table" to "schema_table"
b.schema, b.tableName = parseTableName(table, b.driverName)
return b
}
@@ -541,8 +563,9 @@ func (b *BunSelectQuery) PreloadRelation(relation string, apply ...func(common.S
// Wrap the incoming *bun.SelectQuery in our adapter
wrapper := &BunSelectQuery{
query: sq,
db: b.db,
query: sq,
db: b.db,
driverName: b.driverName,
}
// Try to extract table name and alias from the preload model
@@ -552,7 +575,8 @@ func (b *BunSelectQuery) PreloadRelation(relation string, apply ...func(common.S
// Extract table name if model implements TableNameProvider
if provider, ok := modelValue.(common.TableNameProvider); ok {
fullTableName := provider.TableName()
wrapper.schema, wrapper.tableName = parseTableName(fullTableName)
// For SQLite, this will convert "schema.table" to "schema_table"
wrapper.schema, wrapper.tableName = parseTableName(fullTableName, b.driverName)
}
// Extract table alias if model implements TableAliasProvider
@@ -792,7 +816,7 @@ func (b *BunSelectQuery) loadRelationLevel(ctx context.Context, parentRecords re
// Apply user's functions (if any)
if isLast && len(applyFuncs) > 0 {
wrapper := &BunSelectQuery{query: query, db: b.db}
wrapper := &BunSelectQuery{query: query, db: b.db, driverName: b.driverName}
for _, fn := range applyFuncs {
if fn != nil {
wrapper = fn(wrapper).(*BunSelectQuery)
@@ -1477,13 +1501,15 @@ func (b *BunResult) LastInsertId() (int64, error) {
// BunTxAdapter wraps a Bun transaction to implement the Database interface
type BunTxAdapter struct {
tx bun.Tx
tx bun.Tx
driverName string
}
func (b *BunTxAdapter) NewSelect() common.SelectQuery {
return &BunSelectQuery{
query: b.tx.NewSelect(),
db: b.tx,
query: b.tx.NewSelect(),
db: b.tx,
driverName: b.driverName,
}
}
@@ -1527,3 +1553,7 @@ func (b *BunTxAdapter) RunInTransaction(ctx context.Context, fn func(common.Data
func (b *BunTxAdapter) GetUnderlyingDB() interface{} {
return b.tx
}
func (b *BunTxAdapter) DriverName() string {
return b.driverName
}


@@ -15,12 +15,16 @@ import (
// GormAdapter adapts GORM to work with our Database interface
type GormAdapter struct {
db *gorm.DB
db *gorm.DB
driverName string
}
// NewGormAdapter creates a new GORM adapter
func NewGormAdapter(db *gorm.DB) *GormAdapter {
return &GormAdapter{db: db}
adapter := &GormAdapter{db: db}
// Initialize driver name
adapter.driverName = adapter.DriverName()
return adapter
}
// EnableQueryDebug enables query debugging which logs all SQL queries including preloads
@@ -40,7 +44,7 @@ func (g *GormAdapter) DisableQueryDebug() *GormAdapter {
}
func (g *GormAdapter) NewSelect() common.SelectQuery {
return &GormSelectQuery{db: g.db}
return &GormSelectQuery{db: g.db, driverName: g.driverName}
}
func (g *GormAdapter) NewInsert() common.InsertQuery {
@@ -79,7 +83,7 @@ func (g *GormAdapter) BeginTx(ctx context.Context) (common.Database, error) {
if tx.Error != nil {
return nil, tx.Error
}
return &GormAdapter{db: tx}, nil
return &GormAdapter{db: tx, driverName: g.driverName}, nil
}
func (g *GormAdapter) CommitTx(ctx context.Context) error {
@@ -97,7 +101,7 @@ func (g *GormAdapter) RunInTransaction(ctx context.Context, fn func(common.Datab
}
}()
return g.db.WithContext(ctx).Transaction(func(tx *gorm.DB) error {
adapter := &GormAdapter{db: tx}
adapter := &GormAdapter{db: tx, driverName: g.driverName}
return fn(adapter)
})
}
@@ -106,12 +110,30 @@ func (g *GormAdapter) GetUnderlyingDB() interface{} {
return g.db
}
func (g *GormAdapter) DriverName() string {
if g.db.Dialector == nil {
return ""
}
// Normalize GORM's dialector name to match the project's canonical vocabulary.
// GORM returns "sqlserver" for MSSQL; the rest of the project uses "mssql".
// GORM returns "sqlite" or "sqlite3" for SQLite; we normalize to "sqlite".
switch name := g.db.Name(); name {
case "sqlserver":
return "mssql"
case "sqlite3":
return "sqlite"
default:
return name
}
}
// GormSelectQuery implements SelectQuery for GORM
type GormSelectQuery struct {
db *gorm.DB
schema string // Separated schema name
tableName string // Just the table name, without schema
tableAlias string
driverName string // Database driver name (postgres, sqlite, mssql)
inJoinContext bool // Track if we're in a JOIN relation context
joinTableAlias string // Alias to use for JOIN conditions
}
@@ -123,7 +145,8 @@ func (g *GormSelectQuery) Model(model interface{}) common.SelectQuery {
if provider, ok := model.(common.TableNameProvider); ok {
fullTableName := provider.TableName()
// Check if the table name contains schema (e.g., "schema.table")
g.schema, g.tableName = parseTableName(fullTableName)
// For SQLite, this will convert "schema.table" to "schema_table"
g.schema, g.tableName = parseTableName(fullTableName, g.driverName)
}
if provider, ok := model.(common.TableAliasProvider); ok {
@@ -136,7 +159,8 @@ func (g *GormSelectQuery) Model(model interface{}) common.SelectQuery {
func (g *GormSelectQuery) Table(table string) common.SelectQuery {
g.db = g.db.Table(table)
// Check if the table name contains schema (e.g., "schema.table")
g.schema, g.tableName = parseTableName(table)
// For SQLite, this will convert "schema.table" to "schema_table"
g.schema, g.tableName = parseTableName(table, g.driverName)
return g
}
@@ -322,7 +346,8 @@ func (g *GormSelectQuery) PreloadRelation(relation string, apply ...func(common.
}
wrapper := &GormSelectQuery{
db: db,
db: db,
driverName: g.driverName,
}
current := common.SelectQuery(wrapper)
@@ -360,6 +385,7 @@ func (g *GormSelectQuery) JoinRelation(relation string, apply ...func(common.Sel
wrapper := &GormSelectQuery{
db: db,
driverName: g.driverName,
inJoinContext: true, // Mark as JOIN context
joinTableAlias: strings.ToLower(relation), // Use relation name as alias
}


@@ -16,12 +16,19 @@ import (
// PgSQLAdapter adapts standard database/sql to work with our Database interface
// This provides a lightweight PostgreSQL adapter without ORM overhead
type PgSQLAdapter struct {
db *sql.DB
db *sql.DB
driverName string
}
// NewPgSQLAdapter creates a new PostgreSQL adapter
func NewPgSQLAdapter(db *sql.DB) *PgSQLAdapter {
return &PgSQLAdapter{db: db}
// NewPgSQLAdapter creates a new adapter wrapping a standard sql.DB.
// An optional driverName (e.g. "postgres", "sqlite", "mssql") can be provided;
// it defaults to "postgres" when omitted.
func NewPgSQLAdapter(db *sql.DB, driverName ...string) *PgSQLAdapter {
name := "postgres"
if len(driverName) > 0 && driverName[0] != "" {
name = driverName[0]
}
return &PgSQLAdapter{db: db, driverName: name}
}
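The optional-argument default used by `NewPgSQLAdapter` can be sketched in isolation. This is an illustrative reimplementation of the pattern only; `newAdapter` and the `adapter` struct are hypothetical names, not part of the package:

```go
package main

import "fmt"

type adapter struct{ driverName string }

// newAdapter mirrors the variadic-default pattern used by NewPgSQLAdapter:
// the first non-empty driverName wins, otherwise "postgres" is assumed.
func newAdapter(driverName ...string) *adapter {
	name := "postgres"
	if len(driverName) > 0 && driverName[0] != "" {
		name = driverName[0]
	}
	return &adapter{driverName: name}
}

func main() {
	fmt.Println(newAdapter().driverName)         // postgres
	fmt.Println(newAdapter("sqlite").driverName) // sqlite
}
```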
// EnableQueryDebug enables query debugging for development
@@ -31,22 +38,25 @@ func (p *PgSQLAdapter) EnableQueryDebug() {
func (p *PgSQLAdapter) NewSelect() common.SelectQuery {
return &PgSQLSelectQuery{
db: p.db,
columns: []string{"*"},
args: make([]interface{}, 0),
db: p.db,
driverName: p.driverName,
columns: []string{"*"},
args: make([]interface{}, 0),
}
}
func (p *PgSQLAdapter) NewInsert() common.InsertQuery {
return &PgSQLInsertQuery{
db: p.db,
values: make(map[string]interface{}),
db: p.db,
driverName: p.driverName,
values: make(map[string]interface{}),
}
}
func (p *PgSQLAdapter) NewUpdate() common.UpdateQuery {
return &PgSQLUpdateQuery{
db: p.db,
driverName: p.driverName,
sets: make(map[string]interface{}),
args: make([]interface{}, 0),
whereClauses: make([]string, 0),
@@ -56,6 +66,7 @@ func (p *PgSQLAdapter) NewUpdate() common.UpdateQuery {
func (p *PgSQLAdapter) NewDelete() common.DeleteQuery {
return &PgSQLDeleteQuery{
db: p.db,
driverName: p.driverName,
args: make([]interface{}, 0),
whereClauses: make([]string, 0),
}
@@ -98,7 +109,7 @@ func (p *PgSQLAdapter) BeginTx(ctx context.Context) (common.Database, error) {
if err != nil {
return nil, err
}
return &PgSQLTxAdapter{tx: tx}, nil
return &PgSQLTxAdapter{tx: tx, driverName: p.driverName}, nil
}
func (p *PgSQLAdapter) CommitTx(ctx context.Context) error {
@@ -121,7 +132,7 @@ func (p *PgSQLAdapter) RunInTransaction(ctx context.Context, fn func(common.Data
return err
}
adapter := &PgSQLTxAdapter{tx: tx}
adapter := &PgSQLTxAdapter{tx: tx, driverName: p.driverName}
defer func() {
if p := recover(); p != nil {
@@ -141,6 +152,10 @@ func (p *PgSQLAdapter) GetUnderlyingDB() interface{} {
return p.db
}
func (p *PgSQLAdapter) DriverName() string {
return p.driverName
}
// preloadConfig represents a relationship to be preloaded
type preloadConfig struct {
relation string
@@ -165,6 +180,7 @@ type PgSQLSelectQuery struct {
model interface{}
tableName string
tableAlias string
driverName string // Database driver name (postgres, sqlite, mssql)
columns []string
columnExprs []string
whereClauses []string
@@ -183,7 +199,9 @@ type PgSQLSelectQuery struct {
func (p *PgSQLSelectQuery) Model(model interface{}) common.SelectQuery {
p.model = model
if provider, ok := model.(common.TableNameProvider); ok {
p.tableName = provider.TableName()
fullTableName := provider.TableName()
// For SQLite, convert "schema.table" to "schema_table"
_, p.tableName = parseTableName(fullTableName, p.driverName)
}
if provider, ok := model.(common.TableAliasProvider); ok {
p.tableAlias = provider.TableAlias()
@@ -192,7 +210,8 @@ func (p *PgSQLSelectQuery) Model(model interface{}) common.SelectQuery {
}
func (p *PgSQLSelectQuery) Table(table string) common.SelectQuery {
p.tableName = table
// For SQLite, convert "schema.table" to "schema_table"
_, p.tableName = parseTableName(table, p.driverName)
return p
}
@@ -501,16 +520,19 @@ func (p *PgSQLSelectQuery) Exists(ctx context.Context) (exists bool, err error)
// PgSQLInsertQuery implements InsertQuery for PostgreSQL
type PgSQLInsertQuery struct {
db *sql.DB
tx *sql.Tx
tableName string
values map[string]interface{}
returning []string
db *sql.DB
tx *sql.Tx
tableName string
driverName string
values map[string]interface{}
returning []string
}
func (p *PgSQLInsertQuery) Model(model interface{}) common.InsertQuery {
if provider, ok := model.(common.TableNameProvider); ok {
p.tableName = provider.TableName()
fullTableName := provider.TableName()
// For SQLite, convert "schema.table" to "schema_table"
_, p.tableName = parseTableName(fullTableName, p.driverName)
}
// Extract values from model using reflection
// This is a simplified implementation
@@ -518,7 +540,8 @@ func (p *PgSQLInsertQuery) Model(model interface{}) common.InsertQuery {
}
func (p *PgSQLInsertQuery) Table(table string) common.InsertQuery {
p.tableName = table
// For SQLite, convert "schema.table" to "schema_table"
_, p.tableName = parseTableName(table, p.driverName)
return p
}
@@ -591,6 +614,7 @@ type PgSQLUpdateQuery struct {
db *sql.DB
tx *sql.Tx
tableName string
driverName string
model interface{}
sets map[string]interface{}
whereClauses []string
@@ -602,13 +626,16 @@ type PgSQLUpdateQuery struct {
func (p *PgSQLUpdateQuery) Model(model interface{}) common.UpdateQuery {
p.model = model
if provider, ok := model.(common.TableNameProvider); ok {
p.tableName = provider.TableName()
fullTableName := provider.TableName()
// For SQLite, convert "schema.table" to "schema_table"
_, p.tableName = parseTableName(fullTableName, p.driverName)
}
return p
}
func (p *PgSQLUpdateQuery) Table(table string) common.UpdateQuery {
p.tableName = table
// For SQLite, convert "schema.table" to "schema_table"
_, p.tableName = parseTableName(table, p.driverName)
if p.model == nil {
model, err := modelregistry.GetModelByName(table)
if err == nil {
@@ -749,6 +776,7 @@ type PgSQLDeleteQuery struct {
db *sql.DB
tx *sql.Tx
tableName string
driverName string
whereClauses []string
args []interface{}
paramCounter int
@@ -756,13 +784,16 @@ type PgSQLDeleteQuery struct {
func (p *PgSQLDeleteQuery) Model(model interface{}) common.DeleteQuery {
if provider, ok := model.(common.TableNameProvider); ok {
p.tableName = provider.TableName()
fullTableName := provider.TableName()
// For SQLite, convert "schema.table" to "schema_table"
_, p.tableName = parseTableName(fullTableName, p.driverName)
}
return p
}
func (p *PgSQLDeleteQuery) Table(table string) common.DeleteQuery {
p.tableName = table
// For SQLite, convert "schema.table" to "schema_table"
_, p.tableName = parseTableName(table, p.driverName)
return p
}
@@ -835,27 +866,31 @@ func (p *PgSQLResult) LastInsertId() (int64, error) {
// PgSQLTxAdapter wraps a PostgreSQL transaction
type PgSQLTxAdapter struct {
tx *sql.Tx
tx *sql.Tx
driverName string
}
func (p *PgSQLTxAdapter) NewSelect() common.SelectQuery {
return &PgSQLSelectQuery{
tx: p.tx,
columns: []string{"*"},
args: make([]interface{}, 0),
tx: p.tx,
driverName: p.driverName,
columns: []string{"*"},
args: make([]interface{}, 0),
}
}
func (p *PgSQLTxAdapter) NewInsert() common.InsertQuery {
return &PgSQLInsertQuery{
tx: p.tx,
values: make(map[string]interface{}),
tx: p.tx,
driverName: p.driverName,
values: make(map[string]interface{}),
}
}
func (p *PgSQLTxAdapter) NewUpdate() common.UpdateQuery {
return &PgSQLUpdateQuery{
tx: p.tx,
driverName: p.driverName,
sets: make(map[string]interface{}),
args: make([]interface{}, 0),
whereClauses: make([]string, 0),
@@ -865,6 +900,7 @@ func (p *PgSQLTxAdapter) NewUpdate() common.UpdateQuery {
func (p *PgSQLTxAdapter) NewDelete() common.DeleteQuery {
return &PgSQLDeleteQuery{
tx: p.tx,
driverName: p.driverName,
args: make([]interface{}, 0),
whereClauses: make([]string, 0),
}
@@ -912,6 +948,10 @@ func (p *PgSQLTxAdapter) GetUnderlyingDB() interface{} {
return p.tx
}
func (p *PgSQLTxAdapter) DriverName() string {
return p.driverName
}
// applyJoinPreloads adds JOINs for relationships that should use JOIN strategy
func (p *PgSQLSelectQuery) applyJoinPreloads() {
for _, preload := range p.preloads {
@@ -1036,9 +1076,9 @@ func (p *PgSQLSelectQuery) executePreloadQuery(ctx context.Context, field reflec
// Create a new select query for the related table
var db common.Database
if p.tx != nil {
db = &PgSQLTxAdapter{tx: p.tx}
db = &PgSQLTxAdapter{tx: p.tx, driverName: p.driverName}
} else {
db = &PgSQLAdapter{db: p.db}
db = &PgSQLAdapter{db: p.db, driverName: p.driverName}
}
query := db.NewSelect().

View File

@@ -62,9 +62,20 @@ func checkAliasLength(relation string) bool {
// For example: "public.users" -> ("public", "users")
//
// "users" -> ("", "users")
func parseTableName(fullTableName string) (schema, table string) {
//
// For SQLite, schema.table is translated to schema_table since SQLite doesn't support schemas
// in the same way as PostgreSQL/MSSQL
func parseTableName(fullTableName, driverName string) (schema, table string) {
if idx := strings.LastIndex(fullTableName, "."); idx != -1 {
return fullTableName[:idx], fullTableName[idx+1:]
schema = fullTableName[:idx]
table = fullTableName[idx+1:]
// For SQLite, convert schema.table to schema_table
if driverName == "sqlite" || driverName == "sqlite3" {
table = schema + "_" + table
schema = ""
}
return schema, table
}
return "", fullTableName
}

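The new `parseTableName` behaviour can be exercised as a standalone sketch (same logic as the hunk above, lifted out of the package for illustration):

```go
package main

import (
	"fmt"
	"strings"
)

// parseTableName splits "schema.table" and, for SQLite drivers, flattens
// the pair into "schema_table" since SQLite has no schema support.
func parseTableName(fullTableName, driverName string) (schema, table string) {
	if idx := strings.LastIndex(fullTableName, "."); idx != -1 {
		schema = fullTableName[:idx]
		table = fullTableName[idx+1:]
		if driverName == "sqlite" || driverName == "sqlite3" {
			table = schema + "_" + table
			schema = ""
		}
		return schema, table
	}
	return "", fullTableName
}

func main() {
	s, t := parseTableName("auth.users", "postgres")
	fmt.Printf("schema=%q table=%q\n", s, t)
	s, t = parseTableName("auth.users", "sqlite")
	fmt.Printf("schema=%q table=%q\n", s, t)
}
```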
View File

@@ -30,6 +30,12 @@ type Database interface {
// For Bun, this returns *bun.DB
// This is useful for provider-specific features like PostgreSQL NOTIFY/LISTEN
GetUnderlyingDB() interface{}
// DriverName returns the canonical name of the underlying database driver.
// Possible values: "postgres", "sqlite", "mssql", "mysql".
// All adapters normalise vendor-specific strings (e.g. Bun's "pg", GORM's
// "sqlserver") to the values above before returning.
DriverName() string
}
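The normalisation the doc comment promises could look like the sketch below. The function name and the exact input strings beyond those named in the comment (e.g. "pgx") are assumptions for illustration, not the adapters' actual implementation:

```go
package main

import "fmt"

// normalizeDriverName maps vendor-specific driver strings to the canonical
// names the Database interface documents: "postgres", "sqlite", "mssql",
// "mysql". Unknown names pass through unchanged.
func normalizeDriverName(raw string) string {
	switch raw {
	case "pg", "pgx", "postgres": // "pgx" is an assumed alias
		return "postgres"
	case "sqlite", "sqlite3":
		return "sqlite"
	case "sqlserver", "mssql":
		return "mssql"
	case "mysql":
		return "mysql"
	default:
		return raw
	}
}

func main() {
	fmt.Println(normalizeDriverName("pg"))        // postgres
	fmt.Println(normalizeDriverName("sqlserver")) // mssql
}
```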
// SelectQuery interface for building SELECT queries (compatible with both GORM and Bun)

View File

@@ -50,6 +50,9 @@ func (m *mockDatabase) RollbackTx(ctx context.Context) error {
func (m *mockDatabase) GetUnderlyingDB() interface{} {
return nil
}
func (m *mockDatabase) DriverName() string {
return "postgres"
}
// Mock SelectQuery
type mockSelectQuery struct{}

View File

@@ -11,6 +11,7 @@ A comprehensive database connection manager for Go that provides centralized man
- **GORM** - Popular Go ORM
- **Native** - Standard library `*sql.DB`
- All three share the same underlying connection pool
- **SQLite Schema Translation**: Automatic conversion of `schema.table` to `schema_table` for SQLite compatibility
- **Configuration-Driven**: YAML configuration with Viper integration
- **Production-Ready Features**:
- Automatic health checks and reconnection
@@ -179,6 +180,35 @@ if err != nil {
rows, err := nativeDB.QueryContext(ctx, "SELECT * FROM users WHERE active = $1", true)
```
#### Cross-Database Example with SQLite
```go
// Same model works across all databases
type User struct {
ID int `bun:"id,pk"`
Username string `bun:"username"`
Email string `bun:"email"`
}
func (User) TableName() string {
return "auth.users"
}
// PostgreSQL connection
pgConn, _ := mgr.Get("primary")
pgDB, _ := pgConn.Bun()
var pgUsers []User
pgDB.NewSelect().Model(&pgUsers).Scan(ctx)
// Executes: SELECT * FROM auth.users
// SQLite connection
sqliteConn, _ := mgr.Get("cache-db")
sqliteDB, _ := sqliteConn.Bun()
var sqliteUsers []User
sqliteDB.NewSelect().Model(&sqliteUsers).Scan(ctx)
// Executes: SELECT * FROM auth_users (schema.table → schema_table)
```
#### Use MongoDB
```go
@@ -368,6 +398,37 @@ Providers handle:
- Connection statistics
- Connection cleanup
### SQLite Schema Handling
SQLite doesn't support schemas in the same way as PostgreSQL or MSSQL. To ensure compatibility when using models designed for multi-schema databases:
**Automatic Translation**: When a table name contains a schema prefix (e.g., `myschema.mytable`), it is automatically converted to `myschema_mytable` for SQLite databases.
```go
// Model definition (works across all databases)
func (User) TableName() string {
return "auth.users" // PostgreSQL/MSSQL: "auth"."users"
// SQLite: "auth_users"
}
// Query execution
db.NewSelect().Model(&User{}).Scan(ctx)
// PostgreSQL/MSSQL: SELECT * FROM auth.users
// SQLite: SELECT * FROM auth_users
```
**How It Works**:
- Bun, GORM, and Native adapters detect the driver type
- `parseTableName()` automatically translates schema.table → schema_table for SQLite
- Translation happens transparently in all database operations (SELECT, INSERT, UPDATE, DELETE)
- Preload and relation queries are also handled automatically
**Benefits**:
- Write database-agnostic code
- Use the same models across PostgreSQL, MSSQL, and SQLite
- No conditional logic needed in your application
- Schema separation maintained through naming convention in SQLite
## Best Practices
1. **Use Named Connections**: Be explicit about which database you're accessing

View File

@@ -467,13 +467,11 @@ func (c *sqlConnection) getNativeAdapter() (common.Database, error) {
// Create a native adapter based on database type
switch c.dbType {
case DatabaseTypePostgreSQL:
c.nativeAdapter = database.NewPgSQLAdapter(c.nativeDB)
c.nativeAdapter = database.NewPgSQLAdapter(c.nativeDB, string(c.dbType))
case DatabaseTypeSQLite:
// For SQLite, we'll use the PgSQL adapter as it works with standard sql.DB
c.nativeAdapter = database.NewPgSQLAdapter(c.nativeDB)
c.nativeAdapter = database.NewPgSQLAdapter(c.nativeDB, string(c.dbType))
case DatabaseTypeMSSQL:
// For MSSQL, we'll use the PgSQL adapter as it works with standard sql.DB
c.nativeAdapter = database.NewPgSQLAdapter(c.nativeDB)
c.nativeAdapter = database.NewPgSQLAdapter(c.nativeDB, string(c.dbType))
default:
return nil, ErrUnsupportedDatabase
}

View File

@@ -231,12 +231,14 @@ func (m *connectionManager) Connect(ctx context.Context) error {
// Close closes all database connections
func (m *connectionManager) Close() error {
// Stop the health checker before taking mu. performHealthCheck acquires
// a read lock, so waiting for the goroutine while holding the write lock
// would deadlock.
m.stopHealthChecker()
m.mu.Lock()
defer m.mu.Unlock()
// Stop health checker
m.stopHealthChecker()
// Close all connections
var errors []error
for name, conn := range m.connections {

View File

@@ -74,6 +74,10 @@ func (m *MockDatabase) GetUnderlyingDB() interface{} {
return m
}
func (m *MockDatabase) DriverName() string {
return "postgres"
}
// MockResult implements common.Result interface for testing
type MockResult struct {
rows int64

View File

@@ -645,11 +645,14 @@ func (h *Handler) getNotifyTopic(clientID, subscriptionID string) string {
// Database operation helpers (adapted from websocketspec)
func (h *Handler) getTableName(schema, entity string, model interface{}) string {
// Use entity as table name
tableName := entity
if schema != "" {
tableName = schema + "." + tableName
if h.db.DriverName() == "sqlite" {
tableName = schema + "_" + tableName
} else {
tableName = schema + "." + tableName
}
}
return tableName
}
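The driver-dependent join in `getTableName` can be verified with a standalone sketch (the handler receiver is dropped and the driver name passed explicitly, purely for illustration):

```go
package main

import "fmt"

// getTableName mirrors the handler logic above: schema and entity are
// joined with "_" for SQLite and "." for every other driver.
func getTableName(driverName, schema, entity string) string {
	tableName := entity
	if schema != "" {
		if driverName == "sqlite" {
			tableName = schema + "_" + tableName
		} else {
			tableName = schema + "." + tableName
		}
	}
	return tableName
}

func main() {
	fmt.Println(getTableName("postgres", "auth", "users")) // auth.users
	fmt.Println(getTableName("sqlite", "auth", "users"))   // auth_users
	fmt.Println(getTableName("sqlite", "", "users"))       // users
}
```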

View File

@@ -1380,10 +1380,16 @@ func (h *Handler) getSchemaAndTable(defaultSchema, entity string, model interfac
return schema, entity
}
// getTableName returns the full table name including schema (schema.table)
// getTableName returns the full table name including schema.
// For most drivers the result is "schema.table". For SQLite, which does not
// support schema-qualified names, the schema and table are joined with an
// underscore: "schema_table".
func (h *Handler) getTableName(schema, entity string, model interface{}) string {
schemaName, tableName := h.getSchemaAndTable(schema, entity, model)
if schemaName != "" {
if h.db.DriverName() == "sqlite" {
return fmt.Sprintf("%s_%s", schemaName, tableName)
}
return fmt.Sprintf("%s.%s", schemaName, tableName)
}
return tableName

View File

@@ -2015,11 +2015,18 @@ func (h *Handler) processChildRelationsForField(
return nil
}
// getTableNameForRelatedModel gets the table name for a related model
// getTableNameForRelatedModel gets the table name for a related model.
// If the model's TableName() is schema-qualified (e.g. "public.users") the
// separator is adjusted for the active driver: underscore for SQLite, dot otherwise.
func (h *Handler) getTableNameForRelatedModel(model interface{}, defaultName string) string {
if provider, ok := model.(common.TableNameProvider); ok {
tableName := provider.TableName()
if tableName != "" {
if schema, table := h.parseTableName(tableName); schema != "" {
if h.db.DriverName() == "sqlite" {
return fmt.Sprintf("%s_%s", schema, table)
}
}
return tableName
}
}
@@ -2264,10 +2271,16 @@ func (h *Handler) getSchemaAndTable(defaultSchema, entity string, model interfac
return schema, entity
}
// getTableName returns the full table name including schema (schema.table)
// getTableName returns the full table name including schema.
// For most drivers the result is "schema.table". For SQLite, which does not
// support schema-qualified names, the schema and table are joined with an
// underscore: "schema_table".
func (h *Handler) getTableName(schema, entity string, model interface{}) string {
schemaName, tableName := h.getSchemaAndTable(schema, entity, model)
if schemaName != "" {
if h.db.DriverName() == "sqlite" {
return fmt.Sprintf("%s_%s", schemaName, tableName)
}
return fmt.Sprintf("%s.%s", schemaName, tableName)
}
return tableName

View File

@@ -656,11 +656,14 @@ func (h *Handler) delete(hookCtx *HookContext) error {
// Helper methods
func (h *Handler) getTableName(schema, entity string, model interface{}) string {
// Use entity as table name
tableName := entity
if schema != "" {
tableName = schema + "." + tableName
if h.db.DriverName() == "sqlite" {
tableName = schema + "_" + tableName
} else {
tableName = schema + "." + tableName
}
}
return tableName
}

View File

@@ -82,6 +82,10 @@ func (m *MockDatabase) GetUnderlyingDB() interface{} {
return args.Get(0)
}
func (m *MockDatabase) DriverName() string {
return "postgres"
}
// MockSelectQuery is a mock implementation of common.SelectQuery
type MockSelectQuery struct {
mock.Mock

View File

@@ -1,5 +1,50 @@
# Python Implementation of the ResolveSpec API
# ResolveSpec Python Client - TODO
# Server
## Client Implementation & Testing
# Client
### 1. ResolveSpec Client API
- [ ] Core API implementation (read, create, update, delete, get_metadata)
- [ ] Unit tests for API functions
- [ ] Integration tests with server
- [ ] Error handling and edge cases
### 2. HeaderSpec Client API
- [ ] Client API implementation
- [ ] Unit tests
- [ ] Integration tests with server
### 3. FunctionSpec Client API
- [ ] Client API implementation
- [ ] Unit tests
- [ ] Integration tests with server
### 4. WebSocketSpec Client API
- [ ] WebSocketClient class implementation (read, create, update, delete, meta, subscribe, unsubscribe)
- [ ] Unit tests for WebSocketClient
- [ ] Connection handling tests
- [ ] Subscription tests
- [ ] Integration tests with server
### 5. Testing Infrastructure
- [ ] Set up test framework (pytest)
- [ ] Configure test coverage reporting (pytest-cov)
- [ ] Add test utilities and fixtures
- [ ] Create test documentation
- [ ] Package and publish to PyPI
## Documentation
- [ ] API reference documentation
- [ ] Usage examples for each client API
- [ ] Installation guide
- [ ] Contributing guidelines
- [ ] README with quick start
---
**Last Updated:** 2026-02-07

todo.md
View File

@@ -2,36 +2,98 @@
This document tracks incomplete features and improvements for the ResolveSpec project.
## In Progress
### Database Layer
- [x] SQLite schema translation (schema.table → schema_table)
- [x] Driver name normalization across adapters
- [x] Database Connection Manager (dbmanager) package
### Documentation
- Ensure all new features are documented in README.md
- Update examples to showcase new functionality
- Add migration notes if any breaking changes are introduced
- [x] Add dbmanager to README
- [x] Add WebSocketSpec to top-level intro
- [x] Add MQTTSpec to top-level intro
- [x] Remove migration sections from README
- [ ] Complete API reference documentation
- [ ] Add examples for all supported databases
## Planned Features
1. **Test Coverage**: Increase from 20% to 70%+
- Add integration tests for CRUD operations
- Add unit tests for security providers
- Add concurrency tests for model registry
### ResolveSpec JS Client Implementation & Testing
1. **ResolveSpec Client API (resolvespec-js)**
- [x] Core API implementation (read, create, update, delete, getMetadata)
- [ ] Unit tests for API functions
- [ ] Integration tests with server
- [ ] Error handling and edge cases
2. **HeaderSpec Client API (resolvespec-js)**
- [ ] Client API implementation
- [ ] Unit tests
- [ ] Integration tests with server
3. **FunctionSpec Client API (resolvespec-js)**
- [ ] Client API implementation
- [ ] Unit tests
- [ ] Integration tests with server
4. **WebSocketSpec Client API (resolvespec-js)**
- [x] WebSocketClient class implementation (read, create, update, delete, meta, subscribe, unsubscribe)
- [ ] Unit tests for WebSocketClient
- [ ] Connection handling tests
- [ ] Subscription tests
- [ ] Integration tests with server
5. **resolvespec-js Testing Infrastructure**
- [ ] Set up test framework (Jest or Vitest)
- [ ] Configure test coverage reporting
- [ ] Add test utilities and mocks
- [ ] Create test documentation
### ResolveSpec Python Client Implementation & Testing
See [`resolvespec-python/todo.md`](./resolvespec-python/todo.md) for detailed Python client implementation tasks.
### Core Functionality
1. **Enhanced Preload Filtering**
- [ ] Column selection for nested preloads
- [ ] Advanced filtering conditions for relations
- [ ] Performance optimization for deep nesting
2. **Advanced Query Features**
- [ ] Custom SQL join support
- [ ] Computed column improvements
- [ ] Recursive query support
3. **Testing & Quality**
- [ ] Increase test coverage to 70%+
- [ ] Add integration tests for all ORMs
- [ ] Add concurrency tests for thread safety
- [ ] Performance benchmarks
### Infrastructure
- [ ] Improved error handling and reporting
- [ ] Enhanced logging capabilities
- [ ] Additional monitoring metrics
- [ ] Performance profiling tools
## Documentation Tasks
- [ ] Complete API reference
- [ ] Add troubleshooting guides
- [ ] Create architecture diagrams
- [ ] Expand database adapter documentation
## Known Issues
- [ ] Long preload alias names may exceed PostgreSQL identifier limit
- [ ] Some edge cases in computed column handling
---
## Priority Ranking
1. **High Priority**
- Column Selection and Filtering for Preloads (#1)
- Proper Condition Handling for Bun Preloads (#4)
2. **Medium Priority**
- Custom SQL Join Support (#3)
- Recursive JSON Cleaning (#2)
3. **Low Priority**
- Modernize Go Type Declarations (#5)
---
**Last Updated:** 2025-12-09
**Last Updated:** 2026-02-07
**Updated:** Added resolvespec-js client testing and implementation tasks