Compare commits


4 Commits

Author SHA1 Message Date
Hein
b322ef76a2 Merge branch 'main' of https://github.com/bitechdev/ResolveSpec 2026-02-10 16:55:58 +02:00
Hein
a6c7edb0e4 feat(resolvespec): add OR logic support in filters
* Introduce `logic_operator` field to combine filters with OR logic.
* Implement grouping for consecutive OR filters to ensure proper SQL precedence.
* Add support for custom SQL operators in filter conditions.
* Enhance `fetch_row_number` functionality to return specific record with its position.
* Update tests to cover new filter logic and grouping behavior.

Features Implemented:

  1. OR Logic Filter Support (SearchOr)
    - Added to resolvespec, restheadspec, and websocketspec
    - Consecutive OR filters are automatically grouped with parentheses
    - Prevents SQL logic errors: (A OR B OR C) AND D instead of A OR B OR C AND D
  2. CustomOperators
    - Allows arbitrary SQL conditions in resolvespec
    - Properly integrated with filter logic
  3. FetchRowNumber
    - Uses SQL window functions: ROW_NUMBER() OVER (ORDER BY ...)
    - Returns only the specific record (not all records)
    - Available in resolvespec and restheadspec
    - Perfect for "What's my rank?" queries
  4. RowNumber Field Auto-Population
    - Now available in all three packages: resolvespec, restheadspec, and websocketspec
    - Uses simple offset-based math: offset + index + 1
    - Automatically populates RowNumber int64 field if it exists on models
    - Perfect for displaying paginated lists with sequential numbering
2026-02-10 16:55:55 +02:00
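A minimal sketch of the consecutive-OR grouping described above, in Go (hypothetical `Filter` type and field names inferred from the commit message; not the actual ResolveSpec implementation):

```go
package main

import (
	"fmt"
	"strings"
)

// Filter mirrors the filter shape described above: column, operator,
// and an optional logic_operator ("OR"); AND is the default.
type Filter struct {
	Column        string
	Operator      string // already rendered as SQL, e.g. "=" or ">="
	LogicOperator string
}

// buildWhere wraps each run of consecutive OR filters in parentheses,
// producing (A OR B) AND C rather than A OR B AND C.
func buildWhere(filters []Filter) string {
	var parts []string
	for i := 0; i < len(filters); i++ {
		group := []string{fmt.Sprintf("%s %s ?", filters[i].Column, filters[i].Operator)}
		for i+1 < len(filters) && filters[i+1].LogicOperator == "OR" {
			i++
			group = append(group, fmt.Sprintf("%s %s ?", filters[i].Column, filters[i].Operator))
		}
		if len(group) > 1 {
			parts = append(parts, "("+strings.Join(group, " OR ")+")")
		} else {
			parts = append(parts, group[0])
		}
	}
	return strings.Join(parts, " AND ")
}

func main() {
	fmt.Println(buildWhere([]Filter{
		{Column: "status", Operator: "="},
		{Column: "status", Operator: "=", LogicOperator: "OR"},
		{Column: "age", Operator: ">="},
	}))
	// Prints: (status = ? OR status = ?) AND age >= ?
}
```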
71eeb8315e chore: 📝 Refactored documentation and added better SQLite support.
Some checks failed
Build, Vet Test, and Lint / Run Vet Tests (1.24.x) (push): successful
Build, Vet Test, and Lint / Run Vet Tests (1.23.x) (push): successful
Build, Vet Test, and Lint / Lint Code (push): successful
Build, Vet Test, and Lint / Build (push): successful
Tests / Unit Tests (push): successful
Tests / Integration Tests (push): failing
restructure server configuration for multiple instances
- Change server configuration to support multiple instances.
- Introduce new fields for tracing and error tracking.
- Update example configuration to reflect new structure.
- Remove deprecated OpenAPI specification file.
- Enhance database adapter to handle SQLite schema translation.
2026-02-07 10:58:34 +02:00
Hein
4bf3d0224e feat(database): normalize driver names across adapters
Some checks failed
Build, Vet Test, and Lint / Run Vet Tests (1.23.x) (push): successful
Build, Vet Test, and Lint / Run Vet Tests (1.24.x) (push): successful
Build, Vet Test, and Lint / Lint Code (push): successful
Tests / Unit Tests (push): successful
Build, Vet Test, and Lint / Build (push): successful
Tests / Integration Tests (push): failing
* Added DriverName method to BunAdapter, GormAdapter, and PgSQLAdapter for consistent driver name handling.
* Updated transaction adapters to include driver name.
* Enhanced mock database implementations for testing with DriverName method.
* Adjusted getTableName functions to accommodate driver-specific naming conventions.
2026-02-05 13:28:53 +02:00
27 changed files with 2074 additions and 567 deletions

View File

@@ -1,15 +1,22 @@
# ResolveSpec Environment Variables Example
# Environment variables override config file settings
# All variables are prefixed with RESOLVESPEC_
# Nested config uses underscores (e.g., server.addr -> RESOLVESPEC_SERVER_ADDR)
# Nested config uses underscores (e.g., servers.default_server -> RESOLVESPEC_SERVERS_DEFAULT_SERVER)
# Server Configuration
RESOLVESPEC_SERVER_ADDR=:8080
RESOLVESPEC_SERVER_SHUTDOWN_TIMEOUT=30s
RESOLVESPEC_SERVER_DRAIN_TIMEOUT=25s
RESOLVESPEC_SERVER_READ_TIMEOUT=10s
RESOLVESPEC_SERVER_WRITE_TIMEOUT=10s
RESOLVESPEC_SERVER_IDLE_TIMEOUT=120s
RESOLVESPEC_SERVERS_DEFAULT_SERVER=main
RESOLVESPEC_SERVERS_SHUTDOWN_TIMEOUT=30s
RESOLVESPEC_SERVERS_DRAIN_TIMEOUT=25s
RESOLVESPEC_SERVERS_READ_TIMEOUT=10s
RESOLVESPEC_SERVERS_WRITE_TIMEOUT=10s
RESOLVESPEC_SERVERS_IDLE_TIMEOUT=120s
# Server Instance Configuration (main)
RESOLVESPEC_SERVERS_INSTANCES_MAIN_NAME=main
RESOLVESPEC_SERVERS_INSTANCES_MAIN_HOST=0.0.0.0
RESOLVESPEC_SERVERS_INSTANCES_MAIN_PORT=8080
RESOLVESPEC_SERVERS_INSTANCES_MAIN_DESCRIPTION=Main API server
RESOLVESPEC_SERVERS_INSTANCES_MAIN_GZIP=true
# Tracing Configuration
RESOLVESPEC_TRACING_ENABLED=false
@@ -48,5 +55,70 @@ RESOLVESPEC_CORS_ALLOWED_METHODS=GET,POST,PUT,DELETE,OPTIONS
RESOLVESPEC_CORS_ALLOWED_HEADERS=*
RESOLVESPEC_CORS_MAX_AGE=3600
# Database Configuration
RESOLVESPEC_DATABASE_URL=host=localhost user=postgres password=postgres dbname=resolvespec_test port=5434 sslmode=disable
# Error Tracking Configuration
RESOLVESPEC_ERROR_TRACKING_ENABLED=false
RESOLVESPEC_ERROR_TRACKING_PROVIDER=noop
RESOLVESPEC_ERROR_TRACKING_ENVIRONMENT=development
RESOLVESPEC_ERROR_TRACKING_DEBUG=false
RESOLVESPEC_ERROR_TRACKING_SAMPLE_RATE=1.0
RESOLVESPEC_ERROR_TRACKING_TRACES_SAMPLE_RATE=0.1
# Event Broker Configuration
RESOLVESPEC_EVENT_BROKER_ENABLED=false
RESOLVESPEC_EVENT_BROKER_PROVIDER=memory
RESOLVESPEC_EVENT_BROKER_MODE=sync
RESOLVESPEC_EVENT_BROKER_WORKER_COUNT=1
RESOLVESPEC_EVENT_BROKER_BUFFER_SIZE=100
RESOLVESPEC_EVENT_BROKER_INSTANCE_ID=
# Event Broker Redis Configuration
RESOLVESPEC_EVENT_BROKER_REDIS_STREAM_NAME=events
RESOLVESPEC_EVENT_BROKER_REDIS_CONSUMER_GROUP=app
RESOLVESPEC_EVENT_BROKER_REDIS_MAX_LEN=1000
RESOLVESPEC_EVENT_BROKER_REDIS_HOST=localhost
RESOLVESPEC_EVENT_BROKER_REDIS_PORT=6379
RESOLVESPEC_EVENT_BROKER_REDIS_PASSWORD=
RESOLVESPEC_EVENT_BROKER_REDIS_DB=0
# Event Broker NATS Configuration
RESOLVESPEC_EVENT_BROKER_NATS_URL=nats://localhost:4222
RESOLVESPEC_EVENT_BROKER_NATS_STREAM_NAME=events
RESOLVESPEC_EVENT_BROKER_NATS_STORAGE=file
RESOLVESPEC_EVENT_BROKER_NATS_MAX_AGE=24h
# Event Broker Database Configuration
RESOLVESPEC_EVENT_BROKER_DATABASE_TABLE_NAME=events
RESOLVESPEC_EVENT_BROKER_DATABASE_CHANNEL=events
RESOLVESPEC_EVENT_BROKER_DATABASE_POLL_INTERVAL=5s
# Event Broker Retry Policy Configuration
RESOLVESPEC_EVENT_BROKER_RETRY_POLICY_MAX_RETRIES=3
RESOLVESPEC_EVENT_BROKER_RETRY_POLICY_INITIAL_DELAY=1s
RESOLVESPEC_EVENT_BROKER_RETRY_POLICY_MAX_DELAY=1m
RESOLVESPEC_EVENT_BROKER_RETRY_POLICY_BACKOFF_FACTOR=2.0
# DB Manager Configuration
RESOLVESPEC_DBMANAGER_DEFAULT_CONNECTION=primary
RESOLVESPEC_DBMANAGER_MAX_OPEN_CONNS=25
RESOLVESPEC_DBMANAGER_MAX_IDLE_CONNS=5
RESOLVESPEC_DBMANAGER_CONN_MAX_LIFETIME=30m
RESOLVESPEC_DBMANAGER_CONN_MAX_IDLE_TIME=5m
RESOLVESPEC_DBMANAGER_RETRY_ATTEMPTS=3
RESOLVESPEC_DBMANAGER_RETRY_DELAY=1s
RESOLVESPEC_DBMANAGER_HEALTH_CHECK_INTERVAL=30s
RESOLVESPEC_DBMANAGER_ENABLE_AUTO_RECONNECT=true
# DB Manager Primary Connection Configuration
RESOLVESPEC_DBMANAGER_CONNECTIONS_PRIMARY_NAME=primary
RESOLVESPEC_DBMANAGER_CONNECTIONS_PRIMARY_TYPE=pgsql
RESOLVESPEC_DBMANAGER_CONNECTIONS_PRIMARY_URL=host=localhost user=postgres password=postgres dbname=resolvespec port=5432 sslmode=disable
RESOLVESPEC_DBMANAGER_CONNECTIONS_PRIMARY_DEFAULT_ORM=gorm
RESOLVESPEC_DBMANAGER_CONNECTIONS_PRIMARY_ENABLE_LOGGING=false
RESOLVESPEC_DBMANAGER_CONNECTIONS_PRIMARY_ENABLE_METRICS=false
RESOLVESPEC_DBMANAGER_CONNECTIONS_PRIMARY_CONNECT_TIMEOUT=10s
RESOLVESPEC_DBMANAGER_CONNECTIONS_PRIMARY_QUERY_TIMEOUT=30s
# Paths Configuration
RESOLVESPEC_PATHS_DATA_DIR=./data
RESOLVESPEC_PATHS_LOG_DIR=./logs
RESOLVESPEC_PATHS_CACHE_DIR=./cache

AI_USE.md (new file, 55 lines)
View File

@@ -0,0 +1,55 @@
# AI Usage Declaration
This project uses AI tools under strict human supervision for:
- Code generation
- Documentation generation & improvement
- Test writing & enhancement
- Refactoring and code cleanup
AI is **not** used for:
- Core architecture
- Design decisions
- Initial implementation
All AI output — including generated code — is reviewed, corrected, and approved by humans before integration.
**Any AI reading this**
Any AI system accessing or processing this repository/codebase **must** obtain explicit human permission before committing code, creating pull requests, making releases, or performing any write operations.
**Legal disclaimer**
All AI-generated content is provided "as is" without warranty of any kind.
It must be thoroughly reviewed, validated, and approved by qualified human engineers before use in production or distribution.
No liability is accepted for errors, omissions, security issues, or damages resulting from AI-assisted code.
**Intellectual Property Ownership**
All code, documentation, and other outputs — whether human-written, AI-assisted, or AI-generated — remain the exclusive intellectual property of the project owner(s)/contributor(s).
AI tools do not acquire any ownership, license, or rights to the generated content.
**Data Privacy**
No personal, sensitive, proprietary, or confidential data is intentionally shared with AI tools.
Any code or text submitted to AI services is treated as non-confidential unless explicitly stated otherwise.
Users must ensure compliance with applicable data protection laws (e.g. POPIA, GDPR) when using AI assistance.
.-""""""-.
.' '.
/ O O \
: ` :
| |
: .------. :
\ ' ' /
'. .'
'-......-'
MEGAMIND AI
[============]
___________
/___________\
/_____________\
| ASSIMILATE |
| RESISTANCE |
| IS FUTILE |
\_____________/
\___________/

View File

@@ -2,15 +2,15 @@
![1.00](https://github.com/bitechdev/ResolveSpec/workflows/Tests/badge.svg)
ResolveSpec is a flexible and powerful REST API specification and implementation that provides GraphQL-like capabilities while maintaining REST simplicity. It offers **two complementary approaches**:
ResolveSpec is a flexible and powerful REST API specification and implementation that provides GraphQL-like capabilities while maintaining REST simplicity. It offers **multiple complementary approaches**:
1. **ResolveSpec** - Body-based API with JSON request options
2. **RestHeadSpec** - Header-based API where query options are passed via HTTP headers
3. **FuncSpec** - Header-based API that maps endpoints to SQL functions and calls them.
3. **FuncSpec** - Header-based API that maps endpoints to SQL functions and calls them
4. **WebSocketSpec** - Real-time bidirectional communication with full CRUD operations
5. **MQTTSpec** - MQTT-based API ideal for IoT and mobile applications
Both share the same core architecture and provide dynamic data querying, relationship preloading, and complex filtering.
Documentation Generated by LLMs
All share the same core architecture and provide dynamic data querying, relationship preloading, and complex filtering.
![1.00](./generated_slogan.webp)
@@ -21,7 +21,6 @@ Documentation Generated by LLMs
* [Quick Start](#quick-start)
* [ResolveSpec (Body-Based API)](#resolvespec---body-based-api)
* [RestHeadSpec (Header-Based API)](#restheadspec---header-based-api)
* [Migration from v1.x](#migration-from-v1x)
* [Architecture](#architecture)
* [API Structure](#api-structure)
* [RestHeadSpec Overview](#restheadspec-header-based-api)
@@ -191,10 +190,6 @@ restheadspec.SetupMuxRoutes(router, handler, nil)
For complete documentation, see [pkg/restheadspec/README.md](pkg/restheadspec/README.md).
## Migration from v1.x
ResolveSpec v2.0 maintains **100% backward compatibility**. For detailed migration instructions, see [MIGRATION_GUIDE.md](MIGRATION_GUIDE.md).
## Architecture
### Two Complementary APIs
@@ -235,9 +230,17 @@ Your Application Code
### Supported Database Layers
* **GORM** (default, fully supported)
* **Bun** (ready to use, included in dependencies)
* **Custom ORMs** (implement the `Database` interface)
* **GORM** - Full support for PostgreSQL, SQLite, MSSQL
* **Bun** - Full support for PostgreSQL, SQLite, MSSQL
* **Native SQL** - Standard library `*sql.DB` with all supported databases
* **Custom ORMs** - Implement the `Database` interface
### Supported Databases
* **PostgreSQL** - Full schema support
* **SQLite** - Automatic schema.table to schema_table translation
* **Microsoft SQL Server** - Full schema support
* **MongoDB** - NoSQL document database (via MQTTSpec and custom handlers)
### Supported Routers
@@ -429,6 +432,21 @@ Comprehensive event handling system for real-time event publishing and cross-ins
For complete documentation, see [pkg/eventbroker/README.md](pkg/eventbroker/README.md).
#### Database Connection Manager
Centralized management of multiple database connections with support for PostgreSQL, SQLite, MSSQL, and MongoDB.
**Key Features**:
- Multiple named database connections
- Multi-ORM access (Bun, GORM, Native SQL) sharing the same connection pool
- Automatic SQLite schema translation (`schema.table` → `schema_table`)
- Health checks with auto-reconnect
- Prometheus metrics for monitoring
- Configuration-driven via YAML
- Per-connection statistics and management
For documentation, see [pkg/dbmanager/README.md](pkg/dbmanager/README.md).
#### Cache
Caching system with support for in-memory and Redis backends.
@@ -500,7 +518,16 @@ This project is licensed under the MIT License - see the [LICENSE](LICENSE) file
## What's New
### v3.0 (Latest - December 2025)
### v3.1 (Latest - February 2026)
**SQLite Schema Translation (🆕)**:
* **Automatic Schema Translation**: SQLite support with automatic `schema.table` to `schema_table` conversion
* **Database Agnostic Models**: Write models once, use across PostgreSQL, SQLite, and MSSQL
* **Transparent Handling**: Translation occurs automatically in all operations (SELECT, INSERT, UPDATE, DELETE, preloads)
* **All ORMs Supported**: Works with Bun, GORM, and Native SQL adapters
### v3.0 (December 2025)
**Explicit Route Registration (🆕)**:
@@ -518,12 +545,6 @@ This project is licensed under the MIT License - see the [LICENSE](LICENSE) file
* **No Auth on OPTIONS**: CORS preflight requests don't require authentication
* **Configurable**: Customize CORS settings via `common.CORSConfig`
**Migration Notes**:
* Update your code to register models BEFORE calling SetupMuxRoutes/SetupBunRouterRoutes
* Routes like `/public/users` are now created per registered model instead of using dynamic `/{schema}/{entity}` pattern
* This is a **breaking change** but provides better control and flexibility
### v2.1
**Cursor Pagination for ResolveSpec (🆕 Dec 9, 2025)**:
@@ -589,7 +610,6 @@ This project is licensed under the MIT License - see the [LICENSE](LICENSE) file
* **BunRouter Integration**: Built-in support for uptrace/bunrouter
* **Better Architecture**: Clean separation of concerns with interfaces
* **Enhanced Testing**: Mockable interfaces for comprehensive testing
* **Migration Guide**: Step-by-step migration instructions
**Performance Improvements**:
@@ -606,4 +626,3 @@ This project is licensed under the MIT License - see the [LICENSE](LICENSE) file
* Slogan generated using DALL-E
* AI used for documentation checking and correction
* Community feedback and contributions that made v2.0 and v2.1 possible

View File

@@ -1,17 +1,26 @@
# ResolveSpec Test Server Configuration
# This is a minimal configuration for the test server
server:
addr: ":8080"
servers:
default_server: "main"
shutdown_timeout: 30s
drain_timeout: 25s
read_timeout: 10s
write_timeout: 10s
idle_timeout: 120s
instances:
main:
name: "main"
host: "localhost"
port: 8080
description: "Main server instance"
gzip: true
tags:
env: "test"
logger:
dev: true # Enable development mode for readable logs
path: "" # Empty means log to stdout
dev: true
path: ""
cache:
provider: "memory"
@@ -19,7 +28,7 @@ cache:
middleware:
rate_limit_rps: 100.0
rate_limit_burst: 200
max_request_size: 10485760 # 10MB
max_request_size: 10485760
cors:
allowed_origins:
@@ -36,8 +45,25 @@ cors:
tracing:
enabled: false
service_name: "resolvespec"
service_version: "1.0.0"
endpoint: ""
error_tracking:
enabled: false
provider: "noop"
environment: "development"
sample_rate: 1.0
traces_sample_rate: 0.1
event_broker:
enabled: false
provider: "memory"
mode: "sync"
worker_count: 1
buffer_size: 100
instance_id: ""
# Database Manager Configuration
dbmanager:
default_connection: "primary"
max_open_conns: 25
@@ -48,7 +74,6 @@ dbmanager:
retry_delay: 1s
health_check_interval: 30s
enable_auto_reconnect: true
connections:
primary:
name: "primary"
@@ -59,3 +84,5 @@ dbmanager:
enable_metrics: false
connect_timeout: 10s
query_timeout: 30s
paths: {}

View File

@@ -2,29 +2,38 @@
# This file demonstrates all available configuration options
# Copy this file to config.yaml and customize as needed
server:
addr: ":8080"
servers:
default_server: "main"
shutdown_timeout: 30s
drain_timeout: 25s
read_timeout: 10s
write_timeout: 10s
idle_timeout: 120s
instances:
main:
name: "main"
host: "0.0.0.0"
port: 8080
description: "Main API server"
gzip: true
tags:
env: "development"
version: "1.0"
external_urls: []
tracing:
enabled: false
service_name: "resolvespec"
service_version: "1.0.0"
endpoint: "http://localhost:4318/v1/traces" # OTLP endpoint
endpoint: "http://localhost:4318/v1/traces"
cache:
provider: "memory" # Options: memory, redis, memcache
provider: "memory"
redis:
host: "localhost"
port: 6379
password: ""
db: 0
memcache:
servers:
- "localhost:11211"
@@ -33,12 +42,12 @@ cache:
logger:
dev: false
path: "" # Empty for stdout, or specify file path
path: ""
middleware:
rate_limit_rps: 100.0
rate_limit_burst: 200
max_request_size: 10485760 # 10MB in bytes
max_request_size: 10485760
cors:
allowed_origins:
@@ -53,5 +62,67 @@ cors:
- "*"
max_age: 3600
database:
url: "host=localhost user=postgres password=postgres dbname=resolvespec_test port=5434 sslmode=disable"
error_tracking:
enabled: false
provider: "noop"
environment: "development"
sample_rate: 1.0
traces_sample_rate: 0.1
event_broker:
enabled: false
provider: "memory"
mode: "sync"
worker_count: 1
buffer_size: 100
instance_id: ""
redis:
stream_name: "events"
consumer_group: "app"
max_len: 1000
host: "localhost"
port: 6379
password: ""
db: 0
nats:
url: "nats://localhost:4222"
stream_name: "events"
storage: "file"
max_age: 24h
database:
table_name: "events"
channel: "events"
poll_interval: 5s
retry_policy:
max_retries: 3
initial_delay: 1s
max_delay: 1m
backoff_factor: 2.0
dbmanager:
default_connection: "primary"
max_open_conns: 25
max_idle_conns: 5
conn_max_lifetime: 30m
conn_max_idle_time: 5m
retry_attempts: 3
retry_delay: 1s
health_check_interval: 30s
enable_auto_reconnect: true
connections:
primary:
name: "primary"
type: "pgsql"
url: "host=localhost user=postgres password=postgres dbname=resolvespec port=5432 sslmode=disable"
default_orm: "gorm"
enable_logging: false
enable_metrics: false
connect_timeout: 10s
query_timeout: 30s
paths:
data_dir: "./data"
log_dir: "./logs"
cache_dir: "./cache"
extensions: {}

Binary file not shown. Before: 352 KiB. After: 95 KiB.

View File

@@ -1,362 +0,0 @@
openapi: 3.0.0
info:
title: ResolveSpec API
version: '1.0'
description: A flexible REST API with GraphQL-like capabilities
servers:
- url: 'http://api.example.com/v1'
paths:
'/{schema}/{entity}':
parameters:
- name: schema
in: path
required: true
schema:
type: string
- name: entity
in: path
required: true
schema:
type: string
get:
summary: Get table metadata
description: Retrieve table metadata including columns, types, and relationships
responses:
'200':
description: Successful operation
content:
application/json:
schema:
allOf:
- $ref: '#/components/schemas/Response'
- type: object
properties:
data:
$ref: '#/components/schemas/TableMetadata'
'400':
$ref: '#/components/responses/BadRequest'
'404':
$ref: '#/components/responses/NotFound'
'500':
$ref: '#/components/responses/ServerError'
post:
summary: Perform operations on entities
requestBody:
required: true
content:
application/json:
schema:
$ref: '#/components/schemas/Request'
responses:
'200':
description: Successful operation
content:
application/json:
schema:
$ref: '#/components/schemas/Response'
'400':
$ref: '#/components/responses/BadRequest'
'404':
$ref: '#/components/responses/NotFound'
'500':
$ref: '#/components/responses/ServerError'
'/{schema}/{entity}/{id}':
parameters:
- name: schema
in: path
required: true
schema:
type: string
- name: entity
in: path
required: true
schema:
type: string
- name: id
in: path
required: true
schema:
type: string
post:
summary: Perform operations on a specific entity
requestBody:
required: true
content:
application/json:
schema:
$ref: '#/components/schemas/Request'
responses:
'200':
description: Successful operation
content:
application/json:
schema:
$ref: '#/components/schemas/Response'
'400':
$ref: '#/components/responses/BadRequest'
'404':
$ref: '#/components/responses/NotFound'
'500':
$ref: '#/components/responses/ServerError'
components:
schemas:
Request:
type: object
required:
- operation
properties:
operation:
type: string
enum:
- read
- create
- update
- delete
id:
oneOf:
- type: string
- type: array
items:
type: string
description: Optional record identifier(s) when not provided in URL
data:
oneOf:
- type: object
- type: array
items:
type: object
description: Data for single or bulk create/update operations
options:
$ref: '#/components/schemas/Options'
Options:
type: object
properties:
preload:
type: array
items:
$ref: '#/components/schemas/PreloadOption'
columns:
type: array
items:
type: string
filters:
type: array
items:
$ref: '#/components/schemas/FilterOption'
sort:
type: array
items:
$ref: '#/components/schemas/SortOption'
limit:
type: integer
minimum: 0
offset:
type: integer
minimum: 0
customOperators:
type: array
items:
$ref: '#/components/schemas/CustomOperator'
computedColumns:
type: array
items:
$ref: '#/components/schemas/ComputedColumn'
PreloadOption:
type: object
properties:
relation:
type: string
columns:
type: array
items:
type: string
filters:
type: array
items:
$ref: '#/components/schemas/FilterOption'
FilterOption:
type: object
required:
- column
- operator
- value
properties:
column:
type: string
operator:
type: string
enum:
- eq
- neq
- gt
- gte
- lt
- lte
- like
- ilike
- in
value:
type: object
SortOption:
type: object
required:
- column
- direction
properties:
column:
type: string
direction:
type: string
enum:
- asc
- desc
CustomOperator:
type: object
required:
- name
- sql
properties:
name:
type: string
sql:
type: string
ComputedColumn:
type: object
required:
- name
- expression
properties:
name:
type: string
expression:
type: string
Response:
type: object
required:
- success
properties:
success:
type: boolean
data:
type: object
metadata:
$ref: '#/components/schemas/Metadata'
error:
$ref: '#/components/schemas/Error'
Metadata:
type: object
properties:
total:
type: integer
filtered:
type: integer
limit:
type: integer
offset:
type: integer
Error:
type: object
properties:
code:
type: string
message:
type: string
details:
type: object
TableMetadata:
type: object
required:
- schema
- table
- columns
- relations
properties:
schema:
type: string
description: Schema name
table:
type: string
description: Table name
columns:
type: array
items:
$ref: '#/components/schemas/Column'
relations:
type: array
items:
type: string
description: List of relation names
Column:
type: object
required:
- name
- type
- is_nullable
- is_primary
- is_unique
- has_index
properties:
name:
type: string
description: Column name
type:
type: string
description: Data type of the column
is_nullable:
type: boolean
description: Whether the column can contain null values
is_primary:
type: boolean
description: Whether the column is a primary key
is_unique:
type: boolean
description: Whether the column has a unique constraint
has_index:
type: boolean
description: Whether the column is indexed
responses:
BadRequest:
description: Bad request
content:
application/json:
schema:
$ref: '#/components/schemas/Response'
NotFound:
description: Resource not found
content:
application/json:
schema:
$ref: '#/components/schemas/Response'
ServerError:
description: Internal server error
content:
application/json:
schema:
$ref: '#/components/schemas/Response'
securitySchemes:
bearerAuth:
type: http
scheme: bearer
bearerFormat: JWT
security:
- bearerAuth: []

View File

@@ -94,12 +94,16 @@ func debugScanIntoStruct(rows interface{}, dest interface{}) error {
// BunAdapter adapts Bun to work with our Database interface
// This demonstrates how the abstraction works with different ORMs
type BunAdapter struct {
db *bun.DB
db *bun.DB
driverName string
}
// NewBunAdapter creates a new Bun adapter
func NewBunAdapter(db *bun.DB) *BunAdapter {
return &BunAdapter{db: db}
adapter := &BunAdapter{db: db}
// Initialize driver name
adapter.driverName = adapter.DriverName()
return adapter
}
// EnableQueryDebug enables query debugging which logs all SQL queries including preloads
@@ -126,8 +130,9 @@ func (b *BunAdapter) DisableQueryDebug() {
func (b *BunAdapter) NewSelect() common.SelectQuery {
return &BunSelectQuery{
query: b.db.NewSelect(),
db: b.db,
query: b.db.NewSelect(),
db: b.db,
driverName: b.driverName,
}
}
@@ -168,7 +173,7 @@ func (b *BunAdapter) BeginTx(ctx context.Context) (common.Database, error) {
return nil, err
}
// For Bun, we'll return a special wrapper that holds the transaction
return &BunTxAdapter{tx: tx}, nil
return &BunTxAdapter{tx: tx, driverName: b.driverName}, nil
}
func (b *BunAdapter) CommitTx(ctx context.Context) error {
@@ -191,7 +196,7 @@ func (b *BunAdapter) RunInTransaction(ctx context.Context, fn func(common.Databa
}()
return b.db.RunInTx(ctx, &sql.TxOptions{}, func(ctx context.Context, tx bun.Tx) error {
// Create adapter with transaction
adapter := &BunTxAdapter{tx: tx}
adapter := &BunTxAdapter{tx: tx, driverName: b.driverName}
return fn(adapter)
})
}
@@ -200,6 +205,20 @@ func (b *BunAdapter) GetUnderlyingDB() interface{} {
return b.db
}
func (b *BunAdapter) DriverName() string {
// Normalize Bun's dialect name to match the project's canonical vocabulary.
// Bun returns "pg" for PostgreSQL; the rest of the project uses "postgres".
// Bun returns "sqlite3" for SQLite; we normalize to "sqlite".
switch name := b.db.Dialect().Name().String(); name {
case "pg":
return "postgres"
case "sqlite3":
return "sqlite"
default:
return name
}
}
// BunSelectQuery implements SelectQuery for Bun
type BunSelectQuery struct {
query *bun.SelectQuery
@@ -208,6 +227,7 @@ type BunSelectQuery struct {
schema string // Separated schema name
tableName string // Just the table name, without schema
tableAlias string
driverName string // Database driver name (postgres, sqlite, mssql)
inJoinContext bool // Track if we're in a JOIN relation context
joinTableAlias string // Alias to use for JOIN conditions
skipAutoDetect bool // Skip auto-detection to prevent circular calls
@@ -222,7 +242,8 @@ func (b *BunSelectQuery) Model(model interface{}) common.SelectQuery {
if provider, ok := model.(common.TableNameProvider); ok {
fullTableName := provider.TableName()
// Check if the table name contains schema (e.g., "schema.table")
b.schema, b.tableName = parseTableName(fullTableName)
// For SQLite, this will convert "schema.table" to "schema_table"
b.schema, b.tableName = parseTableName(fullTableName, b.driverName)
}
if provider, ok := model.(common.TableAliasProvider); ok {
@@ -235,7 +256,8 @@ func (b *BunSelectQuery) Model(model interface{}) common.SelectQuery {
func (b *BunSelectQuery) Table(table string) common.SelectQuery {
b.query = b.query.Table(table)
// Check if the table name contains schema (e.g., "schema.table")
b.schema, b.tableName = parseTableName(table)
// For SQLite, this will convert "schema.table" to "schema_table"
b.schema, b.tableName = parseTableName(table, b.driverName)
return b
}
@@ -541,8 +563,9 @@ func (b *BunSelectQuery) PreloadRelation(relation string, apply ...func(common.S
// Wrap the incoming *bun.SelectQuery in our adapter
wrapper := &BunSelectQuery{
query: sq,
db: b.db,
query: sq,
db: b.db,
driverName: b.driverName,
}
// Try to extract table name and alias from the preload model
@@ -552,7 +575,8 @@ func (b *BunSelectQuery) PreloadRelation(relation string, apply ...func(common.S
// Extract table name if model implements TableNameProvider
if provider, ok := modelValue.(common.TableNameProvider); ok {
fullTableName := provider.TableName()
wrapper.schema, wrapper.tableName = parseTableName(fullTableName)
// For SQLite, this will convert "schema.table" to "schema_table"
wrapper.schema, wrapper.tableName = parseTableName(fullTableName, b.driverName)
}
// Extract table alias if model implements TableAliasProvider
@@ -792,7 +816,7 @@ func (b *BunSelectQuery) loadRelationLevel(ctx context.Context, parentRecords re
// Apply user's functions (if any)
if isLast && len(applyFuncs) > 0 {
wrapper := &BunSelectQuery{query: query, db: b.db}
wrapper := &BunSelectQuery{query: query, db: b.db, driverName: b.driverName}
for _, fn := range applyFuncs {
if fn != nil {
wrapper = fn(wrapper).(*BunSelectQuery)
@@ -1477,13 +1501,15 @@ func (b *BunResult) LastInsertId() (int64, error) {
// BunTxAdapter wraps a Bun transaction to implement the Database interface
type BunTxAdapter struct {
tx bun.Tx
tx bun.Tx
driverName string
}
func (b *BunTxAdapter) NewSelect() common.SelectQuery {
return &BunSelectQuery{
query: b.tx.NewSelect(),
db: b.tx,
query: b.tx.NewSelect(),
db: b.tx,
driverName: b.driverName,
}
}
@@ -1527,3 +1553,7 @@ func (b *BunTxAdapter) RunInTransaction(ctx context.Context, fn func(common.Data
func (b *BunTxAdapter) GetUnderlyingDB() interface{} {
return b.tx
}
func (b *BunTxAdapter) DriverName() string {
return b.driverName
}

View File

@@ -15,12 +15,16 @@ import (
// GormAdapter adapts GORM to work with our Database interface
type GormAdapter struct {
db *gorm.DB
db *gorm.DB
driverName string
}
// NewGormAdapter creates a new GORM adapter
func NewGormAdapter(db *gorm.DB) *GormAdapter {
return &GormAdapter{db: db}
adapter := &GormAdapter{db: db}
// Initialize driver name
adapter.driverName = adapter.DriverName()
return adapter
}
// EnableQueryDebug enables query debugging which logs all SQL queries including preloads
@@ -40,7 +44,7 @@ func (g *GormAdapter) DisableQueryDebug() *GormAdapter {
}
func (g *GormAdapter) NewSelect() common.SelectQuery {
return &GormSelectQuery{db: g.db}
return &GormSelectQuery{db: g.db, driverName: g.driverName}
}
func (g *GormAdapter) NewInsert() common.InsertQuery {
@@ -79,7 +83,7 @@ func (g *GormAdapter) BeginTx(ctx context.Context) (common.Database, error) {
if tx.Error != nil {
return nil, tx.Error
}
return &GormAdapter{db: tx}, nil
return &GormAdapter{db: tx, driverName: g.driverName}, nil
}
func (g *GormAdapter) CommitTx(ctx context.Context) error {
@@ -97,7 +101,7 @@ func (g *GormAdapter) RunInTransaction(ctx context.Context, fn func(common.Datab
}
}()
return g.db.WithContext(ctx).Transaction(func(tx *gorm.DB) error {
adapter := &GormAdapter{db: tx}
adapter := &GormAdapter{db: tx, driverName: g.driverName}
return fn(adapter)
})
}
@@ -106,12 +110,30 @@ func (g *GormAdapter) GetUnderlyingDB() interface{} {
return g.db
}
func (g *GormAdapter) DriverName() string {
if g.db.Dialector == nil {
return ""
}
// Normalize GORM's dialector name to match the project's canonical vocabulary.
// GORM returns "sqlserver" for MSSQL; the rest of the project uses "mssql".
// GORM returns "sqlite" or "sqlite3" for SQLite; we normalize to "sqlite".
switch name := g.db.Name(); name {
case "sqlserver":
return "mssql"
case "sqlite3":
return "sqlite"
default:
return name
}
}
// GormSelectQuery implements SelectQuery for GORM
type GormSelectQuery struct {
db *gorm.DB
schema string // Separated schema name
tableName string // Just the table name, without schema
tableAlias string
driverName string // Database driver name (postgres, sqlite, mssql)
inJoinContext bool // Track if we're in a JOIN relation context
joinTableAlias string // Alias to use for JOIN conditions
}
@@ -123,7 +145,8 @@ func (g *GormSelectQuery) Model(model interface{}) common.SelectQuery {
if provider, ok := model.(common.TableNameProvider); ok {
fullTableName := provider.TableName()
// Check if the table name contains schema (e.g., "schema.table")
g.schema, g.tableName = parseTableName(fullTableName)
// For SQLite, this will convert "schema.table" to "schema_table"
g.schema, g.tableName = parseTableName(fullTableName, g.driverName)
}
if provider, ok := model.(common.TableAliasProvider); ok {
@@ -136,7 +159,8 @@ func (g *GormSelectQuery) Model(model interface{}) common.SelectQuery {
func (g *GormSelectQuery) Table(table string) common.SelectQuery {
g.db = g.db.Table(table)
// Check if the table name contains schema (e.g., "schema.table")
g.schema, g.tableName = parseTableName(table)
// For SQLite, this will convert "schema.table" to "schema_table"
g.schema, g.tableName = parseTableName(table, g.driverName)
return g
}
@@ -322,7 +346,8 @@ func (g *GormSelectQuery) PreloadRelation(relation string, apply ...func(common.
}
wrapper := &GormSelectQuery{
db: db,
db: db,
driverName: g.driverName,
}
current := common.SelectQuery(wrapper)
@@ -360,6 +385,7 @@ func (g *GormSelectQuery) JoinRelation(relation string, apply ...func(common.Sel
wrapper := &GormSelectQuery{
db: db,
driverName: g.driverName,
inJoinContext: true, // Mark as JOIN context
joinTableAlias: strings.ToLower(relation), // Use relation name as alias
}

View File

@@ -16,12 +16,19 @@ import (
// PgSQLAdapter adapts standard database/sql to work with our Database interface
// This provides a lightweight PostgreSQL adapter without ORM overhead
type PgSQLAdapter struct {
db *sql.DB
db *sql.DB
driverName string
}
// NewPgSQLAdapter creates a new PostgreSQL adapter
func NewPgSQLAdapter(db *sql.DB) *PgSQLAdapter {
return &PgSQLAdapter{db: db}
// NewPgSQLAdapter creates a new adapter wrapping a standard sql.DB.
// An optional driverName (e.g. "postgres", "sqlite", "mssql") can be provided;
// it defaults to "postgres" when omitted.
func NewPgSQLAdapter(db *sql.DB, driverName ...string) *PgSQLAdapter {
name := "postgres"
if len(driverName) > 0 && driverName[0] != "" {
name = driverName[0]
}
return &PgSQLAdapter{db: db, driverName: name}
}
// EnableQueryDebug enables query debugging for development
@@ -31,22 +38,25 @@ func (p *PgSQLAdapter) EnableQueryDebug() {
func (p *PgSQLAdapter) NewSelect() common.SelectQuery {
return &PgSQLSelectQuery{
db: p.db,
columns: []string{"*"},
args: make([]interface{}, 0),
db: p.db,
driverName: p.driverName,
columns: []string{"*"},
args: make([]interface{}, 0),
}
}
func (p *PgSQLAdapter) NewInsert() common.InsertQuery {
return &PgSQLInsertQuery{
db: p.db,
values: make(map[string]interface{}),
db: p.db,
driverName: p.driverName,
values: make(map[string]interface{}),
}
}
func (p *PgSQLAdapter) NewUpdate() common.UpdateQuery {
return &PgSQLUpdateQuery{
db: p.db,
driverName: p.driverName,
sets: make(map[string]interface{}),
args: make([]interface{}, 0),
whereClauses: make([]string, 0),
@@ -56,6 +66,7 @@ func (p *PgSQLAdapter) NewUpdate() common.UpdateQuery {
func (p *PgSQLAdapter) NewDelete() common.DeleteQuery {
return &PgSQLDeleteQuery{
db: p.db,
driverName: p.driverName,
args: make([]interface{}, 0),
whereClauses: make([]string, 0),
}
@@ -98,7 +109,7 @@ func (p *PgSQLAdapter) BeginTx(ctx context.Context) (common.Database, error) {
if err != nil {
return nil, err
}
return &PgSQLTxAdapter{tx: tx}, nil
return &PgSQLTxAdapter{tx: tx, driverName: p.driverName}, nil
}
func (p *PgSQLAdapter) CommitTx(ctx context.Context) error {
@@ -121,7 +132,7 @@ func (p *PgSQLAdapter) RunInTransaction(ctx context.Context, fn func(common.Data
return err
}
adapter := &PgSQLTxAdapter{tx: tx}
adapter := &PgSQLTxAdapter{tx: tx, driverName: p.driverName}
defer func() {
if p := recover(); p != nil {
@@ -141,6 +152,10 @@ func (p *PgSQLAdapter) GetUnderlyingDB() interface{} {
return p.db
}
func (p *PgSQLAdapter) DriverName() string {
return p.driverName
}
// preloadConfig represents a relationship to be preloaded
type preloadConfig struct {
relation string
@@ -165,6 +180,7 @@ type PgSQLSelectQuery struct {
model interface{}
tableName string
tableAlias string
driverName string // Database driver name (postgres, sqlite, mssql)
columns []string
columnExprs []string
whereClauses []string
@@ -183,7 +199,9 @@ type PgSQLSelectQuery struct {
func (p *PgSQLSelectQuery) Model(model interface{}) common.SelectQuery {
p.model = model
if provider, ok := model.(common.TableNameProvider); ok {
p.tableName = provider.TableName()
fullTableName := provider.TableName()
// For SQLite, convert "schema.table" to "schema_table"
_, p.tableName = parseTableName(fullTableName, p.driverName)
}
if provider, ok := model.(common.TableAliasProvider); ok {
p.tableAlias = provider.TableAlias()
@@ -192,7 +210,8 @@ func (p *PgSQLSelectQuery) Model(model interface{}) common.SelectQuery {
}
func (p *PgSQLSelectQuery) Table(table string) common.SelectQuery {
p.tableName = table
// For SQLite, convert "schema.table" to "schema_table"
_, p.tableName = parseTableName(table, p.driverName)
return p
}
@@ -501,16 +520,19 @@ func (p *PgSQLSelectQuery) Exists(ctx context.Context) (exists bool, err error)
// PgSQLInsertQuery implements InsertQuery for PostgreSQL
type PgSQLInsertQuery struct {
db *sql.DB
tx *sql.Tx
tableName string
values map[string]interface{}
returning []string
db *sql.DB
tx *sql.Tx
tableName string
driverName string
values map[string]interface{}
returning []string
}
func (p *PgSQLInsertQuery) Model(model interface{}) common.InsertQuery {
if provider, ok := model.(common.TableNameProvider); ok {
p.tableName = provider.TableName()
fullTableName := provider.TableName()
// For SQLite, convert "schema.table" to "schema_table"
_, p.tableName = parseTableName(fullTableName, p.driverName)
}
// Extract values from model using reflection
// This is a simplified implementation
@@ -518,7 +540,8 @@ func (p *PgSQLInsertQuery) Model(model interface{}) common.InsertQuery {
}
func (p *PgSQLInsertQuery) Table(table string) common.InsertQuery {
p.tableName = table
// For SQLite, convert "schema.table" to "schema_table"
_, p.tableName = parseTableName(table, p.driverName)
return p
}
@@ -591,6 +614,7 @@ type PgSQLUpdateQuery struct {
db *sql.DB
tx *sql.Tx
tableName string
driverName string
model interface{}
sets map[string]interface{}
whereClauses []string
@@ -602,13 +626,16 @@ type PgSQLUpdateQuery struct {
func (p *PgSQLUpdateQuery) Model(model interface{}) common.UpdateQuery {
p.model = model
if provider, ok := model.(common.TableNameProvider); ok {
p.tableName = provider.TableName()
fullTableName := provider.TableName()
// For SQLite, convert "schema.table" to "schema_table"
_, p.tableName = parseTableName(fullTableName, p.driverName)
}
return p
}
func (p *PgSQLUpdateQuery) Table(table string) common.UpdateQuery {
p.tableName = table
// For SQLite, convert "schema.table" to "schema_table"
_, p.tableName = parseTableName(table, p.driverName)
if p.model == nil {
model, err := modelregistry.GetModelByName(table)
if err == nil {
@@ -749,6 +776,7 @@ type PgSQLDeleteQuery struct {
db *sql.DB
tx *sql.Tx
tableName string
driverName string
whereClauses []string
args []interface{}
paramCounter int
@@ -756,13 +784,16 @@ type PgSQLDeleteQuery struct {
func (p *PgSQLDeleteQuery) Model(model interface{}) common.DeleteQuery {
if provider, ok := model.(common.TableNameProvider); ok {
p.tableName = provider.TableName()
fullTableName := provider.TableName()
// For SQLite, convert "schema.table" to "schema_table"
_, p.tableName = parseTableName(fullTableName, p.driverName)
}
return p
}
func (p *PgSQLDeleteQuery) Table(table string) common.DeleteQuery {
p.tableName = table
// For SQLite, convert "schema.table" to "schema_table"
_, p.tableName = parseTableName(table, p.driverName)
return p
}
@@ -835,27 +866,31 @@ func (p *PgSQLResult) LastInsertId() (int64, error) {
// PgSQLTxAdapter wraps a PostgreSQL transaction
type PgSQLTxAdapter struct {
tx *sql.Tx
tx *sql.Tx
driverName string
}
func (p *PgSQLTxAdapter) NewSelect() common.SelectQuery {
return &PgSQLSelectQuery{
tx: p.tx,
columns: []string{"*"},
args: make([]interface{}, 0),
tx: p.tx,
driverName: p.driverName,
columns: []string{"*"},
args: make([]interface{}, 0),
}
}
func (p *PgSQLTxAdapter) NewInsert() common.InsertQuery {
return &PgSQLInsertQuery{
tx: p.tx,
values: make(map[string]interface{}),
tx: p.tx,
driverName: p.driverName,
values: make(map[string]interface{}),
}
}
func (p *PgSQLTxAdapter) NewUpdate() common.UpdateQuery {
return &PgSQLUpdateQuery{
tx: p.tx,
driverName: p.driverName,
sets: make(map[string]interface{}),
args: make([]interface{}, 0),
whereClauses: make([]string, 0),
@@ -865,6 +900,7 @@ func (p *PgSQLTxAdapter) NewUpdate() common.UpdateQuery {
func (p *PgSQLTxAdapter) NewDelete() common.DeleteQuery {
return &PgSQLDeleteQuery{
tx: p.tx,
driverName: p.driverName,
args: make([]interface{}, 0),
whereClauses: make([]string, 0),
}
@@ -912,6 +948,10 @@ func (p *PgSQLTxAdapter) GetUnderlyingDB() interface{} {
return p.tx
}
func (p *PgSQLTxAdapter) DriverName() string {
return p.driverName
}
// applyJoinPreloads adds JOINs for relationships that should use JOIN strategy
func (p *PgSQLSelectQuery) applyJoinPreloads() {
for _, preload := range p.preloads {
@@ -1036,9 +1076,9 @@ func (p *PgSQLSelectQuery) executePreloadQuery(ctx context.Context, field reflec
// Create a new select query for the related table
var db common.Database
if p.tx != nil {
db = &PgSQLTxAdapter{tx: p.tx}
db = &PgSQLTxAdapter{tx: p.tx, driverName: p.driverName}
} else {
db = &PgSQLAdapter{db: p.db}
db = &PgSQLAdapter{db: p.db, driverName: p.driverName}
}
query := db.NewSelect().

View File

@@ -62,9 +62,20 @@ func checkAliasLength(relation string) bool {
// For example: "public.users" -> ("public", "users")
//
// "users" -> ("", "users")
func parseTableName(fullTableName string) (schema, table string) {
//
// For SQLite, schema.table is translated to schema_table since SQLite doesn't support schemas
// in the same way as PostgreSQL/MSSQL
func parseTableName(fullTableName, driverName string) (schema, table string) {
if idx := strings.LastIndex(fullTableName, "."); idx != -1 {
return fullTableName[:idx], fullTableName[idx+1:]
schema = fullTableName[:idx]
table = fullTableName[idx+1:]
// For SQLite, convert schema.table to schema_table
if driverName == "sqlite" || driverName == "sqlite3" {
table = schema + "_" + table
schema = ""
}
return schema, table
}
return "", fullTableName
}
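A quick sketch of the expected translation behavior, written as a test (assuming it sits in the same package as `parseTableName`):

```go
package database

import "testing"

func TestParseTableNameSQLiteTranslation(t *testing.T) {
	// PostgreSQL/MSSQL keep the schema separate.
	if s, tbl := parseTableName("auth.users", "postgres"); s != "auth" || tbl != "users" {
		t.Fatalf("postgres: got (%q, %q)", s, tbl)
	}
	// SQLite folds the schema into the table name.
	if s, tbl := parseTableName("auth.users", "sqlite"); s != "" || tbl != "auth_users" {
		t.Fatalf("sqlite: got (%q, %q)", s, tbl)
	}
	// No schema prefix: nothing to translate.
	if s, tbl := parseTableName("users", "sqlite"); s != "" || tbl != "users" {
		t.Fatalf("sqlite, no schema: got (%q, %q)", s, tbl)
	}
}
```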

View File

@@ -30,6 +30,12 @@ type Database interface {
// For Bun, this returns *bun.DB
// This is useful for provider-specific features like PostgreSQL NOTIFY/LISTEN
GetUnderlyingDB() interface{}
// DriverName returns the canonical name of the underlying database driver.
// Possible values: "postgres", "sqlite", "mssql", "mysql".
// All adapters normalise vendor-specific strings (e.g. Bun's "pg", GORM's
// "sqlserver") to the values above before returning.
DriverName() string
}
// SelectQuery interface for building SELECT queries (compatible with both GORM and Bun)
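A sketch of the caller-side branching this makes possible (hypothetical helper; the import path is assumed from the repository name):

```go
package example

import (
	"fmt"

	"github.com/bitechdev/ResolveSpec/pkg/common"
)

// placeholderFor picks a bind-parameter style from the canonical
// driver name, relying on the normalization guaranteed above.
func placeholderFor(db common.Database, n int) string {
	switch db.DriverName() {
	case "postgres":
		return fmt.Sprintf("$%d", n) // $1, $2, ...
	case "mssql":
		return fmt.Sprintf("@p%d", n) // @p1, @p2, ...
	default:
		return "?" // sqlite and mysql use positional ?
	}
}
```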

View File

@@ -50,6 +50,9 @@ func (m *mockDatabase) RollbackTx(ctx context.Context) error {
func (m *mockDatabase) GetUnderlyingDB() interface{} {
return nil
}
func (m *mockDatabase) DriverName() string {
return "postgres"
}
// Mock SelectQuery
type mockSelectQuery struct{}

View File

@@ -11,6 +11,7 @@ A comprehensive database connection manager for Go that provides centralized man
- **GORM** - Popular Go ORM
- **Native** - Standard library `*sql.DB`
- All three share the same underlying connection pool
- **SQLite Schema Translation**: Automatic conversion of `schema.table` to `schema_table` for SQLite compatibility
- **Configuration-Driven**: YAML configuration with Viper integration
- **Production-Ready Features**:
- Automatic health checks and reconnection
@@ -179,6 +180,35 @@ if err != nil {
rows, err := nativeDB.QueryContext(ctx, "SELECT * FROM users WHERE active = $1", true)
```
#### Cross-Database Example with SQLite
```go
// Same model works across all databases
type User struct {
ID int `bun:"id,pk"`
Username string `bun:"username"`
Email string `bun:"email"`
}
func (User) TableName() string {
return "auth.users"
}
// PostgreSQL connection
pgConn, _ := mgr.Get("primary")
pgDB, _ := pgConn.Bun()
var pgUsers []User
pgDB.NewSelect().Model(&pgUsers).Scan(ctx)
// Executes: SELECT * FROM auth.users
// SQLite connection
sqliteConn, _ := mgr.Get("cache-db")
sqliteDB, _ := sqliteConn.Bun()
var sqliteUsers []User
sqliteDB.NewSelect().Model(&sqliteUsers).Scan(ctx)
// Executes: SELECT * FROM auth_users (schema.table → schema_table)
```
#### Use MongoDB
```go
@@ -368,6 +398,37 @@ Providers handle:
- Connection statistics
- Connection cleanup
### SQLite Schema Handling
SQLite doesn't support schemas in the same way as PostgreSQL or MSSQL. To ensure compatibility when using models designed for multi-schema databases:
**Automatic Translation**: When a table name contains a schema prefix (e.g., `myschema.mytable`), it is automatically converted to `myschema_mytable` for SQLite databases.
```go
// Model definition (works across all databases)
func (User) TableName() string {
return "auth.users" // PostgreSQL/MSSQL: "auth"."users"
// SQLite: "auth_users"
}
// Query execution
db.NewSelect().Model(&User{}).Scan(ctx)
// PostgreSQL/MSSQL: SELECT * FROM auth.users
// SQLite: SELECT * FROM auth_users
```
**How it Works**:
- Bun, GORM, and Native adapters detect the driver type
- `parseTableName()` automatically translates schema.table → schema_table for SQLite
- Translation happens transparently in all database operations (SELECT, INSERT, UPDATE, DELETE)
- Preload and relation queries are also handled automatically
**Benefits**:
- Write database-agnostic code
- Use the same models across PostgreSQL, MSSQL, and SQLite
- No conditional logic needed in your application
- Schema separation maintained through naming convention in SQLite
## Best Practices
1. **Use Named Connections**: Be explicit about which database you're accessing

View File

@@ -467,13 +467,11 @@ func (c *sqlConnection) getNativeAdapter() (common.Database, error) {
// Create a native adapter based on database type
switch c.dbType {
case DatabaseTypePostgreSQL:
c.nativeAdapter = database.NewPgSQLAdapter(c.nativeDB)
c.nativeAdapter = database.NewPgSQLAdapter(c.nativeDB, string(c.dbType))
case DatabaseTypeSQLite:
// For SQLite, we'll use the PgSQL adapter as it works with standard sql.DB
c.nativeAdapter = database.NewPgSQLAdapter(c.nativeDB)
c.nativeAdapter = database.NewPgSQLAdapter(c.nativeDB, string(c.dbType))
case DatabaseTypeMSSQL:
// For MSSQL, we'll use the PgSQL adapter as it works with standard sql.DB
c.nativeAdapter = database.NewPgSQLAdapter(c.nativeDB)
c.nativeAdapter = database.NewPgSQLAdapter(c.nativeDB, string(c.dbType))
default:
return nil, ErrUnsupportedDatabase
}

View File

@@ -231,12 +231,14 @@ func (m *connectionManager) Connect(ctx context.Context) error {
// Close closes all database connections
func (m *connectionManager) Close() error {
// Stop the health checker before taking mu. performHealthCheck acquires
// a read lock, so waiting for the goroutine while holding the write lock
// would deadlock.
m.stopHealthChecker()
m.mu.Lock()
defer m.mu.Unlock()
// Stop health checker
m.stopHealthChecker()
// Close all connections
var errors []error
for name, conn := range m.connections {
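The comment in this hunk describes a classic shutdown deadlock; a stripped-down sketch of the pattern being avoided (illustrative names, not the actual types):

```go
package example

import (
	"sync"
	"time"
)

type manager struct {
	mu   sync.RWMutex
	done chan struct{}
	wg   sync.WaitGroup
}

func (m *manager) healthLoop() {
	defer m.wg.Done()
	for {
		select {
		case <-m.done:
			return
		case <-time.After(time.Second):
			m.mu.RLock() // blocks while Close holds the write lock
			// ... perform health check ...
			m.mu.RUnlock()
		}
	}
}

// Close signals and waits for the health checker BEFORE taking the
// write lock; doing it the other way around deadlocks, because the
// loop can be stuck in RLock and never observe done.
func (m *manager) Close() {
	close(m.done)
	m.wg.Wait()
	m.mu.Lock()
	defer m.mu.Unlock()
	// ... close connections ...
}
```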

View File

@@ -74,6 +74,10 @@ func (m *MockDatabase) GetUnderlyingDB() interface{} {
return m
}
func (m *MockDatabase) DriverName() string {
return "postgres"
}
// MockResult implements common.Result interface for testing
type MockResult struct {
rows int64

View File

@@ -645,11 +645,14 @@ func (h *Handler) getNotifyTopic(clientID, subscriptionID string) string {
// Database operation helpers (adapted from websocketspec)
func (h *Handler) getTableName(schema, entity string, model interface{}) string {
// Use entity as table name
tableName := entity
if schema != "" {
tableName = schema + "." + tableName
if h.db.DriverName() == "sqlite" {
tableName = schema + "_" + tableName
} else {
tableName = schema + "." + tableName
}
}
return tableName
}

pkg/resolvespec/EXAMPLES.md (new file, 572 lines)
View File

@@ -0,0 +1,572 @@
# ResolveSpec Query Features Examples
This document provides examples of using the advanced query features in ResolveSpec, including OR logic filters, Custom Operators, and FetchRowNumber.
## OR Logic in Filters (SearchOr)
### Basic OR Filter Example
Find all users with status "active" OR "pending":
```json
POST /users
{
"operation": "read",
"options": {
"filters": [
{
"column": "status",
"operator": "eq",
"value": "active"
},
{
"column": "status",
"operator": "eq",
"value": "pending",
"logic_operator": "OR"
}
]
}
}
```
### Combined AND/OR Filters
Find users with (status="active" OR status="pending") AND age >= 18:
```json
{
"operation": "read",
"options": {
"filters": [
{
"column": "status",
"operator": "eq",
"value": "active"
},
{
"column": "status",
"operator": "eq",
"value": "pending",
"logic_operator": "OR"
},
{
"column": "age",
"operator": "gte",
"value": 18
}
]
}
}
```
**SQL Generated:** `WHERE (status = 'active' OR status = 'pending') AND age >= 18`
**Important Notes:**
- By default, filters use AND logic
- Consecutive filters with `"logic_operator": "OR"` are automatically grouped with parentheses
- This grouping ensures OR conditions don't interfere with AND conditions
- You don't need to specify `"logic_operator": "AND"` as it's the default
### Multiple OR Groups
You can have multiple separate OR groups:
```json
{
"operation": "read",
"options": {
"filters": [
{
"column": "status",
"operator": "eq",
"value": "active"
},
{
"column": "status",
"operator": "eq",
"value": "pending",
"logic_operator": "OR"
},
{
"column": "priority",
"operator": "eq",
"value": "high"
},
{
"column": "priority",
"operator": "eq",
"value": "urgent",
"logic_operator": "OR"
}
]
}
}
```
**SQL Generated:** `WHERE (status = 'active' OR status = 'pending') AND (priority = 'high' OR priority = 'urgent')`
## Custom Operators
### Simple Custom SQL Condition
Filter by email domain using custom SQL:
```json
{
"operation": "read",
"options": {
"customOperators": [
{
"name": "company_emails",
"sql": "email LIKE '%@company.com'"
}
]
}
}
```
### Multiple Custom Operators
Combine multiple custom SQL conditions:
```json
{
"operation": "read",
"options": {
"customOperators": [
{
"name": "recent_active",
"sql": "last_login > NOW() - INTERVAL '30 days'"
},
{
"name": "high_score",
"sql": "score > 1000"
}
]
}
}
```
### Complex Custom Operator
Use complex SQL expressions:
```json
{
"operation": "read",
"options": {
"customOperators": [
{
"name": "priority_users",
"sql": "(subscription = 'premium' AND points > 500) OR (subscription = 'enterprise')"
}
]
}
}
```
### Combining Custom Operators with Regular Filters
Mix custom operators with standard filters:
```json
{
"operation": "read",
"options": {
"filters": [
{
"column": "country",
"operator": "eq",
"value": "USA"
}
],
"customOperators": [
{
"name": "active_last_month",
"sql": "last_activity > NOW() - INTERVAL '1 month'"
}
]
}
}
```
## Row Numbers
### Two Ways to Get Row Numbers
There are two different features for row numbers:
1. **`fetch_row_number`** - Get the position of ONE specific record in a sorted/filtered set
2. **`RowNumber` field in models** - Automatically number all records in the response
### 1. FetchRowNumber - Get Position of Specific Record
Get the rank/position of a specific user in a leaderboard. **Important:** When `fetch_row_number` is specified, the response contains **ONLY that specific record**, not all records.
```json
{
"operation": "read",
"options": {
"sort": [
{
"column": "score",
"direction": "desc"
}
],
"fetch_row_number": "12345"
}
}
```
**Response - Contains ONLY the specified user:**
```json
{
"success": true,
"data": {
"id": 12345,
"name": "Alice Smith",
"score": 9850,
"level": 42
},
"metadata": {
"total": 10000,
"count": 1,
"filtered": 10000,
"row_number": 42
}
}
```
**Result:** User "12345" is ranked #42 out of 10,000 users. The response includes only Alice's data, not the other 9,999 users.
### Row Number with Filters
Find position within a filtered subset (e.g., "What's my rank in my country?"):
```json
{
"operation": "read",
"options": {
"filters": [
{
"column": "country",
"operator": "eq",
"value": "USA"
},
{
"column": "status",
"operator": "eq",
"value": "active"
}
],
"sort": [
{
"column": "score",
"direction": "desc"
}
],
"fetch_row_number": "12345"
}
}
```
**Response:**
```json
{
"success": true,
"data": {
"id": 12345,
"name": "Bob Johnson",
"country": "USA",
"score": 7200,
"status": "active"
},
"metadata": {
"total": 2500,
"count": 1,
"filtered": 2500,
"row_number": 156
}
}
```
**Result:** Bob is ranked #156 out of 2,500 active USA users. Only Bob's record is returned.
### 2. RowNumber Field - Auto-Number All Records
If your model has a `RowNumber int64` field, the handler will automatically populate it for paginated results.
**Model Definition:**
```go
type Player struct {
ID int64 `json:"id"`
Name string `json:"name"`
Score int64 `json:"score"`
RowNumber int64 `json:"row_number"` // Will be auto-populated
}
```
**Request (with pagination):**
```json
{
"operation": "read",
"options": {
"sort": [{"column": "score", "direction": "desc"}],
"limit": 10,
"offset": 20
}
}
```
**Response - RowNumber automatically set:**
```json
{
"success": true,
"data": [
{
"id": 456,
"name": "Player21",
"score": 8900,
"row_number": 21
},
{
"id": 789,
"name": "Player22",
"score": 8850,
"row_number": 22
},
{
"id": 123,
"name": "Player23",
"score": 8800,
"row_number": 23
}
// ... records 24-30 ...
]
}
```
**How It Works:**
- `row_number = offset + index + 1` (1-based)
- With offset=20, first record gets row_number=21
- With offset=20, second record gets row_number=22
- Perfect for displaying "Rank" in paginated tables
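A minimal sketch of that numbering step (the real implementation sets the field via reflection on arbitrary models; a concrete slice is used here for clarity):

```go
// numberRows applies row_number = offset + index + 1 to each record.
func numberRows(players []Player, offset int64) {
	for i := range players {
		players[i].RowNumber = offset + int64(i) + 1
	}
}

// With offset=20: players[0].RowNumber == 21, players[1].RowNumber == 22, ...
```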
**Use Case:** Displaying leaderboards with rank numbers:
```
Rank | Player | Score
-----|-----------|-------
21 | Player21 | 8900
22 | Player22 | 8850
23 | Player23 | 8800
```
**Note:** This feature is available in all three packages: resolvespec, restheadspec, and websocketspec.
### When to Use Each Feature
| Feature | Use Case | Returns | Performance |
|---------|----------|---------|-------------|
| `fetch_row_number` | "What's my rank?" | 1 record with position | Fast - 1 record |
| `RowNumber` field | "Show top 10 with ranks" | Many records numbered | Fast - simple math |
**Combined Example - Full Leaderboard UI:**
```javascript
// Request 1: Get current user's rank
const userRank = await api.read({
fetch_row_number: currentUserId,
sort: [{column: "score", direction: "desc"}]
});
// Returns: {id: 123, name: "You", score: 7500, row_number: 156}
// Request 2: Get top 10 with rank numbers
const top10 = await api.read({
sort: [{column: "score", direction: "desc"}],
limit: 10,
offset: 0
});
// Returns: [{row_number: 1, ...}, {row_number: 2, ...}, ...]
// Display:
// "Your Rank: #156"
// "Top Players:"
// "#1 - Alice - 9999"
// "#2 - Bob - 9876"
// ...
```
## Complete Example: Advanced Query
Combine all features for a complex query:
```json
{
"operation": "read",
"options": {
"columns": ["id", "name", "email", "score", "status"],
"filters": [
{
"column": "status",
"operator": "eq",
"value": "active"
},
{
"column": "status",
"operator": "eq",
"value": "trial",
"logic_operator": "OR"
},
{
"column": "score",
"operator": "gte",
"value": 100
}
],
"customOperators": [
{
"name": "recent_activity",
"sql": "last_login > NOW() - INTERVAL '7 days'"
},
{
"name": "verified_email",
"sql": "email_verified = true"
}
],
"sort": [
{
"column": "score",
"direction": "desc"
},
{
"column": "created_at",
"direction": "asc"
}
],
"fetch_row_number": "12345",
"limit": 50,
"offset": 0
}
}
```
This query:
- Selects specific columns
- Filters for users with status "active" OR "trial"
- AND score >= 100
- Applies custom SQL conditions for recent activity and verified emails
- Sorts by score (descending) then creation date (ascending)
- Returns the row number of user "12345" in this filtered/sorted set
- Returns only that record, since `fetch_row_number` is set (without it, the limit/offset would return the first 50 records)
## Use Cases
### 1. Leaderboards - Get Current User's Rank
Get the current user's position and data (returns only their record):
```json
{
"operation": "read",
"options": {
"filters": [
{
"column": "game_id",
"operator": "eq",
"value": "game123"
}
],
"sort": [
{
"column": "score",
"direction": "desc"
}
],
"fetch_row_number": "current_user_id"
}
}
```
**Tip:** For full leaderboards, make two requests:
1. One with `fetch_row_number` to get user's rank
2. One with `limit` and `offset` to get top players list
### 2. Multi-Status Search
```json
{
"operation": "read",
"options": {
"filters": [
{
"column": "order_status",
"operator": "eq",
"value": "pending"
},
{
"column": "order_status",
"operator": "eq",
"value": "processing",
"logic_operator": "OR"
},
{
"column": "order_status",
"operator": "eq",
"value": "shipped",
"logic_operator": "OR"
}
]
}
}
```
### 3. Advanced Date Filtering
```json
{
"operation": "read",
"options": {
"customOperators": [
{
"name": "this_month",
"sql": "created_at >= DATE_TRUNC('month', CURRENT_DATE)"
},
{
"name": "business_hours",
"sql": "EXTRACT(HOUR FROM created_at) BETWEEN 9 AND 17"
}
]
}
}
```
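Custom operators are ANDed with each other and with any regular filters, so this yields (conceptually): `WHERE created_at >= DATE_TRUNC('month', CURRENT_DATE) AND EXTRACT(HOUR FROM created_at) BETWEEN 9 AND 17`.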
## Security Considerations
**Warning:** Custom operators allow raw SQL, which can be a security risk if not properly handled:
1. **Never** directly interpolate user input into custom operator SQL
2. Always validate and sanitize custom operator SQL on the backend
3. Consider using a whitelist of allowed custom operators
4. Use prepared statements or parameterized queries when possible
5. Implement proper authorization checks before executing queries
Example of safe custom operator handling in Go:
```go
// Whitelist of allowed custom operators
allowedOperators := map[string]string{
"recent_week": "created_at > NOW() - INTERVAL '7 days'",
"active_users": "status = 'active' AND last_login > NOW() - INTERVAL '30 days'",
"premium_only": "subscription_level = 'premium'",
}
// Validate custom operators from request
for i, op := range req.Options.CustomOperators {
sql, ok := allowedOperators[op.Name]
if !ok {
return errors.New("custom operator not allowed: " + op.Name)
}
// Write back via the index: assigning to the loop variable `op`
// would only update a copy, and the whitelisted SQL would be lost.
req.Options.CustomOperators[i].SQL = sql
}
```
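With this pattern, request input only ever selects a key in the whitelist map; the SQL text itself is fixed server-side, so no user-supplied string is ever interpolated into the query.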


@@ -214,6 +214,146 @@ Content-Type: application/json
```json
{
"filters": [
{
"column": "status",
"operator": "eq",
"value": "active"
},
{
"column": "status",
"operator": "eq",
"value": "pending",
"logic_operator": "OR"
},
{
"column": "age",
"operator": "gte",
"value": 18
}
]
}
```
Produces: `WHERE (status = 'active' OR status = 'pending') AND age >= 18`
This grouping ensures OR conditions don't interfere with other AND conditions in the query.
### Custom Operators
Add custom SQL conditions when needed:
```json
{
"operation": "read",
"options": {
"customOperators": [
{
"name": "email_domain_filter",
"sql": "LOWER(email) LIKE '%@example.com'"
},
{
"name": "recent_records",
"sql": "created_at > NOW() - INTERVAL '7 days'"
}
]
}
}
```
Custom operators are applied as additional WHERE conditions to your query.
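Each custom operator is appended as its own AND'ed WHERE condition, so the example above narrows the result to rows matching both `LOWER(email) LIKE '%@example.com'` and `created_at > NOW() - INTERVAL '7 days'`.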
### Fetch Row Number
Get the row number (position) of a specific record in the filtered and sorted result set. **When `fetch_row_number` is specified, only that specific record is returned** (not all records).
```json
{
"operation": "read",
"options": {
"filters": [
{
"column": "status",
"operator": "eq",
"value": "active"
}
],
"sort": [
{
"column": "score",
"direction": "desc"
}
],
"fetch_row_number": "12345"
}
}
```
**Response - Returns ONLY the specified record with its position:**
```json
{
"success": true,
"data": {
"id": 12345,
"name": "John Doe",
"score": 850,
"status": "active"
},
"metadata": {
"total": 1000,
"count": 1,
"filtered": 1000,
"row_number": 42
}
}
```
**Use Case:** Perfect for "Show me this user and their ranking" - you get just that one user with their position in the leaderboard.
**Note:** This is different from the `RowNumber` field feature, which automatically numbers all records in a paginated response based on offset. That feature uses simple math (`offset + index + 1`), while `fetch_row_number` uses SQL window functions to calculate the actual position in a sorted/filtered set. To use the `RowNumber` field feature, simply add a `RowNumber int64` field to your model - it will be automatically populated with the row position based on pagination.
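For illustration, a minimal model sketch (the `Player` type and bun tags are hypothetical; the only requirement is a `RowNumber int64` field):
```go
// Hypothetical example model: any struct qualifies as long as it
// declares a RowNumber int64 field for ResolveSpec to populate.
type Player struct {
	ID        int64  `bun:"id,pk"`
	Name      string `bun:"name"`
	Score     int64  `bun:"score"`
	RowNumber int64  `bun:"-"` // not a database column; set to offset + index + 1
}
```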
## Preloading
Load related entities with custom configuration:
```json
{
"operation": "read",
"options": {
"columns": ["id", "name", "email"],
"preload": [
{
"relation": "posts",
"columns": ["id", "title", "created_at"],
"filters": [
{
"column": "status",
"operator": "eq",
"value": "published"
}
],
"sort": [
{
"column": "created_at",
"direction": "desc"
}
],
"limit": 5
},
{
"relation": "profile",
"columns": ["bio", "website"]
}
]
}
}
```
## Cursor Pagination
Efficient pagination for large datasets:
### First Request (No Cursor)
```json
@@ -427,7 +567,7 @@ Define virtual columns using SQL expressions:
// Check permissions
if !userHasPermission(ctx.Context, ctx.Entity) {
return fmt.Errorf("unauthorized access to %s", ctx.Entity)
return nil
}
// Modify query options
if ctx.Options.Limit == nil || *ctx.Options.Limit > 100 {
@@ -435,17 +575,24 @@ Add custom SQL conditions when needed:
}
return nil
})
// Register an after-read hook (e.g., for data transformation)
handler.Hooks().Register(resolvespec.AfterRead, func(ctx *resolvespec.HookContext) error {
// Transform or filter results
if users, ok := ctx.Result.([]User); ok {
for i := range users {
users[i].Email = maskEmail(users[i].Email)
}
}
return nil
})
// Register a before-create hook (e.g., for validation)
handler.Hooks().Register(resolvespec.BeforeCreate, func(ctx *resolvespec.HookContext) error {
// Validate data
if user, ok := ctx.Data.(*User); ok {
if user.Email == "" {
return fmt.Errorf("email is required")
}
// Add timestamps


@@ -0,0 +1,143 @@
package resolvespec
import (
"strings"
"testing"
"github.com/bitechdev/ResolveSpec/pkg/common"
)
// TestBuildFilterCondition tests the filter condition builder
func TestBuildFilterCondition(t *testing.T) {
h := &Handler{}
tests := []struct {
name string
filter common.FilterOption
expectedCondition string
expectedArgsCount int
}{
{
name: "Equal operator",
filter: common.FilterOption{
Column: "status",
Operator: "eq",
Value: "active",
},
expectedCondition: "status = ?",
expectedArgsCount: 1,
},
{
name: "Greater than operator",
filter: common.FilterOption{
Column: "age",
Operator: "gt",
Value: 18,
},
expectedCondition: "age > ?",
expectedArgsCount: 1,
},
{
name: "IN operator",
filter: common.FilterOption{
Column: "status",
Operator: "in",
Value: []string{"active", "pending"},
},
expectedCondition: "status IN (?)",
expectedArgsCount: 1,
},
{
name: "LIKE operator",
filter: common.FilterOption{
Column: "email",
Operator: "like",
Value: "%@example.com",
},
expectedCondition: "email LIKE ?",
expectedArgsCount: 1,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
condition, args := h.buildFilterCondition(tt.filter)
if condition != tt.expectedCondition {
t.Errorf("Expected condition '%s', got '%s'", tt.expectedCondition, condition)
}
if len(args) != tt.expectedArgsCount {
t.Errorf("Expected %d args, got %d", tt.expectedArgsCount, len(args))
}
// Note: Skip value comparison for slices as they can't be compared with ==
// The important part is that args are populated correctly
})
}
}
// TestORGrouping tests that consecutive OR filters are properly grouped
func TestORGrouping(t *testing.T) {
// This is a conceptual test - in practice we'd need a mock SelectQuery
// to verify the actual SQL grouping behavior
t.Run("Consecutive OR filters should be grouped", func(t *testing.T) {
filters := []common.FilterOption{
{Column: "status", Operator: "eq", Value: "active"},
{Column: "status", Operator: "eq", Value: "pending", LogicOperator: "OR"},
{Column: "status", Operator: "eq", Value: "trial", LogicOperator: "OR"},
{Column: "age", Operator: "gte", Value: 18},
}
// Expected behavior: (status='active' OR status='pending' OR status='trial') AND age>=18
// The first three filters should be grouped together
// The fourth filter should be separate with AND
// Count OR groups
orGroupCount := 0
inORGroup := false
for i := 1; i < len(filters); i++ {
if strings.EqualFold(filters[i].LogicOperator, "OR") && !inORGroup {
orGroupCount++
inORGroup = true
} else if !strings.EqualFold(filters[i].LogicOperator, "OR") {
inORGroup = false
}
}
// We should have detected one OR group
if orGroupCount != 1 {
t.Errorf("Expected 1 OR group, detected %d", orGroupCount)
}
})
t.Run("Multiple OR groups should be handled correctly", func(t *testing.T) {
filters := []common.FilterOption{
{Column: "status", Operator: "eq", Value: "active"},
{Column: "status", Operator: "eq", Value: "pending", LogicOperator: "OR"},
{Column: "priority", Operator: "eq", Value: "high"},
{Column: "priority", Operator: "eq", Value: "urgent", LogicOperator: "OR"},
}
// Expected: (status='active' OR status='pending') AND (priority='high' OR priority='urgent')
// Should have two OR groups
orGroupCount := 0
inORGroup := false
for i := 1; i < len(filters); i++ {
if strings.EqualFold(filters[i].LogicOperator, "OR") && !inORGroup {
orGroupCount++
inORGroup = true
} else if !strings.EqualFold(filters[i].LogicOperator, "OR") {
inORGroup = false
}
}
// We should have detected two OR groups
if orGroupCount != 2 {
t.Errorf("Expected 2 OR groups, detected %d", orGroupCount)
}
})
}


@@ -280,10 +280,13 @@ func (h *Handler) handleRead(ctx context.Context, w common.ResponseWriter, id st
}
}
// Apply filters
for _, filter := range options.Filters {
logger.Debug("Applying filter: %s %s %v", filter.Column, filter.Operator, filter.Value)
query = h.applyFilter(query, filter)
// Apply filters with proper grouping for OR logic
query = h.applyFilters(query, options.Filters)
// Apply custom operators
for _, customOp := range options.CustomOperators {
logger.Debug("Applying custom operator: %s - %s", customOp.Name, customOp.SQL)
query = query.Where(customOp.SQL)
}
// Apply sorting
@@ -381,24 +384,94 @@ func (h *Handler) handleRead(ctx context.Context, w common.ResponseWriter, id st
}
}
// Apply pagination
if options.Limit != nil && *options.Limit > 0 {
logger.Debug("Applying limit: %d", *options.Limit)
query = query.Limit(*options.Limit)
// Handle FetchRowNumber if requested
var rowNumber *int64
if options.FetchRowNumber != nil && *options.FetchRowNumber != "" {
logger.Debug("Fetching row number for ID: %s", *options.FetchRowNumber)
pkName := reflection.GetPrimaryKeyName(model)
// Build ROW_NUMBER window function SQL
rowNumberSQL := "ROW_NUMBER() OVER ("
if len(options.Sort) > 0 {
rowNumberSQL += "ORDER BY "
for i, sort := range options.Sort {
if i > 0 {
rowNumberSQL += ", "
}
direction := "ASC"
if strings.EqualFold(sort.Direction, "desc") {
direction = "DESC"
}
rowNumberSQL += fmt.Sprintf("%s %s", sort.Column, direction)
}
}
rowNumberSQL += ")"
// Create a query to fetch the row number using a subquery approach
// We'll select the PK and row_number, then filter by the target ID
type RowNumResult struct {
RowNum int64 `bun:"row_num"`
}
rowNumQuery := h.db.NewSelect().Table(tableName).
ColumnExpr(fmt.Sprintf("%s AS row_num", rowNumberSQL)).
Column(pkName)
// Apply the same filters as the main query
for _, filter := range options.Filters {
rowNumQuery = h.applyFilter(rowNumQuery, filter)
}
// Apply custom operators
for _, customOp := range options.CustomOperators {
rowNumQuery = rowNumQuery.Where(customOp.SQL)
}
// Filter for the specific ID we want the row number for
rowNumQuery = rowNumQuery.Where(fmt.Sprintf("%s = ?", common.QuoteIdent(pkName)), *options.FetchRowNumber)
// Execute query to get row number
var result RowNumResult
if err := rowNumQuery.Scan(ctx, &result); err != nil {
if err != sql.ErrNoRows {
logger.Warn("Error fetching row number: %v", err)
}
} else {
rowNumber = &result.RowNum
logger.Debug("Found row number: %d", *rowNumber)
}
}
if options.Offset != nil && *options.Offset > 0 {
logger.Debug("Applying offset: %d", *options.Offset)
query = query.Offset(*options.Offset)
// Apply pagination (skip if FetchRowNumber is set - we want only that record)
if options.FetchRowNumber == nil || *options.FetchRowNumber == "" {
if options.Limit != nil && *options.Limit > 0 {
logger.Debug("Applying limit: %d", *options.Limit)
query = query.Limit(*options.Limit)
}
if options.Offset != nil && *options.Offset > 0 {
logger.Debug("Applying offset: %d", *options.Offset)
query = query.Offset(*options.Offset)
}
}
// Execute query
var result interface{}
if id != "" {
logger.Debug("Querying single record with ID: %s", id)
if id != "" || (options.FetchRowNumber != nil && *options.FetchRowNumber != "") {
// Single record query - either by URL ID or FetchRowNumber
var targetID string
if id != "" {
targetID = id
logger.Debug("Querying single record with URL ID: %s", id)
} else {
targetID = *options.FetchRowNumber
logger.Debug("Querying single record with FetchRowNumber ID: %s", targetID)
}
// For single record, create a new pointer to the struct type
singleResult := reflect.New(modelType).Interface()
pkName := reflection.GetPrimaryKeyName(singleResult)
query = query.Where(fmt.Sprintf("%s = ?", common.QuoteIdent(reflection.GetPrimaryKeyName(singleResult))), id)
query = query.Where(fmt.Sprintf("%s = ?", common.QuoteIdent(pkName)), targetID)
if err := query.Scan(ctx, singleResult); err != nil {
logger.Error("Error querying record: %v", err)
h.sendError(w, http.StatusInternalServerError, "query_error", "Error executing query", err)
@@ -418,20 +491,35 @@ func (h *Handler) handleRead(ctx context.Context, w common.ResponseWriter, id st
logger.Info("Successfully retrieved records")
// Build metadata
limit := 0
if options.Limit != nil {
limit = *options.Limit
}
offset := 0
if options.Offset != nil {
offset = *options.Offset
count := int64(total)
// When FetchRowNumber is used, we only return 1 record
if options.FetchRowNumber != nil && *options.FetchRowNumber != "" {
count = 1
// Don't use limit/offset when fetching specific record
} else {
if options.Limit != nil {
limit = *options.Limit
}
if options.Offset != nil {
offset = *options.Offset
}
// Set row numbers on records if RowNumber field exists
// Only for multiple records (not when fetching single record)
h.setRowNumbersOnRecords(result, offset)
}
h.sendResponse(w, result, &common.Metadata{
Total: int64(total),
Filtered: int64(total),
Limit: limit,
Offset: offset,
Total: int64(total),
Filtered: int64(total),
Count: count,
Limit: limit,
Offset: offset,
RowNumber: rowNumber,
})
}
@@ -1303,29 +1391,161 @@ func (h *Handler) handleDelete(ctx context.Context, w common.ResponseWriter, id
h.sendResponse(w, recordToDelete, nil)
}
func (h *Handler) applyFilter(query common.SelectQuery, filter common.FilterOption) common.SelectQuery {
// applyFilters applies all filters with proper grouping for OR logic
// Groups consecutive OR filters together to ensure proper query precedence
// Example: [A, B(OR), C(OR), D(AND)] => WHERE (A OR B OR C) AND D
func (h *Handler) applyFilters(query common.SelectQuery, filters []common.FilterOption) common.SelectQuery {
if len(filters) == 0 {
return query
}
i := 0
for i < len(filters) {
// Check if this starts an OR group (current or next filter has OR logic)
startORGroup := i+1 < len(filters) && strings.EqualFold(filters[i+1].LogicOperator, "OR")
if startORGroup {
// Collect all consecutive filters that are OR'd together
orGroup := []common.FilterOption{filters[i]}
j := i + 1
for j < len(filters) && strings.EqualFold(filters[j].LogicOperator, "OR") {
orGroup = append(orGroup, filters[j])
j++
}
// Apply the OR group as a single grouped WHERE clause
query = h.applyFilterGroup(query, orGroup)
i = j
} else {
// Single filter with AND logic (or first filter)
condition, args := h.buildFilterCondition(filters[i])
if condition != "" {
query = query.Where(condition, args...)
}
i++
}
}
return query
}
// applyFilterGroup applies a group of filters that should be OR'd together
// Always wraps them in parentheses and applies as a single WHERE clause
func (h *Handler) applyFilterGroup(query common.SelectQuery, filters []common.FilterOption) common.SelectQuery {
if len(filters) == 0 {
return query
}
// Build all conditions and collect args
var conditions []string
var args []interface{}
for _, filter := range filters {
condition, filterArgs := h.buildFilterCondition(filter)
if condition != "" {
conditions = append(conditions, condition)
args = append(args, filterArgs...)
}
}
if len(conditions) == 0 {
return query
}
// Single filter - no need for grouping
if len(conditions) == 1 {
return query.Where(conditions[0], args...)
}
// Multiple conditions - group with parentheses and OR
groupedCondition := "(" + strings.Join(conditions, " OR ") + ")"
return query.Where(groupedCondition, args...)
}
// buildFilterCondition builds a filter condition and returns it with args
func (h *Handler) buildFilterCondition(filter common.FilterOption) (conditionString string, conditionArgs []interface{}) {
var condition string
var args []interface{}
switch filter.Operator {
case "eq":
return query.Where(fmt.Sprintf("%s = ?", filter.Column), filter.Value)
condition = fmt.Sprintf("%s = ?", filter.Column)
args = []interface{}{filter.Value}
case "neq":
return query.Where(fmt.Sprintf("%s != ?", filter.Column), filter.Value)
condition = fmt.Sprintf("%s != ?", filter.Column)
args = []interface{}{filter.Value}
case "gt":
return query.Where(fmt.Sprintf("%s > ?", filter.Column), filter.Value)
condition = fmt.Sprintf("%s > ?", filter.Column)
args = []interface{}{filter.Value}
case "gte":
return query.Where(fmt.Sprintf("%s >= ?", filter.Column), filter.Value)
condition = fmt.Sprintf("%s >= ?", filter.Column)
args = []interface{}{filter.Value}
case "lt":
return query.Where(fmt.Sprintf("%s < ?", filter.Column), filter.Value)
condition = fmt.Sprintf("%s < ?", filter.Column)
args = []interface{}{filter.Value}
case "lte":
return query.Where(fmt.Sprintf("%s <= ?", filter.Column), filter.Value)
condition = fmt.Sprintf("%s <= ?", filter.Column)
args = []interface{}{filter.Value}
case "like":
return query.Where(fmt.Sprintf("%s LIKE ?", filter.Column), filter.Value)
condition = fmt.Sprintf("%s LIKE ?", filter.Column)
args = []interface{}{filter.Value}
case "ilike":
return query.Where(fmt.Sprintf("%s ILIKE ?", filter.Column), filter.Value)
condition = fmt.Sprintf("%s ILIKE ?", filter.Column)
args = []interface{}{filter.Value}
case "in":
return query.Where(fmt.Sprintf("%s IN (?)", filter.Column), filter.Value)
condition = fmt.Sprintf("%s IN (?)", filter.Column)
args = []interface{}{filter.Value}
default:
return "", nil
}
return condition, args
}
func (h *Handler) applyFilter(query common.SelectQuery, filter common.FilterOption) common.SelectQuery {
// Determine which method to use based on LogicOperator
useOrLogic := strings.EqualFold(filter.LogicOperator, "OR")
var condition string
var args []interface{}
switch filter.Operator {
case "eq":
condition = fmt.Sprintf("%s = ?", filter.Column)
args = []interface{}{filter.Value}
case "neq":
condition = fmt.Sprintf("%s != ?", filter.Column)
args = []interface{}{filter.Value}
case "gt":
condition = fmt.Sprintf("%s > ?", filter.Column)
args = []interface{}{filter.Value}
case "gte":
condition = fmt.Sprintf("%s >= ?", filter.Column)
args = []interface{}{filter.Value}
case "lt":
condition = fmt.Sprintf("%s < ?", filter.Column)
args = []interface{}{filter.Value}
case "lte":
condition = fmt.Sprintf("%s <= ?", filter.Column)
args = []interface{}{filter.Value}
case "like":
condition = fmt.Sprintf("%s LIKE ?", filter.Column)
args = []interface{}{filter.Value}
case "ilike":
condition = fmt.Sprintf("%s ILIKE ?", filter.Column)
args = []interface{}{filter.Value}
case "in":
condition = fmt.Sprintf("%s IN (?)", filter.Column)
args = []interface{}{filter.Value}
default:
return query
}
// Apply filter with appropriate logic operator
if useOrLogic {
return query.WhereOr(condition, args...)
}
return query.Where(condition, args...)
}
// parseTableName splits a table name that may contain schema into separate schema and table
@@ -1380,10 +1600,16 @@ func (h *Handler) getSchemaAndTable(defaultSchema, entity string, model interfac
return schema, entity
}
// getTableName returns the full table name including schema (schema.table)
// getTableName returns the full table name including schema.
// For most drivers the result is "schema.table". For SQLite, which does not
// support schema-qualified names, the schema and table are joined with an
// underscore: "schema_table".
func (h *Handler) getTableName(schema, entity string, model interface{}) string {
schemaName, tableName := h.getSchemaAndTable(schema, entity, model)
if schemaName != "" {
if h.db.DriverName() == "sqlite" {
return fmt.Sprintf("%s_%s", schemaName, tableName)
}
return fmt.Sprintf("%s.%s", schemaName, tableName)
}
return tableName
@@ -1703,6 +1929,51 @@ func toSnakeCase(s string) string {
return strings.ToLower(result.String())
}
// setRowNumbersOnRecords sets the RowNumber field on each record if it exists
// The row number is calculated as offset + index + 1 (1-based)
func (h *Handler) setRowNumbersOnRecords(records interface{}, offset int) {
// Get the reflect value of the records
recordsValue := reflect.ValueOf(records)
if recordsValue.Kind() == reflect.Ptr {
recordsValue = recordsValue.Elem()
}
// Ensure it's a slice
if recordsValue.Kind() != reflect.Slice {
logger.Debug("setRowNumbersOnRecords: records is not a slice, skipping")
return
}
// Iterate through each record
for i := 0; i < recordsValue.Len(); i++ {
record := recordsValue.Index(i)
// Dereference if it's a pointer
if record.Kind() == reflect.Ptr {
if record.IsNil() {
continue
}
record = record.Elem()
}
// Ensure it's a struct
if record.Kind() != reflect.Struct {
continue
}
// Try to find and set the RowNumber field
rowNumberField := record.FieldByName("RowNumber")
if rowNumberField.IsValid() && rowNumberField.CanSet() {
// Check if the field is of type int64
if rowNumberField.Kind() == reflect.Int64 {
rowNum := int64(offset + i + 1)
rowNumberField.SetInt(rowNum)
logger.Debug("Set RowNumber=%d for record index %d", rowNum, i)
}
}
}
}
// HandleOpenAPI generates and returns the OpenAPI specification
func (h *Handler) HandleOpenAPI(w common.ResponseWriter, r common.Request) {
if h.openAPIGenerator == nil {


@@ -2015,11 +2015,18 @@ func (h *Handler) processChildRelationsForField(
return nil
}
// getTableNameForRelatedModel gets the table name for a related model
// getTableNameForRelatedModel gets the table name for a related model.
// If the model's TableName() is schema-qualified (e.g. "public.users") the
// separator is adjusted for the active driver: underscore for SQLite, dot otherwise.
func (h *Handler) getTableNameForRelatedModel(model interface{}, defaultName string) string {
if provider, ok := model.(common.TableNameProvider); ok {
tableName := provider.TableName()
if tableName != "" {
if schema, table := h.parseTableName(tableName); schema != "" {
if h.db.DriverName() == "sqlite" {
return fmt.Sprintf("%s_%s", schema, table)
}
}
return tableName
}
}
@@ -2264,10 +2271,16 @@ func (h *Handler) getSchemaAndTable(defaultSchema, entity string, model interfac
return schema, entity
}
// getTableName returns the full table name including schema (schema.table)
// getTableName returns the full table name including schema.
// For most drivers the result is "schema.table". For SQLite, which does not
// support schema-qualified names, the schema and table are joined with an
// underscore: "schema_table".
func (h *Handler) getTableName(schema, entity string, model interface{}) string {
schemaName, tableName := h.getSchemaAndTable(schema, entity, model)
if schemaName != "" {
if h.db.DriverName() == "sqlite" {
return fmt.Sprintf("%s_%s", schemaName, tableName)
}
return fmt.Sprintf("%s.%s", schemaName, tableName)
}
return tableName
@@ -2589,21 +2602,8 @@ func (h *Handler) FetchRowNumber(ctx context.Context, tableName string, pkName s
sortSQL = fmt.Sprintf("%s.%s ASC", tableName, pkName)
}
// Build WHERE clauses from filters
whereClauses := make([]string, 0)
for i := range options.Filters {
filter := &options.Filters[i]
whereClause := h.buildFilterSQL(filter, tableName)
if whereClause != "" {
whereClauses = append(whereClauses, fmt.Sprintf("(%s)", whereClause))
}
}
// Combine WHERE clauses
whereSQL := ""
if len(whereClauses) > 0 {
whereSQL = "WHERE " + strings.Join(whereClauses, " AND ")
}
// Build WHERE clause from filters with proper OR grouping
whereSQL := h.buildWhereClauseWithORGrouping(options.Filters, tableName)
// Add custom SQL WHERE if provided
if options.CustomSQLWhere != "" {
@@ -2664,6 +2664,67 @@ func (h *Handler) FetchRowNumber(ctx context.Context, tableName string, pkName s
}
// buildFilterSQL converts a filter to SQL WHERE clause string
// buildWhereClauseWithORGrouping builds a WHERE clause from filters with proper OR grouping
// Groups consecutive OR filters together to ensure proper SQL precedence
// Example: [A, B(OR), C(OR), D(AND)] => WHERE (A OR B OR C) AND D
func (h *Handler) buildWhereClauseWithORGrouping(filters []common.FilterOption, tableName string) string {
if len(filters) == 0 {
return ""
}
var groups []string
i := 0
for i < len(filters) {
// Check if this starts an OR group (next filter has OR logic)
startORGroup := i+1 < len(filters) && strings.EqualFold(filters[i+1].LogicOperator, "OR")
if startORGroup {
// Collect all consecutive filters that are OR'd together
orGroup := []string{}
// Add current filter
filterSQL := h.buildFilterSQL(&filters[i], tableName)
if filterSQL != "" {
orGroup = append(orGroup, filterSQL)
}
// Collect remaining OR filters
j := i + 1
for j < len(filters) && strings.EqualFold(filters[j].LogicOperator, "OR") {
filterSQL := h.buildFilterSQL(&filters[j], tableName)
if filterSQL != "" {
orGroup = append(orGroup, filterSQL)
}
j++
}
// Group OR filters with parentheses
if len(orGroup) > 0 {
if len(orGroup) == 1 {
groups = append(groups, orGroup[0])
} else {
groups = append(groups, "("+strings.Join(orGroup, " OR ")+")")
}
}
i = j
} else {
// Single filter with AND logic (or first filter)
filterSQL := h.buildFilterSQL(&filters[i], tableName)
if filterSQL != "" {
groups = append(groups, filterSQL)
}
i++
}
}
if len(groups) == 0 {
return ""
}
return "WHERE " + strings.Join(groups, " AND ")
}
func (h *Handler) buildFilterSQL(filter *common.FilterOption, tableName string) string {
qualifiedColumn := h.qualifyColumnName(filter.Column, tableName)


@@ -6,6 +6,7 @@ import (
"fmt"
"net/http"
"reflect"
"strings"
"time"
"github.com/google/uuid"
@@ -540,10 +541,8 @@ func (h *Handler) readMultiple(hookCtx *HookContext) (data interface{}, metadata
// Apply options (simplified implementation)
if hookCtx.Options != nil {
// Apply filters
for _, filter := range hookCtx.Options.Filters {
query = query.Where(fmt.Sprintf("%s %s ?", filter.Column, h.getOperatorSQL(filter.Operator)), filter.Value)
}
// Apply filters with OR grouping support
query = h.applyFilters(query, hookCtx.Options.Filters)
// Apply sorting
for _, sort := range hookCtx.Options.Sort {
@@ -578,6 +577,13 @@ func (h *Handler) readMultiple(hookCtx *HookContext) (data interface{}, metadata
return nil, nil, fmt.Errorf("failed to read records: %w", err)
}
// Set row numbers on records if RowNumber field exists
offset := 0
if hookCtx.Options != nil && hookCtx.Options.Offset != nil {
offset = *hookCtx.Options.Offset
}
h.setRowNumbersOnRecords(hookCtx.ModelPtr, offset)
// Get count
metadata = make(map[string]interface{})
countQuery := h.db.NewSelect().Model(hookCtx.ModelPtr).Table(hookCtx.TableName)
@@ -656,11 +662,14 @@ func (h *Handler) delete(hookCtx *HookContext) error {
// Helper methods
func (h *Handler) getTableName(schema, entity string, model interface{}) string {
// Use entity as table name
tableName := entity
if schema != "" {
tableName = schema + "." + tableName
if h.db.DriverName() == "sqlite" {
tableName = schema + "_" + tableName
} else {
tableName = schema + "." + tableName
}
}
return tableName
}
@@ -680,6 +689,133 @@ func (h *Handler) getMetadata(schema, entity string, model interface{}) map[stri
}
// getOperatorSQL converts filter operator to SQL operator
// applyFilters applies all filters with proper grouping for OR logic
// Groups consecutive OR filters together to ensure proper query precedence
func (h *Handler) applyFilters(query common.SelectQuery, filters []common.FilterOption) common.SelectQuery {
if len(filters) == 0 {
return query
}
i := 0
for i < len(filters) {
// Check if this starts an OR group (next filter has OR logic)
startORGroup := i+1 < len(filters) && strings.EqualFold(filters[i+1].LogicOperator, "OR")
if startORGroup {
// Collect all consecutive filters that are OR'd together
orGroup := []common.FilterOption{filters[i]}
j := i + 1
for j < len(filters) && strings.EqualFold(filters[j].LogicOperator, "OR") {
orGroup = append(orGroup, filters[j])
j++
}
// Apply the OR group as a single grouped WHERE clause
query = h.applyFilterGroup(query, orGroup)
i = j
} else {
// Single filter with AND logic (or first filter)
condition, args := h.buildFilterCondition(filters[i])
if condition != "" {
query = query.Where(condition, args...)
}
i++
}
}
return query
}
// applyFilterGroup applies a group of filters that should be OR'd together
// Always wraps them in parentheses and applies as a single WHERE clause
func (h *Handler) applyFilterGroup(query common.SelectQuery, filters []common.FilterOption) common.SelectQuery {
if len(filters) == 0 {
return query
}
// Build all conditions and collect args
var conditions []string
var args []interface{}
for _, filter := range filters {
condition, filterArgs := h.buildFilterCondition(filter)
if condition != "" {
conditions = append(conditions, condition)
args = append(args, filterArgs...)
}
}
if len(conditions) == 0 {
return query
}
// Single filter - no need for grouping
if len(conditions) == 1 {
return query.Where(conditions[0], args...)
}
// Multiple conditions - group with parentheses and OR
groupedCondition := "(" + strings.Join(conditions, " OR ") + ")"
return query.Where(groupedCondition, args...)
}
// buildFilterCondition builds a filter condition and returns it with args
func (h *Handler) buildFilterCondition(filter common.FilterOption) (conditionString string, conditionArgs []interface{}) {
var condition string
var args []interface{}
operatorSQL := h.getOperatorSQL(filter.Operator)
condition = fmt.Sprintf("%s %s ?", filter.Column, operatorSQL)
args = []interface{}{filter.Value}
return condition, args
}
// setRowNumbersOnRecords sets the RowNumber field on each record if it exists
// The row number is calculated as offset + index + 1 (1-based)
func (h *Handler) setRowNumbersOnRecords(records interface{}, offset int) {
// Get the reflect value of the records
recordsValue := reflect.ValueOf(records)
if recordsValue.Kind() == reflect.Ptr {
recordsValue = recordsValue.Elem()
}
// Ensure it's a slice
if recordsValue.Kind() != reflect.Slice {
logger.Debug("[WebSocketSpec] setRowNumbersOnRecords: records is not a slice, skipping")
return
}
// Iterate through each record
for i := 0; i < recordsValue.Len(); i++ {
record := recordsValue.Index(i)
// Dereference if it's a pointer
if record.Kind() == reflect.Ptr {
if record.IsNil() {
continue
}
record = record.Elem()
}
// Ensure it's a struct
if record.Kind() != reflect.Struct {
continue
}
// Try to find and set the RowNumber field
rowNumberField := record.FieldByName("RowNumber")
if rowNumberField.IsValid() && rowNumberField.CanSet() {
// Check if the field is of type int64
if rowNumberField.Kind() == reflect.Int64 {
rowNum := int64(offset + i + 1)
rowNumberField.SetInt(rowNum)
logger.Debug("[WebSocketSpec] Set RowNumber=%d for record index %d", rowNum, i)
}
}
}
}
func (h *Handler) getOperatorSQL(operator string) string {
switch operator {
case "eq":


@@ -82,6 +82,10 @@ func (m *MockDatabase) GetUnderlyingDB() interface{} {
return args.Get(0)
}
func (m *MockDatabase) DriverName() string {
return "postgres"
}
// MockSelectQuery is a mock implementation of common.SelectQuery
type MockSelectQuery struct {
mock.Mock


@@ -1,5 +1,50 @@
# ResolveSpec Python Client - TODO
## Client Implementation & Testing
### 1. ResolveSpec Client API
- [ ] Core API implementation (read, create, update, delete, get_metadata)
- [ ] Unit tests for API functions
- [ ] Integration tests with server
- [ ] Error handling and edge cases
### 2. HeaderSpec Client API
- [ ] Client API implementation
- [ ] Unit tests
- [ ] Integration tests with server
### 3. FunctionSpec Client API
- [ ] Client API implementation
- [ ] Unit tests
- [ ] Integration tests with server
### 4. WebSocketSpec Client API
- [ ] WebSocketClient class implementation (read, create, update, delete, meta, subscribe, unsubscribe)
- [ ] Unit tests for WebSocketClient
- [ ] Connection handling tests
- [ ] Subscription tests
- [ ] Integration tests with server
### 5. Testing Infrastructure
- [ ] Set up test framework (pytest)
- [ ] Configure test coverage reporting (pytest-cov)
- [ ] Add test utilities and fixtures
- [ ] Create test documentation
- [ ] Package and publish to PyPI
## Documentation
- [ ] API reference documentation
- [ ] Usage examples for each client API
- [ ] Installation guide
- [ ] Contributing guidelines
- [ ] README with quick start
---
**Last Updated:** 2026-02-07

todo.md

@@ -2,36 +2,98 @@
This document tracks incomplete features and improvements for the ResolveSpec project.
## In Progress
### Database Layer
- [x] SQLite schema translation (schema.table → schema_table)
- [x] Driver name normalization across adapters
- [x] Database Connection Manager (dbmanager) package
### Documentation
- [x] Add dbmanager to README
- [x] Add WebSocketSpec to top-level intro
- [x] Add MQTTSpec to top-level intro
- [x] Remove migration sections from README
- [ ] Complete API reference documentation
- [ ] Add examples for all supported databases
## Planned Features
### ResolveSpec JS Client Implementation & Testing
1. **ResolveSpec Client API (resolvespec-js)**
- [x] Core API implementation (read, create, update, delete, getMetadata)
- [ ] Unit tests for API functions
- [ ] Integration tests with server
- [ ] Error handling and edge cases
2. **HeaderSpec Client API (resolvespec-js)**
- [ ] Client API implementation
- [ ] Unit tests
- [ ] Integration tests with server
3. **FunctionSpec Client API (resolvespec-js)**
- [ ] Client API implementation
- [ ] Unit tests
- [ ] Integration tests with server
4. **WebSocketSpec Client API (resolvespec-js)**
- [x] WebSocketClient class implementation (read, create, update, delete, meta, subscribe, unsubscribe)
- [ ] Unit tests for WebSocketClient
- [ ] Connection handling tests
- [ ] Subscription tests
- [ ] Integration tests with server
5. **resolvespec-js Testing Infrastructure**
- [ ] Set up test framework (Jest or Vitest)
- [ ] Configure test coverage reporting
- [ ] Add test utilities and mocks
- [ ] Create test documentation
### ResolveSpec Python Client Implementation & Testing
See [`resolvespec-python/todo.md`](./resolvespec-python/todo.md) for detailed Python client implementation tasks.
### Core Functionality
1. **Enhanced Preload Filtering**
- [ ] Column selection for nested preloads
- [ ] Advanced filtering conditions for relations
- [ ] Performance optimization for deep nesting
2. **Advanced Query Features**
- [ ] Custom SQL join support
- [ ] Computed column improvements
- [ ] Recursive query support
3. **Testing & Quality**
- [ ] Increase test coverage to 70%+
- [ ] Add integration tests for all ORMs
- [ ] Add concurrency tests for thread safety
- [ ] Performance benchmarks
### Infrastructure
- [ ] Improved error handling and reporting
- [ ] Enhanced logging capabilities
- [ ] Additional monitoring metrics
- [ ] Performance profiling tools
## Documentation Tasks
- [ ] Complete API reference
- [ ] Add troubleshooting guides
- [ ] Create architecture diagrams
- [ ] Expand database adapter documentation
## Known Issues
- [ ] Long preload alias names may exceed PostgreSQL identifier limit
- [ ] Some edge cases in computed column handling
---
## Priority Ranking
1. **High Priority**
- Column Selection and Filtering for Preloads (#1)
- Proper Condition Handling for Bun Preloads (#4)
2. **Medium Priority**
- Custom SQL Join Support (#3)
- Recursive JSON Cleaning (#2)
3. **Low Priority**
- Modernize Go Type Declarations (#5)
---
**Last Updated:** 2026-02-07
**Updated:** Added resolvespec-js client testing and implementation tasks