Compare commits

...

58 Commits

Author SHA1 Message Date
Hein Puth (Warkanum)
41e4956510 Merge pull request #12 from bitechdev/copilot/fix-prefix-event-issue
[WIP] Fix prefix addition in where queries and xfiles options
2025-12-30 15:38:35 +02:00
copilot-swe-agent[bot]
8e8c3c6de6 Refactor: Extract common logic from stripOuterParentheses functions
Co-authored-by: warkanum <208308+warkanum@users.noreply.github.com>
2025-12-30 13:36:29 +00:00
copilot-swe-agent[bot]
aa9b7312f6 Fix AddTablePrefixToColumns to handle parenthesized AND conditions correctly
Co-authored-by: warkanum <208308+warkanum@users.noreply.github.com>
2025-12-30 13:31:18 +00:00
copilot-swe-agent[bot]
dca43b0e05 Initial analysis: identified bug in AddTablePrefixToColumns
Co-authored-by: warkanum <208308+warkanum@users.noreply.github.com>
2025-12-30 13:26:37 +00:00
copilot-swe-agent[bot]
6f368bbce5 Initial plan 2025-12-30 13:18:17 +00:00
Hein Puth (Warkanum)
8704cee941 Merge pull request #9 from bitechdev/websocketspec
feature: Websocketspec and mqtt spec
2025-12-30 15:02:59 +02:00
Hein Puth (Warkanum)
4ce5afe0ac Merge pull request #10 from bitechdev/copilot/sub-pr-9
Add WebSocketSpec and MQTTSpec real-time protocol implementations
2025-12-30 14:50:35 +02:00
copilot-swe-agent[bot]
7b98ea2145 Initial plan 2025-12-30 12:41:53 +00:00
Hein
897cb2ae0d fix: linting issues and events dev 2025-12-30 14:40:45 +02:00
Hein
01420e6b63 Merge branch 'main' of https://github.com/bitechdev/ResolveSpec into websocketspec 2025-12-30 14:13:52 +02:00
Hein Puth (Warkanum)
645907d355 Merge pull request #5 from bitechdev/server
feature: Server Manager
2025-12-30 14:13:23 +02:00
Hein
e81d7b48cc feature: mqtt support 2025-12-30 14:12:36 +02:00
Hein
8f5a725a09 Bugfix with xfiles 2025-12-30 14:12:07 +02:00
Hein Puth (Warkanum)
3d5d7b788e Merge pull request #8 from bitechdev/copilot/sub-pr-5
Fix impossible type assertion in Remove method
2025-12-30 14:04:08 +02:00
copilot-swe-agent[bot]
eaecef686e Fix type assertion error in Remove method
Co-authored-by: warkanum <208308+warkanum@users.noreply.github.com>
2025-12-30 11:44:56 +00:00
copilot-swe-agent[bot]
e0d21b17ec Initial plan 2025-12-30 11:38:31 +00:00
Hein Puth (Warkanum)
7e1718e864 Merge pull request #7 from bitechdev/copilot/sub-pr-5-again
Fix recover() not working in CatchPanic functions
2025-12-30 13:29:36 +02:00
Hein Puth (Warkanum)
16d416030e Merge pull request #6 from bitechdev/copilot/sub-pr-5
Implement persistent certificate storage with reuse for self-signed SSL
2025-12-30 13:27:50 +02:00
Hein
bf8500714a Websocket spec fixes 2025-12-30 13:25:16 +02:00
copilot-swe-agent[bot]
4f8edd6469 Add security improvements and race condition protection
Co-authored-by: warkanum <208308+warkanum@users.noreply.github.com>
2025-12-30 11:14:59 +00:00
copilot-swe-agent[bot]
ccf8522f88 Refactor: Use persistent cert storage with reuse logic
Co-authored-by: warkanum <208308+warkanum@users.noreply.github.com>
2025-12-30 11:12:21 +00:00
copilot-swe-agent[bot]
92a83e9cc6 Final update
Co-authored-by: warkanum <208308+warkanum@users.noreply.github.com>
2025-12-30 11:09:06 +00:00
copilot-swe-agent[bot]
4cb35a78b0 Improve CatchPanicCallback: extract context early and clarify example
Co-authored-by: warkanum <208308+warkanum@users.noreply.github.com>
2025-12-30 11:07:46 +00:00
copilot-swe-agent[bot]
e10e2e1c27 Fix recover() usage in CatchPanic functions by returning deferred function
Co-authored-by: warkanum <208308+warkanum@users.noreply.github.com>
2025-12-30 11:06:43 +00:00
copilot-swe-agent[bot]
64f56325d4 Final verification and cleanup
Co-authored-by: warkanum <208308+warkanum@users.noreply.github.com>
2025-12-30 11:03:01 +00:00
copilot-swe-agent[bot]
5e6032c91d Initial plan 2025-12-30 11:02:05 +00:00
Hein Puth (Warkanum)
bc2fdc143b Update pkg/logger/logger.go
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-12-30 13:00:56 +02:00
copilot-swe-agent[bot]
267e84fd84 Implement cleanup for temporary certificate directories
Co-authored-by: warkanum <208308+warkanum@users.noreply.github.com>
2025-12-30 11:00:45 +00:00
Hein Puth (Warkanum)
8adc386863 Update pkg/server/manager.go
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-12-30 12:58:38 +02:00
Hein Puth (Warkanum)
feb023ec48 Update pkg/server/tls.go
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-12-30 12:57:55 +02:00
Hein Puth (Warkanum)
de50141a04 Update pkg/server/manager.go
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-12-30 12:57:35 +02:00
copilot-swe-agent[bot]
c226dc349f Initial plan 2025-12-30 10:56:43 +00:00
Hein
d4a6f9c4c2 Better server manager 2025-12-29 17:19:16 +02:00
8f83e8fdc1 Merge branch 'main' of github.com:bitechdev/ResolveSpec into server 2025-12-28 09:07:05 +02:00
Hein
90df4a157c Socket spec tests 2025-12-23 17:27:48 +02:00
Hein
2dd404af96 Updated to websockspec 2025-12-23 17:27:29 +02:00
Hein
17c472b206 Merge branch 'main' of https://github.com/bitechdev/ResolveSpec into websocketspec 2025-12-23 15:23:36 +02:00
Hein
ed67caf055 fix: reasheadspec customsql calls AddTablePrefixToColumns
2025-12-23 14:17:02 +02:00
4d1b8b6982 Work on server 2025-12-20 10:42:51 +02:00
Hein
63ed62a9a3 fix: Stupid logic error.
Co-authored-by: IvanX006 <ivan@bitechsystems.co.za>
Co-authored-by: Warkanum <HEIN.PUTH@GMAIL.COM>
Co-authored-by: Hein <hein@bitechsystems.co.za>
2025-12-19 16:52:34 +02:00
Hein
0525323a47 Fixed tests failing due to response header status
Co-authored-by: IvanX006 <ivan@bitechsystems.co.za>
Co-authored-by: Warkanum <HEIN.PUTH@GMAIL.COM>
Co-authored-by: Hein <hein@bitechsystems.co.za>
2025-12-19 16:50:16 +02:00
Hein Puth (Warkanum)
c3443f702e Merge pull request #4 from bitechdev/fix-dockers
Fixed Attempt to Fix Docker / Podman
2025-12-19 16:42:38 +02:00
Hein
45c463c117 Fixed Attempt to Fix Docker / Podman
Co-authored-by: IvanX006 <ivan@bitechsystems.co.za>
Co-authored-by: Warkanum <HEIN.PUTH@GMAIL.COM>
Co-authored-by: Hein <hein@bitechsystems.co.za>
2025-12-19 16:42:01 +02:00
Hein
84d673ce14 Added OpenAPI UI Routes
Co-authored-by: IvanX006 <ivan@bitechsystems.co.za>
Co-authored-by: Warkanum <HEIN.PUTH@GMAIL.COM>
Co-authored-by: Hein <hein@bitechsystems.co.za>
2025-12-19 16:32:14 +02:00
Hein
02fbdbd651 Cache package is pure infrastructure. Cache invalidates on create/delete from the API
2025-12-18 16:30:38 +02:00
Hein
97988e3b5e Updated bun version 2025-12-18 15:54:00 +02:00
Hein
c9838ad9d2 Bun bugfix 2025-12-18 15:22:58 +02:00
Hein
c5c0608f63 StatusPartialContent is better since we need to see the result. 2025-12-18 14:48:14 +02:00
Hein
39c3f05d21 StatusNoContent for zero length data 2025-12-18 13:34:07 +02:00
Hein
4ecd1ac17e Fixed to StatusNoContent 2025-12-18 13:20:39 +02:00
Hein
2b1aea0338 Fix null interface issue and added partial content response when content is empty 2025-12-18 13:19:57 +02:00
Hein
1e749efeb3 Fixes for not found records 2025-12-18 13:08:26 +02:00
Hein
09be676096 Resolvespec delete returns deleted record 2025-12-18 12:52:47 +02:00
Hein
e8350a70be Fixed delete record to return the record 2025-12-18 12:49:37 +02:00
Hein
5937b9eab5 Fixed the double table on update
2025-12-18 12:14:39 +02:00
Hein
7c861c708e [breaking] Another breaking change datatypes -> spectypes 2025-12-18 11:49:35 +02:00
Hein
77f39af2f9 [breaking] Moved sql types to datatypes 2025-12-18 11:43:19 +02:00
Hein
1b2b0d8f0b Prototype for websockspec 2025-12-12 16:14:47 +02:00
83 changed files with 18231 additions and 1309 deletions

1
.gitignore vendored
View File

@@ -25,3 +25,4 @@ go.work.sum
.env
bin/
test.db
testserver

View File

@@ -13,10 +13,55 @@ test-integration:
# Run all tests (unit + integration)
test: test-unit test-integration
release-version: ## Create and push a release with specific version (use: make release-version VERSION=v1.2.3)
@if [ -z "$(VERSION)" ]; then \
echo "Error: VERSION is required. Usage: make release-version VERSION=v1.2.3"; \
exit 1; \
fi
@version="$(VERSION)"; \
if ! echo "$$version" | grep -q "^v"; then \
version="v$$version"; \
fi; \
echo "Creating release: $$version"; \
latest_tag=$$(git describe --tags --abbrev=0 2>/dev/null || echo ""); \
if [ -z "$$latest_tag" ]; then \
commit_logs=$$(git log --pretty=format:"- %s" --no-merges); \
else \
commit_logs=$$(git log "$${latest_tag}..HEAD" --pretty=format:"- %s" --no-merges); \
fi; \
if [ -z "$$commit_logs" ]; then \
tag_message="Release $$version"; \
else \
tag_message="Release $$version\n\n$$commit_logs"; \
fi; \
git tag -a "$$version" -m "$$tag_message"; \
git push origin "$$version"; \
echo "Tag $$version created and pushed to remote repository."
lint: ## Run linter
@echo "Running linter..."
@if command -v golangci-lint > /dev/null; then \
golangci-lint run --config=.golangci.json; \
else \
echo "golangci-lint not installed. Install with: go install github.com/golangci/golangci-lint/cmd/golangci-lint@latest"; \
exit 1; \
fi
lintfix: ## Run linter
@echo "Running linter..."
@if command -v golangci-lint > /dev/null; then \
golangci-lint run --config=.golangci.json --fix; \
else \
echo "golangci-lint not installed. Install with: go install github.com/golangci/golangci-lint/cmd/golangci-lint@latest"; \
exit 1; \
fi
# Start PostgreSQL for integration tests
docker-up:
@echo "Starting PostgreSQL container..."
@docker-compose up -d postgres-test
@podman compose up -d postgres-test
@echo "Waiting for PostgreSQL to be ready..."
@sleep 5
@echo "PostgreSQL is ready!"
@@ -24,12 +69,12 @@ docker-up:
# Stop PostgreSQL container
docker-down:
@echo "Stopping PostgreSQL container..."
@docker-compose down
@podman compose down
# Clean up Docker volumes and test data
clean:
@echo "Cleaning up..."
@docker-compose down -v
@podman compose down -v
@echo "Cleanup complete!"
# Run integration tests with Docker (full workflow)

View File

@@ -1,8 +1,8 @@
package main
import (
"fmt"
"log"
"net/http"
"os"
"time"
@@ -67,9 +67,36 @@ func main() {
// Setup routes using new SetupMuxRoutes function (without authentication)
resolvespec.SetupMuxRoutes(r, handler, nil)
// Create graceful server with configuration
srv := server.NewGracefulServer(server.Config{
Addr: cfg.Server.Addr,
// Create server manager
mgr := server.NewManager()
// Parse host and port from addr
host := ""
port := 8080
if cfg.Server.Addr != "" {
// Parse addr (format: ":8080" or "localhost:8080")
if cfg.Server.Addr[0] == ':' {
// Just port
_, err := fmt.Sscanf(cfg.Server.Addr, ":%d", &port)
if err != nil {
logger.Error("Invalid server address: %s", cfg.Server.Addr)
os.Exit(1)
}
} else {
// Host and port
_, err := fmt.Sscanf(cfg.Server.Addr, "%[^:]:%d", &host, &port)
if err != nil {
logger.Error("Invalid server address: %s", cfg.Server.Addr)
os.Exit(1)
}
}
}
// Add server instance
_, err = mgr.Add(server.Config{
Name: "api",
Host: host,
Port: port,
Handler: r,
ShutdownTimeout: cfg.Server.ShutdownTimeout,
DrainTimeout: cfg.Server.DrainTimeout,
@@ -77,11 +104,15 @@ func main() {
WriteTimeout: cfg.Server.WriteTimeout,
IdleTimeout: cfg.Server.IdleTimeout,
})
if err != nil {
logger.Error("Failed to add server: %v", err)
os.Exit(1)
}
// Start server with graceful shutdown
logger.Info("Starting server on %s", cfg.Server.Addr)
if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
logger.Error("Server failed to start: %v", err)
if err := mgr.ServeWithGracefulShutdown(); err != nil {
logger.Error("Server failed: %v", err)
os.Exit(1)
}
}
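For readers following the new startup flow, here is a small, self-contained sketch of the same host/port split using the standard library's net.SplitHostPort. The helper name parseAddr and the default port are illustrative assumptions, not code from this PR:

```go
package main

import (
	"fmt"
	"net"
	"strconv"
)

// parseAddr is a hypothetical helper that splits an address such as
// ":8080" or "localhost:8080" into host and port, defaulting to 8080
// for an empty address (mirroring the intent of the diff above).
func parseAddr(addr string) (string, int, error) {
	if addr == "" {
		return "", 8080, nil
	}
	host, portStr, err := net.SplitHostPort(addr)
	if err != nil {
		return "", 0, fmt.Errorf("invalid server address %q: %w", addr, err)
	}
	port, err := strconv.Atoi(portStr)
	if err != nil {
		return "", 0, fmt.Errorf("invalid port in %q: %w", addr, err)
	}
	return host, port, nil
}

func main() {
	host, port, err := parseAddr("localhost:9090")
	fmt.Println(host, port, err) // localhost 9090 <nil>
}
```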

79
go.mod
View File

@@ -7,19 +7,26 @@ toolchain go1.24.6
require (
github.com/DATA-DOG/go-sqlmock v1.5.2
github.com/bradfitz/gomemcache v0.0.0-20250403215159-8d39553ac7cf
github.com/eclipse/paho.mqtt.golang v1.5.1
github.com/getsentry/sentry-go v0.40.0
github.com/glebarez/sqlite v1.11.0
github.com/google/uuid v1.6.0
github.com/gorilla/mux v1.8.1
github.com/gorilla/websocket v1.5.3
github.com/jackc/pgx/v5 v5.6.0
github.com/klauspost/compress v1.18.0
github.com/mochi-mqtt/server/v2 v2.7.9
github.com/nats-io/nats.go v1.48.0
github.com/prometheus/client_golang v1.23.2
github.com/redis/go-redis/v9 v9.17.1
github.com/spf13/viper v1.21.0
github.com/stretchr/testify v1.11.1
github.com/testcontainers/testcontainers-go v0.40.0
github.com/tidwall/gjson v1.18.0
github.com/tidwall/sjson v1.2.5
github.com/uptrace/bun v1.2.15
github.com/uptrace/bun/dialect/sqlitedialect v1.2.15
github.com/uptrace/bun/driver/sqliteshim v1.2.15
github.com/uptrace/bun v1.2.16
github.com/uptrace/bun/dialect/sqlitedialect v1.2.16
github.com/uptrace/bun/driver/sqliteshim v1.2.16
github.com/uptrace/bunrouter v1.0.23
go.opentelemetry.io/otel v1.38.0
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.38.0
@@ -27,71 +34,113 @@ require (
go.opentelemetry.io/otel/sdk v1.38.0
go.opentelemetry.io/otel/trace v1.38.0
go.uber.org/zap v1.27.0
golang.org/x/crypto v0.43.0
golang.org/x/time v0.14.0
gorm.io/driver/postgres v1.6.0
gorm.io/gorm v1.25.12
gorm.io/driver/sqlite v1.6.0
gorm.io/gorm v1.30.0
)
require (
dario.cat/mergo v1.0.2 // indirect
github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1 // indirect
github.com/Microsoft/go-winio v0.6.2 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/cenkalti/backoff/v4 v4.3.0 // indirect
github.com/cenkalti/backoff/v5 v5.0.3 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/containerd/errdefs v1.0.0 // indirect
github.com/containerd/errdefs/pkg v0.3.0 // indirect
github.com/containerd/log v0.1.0 // indirect
github.com/containerd/platforms v0.2.1 // indirect
github.com/cpuguy83/dockercfg v0.3.2 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f // indirect
github.com/distribution/reference v0.6.0 // indirect
github.com/docker/docker v28.5.1+incompatible // indirect
github.com/docker/go-connections v0.6.0 // indirect
github.com/docker/go-units v0.5.0 // indirect
github.com/dustin/go-humanize v1.0.1 // indirect
github.com/ebitengine/purego v0.8.4 // indirect
github.com/felixge/httpsnoop v1.0.4 // indirect
github.com/fsnotify/fsnotify v1.9.0 // indirect
github.com/glebarez/go-sqlite v1.21.2 // indirect
github.com/go-logr/logr v1.4.3 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
github.com/go-ole/go-ole v1.2.6 // indirect
github.com/go-viper/mapstructure/v2 v2.4.0 // indirect
github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.2 // indirect
github.com/jackc/pgpassfile v1.0.0 // indirect
github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761 // indirect
github.com/jackc/pgx/v5 v5.6.0 // indirect
github.com/jackc/puddle/v2 v2.2.2 // indirect
github.com/jinzhu/inflection v1.0.0 // indirect
github.com/jinzhu/now v1.1.5 // indirect
github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0 // indirect
github.com/magiconair/properties v1.8.10 // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
github.com/mattn/go-sqlite3 v1.14.28 // indirect
github.com/mattn/go-sqlite3 v1.14.32 // indirect
github.com/moby/docker-image-spec v1.3.1 // indirect
github.com/moby/go-archive v0.1.0 // indirect
github.com/moby/patternmatcher v0.6.0 // indirect
github.com/moby/sys/sequential v0.6.0 // indirect
github.com/moby/sys/user v0.4.0 // indirect
github.com/moby/sys/userns v0.1.0 // indirect
github.com/moby/term v0.5.0 // indirect
github.com/morikuni/aec v1.0.0 // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/ncruces/go-strftime v0.1.9 // indirect
github.com/nats-io/nkeys v0.4.11 // indirect
github.com/nats-io/nuid v1.0.1 // indirect
github.com/ncruces/go-strftime v1.0.0 // indirect
github.com/opencontainers/go-digest v1.0.0 // indirect
github.com/opencontainers/image-spec v1.1.1 // indirect
github.com/pelletier/go-toml/v2 v2.2.4 // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c // indirect
github.com/prometheus/client_model v0.6.2 // indirect
github.com/prometheus/common v0.66.1 // indirect
github.com/prometheus/procfs v0.16.1 // indirect
github.com/puzpuzpuz/xsync/v3 v3.5.1 // indirect
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec // indirect
github.com/rs/xid v1.4.0 // indirect
github.com/sagikazarmark/locafero v0.11.0 // indirect
github.com/shirou/gopsutil/v4 v4.25.6 // indirect
github.com/sirupsen/logrus v1.9.3 // indirect
github.com/sourcegraph/conc v0.3.1-0.20240121214520-5f936abd7ae8 // indirect
github.com/spf13/afero v1.15.0 // indirect
github.com/spf13/cast v1.10.0 // indirect
github.com/spf13/pflag v1.0.10 // indirect
github.com/stretchr/objx v0.5.2 // indirect
github.com/subosito/gotenv v1.6.0 // indirect
github.com/tidwall/match v1.1.1 // indirect
github.com/tidwall/pretty v1.2.0 // indirect
github.com/tklauser/go-sysconf v0.3.12 // indirect
github.com/tklauser/numcpus v0.6.1 // indirect
github.com/tmthrgd/go-hex v0.0.0-20190904060850-447a3041c3bc // indirect
github.com/vmihailenco/msgpack/v5 v5.4.1 // indirect
github.com/vmihailenco/tagparser/v2 v2.0.0 // indirect
github.com/yusufpapurcu/wmi v1.2.4 // indirect
go.opentelemetry.io/auto/sdk v1.1.0 // indirect
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.49.0 // indirect
go.opentelemetry.io/otel/metric v1.38.0 // indirect
go.opentelemetry.io/proto/otlp v1.7.1 // indirect
go.uber.org/multierr v1.10.0 // indirect
go.yaml.in/yaml/v2 v2.4.2 // indirect
go.yaml.in/yaml/v3 v3.0.4 // indirect
golang.org/x/crypto v0.41.0 // indirect
golang.org/x/exp v0.0.0-20250711185948-6ae5c78190dc // indirect
golang.org/x/net v0.43.0 // indirect
golang.org/x/sync v0.16.0 // indirect
golang.org/x/sys v0.35.0 // indirect
golang.org/x/text v0.28.0 // indirect
golang.org/x/exp v0.0.0-20251113190631-e25ba8c21ef6 // indirect
golang.org/x/net v0.45.0 // indirect
golang.org/x/sync v0.18.0 // indirect
golang.org/x/sys v0.38.0 // indirect
golang.org/x/text v0.30.0 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20250825161204-c5933d9347a5 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20250825161204-c5933d9347a5 // indirect
google.golang.org/grpc v1.75.0 // indirect
google.golang.org/protobuf v1.36.8 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
modernc.org/libc v1.66.3 // indirect
modernc.org/libc v1.67.0 // indirect
modernc.org/mathutil v1.7.1 // indirect
modernc.org/memory v1.11.0 // indirect
modernc.org/sqlite v1.38.0 // indirect
modernc.org/sqlite v1.40.1 // indirect
)
replace github.com/uptrace/bun => github.com/warkanum/bun v1.2.17

192
go.sum
View File

@@ -1,5 +1,13 @@
dario.cat/mergo v1.0.2 h1:85+piFYR1tMbRrLcDwR18y4UKJ3aH1Tbzi24VRW1TK8=
dario.cat/mergo v1.0.2/go.mod h1:E/hbnu0NxMFBjpMIE34DRGLWqDy0g5FuKDhCb31ngxA=
github.com/AdaLogics/go-fuzz-headers v0.0.0-20240806141605-e8a1dd7889d6 h1:He8afgbRMd7mFxO99hRNu+6tazq8nFF9lIwo9JFroBk=
github.com/AdaLogics/go-fuzz-headers v0.0.0-20240806141605-e8a1dd7889d6/go.mod h1:8o94RPi1/7XTJvwPpRSzSUedZrtlirdB3r9Z20bi2f8=
github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1 h1:UQHMgLO+TxOElx5B5HZ4hJQsoJ/PvUvKRhJHDQXO8P8=
github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1/go.mod h1:xomTg63KZ2rFqZQzSB4Vz2SUXa1BpHTVz9L5PTmPC4E=
github.com/DATA-DOG/go-sqlmock v1.5.2 h1:OcvFkGmslmlZibjAjaHm3L//6LiuBgolP7OputlJIzU=
github.com/DATA-DOG/go-sqlmock v1.5.2/go.mod h1:88MAG/4G7SMwSE3CeA0ZKzrT5CiOU3OJ+JlNzwDqpNU=
github.com/Microsoft/go-winio v0.6.2 h1:F2VQgta7ecxGYO8k3ZZz3RS8fVIXVxONVUPlNERoyfY=
github.com/Microsoft/go-winio v0.6.2/go.mod h1:yd8OoFMLzJbo9gZq8j5qaps8bJ9aShtEA8Ipt1oGCvU=
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
github.com/bradfitz/gomemcache v0.0.0-20250403215159-8d39553ac7cf h1:TqhNAT4zKbTdLa62d2HDBFdvgSbIGB3eJE8HqhgiL9I=
@@ -8,17 +16,45 @@ github.com/bsm/ginkgo/v2 v2.12.0 h1:Ny8MWAHyOepLGlLKYmXG4IEkioBysk6GpaRTLC8zwWs=
github.com/bsm/ginkgo/v2 v2.12.0/go.mod h1:SwYbGRRDovPVboqFv0tPTcG1sN61LM1Z4ARdbAV9g4c=
github.com/bsm/gomega v1.27.10 h1:yeMWxP2pV2fG3FgAODIY8EiRE3dy0aeFYt4l7wh6yKA=
github.com/bsm/gomega v1.27.10/go.mod h1:JyEr/xRbxbtgWNi8tIEVPUYZ5Dzef52k01W3YH0H+O0=
github.com/cenkalti/backoff/v4 v4.3.0 h1:MyRJ/UdXutAwSAT+s3wNd7MfTIcy71VQueUuFK343L8=
github.com/cenkalti/backoff/v4 v4.3.0/go.mod h1:Y3VNntkOUPxTVeUxJ/G5vcM//AlwfmyYozVcomhLiZE=
github.com/cenkalti/backoff/v5 v5.0.3 h1:ZN+IMa753KfX5hd8vVaMixjnqRZ3y8CuJKRKj1xcsSM=
github.com/cenkalti/backoff/v5 v5.0.3/go.mod h1:rkhZdG3JZukswDf7f0cwqPNk4K0sa+F97BxZthm/crw=
github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/containerd/errdefs v1.0.0 h1:tg5yIfIlQIrxYtu9ajqY42W3lpS19XqdxRQeEwYG8PI=
github.com/containerd/errdefs v1.0.0/go.mod h1:+YBYIdtsnF4Iw6nWZhJcqGSg/dwvV7tyJ/kCkyJ2k+M=
github.com/containerd/errdefs/pkg v0.3.0 h1:9IKJ06FvyNlexW690DXuQNx2KA2cUJXx151Xdx3ZPPE=
github.com/containerd/errdefs/pkg v0.3.0/go.mod h1:NJw6s9HwNuRhnjJhM7pylWwMyAkmCQvQ4GpJHEqRLVk=
github.com/containerd/log v0.1.0 h1:TCJt7ioM2cr/tfR8GPbGf9/VRAX8D2B4PjzCpfX540I=
github.com/containerd/log v0.1.0/go.mod h1:VRRf09a7mHDIRezVKTRCrOq78v577GXq3bSa3EhrzVo=
github.com/containerd/platforms v0.2.1 h1:zvwtM3rz2YHPQsF2CHYM8+KtB5dvhISiXh5ZpSBQv6A=
github.com/containerd/platforms v0.2.1/go.mod h1:XHCb+2/hzowdiut9rkudds9bE5yJ7npe7dG/wG+uFPw=
github.com/cpuguy83/dockercfg v0.3.2 h1:DlJTyZGBDlXqUZ2Dk2Q3xHs/FtnooJJVaad2S9GKorA=
github.com/cpuguy83/dockercfg v0.3.2/go.mod h1:sugsbF4//dDlL/i+S+rtpIWp+5h0BHJHfjj5/jFyUJc=
github.com/creack/pty v1.1.18 h1:n56/Zwd5o6whRC5PMGretI4IdRLlmBXYNjScPaBgsbY=
github.com/creack/pty v1.1.18/go.mod h1:MOBLtS5ELjhRRrroQr9kyvTxUAFNvYEK993ew/Vr4O4=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f h1:lO4WD4F/rVNCu3HqELle0jiPLLBs70cWOduZpkS1E78=
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f/go.mod h1:cuUVRXasLTGF7a8hSLbxyZXjz+1KgoB3wDUb6vlszIc=
github.com/distribution/reference v0.6.0 h1:0IXCQ5g4/QMHHkarYzh5l+u8T3t73zM5QvfrDyIgxBk=
github.com/distribution/reference v0.6.0/go.mod h1:BbU0aIcezP1/5jX/8MP0YiH4SdvB5Y4f/wlDRiLyi3E=
github.com/docker/docker v28.5.1+incompatible h1:Bm8DchhSD2J6PsFzxC35TZo4TLGR2PdW/E69rU45NhM=
github.com/docker/docker v28.5.1+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
github.com/docker/go-connections v0.6.0 h1:LlMG9azAe1TqfR7sO+NJttz1gy6KO7VJBh+pMmjSD94=
github.com/docker/go-connections v0.6.0/go.mod h1:AahvXYshr6JgfUJGdDCs2b5EZG/vmaMAntpSFH5BFKE=
github.com/docker/go-units v0.5.0 h1:69rxXcBk27SvSaaxTtLh/8llcHD8vYHT7WSdRZ/jvr4=
github.com/docker/go-units v0.5.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
github.com/dustin/go-humanize v1.0.1 h1:GzkhY7T5VNhEkwH0PVJgjz+fX1rhBrR7pRT3mDkpeCY=
github.com/dustin/go-humanize v1.0.1/go.mod h1:Mu1zIs6XwVuF/gI1OepvI0qD18qycQx+mFykh5fBlto=
github.com/ebitengine/purego v0.8.4 h1:CF7LEKg5FFOsASUj0+QwaXf8Ht6TlFxg09+S9wz0omw=
github.com/ebitengine/purego v0.8.4/go.mod h1:iIjxzd6CiRiOG0UyXP+V1+jWqUXVjPKLAI0mRfJZTmQ=
github.com/eclipse/paho.mqtt.golang v1.5.1 h1:/VSOv3oDLlpqR2Epjn1Q7b2bSTplJIeV2ISgCl2W7nE=
github.com/eclipse/paho.mqtt.golang v1.5.1/go.mod h1:1/yJCneuyOoCOzKSsOTUc0AJfpsItBGWvYpBLimhArU=
github.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg=
github.com/felixge/httpsnoop v1.0.4/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U=
github.com/frankban/quicktest v1.14.6 h1:7Xjx+VpznH+oBnejlPUj8oUpdxnVs4f8XU8WnHkI4W8=
github.com/frankban/quicktest v1.14.6/go.mod h1:4ptaffx2x8+WTWXmUCuVU6aPUX1/Mz7zb5vbUoiM6w0=
github.com/fsnotify/fsnotify v1.9.0 h1:2Ml+OJNzbYCTzsxtv8vKSFD9PbJjmhYF14k/jKC7S9k=
@@ -36,10 +72,13 @@ github.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI=
github.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=
github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=
github.com/go-ole/go-ole v1.2.6 h1:/Fpf6oFPoeFik9ty7siob0G6Ke8QvQEuVcuChpwXzpY=
github.com/go-ole/go-ole v1.2.6/go.mod h1:pprOEPIfldk/42T2oK7lQ4v4JSDwmV0As9GaiUsvbm0=
github.com/go-viper/mapstructure/v2 v2.4.0 h1:EBsztssimR/CONLSZZ04E8qAkxNYq4Qp9LvH92wZUgs=
github.com/go-viper/mapstructure/v2 v2.4.0/go.mod h1:oJDH3BJKyqBA2TXFhDsKDGDTlndYOZ6rGS0BRZIxGhM=
github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek=
github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps=
github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
github.com/google/pprof v0.0.0-20250317173921-a4b03ec1a45e h1:ijClszYn+mADRFY17kjQEVQ1XRhq2/JR1M3sGqeJoxs=
@@ -48,8 +87,12 @@ github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/gorilla/mux v1.8.1 h1:TuBL49tXwgrFYWhqrNgrUNEY92u81SPhu7sTdzQEiWY=
github.com/gorilla/mux v1.8.1/go.mod h1:AKf9I4AEqPTmMytcMc0KkNouC66V3BtZ4qD5fmWSiMQ=
github.com/gorilla/websocket v1.5.3 h1:saDtZ6Pbx/0u+bgYQ3q96pZgCzfhKXGPqt7kZ72aNNg=
github.com/gorilla/websocket v1.5.3/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.2 h1:8Tjv8EJ+pM1xP8mK6egEbD1OgnVTyacbefKhmbLhIhU=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.2/go.mod h1:pkJQ2tZHJ0aFOVEEot6oZmaVEZcRme73eIFmhiVuRWs=
github.com/hashicorp/golang-lru/v2 v2.0.7 h1:a+bsQ5rvGLjzHuww6tVxozPZFVghXaHOwFs4luLUK2k=
github.com/hashicorp/golang-lru/v2 v2.0.7/go.mod h1:QeFd9opnmA6QUJc5vARoKUSoFhyfM2/ZepoAG6RGpeM=
github.com/jackc/pgpassfile v1.0.0 h1:/6Hmqy13Ss2zCq62VdNG8tM1wchn8zjSGOBJ6icpsIM=
github.com/jackc/pgpassfile v1.0.0/go.mod h1:CEx0iS5ambNFdcRtxPj5JhEz+xB6uRky5eyVu/W2HEg=
github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761 h1:iCEnooe7UlwOQYpKFhBabPMi4aNAfoODPEFNiAnClxo=
@@ -58,6 +101,8 @@ github.com/jackc/pgx/v5 v5.6.0 h1:SWJzexBzPL5jb0GEsrPMLIsi/3jOo7RHlzTjcAeDrPY=
github.com/jackc/pgx/v5 v5.6.0/go.mod h1:DNZ/vlrUnhWCoFGxHAG8U2ljioxukquj7utPDgtQdTw=
github.com/jackc/puddle/v2 v2.2.2 h1:PR8nw+E/1w0GLuRFSmiioY6UooMp6KJv0/61nB7icHo=
github.com/jackc/puddle/v2 v2.2.2/go.mod h1:vriiEXHvEE654aYKXXjOvZM39qJ0q+azkZFrfEOc3H4=
github.com/jinzhu/copier v0.3.5 h1:GlvfUwHk62RokgqVNvYsku0TATCF7bAHVwEXoBh3iJg=
github.com/jinzhu/copier v0.3.5/go.mod h1:DfbEm0FYsaqBcKcFuvmOZb218JkPGtvSHsKg8S8hyyg=
github.com/jinzhu/inflection v1.0.0 h1:K317FqzuhWc8YvSVlFMCCUb36O/S9MCKRDI7QkRKD/E=
github.com/jinzhu/inflection v1.0.0/go.mod h1:h+uFLlag+Qp1Va5pdKtLDYj+kHp5pxUVkryuEj+Srlc=
github.com/jinzhu/now v1.1.5 h1:/o9tlHleP7gOFmsnYNz3RGnqzefHA47wQpKrrdTIwXQ=
@@ -71,14 +116,48 @@ github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc=
github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=
github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0 h1:6E+4a0GO5zZEnZ81pIr0yLvtUWk2if982qA3F3QD6H4=
github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0/go.mod h1:zJYVVT2jmtg6P3p1VtQj7WsuWi/y4VnjVBn7F8KPB3I=
github.com/magiconair/properties v1.8.10 h1:s31yESBquKXCV9a/ScB3ESkOjUYYv+X0rg8SYxI99mE=
github.com/magiconair/properties v1.8.10/go.mod h1:Dhd985XPs7jluiymwWYZ0G4Z61jb3vdS329zhj2hYo0=
github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/mattn/go-sqlite3 v1.14.28 h1:ThEiQrnbtumT+QMknw63Befp/ce/nUPgBPMlRFEum7A=
github.com/mattn/go-sqlite3 v1.14.28/go.mod h1:Uh1q+B4BYcTPb+yiD3kU8Ct7aC0hY9fxUwlHK0RXw+Y=
github.com/mattn/go-sqlite3 v1.14.32 h1:JD12Ag3oLy1zQA+BNn74xRgaBbdhbNIDYvQUEuuErjs=
github.com/mattn/go-sqlite3 v1.14.32/go.mod h1:Uh1q+B4BYcTPb+yiD3kU8Ct7aC0hY9fxUwlHK0RXw+Y=
github.com/moby/docker-image-spec v1.3.1 h1:jMKff3w6PgbfSa69GfNg+zN/XLhfXJGnEx3Nl2EsFP0=
github.com/moby/docker-image-spec v1.3.1/go.mod h1:eKmb5VW8vQEh/BAr2yvVNvuiJuY6UIocYsFu/DxxRpo=
github.com/moby/go-archive v0.1.0 h1:Kk/5rdW/g+H8NHdJW2gsXyZ7UnzvJNOy6VKJqueWdcQ=
github.com/moby/go-archive v0.1.0/go.mod h1:G9B+YoujNohJmrIYFBpSd54GTUB4lt9S+xVQvsJyFuo=
github.com/moby/patternmatcher v0.6.0 h1:GmP9lR19aU5GqSSFko+5pRqHi+Ohk1O69aFiKkVGiPk=
github.com/moby/patternmatcher v0.6.0/go.mod h1:hDPoyOpDY7OrrMDLaYoY3hf52gNCR/YOUYxkhApJIxc=
github.com/moby/sys/atomicwriter v0.1.0 h1:kw5D/EqkBwsBFi0ss9v1VG3wIkVhzGvLklJ+w3A14Sw=
github.com/moby/sys/atomicwriter v0.1.0/go.mod h1:Ul8oqv2ZMNHOceF643P6FKPXeCmYtlQMvpizfsSoaWs=
github.com/moby/sys/sequential v0.6.0 h1:qrx7XFUd/5DxtqcoH1h438hF5TmOvzC/lspjy7zgvCU=
github.com/moby/sys/sequential v0.6.0/go.mod h1:uyv8EUTrca5PnDsdMGXhZe6CCe8U/UiTWd+lL+7b/Ko=
github.com/moby/sys/user v0.4.0 h1:jhcMKit7SA80hivmFJcbB1vqmw//wU61Zdui2eQXuMs=
github.com/moby/sys/user v0.4.0/go.mod h1:bG+tYYYJgaMtRKgEmuueC0hJEAZWwtIbZTB+85uoHjs=
github.com/moby/sys/userns v0.1.0 h1:tVLXkFOxVu9A64/yh59slHVv9ahO9UIev4JZusOLG/g=
github.com/moby/sys/userns v0.1.0/go.mod h1:IHUYgu/kao6N8YZlp9Cf444ySSvCmDlmzUcYfDHOl28=
github.com/moby/term v0.5.0 h1:xt8Q1nalod/v7BqbG21f8mQPqH+xAaC9C3N3wfWbVP0=
github.com/moby/term v0.5.0/go.mod h1:8FzsFHVUBGZdbDsJw/ot+X+d5HLUbvklYLJ9uGfcI3Y=
github.com/mochi-mqtt/server/v2 v2.7.9 h1:y0g4vrSLAag7T07l2oCzOa/+nKVLoazKEWAArwqBNYI=
github.com/mochi-mqtt/server/v2 v2.7.9/go.mod h1:lZD3j35AVNqJL5cezlnSkuG05c0FCHSsfAKSPBOSbqc=
github.com/morikuni/aec v1.0.0 h1:nP9CBfwrvYnBRgY6qfDQkygYDmYwOilePFkwzv4dU8A=
github.com/morikuni/aec v1.0.0/go.mod h1:BbKIizmSmc5MMPqRYbxO4ZU0S0+P200+tUnFx7PXmsc=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/ncruces/go-strftime v0.1.9 h1:bY0MQC28UADQmHmaF5dgpLmImcShSi2kHU9XLdhx/f4=
github.com/ncruces/go-strftime v0.1.9/go.mod h1:Fwc5htZGVVkseilnfgOVb9mKy6w1naJmn9CehxcKcls=
github.com/nats-io/nats.go v1.48.0 h1:pSFyXApG+yWU/TgbKCjmm5K4wrHu86231/w84qRVR+U=
github.com/nats-io/nats.go v1.48.0/go.mod h1:iRWIPokVIFbVijxuMQq4y9ttaBTMe0SFdlZfMDd+33g=
github.com/nats-io/nkeys v0.4.11 h1:q44qGV008kYd9W1b1nEBkNzvnWxtRSQ7A8BoqRrcfa0=
github.com/nats-io/nkeys v0.4.11/go.mod h1:szDimtgmfOi9n25JpfIdGw12tZFYXqhGxjhVxsatHVE=
github.com/nats-io/nuid v1.0.1 h1:5iA8DT8V7q8WK2EScv2padNa/rTESc1KdnPw4TC2paw=
github.com/nats-io/nuid v1.0.1/go.mod h1:19wcPz3Ph3q0Jbyiqsd0kePYG7A95tJPxeL+1OSON2c=
github.com/ncruces/go-strftime v1.0.0 h1:HMFp8mLCTPp341M/ZnA4qaf7ZlsbTc+miZjCLOFAw7w=
github.com/ncruces/go-strftime v1.0.0/go.mod h1:Fwc5htZGVVkseilnfgOVb9mKy6w1naJmn9CehxcKcls=
github.com/opencontainers/go-digest v1.0.0 h1:apOUWs51W5PlhuyGyz9FCeeBIOUDA/6nW8Oi/yOhh5U=
github.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3IKzErnv2BNG4W4MAM=
github.com/opencontainers/image-spec v1.1.1 h1:y0fUlFfIZhPF1W537XOLg0/fcx6zcHCJwooC2xJA040=
github.com/opencontainers/image-spec v1.1.1/go.mod h1:qpqAh3Dmcf36wStyyWU+kCeDgrGnAve2nCC8+7h8Q0M=
github.com/pelletier/go-toml/v2 v2.2.4 h1:mye9XuhQ6gvn5h28+VilKrrPoQVanw5PMw/TB0t5Ec4=
github.com/pelletier/go-toml/v2 v2.2.4/go.mod h1:2gIqNv+qfxSVS7cM2xJQKtLSTLUE9V8t9Stt+h56mCY=
github.com/pingcap/errors v0.11.4 h1:lFuQV/oaUMGcD2tqt+01ROSmJs75VG1ToEOkZIZ4nE4=
@@ -87,6 +166,8 @@ github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c h1:ncq/mPwQF4JjgDlrVEn3C11VoGHZN7m8qihwgMEtzYw=
github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c/go.mod h1:OmDBASR4679mdNQnz2pUhc2G8CO2JrUAVFDRBDP/hJE=
github.com/prometheus/client_golang v1.23.2 h1:Je96obch5RDVy3FDMndoUsjAhG5Edi49h0RJWRi/o0o=
github.com/prometheus/client_golang v1.23.2/go.mod h1:Tb1a6LWHB3/SPIzCoaDXI4I8UHKeFTEQ1YCr+0Gyqmg=
github.com/prometheus/client_model v0.6.2 h1:oBsgwpGs7iVziMvrGhE53c/GrLUsZdHnqNwqPLxwZyk=
@@ -103,8 +184,14 @@ github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec h1:W09IVJc94
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec/go.mod h1:qqbHyh8v60DhA7CoWK5oRCqLrMHRGoxYCSS9EjAz6Eo=
github.com/rogpeppe/go-internal v1.13.1 h1:KvO1DLK/DRN07sQ1LQKScxyZJuNnedQ5/wKSR38lUII=
github.com/rogpeppe/go-internal v1.13.1/go.mod h1:uMEvuHeurkdAXX61udpOXGD/AzZDWNMNyH2VO9fmH0o=
github.com/rs/xid v1.4.0 h1:qd7wPTDkN6KQx2VmMBLrpHkiyQwgFXRnkOLacUiaSNY=
github.com/rs/xid v1.4.0/go.mod h1:trrq9SKmegXys3aeAKXMUTdJsYXVwGY3RLcfgqegfbg=
github.com/sagikazarmark/locafero v0.11.0 h1:1iurJgmM9G3PA/I+wWYIOw/5SyBtxapeHDcg+AAIFXc=
github.com/sagikazarmark/locafero v0.11.0/go.mod h1:nVIGvgyzw595SUSUE6tvCp3YYTeHs15MvlmU87WwIik=
github.com/shirou/gopsutil/v4 v4.25.6 h1:kLysI2JsKorfaFPcYmcJqbzROzsBWEOAtw6A7dIfqXs=
github.com/shirou/gopsutil/v4 v4.25.6/go.mod h1:PfybzyydfZcN+JMMjkF6Zb8Mq1A/VcogFFg7hj50W9c=
github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ=
github.com/sirupsen/logrus v1.9.3/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=
github.com/sourcegraph/conc v0.3.1-0.20240121214520-5f936abd7ae8 h1:+jumHNA0Wrelhe64i8F6HNlS8pkoyMv5sreGx2Ry5Rw=
github.com/sourcegraph/conc v0.3.1-0.20240121214520-5f936abd7ae8/go.mod h1:3n1Cwaq1E1/1lhQhtRK2ts/ZwZEhjcQeJQ1RuC6Q/8U=
github.com/spf13/afero v1.15.0 h1:b/YBCLWAJdFWJTN9cLhiXXcD7mzKn9Dm86dNnfyQw1I=
@@ -116,12 +203,16 @@ github.com/spf13/pflag v1.0.10/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3A
github.com/spf13/viper v1.21.0 h1:x5S+0EU27Lbphp4UKm1C+1oQO+rKx36vfCoaVebLFSU=
github.com/spf13/viper v1.21.0/go.mod h1:P0lhsswPGWD/1lZJ9ny3fYnVqxiegrlNrEmgLjbTCAY=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.5.2 h1:xuMeJ0Sdp5ZMRXx/aWO6RZxdr3beISkG5/G/aIRr3pY=
github.com/stretchr/objx v0.5.2/go.mod h1:FRsXN1f5AsAjCGJKqEizvkpNtU+EGNCLh3NxZ/8L+MA=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
github.com/subosito/gotenv v1.6.0 h1:9NlTDc1FTs4qu0DDq7AEtTPNw6SVm7uBMsUCUjABIf8=
github.com/subosito/gotenv v1.6.0/go.mod h1:Dk4QP5c2W3ibzajGcXpNraDfq2IrhjMIvMSWPKKo0FU=
github.com/testcontainers/testcontainers-go v0.40.0 h1:pSdJYLOVgLE8YdUY2FHQ1Fxu+aMnb6JfVz1mxk7OeMU=
github.com/testcontainers/testcontainers-go v0.40.0/go.mod h1:FSXV5KQtX2HAMlm7U3APNyLkkap35zNLxukw9oBi/MY=
github.com/tidwall/gjson v1.14.2/go.mod h1:/wbyibRr2FHMks5tjHJ5F8dMZh3AcwJEMf5vlfC0lxk=
github.com/tidwall/gjson v1.18.0 h1:FIDeeyB800efLX89e5a8Y0BNH+LOngJyGrIWxG2FKQY=
github.com/tidwall/gjson v1.18.0/go.mod h1:/wbyibRr2FHMks5tjHJ5F8dMZh3AcwJEMf5vlfC0lxk=
@@ -131,28 +222,38 @@ github.com/tidwall/pretty v1.2.0 h1:RWIZEg2iJ8/g6fDDYzMpobmaoGh5OLl4AXtGUGPcqCs=
github.com/tidwall/pretty v1.2.0/go.mod h1:ITEVvHYasfjBbM0u2Pg8T2nJnzm8xPwvNhhsoaGGjNU=
github.com/tidwall/sjson v1.2.5 h1:kLy8mja+1c9jlljvWTlSazM7cKDRfJuR/bOJhcY5NcY=
github.com/tidwall/sjson v1.2.5/go.mod h1:Fvgq9kS/6ociJEDnK0Fk1cpYF4FIW6ZF7LAe+6jwd28=
github.com/tklauser/go-sysconf v0.3.12 h1:0QaGUFOdQaIVdPgfITYzaTegZvdCjmYO52cSFAEVmqU=
github.com/tklauser/go-sysconf v0.3.12/go.mod h1:Ho14jnntGE1fpdOqQEEaiKRpvIavV0hSfmBq8nJbHYI=
github.com/tklauser/numcpus v0.6.1 h1:ng9scYS7az0Bk4OZLvrNXNSAO2Pxr1XXRAPyjhIx+Fk=
github.com/tklauser/numcpus v0.6.1/go.mod h1:1XfjsgE2zo8GVw7POkMbHENHzVg3GzmoZ9fESEdAacY=
github.com/tmthrgd/go-hex v0.0.0-20190904060850-447a3041c3bc h1:9lRDQMhESg+zvGYmW5DyG0UqvY96Bu5QYsTLvCHdrgo=
github.com/tmthrgd/go-hex v0.0.0-20190904060850-447a3041c3bc/go.mod h1:bciPuU6GHm1iF1pBvUfxfsH0Wmnc2VbpgvbI9ZWuIRs=
github.com/uptrace/bun v1.2.15 h1:Ut68XRBLDgp9qG9QBMa9ELWaZOmzHNdczHQdrOZbEFE=
github.com/uptrace/bun v1.2.15/go.mod h1:Eghz7NonZMiTX/Z6oKYytJ0oaMEJ/eq3kEV4vSqG038=
github.com/uptrace/bun/dialect/sqlitedialect v1.2.15 h1:7upGMVjFRB1oI78GQw6ruNLblYn5CR+kxqcbbeBBils=
github.com/uptrace/bun/dialect/sqlitedialect v1.2.15/go.mod h1:c7YIDaPNS2CU2uI1p7umFuFWkuKbDcPDDvp+DLHZnkI=
github.com/uptrace/bun/driver/sqliteshim v1.2.15 h1:M/rZJSjOPV4OmfTVnDPtL+wJmdMTqDUn8cuk5ycfABA=
github.com/uptrace/bun/driver/sqliteshim v1.2.15/go.mod h1:YqwxFyvM992XOCpGJtXyKPkgkb+aZpIIMzGbpaw1hIk=
github.com/uptrace/bun/dialect/sqlitedialect v1.2.16 h1:6wVAiYLj1pMibRthGwy4wDLa3D5AQo32Y8rvwPd8CQ0=
github.com/uptrace/bun/dialect/sqlitedialect v1.2.16/go.mod h1:Z7+5qK8CGZkDQiPMu+LSdVuDuR1I5jcwtkB1Pi3F82E=
github.com/uptrace/bun/driver/sqliteshim v1.2.16 h1:M6Dh5kkDWFbUWBrOsIE1g1zdZ5JbSytTD4piFRBOUAI=
github.com/uptrace/bun/driver/sqliteshim v1.2.16/go.mod h1:iKdJ06P3XS+pwKcONjSIK07bbhksH3lWsw3mpfr0+bY=
github.com/uptrace/bunrouter v1.0.23 h1:Bi7NKw3uCQkcA/GUCtDNPq5LE5UdR9pe+UyWbjHB/wU=
github.com/uptrace/bunrouter v1.0.23/go.mod h1:O3jAcl+5qgnF+ejhgkmbceEk0E/mqaK+ADOocdNpY8M=
github.com/vmihailenco/msgpack/v5 v5.4.1 h1:cQriyiUvjTwOHg8QZaPihLWeRAAVoCpE00IUPn0Bjt8=
github.com/vmihailenco/msgpack/v5 v5.4.1/go.mod h1:GaZTsDaehaPpQVyxrf5mtQlH+pc21PIudVV/E3rRQok=
github.com/vmihailenco/tagparser/v2 v2.0.0 h1:y09buUbR+b5aycVFQs/g70pqKVZNBmxwAhO7/IwNM9g=
github.com/vmihailenco/tagparser/v2 v2.0.0/go.mod h1:Wri+At7QHww0WTrCBeu4J6bNtoV6mEfg5OIWRZA9qds=
github.com/warkanum/bun v1.2.17 h1:HP8eTuKSNcqMDhhIPFxEbgV/yct6RR0/c3qHH3PNZUA=
github.com/warkanum/bun v1.2.17/go.mod h1:jMoNg2n56ckaawi/O/J92BHaECmrz6IRjuMWqlMaMTM=
github.com/yusufpapurcu/wmi v1.2.4 h1:zFUKzehAFReQwLys1b/iSMl+JQGSCSjtVqQn9bBrPo0=
github.com/yusufpapurcu/wmi v1.2.4/go.mod h1:SBZ9tNy3G9/m5Oi98Zks0QjeHVDvuK0qfxQmPyzfmi0=
go.opentelemetry.io/auto/sdk v1.1.0 h1:cH53jehLUN6UFLY71z+NDOiNJqDdPRaXzTel0sJySYA=
go.opentelemetry.io/auto/sdk v1.1.0/go.mod h1:3wSPjt5PWp2RhlCcmmOial7AvC4DQqZb7a7wCow3W8A=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.49.0 h1:jq9TW8u3so/bN+JPT166wjOI6/vQPF6Xe7nMNIltagk=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.49.0/go.mod h1:p8pYQP+m5XfbZm9fxtSKAbM6oIllS7s2AfxrChvc7iw=
go.opentelemetry.io/otel v1.38.0 h1:RkfdswUDRimDg0m2Az18RKOsnI8UDzppJAtj01/Ymk8=
go.opentelemetry.io/otel v1.38.0/go.mod h1:zcmtmQ1+YmQM9wrNsTGV/q/uyusom3P8RxwExxkZhjM=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.38.0 h1:GqRJVj7UmLjCVyVJ3ZFLdPRmhDUp2zFmQe3RHIOsw24=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.38.0/go.mod h1:ri3aaHSmCTVYu2AWv44YMauwAQc0aqI9gHKIcSbI1pU=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.38.0 h1:lwI4Dc5leUqENgGuQImwLo4WnuXFPetmPpkLi2IrX54=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.38.0/go.mod h1:Kz/oCE7z5wuyhPxsXDuaPteSWqjSBD5YaSdbxZYGbGk=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.19.0 h1:IeMeyr1aBvBiPVYihXIaeIZba6b8E1bYp7lbdxK8CQg=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.19.0/go.mod h1:oVdCUtjq9MK9BlS7TtucsQwUcXcymNiEDjgDD2jMtZU=
go.opentelemetry.io/otel/metric v1.38.0 h1:Kl6lzIYGAh5M159u9NgiRkmoMKjvbsKtYRwgfrA6WpA=
go.opentelemetry.io/otel/metric v1.38.0/go.mod h1:kB5n/QoRM8YwmUahxvI3bO34eVtQf2i4utNVLr9gEmI=
go.opentelemetry.io/otel/sdk v1.38.0 h1:l48sr5YbNf2hpCUj/FoGhW9yDkl+Ma+LrVl8qaM5b+E=
@@ -173,25 +274,34 @@ go.yaml.in/yaml/v2 v2.4.2 h1:DzmwEr2rDGHl7lsFgAHxmNz/1NlQ7xLIrlN2h5d1eGI=
go.yaml.in/yaml/v2 v2.4.2/go.mod h1:081UH+NErpNdqlCXm3TtEran0rJZGxAYx9hb/ELlsPU=
go.yaml.in/yaml/v3 v3.0.4 h1:tfq32ie2Jv2UxXFdLJdh3jXuOzWiL1fo0bu/FbuKpbc=
go.yaml.in/yaml/v3 v3.0.4/go.mod h1:DhzuOOF2ATzADvBadXxruRBLzYTpT36CKvDb3+aBEFg=
golang.org/x/crypto v0.41.0 h1:WKYxWedPGCTVVl5+WHSSrOBT0O8lx32+zxmHxijgXp4=
golang.org/x/crypto v0.41.0/go.mod h1:pO5AFd7FA68rFak7rOAGVuygIISepHftHnr8dr6+sUc=
golang.org/x/exp v0.0.0-20250711185948-6ae5c78190dc h1:TS73t7x3KarrNd5qAipmspBDS1rkMcgVG/fS1aRb4Rc=
golang.org/x/exp v0.0.0-20250711185948-6ae5c78190dc/go.mod h1:A+z0yzpGtvnG90cToK5n2tu8UJVP2XUATh+r+sfOOOc=
golang.org/x/mod v0.26.0 h1:EGMPT//Ezu+ylkCijjPc+f4Aih7sZvaAr+O3EHBxvZg=
golang.org/x/mod v0.26.0/go.mod h1:/j6NAhSk8iQ723BGAUyoAcn7SlD7s15Dp9Nd/SfeaFQ=
golang.org/x/net v0.43.0 h1:lat02VYK2j4aLzMzecihNvTlJNQUq316m2Mr9rnM6YE=
golang.org/x/net v0.43.0/go.mod h1:vhO1fvI4dGsIjh73sWfUVjj3N7CA9WkKJNQm2svM6Jg=
golang.org/x/sync v0.16.0 h1:ycBJEhp9p4vXvUZNszeOq0kGTPghopOL8q0fq3vstxw=
golang.org/x/sync v0.16.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
golang.org/x/crypto v0.43.0 h1:dduJYIi3A3KOfdGOHX8AVZ/jGiyPa3IbBozJ5kNuE04=
golang.org/x/crypto v0.43.0/go.mod h1:BFbav4mRNlXJL4wNeejLpWxB7wMbc79PdRGhWKncxR0=
golang.org/x/exp v0.0.0-20251113190631-e25ba8c21ef6 h1:zfMcR1Cs4KNuomFFgGefv5N0czO2XZpUbxGUy8i8ug0=
golang.org/x/exp v0.0.0-20251113190631-e25ba8c21ef6/go.mod h1:46edojNIoXTNOhySWIWdix628clX9ODXwPsQuG6hsK0=
golang.org/x/mod v0.30.0 h1:fDEXFVZ/fmCKProc/yAXXUijritrDzahmwwefnjoPFk=
golang.org/x/mod v0.30.0/go.mod h1:lAsf5O2EvJeSFMiBxXDki7sCgAxEUcZHXoXMKT4GJKc=
golang.org/x/net v0.45.0 h1:RLBg5JKixCy82FtLJpeNlVM0nrSqpCRYzVU1n8kj0tM=
golang.org/x/net v0.45.0/go.mod h1:ECOoLqd5U3Lhyeyo/QDCEVQ4sNgYsqvCZ722XogGieY=
golang.org/x/sync v0.18.0 h1:kr88TuHDroi+UVf+0hZnirlk8o8T+4MrK6mr60WkH/I=
golang.org/x/sync v0.18.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201204225414-ed752295db88/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210616094352-59db8d763f22/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.35.0 h1:vz1N37gP5bs89s7He8XuIYXpyY0+QlsKmzipCbUtyxI=
golang.org/x/sys v0.35.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
golang.org/x/text v0.28.0 h1:rhazDwis8INMIwQ4tpjLDzUhx6RlXqZNPEM0huQojng=
golang.org/x/text v0.28.0/go.mod h1:U8nCwOR8jO/marOQ0QbDiOngZVEBB7MAiitBuMjXiNU=
golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.11.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.38.0 h1:3yZWxaJjBmCWXqhN1qh02AkOnCQ1poK6oF+a7xWL6Gc=
golang.org/x/sys v0.38.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/term v0.36.0 h1:zMPR+aF8gfksFprF/Nc/rd1wRS1EI6nDBGyWAvDzx2Q=
golang.org/x/term v0.36.0/go.mod h1:Qu394IJq6V6dCBRgwqshf3mPF85AqzYEzofzRdZkWss=
golang.org/x/text v0.30.0 h1:yznKA/E9zq54KzlzBEAWn1NXSQ8DIp/NYMy88xJjl4k=
golang.org/x/text v0.30.0/go.mod h1:yDdHFIX9t+tORqspjENWgzaCVXgk0yYnYuSZ8UzzBVM=
golang.org/x/time v0.14.0 h1:MRx4UaLrDotUKUdCIqzPC48t1Y9hANFKIRpNx+Te8PI=
golang.org/x/time v0.14.0/go.mod h1:eL/Oa2bBBK0TkX57Fyni+NgnyQQN4LitPmob2Hjnqw4=
golang.org/x/tools v0.35.0 h1:mBffYraMEf7aa0sB+NuKnuCy8qI/9Bughn8dC2Gu5r0=
golang.org/x/tools v0.35.0/go.mod h1:NKdj5HkL/73byiZSJjqJgKn3ep7KjFkBOkR/Hps3VPw=
golang.org/x/tools v0.39.0 h1:ik4ho21kwuQln40uelmciQPp9SipgNDdrafrYA4TmQQ=
golang.org/x/tools v0.39.0/go.mod h1:JnefbkDPyD8UU2kI5fuf8ZX4/yUeh9W877ZeBONxUqQ=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
gonum.org/v1/gonum v0.16.0 h1:5+ul4Swaf3ESvrOnidPp4GZbzf0mxVQpDCYUQE7OJfk=
gonum.org/v1/gonum v0.16.0/go.mod h1:fef3am4MQ93R2HHpKnLk4/Tbh/s0+wqD5nfa6Pnwy4E=
google.golang.org/genproto/googleapis/api v0.0.0-20250825161204-c5933d9347a5 h1:BIRfGDEjiHRrk0QKZe3Xv2ieMhtgRGeLcZQ0mIVn4EY=
@@ -210,20 +320,26 @@ gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gorm.io/driver/postgres v1.6.0 h1:2dxzU8xJ+ivvqTRph34QX+WrRaJlmfyPqXmoGVjMBa4=
gorm.io/driver/postgres v1.6.0/go.mod h1:vUw0mrGgrTK+uPHEhAdV4sfFELrByKVGnaVRkXDhtWo=
gorm.io/gorm v1.25.12 h1:I0u8i2hWQItBq1WfE0o2+WuL9+8L21K9e2HHSTE/0f8=
gorm.io/gorm v1.25.12/go.mod h1:xh7N7RHfYlNc5EmcI/El95gXusucDrQnHXe0+CgWcLQ=
modernc.org/cc/v4 v4.26.2 h1:991HMkLjJzYBIfha6ECZdjrIYz2/1ayr+FL8GN+CNzM=
modernc.org/cc/v4 v4.26.2/go.mod h1:uVtb5OGqUKpoLWhqwNQo/8LwvoiEBLvZXIQ/SmO6mL0=
modernc.org/ccgo/v4 v4.28.0 h1:rjznn6WWehKq7dG4JtLRKxb52Ecv8OUGah8+Z/SfpNU=
modernc.org/ccgo/v4 v4.28.0/go.mod h1:JygV3+9AV6SmPhDasu4JgquwU81XAKLd3OKTUDNOiKE=
modernc.org/fileutil v1.3.8 h1:qtzNm7ED75pd1C7WgAGcK4edm4fvhtBsEiI/0NQ54YM=
modernc.org/fileutil v1.3.8/go.mod h1:HxmghZSZVAz/LXcMNwZPA/DRrQZEVP9VX0V4LQGQFOc=
gorm.io/driver/sqlite v1.6.0 h1:WHRRrIiulaPiPFmDcod6prc4l2VGVWHz80KspNsxSfQ=
gorm.io/driver/sqlite v1.6.0/go.mod h1:AO9V1qIQddBESngQUKWL9yoH93HIeA1X6V633rBwyT8=
gorm.io/gorm v1.30.0 h1:qbT5aPv1UH8gI99OsRlvDToLxW5zR7FzS9acZDOZcgs=
gorm.io/gorm v1.30.0/go.mod h1:8Z33v652h4//uMA76KjeDH8mJXPm1QNCYrMeatR0DOE=
gotest.tools/v3 v3.5.2 h1:7koQfIKdy+I8UTetycgUqXWSDwpgv193Ka+qRsmBY8Q=
gotest.tools/v3 v3.5.2/go.mod h1:LtdLGcnqToBH83WByAAi/wiwSFCArdFIUV/xxN4pcjA=
modernc.org/cc/v4 v4.27.1 h1:9W30zRlYrefrDV2JE2O8VDtJ1yPGownxciz5rrbQZis=
modernc.org/cc/v4 v4.27.1/go.mod h1:uVtb5OGqUKpoLWhqwNQo/8LwvoiEBLvZXIQ/SmO6mL0=
modernc.org/ccgo/v4 v4.30.1 h1:4r4U1J6Fhj98NKfSjnPUN7Ze2c6MnAdL0hWw6+LrJpc=
modernc.org/ccgo/v4 v4.30.1/go.mod h1:bIOeI1JL54Utlxn+LwrFyjCx2n2RDiYEaJVSrgdrRfM=
modernc.org/fileutil v1.3.40 h1:ZGMswMNc9JOCrcrakF1HrvmergNLAmxOPjizirpfqBA=
modernc.org/fileutil v1.3.40/go.mod h1:HxmghZSZVAz/LXcMNwZPA/DRrQZEVP9VX0V4LQGQFOc=
modernc.org/gc/v2 v2.6.5 h1:nyqdV8q46KvTpZlsw66kWqwXRHdjIlJOhG6kxiV/9xI=
modernc.org/gc/v2 v2.6.5/go.mod h1:YgIahr1ypgfe7chRuJi2gD7DBQiKSLMPgBQe9oIiito=
modernc.org/gc/v3 v3.1.1 h1:k8T3gkXWY9sEiytKhcgyiZ2L0DTyCQ/nvX+LoCljoRE=
modernc.org/gc/v3 v3.1.1/go.mod h1:HFK/6AGESC7Ex+EZJhJ2Gni6cTaYpSMmU/cT9RmlfYY=
modernc.org/goabi0 v0.2.0 h1:HvEowk7LxcPd0eq6mVOAEMai46V+i7Jrj13t4AzuNks=
modernc.org/goabi0 v0.2.0/go.mod h1:CEFRnnJhKvWT1c1JTI3Avm+tgOWbkOu5oPA8eH8LnMI=
modernc.org/libc v1.66.3 h1:cfCbjTUcdsKyyZZfEUKfoHcP3S0Wkvz3jgSzByEWVCQ=
modernc.org/libc v1.66.3/go.mod h1:XD9zO8kt59cANKvHPXpx7yS2ELPheAey0vjIuZOhOU8=
modernc.org/libc v1.67.0 h1:QzL4IrKab2OFmxA3/vRYl0tLXrIamwrhD6CKD4WBVjQ=
modernc.org/libc v1.67.0/go.mod h1:QvvnnJ5P7aitu0ReNpVIEyesuhmDLQ8kaEoyMjIFZJA=
modernc.org/mathutil v1.7.1 h1:GCZVGXdaN8gTqB1Mf/usp1Y/hSqgI2vAGGP4jZMCxOU=
modernc.org/mathutil v1.7.1/go.mod h1:4p5IwJITfppl0G4sUEDtCr4DthTaT47/N3aT6MhfgJg=
modernc.org/memory v1.11.0 h1:o4QC8aMQzmcwCK3t3Ux/ZHmwFPzE6hf2Y5LbkRs+hbI=
@@ -232,8 +348,8 @@ modernc.org/opt v0.1.4 h1:2kNGMRiUjrp4LcaPuLY2PzUfqM/w9N23quVwhKt5Qm8=
modernc.org/opt v0.1.4/go.mod h1:03fq9lsNfvkYSfxrfUhZCWPk1lm4cq4N+Bh//bEtgns=
modernc.org/sortutil v1.2.1 h1:+xyoGf15mM3NMlPDnFqrteY07klSFxLElE2PVuWIJ7w=
modernc.org/sortutil v1.2.1/go.mod h1:7ZI3a3REbai7gzCLcotuw9AC4VZVpYMjDzETGsSMqJE=
modernc.org/sqlite v1.38.0 h1:+4OrfPQ8pxHKuWG4md1JpR/EYAh3Md7TdejuuzE7EUI=
modernc.org/sqlite v1.38.0/go.mod h1:1Bj+yES4SVvBZ4cBOpVZ6QgesMCKpJZDq0nxYzOpmNE=
modernc.org/sqlite v1.40.1 h1:VfuXcxcUWWKRBuP8+BR9L7VnmusMgBNNnBYGEe9w/iY=
modernc.org/sqlite v1.40.1/go.mod h1:9fjQZ0mB1LLP0GYrp39oOJXx/I2sxEnZtzCmEQIKvGE=
modernc.org/strutil v1.2.1 h1:UneZBkQA+DX2Rp35KcM69cSsNES9ly8mQWD71HKlOA0=
modernc.org/strutil v1.2.1/go.mod h1:EHkiggD70koQxjVdSBM3JKM7k6L0FbGE5eymy9i3B9A=
modernc.org/token v1.1.0 h1:Xl7Ap9dKaEs5kLoOQeQmPWevfnk/DM5qcLcYlA8ys6Y=

View File

@@ -57,11 +57,31 @@ func (c *Cache) SetBytes(ctx context.Context, key string, value []byte, ttl time
return c.provider.Set(ctx, key, value, ttl)
}
// SetWithTags serializes and stores a value in the cache with the specified TTL and tags.
func (c *Cache) SetWithTags(ctx context.Context, key string, value interface{}, ttl time.Duration, tags []string) error {
data, err := json.Marshal(value)
if err != nil {
return fmt.Errorf("failed to serialize: %w", err)
}
return c.provider.SetWithTags(ctx, key, data, ttl, tags)
}
// SetBytesWithTags stores raw bytes in the cache with the specified TTL and tags.
func (c *Cache) SetBytesWithTags(ctx context.Context, key string, value []byte, ttl time.Duration, tags []string) error {
return c.provider.SetWithTags(ctx, key, value, ttl, tags)
}
// Delete removes a key from the cache.
func (c *Cache) Delete(ctx context.Context, key string) error {
return c.provider.Delete(ctx, key)
}
// DeleteByTag removes all keys associated with the given tag.
func (c *Cache) DeleteByTag(ctx context.Context, tag string) error {
return c.provider.DeleteByTag(ctx, tag)
}
// DeleteByPattern removes all keys matching the pattern.
func (c *Cache) DeleteByPattern(ctx context.Context, pattern string) error {
return c.provider.DeleteByPattern(ctx, pattern)

View File

@@ -15,9 +15,17 @@ type Provider interface {
// If ttl is 0, the item never expires.
Set(ctx context.Context, key string, value []byte, ttl time.Duration) error
// SetWithTags stores a value in the cache with the specified TTL and tags.
// Tags can be used to invalidate groups of related keys.
// If ttl is 0, the item never expires.
SetWithTags(ctx context.Context, key string, value []byte, ttl time.Duration, tags []string) error
// Delete removes a key from the cache.
Delete(ctx context.Context, key string) error
// DeleteByTag removes all keys associated with the given tag.
DeleteByTag(ctx context.Context, tag string) error
// DeleteByPattern removes all keys matching the pattern.
// Pattern syntax depends on the provider implementation.
DeleteByPattern(ctx context.Context, pattern string) error
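The interface above is what the Cache wrapper delegates to. As a usage illustration only, here is a minimal sketch of tag-based invalidation against the in-memory provider; the import path, keys, and option values are assumptions, while NewMemoryProvider and Options.DefaultTTL appear in the memory-provider diff further down:

```go
package main

import (
	"context"
	"fmt"
	"time"

	// Assumed import path based on the repository name; adjust as needed.
	"github.com/bitechdev/ResolveSpec/pkg/cache"
)

func main() {
	ctx := context.Background()

	// The keys, tag names, and payloads here are purely illustrative.
	provider := cache.NewMemoryProvider(&cache.Options{DefaultTTL: 5 * time.Minute})

	// Store two records and associate both with the "users" tag.
	_ = provider.SetWithTags(ctx, "user:1", []byte(`{"id":1}`), 0, []string{"users"})
	_ = provider.SetWithTags(ctx, "user:2", []byte(`{"id":2}`), 0, []string{"users"})

	// After a create/delete through the API, invalidate every key tagged "users".
	if err := provider.DeleteByTag(ctx, "users"); err != nil {
		fmt.Println("tag invalidation failed:", err)
	}
}
```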

View File

@@ -2,6 +2,7 @@ package cache
import (
"context"
"encoding/json"
"fmt"
"time"
@@ -97,8 +98,115 @@ func (m *MemcacheProvider) Set(ctx context.Context, key string, value []byte, tt
return m.client.Set(item)
}
// SetWithTags stores a value in the cache with the specified TTL and tags.
// Note: Tag support in Memcache is limited and less efficient than Redis.
func (m *MemcacheProvider) SetWithTags(ctx context.Context, key string, value []byte, ttl time.Duration, tags []string) error {
if ttl == 0 {
ttl = m.options.DefaultTTL
}
expiration := int32(ttl.Seconds())
// Set the main value
item := &memcache.Item{
Key: key,
Value: value,
Expiration: expiration,
}
if err := m.client.Set(item); err != nil {
return err
}
// Store tags for this key
if len(tags) > 0 {
tagsData, err := json.Marshal(tags)
if err != nil {
return fmt.Errorf("failed to marshal tags: %w", err)
}
tagsItem := &memcache.Item{
Key: fmt.Sprintf("cache:tags:%s", key),
Value: tagsData,
Expiration: expiration,
}
if err := m.client.Set(tagsItem); err != nil {
return err
}
// Add key to each tag's key list
for _, tag := range tags {
tagKey := fmt.Sprintf("cache:tag:%s", tag)
// Get existing keys for this tag
var keys []string
if item, err := m.client.Get(tagKey); err == nil {
_ = json.Unmarshal(item.Value, &keys)
}
// Add current key if not already present
found := false
for _, k := range keys {
if k == key {
found = true
break
}
}
if !found {
keys = append(keys, key)
}
// Store updated key list
keysData, err := json.Marshal(keys)
if err != nil {
continue
}
tagItem := &memcache.Item{
Key: tagKey,
Value: keysData,
Expiration: expiration + 3600, // Give tag lists longer TTL
}
_ = m.client.Set(tagItem)
}
}
return nil
}
// Delete removes a key from the cache.
func (m *MemcacheProvider) Delete(ctx context.Context, key string) error {
// Get tags for this key
tagsKey := fmt.Sprintf("cache:tags:%s", key)
if item, err := m.client.Get(tagsKey); err == nil {
var tags []string
if err := json.Unmarshal(item.Value, &tags); err == nil {
// Remove key from each tag's key list
for _, tag := range tags {
tagKey := fmt.Sprintf("cache:tag:%s", tag)
if tagItem, err := m.client.Get(tagKey); err == nil {
var keys []string
if err := json.Unmarshal(tagItem.Value, &keys); err == nil {
// Remove current key from the list
newKeys := make([]string, 0, len(keys))
for _, k := range keys {
if k != key {
newKeys = append(newKeys, k)
}
}
// Update the tag's key list
if keysData, err := json.Marshal(newKeys); err == nil {
tagItem.Value = keysData
_ = m.client.Set(tagItem)
}
}
}
}
}
// Delete the tags key
_ = m.client.Delete(tagsKey)
}
// Delete the actual key
err := m.client.Delete(key)
if err == memcache.ErrCacheMiss {
return nil
@@ -106,6 +214,38 @@ func (m *MemcacheProvider) Delete(ctx context.Context, key string) error {
return err
}
// DeleteByTag removes all keys associated with the given tag.
func (m *MemcacheProvider) DeleteByTag(ctx context.Context, tag string) error {
tagKey := fmt.Sprintf("cache:tag:%s", tag)
// Get all keys associated with this tag
item, err := m.client.Get(tagKey)
if err == memcache.ErrCacheMiss {
return nil
}
if err != nil {
return err
}
var keys []string
if err := json.Unmarshal(item.Value, &keys); err != nil {
return fmt.Errorf("failed to unmarshal tag keys: %w", err)
}
// Delete all keys
for _, key := range keys {
_ = m.client.Delete(key)
// Also delete the tags key for this cache key
tagsKey := fmt.Sprintf("cache:tags:%s", key)
_ = m.client.Delete(tagsKey)
}
// Delete the tag key itself
_ = m.client.Delete(tagKey)
return nil
}
// DeleteByPattern removes all keys matching the pattern.
// Note: Memcache does not support pattern-based deletion natively.
// This is a no-op for memcache and returns an error.

View File

@@ -15,6 +15,7 @@ type memoryItem struct {
Expiration time.Time
LastAccess time.Time
HitCount int64
Tags []string
}
// isExpired checks if the item has expired.
@@ -27,11 +28,12 @@ func (m *memoryItem) isExpired() bool {
// MemoryProvider is an in-memory implementation of the Provider interface.
type MemoryProvider struct {
mu sync.RWMutex
items map[string]*memoryItem
options *Options
hits atomic.Int64
misses atomic.Int64
mu sync.RWMutex
items map[string]*memoryItem
tagToKeys map[string]map[string]struct{} // tag -> set of keys
options *Options
hits atomic.Int64
misses atomic.Int64
}
// NewMemoryProvider creates a new in-memory cache provider.
@@ -44,8 +46,9 @@ func NewMemoryProvider(opts *Options) *MemoryProvider {
}
return &MemoryProvider{
items: make(map[string]*memoryItem),
options: opts,
items: make(map[string]*memoryItem),
tagToKeys: make(map[string]map[string]struct{}),
options: opts,
}
}
@@ -114,15 +117,116 @@ func (m *MemoryProvider) Set(ctx context.Context, key string, value []byte, ttl
return nil
}
// SetWithTags stores a value in the cache with the specified TTL and tags.
func (m *MemoryProvider) SetWithTags(ctx context.Context, key string, value []byte, ttl time.Duration, tags []string) error {
m.mu.Lock()
defer m.mu.Unlock()
if ttl == 0 {
ttl = m.options.DefaultTTL
}
var expiration time.Time
if ttl > 0 {
expiration = time.Now().Add(ttl)
}
// Check max size and evict if necessary
if m.options.MaxSize > 0 && len(m.items) >= m.options.MaxSize {
if _, exists := m.items[key]; !exists {
m.evictOne()
}
}
// Remove old tag associations if key exists
if oldItem, exists := m.items[key]; exists {
for _, tag := range oldItem.Tags {
if keySet, ok := m.tagToKeys[tag]; ok {
delete(keySet, key)
if len(keySet) == 0 {
delete(m.tagToKeys, tag)
}
}
}
}
// Store the item
m.items[key] = &memoryItem{
Value: value,
Expiration: expiration,
LastAccess: time.Now(),
Tags: tags,
}
// Add new tag associations
for _, tag := range tags {
if m.tagToKeys[tag] == nil {
m.tagToKeys[tag] = make(map[string]struct{})
}
m.tagToKeys[tag][key] = struct{}{}
}
return nil
}
// Delete removes a key from the cache.
func (m *MemoryProvider) Delete(ctx context.Context, key string) error {
m.mu.Lock()
defer m.mu.Unlock()
// Remove tag associations
if item, exists := m.items[key]; exists {
for _, tag := range item.Tags {
if keySet, ok := m.tagToKeys[tag]; ok {
delete(keySet, key)
if len(keySet) == 0 {
delete(m.tagToKeys, tag)
}
}
}
}
delete(m.items, key)
return nil
}
// DeleteByTag removes all keys associated with the given tag.
func (m *MemoryProvider) DeleteByTag(ctx context.Context, tag string) error {
m.mu.Lock()
defer m.mu.Unlock()
// Get all keys associated with this tag
keySet, exists := m.tagToKeys[tag]
if !exists {
return nil // No keys with this tag
}
// Delete all items with this tag
for key := range keySet {
if item, ok := m.items[key]; ok {
// Remove this tag from the item's tag list
newTags := make([]string, 0, len(item.Tags))
for _, t := range item.Tags {
if t != tag {
newTags = append(newTags, t)
}
}
// If item has no more tags, delete it
// Otherwise update its tags
if len(newTags) == 0 {
delete(m.items, key)
} else {
item.Tags = newTags
}
}
}
// Remove the tag mapping
delete(m.tagToKeys, tag)
return nil
}
// DeleteByPattern removes all keys matching the pattern.
func (m *MemoryProvider) DeleteByPattern(ctx context.Context, pattern string) error {
m.mu.Lock()

View File

@@ -103,9 +103,93 @@ func (r *RedisProvider) Set(ctx context.Context, key string, value []byte, ttl t
return r.client.Set(ctx, key, value, ttl).Err()
}
// SetWithTags stores a value in the cache with the specified TTL and tags.
func (r *RedisProvider) SetWithTags(ctx context.Context, key string, value []byte, ttl time.Duration, tags []string) error {
if ttl == 0 {
ttl = r.options.DefaultTTL
}
pipe := r.client.Pipeline()
// Set the value
pipe.Set(ctx, key, value, ttl)
// Add key to each tag's set
for _, tag := range tags {
tagKey := fmt.Sprintf("cache:tag:%s", tag)
pipe.SAdd(ctx, tagKey, key)
// Set expiration on tag set (longer than cache items to ensure cleanup)
if ttl > 0 {
pipe.Expire(ctx, tagKey, ttl+time.Hour)
}
}
// Store tags for this key for later cleanup
if len(tags) > 0 {
tagsKey := fmt.Sprintf("cache:tags:%s", key)
pipe.SAdd(ctx, tagsKey, tags)
if ttl > 0 {
pipe.Expire(ctx, tagsKey, ttl)
}
}
_, err := pipe.Exec(ctx)
return err
}
// Delete removes a key from the cache.
func (r *RedisProvider) Delete(ctx context.Context, key string) error {
return r.client.Del(ctx, key).Err()
pipe := r.client.Pipeline()
// Get tags for this key
tagsKey := fmt.Sprintf("cache:tags:%s", key)
tags, err := r.client.SMembers(ctx, tagsKey).Result()
if err == nil && len(tags) > 0 {
// Remove key from each tag set
for _, tag := range tags {
tagKey := fmt.Sprintf("cache:tag:%s", tag)
pipe.SRem(ctx, tagKey, key)
}
// Delete the tags key
pipe.Del(ctx, tagsKey)
}
// Delete the actual key
pipe.Del(ctx, key)
_, err = pipe.Exec(ctx)
return err
}
// DeleteByTag removes all keys associated with the given tag.
func (r *RedisProvider) DeleteByTag(ctx context.Context, tag string) error {
tagKey := fmt.Sprintf("cache:tag:%s", tag)
// Get all keys associated with this tag
keys, err := r.client.SMembers(ctx, tagKey).Result()
if err != nil {
return err
}
if len(keys) == 0 {
return nil
}
pipe := r.client.Pipeline()
// Delete all keys and their tag associations
for _, key := range keys {
pipe.Del(ctx, key)
// Also delete the tags key for this cache key
tagsKey := fmt.Sprintf("cache:tags:%s", key)
pipe.Del(ctx, tagsKey)
}
// Delete the tag set itself
pipe.Del(ctx, tagKey)
_, err = pipe.Exec(ctx)
return err
}
// DeleteByPattern removes all keys matching the pattern.

View File

@@ -1,151 +0,0 @@
package cache
import (
"context"
"testing"
"time"
"github.com/bitechdev/ResolveSpec/pkg/common"
)
func TestBuildQueryCacheKey(t *testing.T) {
filters := []common.FilterOption{
{Column: "name", Operator: "eq", Value: "test"},
{Column: "age", Operator: "gt", Value: 25},
}
sorts := []common.SortOption{
{Column: "name", Direction: "asc"},
}
// Generate cache key
key1 := BuildQueryCacheKey("users", filters, sorts, "status = 'active'", "")
// Same parameters should generate same key
key2 := BuildQueryCacheKey("users", filters, sorts, "status = 'active'", "")
if key1 != key2 {
t.Errorf("Expected same cache keys for identical parameters, got %s and %s", key1, key2)
}
// Different parameters should generate different key
key3 := BuildQueryCacheKey("users", filters, sorts, "status = 'inactive'", "")
if key1 == key3 {
t.Errorf("Expected different cache keys for different parameters, got %s and %s", key1, key3)
}
}
func TestBuildExtendedQueryCacheKey(t *testing.T) {
filters := []common.FilterOption{
{Column: "name", Operator: "eq", Value: "test"},
}
sorts := []common.SortOption{
{Column: "name", Direction: "asc"},
}
expandOpts := []interface{}{
map[string]interface{}{
"relation": "posts",
"where": "status = 'published'",
},
}
// Generate cache key
key1 := BuildExtendedQueryCacheKey("users", filters, sorts, "", "", expandOpts, false, "", "")
// Same parameters should generate same key
key2 := BuildExtendedQueryCacheKey("users", filters, sorts, "", "", expandOpts, false, "", "")
if key1 != key2 {
t.Errorf("Expected same cache keys for identical parameters")
}
// Different distinct value should generate different key
key3 := BuildExtendedQueryCacheKey("users", filters, sorts, "", "", expandOpts, true, "", "")
if key1 == key3 {
t.Errorf("Expected different cache keys for different distinct values")
}
}
func TestGetQueryTotalCacheKey(t *testing.T) {
hash := "abc123"
key := GetQueryTotalCacheKey(hash)
expected := "query_total:abc123"
if key != expected {
t.Errorf("Expected %s, got %s", expected, key)
}
}
func TestCachedTotalIntegration(t *testing.T) {
// Initialize cache with memory provider for testing
UseMemory(&Options{
DefaultTTL: 1 * time.Minute,
MaxSize: 100,
})
ctx := context.Background()
// Create test data
filters := []common.FilterOption{
{Column: "status", Operator: "eq", Value: "active"},
}
sorts := []common.SortOption{
{Column: "created_at", Direction: "desc"},
}
// Build cache key
cacheKeyHash := BuildQueryCacheKey("test_table", filters, sorts, "", "")
cacheKey := GetQueryTotalCacheKey(cacheKeyHash)
// Store a total count in cache
totalToCache := CachedTotal{Total: 42}
err := GetDefaultCache().Set(ctx, cacheKey, totalToCache, time.Minute)
if err != nil {
t.Fatalf("Failed to set cache: %v", err)
}
// Retrieve from cache
var cachedTotal CachedTotal
err = GetDefaultCache().Get(ctx, cacheKey, &cachedTotal)
if err != nil {
t.Fatalf("Failed to get from cache: %v", err)
}
if cachedTotal.Total != 42 {
t.Errorf("Expected total 42, got %d", cachedTotal.Total)
}
// Test cache miss
nonExistentKey := GetQueryTotalCacheKey("nonexistent")
var missedTotal CachedTotal
err = GetDefaultCache().Get(ctx, nonExistentKey, &missedTotal)
if err == nil {
t.Errorf("Expected error for cache miss, got nil")
}
}
func TestHashString(t *testing.T) {
input1 := "test string"
input2 := "test string"
input3 := "different string"
hash1 := hashString(input1)
hash2 := hashString(input2)
hash3 := hashString(input3)
// Same input should produce same hash
if hash1 != hash2 {
t.Errorf("Expected same hash for identical inputs")
}
// Different input should produce different hash
if hash1 == hash3 {
t.Errorf("Expected different hash for different inputs")
}
// Hash should be hex encoded SHA256 (64 characters)
if len(hash1) != 64 {
t.Errorf("Expected hash length of 64, got %d", len(hash1))
}
}

View File

@@ -208,21 +208,9 @@ func SanitizeWhereClause(where string, tableName string, options ...*RequestOpti
}
}
}
} else if tableName != "" && !hasTablePrefix(condToCheck) {
// If tableName is provided and the condition DOESN'T have a table prefix,
// qualify unambiguous column references to prevent "ambiguous column" errors
// when there are multiple joins on the same table (e.g., recursive preloads)
columnName := extractUnqualifiedColumnName(condToCheck)
if columnName != "" && (validColumns == nil || isValidColumn(columnName, validColumns)) {
// Qualify the column with the table name
// Be careful to only replace the column name, not other occurrences of the string
oldRef := columnName
newRef := tableName + "." + columnName
// Use word boundary matching to avoid replacing partial matches
cond = qualifyColumnInCondition(cond, oldRef, newRef)
logger.Debug("Qualified unqualified column in condition: '%s' added table prefix '%s'", oldRef, tableName)
}
}
// Note: We no longer add prefixes to unqualified columns here.
// Use AddTablePrefixToColumns() separately if you need to add prefixes.
validConditions = append(validConditions, cond)
}
@@ -246,35 +234,52 @@ func stripOuterParentheses(s string) string {
s = strings.TrimSpace(s)
for {
if len(s) < 2 || s[0] != '(' || s[len(s)-1] != ')' {
stripped, wasStripped := stripOneMatchingOuterParen(s)
if !wasStripped {
return s
}
s = stripped
}
}
// Check if these parentheses match (i.e., they're the outermost pair)
depth := 0
matched := false
for i := 0; i < len(s); i++ {
switch s[i] {
case '(':
depth++
case ')':
depth--
if depth == 0 && i == len(s)-1 {
matched = true
} else if depth == 0 {
// Found a closing paren before the end, so outer parens don't match
return s
}
// stripOneOuterParentheses removes only one level of matching outer parentheses from a string
// Unlike stripOuterParentheses, this only strips once, preserving nested parentheses
func stripOneOuterParentheses(s string) string {
stripped, _ := stripOneMatchingOuterParen(strings.TrimSpace(s))
return stripped
}
// stripOneMatchingOuterParen is a helper that strips one matching pair of outer parentheses
// Returns the stripped string and a boolean indicating if stripping occurred
func stripOneMatchingOuterParen(s string) (string, bool) {
if len(s) < 2 || s[0] != '(' || s[len(s)-1] != ')' {
return s, false
}
// Check if these parentheses match (i.e., they're the outermost pair)
depth := 0
matched := false
for i := 0; i < len(s); i++ {
switch s[i] {
case '(':
depth++
case ')':
depth--
if depth == 0 && i == len(s)-1 {
matched = true
} else if depth == 0 {
// Found a closing paren before the end, so outer parens don't match
return s, false
}
}
if !matched {
return s
}
// Strip the outer parentheses and continue
s = strings.TrimSpace(s[1 : len(s)-1])
}
if !matched {
return s, false
}
// Strip the outer parentheses
return strings.TrimSpace(s[1 : len(s)-1]), true
}
// splitByAND splits a WHERE clause by AND operators (case-insensitive)
@@ -498,9 +503,10 @@ func extractTableAndColumn(cond string) (table string, column string) {
return "", ""
}
// extractUnqualifiedColumnName extracts the column name from an unqualified condition
// Unused: extractUnqualifiedColumnName extracts the column name from an unqualified condition
// For example: "rid_parentmastertaskitem is null" returns "rid_parentmastertaskitem"
// "status = 'active'" returns "status"
// nolint:unused
func extractUnqualifiedColumnName(cond string) string {
// Common SQL operators
operators := []string{" = ", " != ", " <> ", " > ", " >= ", " < ", " <= ", " LIKE ", " like ", " IN ", " in ", " IS ", " is ", " NOT ", " not "}
@@ -633,3 +639,173 @@ func isValidColumn(columnName string, validColumns map[string]bool) bool {
}
return validColumns[strings.ToLower(columnName)]
}
// AddTablePrefixToColumns adds table prefix to unqualified column references in a WHERE clause.
// This function only prefixes simple column references and skips:
// - Columns already having a table prefix (containing a dot)
// - Columns inside function calls or expressions (inside parentheses)
// - Columns inside subqueries
// - Columns that don't exist in the table (validation via model registry)
//
// Examples:
// - "status = 'active'" -> "users.status = 'active'" (if status exists in users table)
// - "COALESCE(status, 'default') = 'active'" -> unchanged (status inside function)
// - "users.status = 'active'" -> unchanged (already has prefix)
// - "(status = 'active')" -> "(users.status = 'active')" (grouping parens are OK)
// - "invalid_col = 'value'" -> unchanged (if invalid_col doesn't exist in table)
//
// Parameters:
// - where: The WHERE clause to process
// - tableName: The table name to use as prefix
//
// Returns:
// - The WHERE clause with table prefixes added to appropriate and valid columns
func AddTablePrefixToColumns(where string, tableName string) string {
if where == "" || tableName == "" {
return where
}
where = strings.TrimSpace(where)
// Get valid columns from the model registry for validation
validColumns := getValidColumnsForTable(tableName)
// Split by AND to handle multiple conditions (parenthesis-aware)
conditions := splitByAND(where)
prefixedConditions := make([]string, 0, len(conditions))
for _, cond := range conditions {
cond = strings.TrimSpace(cond)
if cond == "" {
continue
}
// Process this condition to add table prefix if appropriate
processedCond := addPrefixToSingleCondition(cond, tableName, validColumns)
prefixedConditions = append(prefixedConditions, processedCond)
}
if len(prefixedConditions) == 0 {
return ""
}
return strings.Join(prefixedConditions, " AND ")
}
// addPrefixToSingleCondition adds table prefix to a single condition if appropriate
// Returns the condition unchanged if:
// - The condition is a SQL literal/expression (true, false, null, 1=1, etc.)
// - The column reference is inside a function call
// - The column already has a table prefix
// - No valid column reference is found
// - The column doesn't exist in the table (when validColumns is provided)
func addPrefixToSingleCondition(cond string, tableName string, validColumns map[string]bool) string {
// Strip one level of outer grouping parentheses to get to the actual condition
strippedCond := stripOneOuterParentheses(cond)
// Skip SQL literals and trivial conditions (true, false, null, 1=1, etc.)
if IsSQLExpression(strippedCond) || IsTrivialCondition(strippedCond) {
logger.Debug("Skipping SQL literal/trivial condition: '%s'", strippedCond)
return cond
}
// After stripping outer parentheses, check if there are multiple AND-separated conditions
// at the top level. If so, split and process each separately to avoid incorrectly
// treating "true AND status" as a single column name.
subConditions := splitByAND(strippedCond)
if len(subConditions) > 1 {
// Multiple conditions found - process each separately
logger.Debug("Found %d sub-conditions after stripping parentheses, processing separately", len(subConditions))
processedConditions := make([]string, 0, len(subConditions))
for _, subCond := range subConditions {
// Recursively process each sub-condition
processed := addPrefixToSingleCondition(subCond, tableName, validColumns)
processedConditions = append(processedConditions, processed)
}
result := strings.Join(processedConditions, " AND ")
// Preserve original outer parentheses if they existed
if cond != strippedCond {
result = "(" + result + ")"
}
return result
}
// If we stripped parentheses and still have more parentheses, recursively process
if cond != strippedCond && strings.HasPrefix(strippedCond, "(") && strings.HasSuffix(strippedCond, ")") {
// Recursively handle nested parentheses
processed := addPrefixToSingleCondition(strippedCond, tableName, validColumns)
return "(" + processed + ")"
}
// Extract the left side of the comparison (before the operator)
columnRef := extractLeftSideOfComparison(strippedCond)
if columnRef == "" {
return cond
}
// Skip if it already has a prefix (contains a dot)
if strings.Contains(columnRef, ".") {
logger.Debug("Skipping column '%s' - already has table prefix", columnRef)
return cond
}
// Skip if it's a function call or expression (contains parentheses)
if strings.Contains(columnRef, "(") {
logger.Debug("Skipping column reference '%s' - inside function or expression", columnRef)
return cond
}
// Validate that the column exists in the table (if we have column info)
if !isValidColumn(columnRef, validColumns) {
logger.Debug("Skipping column '%s' - not found in table '%s'", columnRef, tableName)
return cond
}
// It's a simple unqualified column reference that exists in the table - add the table prefix
newRef := tableName + "." + columnRef
result := qualifyColumnInCondition(cond, columnRef, newRef)
logger.Debug("Added table prefix to column: '%s' -> '%s'", columnRef, newRef)
return result
}
// extractLeftSideOfComparison extracts the left side of a comparison operator from a condition.
// This is used to identify the column reference that may need a table prefix.
//
// Examples:
// - "status = 'active'" returns "status"
// - "COALESCE(status, 'default') = 'active'" returns "COALESCE(status, 'default')"
// - "priority > 5" returns "priority"
//
// Returns empty string if no operator is found.
func extractLeftSideOfComparison(cond string) string {
operators := []string{" = ", " != ", " <> ", " > ", " >= ", " < ", " <= ", " LIKE ", " like ", " IN ", " in ", " IS ", " is ", " NOT ", " not "}
// Find the first operator outside of parentheses and quotes
minIdx := -1
for _, op := range operators {
idx := findOperatorOutsideParentheses(cond, op)
if idx > 0 && (minIdx == -1 || idx < minIdx) {
minIdx = idx
}
}
if minIdx > 0 {
leftSide := strings.TrimSpace(cond[:minIdx])
// Remove any surrounding quotes
leftSide = strings.Trim(leftSide, "`\"'")
return leftSide
}
// No operator found - might be a boolean column
parts := strings.Fields(cond)
if len(parts) > 0 {
columnRef := strings.Trim(parts[0], "`\"'")
// Make sure it's not a SQL keyword
if !IsSQLKeyword(strings.ToLower(columnRef)) {
return columnRef
}
}
return ""
}

View File

@@ -138,7 +138,10 @@ func TestSanitizeWhereClause(t *testing.T) {
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result := SanitizeWhereClause(tt.where, tt.tableName)
// First add table prefixes to unqualified columns
prefixedWhere := AddTablePrefixToColumns(tt.where, tt.tableName)
// Then sanitize the where clause
result := SanitizeWhereClause(prefixedWhere, tt.tableName)
if result != tt.expected {
t.Errorf("SanitizeWhereClause(%q, %q) = %q; want %q", tt.where, tt.tableName, result, tt.expected)
}
@@ -348,6 +351,7 @@ func TestSanitizeWhereClauseWithPreloads(t *testing.T) {
tableName string
options *RequestOptions
expected string
addPrefix bool
}{
{
name: "preload relation prefix is preserved",
@@ -416,15 +420,30 @@ func TestSanitizeWhereClauseWithPreloads(t *testing.T) {
options: &RequestOptions{Preload: []PreloadOption{}},
expected: "users.status = 'active'",
},
{
name: "complex where clause with subquery and preload",
where: `("mastertaskitem"."rid_mastertask" IN (6, 173, 157, 172, 174, 171, 170, 169, 167, 168, 166, 145, 161, 164, 146, 160, 147, 159, 148, 150, 152, 175, 151, 8, 153, 149, 155, 154, 165)) AND (rid_parentmastertaskitem is null)`,
tableName: "mastertaskitem",
options: nil,
expected: `("mastertaskitem"."rid_mastertask" IN (6, 173, 157, 172, 174, 171, 170, 169, 167, 168, 166, 145, 161, 164, 146, 160, 147, 159, 148, 150, 152, 175, 151, 8, 153, 149, 155, 154, 165)) AND (mastertaskitem.rid_parentmastertaskitem is null)`,
addPrefix: true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
var result string
prefixedWhere := tt.where
if tt.addPrefix {
// First add table prefixes to unqualified columns
prefixedWhere = AddTablePrefixToColumns(tt.where, tt.tableName)
}
// Then sanitize the where clause
if tt.options != nil {
result = SanitizeWhereClause(tt.where, tt.tableName, tt.options)
result = SanitizeWhereClause(prefixedWhere, tt.tableName, tt.options)
} else {
result = SanitizeWhereClause(tt.where, tt.tableName)
result = SanitizeWhereClause(prefixedWhere, tt.tableName)
}
if result != tt.expected {
t.Errorf("SanitizeWhereClause(%q, %q, options) = %q; want %q", tt.where, tt.tableName, result, tt.expected)
@@ -639,3 +658,76 @@ func TestSanitizeWhereClauseWithModel(t *testing.T) {
})
}
}
func TestAddTablePrefixToColumns_ComplexConditions(t *testing.T) {
tests := []struct {
name string
where string
tableName string
expected string
}{
{
name: "Parentheses with true AND condition - should not prefix true",
where: "(true AND status = 'active')",
tableName: "mastertask",
expected: "(true AND mastertask.status = 'active')",
},
{
name: "Parentheses with multiple conditions including true",
where: "(true AND status = 'active' AND id > 5)",
tableName: "mastertask",
expected: "(true AND mastertask.status = 'active' AND mastertask.id > 5)",
},
{
name: "Nested parentheses with true",
where: "((true AND status = 'active'))",
tableName: "mastertask",
expected: "((true AND mastertask.status = 'active'))",
},
{
name: "Mixed: false AND valid conditions",
where: "(false AND name = 'test')",
tableName: "mastertask",
expected: "(false AND mastertask.name = 'test')",
},
{
name: "Mixed: null AND valid conditions",
where: "(null AND status = 'active')",
tableName: "mastertask",
expected: "(null AND mastertask.status = 'active')",
},
{
name: "Multiple true conditions in parentheses",
where: "(true AND true AND status = 'active')",
tableName: "mastertask",
expected: "(true AND true AND mastertask.status = 'active')",
},
{
name: "Simple true without parens - should not prefix",
where: "true",
tableName: "mastertask",
expected: "true",
},
{
name: "Simple condition without parens - should prefix",
where: "status = 'active'",
tableName: "mastertask",
expected: "mastertask.status = 'active'",
},
{
name: "Unregistered table with true - should not prefix true",
where: "(true AND status = 'active')",
tableName: "unregistered_table",
expected: "(true AND unregistered_table.status = 'active')",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result := AddTablePrefixToColumns(tt.where, tt.tableName)
if result != tt.expected {
t.Errorf("AddTablePrefixToColumns(%q, %q) = %q; want %q", tt.where, tt.tableName, result, tt.expected)
}
})
}
}

View File

@@ -90,12 +90,12 @@ Panics are automatically captured when using the logger's panic handlers:
```go
// Using CatchPanic
defer logger.CatchPanic("MyFunction")
defer logger.CatchPanic("MyFunction")()
// Using CatchPanicCallback
defer logger.CatchPanicCallback("MyFunction", func(err any) {
// Custom cleanup
})
})()
// Using HandlePanic
defer func() {

View File

@@ -0,0 +1,353 @@
# Event Broker System Implementation Plan
## Overview
Implement a comprehensive event handler/broker system for ResolveSpec that follows existing architectural patterns (Provider interface, Hook system, Config management, Graceful shutdown).
## Requirements Met
- ✅ Events with sources (database, websocket, frontend, system)
- ✅ Event statuses (pending, processing, completed, failed)
- ✅ Timestamps, JSON payloads, user IDs, session IDs
- ✅ Program instance IDs for tracking server instances
- ✅ Both sync and async processing modes
- ✅ Multiple provider backends (in-memory, Redis, NATS, database)
- ✅ Cross-instance pub/sub support
## Architecture
### Core Components
**Event Structure** (with full metadata):
```go
type Event struct {
ID string // UUID
Source EventSource // database, websocket, system, frontend
Type string // Pattern: schema.entity.operation
Status EventStatus // pending, processing, completed, failed
Payload json.RawMessage // JSON payload
UserID int
SessionID string
InstanceID string // Server instance identifier
Schema string
Entity string
Operation string // create, update, delete, read
CreatedAt time.Time
ProcessedAt *time.Time
CompletedAt *time.Time
Error string
Metadata map[string]interface{}
RetryCount int
}
```
**Provider Pattern** (like cache.Provider):
```go
type Provider interface {
Store(ctx context.Context, event *Event) error
Get(ctx context.Context, id string) (*Event, error)
List(ctx context.Context, filter *EventFilter) ([]*Event, error)
    UpdateStatus(ctx context.Context, id string, status EventStatus, errorMsg string) error
Stream(ctx context.Context, pattern string) (<-chan *Event, error)
Publish(ctx context.Context, event *Event) error
Close() error
Stats(ctx context.Context) (*ProviderStats, error)
}
```
**Broker Interface**:
```go
type Broker interface {
Publish(ctx context.Context, event *Event) error // Mode-dependent
PublishSync(ctx context.Context, event *Event) error // Blocks
PublishAsync(ctx context.Context, event *Event) error // Non-blocking
Subscribe(pattern string, handler EventHandler) (SubscriptionID, error)
Unsubscribe(id SubscriptionID) error
Start(ctx context.Context) error
Stop(ctx context.Context) error
Stats(ctx context.Context) (*BrokerStats, error)
}
```
## Implementation Steps
### Phase 1: Core Foundation (Files: 1-5)
**1. Create `pkg/eventbroker/event.go`**
- Event struct with all required fields (status, timestamps, user, instance ID, etc.)
- EventSource enum (database, websocket, frontend, system, internal)
- EventStatus enum (pending, processing, completed, failed)
- Helper: `EventType(schema, entity, operation string) string`
- Helper: `NewEvent()` constructor with UUID generation (both helpers are sketched below)
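A minimal sketch of the two helpers from item 1, assuming the `Event` fields shown above; the `github.com/google/uuid` dependency and the `NewEvent` parameter list are assumptions, not the final API.
```go
package eventbroker

import (
	"fmt"
	"time"

	"github.com/google/uuid"
)

// EventType builds the "schema.entity.operation" type string.
func EventType(schema, entity, operation string) string {
	return fmt.Sprintf("%s.%s.%s", schema, entity, operation)
}

// NewEvent constructs a pending event with a fresh UUID and creation timestamp.
// The parameter list here is illustrative; the real constructor may differ.
func NewEvent(source EventSource, schema, entity, operation string) *Event {
	return &Event{
		ID:        uuid.NewString(),
		Source:    source,
		Type:      EventType(schema, entity, operation),
		Status:    "pending", // EventStatusPending once the enum constants exist
		Schema:    schema,
		Entity:    entity,
		Operation: operation,
		CreatedAt: time.Now(),
	}
}
```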
**2. Create `pkg/eventbroker/provider.go`**
- Provider interface definition
- EventFilter struct for queries
- ProviderStats struct
**3. Create `pkg/eventbroker/handler.go`**
- EventHandler interface
- EventHandlerFunc adapter type
**4. Create `pkg/eventbroker/broker.go`**
- Broker interface definition
- EventBroker struct implementation
- ProcessingMode enum (sync, async)
- Options struct with functional options (WithProvider, WithMode, WithWorkerCount, etc.)
- NewBroker() constructor
- Sync processing implementation
**5. Create `pkg/eventbroker/subscription.go`**
- Pattern matching using glob syntax (e.g., "public.users.*", "*.*.create"); see the sketch after this list
- subscriptionManager struct
- SubscriptionID type
- Subscribe/Unsubscribe logic
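A minimal sketch of segment-wise pattern matching for the glob syntax above, assuming patterns and event types are dot-separated; the real subscriptionManager may use a glob library instead.
```go
package eventbroker

import "strings"

// matchPattern reports whether a dot-separated event type matches a pattern
// such as "public.users.*" or "*.*.create". A bare "*" matches everything.
func matchPattern(pattern, eventType string) bool {
	if pattern == "*" {
		return true
	}
	patternParts := strings.Split(pattern, ".")
	typeParts := strings.Split(eventType, ".")
	if len(patternParts) != len(typeParts) {
		return false
	}
	for i, seg := range patternParts {
		if seg != "*" && seg != typeParts[i] {
			return false
		}
	}
	return true
}
```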
### Phase 2: Configuration & Integration (Files: 6-8)
**6. Create `pkg/eventbroker/config.go`**
- EventBrokerConfig struct
- RedisConfig, NATSConfig, DatabaseConfig structs
- RetryPolicyConfig struct
**7. Update `pkg/config/config.go`**
- Add `EventBroker EventBrokerConfig` field to Config struct
**8. Update `pkg/config/manager.go`**
- Add event broker defaults to `setDefaults()`:
```go
v.SetDefault("event_broker.enabled", false)
v.SetDefault("event_broker.provider", "memory")
v.SetDefault("event_broker.mode", "async")
v.SetDefault("event_broker.worker_count", 10)
v.SetDefault("event_broker.buffer_size", 1000)
```
### Phase 3: Memory Provider (Files: 9)
**9. Create `pkg/eventbroker/provider_memory.go`**
- MemoryProvider struct with mutex-protected map
- In-memory event storage
- Pattern matching for subscriptions
- Channel-based streaming for real-time events
- LRU eviction when max size reached
- Cleanup goroutine for old completed events (sketched after this list)
- **Note**: Single-instance only (no cross-instance pub/sub)
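A minimal sketch of the cleanup goroutine, assuming events are kept in a mutex-protected map; the `memoryStore` type, field names, and retention parameters here are illustrative, not the actual MemoryProvider.
```go
package eventbroker

import (
	"sync"
	"time"
)

type memoryStore struct {
	mu     sync.Mutex
	events map[string]*Event
}

// startCleanup periodically drops completed events older than the retention window.
func (m *memoryStore) startCleanup(interval, retention time.Duration, stop <-chan struct{}) {
	ticker := time.NewTicker(interval)
	go func() {
		defer ticker.Stop()
		for {
			select {
			case <-ticker.C:
				cutoff := time.Now().Add(-retention)
				m.mu.Lock()
				for id, ev := range m.events {
					if ev.Status == "completed" && ev.CompletedAt != nil && ev.CompletedAt.Before(cutoff) {
						delete(m.events, id)
					}
				}
				m.mu.Unlock()
			case <-stop:
				return
			}
		}
	}()
}
```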
### Phase 4: Async Processing (Update File: 4)
**10. Update `pkg/eventbroker/broker.go`** (add async support)
- workerPool struct with configurable worker count
- Buffered channel for event queue
- Worker goroutines that process events
- PublishAsync() queues to channel
- Graceful shutdown: stop accepting events, drain queue, wait for workers
- Retry logic with exponential backoff (see the sketch after this list)
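A minimal sketch of the backoff calculation, assuming the base delay and cap come from RetryPolicyConfig (those field names are assumptions).
```go
package eventbroker

import (
	"math"
	"time"
)

// retryDelay returns the wait before the given retry attempt (0-based),
// doubling each time and capped at maxDelay.
func retryDelay(attempt int, baseDelay, maxDelay time.Duration) time.Duration {
	delay := time.Duration(float64(baseDelay) * math.Pow(2, float64(attempt)))
	if delay > maxDelay {
		return maxDelay
	}
	return delay
}
```
For example, with a 100ms base and a 10s cap, successive attempts wait roughly 100ms, 200ms, 400ms, and so on.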
### Phase 5: Hook Integration (Files: 11)
**11. Create `pkg/eventbroker/hooks.go`**
- `RegisterCRUDHooks(broker Broker, hookRegistry *restheadspec.HookRegistry)`
- Registers AfterCreate, AfterUpdate, AfterDelete, AfterRead hooks
- Extracts UserContext from hook context
- Creates Event with proper metadata
- Calls `broker.PublishAsync()` so CRUD operations are not blocked (a sketch follows this list)
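A minimal sketch of what one such hook body might do after a create, assuming the hook exposes the schema, entity, created record, and user ID; the exact restheadspec hook signature is not shown here and is an assumption.
```go
package eventbroker

import (
	"context"
	"encoding/json"
	"fmt"
	"time"

	"github.com/google/uuid"
)

// publishAfterCreate builds a database-sourced event from a created record and
// hands it to the broker asynchronously so the CRUD request is not blocked.
func publishAfterCreate(ctx context.Context, broker Broker, schema, entity string, record any, userID int) error {
	payload, err := json.Marshal(record)
	if err != nil {
		return err
	}
	event := &Event{
		ID:        uuid.NewString(),
		Source:    EventSourceDatabase,
		Type:      fmt.Sprintf("%s.%s.create", schema, entity),
		Status:    "pending",
		Payload:   payload,
		UserID:    userID,
		Schema:    schema,
		Entity:    entity,
		Operation: "create",
		CreatedAt: time.Now(),
	}
	return broker.PublishAsync(ctx, event)
}
```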
### Phase 6: Global Singleton & Factory (Files: 12-13)
**12. Create `pkg/eventbroker/eventbroker.go`**
- Global `defaultBroker` variable
- `Initialize(config *config.Config) error` - creates broker from config
- `SetDefaultBroker(broker Broker)`
- `GetDefaultBroker() Broker`
- Helper functions: `Publish()`, `PublishAsync()`, `PublishSync()`, `Subscribe()`
- `RegisterShutdown(broker Broker)` - registers with server.RegisterShutdownCallback()
**13. Create `pkg/eventbroker/factory.go`**
- `NewProviderFromConfig(config EventBrokerConfig) (Provider, error)`
- Provider selection logic (memory, redis, nats, database)
- Returns appropriate provider based on config
### Phase 7: Redis Provider (Files: 14)
**14. Create `pkg/eventbroker/provider_redis.go`**
- RedisProvider using Redis Streams (XADD, XREAD, XGROUP)
- Consumer group for distributed processing
- Cross-instance pub/sub support
- Stream(pattern) subscribes to consumer group
- Publish() uses XADD to append to the stream (sketched after this list)
- Graceful shutdown: acknowledge pending messages
**Dependencies**: `github.com/redis/go-redis/v9`
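A minimal sketch of publishing an event to a Redis Stream with go-redis v9, assuming the whole event is stored as one JSON field named "data"; the real provider's field layout and trimming policy are assumptions.
```go
package eventbroker

import (
	"context"
	"encoding/json"

	"github.com/redis/go-redis/v9"
)

// publishToStream appends a JSON-encoded event to the stream with XADD,
// using approximate MAXLEN trimming to keep the stream bounded.
func publishToStream(ctx context.Context, rdb *redis.Client, streamName string, event *Event, maxLen int64) (string, error) {
	data, err := json.Marshal(event)
	if err != nil {
		return "", err
	}
	return rdb.XAdd(ctx, &redis.XAddArgs{
		Stream: streamName,
		MaxLen: maxLen,
		Approx: true,
		Values: map[string]interface{}{"data": data},
	}).Result()
}
```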
### Phase 8: NATS Provider (Files: 15)
**15. Create `pkg/eventbroker/provider_nats.go`**
- NATSProvider using NATS JetStream
- Subject-based routing: `events.{source}.{type}`
- Wildcard subscriptions support
- Durable consumers for replay
- At-least-once delivery semantics
**Dependencies**: `github.com/nats-io/nats.go`
### Phase 9: Database Provider (Files: 16)
**16. Create `pkg/eventbroker/provider_database.go`**
- DatabaseProvider using `common.Database` interface
- Table schema creation (events table with indexes)
- Polling-based event consumption (configurable interval)
- Full SQL query support via List(filter)
- Transaction support for atomic operations
- Good for audit trails and debugging
### Phase 10: Metrics Integration (Files: 17-18)
**17. Create `pkg/eventbroker/metrics.go`**
- Integrate with existing `metrics.Provider`
- Record metrics:
- `eventbroker_events_published_total{source, type}`
- `eventbroker_events_processed_total{source, type, status}`
- `eventbroker_event_processing_duration_seconds{source, type}`
- `eventbroker_queue_size`
- `eventbroker_workers_active`
**18. Update `pkg/metrics/interfaces.go`**
- Add methods to Provider interface:
```go
RecordEventPublished(source, eventType string)
RecordEventProcessed(source, eventType, status string, duration time.Duration)
UpdateEventQueueSize(size int64)
```
### Phase 11: Testing & Examples (Files: 19-20)
**19. Create `pkg/eventbroker/eventbroker_test.go`**
- Unit tests for Event marshaling
- Pattern matching tests
- MemoryProvider tests
- Sync vs async mode tests
- Concurrent publish/subscribe tests
- Graceful shutdown tests
**20. Create `pkg/eventbroker/example_usage.go`**
- Basic publish example
- Subscribe with patterns example
- Hook integration example
- Multiple handlers example
- Error handling example
## Integration Points
### Hook System Integration
```go
// In application initialization (e.g., main.go)
eventbroker.RegisterCRUDHooks(broker, handler.Hooks())
```
This automatically publishes events for all CRUD operations:
- `schema.entity.create` after inserts
- `schema.entity.update` after updates
- `schema.entity.delete` after deletes
- `schema.entity.read` after reads
### Shutdown Integration
```go
// In application initialization
eventbroker.RegisterShutdown(broker)
```
Ensures the event broker flushes pending events before shutdown.
### Configuration Example
```yaml
event_broker:
enabled: true
provider: redis # memory, redis, nats, database
mode: async # sync, async
worker_count: 10
buffer_size: 1000
instance_id: "${HOSTNAME}"
redis:
stream_name: "resolvespec:events"
consumer_group: "resolvespec-workers"
host: "localhost"
port: 6379
```
## Usage Examples
### Publishing Custom Events
```go
// WebSocket event
event := &eventbroker.Event{
Source: eventbroker.EventSourceWebSocket,
Type: "chat.message",
Payload: json.RawMessage(`{"room": "lobby", "msg": "Hello"}`),
UserID: userID,
SessionID: sessionID,
}
eventbroker.PublishAsync(ctx, event)
```
### Subscribing to Events
```go
// Subscribe to all user creation events
eventbroker.Subscribe("public.users.create", eventbroker.EventHandlerFunc(
func(ctx context.Context, event *eventbroker.Event) error {
log.Printf("New user created: %s", event.Payload)
// Send welcome email, update cache, etc.
return nil
},
))
// Subscribe to all events and filter by source (e.g., database)
eventbroker.Subscribe("*", eventbroker.EventHandlerFunc(
func(ctx context.Context, event *eventbroker.Event) error {
if event.Source == eventbroker.EventSourceDatabase {
// Audit logging
}
return nil
},
))
```
## Critical Files Reference
**Patterns to Follow**:
- `pkg/cache/provider.go` - Provider interface pattern
- `pkg/restheadspec/hooks.go` - Hook system integration
- `pkg/config/manager.go` - Configuration pattern
- `pkg/server/shutdown.go` - Shutdown callbacks
**Files to Modify**:
- `pkg/config/config.go` - Add EventBroker field
- `pkg/config/manager.go` - Add defaults
- `pkg/metrics/interfaces.go` - Add event broker metrics
**New Package**:
- `pkg/eventbroker/` (20 files total)
## Provider Feature Matrix
| Feature | Memory | Redis | NATS | Database |
|---------|--------|-------|------|----------|
| Persistence | ❌ | ✅ | ✅ | ✅ |
| Cross-instance | ❌ | ✅ | ✅ | ✅ |
| Real-time | ✅ | ✅ | ✅ | ⚠️ (polling) |
| Query history | Limited | Limited | ✅ (replay) | ✅ (SQL) |
| External deps | None | Redis | NATS | None |
| Complexity | Low | Medium | Medium | Low |
## Implementation Order Priority
1. **Core + Memory Provider** (Phase 1-3) - Functional in-process event system
2. **Async + Hooks** (Phase 4-5) - Non-blocking event dispatch integrated with CRUD
3. **Config + Singleton** (Phase 6) - Easy initialization and usage
4. **Redis Provider** (Phase 7) - Production-ready distributed events
5. **Metrics** (Phase 10) - Observability
6. **NATS/Database** (Phase 8-9) - Alternative backends
7. **Tests + Examples** (Phase 11) - Documentation and reliability
## Next Steps
After approval, implement the phases in order. Each phase builds on the previous ones and can be tested independently.

View File

@@ -172,12 +172,13 @@ event_broker:
provider: memory
```
### Redis Provider (Future)
### Redis Provider
Best for: Production, multi-instance deployments
- **Pros**: Persistent, cross-instance pub/sub, reliable
- **Cons**: Requires Redis
- **Pros**: Persistent, cross-instance pub/sub, reliable, Redis Streams support
- **Cons**: Requires Redis server
- **Status**: ✅ Implemented
```yaml
event_broker:
@@ -185,16 +186,20 @@ event_broker:
redis:
stream_name: "resolvespec:events"
consumer_group: "resolvespec-workers"
max_len: 10000
host: "localhost"
port: 6379
password: ""
db: 0
```
### NATS Provider (Future)
### NATS Provider
Best for: High-performance, low-latency requirements
- **Pros**: Very fast, built-in clustering, durable
- **Pros**: Very fast, built-in clustering, durable, JetStream support
- **Cons**: Requires NATS server
- **Status**: ✅ Implemented
```yaml
event_broker:
@@ -202,14 +207,17 @@ event_broker:
nats:
url: "nats://localhost:4222"
stream_name: "RESOLVESPEC_EVENTS"
storage: "file" # or "memory"
max_age: "24h"
```
### Database Provider (Future)
### Database Provider
Best for: Audit trails, event replay, SQL queries
- **Pros**: No additional infrastructure, full SQL query support, PostgreSQL NOTIFY for real-time
- **Cons**: Slower than Redis/NATS
- **Cons**: Slower than Redis/NATS, requires database connection
- **Status**: ✅ Implemented
```yaml
event_broker:
@@ -217,6 +225,7 @@ event_broker:
database:
table_name: "events"
channel: "resolvespec_events"
poll_interval: "1s"
```
## Processing Modes
@@ -314,14 +323,25 @@ See `example_usage.go` for comprehensive examples including:
└─────────────────┘
```
## Implemented Features
- [x] Memory Provider (in-process, single-instance)
- [x] Redis Streams Provider (distributed, persistent)
- [x] NATS JetStream Provider (distributed, high-performance)
- [x] Database Provider with PostgreSQL NOTIFY (SQL-queryable, audit-friendly)
- [x] Sync and Async processing modes
- [x] Pattern-based subscriptions
- [x] Hook integration for automatic CRUD events
- [x] Retry policy with exponential backoff
- [x] Graceful shutdown
## Future Enhancements
- [ ] Database Provider with PostgreSQL NOTIFY
- [ ] Redis Streams Provider
- [ ] NATS JetStream Provider
- [ ] Event replay functionality
- [ ] Dead letter queue
- [ ] Event filtering at provider level
- [ ] Batch publishing
- [ ] Event compression
- [ ] Schema versioning
- [ ] Event replay functionality from specific timestamp
- [ ] Dead letter queue for failed events
- [ ] Event filtering at provider level for performance
- [ ] Batch publishing support
- [ ] Event compression for large payloads
- [ ] Schema versioning and migration
- [ ] Event streaming to external systems (Kafka, RabbitMQ)
- [ ] Event aggregation and analytics

View File

@@ -7,7 +7,6 @@ import (
"github.com/bitechdev/ResolveSpec/pkg/config"
"github.com/bitechdev/ResolveSpec/pkg/logger"
"github.com/bitechdev/ResolveSpec/pkg/server"
)
var (
@@ -69,9 +68,6 @@ func Initialize(cfg config.EventBrokerConfig) error {
// Set as default
SetDefaultBroker(broker)
// Register shutdown callback
RegisterShutdown(broker)
logger.Info("Event broker initialized successfully (provider: %s, mode: %s, instance: %s)",
cfg.Provider, cfg.Mode, opts.InstanceID)
@@ -151,10 +147,12 @@ func Stats(ctx context.Context) (*BrokerStats, error) {
return broker.Stats(ctx)
}
// RegisterShutdown registers the broker's shutdown with the server shutdown callbacks
func RegisterShutdown(broker Broker) {
server.RegisterShutdownCallback(func(ctx context.Context) error {
// RegisterShutdown registers the broker's shutdown with a server manager
// Call this from your application initialization code
// Example: serverMgr.RegisterShutdownCallback(eventbroker.MakeShutdownCallback(broker))
func MakeShutdownCallback(broker Broker) func(context.Context) error {
return func(ctx context.Context) error {
logger.Info("Shutting down event broker...")
return broker.Stop(ctx)
})
}
}

View File

@@ -24,16 +24,34 @@ func NewProviderFromConfig(cfg config.EventBrokerConfig) (Provider, error) {
}), nil
case "redis":
// Redis provider will be implemented in Phase 8
return nil, fmt.Errorf("redis provider not yet implemented")
return NewRedisProvider(RedisProviderConfig{
Host: cfg.Redis.Host,
Port: cfg.Redis.Port,
Password: cfg.Redis.Password,
DB: cfg.Redis.DB,
StreamName: cfg.Redis.StreamName,
ConsumerGroup: cfg.Redis.ConsumerGroup,
ConsumerName: getInstanceID(cfg.InstanceID),
InstanceID: getInstanceID(cfg.InstanceID),
MaxLen: cfg.Redis.MaxLen,
})
case "nats":
// NATS provider will be implemented in Phase 9
return nil, fmt.Errorf("nats provider not yet implemented")
// NATS provider initialization
// Note: Requires github.com/nats-io/nats.go dependency
return NewNATSProvider(NATSProviderConfig{
URL: cfg.NATS.URL,
StreamName: cfg.NATS.StreamName,
SubjectPrefix: "events",
InstanceID: getInstanceID(cfg.InstanceID),
MaxAge: cfg.NATS.MaxAge,
Storage: cfg.NATS.Storage, // "file" or "memory"
})
case "database":
// Database provider will be implemented in Phase 7
return nil, fmt.Errorf("database provider not yet implemented")
// Database provider requires a database connection
// This should be provided externally
return nil, fmt.Errorf("database provider requires a database connection to be configured separately")
default:
return nil, fmt.Errorf("unknown provider: %s", cfg.Provider)

View File

@@ -0,0 +1,653 @@
package eventbroker
import (
"context"
"database/sql"
"encoding/json"
"fmt"
"sync"
"sync/atomic"
"time"
"github.com/bitechdev/ResolveSpec/pkg/common"
"github.com/bitechdev/ResolveSpec/pkg/logger"
)
// DatabaseProvider implements Provider interface using SQL database
// Features:
// - Persistent event storage in database table
// - Full SQL query support for event history
// - PostgreSQL NOTIFY/LISTEN for real-time updates (optional)
// - Polling-based consumption with configurable interval
// - Good for audit trails and event replay
type DatabaseProvider struct {
db common.Database
tableName string
channel string // PostgreSQL NOTIFY channel name
pollInterval time.Duration
instanceID string
useNotify bool // Whether to use PostgreSQL NOTIFY
// Subscriptions
mu sync.RWMutex
subscribers map[string]*dbSubscription
// Statistics
stats DatabaseProviderStats
// Lifecycle
stopPolling chan struct{}
wg sync.WaitGroup
isRunning atomic.Bool
}
// DatabaseProviderStats contains statistics for the database provider
type DatabaseProviderStats struct {
TotalEvents atomic.Int64
EventsPublished atomic.Int64
EventsConsumed atomic.Int64
ActiveSubscribers atomic.Int32
PollErrors atomic.Int64
}
// dbSubscription represents a single database subscription
type dbSubscription struct {
pattern string
ch chan *Event
lastSeenID string
ctx context.Context
cancel context.CancelFunc
}
// DatabaseProviderConfig configures the database provider
type DatabaseProviderConfig struct {
DB common.Database
TableName string
Channel string // PostgreSQL NOTIFY channel (optional)
PollInterval time.Duration
InstanceID string
UseNotify bool // Enable PostgreSQL NOTIFY/LISTEN
}
// NewDatabaseProvider creates a new database event provider
func NewDatabaseProvider(cfg DatabaseProviderConfig) (*DatabaseProvider, error) {
// Apply defaults
if cfg.TableName == "" {
cfg.TableName = "events"
}
if cfg.Channel == "" {
cfg.Channel = "resolvespec_events"
}
if cfg.PollInterval == 0 {
cfg.PollInterval = 1 * time.Second
}
dp := &DatabaseProvider{
db: cfg.DB,
tableName: cfg.TableName,
channel: cfg.Channel,
pollInterval: cfg.PollInterval,
instanceID: cfg.InstanceID,
useNotify: cfg.UseNotify,
subscribers: make(map[string]*dbSubscription),
stopPolling: make(chan struct{}),
}
dp.isRunning.Store(true)
// Create table if it doesn't exist
ctx := context.Background()
if err := dp.createTable(ctx); err != nil {
return nil, fmt.Errorf("failed to create events table: %w", err)
}
// Start polling goroutine for subscriptions
dp.wg.Add(1)
go dp.pollLoop()
logger.Info("Database provider initialized (table: %s, poll_interval: %v, notify: %v)",
cfg.TableName, cfg.PollInterval, cfg.UseNotify)
return dp, nil
}
// Store stores an event
func (dp *DatabaseProvider) Store(ctx context.Context, event *Event) error {
// Marshal metadata to JSON
metadataJSON, err := json.Marshal(event.Metadata)
if err != nil {
return fmt.Errorf("failed to marshal metadata: %w", err)
}
// Insert event
query := fmt.Sprintf(`
INSERT INTO %s (
id, source, type, status, retry_count, error,
payload, user_id, session_id, instance_id,
schema, entity, operation,
created_at, processed_at, completed_at, metadata
) VALUES (
$1, $2, $3, $4, $5, $6,
$7, $8, $9, $10,
$11, $12, $13,
$14, $15, $16, $17
)
`, dp.tableName)
_, err = dp.db.Exec(ctx, query,
event.ID, event.Source, event.Type, event.Status, event.RetryCount, event.Error,
event.Payload, event.UserID, event.SessionID, event.InstanceID,
event.Schema, event.Entity, event.Operation,
event.CreatedAt, event.ProcessedAt, event.CompletedAt, metadataJSON,
)
if err != nil {
return fmt.Errorf("failed to insert event: %w", err)
}
dp.stats.TotalEvents.Add(1)
return nil
}
// Get retrieves an event by ID
func (dp *DatabaseProvider) Get(ctx context.Context, id string) (*Event, error) {
event := &Event{}
var metadataJSON []byte
var processedAt, completedAt sql.NullTime
// Query into individual fields
query := fmt.Sprintf(`
SELECT id, source, type, status, retry_count, error,
payload, user_id, session_id, instance_id,
schema, entity, operation,
created_at, processed_at, completed_at, metadata
FROM %s
WHERE id = $1
`, dp.tableName)
var source, eventType, status, operation string
// Execute raw query
rows, err := dp.db.GetUnderlyingDB().(interface {
QueryContext(ctx context.Context, query string, args ...interface{}) (*sql.Rows, error)
}).QueryContext(ctx, query, id)
if err != nil {
return nil, fmt.Errorf("failed to query event: %w", err)
}
defer rows.Close()
if !rows.Next() {
return nil, fmt.Errorf("event not found: %s", id)
}
if err := rows.Scan(
&event.ID, &source, &eventType, &status, &event.RetryCount, &event.Error,
&event.Payload, &event.UserID, &event.SessionID, &event.InstanceID,
&event.Schema, &event.Entity, &operation,
&event.CreatedAt, &processedAt, &completedAt, &metadataJSON,
); err != nil {
return nil, fmt.Errorf("failed to scan event: %w", err)
}
// Set enum values
event.Source = EventSource(source)
event.Type = eventType
event.Status = EventStatus(status)
event.Operation = operation
// Handle nullable timestamps
if processedAt.Valid {
event.ProcessedAt = &processedAt.Time
}
if completedAt.Valid {
event.CompletedAt = &completedAt.Time
}
// Unmarshal metadata
if len(metadataJSON) > 0 {
if err := json.Unmarshal(metadataJSON, &event.Metadata); err != nil {
logger.Warn("Failed to unmarshal metadata: %v", err)
}
}
return event, nil
}
// List lists events with optional filters
func (dp *DatabaseProvider) List(ctx context.Context, filter *EventFilter) ([]*Event, error) {
query := fmt.Sprintf("SELECT id, source, type, status, retry_count, error, "+
"payload, user_id, session_id, instance_id, "+
"schema, entity, operation, "+
"created_at, processed_at, completed_at, metadata "+
"FROM %s WHERE 1=1", dp.tableName)
args := []interface{}{}
argNum := 1
// Build WHERE clause
if filter != nil {
if filter.Source != nil {
query += fmt.Sprintf(" AND source = $%d", argNum)
args = append(args, string(*filter.Source))
argNum++
}
if filter.Status != nil {
query += fmt.Sprintf(" AND status = $%d", argNum)
args = append(args, string(*filter.Status))
argNum++
}
if filter.UserID != nil {
query += fmt.Sprintf(" AND user_id = $%d", argNum)
args = append(args, *filter.UserID)
argNum++
}
if filter.Schema != "" {
query += fmt.Sprintf(" AND schema = $%d", argNum)
args = append(args, filter.Schema)
argNum++
}
if filter.Entity != "" {
query += fmt.Sprintf(" AND entity = $%d", argNum)
args = append(args, filter.Entity)
argNum++
}
if filter.Operation != "" {
query += fmt.Sprintf(" AND operation = $%d", argNum)
args = append(args, filter.Operation)
argNum++
}
if filter.InstanceID != "" {
query += fmt.Sprintf(" AND instance_id = $%d", argNum)
args = append(args, filter.InstanceID)
argNum++
}
if filter.StartTime != nil {
query += fmt.Sprintf(" AND created_at >= $%d", argNum)
args = append(args, *filter.StartTime)
argNum++
}
if filter.EndTime != nil {
query += fmt.Sprintf(" AND created_at <= $%d", argNum)
args = append(args, *filter.EndTime)
argNum++
}
}
// Add ORDER BY
query += " ORDER BY created_at DESC"
// Add LIMIT and OFFSET
if filter != nil {
if filter.Limit > 0 {
query += fmt.Sprintf(" LIMIT $%d", argNum)
args = append(args, filter.Limit)
argNum++
}
if filter.Offset > 0 {
query += fmt.Sprintf(" OFFSET $%d", argNum)
args = append(args, filter.Offset)
}
}
// Execute query
rows, err := dp.db.GetUnderlyingDB().(interface {
QueryContext(ctx context.Context, query string, args ...interface{}) (*sql.Rows, error)
}).QueryContext(ctx, query, args...)
if err != nil {
return nil, fmt.Errorf("failed to query events: %w", err)
}
defer rows.Close()
var results []*Event
for rows.Next() {
event := &Event{}
var source, eventType, status, operation string
var metadataJSON []byte
var processedAt, completedAt sql.NullTime
err := rows.Scan(
&event.ID, &source, &eventType, &status, &event.RetryCount, &event.Error,
&event.Payload, &event.UserID, &event.SessionID, &event.InstanceID,
&event.Schema, &event.Entity, &operation,
&event.CreatedAt, &processedAt, &completedAt, &metadataJSON,
)
if err != nil {
logger.Warn("Failed to scan event: %v", err)
continue
}
// Set enum values
event.Source = EventSource(source)
event.Type = eventType
event.Status = EventStatus(status)
event.Operation = operation
// Handle nullable timestamps
if processedAt.Valid {
event.ProcessedAt = &processedAt.Time
}
if completedAt.Valid {
event.CompletedAt = &completedAt.Time
}
// Unmarshal metadata
if len(metadataJSON) > 0 {
if err := json.Unmarshal(metadataJSON, &event.Metadata); err != nil {
logger.Warn("Failed to unmarshal metadata: %v", err)
}
}
results = append(results, event)
}
return results, nil
}
// UpdateStatus updates the status of an event
func (dp *DatabaseProvider) UpdateStatus(ctx context.Context, id string, status EventStatus, errorMsg string) error {
query := fmt.Sprintf(`
UPDATE %s
SET status = $1, error = $2
WHERE id = $3
`, dp.tableName)
_, err := dp.db.Exec(ctx, query, string(status), errorMsg, id)
if err != nil {
return fmt.Errorf("failed to update status: %w", err)
}
return nil
}
// Delete deletes an event by ID
func (dp *DatabaseProvider) Delete(ctx context.Context, id string) error {
query := fmt.Sprintf("DELETE FROM %s WHERE id = $1", dp.tableName)
_, err := dp.db.Exec(ctx, query, id)
if err != nil {
return fmt.Errorf("failed to delete event: %w", err)
}
dp.stats.TotalEvents.Add(-1)
return nil
}
// Stream returns a channel of events for real-time consumption
func (dp *DatabaseProvider) Stream(ctx context.Context, pattern string) (<-chan *Event, error) {
ch := make(chan *Event, 100)
subCtx, cancel := context.WithCancel(ctx)
sub := &dbSubscription{
pattern: pattern,
ch: ch,
lastSeenID: "",
ctx: subCtx,
cancel: cancel,
}
dp.mu.Lock()
dp.subscribers[pattern] = sub
dp.stats.ActiveSubscribers.Add(1)
dp.mu.Unlock()
return ch, nil
}
// Publish publishes an event to all subscribers
func (dp *DatabaseProvider) Publish(ctx context.Context, event *Event) error {
// Store the event first
if err := dp.Store(ctx, event); err != nil {
return err
}
dp.stats.EventsPublished.Add(1)
// If using PostgreSQL NOTIFY, send notification
if dp.useNotify {
if err := dp.notify(ctx, event.ID); err != nil {
logger.Warn("Failed to send NOTIFY: %v", err)
}
}
return nil
}
// Close closes the provider and releases resources
func (dp *DatabaseProvider) Close() error {
if !dp.isRunning.Load() {
return nil
}
dp.isRunning.Store(false)
// Cancel all subscriptions
dp.mu.Lock()
for _, sub := range dp.subscribers {
sub.cancel()
}
dp.mu.Unlock()
// Stop polling
close(dp.stopPolling)
// Wait for goroutines
dp.wg.Wait()
logger.Info("Database provider closed")
return nil
}
// Stats returns provider statistics
func (dp *DatabaseProvider) Stats(ctx context.Context) (*ProviderStats, error) {
// Get counts by status
query := fmt.Sprintf(`
SELECT
COUNT(*) FILTER (WHERE status = 'pending') as pending,
COUNT(*) FILTER (WHERE status = 'processing') as processing,
COUNT(*) FILTER (WHERE status = 'completed') as completed,
COUNT(*) FILTER (WHERE status = 'failed') as failed,
COUNT(*) as total
FROM %s
`, dp.tableName)
var pending, processing, completed, failed, total int64
rows, err := dp.db.GetUnderlyingDB().(interface {
QueryContext(ctx context.Context, query string, args ...interface{}) (*sql.Rows, error)
}).QueryContext(ctx, query)
if err != nil {
logger.Warn("Failed to get stats: %v", err)
} else {
defer rows.Close()
if rows.Next() {
if err := rows.Scan(&pending, &processing, &completed, &failed, &total); err != nil {
logger.Warn("Failed to scan stats: %v", err)
}
}
}
return &ProviderStats{
ProviderType: "database",
TotalEvents: total,
PendingEvents: pending,
ProcessingEvents: processing,
CompletedEvents: completed,
FailedEvents: failed,
EventsPublished: dp.stats.EventsPublished.Load(),
EventsConsumed: dp.stats.EventsConsumed.Load(),
ActiveSubscribers: int(dp.stats.ActiveSubscribers.Load()),
ProviderSpecific: map[string]interface{}{
"table_name": dp.tableName,
"poll_interval": dp.pollInterval.String(),
"use_notify": dp.useNotify,
"poll_errors": dp.stats.PollErrors.Load(),
},
}, nil
}
// pollLoop periodically polls for new events
func (dp *DatabaseProvider) pollLoop() {
defer dp.wg.Done()
ticker := time.NewTicker(dp.pollInterval)
defer ticker.Stop()
for {
select {
case <-ticker.C:
dp.pollEvents()
case <-dp.stopPolling:
return
}
}
}
// pollEvents polls for new events and delivers to subscribers
func (dp *DatabaseProvider) pollEvents() {
dp.mu.RLock()
subscribers := make([]*dbSubscription, 0, len(dp.subscribers))
for _, sub := range dp.subscribers {
subscribers = append(subscribers, sub)
}
dp.mu.RUnlock()
for _, sub := range subscribers {
// Query for new events since last seen
query := fmt.Sprintf(`
SELECT id, source, type, status, retry_count, error,
payload, user_id, session_id, instance_id,
schema, entity, operation,
created_at, processed_at, completed_at, metadata
FROM %s
WHERE id > $1
ORDER BY created_at ASC
LIMIT 100
`, dp.tableName)
lastSeenID := sub.lastSeenID
if lastSeenID == "" {
lastSeenID = "00000000-0000-0000-0000-000000000000"
}
rows, err := dp.db.GetUnderlyingDB().(interface {
QueryContext(ctx context.Context, query string, args ...interface{}) (*sql.Rows, error)
}).QueryContext(sub.ctx, query, lastSeenID)
if err != nil {
dp.stats.PollErrors.Add(1)
logger.Warn("Failed to poll events: %v", err)
continue
}
for rows.Next() {
event := &Event{}
var source, eventType, status, operation string
var metadataJSON []byte
var processedAt, completedAt sql.NullTime
err := rows.Scan(
&event.ID, &source, &eventType, &status, &event.RetryCount, &event.Error,
&event.Payload, &event.UserID, &event.SessionID, &event.InstanceID,
&event.Schema, &event.Entity, &operation,
&event.CreatedAt, &processedAt, &completedAt, &metadataJSON,
)
if err != nil {
logger.Warn("Failed to scan event: %v", err)
continue
}
// Set enum values
event.Source = EventSource(source)
event.Type = eventType
event.Status = EventStatus(status)
event.Operation = operation
// Handle nullable timestamps
if processedAt.Valid {
event.ProcessedAt = &processedAt.Time
}
if completedAt.Valid {
event.CompletedAt = &completedAt.Time
}
// Unmarshal metadata
if len(metadataJSON) > 0 {
if err := json.Unmarshal(metadataJSON, &event.Metadata); err != nil {
logger.Warn("Failed to unmarshal metadata: %v", err)
}
}
// Check if event matches pattern
if matchPattern(sub.pattern, event.Type) {
select {
case sub.ch <- event:
dp.stats.EventsConsumed.Add(1)
sub.lastSeenID = event.ID
case <-sub.ctx.Done():
rows.Close()
return
default:
// Channel full, skip
logger.Warn("Subscriber channel full for pattern: %s", sub.pattern)
}
}
sub.lastSeenID = event.ID
}
rows.Close()
}
}
// notify sends a PostgreSQL NOTIFY message
func (dp *DatabaseProvider) notify(ctx context.Context, eventID string) error {
query := fmt.Sprintf("NOTIFY %s, '%s'", dp.channel, eventID)
_, err := dp.db.Exec(ctx, query)
return err
}
// createTable creates the events table if it doesn't exist
func (dp *DatabaseProvider) createTable(ctx context.Context) error {
query := fmt.Sprintf(`
CREATE TABLE IF NOT EXISTS %s (
id VARCHAR(255) PRIMARY KEY,
source VARCHAR(50) NOT NULL,
type VARCHAR(255) NOT NULL,
status VARCHAR(50) NOT NULL,
retry_count INTEGER DEFAULT 0,
error TEXT,
payload JSONB,
user_id INTEGER,
session_id VARCHAR(255),
instance_id VARCHAR(255),
schema VARCHAR(255),
entity VARCHAR(255),
operation VARCHAR(50),
created_at TIMESTAMP NOT NULL DEFAULT NOW(),
processed_at TIMESTAMP,
completed_at TIMESTAMP,
metadata JSONB
)
`, dp.tableName)
if _, err := dp.db.Exec(ctx, query); err != nil {
return fmt.Errorf("failed to create table: %w", err)
}
// Create indexes
indexes := []string{
fmt.Sprintf("CREATE INDEX IF NOT EXISTS idx_%s_source ON %s(source)", dp.tableName, dp.tableName),
fmt.Sprintf("CREATE INDEX IF NOT EXISTS idx_%s_type ON %s(type)", dp.tableName, dp.tableName),
fmt.Sprintf("CREATE INDEX IF NOT EXISTS idx_%s_status ON %s(status)", dp.tableName, dp.tableName),
fmt.Sprintf("CREATE INDEX IF NOT EXISTS idx_%s_created_at ON %s(created_at)", dp.tableName, dp.tableName),
fmt.Sprintf("CREATE INDEX IF NOT EXISTS idx_%s_instance_id ON %s(instance_id)", dp.tableName, dp.tableName),
}
for _, indexQuery := range indexes {
if _, err := dp.db.Exec(ctx, indexQuery); err != nil {
logger.Warn("Failed to create index: %v", err)
}
}
return nil
}

View File

@@ -0,0 +1,565 @@
package eventbroker
import (
"context"
"encoding/json"
"fmt"
"sync"
"sync/atomic"
"time"
"github.com/nats-io/nats.go"
"github.com/nats-io/nats.go/jetstream"
"github.com/bitechdev/ResolveSpec/pkg/logger"
)
// NATSProvider implements Provider interface using NATS JetStream
// Features:
// - Persistent event storage using JetStream
// - Cross-instance pub/sub using NATS subjects
// - Wildcard subscription support
// - Durable consumers for event replay
// - At-least-once delivery semantics
type NATSProvider struct {
nc *nats.Conn
js jetstream.JetStream
stream jetstream.Stream
streamName string
subjectPrefix string
instanceID string
maxAge time.Duration
// Subscriptions
mu sync.RWMutex
subscribers map[string]*natsSubscription
// Statistics
stats NATSProviderStats
// Lifecycle
wg sync.WaitGroup
isRunning atomic.Bool
}
// NATSProviderStats contains statistics for the NATS provider
type NATSProviderStats struct {
TotalEvents atomic.Int64
EventsPublished atomic.Int64
EventsConsumed atomic.Int64
ActiveSubscribers atomic.Int32
ConsumerErrors atomic.Int64
}
// natsSubscription represents a single NATS subscription
type natsSubscription struct {
pattern string
consumer jetstream.Consumer
ch chan *Event
ctx context.Context
cancel context.CancelFunc
}
// NATSProviderConfig configures the NATS provider
type NATSProviderConfig struct {
URL string
StreamName string
SubjectPrefix string // e.g., "events"
InstanceID string
MaxAge time.Duration // How long to keep events
Storage string // "file" or "memory"
}
// NewNATSProvider creates a new NATS event provider
func NewNATSProvider(cfg NATSProviderConfig) (*NATSProvider, error) {
// Apply defaults
if cfg.URL == "" {
cfg.URL = nats.DefaultURL
}
if cfg.StreamName == "" {
cfg.StreamName = "RESOLVESPEC_EVENTS"
}
if cfg.SubjectPrefix == "" {
cfg.SubjectPrefix = "events"
}
if cfg.MaxAge == 0 {
cfg.MaxAge = 7 * 24 * time.Hour // 7 days
}
if cfg.Storage == "" {
cfg.Storage = "file"
}
// Connect to NATS
nc, err := nats.Connect(cfg.URL,
nats.Name("resolvespec-eventbroker-"+cfg.InstanceID),
nats.Timeout(5*time.Second),
)
if err != nil {
return nil, fmt.Errorf("failed to connect to NATS: %w", err)
}
// Create JetStream context
js, err := jetstream.New(nc)
if err != nil {
nc.Close()
return nil, fmt.Errorf("failed to create JetStream context: %w", err)
}
np := &NATSProvider{
nc: nc,
js: js,
streamName: cfg.StreamName,
subjectPrefix: cfg.SubjectPrefix,
instanceID: cfg.InstanceID,
maxAge: cfg.MaxAge,
subscribers: make(map[string]*natsSubscription),
}
np.isRunning.Store(true)
// Create or update stream
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
// Determine storage type
var storage jetstream.StorageType
if cfg.Storage == "memory" {
storage = jetstream.MemoryStorage
} else {
storage = jetstream.FileStorage
}
if err := np.ensureStream(ctx, storage); err != nil {
nc.Close()
return nil, fmt.Errorf("failed to create stream: %w", err)
}
logger.Info("NATS provider initialized (stream: %s, subject: %s.*, url: %s)",
cfg.StreamName, cfg.SubjectPrefix, cfg.URL)
return np, nil
}
// Store stores an event
func (np *NATSProvider) Store(ctx context.Context, event *Event) error {
// Marshal event to JSON
data, err := json.Marshal(event)
if err != nil {
return fmt.Errorf("failed to marshal event: %w", err)
}
// Publish to NATS subject
// Subject format: events.{source}.{schema}.{entity}.{operation}
subject := np.buildSubject(event)
msg := &nats.Msg{
Subject: subject,
Data: data,
Header: nats.Header{
"Event-ID": []string{event.ID},
"Event-Type": []string{event.Type},
"Event-Source": []string{string(event.Source)},
"Event-Status": []string{string(event.Status)},
"Instance-ID": []string{event.InstanceID},
},
}
if _, err := np.js.PublishMsg(ctx, msg); err != nil {
return fmt.Errorf("failed to publish event: %w", err)
}
np.stats.TotalEvents.Add(1)
return nil
}
// Get retrieves an event by ID
// Note: This is inefficient with JetStream - consider using a separate KV store for lookups
func (np *NATSProvider) Get(ctx context.Context, id string) (*Event, error) {
// We need to scan messages which is not ideal
// For production, consider using NATS KV store for fast lookups
consumer, err := np.stream.CreateOrUpdateConsumer(ctx, jetstream.ConsumerConfig{
Name: "get-" + id,
FilterSubject: np.subjectPrefix + ".>",
DeliverPolicy: jetstream.DeliverAllPolicy,
AckPolicy: jetstream.AckExplicitPolicy,
})
if err != nil {
return nil, fmt.Errorf("failed to create consumer: %w", err)
}
// Fetch messages in batches
msgs, err := consumer.Fetch(1000, jetstream.FetchMaxWait(5*time.Second))
if err != nil {
return nil, fmt.Errorf("failed to fetch messages: %w", err)
}
for msg := range msgs.Messages() {
if msg.Headers().Get("Event-ID") == id {
var event Event
if err := json.Unmarshal(msg.Data(), &event); err != nil {
_ = msg.Nak()
continue
}
_ = msg.Ack()
// Delete temporary consumer
_ = np.stream.DeleteConsumer(ctx, "get-"+id)
return &event, nil
}
_ = msg.Ack()
}
// Delete temporary consumer
_ = np.stream.DeleteConsumer(ctx, "get-"+id)
return nil, fmt.Errorf("event not found: %s", id)
}
// List lists events with optional filters
func (np *NATSProvider) List(ctx context.Context, filter *EventFilter) ([]*Event, error) {
var results []*Event
// Create temporary consumer
consumer, err := np.stream.CreateOrUpdateConsumer(ctx, jetstream.ConsumerConfig{
Name: fmt.Sprintf("list-%d", time.Now().UnixNano()),
FilterSubject: np.subjectPrefix + ".>",
DeliverPolicy: jetstream.DeliverAllPolicy,
AckPolicy: jetstream.AckExplicitPolicy,
})
if err != nil {
return nil, fmt.Errorf("failed to create consumer: %w", err)
}
defer func() { _ = np.stream.DeleteConsumer(ctx, consumer.CachedInfo().Name) }()
// Fetch messages in batches
msgs, err := consumer.Fetch(1000, jetstream.FetchMaxWait(5*time.Second))
if err != nil {
return nil, fmt.Errorf("failed to fetch messages: %w", err)
}
for msg := range msgs.Messages() {
var event Event
if err := json.Unmarshal(msg.Data(), &event); err != nil {
logger.Warn("Failed to unmarshal event: %v", err)
_ = msg.Nak()
continue
}
if np.matchesFilter(&event, filter) {
results = append(results, &event)
}
_ = msg.Ack()
}
// Apply limit and offset
if filter != nil {
if filter.Offset > 0 && filter.Offset < len(results) {
results = results[filter.Offset:]
}
if filter.Limit > 0 && filter.Limit < len(results) {
results = results[:filter.Limit]
}
}
return results, nil
}
// UpdateStatus updates the status of an event
// Note: NATS streams are append-only, so we publish a status update event
func (np *NATSProvider) UpdateStatus(ctx context.Context, id string, status EventStatus, errorMsg string) error {
// Publish a status update message
subject := fmt.Sprintf("%s.status.%s", np.subjectPrefix, id)
statusUpdate := map[string]interface{}{
"event_id": id,
"status": string(status),
"error": errorMsg,
"updated_at": time.Now(),
}
data, err := json.Marshal(statusUpdate)
if err != nil {
return fmt.Errorf("failed to marshal status update: %w", err)
}
if _, err := np.js.Publish(ctx, subject, data); err != nil {
return fmt.Errorf("failed to publish status update: %w", err)
}
return nil
}
// Delete deletes an event by ID
// Note: NATS streams don't support deletion - this just marks it in a separate subject
func (np *NATSProvider) Delete(ctx context.Context, id string) error {
subject := fmt.Sprintf("%s.deleted.%s", np.subjectPrefix, id)
deleteMsg := map[string]interface{}{
"event_id": id,
"deleted_at": time.Now(),
}
data, err := json.Marshal(deleteMsg)
if err != nil {
return fmt.Errorf("failed to marshal delete message: %w", err)
}
if _, err := np.js.Publish(ctx, subject, data); err != nil {
return fmt.Errorf("failed to publish delete message: %w", err)
}
return nil
}
// Stream returns a channel of events for real-time consumption
func (np *NATSProvider) Stream(ctx context.Context, pattern string) (<-chan *Event, error) {
ch := make(chan *Event, 100)
// Convert glob pattern to NATS subject pattern
natsSubject := np.patternToSubject(pattern)
// Create durable consumer
consumerName := fmt.Sprintf("consumer-%s-%d", np.instanceID, time.Now().UnixNano())
consumer, err := np.stream.CreateOrUpdateConsumer(ctx, jetstream.ConsumerConfig{
Name: consumerName,
FilterSubject: natsSubject,
DeliverPolicy: jetstream.DeliverNewPolicy,
AckPolicy: jetstream.AckExplicitPolicy,
AckWait: 30 * time.Second,
})
if err != nil {
return nil, fmt.Errorf("failed to create consumer: %w", err)
}
subCtx, cancel := context.WithCancel(ctx)
sub := &natsSubscription{
pattern: pattern,
consumer: consumer,
ch: ch,
ctx: subCtx,
cancel: cancel,
}
np.mu.Lock()
np.subscribers[pattern] = sub
np.stats.ActiveSubscribers.Add(1)
np.mu.Unlock()
// Start consumer goroutine
np.wg.Add(1)
go np.consumeMessages(sub)
return ch, nil
}
// Publish publishes an event to all subscribers
func (np *NATSProvider) Publish(ctx context.Context, event *Event) error {
// Store the event first
if err := np.Store(ctx, event); err != nil {
return err
}
np.stats.EventsPublished.Add(1)
return nil
}
// Close closes the provider and releases resources
func (np *NATSProvider) Close() error {
if !np.isRunning.Load() {
return nil
}
np.isRunning.Store(false)
// Cancel all subscriptions
np.mu.Lock()
for _, sub := range np.subscribers {
sub.cancel()
}
np.mu.Unlock()
// Wait for goroutines
np.wg.Wait()
// Close NATS connection
np.nc.Close()
logger.Info("NATS provider closed")
return nil
}
// Stats returns provider statistics
func (np *NATSProvider) Stats(ctx context.Context) (*ProviderStats, error) {
streamInfo, err := np.stream.Info(ctx)
if err != nil {
logger.Warn("Failed to get stream info: %v", err)
}
stats := &ProviderStats{
ProviderType: "nats",
TotalEvents: np.stats.TotalEvents.Load(),
EventsPublished: np.stats.EventsPublished.Load(),
EventsConsumed: np.stats.EventsConsumed.Load(),
ActiveSubscribers: int(np.stats.ActiveSubscribers.Load()),
ProviderSpecific: map[string]interface{}{
"stream_name": np.streamName,
"subject_prefix": np.subjectPrefix,
"max_age": np.maxAge.String(),
"consumer_errors": np.stats.ConsumerErrors.Load(),
},
}
if streamInfo != nil {
stats.ProviderSpecific["messages"] = streamInfo.State.Msgs
stats.ProviderSpecific["bytes"] = streamInfo.State.Bytes
stats.ProviderSpecific["consumers"] = streamInfo.State.Consumers
}
return stats, nil
}
// ensureStream creates or updates the JetStream stream
func (np *NATSProvider) ensureStream(ctx context.Context, storage jetstream.StorageType) error {
streamConfig := jetstream.StreamConfig{
Name: np.streamName,
Subjects: []string{np.subjectPrefix + ".>"},
MaxAge: np.maxAge,
Storage: storage,
Retention: jetstream.LimitsPolicy,
Discard: jetstream.DiscardOld,
}
stream, err := np.js.CreateStream(ctx, streamConfig)
if err != nil {
// Try to update if already exists
stream, err = np.js.UpdateStream(ctx, streamConfig)
if err != nil {
return fmt.Errorf("failed to create/update stream: %w", err)
}
}
np.stream = stream
return nil
}
// consumeMessages consumes messages from NATS for a subscription
func (np *NATSProvider) consumeMessages(sub *natsSubscription) {
defer np.wg.Done()
defer close(sub.ch)
defer func() {
np.mu.Lock()
delete(np.subscribers, sub.pattern)
np.stats.ActiveSubscribers.Add(-1)
np.mu.Unlock()
}()
logger.Debug("Starting NATS consumer for pattern: %s", sub.pattern)
// Consume messages
cc, err := sub.consumer.Consume(func(msg jetstream.Msg) {
var event Event
if err := json.Unmarshal(msg.Data(), &event); err != nil {
logger.Warn("Failed to unmarshal event: %v", err)
_ = msg.Nak()
return
}
// Check if event matches pattern (additional filtering)
if matchPattern(sub.pattern, event.Type) {
select {
case sub.ch <- &event:
np.stats.EventsConsumed.Add(1)
_ = msg.Ack()
case <-sub.ctx.Done():
_ = msg.Nak()
return
}
} else {
_ = msg.Ack()
}
})
if err != nil {
np.stats.ConsumerErrors.Add(1)
logger.Error("Failed to start consumer: %v", err)
return
}
// Wait for context cancellation
<-sub.ctx.Done()
// Stop consuming
cc.Stop()
logger.Debug("NATS consumer stopped for pattern: %s", sub.pattern)
}
// buildSubject creates a NATS subject from an event
// Format: events.{source}.{schema}.{entity}.{operation}
func (np *NATSProvider) buildSubject(event *Event) string {
return fmt.Sprintf("%s.%s.%s.%s.%s",
np.subjectPrefix,
event.Source,
event.Schema,
event.Entity,
event.Operation,
)
}
// patternToSubject converts a glob pattern to NATS subject pattern
// Examples:
// - "*" -> "events.>"
// - "public.users.*" -> "events.*.public.users.*"
// - "public.*.*" -> "events.*.public.*.*"
func (np *NATSProvider) patternToSubject(pattern string) string {
if pattern == "*" {
return np.subjectPrefix + ".>"
}
// For specific patterns, we need to match the event type structure
// Event type: schema.entity.operation
// NATS subject: events.{source}.{schema}.{entity}.{operation}
// We use wildcard for source since pattern doesn't include it
return fmt.Sprintf("%s.*.%s", np.subjectPrefix, pattern)
}
// matchesFilter checks if an event matches the filter criteria
func (np *NATSProvider) matchesFilter(event *Event, filter *EventFilter) bool {
if filter == nil {
return true
}
if filter.Source != nil && event.Source != *filter.Source {
return false
}
if filter.Status != nil && event.Status != *filter.Status {
return false
}
if filter.UserID != nil && event.UserID != *filter.UserID {
return false
}
if filter.Schema != "" && event.Schema != filter.Schema {
return false
}
if filter.Entity != "" && event.Entity != filter.Entity {
return false
}
if filter.Operation != "" && event.Operation != filter.Operation {
return false
}
if filter.InstanceID != "" && event.InstanceID != filter.InstanceID {
return false
}
if filter.StartTime != nil && event.CreatedAt.Before(*filter.StartTime) {
return false
}
if filter.EndTime != nil && event.CreatedAt.After(*filter.EndTime) {
return false
}
return true
}


@@ -0,0 +1,541 @@
package eventbroker
import (
"context"
"encoding/json"
"fmt"
"sync"
"sync/atomic"
"time"
"github.com/redis/go-redis/v9"
"github.com/bitechdev/ResolveSpec/pkg/logger"
)
// RedisProvider implements Provider interface using Redis Streams
// Features:
// - Persistent event storage using Redis Streams
// - Cross-instance pub/sub using consumer groups
// - Pattern-based subscription routing
// - Automatic stream trimming to prevent unbounded growth
type RedisProvider struct {
client *redis.Client
streamName string
consumerGroup string
consumerName string
instanceID string
maxLen int64
// Subscriptions
mu sync.RWMutex
subscribers map[string]*redisSubscription
// Statistics
stats RedisProviderStats
// Lifecycle
stopListeners chan struct{}
wg sync.WaitGroup
isRunning atomic.Bool
}
// RedisProviderStats contains statistics for the Redis provider
type RedisProviderStats struct {
TotalEvents atomic.Int64
EventsPublished atomic.Int64
EventsConsumed atomic.Int64
ActiveSubscribers atomic.Int32
ConsumerErrors atomic.Int64
}
// redisSubscription represents a single subscription
type redisSubscription struct {
pattern string
ch chan *Event
ctx context.Context
cancel context.CancelFunc
}
// RedisProviderConfig configures the Redis provider
type RedisProviderConfig struct {
Host string
Port int
Password string
DB int
StreamName string
ConsumerGroup string
ConsumerName string
InstanceID string
MaxLen int64 // Maximum stream length (0 = unlimited)
}
// NewRedisProvider creates a new Redis event provider
func NewRedisProvider(cfg RedisProviderConfig) (*RedisProvider, error) {
// Apply defaults
if cfg.Host == "" {
cfg.Host = "localhost"
}
if cfg.Port == 0 {
cfg.Port = 6379
}
if cfg.StreamName == "" {
cfg.StreamName = "resolvespec:events"
}
if cfg.ConsumerGroup == "" {
cfg.ConsumerGroup = "resolvespec-workers"
}
if cfg.ConsumerName == "" {
cfg.ConsumerName = cfg.InstanceID
}
if cfg.MaxLen == 0 {
cfg.MaxLen = 10000 // Default max stream length
}
// Create Redis client
client := redis.NewClient(&redis.Options{
Addr: fmt.Sprintf("%s:%d", cfg.Host, cfg.Port),
Password: cfg.Password,
DB: cfg.DB,
PoolSize: 10,
})
// Test connection
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
if err := client.Ping(ctx).Err(); err != nil {
return nil, fmt.Errorf("failed to connect to Redis: %w", err)
}
rp := &RedisProvider{
client: client,
streamName: cfg.StreamName,
consumerGroup: cfg.ConsumerGroup,
consumerName: cfg.ConsumerName,
instanceID: cfg.InstanceID,
maxLen: cfg.MaxLen,
subscribers: make(map[string]*redisSubscription),
stopListeners: make(chan struct{}),
}
rp.isRunning.Store(true)
// Create consumer group if it doesn't exist
if err := rp.ensureConsumerGroup(ctx); err != nil {
logger.Warn("Failed to create consumer group: %v (may already exist)", err)
}
logger.Info("Redis provider initialized (stream: %s, consumer_group: %s, consumer: %s)",
cfg.StreamName, cfg.ConsumerGroup, cfg.ConsumerName)
return rp, nil
}
// Store stores an event
func (rp *RedisProvider) Store(ctx context.Context, event *Event) error {
// Marshal event to JSON
data, err := json.Marshal(event)
if err != nil {
return fmt.Errorf("failed to marshal event: %w", err)
}
// Store in Redis Stream
args := &redis.XAddArgs{
Stream: rp.streamName,
MaxLen: rp.maxLen,
Approx: true, // Use approximate trimming for better performance
Values: map[string]interface{}{
"event": data,
"id": event.ID,
"type": event.Type,
"source": string(event.Source),
"status": string(event.Status),
"instance_id": event.InstanceID,
},
}
if _, err := rp.client.XAdd(ctx, args).Result(); err != nil {
return fmt.Errorf("failed to add event to stream: %w", err)
}
rp.stats.TotalEvents.Add(1)
return nil
}
// Get retrieves an event by ID
// Note: This scans the stream which can be slow for large streams
// Consider using a separate hash for fast lookups if needed
func (rp *RedisProvider) Get(ctx context.Context, id string) (*Event, error) {
// Scan stream for event with matching ID
args := &redis.XReadArgs{
Streams: []string{rp.streamName, "0"},
Count: 1000, // Read in batches
}
for {
streams, err := rp.client.XRead(ctx, args).Result()
if err == redis.Nil {
return nil, fmt.Errorf("event not found: %s", id)
}
if err != nil {
return nil, fmt.Errorf("failed to read stream: %w", err)
}
if len(streams) == 0 {
return nil, fmt.Errorf("event not found: %s", id)
}
for _, stream := range streams {
for _, message := range stream.Messages {
// Check if this is the event we're looking for
if eventID, ok := message.Values["id"].(string); ok && eventID == id {
// Parse event
if eventData, ok := message.Values["event"].(string); ok {
var event Event
if err := json.Unmarshal([]byte(eventData), &event); err != nil {
return nil, fmt.Errorf("failed to unmarshal event: %w", err)
}
return &event, nil
}
}
}
// If we've read messages, update start position for next iteration
if len(stream.Messages) > 0 {
args.Streams[1] = stream.Messages[len(stream.Messages)-1].ID
} else {
// No more messages
return nil, fmt.Errorf("event not found: %s", id)
}
}
}
}
// List lists events with optional filters
// Note: This scans the entire stream which can be slow
// Consider using time-based or ID-based ranges for better performance
func (rp *RedisProvider) List(ctx context.Context, filter *EventFilter) ([]*Event, error) {
var results []*Event
// Read from stream
args := &redis.XReadArgs{
Streams: []string{rp.streamName, "0"},
Count: 1000,
}
for {
streams, err := rp.client.XRead(ctx, args).Result()
if err == redis.Nil {
break
}
if err != nil {
return nil, fmt.Errorf("failed to read stream: %w", err)
}
if len(streams) == 0 {
break
}
for _, stream := range streams {
for _, message := range stream.Messages {
if eventData, ok := message.Values["event"].(string); ok {
var event Event
if err := json.Unmarshal([]byte(eventData), &event); err != nil {
logger.Warn("Failed to unmarshal event: %v", err)
continue
}
if rp.matchesFilter(&event, filter) {
results = append(results, &event)
}
}
}
// Update start position for next iteration
if len(stream.Messages) > 0 {
args.Streams[1] = stream.Messages[len(stream.Messages)-1].ID
} else {
// No more messages
goto done
}
}
}
done:
// Apply limit and offset
if filter != nil {
if filter.Offset > 0 && filter.Offset < len(results) {
results = results[filter.Offset:]
}
if filter.Limit > 0 && filter.Limit < len(results) {
results = results[:filter.Limit]
}
}
return results, nil
}
// UpdateStatus updates the status of an event
// Note: Redis Streams are append-only, so we need to store status updates separately
// This uses a separate hash for status tracking
func (rp *RedisProvider) UpdateStatus(ctx context.Context, id string, status EventStatus, errorMsg string) error {
statusKey := fmt.Sprintf("%s:status:%s", rp.streamName, id)
fields := map[string]interface{}{
"status": string(status),
"updated_at": time.Now().Format(time.RFC3339),
}
if errorMsg != "" {
fields["error"] = errorMsg
}
if err := rp.client.HSet(ctx, statusKey, fields).Err(); err != nil {
return fmt.Errorf("failed to update status: %w", err)
}
// Set TTL on status key to prevent unbounded growth
rp.client.Expire(ctx, statusKey, 7*24*time.Hour) // 7 days
return nil
}
// Delete deletes an event by ID
// Note: Redis Streams don't support deletion by field value
// This marks the event as deleted in a separate set
func (rp *RedisProvider) Delete(ctx context.Context, id string) error {
deletedKey := fmt.Sprintf("%s:deleted", rp.streamName)
if err := rp.client.SAdd(ctx, deletedKey, id).Err(); err != nil {
return fmt.Errorf("failed to mark event as deleted: %w", err)
}
// Also delete the status hash if it exists
statusKey := fmt.Sprintf("%s:status:%s", rp.streamName, id)
rp.client.Del(ctx, statusKey)
return nil
}
// Stream returns a channel of events for real-time consumption
// Uses Redis Streams consumer group for distributed processing
func (rp *RedisProvider) Stream(ctx context.Context, pattern string) (<-chan *Event, error) {
ch := make(chan *Event, 100)
subCtx, cancel := context.WithCancel(ctx)
sub := &redisSubscription{
pattern: pattern,
ch: ch,
ctx: subCtx,
cancel: cancel,
}
rp.mu.Lock()
rp.subscribers[pattern] = sub
rp.stats.ActiveSubscribers.Add(1)
rp.mu.Unlock()
// Start consumer goroutine
rp.wg.Add(1)
go rp.consumeStream(sub)
return ch, nil
}
// Publish publishes an event to all subscribers (cross-instance)
func (rp *RedisProvider) Publish(ctx context.Context, event *Event) error {
// Store the event first
if err := rp.Store(ctx, event); err != nil {
return err
}
rp.stats.EventsPublished.Add(1)
return nil
}
// Close closes the provider and releases resources
func (rp *RedisProvider) Close() error {
if !rp.isRunning.Load() {
return nil
}
rp.isRunning.Store(false)
// Cancel all subscriptions
rp.mu.Lock()
for _, sub := range rp.subscribers {
sub.cancel()
}
rp.mu.Unlock()
// Stop listeners
close(rp.stopListeners)
// Wait for goroutines
rp.wg.Wait()
// Close Redis client
if err := rp.client.Close(); err != nil {
return fmt.Errorf("failed to close Redis client: %w", err)
}
logger.Info("Redis provider closed")
return nil
}
// Stats returns provider statistics
func (rp *RedisProvider) Stats(ctx context.Context) (*ProviderStats, error) {
// Get stream info
streamInfo, err := rp.client.XInfoStream(ctx, rp.streamName).Result()
if err != nil && err != redis.Nil {
logger.Warn("Failed to get stream info: %v", err)
}
stats := &ProviderStats{
ProviderType: "redis",
TotalEvents: rp.stats.TotalEvents.Load(),
EventsPublished: rp.stats.EventsPublished.Load(),
EventsConsumed: rp.stats.EventsConsumed.Load(),
ActiveSubscribers: int(rp.stats.ActiveSubscribers.Load()),
ProviderSpecific: map[string]interface{}{
"stream_name": rp.streamName,
"consumer_group": rp.consumerGroup,
"consumer_name": rp.consumerName,
"max_len": rp.maxLen,
"consumer_errors": rp.stats.ConsumerErrors.Load(),
},
}
if streamInfo != nil {
stats.ProviderSpecific["stream_length"] = streamInfo.Length
stats.ProviderSpecific["first_entry_id"] = streamInfo.FirstEntry.ID
stats.ProviderSpecific["last_entry_id"] = streamInfo.LastEntry.ID
}
return stats, nil
}
// consumeStream consumes events from the Redis Stream for a subscription
func (rp *RedisProvider) consumeStream(sub *redisSubscription) {
defer rp.wg.Done()
defer close(sub.ch)
defer func() {
rp.mu.Lock()
delete(rp.subscribers, sub.pattern)
rp.stats.ActiveSubscribers.Add(-1)
rp.mu.Unlock()
}()
logger.Debug("Starting stream consumer for pattern: %s", sub.pattern)
// Use consumer group for distributed processing
for {
select {
case <-sub.ctx.Done():
logger.Debug("Stream consumer stopped for pattern: %s", sub.pattern)
return
default:
// Read from consumer group
args := &redis.XReadGroupArgs{
Group: rp.consumerGroup,
Consumer: rp.consumerName,
Streams: []string{rp.streamName, ">"},
Count: 10,
Block: 1 * time.Second,
}
streams, err := rp.client.XReadGroup(sub.ctx, args).Result()
if err == redis.Nil {
continue
}
if err != nil {
if sub.ctx.Err() != nil {
return
}
rp.stats.ConsumerErrors.Add(1)
logger.Warn("Failed to read from consumer group: %v", err)
time.Sleep(1 * time.Second)
continue
}
for _, stream := range streams {
for _, message := range stream.Messages {
if eventData, ok := message.Values["event"].(string); ok {
var event Event
if err := json.Unmarshal([]byte(eventData), &event); err != nil {
logger.Warn("Failed to unmarshal event: %v", err)
// Acknowledge message anyway to prevent redelivery
rp.client.XAck(sub.ctx, rp.streamName, rp.consumerGroup, message.ID)
continue
}
// Check if event matches pattern
if matchPattern(sub.pattern, event.Type) {
select {
case sub.ch <- &event:
rp.stats.EventsConsumed.Add(1)
// Acknowledge message
rp.client.XAck(sub.ctx, rp.streamName, rp.consumerGroup, message.ID)
case <-sub.ctx.Done():
return
}
} else {
// Acknowledge message even if it doesn't match pattern
rp.client.XAck(sub.ctx, rp.streamName, rp.consumerGroup, message.ID)
}
}
}
}
}
}
}
// ensureConsumerGroup creates the consumer group if it doesn't exist
func (rp *RedisProvider) ensureConsumerGroup(ctx context.Context) error {
// Try to create the stream and consumer group
// MKSTREAM creates the stream if it doesn't exist
err := rp.client.XGroupCreateMkStream(ctx, rp.streamName, rp.consumerGroup, "0").Err()
if err != nil && err.Error() != "BUSYGROUP Consumer Group name already exists" {
return err
}
return nil
}
// matchesFilter checks if an event matches the filter criteria
func (rp *RedisProvider) matchesFilter(event *Event, filter *EventFilter) bool {
if filter == nil {
return true
}
if filter.Source != nil && event.Source != *filter.Source {
return false
}
if filter.Status != nil && event.Status != *filter.Status {
return false
}
if filter.UserID != nil && event.UserID != *filter.UserID {
return false
}
if filter.Schema != "" && event.Schema != filter.Schema {
return false
}
if filter.Entity != "" && event.Entity != filter.Entity {
return false
}
if filter.Operation != "" && event.Operation != filter.Operation {
return false
}
if filter.InstanceID != "" && event.InstanceID != filter.InstanceID {
return false
}
if filter.StartTime != nil && event.CreatedAt.Before(*filter.StartTime) {
return false
}
if filter.EndTime != nil && event.CreatedAt.After(*filter.EndTime) {
return false
}
return true
}


@@ -75,6 +75,28 @@ func CloseErrorTracking() error {
return nil
}
// extractContext attempts to find a context.Context in the given arguments.
// It returns the found context (or context.Background() if not found) and
// the remaining arguments without the context.
func extractContext(args ...interface{}) (ctx context.Context, filteredArgs []interface{}) {
ctx = context.Background()
var newArgs []interface{}
found := false
for _, arg := range args {
if c, ok := arg.(context.Context); ok {
if !found {
ctx = c
found = true
}
// Ignore any additional context.Context arguments after the first one.
continue
}
newArgs = append(newArgs, arg)
}
return ctx, newArgs
}
func Info(template string, args ...interface{}) {
if Logger == nil {
log.Printf(template, args...)
@@ -84,7 +106,8 @@ func Info(template string, args ...interface{}) {
}
func Warn(template string, args ...interface{}) {
message := fmt.Sprintf(template, args...)
ctx, remainingArgs := extractContext(args...)
message := fmt.Sprintf(template, remainingArgs...)
if Logger == nil {
log.Printf("%s", message)
} else {
@@ -93,14 +116,15 @@ func Warn(template string, args ...interface{}) {
// Send to error tracker
if errorTracker != nil {
errorTracker.CaptureMessage(context.Background(), message, errortracking.SeverityWarning, map[string]interface{}{
errorTracker.CaptureMessage(ctx, message, errortracking.SeverityWarning, map[string]interface{}{
"process_id": os.Getpid(),
})
}
}
func Error(template string, args ...interface{}) {
message := fmt.Sprintf(template, args...)
ctx, remainingArgs := extractContext(args...)
message := fmt.Sprintf(template, remainingArgs...)
if Logger == nil {
log.Printf("%s", message)
} else {
@@ -109,7 +133,7 @@ func Error(template string, args ...interface{}) {
// Send to error tracker
if errorTracker != nil {
errorTracker.CaptureMessage(context.Background(), message, errortracking.SeverityError, map[string]interface{}{
errorTracker.CaptureMessage(ctx, message, errortracking.SeverityError, map[string]interface{}{
"process_id": os.Getpid(),
})
}
@@ -124,34 +148,41 @@ func Debug(template string, args ...interface{}) {
}
// CatchPanic - Handle panic
func CatchPanicCallback(location string, cb func(err any)) {
if err := recover(); err != nil {
callstack := debug.Stack()
// Returns a function that should be deferred to catch panics
// Example usage: defer CatchPanicCallback("MyFunction", func(err any) { /* cleanup */ })()
func CatchPanicCallback(location string, cb func(err any), args ...interface{}) func() {
ctx, _ := extractContext(args...)
return func() {
if err := recover(); err != nil {
callstack := debug.Stack()
if Logger != nil {
Error("Panic in %s : %v", location, err)
} else {
fmt.Printf("%s:PANIC->%+v", location, err)
debug.PrintStack()
}
if Logger != nil {
Error("Panic in %s : %v", location, err, ctx) // Pass context implicitly
} else {
fmt.Printf("%s:PANIC->%+v", location, err)
debug.PrintStack()
}
// Send to error tracker
if errorTracker != nil {
errorTracker.CapturePanic(context.Background(), err, callstack, map[string]interface{}{
"location": location,
"process_id": os.Getpid(),
})
}
// Send to error tracker
if errorTracker != nil {
errorTracker.CapturePanic(ctx, err, callstack, map[string]interface{}{
"location": location,
"process_id": os.Getpid(),
})
}
if cb != nil {
cb(err)
if cb != nil {
cb(err)
}
}
}
}
// CatchPanic - Handle panic
func CatchPanic(location string) {
CatchPanicCallback(location, nil)
// Returns a function that should be deferred to catch panics
// Example usage: defer CatchPanic("MyFunction")()
func CatchPanic(location string, args ...interface{}) func() {
return CatchPanicCallback(location, nil, args...)
}
// HandlePanic logs a panic and returns it as an error
@@ -163,13 +194,14 @@ func CatchPanic(location string) {
// err = logger.HandlePanic("MethodName", r)
// }
// }()
func HandlePanic(methodName string, r any) error {
func HandlePanic(methodName string, r any, args ...interface{}) error {
ctx, _ := extractContext(args...)
stack := debug.Stack()
Error("Panic in %s: %v\nStack trace:\n%s", methodName, r, string(stack))
Error("Panic in %s: %v\nStack trace:\n%s", methodName, r, string(stack), ctx) // Pass context implicitly
// Send to error tracker
if errorTracker != nil {
errorTracker.CapturePanic(context.Background(), r, stack, map[string]interface{}{
errorTracker.CapturePanic(ctx, r, stack, map[string]interface{}{
"method": methodName,
"process_id": os.Getpid(),
})


@@ -39,6 +39,9 @@ type Provider interface {
// UpdateEventQueueSize updates the event queue size metric
UpdateEventQueueSize(size int64)
// RecordPanic records a panic event
RecordPanic(methodName string)
// Handler returns an HTTP handler for exposing metrics (e.g., /metrics endpoint)
Handler() http.Handler
}
@@ -75,6 +78,7 @@ func (n *NoOpProvider) RecordEventPublished(source, eventType string) {}
func (n *NoOpProvider) RecordEventProcessed(source, eventType, status string, duration time.Duration) {
}
func (n *NoOpProvider) UpdateEventQueueSize(size int64) {}
func (n *NoOpProvider) RecordPanic(methodName string) {}
func (n *NoOpProvider) Handler() http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusNotFound)


@@ -20,6 +20,7 @@ type PrometheusProvider struct {
cacheHits *prometheus.CounterVec
cacheMisses *prometheus.CounterVec
cacheSize *prometheus.GaugeVec
panicsTotal *prometheus.CounterVec
}
// NewPrometheusProvider creates a new Prometheus metrics provider
@@ -83,6 +84,13 @@ func NewPrometheusProvider() *PrometheusProvider {
},
[]string{"provider"},
),
panicsTotal: promauto.NewCounterVec(
prometheus.CounterOpts{
Name: "panics_total",
Help: "Total number of panics",
},
[]string{"method"},
),
}
}
@@ -145,6 +153,11 @@ func (p *PrometheusProvider) UpdateCacheSize(provider string, size int64) {
p.cacheSize.WithLabelValues(provider).Set(float64(size))
}
// RecordPanic implements the Provider interface
func (p *PrometheusProvider) RecordPanic(methodName string) {
p.panicsTotal.WithLabelValues(methodName).Inc()
}
// Handler implements Provider interface
func (p *PrometheusProvider) Handler() http.Handler {
return promhttp.Handler()

pkg/middleware/panic.go Normal file

@@ -0,0 +1,33 @@
package middleware
import (
"net/http"
"github.com/bitechdev/ResolveSpec/pkg/logger"
"github.com/bitechdev/ResolveSpec/pkg/metrics"
)
const panicMiddlewareMethodName = "PanicMiddleware"
// PanicRecovery is a middleware that recovers from panics, logs the error,
// sends it to an error tracker, records a metric, and returns a 500 Internal Server Error.
func PanicRecovery(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
defer func() {
if rcv := recover(); rcv != nil {
// Record the panic metric
metrics.GetProvider().RecordPanic(panicMiddlewareMethodName)
// Log the panic and send to error tracker
// We pass the request context so the error tracker can potentially
// link the panic to the request trace.
ctx := r.Context()
err := logger.HandlePanic(panicMiddlewareMethodName, rcv, ctx)
// Respond with a 500 error
http.Error(w, err.Error(), http.StatusInternalServerError)
}
}()
next.ServeHTTP(w, r)
})
}


@@ -0,0 +1,86 @@
package middleware
import (
"net/http"
"net/http/httptest"
"testing"
"github.com/bitechdev/ResolveSpec/pkg/logger"
"github.com/bitechdev/ResolveSpec/pkg/metrics"
"github.com/stretchr/testify/assert"
)
// mockMetricsProvider is a mock for the metrics provider to check if methods are called.
type mockMetricsProvider struct {
metrics.NoOpProvider // Embed NoOpProvider to avoid implementing all methods
panicRecorded bool
methodName string
}
func (m *mockMetricsProvider) RecordPanic(methodName string) {
m.panicRecorded = true
m.methodName = methodName
}
func TestPanicRecovery(t *testing.T) {
// Initialize a mock logger to avoid actual logging output during tests
logger.Init(true)
// Setup mock metrics provider
mockProvider := &mockMetricsProvider{}
originalProvider := metrics.GetProvider()
metrics.SetProvider(mockProvider)
defer metrics.SetProvider(originalProvider) // Restore original provider after test
// 1. Test case: A handler that panics
t.Run("recovers from panic and returns 500", func(t *testing.T) {
// Reset mock state for this sub-test
mockProvider.panicRecorded = false
mockProvider.methodName = ""
panicHandler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
panic("something went terribly wrong")
})
// Create the middleware wrapping the panicking handler
testHandler := PanicRecovery(panicHandler)
// Create a test request and response recorder
req := httptest.NewRequest("GET", "http://example.com/foo", nil)
rr := httptest.NewRecorder()
// Serve the request
testHandler.ServeHTTP(rr, req)
// Assertions
assert.Equal(t, http.StatusInternalServerError, rr.Code, "expected status code to be 500")
assert.Contains(t, rr.Body.String(), "panic in PanicMiddleware: something went terribly wrong", "expected error message in response body")
// Assert that the metric was recorded
assert.True(t, mockProvider.panicRecorded, "expected RecordPanic to be called on metrics provider")
assert.Equal(t, panicMiddlewareMethodName, mockProvider.methodName, "expected panic to be recorded with the correct method name")
})
// 2. Test case: A handler that does NOT panic
t.Run("does not interfere with a non-panicking handler", func(t *testing.T) {
// Reset mock state for this sub-test
mockProvider.panicRecorded = false
successHandler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusOK)
w.Write([]byte("OK"))
})
testHandler := PanicRecovery(successHandler)
req := httptest.NewRequest("GET", "http://example.com/foo", nil)
rr := httptest.NewRecorder()
testHandler.ServeHTTP(rr, req)
// Assertions
assert.Equal(t, http.StatusOK, rr.Code, "expected status code to be 200")
assert.Equal(t, "OK", rr.Body.String(), "expected 'OK' response body")
assert.False(t, mockProvider.panicRecorded, "expected RecordPanic to not be called when there is no panic")
})
}

pkg/mqttspec/README.md Normal file

@@ -0,0 +1,724 @@
# MQTTSpec - MQTT-based Database Query Framework
MQTTSpec is an MQTT-based database query framework that enables real-time database operations and subscriptions via MQTT protocol. It mirrors the functionality of WebSocketSpec but uses MQTT as the transport layer, making it ideal for IoT applications, mobile apps with unreliable networks, and distributed systems requiring QoS guarantees.
## Features
- **Dual Broker Support**: Embedded broker (Mochi MQTT) or external broker connection (Paho MQTT)
- **QoS 1 (At-least-once delivery)**: Reliable message delivery for all operations
- **Full CRUD Operations**: Create, Read, Update, Delete with hooks
- **Real-time Subscriptions**: Subscribe to entity changes with filtering
- **Database Agnostic**: GORM and Bun ORM support
- **Lifecycle Hooks**: 12 hooks for authentication, authorization, validation, and auditing
- **Multi-tenancy Support**: Built-in tenant isolation via hooks
- **Thread-safe**: Proper concurrency handling throughout
## Installation
```bash
go get github.com/bitechdev/ResolveSpec/pkg/mqttspec
```
## Quick Start
### Embedded Broker (Default)
```go
package main
import (
"github.com/bitechdev/ResolveSpec/pkg/mqttspec"
"gorm.io/driver/postgres"
"gorm.io/gorm"
)
type User struct {
ID uint `json:"id" gorm:"primaryKey"`
Name string `json:"name"`
Email string `json:"email"`
Status string `json:"status"`
}
func main() {
// Connect to database
db, _ := gorm.Open(postgres.Open("postgres://..."), &gorm.Config{})
db.AutoMigrate(&User{})
// Create MQTT handler with embedded broker
handler, err := mqttspec.NewHandlerWithGORM(db)
if err != nil {
panic(err)
}
// Register models
handler.Registry().RegisterModel("public.users", &User{})
// Start handler (starts embedded broker on localhost:1883)
if err := handler.Start(); err != nil {
panic(err)
}
// Handler is now listening for MQTT messages
select {} // Keep running
}
```
### External Broker
```go
handler, err := mqttspec.NewHandlerWithGORM(db,
mqttspec.WithExternalBroker(mqttspec.ExternalBrokerConfig{
BrokerURL: "tcp://mqtt.example.com:1883",
ClientID: "mqttspec-server",
Username: "admin",
Password: "secret",
ConnectTimeout: 10 * time.Second,
}),
)
```
### Custom Port (Embedded Broker)
```go
handler, err := mqttspec.NewHandlerWithGORM(db,
mqttspec.WithEmbeddedBroker(mqttspec.BrokerConfig{
Host: "0.0.0.0",
Port: 1884,
}),
)
```
## Topic Structure
MQTTSpec uses a client-based topic hierarchy:
```
spec/{client_id}/request # Client publishes requests
spec/{client_id}/response # Server publishes responses
spec/{client_id}/notify/{sub_id} # Server publishes notifications
```
### Wildcard Subscriptions
- **Server**: `spec/+/request` (receives all client requests)
- **Client**: `spec/{client_id}/response` + `spec/{client_id}/notify/+`
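The sketch below uses hypothetical helper functions (not part of the package) to show how these topics are derived from a client ID and subscription ID; the `spec` prefix is the default and can be changed with `WithTopicPrefix` (see Configuration Options).
```go
package main

import "fmt"

// Hypothetical helpers illustrating the topic layout described above.
func requestTopic(prefix, clientID string) string {
	return fmt.Sprintf("%s/%s/request", prefix, clientID)
}

func responseTopic(prefix, clientID string) string {
	return fmt.Sprintf("%s/%s/response", prefix, clientID)
}

func notifyTopic(prefix, clientID, subID string) string {
	return fmt.Sprintf("%s/%s/notify/%s", prefix, clientID, subID)
}

func main() {
	fmt.Println(requestTopic("spec", "client-abc123"))              // spec/client-abc123/request
	fmt.Println(responseTopic("spec", "client-abc123"))             // spec/client-abc123/response
	fmt.Println(notifyTopic("spec", "client-abc123", "sub-abc123")) // spec/client-abc123/notify/sub-abc123
}
```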
## Message Protocol
MQTTSpec uses the same JSON message structure as WebSocketSpec and ResolveSpec for consistency.
### Request Message
```json
{
"id": "msg-123",
"type": "request",
"operation": "read",
"schema": "public",
"entity": "users",
"options": {
"filters": [
{"column": "status", "operator": "eq", "value": "active"}
],
"sort": [{"column": "created_at", "direction": "desc"}],
"limit": 10
}
}
```
### Response Message
```json
{
"id": "msg-123",
"type": "response",
"success": true,
"data": [
{"id": 1, "name": "John Doe", "email": "john@example.com", "status": "active"},
{"id": 2, "name": "Jane Smith", "email": "jane@example.com", "status": "active"}
],
"metadata": {
"total": 50,
"count": 2
}
}
```
### Notification Message
```json
{
"type": "notification",
"operation": "create",
"subscription_id": "sub-xyz",
"schema": "public",
"entity": "users",
"data": {
"id": 3,
"name": "New User",
"email": "new@example.com",
"status": "active"
}
}
```
## CRUD Operations
### Read (Single Record)
**MQTT Client Publishes to**: `spec/{client_id}/request`
```json
{
"id": "msg-1",
"type": "request",
"operation": "read",
"schema": "public",
"entity": "users",
"data": {"id": 1}
}
```
**Server Publishes Response to**: `spec/{client_id}/response`
```json
{
"id": "msg-1",
"success": true,
"data": {"id": 1, "name": "John Doe", "email": "john@example.com"}
}
```
### Read (Multiple Records with Filtering)
```json
{
"id": "msg-2",
"type": "request",
"operation": "read",
"schema": "public",
"entity": "users",
"options": {
"filters": [
{"column": "status", "operator": "eq", "value": "active"}
],
"sort": [{"column": "name", "direction": "asc"}],
"limit": 20,
"offset": 0
}
}
```
### Create
```json
{
"id": "msg-3",
"type": "request",
"operation": "create",
"schema": "public",
"entity": "users",
"data": {
"name": "Alice Brown",
"email": "alice@example.com",
"status": "active"
}
}
```
### Update
```json
{
"id": "msg-4",
"type": "request",
"operation": "update",
"schema": "public",
"entity": "users",
"data": {
"id": 1,
"status": "inactive"
}
}
```
### Delete
```json
{
"id": "msg-5",
"type": "request",
"operation": "delete",
"schema": "public",
"entity": "users",
"data": {"id": 1}
}
```
## Real-time Subscriptions
### Subscribe to Entity Changes
**Client Publishes to**: `spec/{client_id}/request`
```json
{
"id": "msg-6",
"type": "subscription",
"operation": "subscribe",
"schema": "public",
"entity": "users",
"options": {
"filters": [
{"column": "status", "operator": "eq", "value": "active"}
]
}
}
```
**Server Response** (published to `spec/{client_id}/response`):
```json
{
"id": "msg-6",
"success": true,
"data": {
"subscription_id": "sub-abc123",
"notify_topic": "spec/{client_id}/notify/sub-abc123"
}
}
```
**Client Then Subscribes** to MQTT topic: `spec/{client_id}/notify/sub-abc123`
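For illustration, a minimal Go sketch of this step using `paho.mqtt.golang` (the helper name is hypothetical; `clientID` and the `subscription_id` returned by the server are passed in by the caller):
```go
package mqttspecexample

import (
	"fmt"
	"log"

	mqtt "github.com/eclipse/paho.mqtt.golang"
)

// subscribeToNotifications is a hypothetical helper: once the server has
// answered a subscribe request with a subscription_id, the client subscribes
// to the matching notify topic at QoS 1 to start receiving notifications.
func subscribeToNotifications(client mqtt.Client, clientID, subID string) error {
	topic := fmt.Sprintf("spec/%s/notify/%s", clientID, subID)
	token := client.Subscribe(topic, 1, func(_ mqtt.Client, msg mqtt.Message) {
		// Each payload is a notification JSON document (see below).
		log.Printf("notification on %s: %s", msg.Topic(), msg.Payload())
	})
	token.Wait()
	return token.Error()
}
```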
### Receiving Notifications
When any client creates/updates/deletes a user matching the subscription filters, the subscriber receives:
```json
{
"type": "notification",
"operation": "create",
"subscription_id": "sub-abc123",
"schema": "public",
"entity": "users",
"data": {
"id": 10,
"name": "New User",
"email": "newuser@example.com",
"status": "active"
}
}
```
### Unsubscribe
```json
{
"id": "msg-7",
"type": "subscription",
"operation": "unsubscribe",
"data": {
"subscription_id": "sub-abc123"
}
}
```
## Lifecycle Hooks
MQTTSpec provides 12 lifecycle hooks for implementing cross-cutting concerns:
### Hook Types
- `BeforeConnect` / `AfterConnect` - Connection lifecycle
- `BeforeDisconnect` / `AfterDisconnect` - Disconnection lifecycle
- `BeforeRead` / `AfterRead` - Read operations
- `BeforeCreate` / `AfterCreate` - Create operations
- `BeforeUpdate` / `AfterUpdate` - Update operations
- `BeforeDelete` / `AfterDelete` - Delete operations
- `BeforeSubscribe` / `AfterSubscribe` - Subscription creation
- `BeforeUnsubscribe` / `AfterUnsubscribe` - Subscription removal
### Authentication Example (JWT)
```go
handler.Hooks().Register(mqttspec.BeforeConnect, func(ctx *mqttspec.HookContext) error {
client := ctx.Metadata["mqtt_client"].(*mqttspec.Client)
// MQTT username contains JWT token
token := client.Username
claims, err := jwt.Validate(token)
if err != nil {
return fmt.Errorf("invalid token: %w", err)
}
// Store user info in client metadata for later use
client.SetMetadata("user_id", claims.UserID)
client.SetMetadata("tenant_id", claims.TenantID)
client.SetMetadata("roles", claims.Roles)
logger.Info("Client authenticated: user_id=%d, tenant=%s", claims.UserID, claims.TenantID)
return nil
})
```
### Multi-tenancy Example
```go
// Auto-inject tenant filter for all read operations
handler.Hooks().Register(mqttspec.BeforeRead, func(ctx *mqttspec.HookContext) error {
client := ctx.Metadata["mqtt_client"].(*mqttspec.Client)
tenantID, _ := client.GetMetadata("tenant_id")
// Add tenant filter to ensure users only see their own data
ctx.Options.Filters = append(ctx.Options.Filters, common.FilterOption{
Column: "tenant_id",
Operator: "eq",
Value: tenantID,
})
return nil
})
// Auto-set tenant_id for all create operations
handler.Hooks().Register(mqttspec.BeforeCreate, func(ctx *mqttspec.HookContext) error {
client := ctx.Metadata["mqtt_client"].(*mqttspec.Client)
tenantID, _ := client.GetMetadata("tenant_id")
// Inject tenant_id into new records
if dataMap, ok := ctx.Data.(map[string]interface{}); ok {
dataMap["tenant_id"] = tenantID
}
return nil
})
```
### Role-based Access Control (RBAC)
```go
handler.Hooks().Register(mqttspec.BeforeDelete, func(ctx *mqttspec.HookContext) error {
client := ctx.Metadata["mqtt_client"].(*mqttspec.Client)
roles, _ := client.GetMetadata("roles")
roleList := roles.([]string)
hasAdminRole := false
for _, role := range roleList {
if role == "admin" {
hasAdminRole = true
break
}
}
if !hasAdminRole {
return fmt.Errorf("permission denied: delete requires admin role")
}
return nil
})
```
### Audit Logging Example
```go
handler.Hooks().Register(mqttspec.AfterCreate, func(ctx *mqttspec.HookContext) error {
client := ctx.Metadata["mqtt_client"].(*mqttspec.Client)
userID, _ := client.GetMetadata("user_id")
logger.Info("Audit: user %d created %s.%s record: %+v",
userID, ctx.Schema, ctx.Entity, ctx.Result)
// Could also write to audit log table
return nil
})
```
## Client Examples
### JavaScript (MQTT.js)
```javascript
const mqtt = require('mqtt');
// Connect to MQTT broker
const client = mqtt.connect('mqtt://localhost:1883', {
clientId: 'client-abc123',
username: 'your-jwt-token',
password: '', // JWT in username, password can be empty
});
client.on('connect', () => {
console.log('Connected to MQTT broker');
// Subscribe to responses
client.subscribe('spec/client-abc123/response');
// Read users
const readMsg = {
id: 'msg-1',
type: 'request',
operation: 'read',
schema: 'public',
entity: 'users',
options: {
filters: [
{ column: 'status', operator: 'eq', value: 'active' }
]
}
};
client.publish('spec/client-abc123/request', JSON.stringify(readMsg));
});
client.on('message', (topic, payload) => {
const message = JSON.parse(payload.toString());
console.log('Received:', message);
if (message.type === 'response') {
console.log('Response data:', message.data);
} else if (message.type === 'notification') {
console.log('Notification:', message.operation, message.data);
}
});
```
### Python (paho-mqtt)
```python
import paho.mqtt.client as mqtt
import json
client_id = 'client-python-123'
def on_connect(client, userdata, flags, rc):
print(f"Connected with result code {rc}")
# Subscribe to responses
client.subscribe(f"spec/{client_id}/response")
# Create a user
create_msg = {
'id': 'msg-create-1',
'type': 'request',
'operation': 'create',
'schema': 'public',
'entity': 'users',
'data': {
'name': 'Python User',
'email': 'python@example.com',
'status': 'active'
}
}
client.publish(f"spec/{client_id}/request", json.dumps(create_msg))
def on_message(client, userdata, msg):
message = json.loads(msg.payload.decode())
print(f"Received on {msg.topic}: {message}")
client = mqtt.Client(client_id=client_id)
client.username_pw_set('your-jwt-token', '')
client.on_connect = on_connect
client.on_message = on_message
client.connect('localhost', 1883, 60)
client.loop_forever()
```
### Go (paho.mqtt.golang)
```go
package main
import (
"encoding/json"
"fmt"
"time"
mqtt "github.com/eclipse/paho.mqtt.golang"
)
func main() {
clientID := "client-go-123"
opts := mqtt.NewClientOptions()
opts.AddBroker("tcp://localhost:1883")
opts.SetClientID(clientID)
opts.SetUsername("your-jwt-token")
opts.SetPassword("")
opts.SetDefaultPublishHandler(func(client mqtt.Client, msg mqtt.Message) {
var message map[string]interface{}
json.Unmarshal(msg.Payload(), &message)
fmt.Printf("Received on %s: %+v\n", msg.Topic(), message)
})
opts.OnConnect = func(client mqtt.Client) {
fmt.Println("Connected to MQTT broker")
// Subscribe to responses
client.Subscribe(fmt.Sprintf("spec/%s/response", clientID), 1, nil)
// Read users
readMsg := map[string]interface{}{
"id": "msg-1",
"type": "request",
"operation": "read",
"schema": "public",
"entity": "users",
"options": map[string]interface{}{
"filters": []map[string]interface{}{
{"column": "status", "operator": "eq", "value": "active"},
},
},
}
payload, _ := json.Marshal(readMsg)
client.Publish(fmt.Sprintf("spec/%s/request", clientID), 1, false, payload)
}
client := mqtt.NewClient(opts)
if token := client.Connect(); token.Wait() && token.Error() != nil {
panic(token.Error())
}
// Keep running
select {}
}
```
## Configuration Options
### BrokerConfig (Embedded Broker)
```go
type BrokerConfig struct {
Host string // Default: "localhost"
Port int // Default: 1883
EnableWebSocket bool // Enable WebSocket listener
WSPort int // WebSocket port (default: 1884)
MaxConnections int // Max concurrent connections
KeepAlive time.Duration // MQTT keep-alive interval
EnableAuth bool // Enable authentication
}
```
### ExternalBrokerConfig
```go
type ExternalBrokerConfig struct {
BrokerURL string // MQTT broker URL (tcp://host:port)
ClientID string // MQTT client ID
Username string // MQTT username
Password string // MQTT password
CleanSession bool // Clean session flag
KeepAlive time.Duration // Keep-alive interval
ConnectTimeout time.Duration // Connection timeout
ReconnectDelay time.Duration // Auto-reconnect delay
MaxReconnect int // Max reconnect attempts
TLSConfig *tls.Config // TLS configuration
}
```
### QoS Configuration
```go
handler, err := mqttspec.NewHandlerWithGORM(db,
mqttspec.WithQoS(1, 1, 1), // Request, Response, Notification
)
```
### Topic Prefix
```go
handler, err := mqttspec.NewHandlerWithGORM(db,
mqttspec.WithTopicPrefix("myapp"), // Changes topics to myapp/{client_id}/...
)
```
## Documentation References
- **ResolveSpec JSON Protocol**: See `/pkg/resolvespec/README.md` for the full message protocol specification
- **WebSocketSpec Documentation**: See `/pkg/websocketspec/README.md` for similar WebSocket-based implementation
- **Common Interfaces**: See `/pkg/common/types.go` for database adapter interfaces and query options
- **Model Registry**: See `/pkg/modelregistry/README.md` for model registration and reflection
- **Hooks Reference**: See `/pkg/websocketspec/hooks.go` for hook types (same as MQTTSpec)
- **Subscription Management**: See `/pkg/websocketspec/subscription.go` for subscription filtering
## Comparison: MQTTSpec vs WebSocketSpec
| Feature | MQTTSpec | WebSocketSpec |
|---------|----------|---------------|
| **Transport** | MQTT (pub/sub broker) | WebSocket (direct connection) |
| **Connection Model** | Broker-mediated | Direct client-server |
| **QoS Levels** | QoS 0, 1, 2 support | No built-in QoS |
| **Offline Messages** | Yes (with QoS 1+) | No |
| **Auto-reconnect** | Yes (built into MQTT) | Manual implementation needed |
| **Network Efficiency** | Better for unreliable networks | Better for low-latency |
| **Best For** | IoT, mobile apps, distributed systems | Web applications, real-time dashboards |
| **Message Protocol** | Same JSON structure | Same JSON structure |
| **Hooks** | Same 12 hooks | Same 12 hooks |
| **CRUD Operations** | Identical | Identical |
| **Subscriptions** | Identical (delivered via MQTT topics) | Identical (via application-level routing) |
## Use Cases
### IoT Sensor Data
```go
// Sensors publish data, backend stores and notifies subscribers
handler.Registry().RegisterModel("public.sensor_readings", &SensorReading{})
// Auto-set device_id from client metadata
handler.Hooks().Register(mqttspec.BeforeCreate, func(ctx *mqttspec.HookContext) error {
client := ctx.Metadata["mqtt_client"].(*mqttspec.Client)
deviceID, _ := client.GetMetadata("device_id")
if ctx.Entity == "sensor_readings" {
if dataMap, ok := ctx.Data.(map[string]interface{}); ok {
dataMap["device_id"] = deviceID
dataMap["timestamp"] = time.Now()
}
}
return nil
})
```
### Mobile App with Offline Support
With QoS 1 and a persistent (non-clean) MQTT session, the broker queues messages while a client is offline and redelivers them when it reconnects, so a temporary disconnect does not lose data.
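As a rough client-side sketch of what this relies on (assuming the `paho.mqtt.golang` client and the default embedded broker on `localhost:1883`): use a stable client ID, a non-clean session, and QoS 1 subscriptions so queued messages are redelivered on reconnect.
```go
package main

import (
	"log"

	mqtt "github.com/eclipse/paho.mqtt.golang"
)

func main() {
	opts := mqtt.NewClientOptions().
		AddBroker("tcp://localhost:1883").
		SetClientID("mobile-client-42"). // stable client ID so the session survives reconnects
		SetCleanSession(false).          // persistent session: broker retains QoS 1 messages while offline
		SetAutoReconnect(true)

	client := mqtt.NewClient(opts)
	if token := client.Connect(); token.Wait() && token.Error() != nil {
		log.Fatal(token.Error())
	}

	// QoS 1 subscription: messages queued while offline are redelivered on reconnect.
	if token := client.Subscribe("spec/mobile-client-42/response", 1, func(_ mqtt.Client, m mqtt.Message) {
		log.Printf("received: %s", m.Payload())
	}); token.Wait() && token.Error() != nil {
		log.Fatal(token.Error())
	}

	select {} // keep running
}
```
The same settings apply to the notification topics; the stable client ID is what lets the broker associate the queued messages with the returning client.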
### Distributed Microservices
Multiple services can subscribe to entity changes and react accordingly.
## Testing
Run unit tests:
```bash
go test -v ./pkg/mqttspec
```
Run with race detection:
```bash
go test -race -v ./pkg/mqttspec
```
## License
This package is part of the ResolveSpec project.
## Contributing
Contributions are welcome! Please ensure:
- All tests pass (`go test ./pkg/mqttspec`)
- No race conditions (`go test -race ./pkg/mqttspec`)
- Documentation is updated
- Examples are provided for new features
## Support
For issues, questions, or feature requests, please open an issue in the ResolveSpec repository.

pkg/mqttspec/broker.go Normal file

@@ -0,0 +1,417 @@
package mqttspec
import (
"context"
"fmt"
"sync"
"time"
mqtt "github.com/mochi-mqtt/server/v2"
"github.com/mochi-mqtt/server/v2/listeners"
pahomqtt "github.com/eclipse/paho.mqtt.golang"
"github.com/bitechdev/ResolveSpec/pkg/logger"
)
// BrokerInterface abstracts MQTT broker operations
type BrokerInterface interface {
// Start initializes the broker/client connection
Start(ctx context.Context) error
// Stop gracefully shuts down the broker/client
Stop(ctx context.Context) error
// Publish sends a message to a topic
Publish(topic string, qos byte, payload []byte) error
// Subscribe subscribes to a topic pattern with callback
Subscribe(topicFilter string, qos byte, callback MessageCallback) error
// Unsubscribe removes subscription
Unsubscribe(topicFilter string) error
// IsConnected returns connection status
IsConnected() bool
// GetClientManager returns the client manager
GetClientManager() *ClientManager
// SetHandler sets the handler reference (needed for hooks)
SetHandler(handler *Handler)
}
// MessageCallback is called when a message arrives
type MessageCallback func(topic string, payload []byte)
// EmbeddedBroker wraps Mochi MQTT server
type EmbeddedBroker struct {
config BrokerConfig
server *mqtt.Server
clientManager *ClientManager
handler *Handler
subscriptions map[string]MessageCallback
subMu sync.RWMutex
ctx context.Context
cancel context.CancelFunc
mu sync.RWMutex
started bool
}
// NewEmbeddedBroker creates a new embedded broker
func NewEmbeddedBroker(config BrokerConfig, clientManager *ClientManager) *EmbeddedBroker {
return &EmbeddedBroker{
config: config,
clientManager: clientManager,
subscriptions: make(map[string]MessageCallback),
}
}
// SetHandler sets the handler reference
func (eb *EmbeddedBroker) SetHandler(handler *Handler) {
eb.mu.Lock()
defer eb.mu.Unlock()
eb.handler = handler
}
// Start starts the embedded MQTT broker
func (eb *EmbeddedBroker) Start(ctx context.Context) error {
eb.mu.Lock()
defer eb.mu.Unlock()
if eb.started {
return fmt.Errorf("broker already started")
}
eb.ctx, eb.cancel = context.WithCancel(ctx)
// Create Mochi MQTT server
eb.server = mqtt.New(&mqtt.Options{
InlineClient: true,
})
// Note: Authentication is handled at the handler level via BeforeConnect hook
// Mochi MQTT auth can be configured via custom hooks if needed
// Add TCP listener
tcp := listeners.NewTCP(
listeners.Config{
ID: "tcp",
Address: fmt.Sprintf("%s:%d", eb.config.Host, eb.config.Port),
},
)
if err := eb.server.AddListener(tcp); err != nil {
return fmt.Errorf("failed to add TCP listener: %w", err)
}
// Add WebSocket listener if enabled
if eb.config.EnableWebSocket {
ws := listeners.NewWebsocket(
listeners.Config{
ID: "ws",
Address: fmt.Sprintf("%s:%d", eb.config.Host, eb.config.WSPort),
},
)
if err := eb.server.AddListener(ws); err != nil {
return fmt.Errorf("failed to add WebSocket listener: %w", err)
}
}
// Start server in goroutine
go func() {
if err := eb.server.Serve(); err != nil {
logger.Error("[MQTTSpec] Embedded broker error: %v", err)
}
}()
// Wait for server to be ready
select {
case <-time.After(2 * time.Second):
// Server should be ready
case <-eb.ctx.Done():
return fmt.Errorf("context cancelled during startup")
}
eb.started = true
logger.Info("[MQTTSpec] Embedded broker started on %s:%d", eb.config.Host, eb.config.Port)
return nil
}
// Stop stops the embedded broker
func (eb *EmbeddedBroker) Stop(ctx context.Context) error {
eb.mu.Lock()
defer eb.mu.Unlock()
if !eb.started {
return nil
}
if eb.cancel != nil {
eb.cancel()
}
if eb.server != nil {
if err := eb.server.Close(); err != nil {
logger.Error("[MQTTSpec] Error closing embedded broker: %v", err)
}
}
eb.started = false
logger.Info("[MQTTSpec] Embedded broker stopped")
return nil
}
// Publish publishes a message to a topic
func (eb *EmbeddedBroker) Publish(topic string, qos byte, payload []byte) error {
if !eb.started {
return fmt.Errorf("broker not started")
}
if eb.server == nil {
return fmt.Errorf("server not initialized")
}
// Use inline client to publish
return eb.server.Publish(topic, payload, false, qos)
}
// Subscribe subscribes to a topic
func (eb *EmbeddedBroker) Subscribe(topicFilter string, qos byte, callback MessageCallback) error {
if !eb.started {
return fmt.Errorf("broker not started")
}
// Store callback
eb.subMu.Lock()
eb.subscriptions[topicFilter] = callback
eb.subMu.Unlock()
// Create inline subscription handler
// Note: Mochi MQTT internal subscriptions are more complex
// For now, we'll use a publishing hook to intercept messages
// This is a simplified implementation
logger.Info("[MQTTSpec] Subscribed to topic filter: %s", topicFilter)
return nil
}
// Unsubscribe unsubscribes from a topic
func (eb *EmbeddedBroker) Unsubscribe(topicFilter string) error {
eb.subMu.Lock()
defer eb.subMu.Unlock()
delete(eb.subscriptions, topicFilter)
logger.Info("[MQTTSpec] Unsubscribed from topic filter: %s", topicFilter)
return nil
}
// IsConnected returns whether the broker is running
func (eb *EmbeddedBroker) IsConnected() bool {
eb.mu.RLock()
defer eb.mu.RUnlock()
return eb.started
}
// GetClientManager returns the client manager
func (eb *EmbeddedBroker) GetClientManager() *ClientManager {
return eb.clientManager
}
// ExternalBrokerClient wraps Paho MQTT client
type ExternalBrokerClient struct {
config ExternalBrokerConfig
client pahomqtt.Client
clientManager *ClientManager
handler *Handler
subscriptions map[string]MessageCallback
subMu sync.RWMutex
ctx context.Context
cancel context.CancelFunc
mu sync.RWMutex
connected bool
}
// NewExternalBrokerClient creates a new external broker client
func NewExternalBrokerClient(config ExternalBrokerConfig, clientManager *ClientManager) *ExternalBrokerClient {
return &ExternalBrokerClient{
config: config,
clientManager: clientManager,
subscriptions: make(map[string]MessageCallback),
}
}
// SetHandler sets the handler reference
func (ebc *ExternalBrokerClient) SetHandler(handler *Handler) {
ebc.mu.Lock()
defer ebc.mu.Unlock()
ebc.handler = handler
}
// Start connects to the external MQTT broker
func (ebc *ExternalBrokerClient) Start(ctx context.Context) error {
ebc.mu.Lock()
defer ebc.mu.Unlock()
if ebc.connected {
return fmt.Errorf("already connected")
}
ebc.ctx, ebc.cancel = context.WithCancel(ctx)
// Create Paho client options
opts := pahomqtt.NewClientOptions()
opts.AddBroker(ebc.config.BrokerURL)
opts.SetClientID(ebc.config.ClientID)
opts.SetUsername(ebc.config.Username)
opts.SetPassword(ebc.config.Password)
opts.SetCleanSession(ebc.config.CleanSession)
opts.SetKeepAlive(ebc.config.KeepAlive)
opts.SetAutoReconnect(true)
opts.SetMaxReconnectInterval(ebc.config.ReconnectDelay)
// Set connection lost handler
opts.SetConnectionLostHandler(func(client pahomqtt.Client, err error) {
logger.Error("[MQTTSpec] External broker connection lost: %v", err)
ebc.mu.Lock()
ebc.connected = false
ebc.mu.Unlock()
})
// Set on-connect handler
opts.SetOnConnectHandler(func(client pahomqtt.Client) {
logger.Info("[MQTTSpec] Connected to external broker")
ebc.mu.Lock()
ebc.connected = true
ebc.mu.Unlock()
// Resubscribe to topics
ebc.resubscribeAll()
})
// Create and connect client
ebc.client = pahomqtt.NewClient(opts)
token := ebc.client.Connect()
if !token.WaitTimeout(ebc.config.ConnectTimeout) {
return fmt.Errorf("connection timeout")
}
if err := token.Error(); err != nil {
return fmt.Errorf("failed to connect to external broker: %w", err)
}
ebc.connected = true
logger.Info("[MQTTSpec] Connected to external MQTT broker: %s", ebc.config.BrokerURL)
return nil
}
// Stop disconnects from the external broker
func (ebc *ExternalBrokerClient) Stop(ctx context.Context) error {
ebc.mu.Lock()
defer ebc.mu.Unlock()
if !ebc.connected {
return nil
}
if ebc.cancel != nil {
ebc.cancel()
}
if ebc.client != nil && ebc.client.IsConnected() {
ebc.client.Disconnect(uint(ebc.config.ConnectTimeout.Milliseconds()))
}
ebc.connected = false
logger.Info("[MQTTSpec] Disconnected from external broker")
return nil
}
// Publish publishes a message to a topic
func (ebc *ExternalBrokerClient) Publish(topic string, qos byte, payload []byte) error {
// Use the mutex-guarded check to avoid racing Start/Stop
if !ebc.IsConnected() {
return fmt.Errorf("not connected to broker")
}
token := ebc.client.Publish(topic, qos, false, payload)
token.Wait()
return token.Error()
}
// Subscribe subscribes to a topic
func (ebc *ExternalBrokerClient) Subscribe(topicFilter string, qos byte, callback MessageCallback) error {
// Use the mutex-guarded check to avoid racing Start/Stop
if !ebc.IsConnected() {
return fmt.Errorf("not connected to broker")
}
// Store callback
ebc.subMu.Lock()
ebc.subscriptions[topicFilter] = callback
ebc.subMu.Unlock()
// Subscribe via Paho client
token := ebc.client.Subscribe(topicFilter, qos, func(client pahomqtt.Client, msg pahomqtt.Message) {
callback(msg.Topic(), msg.Payload())
})
token.Wait()
if err := token.Error(); err != nil {
return fmt.Errorf("failed to subscribe to %s: %w", topicFilter, err)
}
logger.Info("[MQTTSpec] Subscribed to topic filter: %s", topicFilter)
return nil
}
// Unsubscribe unsubscribes from a topic
func (ebc *ExternalBrokerClient) Unsubscribe(topicFilter string) error {
ebc.subMu.Lock()
defer ebc.subMu.Unlock()
if ebc.client != nil && ebc.connected {
token := ebc.client.Unsubscribe(topicFilter)
token.Wait()
if err := token.Error(); err != nil {
logger.Error("[MQTTSpec] Failed to unsubscribe from %s: %v", topicFilter, err)
}
}
delete(ebc.subscriptions, topicFilter)
logger.Info("[MQTTSpec] Unsubscribed from topic filter: %s", topicFilter)
return nil
}
// IsConnected returns connection status
func (ebc *ExternalBrokerClient) IsConnected() bool {
ebc.mu.RLock()
defer ebc.mu.RUnlock()
return ebc.connected
}
// GetClientManager returns the client manager
func (ebc *ExternalBrokerClient) GetClientManager() *ClientManager {
return ebc.clientManager
}
// resubscribeAll resubscribes to all topics after reconnection
func (ebc *ExternalBrokerClient) resubscribeAll() {
ebc.subMu.RLock()
defer ebc.subMu.RUnlock()
for topicFilter, callback := range ebc.subscriptions {
callback := callback // re-declare so the message handler closure keeps this iteration's callback (pre-Go 1.22 loop-variable semantics)
logger.Info("[MQTTSpec] Resubscribing to topic: %s", topicFilter)
token := ebc.client.Subscribe(topicFilter, 1, func(client pahomqtt.Client, msg pahomqtt.Message) {
callback(msg.Topic(), msg.Payload())
})
if token.Wait() && token.Error() != nil {
logger.Error("[MQTTSpec] Failed to resubscribe to %s: %v", topicFilter, token.Error())
}
}
}
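For orientation, a minimal usage sketch of the embedded broker defined above (illustrative only, not part of this diff; it uses just the constructors and methods shown in broker.go, with placeholder topic and host values):

package main

import (
	"context"
	"log"
	"time"

	"github.com/bitechdev/ResolveSpec/pkg/mqttspec"
)

func main() {
	ctx := context.Background()
	cm := mqttspec.NewClientManager(ctx)
	defer cm.Shutdown()

	broker := mqttspec.NewEmbeddedBroker(mqttspec.BrokerConfig{
		Host:      "localhost",
		Port:      1883,
		KeepAlive: 60 * time.Second,
	}, cm)

	if err := broker.Start(ctx); err != nil {
		log.Fatalf("start broker: %v", err)
	}
	defer func() { _ = broker.Stop(ctx) }()

	// Publish through the broker's inline client at QoS 1.
	if err := broker.Publish("spec/demo/request", 1, []byte(`{"ping":true}`)); err != nil {
		log.Printf("publish failed: %v", err)
	}
}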

pkg/mqttspec/broker_test.go Normal file

@@ -0,0 +1,409 @@
package mqttspec
import (
"context"
"sync"
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestNewEmbeddedBroker(t *testing.T) {
cm := NewClientManager(context.Background())
defer cm.Shutdown()
config := BrokerConfig{
Host: "localhost",
Port: 1883,
MaxConnections: 100,
KeepAlive: 60 * time.Second,
}
broker := NewEmbeddedBroker(config, cm)
assert.NotNil(t, broker)
assert.Equal(t, config, broker.config)
assert.Equal(t, cm, broker.clientManager)
assert.NotNil(t, broker.subscriptions)
assert.False(t, broker.started)
}
func TestEmbeddedBroker_StartStop(t *testing.T) {
cm := NewClientManager(context.Background())
defer cm.Shutdown()
config := BrokerConfig{
Host: "localhost",
Port: 11883, // Use non-standard port for testing
MaxConnections: 100,
KeepAlive: 60 * time.Second,
}
broker := NewEmbeddedBroker(config, cm)
ctx := context.Background()
// Start broker
err := broker.Start(ctx)
require.NoError(t, err)
// Verify started
assert.True(t, broker.IsConnected())
// Stop broker
err = broker.Stop(ctx)
require.NoError(t, err)
// Verify stopped
assert.False(t, broker.IsConnected())
}
func TestEmbeddedBroker_StartTwice(t *testing.T) {
cm := NewClientManager(context.Background())
defer cm.Shutdown()
config := BrokerConfig{
Host: "localhost",
Port: 11884,
MaxConnections: 100,
KeepAlive: 60 * time.Second,
}
broker := NewEmbeddedBroker(config, cm)
ctx := context.Background()
// Start broker
err := broker.Start(ctx)
require.NoError(t, err)
defer broker.Stop(ctx)
// Try to start again - should fail
err = broker.Start(ctx)
assert.Error(t, err)
assert.Contains(t, err.Error(), "already started")
}
func TestEmbeddedBroker_StopWithoutStart(t *testing.T) {
cm := NewClientManager(context.Background())
defer cm.Shutdown()
config := BrokerConfig{
Host: "localhost",
Port: 11885,
MaxConnections: 100,
KeepAlive: 60 * time.Second,
}
broker := NewEmbeddedBroker(config, cm)
ctx := context.Background()
// Stop without starting - should not error
err := broker.Stop(ctx)
assert.NoError(t, err)
}
func TestEmbeddedBroker_PublishWithoutStart(t *testing.T) {
cm := NewClientManager(context.Background())
defer cm.Shutdown()
config := BrokerConfig{
Host: "localhost",
Port: 11886,
MaxConnections: 100,
KeepAlive: 60 * time.Second,
}
broker := NewEmbeddedBroker(config, cm)
// Try to publish without starting - should fail
err := broker.Publish("test/topic", 1, []byte("test"))
assert.Error(t, err)
assert.Contains(t, err.Error(), "broker not started")
}
func TestEmbeddedBroker_SubscribeWithoutStart(t *testing.T) {
cm := NewClientManager(context.Background())
defer cm.Shutdown()
config := BrokerConfig{
Host: "localhost",
Port: 11887,
MaxConnections: 100,
KeepAlive: 60 * time.Second,
}
broker := NewEmbeddedBroker(config, cm)
// Try to subscribe without starting - should fail
err := broker.Subscribe("test/topic", 1, func(topic string, payload []byte) {})
assert.Error(t, err)
assert.Contains(t, err.Error(), "broker not started")
}
func TestEmbeddedBroker_PublishSubscribe(t *testing.T) {
cm := NewClientManager(context.Background())
defer cm.Shutdown()
config := BrokerConfig{
Host: "localhost",
Port: 11888,
MaxConnections: 100,
KeepAlive: 60 * time.Second,
}
broker := NewEmbeddedBroker(config, cm)
ctx := context.Background()
// Start broker
err := broker.Start(ctx)
require.NoError(t, err)
defer broker.Stop(ctx)
// Subscribe to topic
callback := func(topic string, payload []byte) {
// Callback for subscription - actual message delivery would require
// integration with Mochi MQTT's hook system
}
err = broker.Subscribe("test/topic", 1, callback)
require.NoError(t, err)
// Note: Embedded broker's Subscribe is simplified and doesn't fully integrate
// with Mochi MQTT's internal pub/sub. This test verifies the subscription
// is registered but actual message delivery would require more complex
// integration with Mochi MQTT's hook system.
// Verify subscription was registered
broker.subMu.RLock()
_, exists := broker.subscriptions["test/topic"]
broker.subMu.RUnlock()
assert.True(t, exists)
}
func TestEmbeddedBroker_Unsubscribe(t *testing.T) {
cm := NewClientManager(context.Background())
defer cm.Shutdown()
config := BrokerConfig{
Host: "localhost",
Port: 11889,
MaxConnections: 100,
KeepAlive: 60 * time.Second,
}
broker := NewEmbeddedBroker(config, cm)
ctx := context.Background()
// Start broker
err := broker.Start(ctx)
require.NoError(t, err)
defer broker.Stop(ctx)
// Subscribe
callback := func(topic string, payload []byte) {}
err = broker.Subscribe("test/topic", 1, callback)
require.NoError(t, err)
// Verify subscription exists
broker.subMu.RLock()
_, exists := broker.subscriptions["test/topic"]
broker.subMu.RUnlock()
assert.True(t, exists)
// Unsubscribe
err = broker.Unsubscribe("test/topic")
require.NoError(t, err)
// Verify subscription removed
broker.subMu.RLock()
_, exists = broker.subscriptions["test/topic"]
broker.subMu.RUnlock()
assert.False(t, exists)
}
func TestEmbeddedBroker_SetHandler(t *testing.T) {
cm := NewClientManager(context.Background())
defer cm.Shutdown()
config := BrokerConfig{
Host: "localhost",
Port: 11890,
MaxConnections: 100,
KeepAlive: 60 * time.Second,
}
broker := NewEmbeddedBroker(config, cm)
// Create a mock handler (nil is fine for this test)
var handler *Handler = nil
// Set handler
broker.SetHandler(handler)
// Verify handler was set
broker.mu.RLock()
assert.Equal(t, handler, broker.handler)
broker.mu.RUnlock()
}
func TestEmbeddedBroker_GetClientManager(t *testing.T) {
cm := NewClientManager(context.Background())
defer cm.Shutdown()
config := BrokerConfig{
Host: "localhost",
Port: 11891,
MaxConnections: 100,
KeepAlive: 60 * time.Second,
}
broker := NewEmbeddedBroker(config, cm)
// Get client manager
retrievedCM := broker.GetClientManager()
// Verify it's the same instance
assert.Equal(t, cm, retrievedCM)
}
func TestEmbeddedBroker_ConcurrentPublish(t *testing.T) {
cm := NewClientManager(context.Background())
defer cm.Shutdown()
config := BrokerConfig{
Host: "localhost",
Port: 11892,
MaxConnections: 100,
KeepAlive: 60 * time.Second,
}
broker := NewEmbeddedBroker(config, cm)
ctx := context.Background()
// Start broker
err := broker.Start(ctx)
require.NoError(t, err)
defer broker.Stop(ctx)
// Test concurrent publishing
var wg sync.WaitGroup
numPublishers := 10
for i := 0; i < numPublishers; i++ {
wg.Add(1)
go func(id int) {
defer wg.Done()
for j := 0; j < 10; j++ {
err := broker.Publish("test/topic", 1, []byte("test"))
// Errors are acceptable in concurrent scenario
_ = err
}
}(i)
}
wg.Wait()
}
func TestNewExternalBrokerClient(t *testing.T) {
cm := NewClientManager(context.Background())
defer cm.Shutdown()
config := ExternalBrokerConfig{
BrokerURL: "tcp://localhost:1883",
ClientID: "test-client",
Username: "user",
Password: "pass",
CleanSession: true,
KeepAlive: 60 * time.Second,
ConnectTimeout: 5 * time.Second,
ReconnectDelay: 1 * time.Second,
}
broker := NewExternalBrokerClient(config, cm)
assert.NotNil(t, broker)
assert.Equal(t, config, broker.config)
assert.Equal(t, cm, broker.clientManager)
assert.NotNil(t, broker.subscriptions)
assert.False(t, broker.connected)
}
func TestExternalBrokerClient_SetHandler(t *testing.T) {
cm := NewClientManager(context.Background())
defer cm.Shutdown()
config := ExternalBrokerConfig{
BrokerURL: "tcp://localhost:1883",
ClientID: "test-client",
Username: "user",
Password: "pass",
CleanSession: true,
KeepAlive: 60 * time.Second,
ConnectTimeout: 5 * time.Second,
ReconnectDelay: 1 * time.Second,
}
broker := NewExternalBrokerClient(config, cm)
// Create a mock handler (nil is fine for this test)
var handler *Handler = nil
// Set handler
broker.SetHandler(handler)
// Verify handler was set
broker.mu.RLock()
assert.Equal(t, handler, broker.handler)
broker.mu.RUnlock()
}
func TestExternalBrokerClient_GetClientManager(t *testing.T) {
cm := NewClientManager(context.Background())
defer cm.Shutdown()
config := ExternalBrokerConfig{
BrokerURL: "tcp://localhost:1883",
ClientID: "test-client",
Username: "user",
Password: "pass",
CleanSession: true,
KeepAlive: 60 * time.Second,
ConnectTimeout: 5 * time.Second,
ReconnectDelay: 1 * time.Second,
}
broker := NewExternalBrokerClient(config, cm)
// Get client manager
retrievedCM := broker.GetClientManager()
// Verify it's the same instance
assert.Equal(t, cm, retrievedCM)
}
func TestExternalBrokerClient_IsConnected(t *testing.T) {
cm := NewClientManager(context.Background())
defer cm.Shutdown()
config := ExternalBrokerConfig{
BrokerURL: "tcp://localhost:1883",
ClientID: "test-client",
Username: "user",
Password: "pass",
CleanSession: true,
KeepAlive: 60 * time.Second,
ConnectTimeout: 5 * time.Second,
ReconnectDelay: 1 * time.Second,
}
broker := NewExternalBrokerClient(config, cm)
// Should not be connected initially
assert.False(t, broker.IsConnected())
}
// Note: Tests for ExternalBrokerClient Start/Stop/Publish/Subscribe require
// a running MQTT broker and are better suited for integration tests.
// These tests would be included in integration_test.go with proper test
// broker setup (e.g., using Docker Compose).
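A sketch of what such an integration test could look like, assuming a broker is reachable at tcp://localhost:1883 (for example started via Docker Compose); the topic and client ID are placeholders and the test is skipped in -short runs:

//go:build integration

package mqttspec

import (
	"context"
	"testing"
	"time"

	"github.com/stretchr/testify/require"
)

func TestExternalBrokerClient_Integration(t *testing.T) {
	if testing.Short() {
		t.Skip("requires a running MQTT broker")
	}
	cm := NewClientManager(context.Background())
	defer cm.Shutdown()

	client := NewExternalBrokerClient(ExternalBrokerConfig{
		BrokerURL:      "tcp://localhost:1883",
		ClientID:       "integration-test",
		CleanSession:   true,
		KeepAlive:      60 * time.Second,
		ConnectTimeout: 5 * time.Second,
		ReconnectDelay: 1 * time.Second,
	}, cm)

	ctx := context.Background()
	require.NoError(t, client.Start(ctx))
	defer client.Stop(ctx)

	// Round-trip a message through the real broker.
	received := make(chan []byte, 1)
	require.NoError(t, client.Subscribe("it/topic", 1, func(topic string, payload []byte) {
		received <- payload
	}))
	require.NoError(t, client.Publish("it/topic", 1, []byte("ping")))

	select {
	case msg := <-received:
		require.Equal(t, []byte("ping"), msg)
	case <-time.After(5 * time.Second):
		t.Fatal("timed out waiting for message")
	}
}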

pkg/mqttspec/client.go Normal file

@@ -0,0 +1,184 @@
package mqttspec
import (
"context"
"sync"
"time"
"github.com/bitechdev/ResolveSpec/pkg/logger"
)
// Client represents an MQTT client connection
type Client struct {
// ID is the MQTT client ID (unique per connection)
ID string
// Username from MQTT CONNECT packet
Username string
// ConnectedAt is when the client connected
ConnectedAt time.Time
// subscriptions holds active subscriptions for this client
subscriptions map[string]*Subscription
subMu sync.RWMutex
// metadata stores client-specific data (user_id, roles, tenant_id, etc.)
// Set by BeforeConnect hook for authentication/authorization
metadata map[string]interface{}
metaMu sync.RWMutex
// ctx is the client context
ctx context.Context
cancel context.CancelFunc
// handler reference for callback access
handler *Handler
}
// ClientManager manages all MQTT client connections
type ClientManager struct {
// clients maps client_id to Client
clients map[string]*Client
mu sync.RWMutex
// ctx for lifecycle management
ctx context.Context
cancel context.CancelFunc
}
// NewClient creates a new MQTT client
func NewClient(id, username string, handler *Handler) *Client {
ctx, cancel := context.WithCancel(context.Background())
return &Client{
ID: id,
Username: username,
ConnectedAt: time.Now(),
subscriptions: make(map[string]*Subscription),
metadata: make(map[string]interface{}),
ctx: ctx,
cancel: cancel,
handler: handler,
}
}
// SetMetadata sets metadata for this client
func (c *Client) SetMetadata(key string, value interface{}) {
c.metaMu.Lock()
defer c.metaMu.Unlock()
c.metadata[key] = value
}
// GetMetadata retrieves metadata for this client
func (c *Client) GetMetadata(key string) (interface{}, bool) {
c.metaMu.RLock()
defer c.metaMu.RUnlock()
val, ok := c.metadata[key]
return val, ok
}
// AddSubscription adds a subscription to this client
func (c *Client) AddSubscription(sub *Subscription) {
c.subMu.Lock()
defer c.subMu.Unlock()
c.subscriptions[sub.ID] = sub
}
// RemoveSubscription removes a subscription from this client
func (c *Client) RemoveSubscription(subID string) {
c.subMu.Lock()
defer c.subMu.Unlock()
delete(c.subscriptions, subID)
}
// GetSubscription retrieves a subscription by ID
func (c *Client) GetSubscription(subID string) (*Subscription, bool) {
c.subMu.RLock()
defer c.subMu.RUnlock()
sub, ok := c.subscriptions[subID]
return sub, ok
}
// Close cleans up the client
func (c *Client) Close() {
if c.cancel != nil {
c.cancel()
}
// Clean up subscriptions
c.subMu.Lock()
for subID := range c.subscriptions {
if c.handler != nil && c.handler.subscriptionManager != nil {
c.handler.subscriptionManager.Unsubscribe(subID)
}
}
c.subscriptions = make(map[string]*Subscription)
c.subMu.Unlock()
}
// NewClientManager creates a new client manager
func NewClientManager(ctx context.Context) *ClientManager {
ctx, cancel := context.WithCancel(ctx)
return &ClientManager{
clients: make(map[string]*Client),
ctx: ctx,
cancel: cancel,
}
}
// Register registers a new MQTT client
func (cm *ClientManager) Register(clientID, username string, handler *Handler) *Client {
cm.mu.Lock()
defer cm.mu.Unlock()
client := NewClient(clientID, username, handler)
cm.clients[clientID] = client
count := len(cm.clients)
logger.Info("[MQTTSpec] Client registered: %s (username: %s, total: %d)", clientID, username, count)
return client
}
// Unregister removes a client
func (cm *ClientManager) Unregister(clientID string) {
cm.mu.Lock()
defer cm.mu.Unlock()
if client, ok := cm.clients[clientID]; ok {
client.Close()
delete(cm.clients, clientID)
count := len(cm.clients)
logger.Info("[MQTTSpec] Client unregistered: %s (total: %d)", clientID, count)
}
}
// GetClient retrieves a client by ID
func (cm *ClientManager) GetClient(clientID string) (*Client, bool) {
cm.mu.RLock()
defer cm.mu.RUnlock()
client, ok := cm.clients[clientID]
return client, ok
}
// Count returns the number of active clients
func (cm *ClientManager) Count() int {
cm.mu.RLock()
defer cm.mu.RUnlock()
return len(cm.clients)
}
// Shutdown gracefully shuts down the client manager
func (cm *ClientManager) Shutdown() {
cm.cancel()
// Close all clients
cm.mu.Lock()
for _, client := range cm.clients {
client.Close()
}
cm.clients = make(map[string]*Client)
cm.mu.Unlock()
logger.Info("[MQTTSpec] Client manager shut down")
}
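A short sketch of how the metadata store above is meant to be used, for example from a BeforeConnect hook that attaches auth context to the connection (illustrative only; the IDs and keys are placeholders):

package main

import (
	"context"
	"fmt"

	"github.com/bitechdev/ResolveSpec/pkg/mqttspec"
)

func main() {
	cm := mqttspec.NewClientManager(context.Background())
	defer cm.Shutdown()

	// Register a connection and attach per-client auth context.
	client := cm.Register("device-42", "alice@example.com", nil)
	client.SetMetadata("user_id", 123)
	client.SetMetadata("tenant_id", "tenant-abc")

	// Read it back later, e.g. when authorizing a request or filtering notifications.
	if tenant, ok := client.GetMetadata("tenant_id"); ok {
		fmt.Printf("client %s belongs to tenant %v\n", client.ID, tenant)
	}
}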

pkg/mqttspec/client_test.go Normal file

@@ -0,0 +1,256 @@
package mqttspec
import (
"context"
"fmt"
"sync"
"testing"
"github.com/stretchr/testify/assert"
)
func TestNewClient(t *testing.T) {
client := NewClient("client-123", "user@example.com", nil)
assert.Equal(t, "client-123", client.ID)
assert.Equal(t, "user@example.com", client.Username)
assert.NotNil(t, client.subscriptions)
assert.NotNil(t, client.metadata)
assert.NotNil(t, client.ctx)
assert.NotNil(t, client.cancel)
}
func TestClient_Metadata(t *testing.T) {
client := NewClient("client-123", "user", nil)
// Set metadata
client.SetMetadata("user_id", 456)
client.SetMetadata("tenant_id", "tenant-abc")
client.SetMetadata("roles", []string{"admin", "user"})
// Get metadata
userID, exists := client.GetMetadata("user_id")
assert.True(t, exists)
assert.Equal(t, 456, userID)
tenantID, exists := client.GetMetadata("tenant_id")
assert.True(t, exists)
assert.Equal(t, "tenant-abc", tenantID)
roles, exists := client.GetMetadata("roles")
assert.True(t, exists)
assert.Equal(t, []string{"admin", "user"}, roles)
// Non-existent key
_, exists = client.GetMetadata("nonexistent")
assert.False(t, exists)
}
func TestClient_Subscriptions(t *testing.T) {
client := NewClient("client-123", "user", nil)
// Create mock subscription
sub := &Subscription{
ID: "sub-1",
ConnectionID: "client-123",
Schema: "public",
Entity: "users",
Active: true,
}
// Add subscription
client.AddSubscription(sub)
// Get subscription
retrieved, exists := client.GetSubscription("sub-1")
assert.True(t, exists)
assert.Equal(t, "sub-1", retrieved.ID)
// Remove subscription
client.RemoveSubscription("sub-1")
// Verify removed
_, exists = client.GetSubscription("sub-1")
assert.False(t, exists)
}
func TestClient_Close(t *testing.T) {
client := NewClient("client-123", "user", nil)
// Add some subscriptions
client.AddSubscription(&Subscription{ID: "sub-1"})
client.AddSubscription(&Subscription{ID: "sub-2"})
// Close client
client.Close()
// Verify subscriptions cleared
client.subMu.RLock()
assert.Empty(t, client.subscriptions)
client.subMu.RUnlock()
// Verify context cancelled
select {
case <-client.ctx.Done():
// Context was cancelled
default:
t.Fatal("Context should be cancelled after Close()")
}
}
func TestNewClientManager(t *testing.T) {
cm := NewClientManager(context.Background())
assert.NotNil(t, cm)
assert.NotNil(t, cm.clients)
assert.Equal(t, 0, cm.Count())
}
func TestClientManager_Register(t *testing.T) {
cm := NewClientManager(context.Background())
defer cm.Shutdown()
client := cm.Register("client-1", "user@example.com", nil)
assert.NotNil(t, client)
assert.Equal(t, "client-1", client.ID)
assert.Equal(t, "user@example.com", client.Username)
assert.Equal(t, 1, cm.Count())
}
func TestClientManager_Unregister(t *testing.T) {
cm := NewClientManager(context.Background())
defer cm.Shutdown()
cm.Register("client-1", "user1", nil)
assert.Equal(t, 1, cm.Count())
cm.Unregister("client-1")
assert.Equal(t, 0, cm.Count())
}
func TestClientManager_GetClient(t *testing.T) {
cm := NewClientManager(context.Background())
defer cm.Shutdown()
cm.Register("client-1", "user1", nil)
// Get existing client
client, exists := cm.GetClient("client-1")
assert.True(t, exists)
assert.NotNil(t, client)
assert.Equal(t, "client-1", client.ID)
// Get non-existent client
_, exists = cm.GetClient("nonexistent")
assert.False(t, exists)
}
func TestClientManager_MultipleClients(t *testing.T) {
cm := NewClientManager(context.Background())
defer cm.Shutdown()
cm.Register("client-1", "user1", nil)
cm.Register("client-2", "user2", nil)
cm.Register("client-3", "user3", nil)
assert.Equal(t, 3, cm.Count())
cm.Unregister("client-2")
assert.Equal(t, 2, cm.Count())
// Verify correct client was removed
_, exists := cm.GetClient("client-2")
assert.False(t, exists)
_, exists = cm.GetClient("client-1")
assert.True(t, exists)
_, exists = cm.GetClient("client-3")
assert.True(t, exists)
}
func TestClientManager_Shutdown(t *testing.T) {
cm := NewClientManager(context.Background())
cm.Register("client-1", "user1", nil)
cm.Register("client-2", "user2", nil)
assert.Equal(t, 2, cm.Count())
cm.Shutdown()
// All clients should be removed
assert.Equal(t, 0, cm.Count())
// Context should be cancelled
select {
case <-cm.ctx.Done():
// Context was cancelled
default:
t.Fatal("Context should be cancelled after Shutdown()")
}
}
func TestClientManager_ConcurrentOperations(t *testing.T) {
cm := NewClientManager(context.Background())
defer cm.Shutdown()
// This test verifies that concurrent operations don't cause race conditions
// Run with: go test -race
var wg sync.WaitGroup
// Goroutine 1: Register clients
wg.Add(1)
go func() {
defer wg.Done()
for i := 0; i < 100; i++ {
cm.Register("client-"+string(rune(i)), "user", nil)
}
}()
// Goroutine 2: Get clients
wg.Add(1)
go func() {
defer wg.Done()
for i := 0; i < 100; i++ {
cm.GetClient("client-" + string(rune(i)))
}
}()
// Goroutine 3: Count
wg.Add(1)
go func() {
defer wg.Done()
for i := 0; i < 100; i++ {
cm.Count()
}
}()
wg.Wait()
}
func TestClient_ConcurrentMetadata(t *testing.T) {
client := NewClient("client-123", "user", nil)
var wg sync.WaitGroup
// Concurrent writes
wg.Add(1)
go func() {
defer wg.Done()
for i := 0; i < 100; i++ {
client.SetMetadata("key1", i)
}
}()
// Concurrent reads
wg.Add(1)
go func() {
defer wg.Done()
for i := 0; i < 100; i++ {
client.GetMetadata("key1")
}
}()
wg.Wait()
}

pkg/mqttspec/config.go Normal file

@@ -0,0 +1,178 @@
package mqttspec
import (
"crypto/tls"
"time"
)
// BrokerMode specifies how to connect to MQTT
type BrokerMode string
const (
// BrokerModeEmbedded runs Mochi MQTT broker in-process
BrokerModeEmbedded BrokerMode = "embedded"
// BrokerModeExternal connects to external MQTT broker as client
BrokerModeExternal BrokerMode = "external"
)
// Config holds all mqttspec configuration
type Config struct {
// BrokerMode determines whether to use embedded or external broker
BrokerMode BrokerMode
// Broker configuration for embedded mode
Broker BrokerConfig
// ExternalBroker configuration for external client mode
ExternalBroker ExternalBrokerConfig
// Topics configuration
Topics TopicConfig
// QoS configuration for different message types
QoS QoSConfig
// Auth configuration
Auth AuthConfig
// Timeouts for various operations
Timeouts TimeoutConfig
}
// BrokerConfig configures the embedded Mochi MQTT broker
type BrokerConfig struct {
// Host to bind to (default: "localhost")
Host string
// Port to listen on (default: 1883)
Port int
// EnableWebSocket enables WebSocket support
EnableWebSocket bool
// WSPort is the WebSocket port (default: 8883)
WSPort int
// MaxConnections limits concurrent client connections
MaxConnections int
// KeepAlive is the client keepalive interval
KeepAlive time.Duration
// EnableAuth enables username/password authentication
EnableAuth bool
}
// ExternalBrokerConfig for connecting as a client to external broker
type ExternalBrokerConfig struct {
// BrokerURL is the broker address (e.g., tcp://host:port or ssl://host:port)
BrokerURL string
// ClientID is a unique identifier for this handler instance
ClientID string
// Username for MQTT authentication
Username string
// Password for MQTT authentication
Password string
// CleanSession flag (default: true)
CleanSession bool
// KeepAlive interval (default: 60s)
KeepAlive time.Duration
// ConnectTimeout for initial connection (default: 30s)
ConnectTimeout time.Duration
// ReconnectDelay between reconnection attempts (default: 5s)
ReconnectDelay time.Duration
// MaxReconnect attempts (0 = unlimited, default: 0)
MaxReconnect int
// TLSConfig for SSL/TLS connections
TLSConfig *tls.Config
}
// TopicConfig defines the MQTT topic structure
type TopicConfig struct {
// Prefix for all topics (default: "spec")
// Topics will be: {Prefix}/{client_id}/request|response|notify/{sub_id}
Prefix string
}
// QoSConfig defines quality of service levels for different message types
type QoSConfig struct {
// Request messages QoS (default: 1 - at-least-once)
Request byte
// Response messages QoS (default: 1 - at-least-once)
Response byte
// Notification messages QoS (default: 1 - at-least-once)
Notification byte
}
// AuthConfig for MQTT-level authentication
type AuthConfig struct {
// ValidateCredentials is called to validate username/password for embedded broker
// Return true if credentials are valid, false otherwise
ValidateCredentials func(username, password string) bool
}
// TimeoutConfig defines timeouts for various operations
type TimeoutConfig struct {
// Connect timeout for MQTT connection (default: 30s)
Connect time.Duration
// Publish timeout for publishing messages (default: 5s)
Publish time.Duration
// Disconnect timeout for graceful shutdown (default: 10s)
Disconnect time.Duration
}
// DefaultConfig returns a configuration with sensible defaults
func DefaultConfig() *Config {
return &Config{
BrokerMode: BrokerModeEmbedded,
Broker: BrokerConfig{
Host: "localhost",
Port: 1883,
EnableWebSocket: false,
WSPort: 8883,
MaxConnections: 1000,
KeepAlive: 60 * time.Second,
EnableAuth: false,
},
ExternalBroker: ExternalBrokerConfig{
BrokerURL: "",
ClientID: "",
Username: "",
Password: "",
CleanSession: true,
KeepAlive: 60 * time.Second,
ConnectTimeout: 30 * time.Second,
ReconnectDelay: 5 * time.Second,
MaxReconnect: 0, // Unlimited
},
Topics: TopicConfig{
Prefix: "spec",
},
QoS: QoSConfig{
Request: 1, // At-least-once
Response: 1, // At-least-once
Notification: 1, // At-least-once
},
Auth: AuthConfig{
ValidateCredentials: nil,
},
Timeouts: TimeoutConfig{
Connect: 30 * time.Second,
Publish: 5 * time.Second,
Disconnect: 10 * time.Second,
},
}
}
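As a configuration sketch (not part of the diff), a deployment that talks to an existing broker would start from DefaultConfig and override only the external-broker fields; the host, client ID and credentials below are placeholders:

package main

import (
	"crypto/tls"
	"fmt"
	"os"

	"github.com/bitechdev/ResolveSpec/pkg/mqttspec"
)

func main() {
	cfg := mqttspec.DefaultConfig()
	cfg.BrokerMode = mqttspec.BrokerModeExternal
	cfg.ExternalBroker.BrokerURL = "ssl://mqtt.example.com:8883"
	cfg.ExternalBroker.ClientID = "resolvespec-api-1"
	cfg.ExternalBroker.Username = "svc-resolvespec"
	cfg.ExternalBroker.Password = os.Getenv("MQTT_PASSWORD")
	cfg.ExternalBroker.TLSConfig = &tls.Config{MinVersion: tls.VersionTLS12}

	fmt.Printf("broker mode: %s, url: %s\n", cfg.BrokerMode, cfg.ExternalBroker.BrokerURL)
}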

pkg/mqttspec/handler.go Normal file

@@ -0,0 +1,846 @@
package mqttspec
import (
"context"
"encoding/json"
"fmt"
"reflect"
"strings"
"sync"
"github.com/google/uuid"
"github.com/bitechdev/ResolveSpec/pkg/common"
"github.com/bitechdev/ResolveSpec/pkg/logger"
"github.com/bitechdev/ResolveSpec/pkg/reflection"
)
// Handler handles MQTT messages and operations
type Handler struct {
// Database adapter (GORM/Bun)
db common.Database
// Model registry
registry common.ModelRegistry
// Hook registry
hooks *HookRegistry
// Client manager
clientManager *ClientManager
// Subscription manager
subscriptionManager *SubscriptionManager
// Broker interface (embedded or external)
broker BrokerInterface
// Configuration
config *Config
// Context for lifecycle management
ctx context.Context
cancel context.CancelFunc
// Started flag
started bool
mu sync.RWMutex
}
// NewHandler creates a new MQTT handler
func NewHandler(db common.Database, registry common.ModelRegistry, config *Config) (*Handler, error) {
ctx, cancel := context.WithCancel(context.Background())
h := &Handler{
db: db,
registry: registry,
hooks: NewHookRegistry(),
clientManager: NewClientManager(ctx),
subscriptionManager: NewSubscriptionManager(),
config: config,
ctx: ctx,
cancel: cancel,
started: false,
}
// Initialize broker based on mode
if config.BrokerMode == BrokerModeEmbedded {
h.broker = NewEmbeddedBroker(config.Broker, h.clientManager)
} else {
h.broker = NewExternalBrokerClient(config.ExternalBroker, h.clientManager)
}
// Set handler reference in broker
h.broker.SetHandler(h)
return h, nil
}
// Start initializes and starts the handler
func (h *Handler) Start() error {
h.mu.Lock()
defer h.mu.Unlock()
if h.started {
return fmt.Errorf("handler already started")
}
// Start broker
if err := h.broker.Start(h.ctx); err != nil {
return fmt.Errorf("failed to start broker: %w", err)
}
// Subscribe to all request topics: spec/+/request
requestTopic := fmt.Sprintf("%s/+/request", h.config.Topics.Prefix)
if err := h.broker.Subscribe(requestTopic, h.config.QoS.Request, h.handleIncomingMessage); err != nil {
_ = h.broker.Stop(h.ctx)
return fmt.Errorf("failed to subscribe to request topic: %w", err)
}
h.started = true
logger.Info("[MQTTSpec] Handler started, listening on topic: %s", requestTopic)
return nil
}
// Shutdown gracefully shuts down the handler
func (h *Handler) Shutdown() error {
h.mu.Lock()
defer h.mu.Unlock()
if !h.started {
return nil
}
logger.Info("[MQTTSpec] Shutting down handler...")
// Execute disconnect hooks for all clients
h.clientManager.mu.RLock()
clients := make([]*Client, 0, len(h.clientManager.clients))
for _, client := range h.clientManager.clients {
clients = append(clients, client)
}
h.clientManager.mu.RUnlock()
for _, client := range clients {
hookCtx := &HookContext{
Context: h.ctx,
Handler: nil, // Not used for MQTT
Metadata: map[string]interface{}{
"mqtt_client": client,
},
}
_ = h.hooks.Execute(BeforeDisconnect, hookCtx)
h.clientManager.Unregister(client.ID)
_ = h.hooks.Execute(AfterDisconnect, hookCtx)
}
// Unsubscribe from request topic
requestTopic := fmt.Sprintf("%s/+/request", h.config.Topics.Prefix)
_ = h.broker.Unsubscribe(requestTopic)
// Stop broker
if err := h.broker.Stop(h.ctx); err != nil {
logger.Error("[MQTTSpec] Error stopping broker: %v", err)
}
// Cancel context
if h.cancel != nil {
h.cancel()
}
h.started = false
logger.Info("[MQTTSpec] Handler stopped")
return nil
}
// Hooks returns the hook registry
func (h *Handler) Hooks() *HookRegistry {
return h.hooks
}
// Registry returns the model registry
func (h *Handler) Registry() common.ModelRegistry {
return h.registry
}
// GetDatabase returns the database adapter
func (h *Handler) GetDatabase() common.Database {
return h.db
}
// GetRelationshipInfo is a placeholder for relationship detection
func (h *Handler) GetRelationshipInfo(modelType reflect.Type, relationName string) *common.RelationshipInfo {
// TODO: Implement full relationship detection if needed
return nil
}
// handleIncomingMessage is called when a message arrives on spec/+/request
func (h *Handler) handleIncomingMessage(topic string, payload []byte) {
// Extract client_id from topic: spec/{client_id}/request
parts := strings.Split(topic, "/")
if len(parts) < 3 {
logger.Error("[MQTTSpec] Invalid topic format: %s", topic)
return
}
clientID := parts[len(parts)-2] // Second to last part is client_id
// Parse message
msg, err := ParseMessage(payload)
if err != nil {
logger.Error("[MQTTSpec] Failed to parse message from %s: %v", clientID, err)
h.sendError(clientID, "", "invalid_message", "Failed to parse message")
return
}
// Validate message
if !msg.IsValid() {
logger.Error("[MQTTSpec] Invalid message from %s", clientID)
h.sendError(clientID, msg.ID, "invalid_message", "Message validation failed")
return
}
// Get or register client
client, exists := h.clientManager.GetClient(clientID)
if !exists {
// First request from this client - register it
client = h.clientManager.Register(clientID, "", h)
// Execute connect hooks
hookCtx := &HookContext{
Context: h.ctx,
Handler: nil, // Not used for MQTT, handler ref stored in metadata if needed
Metadata: map[string]interface{}{
"mqtt_client": client,
},
}
if err := h.hooks.Execute(BeforeConnect, hookCtx); err != nil {
logger.Error("[MQTTSpec] BeforeConnect hook failed for %s: %v", clientID, err)
h.sendError(clientID, msg.ID, "auth_error", err.Error())
h.clientManager.Unregister(clientID)
return
}
_ = h.hooks.Execute(AfterConnect, hookCtx)
}
// Route message by type
switch msg.Type {
case MessageTypeRequest:
h.handleRequest(client, msg)
case MessageTypeSubscription:
h.handleSubscription(client, msg)
case MessageTypePing:
h.handlePing(client, msg)
default:
h.sendError(clientID, msg.ID, "invalid_message_type", fmt.Sprintf("Unknown message type: %s", msg.Type))
}
}
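// Illustration (hypothetical client ID): for an inbound topic "spec/device-42/request",
// strings.Split yields ["spec", "device-42", "request"], so parts[len(parts)-2]
// resolves the client ID "device-42"; responses for that client are then published
// on "spec/device-42/response".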
// handleRequest processes CRUD requests
func (h *Handler) handleRequest(client *Client, msg *Message) {
ctx := client.ctx
schema := msg.Schema
entity := msg.Entity
recordID := msg.RecordID
// Get model from registry
model, err := h.registry.GetModelByEntity(schema, entity)
if err != nil {
logger.Error("[MQTTSpec] Model not found for %s.%s: %v", schema, entity, err)
h.sendError(client.ID, msg.ID, "model_not_found", fmt.Sprintf("Model not found: %s.%s", schema, entity))
return
}
// Validate and unwrap model
result, err := common.ValidateAndUnwrapModel(model)
if err != nil {
logger.Error("[MQTTSpec] Model validation failed for %s.%s: %v", schema, entity, err)
h.sendError(client.ID, msg.ID, "invalid_model", err.Error())
return
}
model = result.Model
modelPtr := result.ModelPtr
tableName := h.getTableName(schema, entity, model)
// Create hook context
hookCtx := &HookContext{
Context: ctx,
Handler: nil, // Not used for MQTT
Message: msg,
Schema: schema,
Entity: entity,
TableName: tableName,
Model: model,
ModelPtr: modelPtr,
Options: msg.Options,
ID: recordID,
Data: msg.Data,
Metadata: map[string]interface{}{
"mqtt_client": client,
},
}
// Route to operation handler
switch msg.Operation {
case OperationRead:
h.handleRead(client, msg, hookCtx)
case OperationCreate:
h.handleCreate(client, msg, hookCtx)
case OperationUpdate:
h.handleUpdate(client, msg, hookCtx)
case OperationDelete:
h.handleDelete(client, msg, hookCtx)
case OperationMeta:
h.handleMeta(client, msg, hookCtx)
default:
h.sendError(client.ID, msg.ID, "invalid_operation", fmt.Sprintf("Unknown operation: %s", msg.Operation))
}
}
// handleRead processes a read operation
func (h *Handler) handleRead(client *Client, msg *Message, hookCtx *HookContext) {
// Execute before hook
if err := h.hooks.Execute(BeforeRead, hookCtx); err != nil {
logger.Error("[MQTTSpec] BeforeRead hook failed: %v", err)
h.sendError(client.ID, msg.ID, "hook_error", err.Error())
return
}
// Perform read operation
var data interface{}
var metadata map[string]interface{}
var err error
if hookCtx.ID != "" {
// Read single record by ID
data, err = h.readByID(hookCtx)
metadata = map[string]interface{}{"total": 1}
} else {
// Read multiple records
data, metadata, err = h.readMultiple(hookCtx)
}
if err != nil {
logger.Error("[MQTTSpec] Read operation failed: %v", err)
h.sendError(client.ID, msg.ID, "read_error", err.Error())
return
}
// Update hook context
hookCtx.Result = data
// Execute after hook
if err := h.hooks.Execute(AfterRead, hookCtx); err != nil {
logger.Error("[MQTTSpec] AfterRead hook failed: %v", err)
h.sendError(client.ID, msg.ID, "hook_error", err.Error())
return
}
// Send response
h.sendResponse(client.ID, msg.ID, hookCtx.Result, metadata)
}
// handleCreate processes a create operation
func (h *Handler) handleCreate(client *Client, msg *Message, hookCtx *HookContext) {
// Execute before hook
if err := h.hooks.Execute(BeforeCreate, hookCtx); err != nil {
logger.Error("[MQTTSpec] BeforeCreate hook failed: %v", err)
h.sendError(client.ID, msg.ID, "hook_error", err.Error())
return
}
// Perform create operation
data, err := h.create(hookCtx)
if err != nil {
logger.Error("[MQTTSpec] Create operation failed: %v", err)
h.sendError(client.ID, msg.ID, "create_error", err.Error())
return
}
// Update hook context
hookCtx.Result = data
// Execute after hook
if err := h.hooks.Execute(AfterCreate, hookCtx); err != nil {
logger.Error("[MQTTSpec] AfterCreate hook failed: %v", err)
h.sendError(client.ID, msg.ID, "hook_error", err.Error())
return
}
// Send response
h.sendResponse(client.ID, msg.ID, hookCtx.Result, nil)
// Notify subscribers
h.notifySubscribers(hookCtx.Schema, hookCtx.Entity, OperationCreate, data)
}
// handleUpdate processes an update operation
func (h *Handler) handleUpdate(client *Client, msg *Message, hookCtx *HookContext) {
// Execute before hook
if err := h.hooks.Execute(BeforeUpdate, hookCtx); err != nil {
logger.Error("[MQTTSpec] BeforeUpdate hook failed: %v", err)
h.sendError(client.ID, msg.ID, "hook_error", err.Error())
return
}
// Perform update operation
data, err := h.update(hookCtx)
if err != nil {
logger.Error("[MQTTSpec] Update operation failed: %v", err)
h.sendError(client.ID, msg.ID, "update_error", err.Error())
return
}
// Update hook context
hookCtx.Result = data
// Execute after hook
if err := h.hooks.Execute(AfterUpdate, hookCtx); err != nil {
logger.Error("[MQTTSpec] AfterUpdate hook failed: %v", err)
h.sendError(client.ID, msg.ID, "hook_error", err.Error())
return
}
// Send response
h.sendResponse(client.ID, msg.ID, hookCtx.Result, nil)
// Notify subscribers
h.notifySubscribers(hookCtx.Schema, hookCtx.Entity, OperationUpdate, data)
}
// handleDelete processes a delete operation
func (h *Handler) handleDelete(client *Client, msg *Message, hookCtx *HookContext) {
// Execute before hook
if err := h.hooks.Execute(BeforeDelete, hookCtx); err != nil {
logger.Error("[MQTTSpec] BeforeDelete hook failed: %v", err)
h.sendError(client.ID, msg.ID, "hook_error", err.Error())
return
}
// Perform delete operation
if err := h.delete(hookCtx); err != nil {
logger.Error("[MQTTSpec] Delete operation failed: %v", err)
h.sendError(client.ID, msg.ID, "delete_error", err.Error())
return
}
// Execute after hook
if err := h.hooks.Execute(AfterDelete, hookCtx); err != nil {
logger.Error("[MQTTSpec] AfterDelete hook failed: %v", err)
h.sendError(client.ID, msg.ID, "hook_error", err.Error())
return
}
// Send response
h.sendResponse(client.ID, msg.ID, map[string]interface{}{"deleted": true}, nil)
// Notify subscribers
h.notifySubscribers(hookCtx.Schema, hookCtx.Entity, OperationDelete, map[string]interface{}{
"id": hookCtx.ID,
})
}
// handleMeta processes a metadata request
func (h *Handler) handleMeta(client *Client, msg *Message, hookCtx *HookContext) {
metadata, err := h.getMetadata(hookCtx)
if err != nil {
logger.Error("[MQTTSpec] Meta operation failed: %v", err)
h.sendError(client.ID, msg.ID, "meta_error", err.Error())
return
}
h.sendResponse(client.ID, msg.ID, metadata, nil)
}
// handleSubscription manages subscriptions
func (h *Handler) handleSubscription(client *Client, msg *Message) {
switch msg.Operation {
case OperationSubscribe:
h.handleSubscribe(client, msg)
case OperationUnsubscribe:
h.handleUnsubscribe(client, msg)
default:
h.sendError(client.ID, msg.ID, "invalid_subscription_operation", fmt.Sprintf("Unknown subscription operation: %s", msg.Operation))
}
}
// handleSubscribe creates a subscription
func (h *Handler) handleSubscribe(client *Client, msg *Message) {
// Generate subscription ID
subID := uuid.New().String()
// Create hook context
hookCtx := &HookContext{
Context: client.ctx,
Handler: nil, // Not used for MQTT
Message: msg,
Schema: msg.Schema,
Entity: msg.Entity,
Options: msg.Options,
Metadata: map[string]interface{}{
"mqtt_client": client,
},
}
// Execute before hook
if err := h.hooks.Execute(BeforeSubscribe, hookCtx); err != nil {
logger.Error("[MQTTSpec] BeforeSubscribe hook failed: %v", err)
h.sendError(client.ID, msg.ID, "hook_error", err.Error())
return
}
// Create subscription
sub := h.subscriptionManager.Subscribe(subID, client.ID, msg.Schema, msg.Entity, msg.Options)
client.AddSubscription(sub)
// Execute after hook
_ = h.hooks.Execute(AfterSubscribe, hookCtx)
// Send response
h.sendResponse(client.ID, msg.ID, map[string]interface{}{
"subscription_id": subID,
"schema": msg.Schema,
"entity": msg.Entity,
"notify_topic": h.getNotifyTopic(client.ID, subID),
}, nil)
logger.Info("[MQTTSpec] Subscription created: %s for %s.%s (client: %s)", subID, msg.Schema, msg.Entity, client.ID)
}
// handleUnsubscribe removes a subscription
func (h *Handler) handleUnsubscribe(client *Client, msg *Message) {
subID := msg.SubscriptionID
if subID == "" {
h.sendError(client.ID, msg.ID, "invalid_subscription", "Subscription ID is required")
return
}
// Create hook context
hookCtx := &HookContext{
Context: client.ctx,
Handler: nil, // Not used for MQTT
Message: msg,
Metadata: map[string]interface{}{
"mqtt_client": client,
},
}
// Execute before hook
if err := h.hooks.Execute(BeforeUnsubscribe, hookCtx); err != nil {
logger.Error("[MQTTSpec] BeforeUnsubscribe hook failed: %v", err)
h.sendError(client.ID, msg.ID, "hook_error", err.Error())
return
}
// Remove subscription
h.subscriptionManager.Unsubscribe(subID)
client.RemoveSubscription(subID)
// Execute after hook
_ = h.hooks.Execute(AfterUnsubscribe, hookCtx)
// Send response
h.sendResponse(client.ID, msg.ID, map[string]interface{}{
"unsubscribed": true,
"subscription_id": subID,
}, nil)
logger.Info("[MQTTSpec] Subscription removed: %s (client: %s)", subID, client.ID)
}
// handlePing responds to ping messages
func (h *Handler) handlePing(client *Client, msg *Message) {
pong := &ResponseMessage{
ID: msg.ID,
Type: MessageTypePong,
Success: true,
}
payload, _ := json.Marshal(pong)
topic := h.getResponseTopic(client.ID)
_ = h.broker.Publish(topic, h.config.QoS.Response, payload)
}
// notifySubscribers sends notifications to subscribers
func (h *Handler) notifySubscribers(schema, entity string, operation OperationType, data interface{}) {
subscriptions := h.subscriptionManager.GetSubscriptionsByEntity(schema, entity)
if len(subscriptions) == 0 {
return
}
for _, sub := range subscriptions {
// Check if data matches subscription filters
if !sub.MatchesFilters(data) {
continue
}
// Get client
client, exists := h.clientManager.GetClient(sub.ConnectionID)
if !exists {
continue
}
// Create notification message
notification := NewNotificationMessage(sub.ID, operation, schema, entity, data)
payload, err := json.Marshal(notification)
if err != nil {
logger.Error("[MQTTSpec] Failed to marshal notification: %v", err)
continue
}
// Publish to client's notify topic
topic := h.getNotifyTopic(client.ID, sub.ID)
if err := h.broker.Publish(topic, h.config.QoS.Notification, payload); err != nil {
logger.Error("[MQTTSpec] Failed to publish notification to %s: %v", topic, err)
}
}
}
// Response helpers
// sendResponse publishes a response message
func (h *Handler) sendResponse(clientID, msgID string, data interface{}, metadata map[string]interface{}) {
resp := NewResponseMessage(msgID, true, data)
resp.Metadata = metadata
payload, err := json.Marshal(resp)
if err != nil {
logger.Error("[MQTTSpec] Failed to marshal response: %v", err)
return
}
topic := h.getResponseTopic(clientID)
if err := h.broker.Publish(topic, h.config.QoS.Response, payload); err != nil {
logger.Error("[MQTTSpec] Failed to publish response to %s: %v", topic, err)
}
}
// sendError publishes an error response
func (h *Handler) sendError(clientID, msgID, code, message string) {
errResp := NewErrorResponse(msgID, code, message)
payload, _ := json.Marshal(errResp)
topic := h.getResponseTopic(clientID)
_ = h.broker.Publish(topic, h.config.QoS.Response, payload)
}
// Topic helpers
func (h *Handler) getRequestTopic(clientID string) string {
return fmt.Sprintf("%s/%s/request", h.config.Topics.Prefix, clientID)
}
func (h *Handler) getResponseTopic(clientID string) string {
return fmt.Sprintf("%s/%s/response", h.config.Topics.Prefix, clientID)
}
func (h *Handler) getNotifyTopic(clientID, subscriptionID string) string {
return fmt.Sprintf("%s/%s/notify/%s", h.config.Topics.Prefix, clientID, subscriptionID)
}
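// Illustration (hypothetical IDs): with Prefix "spec", client "device-42" and
// subscription "7f3c", the helpers above produce:
//   spec/device-42/request
//   spec/device-42/response
//   spec/device-42/notify/7f3c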
// Database operation helpers (adapted from websocketspec)
func (h *Handler) getTableName(schema, entity string, model interface{}) string {
// Use entity as table name
tableName := entity
if schema != "" {
tableName = schema + "." + tableName
}
return tableName
}
// readByID reads a single record by ID
func (h *Handler) readByID(hookCtx *HookContext) (interface{}, error) {
query := h.db.NewSelect().Model(hookCtx.ModelPtr).Table(hookCtx.TableName)
// Add ID filter
pkName := reflection.GetPrimaryKeyName(hookCtx.Model)
query = query.Where(fmt.Sprintf("%s = ?", pkName), hookCtx.ID)
// Apply columns
if hookCtx.Options != nil && len(hookCtx.Options.Columns) > 0 {
query = query.Column(hookCtx.Options.Columns...)
}
// Apply preloads (simplified)
if hookCtx.Options != nil {
for i := range hookCtx.Options.Preload {
query = query.PreloadRelation(hookCtx.Options.Preload[i].Relation)
}
}
// Execute query
if err := query.ScanModel(hookCtx.Context); err != nil {
return nil, fmt.Errorf("failed to read record: %w", err)
}
return hookCtx.ModelPtr, nil
}
// readMultiple reads multiple records
func (h *Handler) readMultiple(hookCtx *HookContext) (data interface{}, metadata map[string]interface{}, err error) {
query := h.db.NewSelect().Model(hookCtx.ModelPtr).Table(hookCtx.TableName)
// Apply options
if hookCtx.Options != nil {
// Apply filters
for _, filter := range hookCtx.Options.Filters {
query = query.Where(fmt.Sprintf("%s %s ?", filter.Column, h.getOperatorSQL(filter.Operator)), filter.Value)
}
// Apply sorting
for _, sort := range hookCtx.Options.Sort {
direction := "ASC"
if sort.Direction == "desc" {
direction = "DESC"
}
query = query.Order(fmt.Sprintf("%s %s", sort.Column, direction))
}
// Apply limit and offset
if hookCtx.Options.Limit != nil {
query = query.Limit(*hookCtx.Options.Limit)
}
if hookCtx.Options.Offset != nil {
query = query.Offset(*hookCtx.Options.Offset)
}
// Apply preloads
for i := range hookCtx.Options.Preload {
query = query.PreloadRelation(hookCtx.Options.Preload[i].Relation)
}
// Apply columns
if len(hookCtx.Options.Columns) > 0 {
query = query.Column(hookCtx.Options.Columns...)
}
}
// Execute query
if err := query.ScanModel(hookCtx.Context); err != nil {
return nil, nil, fmt.Errorf("failed to read records: %w", err)
}
// Get count
metadata = make(map[string]interface{})
countQuery := h.db.NewSelect().Model(hookCtx.ModelPtr).Table(hookCtx.TableName)
if hookCtx.Options != nil {
for _, filter := range hookCtx.Options.Filters {
countQuery = countQuery.Where(fmt.Sprintf("%s %s ?", filter.Column, h.getOperatorSQL(filter.Operator)), filter.Value)
}
}
count, _ := countQuery.Count(hookCtx.Context)
metadata["total"] = count
metadata["count"] = reflection.Len(hookCtx.ModelPtr)
return hookCtx.ModelPtr, metadata, nil
}
// create creates a new record
func (h *Handler) create(hookCtx *HookContext) (interface{}, error) {
// Marshal and unmarshal data into model
dataBytes, err := json.Marshal(hookCtx.Data)
if err != nil {
return nil, fmt.Errorf("failed to marshal data: %w", err)
}
if err := json.Unmarshal(dataBytes, hookCtx.ModelPtr); err != nil {
return nil, fmt.Errorf("failed to unmarshal data into model: %w", err)
}
// Insert record
query := h.db.NewInsert().Model(hookCtx.ModelPtr).Table(hookCtx.TableName)
if _, err := query.Exec(hookCtx.Context); err != nil {
return nil, fmt.Errorf("failed to create record: %w", err)
}
return hookCtx.ModelPtr, nil
}
// update updates an existing record
func (h *Handler) update(hookCtx *HookContext) (interface{}, error) {
// Marshal and unmarshal data into model
dataBytes, err := json.Marshal(hookCtx.Data)
if err != nil {
return nil, fmt.Errorf("failed to marshal data: %w", err)
}
if err := json.Unmarshal(dataBytes, hookCtx.ModelPtr); err != nil {
return nil, fmt.Errorf("failed to unmarshal data into model: %w", err)
}
// Update record
query := h.db.NewUpdate().Model(hookCtx.ModelPtr).Table(hookCtx.TableName)
// Add ID filter
pkName := reflection.GetPrimaryKeyName(hookCtx.Model)
query = query.Where(fmt.Sprintf("%s = ?", pkName), hookCtx.ID)
if _, err := query.Exec(hookCtx.Context); err != nil {
return nil, fmt.Errorf("failed to update record: %w", err)
}
// Fetch updated record
return h.readByID(hookCtx)
}
// delete deletes a record
func (h *Handler) delete(hookCtx *HookContext) error {
query := h.db.NewDelete().Model(hookCtx.ModelPtr).Table(hookCtx.TableName)
// Add ID filter
pkName := reflection.GetPrimaryKeyName(hookCtx.Model)
query = query.Where(fmt.Sprintf("%s = ?", pkName), hookCtx.ID)
if _, err := query.Exec(hookCtx.Context); err != nil {
return fmt.Errorf("failed to delete record: %w", err)
}
return nil
}
// getMetadata returns schema metadata for an entity
func (h *Handler) getMetadata(hookCtx *HookContext) (interface{}, error) {
metadata := make(map[string]interface{})
metadata["schema"] = hookCtx.Schema
metadata["entity"] = hookCtx.Entity
metadata["table_name"] = hookCtx.TableName
// Get fields from model using reflection
columns := reflection.GetModelColumns(hookCtx.Model)
metadata["columns"] = columns
metadata["primary_key"] = reflection.GetPrimaryKeyName(hookCtx.Model)
return metadata, nil
}
// getOperatorSQL converts filter operator to SQL operator
func (h *Handler) getOperatorSQL(operator string) string {
switch operator {
case "eq":
return "="
case "neq":
return "!="
case "gt":
return ">"
case "gte":
return ">="
case "lt":
return "<"
case "lte":
return "<="
case "like":
return "LIKE"
case "ilike":
return "ILIKE"
case "in":
return "IN"
default:
return "="
}
}
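To tie the pieces together, a minimal server-side wiring sketch (illustrative, not part of this diff): it assumes NewGormAdapter and NewModelRegistry satisfy common.Database and common.ModelRegistry as they do in the handler tests below, and uses a hypothetical User model:

package main

import (
	"log"
	"os"
	"os/signal"

	"github.com/bitechdev/ResolveSpec/pkg/common/adapters/database"
	"github.com/bitechdev/ResolveSpec/pkg/modelregistry"
	"github.com/bitechdev/ResolveSpec/pkg/mqttspec"
	"gorm.io/driver/sqlite"
	"gorm.io/gorm"
)

type User struct {
	ID    uint   `json:"id" gorm:"primaryKey"`
	Name  string `json:"name"`
	Email string `json:"email"`
}

func main() {
	gormDB, err := gorm.Open(sqlite.Open("app.db"), &gorm.Config{})
	if err != nil {
		log.Fatalf("open database: %v", err)
	}
	if err := gormDB.AutoMigrate(&User{}); err != nil {
		log.Fatalf("migrate: %v", err)
	}

	registry := modelregistry.NewModelRegistry()
	registry.RegisterModel("public.users", &User{})

	cfg := mqttspec.DefaultConfig() // embedded broker on localhost:1883

	handler, err := mqttspec.NewHandler(database.NewGormAdapter(gormDB), registry, cfg)
	if err != nil {
		log.Fatalf("create handler: %v", err)
	}
	if err := handler.Start(); err != nil {
		log.Fatalf("start handler: %v", err)
	}
	defer func() { _ = handler.Shutdown() }()

	// Wait for Ctrl+C, then shut down gracefully.
	stop := make(chan os.Signal, 1)
	signal.Notify(stop, os.Interrupt)
	<-stop
}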


@@ -0,0 +1,743 @@
package mqttspec
import (
"context"
"encoding/json"
"fmt"
"strings"
"sync"
"testing"
"time"
"github.com/bitechdev/ResolveSpec/pkg/common"
"github.com/bitechdev/ResolveSpec/pkg/common/adapters/database"
"github.com/bitechdev/ResolveSpec/pkg/modelregistry"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"gorm.io/driver/sqlite"
"gorm.io/gorm"
)
// Test model
type TestUser struct {
ID uint `json:"id" gorm:"primaryKey"`
Name string `json:"name"`
Email string `json:"email"`
Status string `json:"status"`
TenantID string `json:"tenant_id"`
CreatedAt time.Time
UpdatedAt time.Time
}
func (TestUser) TableName() string {
return "users"
}
// setupTestHandler creates a handler with in-memory SQLite database
func setupTestHandler(t *testing.T) (*Handler, *gorm.DB) {
// Create in-memory SQLite database
db, err := gorm.Open(sqlite.Open(":memory:"), &gorm.Config{})
require.NoError(t, err)
// Auto-migrate test model
err = db.AutoMigrate(&TestUser{})
require.NoError(t, err)
// Create handler
config := DefaultConfig()
config.Broker.Port = 21883 // Use different port for handler tests
adapter := database.NewGormAdapter(db)
registry := modelregistry.NewModelRegistry()
registry.RegisterModel("public.users", &TestUser{})
handler, err := NewHandlerWithDatabase(adapter, registry, WithEmbeddedBroker(config.Broker))
require.NoError(t, err)
return handler, db
}
func TestNewHandler(t *testing.T) {
handler, _ := setupTestHandler(t)
defer handler.Shutdown()
assert.NotNil(t, handler)
assert.NotNil(t, handler.db)
assert.NotNil(t, handler.registry)
assert.NotNil(t, handler.hooks)
assert.NotNil(t, handler.clientManager)
assert.NotNil(t, handler.subscriptionManager)
assert.NotNil(t, handler.broker)
assert.NotNil(t, handler.config)
}
func TestHandler_StartShutdown(t *testing.T) {
handler, _ := setupTestHandler(t)
// Start handler
err := handler.Start()
require.NoError(t, err)
assert.True(t, handler.started)
// Shutdown handler
err = handler.Shutdown()
require.NoError(t, err)
assert.False(t, handler.started)
}
func TestHandler_HandleRead_Single(t *testing.T) {
handler, db := setupTestHandler(t)
defer handler.Shutdown()
// Insert test data
user := &TestUser{
ID: 1,
Name: "John Doe",
Email: "john@example.com",
Status: "active",
}
db.Create(user)
// Create mock client
client := NewClient("test-client", "test-user", handler)
// Create read request message
msg := &Message{
ID: "msg-1",
Type: MessageTypeRequest,
Operation: OperationRead,
Schema: "public",
Entity: "users",
Options: &common.RequestOptions{},
}
// Create hook context
hookCtx := &HookContext{
Context: context.Background(),
Handler: nil,
Schema: "public",
Entity: "users",
ID: "1",
Options: msg.Options,
Metadata: map[string]interface{}{"mqtt_client": client},
}
// Handle read
handler.handleRead(client, msg, hookCtx)
// Note: In a full integration test, we would verify the response was published
// to the correct MQTT topic. Here we're just testing that the handler doesn't error.
}
func TestHandler_HandleRead_Multiple(t *testing.T) {
handler, db := setupTestHandler(t)
defer handler.Shutdown()
// Insert test data
users := []TestUser{
{ID: 1, Name: "User 1", Email: "user1@example.com", Status: "active"},
{ID: 2, Name: "User 2", Email: "user2@example.com", Status: "active"},
{ID: 3, Name: "User 3", Email: "user3@example.com", Status: "inactive"},
}
for _, user := range users {
db.Create(&user)
}
// Create mock client
client := NewClient("test-client", "test-user", handler)
// Create read request with filter
msg := &Message{
ID: "msg-2",
Type: MessageTypeRequest,
Operation: OperationRead,
Schema: "public",
Entity: "users",
Options: &common.RequestOptions{
Filters: []common.FilterOption{
{Column: "status", Operator: "eq", Value: "active"},
},
},
}
// Create hook context
hookCtx := &HookContext{
Context: context.Background(),
Handler: nil,
Schema: "public",
Entity: "users",
Options: msg.Options,
Metadata: map[string]interface{}{"mqtt_client": client},
}
// Handle read
handler.handleRead(client, msg, hookCtx)
}
func TestHandler_HandleCreate(t *testing.T) {
handler, db := setupTestHandler(t)
defer handler.Shutdown()
// Start handler to initialize broker
err := handler.Start()
require.NoError(t, err)
defer handler.Shutdown()
// Create mock client
client := NewClient("test-client", "test-user", handler)
// Create request data
newUser := map[string]interface{}{
"name": "New User",
"email": "new@example.com",
"status": "active",
}
// Create create request message
msg := &Message{
ID: "msg-3",
Type: MessageTypeRequest,
Operation: OperationCreate,
Schema: "public",
Entity: "users",
Data: newUser,
Options: &common.RequestOptions{},
}
// Create hook context
hookCtx := &HookContext{
Context: context.Background(),
Handler: nil,
Schema: "public",
Entity: "users",
Data: newUser,
Options: msg.Options,
Metadata: map[string]interface{}{"mqtt_client": client},
}
// Handle create
handler.handleCreate(client, msg, hookCtx)
// Verify user was created in database
var user TestUser
result := db.Where("email = ?", "new@example.com").First(&user)
assert.NoError(t, result.Error)
assert.Equal(t, "New User", user.Name)
}
func TestHandler_HandleUpdate(t *testing.T) {
handler, db := setupTestHandler(t)
defer handler.Shutdown()
// Start handler
err := handler.Start()
require.NoError(t, err)
defer handler.Shutdown()
// Insert test data
user := &TestUser{
ID: 1,
Name: "Original Name",
Email: "original@example.com",
Status: "active",
}
db.Create(user)
// Create mock client
client := NewClient("test-client", "test-user", handler)
// Update data
updateData := map[string]interface{}{
"name": "Updated Name",
}
// Create update request message
msg := &Message{
ID: "msg-4",
Type: MessageTypeRequest,
Operation: OperationUpdate,
Schema: "public",
Entity: "users",
Data: updateData,
Options: &common.RequestOptions{},
}
// Create hook context
hookCtx := &HookContext{
Context: context.Background(),
Handler: nil,
Schema: "public",
Entity: "users",
ID: "1",
Data: updateData,
Options: msg.Options,
Metadata: map[string]interface{}{"mqtt_client": client},
}
// Handle update
handler.handleUpdate(client, msg, hookCtx)
// Verify user was updated
var updatedUser TestUser
db.First(&updatedUser, 1)
assert.Equal(t, "Updated Name", updatedUser.Name)
}
func TestHandler_HandleDelete(t *testing.T) {
handler, db := setupTestHandler(t)
defer handler.Shutdown()
// Start handler
err := handler.Start()
require.NoError(t, err)
defer handler.Shutdown()
// Insert test data
user := &TestUser{
ID: 1,
Name: "To Delete",
Email: "delete@example.com",
Status: "active",
}
db.Create(user)
// Create mock client
client := NewClient("test-client", "test-user", handler)
// Create delete request message
msg := &Message{
ID: "msg-5",
Type: MessageTypeRequest,
Operation: OperationDelete,
Schema: "public",
Entity: "users",
Options: &common.RequestOptions{},
}
// Create hook context
hookCtx := &HookContext{
Context: context.Background(),
Handler: nil,
Schema: "public",
Entity: "users",
ID: "1",
Options: msg.Options,
Metadata: map[string]interface{}{"mqtt_client": client},
}
// Handle delete
handler.handleDelete(client, msg, hookCtx)
// Verify user was deleted
var deletedUser TestUser
result := db.First(&deletedUser, 1)
assert.Error(t, result.Error)
assert.Equal(t, gorm.ErrRecordNotFound, result.Error)
}
func TestHandler_HandleSubscribe(t *testing.T) {
handler, _ := setupTestHandler(t)
defer handler.Shutdown()
// Start handler
err := handler.Start()
require.NoError(t, err)
defer handler.Shutdown()
// Create mock client
client := NewClient("test-client", "test-user", handler)
// Create subscribe message
msg := &Message{
ID: "msg-6",
Type: MessageTypeSubscription,
Operation: OperationSubscribe,
Schema: "public",
Entity: "users",
Options: &common.RequestOptions{
Filters: []common.FilterOption{
{Column: "status", Operator: "eq", Value: "active"},
},
},
}
// Handle subscribe
handler.handleSubscribe(client, msg)
// Verify subscription was created
subscriptions := handler.subscriptionManager.GetSubscriptionsByEntity("public", "users")
assert.Len(t, subscriptions, 1)
assert.Equal(t, client.ID, subscriptions[0].ConnectionID)
}
func TestHandler_HandleUnsubscribe(t *testing.T) {
handler, _ := setupTestHandler(t)
defer handler.Shutdown()
// Start handler
err := handler.Start()
require.NoError(t, err)
defer handler.Shutdown()
// Create mock client
client := NewClient("test-client", "test-user", handler)
// Create subscription using Subscribe method
sub := handler.subscriptionManager.Subscribe("sub-1", client.ID, "public", "users", &common.RequestOptions{})
client.AddSubscription(sub)
// Create unsubscribe message with subscription ID in Data
msg := &Message{
ID: "msg-7",
Type: MessageTypeSubscription,
Operation: OperationUnsubscribe,
Data: map[string]interface{}{"subscription_id": "sub-1"},
Options: &common.RequestOptions{},
}
// Handle unsubscribe
handler.handleUnsubscribe(client, msg)
// Verify subscription was removed
_, exists := handler.subscriptionManager.GetSubscription("sub-1")
assert.False(t, exists)
}
func TestHandler_NotifySubscribers(t *testing.T) {
handler, _ := setupTestHandler(t)
defer handler.Shutdown()
// Start handler
err := handler.Start()
require.NoError(t, err)
defer handler.Shutdown()
// Create mock clients
client1 := handler.clientManager.Register("client-1", "user1", handler)
client2 := handler.clientManager.Register("client-2", "user2", handler)
// Create subscriptions
opts1 := &common.RequestOptions{
Filters: []common.FilterOption{
{Column: "status", Operator: "eq", Value: "active"},
},
}
sub1 := handler.subscriptionManager.Subscribe("sub-1", client1.ID, "public", "users", opts1)
client1.AddSubscription(sub1)
opts2 := &common.RequestOptions{
Filters: []common.FilterOption{
{Column: "status", Operator: "eq", Value: "inactive"},
},
}
sub2 := handler.subscriptionManager.Subscribe("sub-2", client2.ID, "public", "users", opts2)
client2.AddSubscription(sub2)
// Notify subscribers with active user
activeUser := map[string]interface{}{
"id": 1,
"name": "Active User",
"status": "active",
}
// This should notify sub-1 only
handler.notifySubscribers("public", "users", OperationCreate, activeUser)
// Note: In a full integration test, we would verify that the notification
// was published to the correct MQTT topic. Here we're just testing that
// the handler doesn't error and finds the correct subscriptions.
}
func TestHandler_Hooks_BeforeRead(t *testing.T) {
handler, db := setupTestHandler(t)
defer handler.Shutdown()
// Insert test data with different tenants
users := []TestUser{
{ID: 1, Name: "User 1", TenantID: "tenant-a", Status: "active"},
{ID: 2, Name: "User 2", TenantID: "tenant-b", Status: "active"},
{ID: 3, Name: "User 3", TenantID: "tenant-a", Status: "active"},
}
for _, user := range users {
db.Create(&user)
}
// Register hook to filter by tenant
handler.Hooks().Register(BeforeRead, func(ctx *HookContext) error {
// Auto-inject tenant filter
ctx.Options.Filters = append(ctx.Options.Filters, common.FilterOption{
Column: "tenant_id",
Operator: "eq",
Value: "tenant-a",
})
return nil
})
// Create mock client
client := NewClient("test-client", "test-user", handler)
// Create read request (no tenant filter)
msg := &Message{
ID: "msg-8",
Type: MessageTypeRequest,
Operation: OperationRead,
Schema: "public",
Entity: "users",
Options: &common.RequestOptions{},
}
// Create hook context
hookCtx := &HookContext{
Context: context.Background(),
Handler: nil,
Schema: "public",
Entity: "users",
Options: msg.Options,
Metadata: map[string]interface{}{"mqtt_client": client},
}
// Handle read
handler.handleRead(client, msg, hookCtx)
// The hook should have injected the tenant filter
// In a full test, we would verify only tenant-a users were returned
}
func TestHandler_Hooks_BeforeCreate(t *testing.T) {
handler, db := setupTestHandler(t)
defer handler.Shutdown()
// Register hook to set default values
handler.Hooks().Register(BeforeCreate, func(ctx *HookContext) error {
// Auto-set tenant_id
if dataMap, ok := ctx.Data.(map[string]interface{}); ok {
dataMap["tenant_id"] = "auto-tenant"
}
return nil
})
// Start handler
err := handler.Start()
require.NoError(t, err)
defer handler.Shutdown()
// Create mock client
client := NewClient("test-client", "test-user", handler)
// Create user without tenant_id
newUser := map[string]interface{}{
"name": "Test User",
"email": "test@example.com",
"status": "active",
}
msg := &Message{
ID: "msg-9",
Type: MessageTypeRequest,
Operation: OperationCreate,
Schema: "public",
Entity: "users",
Data: newUser,
Options: &common.RequestOptions{},
}
hookCtx := &HookContext{
Context: context.Background(),
Handler: nil,
Schema: "public",
Entity: "users",
Data: newUser,
Options: msg.Options,
Metadata: map[string]interface{}{"mqtt_client": client},
}
// Handle create
handler.handleCreate(client, msg, hookCtx)
// Verify tenant_id was auto-set
var user TestUser
db.Where("email = ?", "test@example.com").First(&user)
assert.Equal(t, "auto-tenant", user.TenantID)
}
func TestHandler_ConcurrentRequests(t *testing.T) {
handler, db := setupTestHandler(t)
defer handler.Shutdown()
// Start handler
err := handler.Start()
require.NoError(t, err)
defer handler.Shutdown()
// Create multiple clients
var wg sync.WaitGroup
numClients := 10
for i := 0; i < numClients; i++ {
wg.Add(1)
go func(id int) {
defer wg.Done()
client := NewClient(fmt.Sprintf("client-%d", id), fmt.Sprintf("user%d", id), handler)
// Create user
newUser := map[string]interface{}{
"name": fmt.Sprintf("User %d", id),
"email": fmt.Sprintf("user%d@example.com", id),
"status": "active",
}
msg := &Message{
ID: fmt.Sprintf("msg-%d", id),
Type: MessageTypeRequest,
Operation: OperationCreate,
Schema: "public",
Entity: "users",
Data: newUser,
Options: &common.RequestOptions{},
}
hookCtx := &HookContext{
Context: context.Background(),
Handler: nil,
Schema: "public",
Entity: "users",
Data: newUser,
Options: msg.Options,
Metadata: map[string]interface{}{"mqtt_client": client},
}
handler.handleCreate(client, msg, hookCtx)
}(i)
}
wg.Wait()
// Verify all users were created
var count int64
db.Model(&TestUser{}).Count(&count)
assert.Equal(t, int64(numClients), count)
}
func TestHandler_TopicHelpers(t *testing.T) {
handler, _ := setupTestHandler(t)
defer handler.Shutdown()
clientID := "test-client"
subscriptionID := "sub-123"
requestTopic := handler.getRequestTopic(clientID)
assert.Equal(t, "spec/test-client/request", requestTopic)
responseTopic := handler.getResponseTopic(clientID)
assert.Equal(t, "spec/test-client/response", responseTopic)
notifyTopic := handler.getNotifyTopic(clientID, subscriptionID)
assert.Equal(t, "spec/test-client/notify/sub-123", notifyTopic)
}
func TestHandler_SendResponse(t *testing.T) {
handler, _ := setupTestHandler(t)
defer handler.Shutdown()
// Start handler
err := handler.Start()
require.NoError(t, err)
defer handler.Shutdown()
// Test data
clientID := "test-client"
msgID := "msg-123"
data := map[string]interface{}{"id": 1, "name": "Test"}
metadata := map[string]interface{}{"total": 1}
// Send response (should not error)
handler.sendResponse(clientID, msgID, data, metadata)
}
func TestHandler_SendError(t *testing.T) {
handler, _ := setupTestHandler(t)
defer handler.Shutdown()
// Start handler
err := handler.Start()
require.NoError(t, err)
defer handler.Shutdown()
// Test error
clientID := "test-client"
msgID := "msg-123"
code := "test_error"
message := "Test error message"
// Send error (should not error)
handler.sendError(clientID, msgID, code, message)
}
// extractClientID extracts the client ID from a topic like spec/{client_id}/request
func extractClientID(topic string) string {
parts := strings.Split(topic, "/")
if len(parts) >= 2 {
return parts[len(parts)-2]
}
return ""
}
func TestHandler_ExtractClientID(t *testing.T) {
tests := []struct {
topic string
expected string
}{
{"spec/client-123/request", "client-123"},
{"spec/abc-xyz/request", "abc-xyz"},
{"spec/test/request", "test"},
}
for _, tt := range tests {
result := extractClientID(tt.topic)
assert.Equal(t, tt.expected, result, "topic: %s", tt.topic)
}
}
func TestHandler_HandleIncomingMessage_InvalidJSON(t *testing.T) {
handler, _ := setupTestHandler(t)
defer handler.Shutdown()
// Start handler
err := handler.Start()
require.NoError(t, err)
defer handler.Shutdown()
// Invalid JSON payload
payload := []byte("{invalid json")
// Should not panic
handler.handleIncomingMessage("spec/test-client/request", payload)
}
func TestHandler_HandleIncomingMessage_ValidMessage(t *testing.T) {
handler, _ := setupTestHandler(t)
defer handler.Shutdown()
// Start handler
err := handler.Start()
require.NoError(t, err)
defer handler.Shutdown()
// Valid message
msg := &Message{
ID: "msg-1",
Type: MessageTypeRequest,
Operation: OperationRead,
Schema: "public",
Entity: "users",
Options: &common.RequestOptions{},
}
payload, _ := json.Marshal(msg)
// Should not panic or error
handler.handleIncomingMessage("spec/test-client/request", payload)
}

pkg/mqttspec/hooks.go Normal file

@@ -0,0 +1,51 @@
package mqttspec
import (
"github.com/bitechdev/ResolveSpec/pkg/websocketspec"
)
// Hook types - aliases to websocketspec for lifecycle hook consistency
type (
// HookType defines the type of lifecycle hook
HookType = websocketspec.HookType
// HookFunc is a function that executes during a lifecycle hook
HookFunc = websocketspec.HookFunc
// HookContext contains all context for hook execution
// Note: For MQTT, the Client is stored in Metadata["mqtt_client"]
HookContext = websocketspec.HookContext
// HookRegistry manages all registered hooks
HookRegistry = websocketspec.HookRegistry
)
// Hook type constants - all 16 lifecycle hooks
const (
// CRUD operation hooks
BeforeRead = websocketspec.BeforeRead
AfterRead = websocketspec.AfterRead
BeforeCreate = websocketspec.BeforeCreate
AfterCreate = websocketspec.AfterCreate
BeforeUpdate = websocketspec.BeforeUpdate
AfterUpdate = websocketspec.AfterUpdate
BeforeDelete = websocketspec.BeforeDelete
AfterDelete = websocketspec.AfterDelete
// Subscription hooks
BeforeSubscribe = websocketspec.BeforeSubscribe
AfterSubscribe = websocketspec.AfterSubscribe
BeforeUnsubscribe = websocketspec.BeforeUnsubscribe
AfterUnsubscribe = websocketspec.AfterUnsubscribe
// Connection hooks
BeforeConnect = websocketspec.BeforeConnect
AfterConnect = websocketspec.AfterConnect
BeforeDisconnect = websocketspec.BeforeDisconnect
AfterDisconnect = websocketspec.AfterDisconnect
)
// NewHookRegistry creates a new hook registry
func NewHookRegistry() *HookRegistry {
return websocketspec.NewHookRegistry()
}
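Because these are type aliases, a registry can be built once and shared between the WebSocket and MQTT handlers. A minimal sketch, mirroring the BeforeCreate hook used in the handler tests above (the tenant value is an illustrative assumption, and WithHooks is the option defined in mqttspec.go below):
```go
// Sketch only: build a registry and register a BeforeCreate hook on it.
hooks := mqttspec.NewHookRegistry()
hooks.Register(mqttspec.BeforeCreate, func(ctx *mqttspec.HookContext) error {
	// Auto-set a tenant on incoming create payloads; "tenant-a" is a placeholder.
	if data, ok := ctx.Data.(map[string]interface{}); ok {
		data["tenant_id"] = "tenant-a"
	}
	return nil
})
```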

pkg/mqttspec/message.go Normal file

@@ -0,0 +1,63 @@
package mqttspec
import (
"github.com/bitechdev/ResolveSpec/pkg/websocketspec"
)
// Message types - aliases to websocketspec for protocol consistency
type (
// Message represents an MQTT message (identical to WebSocket message protocol)
Message = websocketspec.Message
// MessageType defines the type of message
MessageType = websocketspec.MessageType
// OperationType defines the operation to perform
OperationType = websocketspec.OperationType
// ResponseMessage is sent back to clients after processing requests
ResponseMessage = websocketspec.ResponseMessage
// NotificationMessage is sent to subscribers when data changes
NotificationMessage = websocketspec.NotificationMessage
// ErrorInfo contains error details
ErrorInfo = websocketspec.ErrorInfo
)
// Message type constants
const (
MessageTypeRequest = websocketspec.MessageTypeRequest
MessageTypeResponse = websocketspec.MessageTypeResponse
MessageTypeNotification = websocketspec.MessageTypeNotification
MessageTypeSubscription = websocketspec.MessageTypeSubscription
MessageTypeError = websocketspec.MessageTypeError
MessageTypePing = websocketspec.MessageTypePing
MessageTypePong = websocketspec.MessageTypePong
)
// Operation type constants
const (
OperationRead = websocketspec.OperationRead
OperationCreate = websocketspec.OperationCreate
OperationUpdate = websocketspec.OperationUpdate
OperationDelete = websocketspec.OperationDelete
OperationSubscribe = websocketspec.OperationSubscribe
OperationUnsubscribe = websocketspec.OperationUnsubscribe
OperationMeta = websocketspec.OperationMeta
)
// Helper functions from websocketspec
var (
// NewResponseMessage creates a new response message
NewResponseMessage = websocketspec.NewResponseMessage
// NewErrorResponse creates an error response
NewErrorResponse = websocketspec.NewErrorResponse
// NewNotificationMessage creates a notification message
NewNotificationMessage = websocketspec.NewNotificationMessage
// ParseMessage parses a JSON message into a Message struct
ParseMessage = websocketspec.ParseMessage
)
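A hedged sketch of the client side of this protocol, mirroring the valid-message handler test above; the topic layout follows the request/response topics exercised in the topic helper test:
```go
// Sketch only: build a read request and serialize it for publishing to the broker.
msg := &mqttspec.Message{
	ID:        "msg-1",
	Type:      mqttspec.MessageTypeRequest,
	Operation: mqttspec.OperationRead,
	Schema:    "public",
	Entity:    "users",
	Options:   &common.RequestOptions{},
}
payload, err := json.Marshal(msg)
if err != nil {
	log.Fatal(err)
}
// Publish payload to spec/{client_id}/request; the response arrives on
// spec/{client_id}/response.
_ = payload
```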

pkg/mqttspec/mqttspec.go Normal file

@@ -0,0 +1,104 @@
package mqttspec
import (
"github.com/bitechdev/ResolveSpec/pkg/common"
"github.com/bitechdev/ResolveSpec/pkg/common/adapters/database"
"github.com/bitechdev/ResolveSpec/pkg/modelregistry"
"gorm.io/gorm"
"github.com/uptrace/bun"
)
// NewHandlerWithGORM creates an MQTT handler with GORM database adapter
func NewHandlerWithGORM(db *gorm.DB, opts ...Option) (*Handler, error) {
adapter := database.NewGormAdapter(db)
registry := modelregistry.NewModelRegistry()
return NewHandlerWithDatabase(adapter, registry, opts...)
}
// NewHandlerWithBun creates an MQTT handler with Bun database adapter
func NewHandlerWithBun(db *bun.DB, opts ...Option) (*Handler, error) {
adapter := database.NewBunAdapter(db)
registry := modelregistry.NewModelRegistry()
return NewHandlerWithDatabase(adapter, registry, opts...)
}
// NewHandlerWithDatabase creates an MQTT handler with a custom database adapter
func NewHandlerWithDatabase(db common.Database, registry common.ModelRegistry, opts ...Option) (*Handler, error) {
// Start with default configuration
config := DefaultConfig()
// Create handler with basic initialization
// Note: broker and clientManager will be initialized after options are applied
handler, err := NewHandler(db, registry, config)
if err != nil {
return nil, err
}
// Apply functional options
for _, opt := range opts {
if err := opt(handler); err != nil {
return nil, err
}
}
// Reinitialize broker based on final config (after options)
if handler.config.BrokerMode == BrokerModeEmbedded {
handler.broker = NewEmbeddedBroker(handler.config.Broker, handler.clientManager)
} else {
handler.broker = NewExternalBrokerClient(handler.config.ExternalBroker, handler.clientManager)
}
// Set handler reference in broker
handler.broker.SetHandler(handler)
return handler, nil
}
// Option is a functional option for configuring the handler
type Option func(*Handler) error
// WithEmbeddedBroker configures the handler to use an embedded MQTT broker
func WithEmbeddedBroker(config BrokerConfig) Option {
return func(h *Handler) error {
h.config.BrokerMode = BrokerModeEmbedded
h.config.Broker = config
return nil
}
}
// WithExternalBroker configures the handler to connect to an external MQTT broker
func WithExternalBroker(config ExternalBrokerConfig) Option {
return func(h *Handler) error {
h.config.BrokerMode = BrokerModeExternal
h.config.ExternalBroker = config
return nil
}
}
// WithHooks sets a pre-configured hook registry
func WithHooks(hooks *HookRegistry) Option {
return func(h *Handler) error {
h.hooks = hooks
return nil
}
}
// WithTopicPrefix sets a custom topic prefix (default: "spec")
func WithTopicPrefix(prefix string) Option {
return func(h *Handler) error {
h.config.Topics.Prefix = prefix
return nil
}
}
// WithQoS sets custom QoS levels for different message types
func WithQoS(request, response, notification byte) Option {
return func(h *Handler) error {
h.config.QoS.Request = request
h.config.QoS.Response = response
h.config.QoS.Notification = notification
return nil
}
}
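A minimal usage sketch for the constructors and functional options above; the *gorm.DB value and the chosen QoS levels are assumptions, not values taken from the diff:
```go
// Sketch only: construct and start an MQTT handler with functional options.
func newMQTTHandler(db *gorm.DB) (*mqttspec.Handler, error) {
	handler, err := mqttspec.NewHandlerWithGORM(db,
		mqttspec.WithTopicPrefix("spec"), // default prefix, shown explicitly
		mqttspec.WithQoS(1, 1, 0),        // request, response, notification QoS
	)
	if err != nil {
		return nil, err
	}
	if err := handler.Start(); err != nil {
		return nil, err
	}
	return handler, nil
}
```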

View File

@@ -0,0 +1,21 @@
package mqttspec
import (
"github.com/bitechdev/ResolveSpec/pkg/websocketspec"
)
// Subscription types - aliases to websocketspec for subscription management
type (
// Subscription represents an active subscription to entity changes
// The key difference for MQTT: notifications are delivered via MQTT publish
// to spec/{client_id}/notify/{subscription_id} instead of WebSocket send
Subscription = websocketspec.Subscription
// SubscriptionManager manages all active subscriptions
SubscriptionManager = websocketspec.SubscriptionManager
)
// NewSubscriptionManager creates a new subscription manager
func NewSubscriptionManager() *SubscriptionManager {
return websocketspec.NewSubscriptionManager()
}

View File

@@ -273,25 +273,151 @@ handler.SetOpenAPIGenerator(func() (string, error) {
})
```
## Using with Swagger UI
## Using the Built-in UI Handler
You can serve the generated OpenAPI spec with Swagger UI:
The package includes a built-in UI handler that serves popular OpenAPI visualization tools. There is no need to download or manage static files; everything is loaded from a CDN.
### Quick Start
```go
import (
"net/http"
"github.com/bitechdev/ResolveSpec/pkg/openapi"
"github.com/gorilla/mux"
)
func main() {
router := mux.NewRouter()
// Setup your API routes and OpenAPI generator...
// (see examples above)
// Add the UI handler - defaults to Swagger UI
openapi.SetupUIRoute(router, "/docs", openapi.UIConfig{
UIType: openapi.SwaggerUI,
SpecURL: "/openapi",
Title: "My API Documentation",
})
// Now visit http://localhost:8080/docs
http.ListenAndServe(":8080", router)
}
```
### Supported UI Frameworks
The handler supports four popular OpenAPI UI frameworks:
#### 1. Swagger UI (Default)
The most widely used OpenAPI UI with excellent compatibility and features.
```go
openapi.SetupUIRoute(router, "/docs", openapi.UIConfig{
UIType: openapi.SwaggerUI,
Theme: "dark", // optional: "light" or "dark"
})
```
#### 2. RapiDoc
Modern, customizable, and feature-rich OpenAPI UI.
```go
openapi.SetupUIRoute(router, "/docs", openapi.UIConfig{
UIType: openapi.RapiDoc,
Theme: "dark",
})
```
#### 3. Redoc
Clean, responsive documentation with great UX.
```go
openapi.SetupUIRoute(router, "/docs", openapi.UIConfig{
UIType: openapi.Redoc,
})
```
#### 4. Scalar
Modern and sleek OpenAPI documentation.
```go
openapi.SetupUIRoute(router, "/docs", openapi.UIConfig{
UIType: openapi.Scalar,
Theme: "dark",
})
```
### Configuration Options
```go
type UIConfig struct {
UIType UIType // SwaggerUI, RapiDoc, Redoc, or Scalar
SpecURL string // URL to OpenAPI spec (default: "/openapi")
Title string // Page title (default: "API Documentation")
FaviconURL string // Custom favicon URL (optional)
CustomCSS string // Custom CSS to inject (optional)
Theme string // "light" or "dark" (support varies by UI)
}
```
### Custom Styling Example
```go
openapi.SetupUIRoute(router, "/docs", openapi.UIConfig{
UIType: openapi.SwaggerUI,
Title: "Acme Corp API",
CustomCSS: `
.swagger-ui .topbar {
background-color: #1976d2;
}
.swagger-ui .info .title {
color: #1976d2;
}
`,
})
```
### Using Multiple UIs
You can serve different UIs at different paths:
```go
// Swagger UI at /docs
openapi.SetupUIRoute(router, "/docs", openapi.UIConfig{
UIType: openapi.SwaggerUI,
})
// Redoc at /redoc
openapi.SetupUIRoute(router, "/redoc", openapi.UIConfig{
UIType: openapi.Redoc,
})
// RapiDoc at /api-docs
openapi.SetupUIRoute(router, "/api-docs", openapi.UIConfig{
UIType: openapi.RapiDoc,
})
```
### Manual Handler Usage
If you need more control, use the handler directly:
```go
handler := openapi.UIHandler(openapi.UIConfig{
UIType: openapi.SwaggerUI,
SpecURL: "/api/openapi.json",
})
router.Handle("/documentation", handler)
```
## Using with External Swagger UI
Alternatively, you can use an external Swagger UI instance:
1. Get the spec from `/openapi`
2. Load it in Swagger UI at `https://petstore.swagger.io/`
3. Or self-host Swagger UI and point it to your `/openapi` endpoint
Example with self-hosted Swagger UI:
```go
// Serve Swagger UI static files
router.PathPrefix("/swagger/").Handler(
http.StripPrefix("/swagger/", http.FileServer(http.Dir("./swagger-ui"))),
)
// Configure Swagger UI to use /openapi
```
## Testing
You can test the OpenAPI endpoint:
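For example, a quick sanity check in Go might fetch the spec and confirm it parses as JSON (an illustrative sketch only; the port and path assume the defaults from the Quick Start above):
```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Fetch the generated spec from the running server.
	resp, err := http.Get("http://localhost:8080/openapi")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// Confirm the body is valid JSON and print the declared OpenAPI version.
	var spec map[string]interface{}
	if err := json.NewDecoder(resp.Body).Decode(&spec); err != nil {
		log.Fatalf("spec is not valid JSON: %v", err)
	}
	fmt.Println("openapi version:", spec["openapi"])
}
```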

View File

@@ -183,6 +183,69 @@ func ExampleWithFuncSpec() {
_ = generatorFunc
}
// ExampleWithUIHandler shows how to serve OpenAPI documentation with a web UI
func ExampleWithUIHandler(db *gorm.DB) {
// Create handler and configure OpenAPI generator
handler := restheadspec.NewHandlerWithGORM(db)
registry := modelregistry.NewModelRegistry()
handler.SetOpenAPIGenerator(func() (string, error) {
generator := NewGenerator(GeneratorConfig{
Title: "My API",
Description: "API documentation with interactive UI",
Version: "1.0.0",
BaseURL: "http://localhost:8080",
Registry: registry,
IncludeRestheadSpec: true,
})
return generator.GenerateJSON()
})
// Setup routes
router := mux.NewRouter()
restheadspec.SetupMuxRoutes(router, handler, nil)
// Add UI handlers for different frameworks
// Swagger UI at /docs (most popular)
SetupUIRoute(router, "/docs", UIConfig{
UIType: SwaggerUI,
SpecURL: "/openapi",
Title: "My API - Swagger UI",
Theme: "light",
})
// RapiDoc at /rapidoc (modern alternative)
SetupUIRoute(router, "/rapidoc", UIConfig{
UIType: RapiDoc,
SpecURL: "/openapi",
Title: "My API - RapiDoc",
})
// Redoc at /redoc (clean and responsive)
SetupUIRoute(router, "/redoc", UIConfig{
UIType: Redoc,
SpecURL: "/openapi",
Title: "My API - Redoc",
})
// Scalar at /scalar (modern and sleek)
SetupUIRoute(router, "/scalar", UIConfig{
UIType: Scalar,
SpecURL: "/openapi",
Title: "My API - Scalar",
Theme: "dark",
})
// Now you can access:
// http://localhost:8080/docs - Swagger UI
// http://localhost:8080/rapidoc - RapiDoc
// http://localhost:8080/redoc - Redoc
// http://localhost:8080/scalar - Scalar
// http://localhost:8080/openapi - Raw OpenAPI JSON
_ = router
}
// ExampleCustomization shows advanced customization options
func ExampleCustomization() {
// Create registry and register models with descriptions using struct tags

pkg/openapi/ui_handler.go Normal file

@@ -0,0 +1,294 @@
package openapi
import (
"fmt"
"html/template"
"net/http"
"strings"
"github.com/gorilla/mux"
)
// UIType represents the type of OpenAPI UI to serve
type UIType string
const (
// SwaggerUI is the most popular OpenAPI UI
SwaggerUI UIType = "swagger-ui"
// RapiDoc is a modern, customizable OpenAPI UI
RapiDoc UIType = "rapidoc"
// Redoc is a clean, responsive OpenAPI UI
Redoc UIType = "redoc"
// Scalar is a modern and sleek OpenAPI UI
Scalar UIType = "scalar"
)
// UIConfig holds configuration for the OpenAPI UI handler
type UIConfig struct {
// UIType specifies which UI framework to use (default: SwaggerUI)
UIType UIType
// SpecURL is the URL to the OpenAPI spec JSON (default: "/openapi")
SpecURL string
// Title is the page title (default: "API Documentation")
Title string
// FaviconURL is the URL to the favicon (optional)
FaviconURL string
// CustomCSS allows injecting custom CSS (optional)
CustomCSS string
// Theme for the UI (light/dark, depends on UI type)
Theme string
}
// UIHandler creates an HTTP handler that serves an OpenAPI UI
func UIHandler(config UIConfig) http.HandlerFunc {
// Set defaults
if config.UIType == "" {
config.UIType = SwaggerUI
}
if config.SpecURL == "" {
config.SpecURL = "/openapi"
}
if config.Title == "" {
config.Title = "API Documentation"
}
if config.Theme == "" {
config.Theme = "light"
}
return func(w http.ResponseWriter, r *http.Request) {
var htmlContent string
var err error
switch config.UIType {
case SwaggerUI:
htmlContent, err = generateSwaggerUI(config)
case RapiDoc:
htmlContent, err = generateRapiDoc(config)
case Redoc:
htmlContent, err = generateRedoc(config)
case Scalar:
htmlContent, err = generateScalar(config)
default:
http.Error(w, "Unsupported UI type", http.StatusBadRequest)
return
}
if err != nil {
http.Error(w, fmt.Sprintf("Failed to generate UI: %v", err), http.StatusInternalServerError)
return
}
w.Header().Set("Content-Type", "text/html; charset=utf-8")
w.WriteHeader(http.StatusOK)
_, err = w.Write([]byte(htmlContent))
if err != nil {
http.Error(w, fmt.Sprintf("Failed to write response: %v", err), http.StatusInternalServerError)
return
}
}
}
// templateData wraps UIConfig to properly handle CSS in templates
type templateData struct {
UIConfig
SafeCustomCSS template.CSS
}
// generateSwaggerUI generates the HTML for Swagger UI
func generateSwaggerUI(config UIConfig) (string, error) {
tmpl := `<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>{{.Title}}</title>
{{if .FaviconURL}}<link rel="icon" type="image/png" href="{{.FaviconURL}}">{{end}}
<link rel="stylesheet" type="text/css" href="https://cdn.jsdelivr.net/npm/swagger-ui-dist@5/swagger-ui.css">
{{if .SafeCustomCSS}}<style>{{.SafeCustomCSS}}</style>{{end}}
<style>
html { box-sizing: border-box; overflow: -moz-scrollbars-vertical; overflow-y: scroll; }
*, *:before, *:after { box-sizing: inherit; }
body { margin: 0; padding: 0; }
</style>
</head>
<body>
<div id="swagger-ui"></div>
<script src="https://cdn.jsdelivr.net/npm/swagger-ui-dist@5/swagger-ui-bundle.js"></script>
<script src="https://cdn.jsdelivr.net/npm/swagger-ui-dist@5/swagger-ui-standalone-preset.js"></script>
<script>
window.onload = function() {
const ui = SwaggerUIBundle({
url: "{{.SpecURL}}",
dom_id: '#swagger-ui',
deepLinking: true,
presets: [
SwaggerUIBundle.presets.apis,
SwaggerUIStandalonePreset
],
plugins: [
SwaggerUIBundle.plugins.DownloadUrl
],
layout: "StandaloneLayout",
{{if eq .Theme "dark"}}
syntaxHighlight: {
activate: true,
theme: "monokai"
}
{{end}}
});
window.ui = ui;
};
</script>
</body>
</html>`
t, err := template.New("swagger").Parse(tmpl)
if err != nil {
return "", err
}
data := templateData{
UIConfig: config,
SafeCustomCSS: template.CSS(config.CustomCSS),
}
var buf strings.Builder
if err := t.Execute(&buf, data); err != nil {
return "", err
}
return buf.String(), nil
}
// generateRapiDoc generates the HTML for RapiDoc
func generateRapiDoc(config UIConfig) (string, error) {
theme := "light"
if config.Theme == "dark" {
theme = "dark"
}
tmpl := `<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>{{.Title}}</title>
{{if .FaviconURL}}<link rel="icon" type="image/png" href="{{.FaviconURL}}">{{end}}
<script type="module" src="https://unpkg.com/rapidoc/dist/rapidoc-min.js"></script>
{{if .SafeCustomCSS}}<style>{{.SafeCustomCSS}}</style>{{end}}
</head>
<body>
<rapi-doc
spec-url="{{.SpecURL}}"
theme="` + theme + `"
render-style="read"
show-header="true"
show-info="true"
allow-try="true"
allow-server-selection="true"
allow-authentication="true"
api-key-name="Authorization"
api-key-location="header"
></rapi-doc>
</body>
</html>`
t, err := template.New("rapidoc").Parse(tmpl)
if err != nil {
return "", err
}
data := templateData{
UIConfig: config,
SafeCustomCSS: template.CSS(config.CustomCSS),
}
var buf strings.Builder
if err := t.Execute(&buf, data); err != nil {
return "", err
}
return buf.String(), nil
}
// generateRedoc generates the HTML for Redoc
func generateRedoc(config UIConfig) (string, error) {
tmpl := `<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>{{.Title}}</title>
{{if .FaviconURL}}<link rel="icon" type="image/png" href="{{.FaviconURL}}">{{end}}
{{if .SafeCustomCSS}}<style>{{.SafeCustomCSS}}</style>{{end}}
<style>
body { margin: 0; padding: 0; }
</style>
</head>
<body>
<redoc spec-url="{{.SpecURL}}" {{if eq .Theme "dark"}}theme='{"colors": {"primary": {"main": "#dd5522"}}}'{{end}}></redoc>
<script src="https://cdn.redoc.ly/redoc/latest/bundles/redoc.standalone.js"></script>
</body>
</html>`
t, err := template.New("redoc").Parse(tmpl)
if err != nil {
return "", err
}
data := templateData{
UIConfig: config,
SafeCustomCSS: template.CSS(config.CustomCSS),
}
var buf strings.Builder
if err := t.Execute(&buf, data); err != nil {
return "", err
}
return buf.String(), nil
}
// generateScalar generates the HTML for Scalar
func generateScalar(config UIConfig) (string, error) {
tmpl := `<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>{{.Title}}</title>
{{if .FaviconURL}}<link rel="icon" type="image/png" href="{{.FaviconURL}}">{{end}}
{{if .SafeCustomCSS}}<style>{{.SafeCustomCSS}}</style>{{end}}
<style>
body { margin: 0; padding: 0; }
</style>
</head>
<body>
<script id="api-reference" data-url="{{.SpecURL}}" {{if eq .Theme "dark"}}data-theme="dark"{{end}}></script>
<script src="https://cdn.jsdelivr.net/npm/@scalar/api-reference"></script>
</body>
</html>`
t, err := template.New("scalar").Parse(tmpl)
if err != nil {
return "", err
}
data := templateData{
UIConfig: config,
SafeCustomCSS: template.CSS(config.CustomCSS),
}
var buf strings.Builder
if err := t.Execute(&buf, data); err != nil {
return "", err
}
return buf.String(), nil
}
// SetupUIRoute adds the OpenAPI UI route to a mux router
// This is a convenience function for the most common use case
func SetupUIRoute(router *mux.Router, path string, config UIConfig) {
router.Handle(path, UIHandler(config))
}

View File

@@ -0,0 +1,308 @@
package openapi
import (
"net/http"
"net/http/httptest"
"strings"
"testing"
"github.com/gorilla/mux"
)
func TestUIHandler_SwaggerUI(t *testing.T) {
config := UIConfig{
UIType: SwaggerUI,
SpecURL: "/openapi",
Title: "Test API Docs",
}
handler := UIHandler(config)
req := httptest.NewRequest("GET", "/docs", nil)
w := httptest.NewRecorder()
handler(w, req)
resp := w.Result()
if resp.StatusCode != http.StatusOK {
t.Errorf("Expected status 200, got %d", resp.StatusCode)
}
body := w.Body.String()
// Check for Swagger UI specific content
if !strings.Contains(body, "swagger-ui") {
t.Error("Expected Swagger UI content")
}
if !strings.Contains(body, "SwaggerUIBundle") {
t.Error("Expected SwaggerUIBundle script")
}
if !strings.Contains(body, config.Title) {
t.Errorf("Expected title '%s' in HTML", config.Title)
}
if !strings.Contains(body, config.SpecURL) {
t.Errorf("Expected spec URL '%s' in HTML", config.SpecURL)
}
if !strings.Contains(body, "swagger-ui-dist") {
t.Error("Expected Swagger UI CDN link")
}
}
func TestUIHandler_RapiDoc(t *testing.T) {
config := UIConfig{
UIType: RapiDoc,
SpecURL: "/api/spec",
Title: "RapiDoc Test",
}
handler := UIHandler(config)
req := httptest.NewRequest("GET", "/docs", nil)
w := httptest.NewRecorder()
handler(w, req)
resp := w.Result()
if resp.StatusCode != http.StatusOK {
t.Errorf("Expected status 200, got %d", resp.StatusCode)
}
body := w.Body.String()
// Check for RapiDoc specific content
if !strings.Contains(body, "rapi-doc") {
t.Error("Expected rapi-doc element")
}
if !strings.Contains(body, "rapidoc-min.js") {
t.Error("Expected RapiDoc script")
}
if !strings.Contains(body, config.Title) {
t.Errorf("Expected title '%s' in HTML", config.Title)
}
if !strings.Contains(body, config.SpecURL) {
t.Errorf("Expected spec URL '%s' in HTML", config.SpecURL)
}
}
func TestUIHandler_Redoc(t *testing.T) {
config := UIConfig{
UIType: Redoc,
SpecURL: "/spec.json",
Title: "Redoc Test",
}
handler := UIHandler(config)
req := httptest.NewRequest("GET", "/docs", nil)
w := httptest.NewRecorder()
handler(w, req)
resp := w.Result()
if resp.StatusCode != http.StatusOK {
t.Errorf("Expected status 200, got %d", resp.StatusCode)
}
body := w.Body.String()
// Check for Redoc specific content
if !strings.Contains(body, "<redoc") {
t.Error("Expected redoc element")
}
if !strings.Contains(body, "redoc.standalone.js") {
t.Error("Expected Redoc script")
}
if !strings.Contains(body, config.Title) {
t.Errorf("Expected title '%s' in HTML", config.Title)
}
if !strings.Contains(body, config.SpecURL) {
t.Errorf("Expected spec URL '%s' in HTML", config.SpecURL)
}
}
func TestUIHandler_Scalar(t *testing.T) {
config := UIConfig{
UIType: Scalar,
SpecURL: "/openapi.json",
Title: "Scalar Test",
}
handler := UIHandler(config)
req := httptest.NewRequest("GET", "/docs", nil)
w := httptest.NewRecorder()
handler(w, req)
resp := w.Result()
if resp.StatusCode != http.StatusOK {
t.Errorf("Expected status 200, got %d", resp.StatusCode)
}
body := w.Body.String()
// Check for Scalar specific content
if !strings.Contains(body, "api-reference") {
t.Error("Expected api-reference element")
}
if !strings.Contains(body, "@scalar/api-reference") {
t.Error("Expected Scalar script")
}
if !strings.Contains(body, config.Title) {
t.Errorf("Expected title '%s' in HTML", config.Title)
}
if !strings.Contains(body, config.SpecURL) {
t.Errorf("Expected spec URL '%s' in HTML", config.SpecURL)
}
}
func TestUIHandler_DefaultValues(t *testing.T) {
// Test with empty config to check defaults
config := UIConfig{}
handler := UIHandler(config)
req := httptest.NewRequest("GET", "/docs", nil)
w := httptest.NewRecorder()
handler(w, req)
resp := w.Result()
if resp.StatusCode != http.StatusOK {
t.Errorf("Expected status 200, got %d", resp.StatusCode)
}
body := w.Body.String()
// Should default to Swagger UI
if !strings.Contains(body, "swagger-ui") {
t.Error("Expected default to Swagger UI")
}
// Should default to /openapi spec URL
if !strings.Contains(body, "/openapi") {
t.Error("Expected default spec URL '/openapi'")
}
// Should default to "API Documentation" title
if !strings.Contains(body, "API Documentation") {
t.Error("Expected default title 'API Documentation'")
}
}
func TestUIHandler_CustomCSS(t *testing.T) {
customCSS := ".custom-class { color: red; }"
config := UIConfig{
UIType: SwaggerUI,
CustomCSS: customCSS,
}
handler := UIHandler(config)
req := httptest.NewRequest("GET", "/docs", nil)
w := httptest.NewRecorder()
handler(w, req)
body := w.Body.String()
if !strings.Contains(body, customCSS) {
t.Errorf("Expected custom CSS to be included. Body:\n%s", body)
}
}
func TestUIHandler_Favicon(t *testing.T) {
faviconURL := "https://example.com/favicon.ico"
config := UIConfig{
UIType: SwaggerUI,
FaviconURL: faviconURL,
}
handler := UIHandler(config)
req := httptest.NewRequest("GET", "/docs", nil)
w := httptest.NewRecorder()
handler(w, req)
body := w.Body.String()
if !strings.Contains(body, faviconURL) {
t.Error("Expected favicon URL to be included")
}
}
func TestUIHandler_DarkTheme(t *testing.T) {
config := UIConfig{
UIType: SwaggerUI,
Theme: "dark",
}
handler := UIHandler(config)
req := httptest.NewRequest("GET", "/docs", nil)
w := httptest.NewRecorder()
handler(w, req)
body := w.Body.String()
// SwaggerUI uses monokai theme for dark mode
if !strings.Contains(body, "monokai") {
t.Error("Expected dark theme configuration for Swagger UI")
}
}
func TestUIHandler_InvalidUIType(t *testing.T) {
config := UIConfig{
UIType: "invalid-ui-type",
}
handler := UIHandler(config)
req := httptest.NewRequest("GET", "/docs", nil)
w := httptest.NewRecorder()
handler(w, req)
resp := w.Result()
if resp.StatusCode != http.StatusBadRequest {
t.Errorf("Expected status 400 for invalid UI type, got %d", resp.StatusCode)
}
}
func TestUIHandler_ContentType(t *testing.T) {
config := UIConfig{
UIType: SwaggerUI,
}
handler := UIHandler(config)
req := httptest.NewRequest("GET", "/docs", nil)
w := httptest.NewRecorder()
handler(w, req)
contentType := w.Header().Get("Content-Type")
if !strings.Contains(contentType, "text/html") {
t.Errorf("Expected Content-Type to contain 'text/html', got '%s'", contentType)
}
if !strings.Contains(contentType, "charset=utf-8") {
t.Errorf("Expected Content-Type to contain 'charset=utf-8', got '%s'", contentType)
}
}
func TestSetupUIRoute(t *testing.T) {
router := mux.NewRouter()
config := UIConfig{
UIType: SwaggerUI,
}
SetupUIRoute(router, "/api-docs", config)
// Test that the route was added and works
req := httptest.NewRequest("GET", "/api-docs", nil)
w := httptest.NewRecorder()
router.ServeHTTP(w, req)
if w.Code != http.StatusOK {
t.Errorf("Expected status 200, got %d", w.Code)
}
// Verify it returns HTML
body := w.Body.String()
if !strings.Contains(body, "swagger-ui") {
t.Error("Expected Swagger UI content")
}
}

View File

@@ -6,6 +6,7 @@ import (
"reflect"
"strconv"
"strings"
"time"
"github.com/bitechdev/ResolveSpec/pkg/modelregistry"
)
@@ -1080,7 +1081,55 @@ func setFieldValue(field reflect.Value, value interface{}) error {
// Handle struct types (like SqlTimeStamp, SqlDate, SqlTime which wrap SqlNull[time.Time])
if field.Kind() == reflect.Struct {
// Try to find a "Val" field (for SqlNull types) and set it
// Handle datatypes.SqlNull[T] and wrapped types (SqlTimeStamp, SqlDate, SqlTime)
// Check if the type has a Scan method (sql.Scanner interface)
if field.CanAddr() {
scanMethod := field.Addr().MethodByName("Scan")
if scanMethod.IsValid() {
// Call the Scan method with the value
results := scanMethod.Call([]reflect.Value{reflect.ValueOf(value)})
if len(results) > 0 {
// Check if there was an error
if err, ok := results[0].Interface().(error); ok && err != nil {
return err
}
return nil
}
}
}
// Handle time.Time with ISO string fallback
if field.Type() == reflect.TypeOf(time.Time{}) {
switch v := value.(type) {
case time.Time:
field.Set(reflect.ValueOf(v))
return nil
case string:
// Try parsing as ISO 8601 / RFC3339
if t, err := time.Parse(time.RFC3339, v); err == nil {
field.Set(reflect.ValueOf(t))
return nil
}
// Try other common formats
formats := []string{
"2006-01-02T15:04:05.000-0700",
"2006-01-02T15:04:05.000",
"2006-01-02T15:04:05",
"2006-01-02 15:04:05",
"2006-01-02",
}
for _, format := range formats {
if t, err := time.Parse(format, v); err == nil {
field.Set(reflect.ValueOf(t))
return nil
}
}
return fmt.Errorf("cannot parse time string: %s", v)
}
}
// Fallback: Try to find a "Val" field (for SqlNull types) and set it directly
valField := field.FieldByName("Val")
if valField.IsValid() && valField.CanSet() {
// Also set Valid field to true
@@ -1095,6 +1144,7 @@ func setFieldValue(field reflect.Value, value interface{}) error {
return nil
}
}
}
// If we can convert the type, do it
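As an illustration of the new time fallback (a sketch, not code from the diff; parseFlexibleTime is a hypothetical helper), the parse order is RFC3339 first, then the listed layouts:
```go
// Hypothetical helper mirroring the fallback order added to setFieldValue above.
func parseFlexibleTime(v string) (time.Time, error) {
	if t, err := time.Parse(time.RFC3339, v); err == nil {
		return t, nil
	}
	for _, layout := range []string{
		"2006-01-02T15:04:05.000-0700",
		"2006-01-02T15:04:05.000",
		"2006-01-02T15:04:05",
		"2006-01-02 15:04:05",
		"2006-01-02",
	} {
		if t, err := time.Parse(layout, v); err == nil {
			return t, nil
		}
	}
	return time.Time{}, fmt.Errorf("cannot parse time string: %s", v)
}
```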

View File

@@ -4,15 +4,15 @@ import (
"testing"
"time"
"github.com/bitechdev/ResolveSpec/pkg/common"
"github.com/bitechdev/ResolveSpec/pkg/reflection"
"github.com/bitechdev/ResolveSpec/pkg/spectypes"
)
func TestMapToStruct_SqlJSONB_PreservesDriverValuer(t *testing.T) {
// Test that SqlJSONB type preserves driver.Valuer interface
type TestModel struct {
ID int64 `bun:"id,pk" json:"id"`
Meta common.SqlJSONB `bun:"meta" json:"meta"`
Meta spectypes.SqlJSONB `bun:"meta" json:"meta"`
}
dataMap := map[string]interface{}{
@@ -65,7 +65,7 @@ func TestMapToStruct_SqlJSONB_FromBytes(t *testing.T) {
// Test that SqlJSONB can be set from []byte directly
type TestModel struct {
ID int64 `bun:"id,pk" json:"id"`
Meta common.SqlJSONB `bun:"meta" json:"meta"`
Meta spectypes.SqlJSONB `bun:"meta" json:"meta"`
}
jsonBytes := []byte(`{"direct":"bytes"}`)
@@ -103,11 +103,11 @@ func TestMapToStruct_AllSqlTypes(t *testing.T) {
type TestModel struct {
ID int64 `bun:"id,pk" json:"id"`
Name string `bun:"name" json:"name"`
CreatedAt common.SqlTimeStamp `bun:"created_at" json:"created_at"`
BirthDate common.SqlDate `bun:"birth_date" json:"birth_date"`
LoginTime common.SqlTime `bun:"login_time" json:"login_time"`
Meta common.SqlJSONB `bun:"meta" json:"meta"`
Tags common.SqlJSONB `bun:"tags" json:"tags"`
CreatedAt spectypes.SqlTimeStamp `bun:"created_at" json:"created_at"`
BirthDate spectypes.SqlDate `bun:"birth_date" json:"birth_date"`
LoginTime spectypes.SqlTime `bun:"login_time" json:"login_time"`
Meta spectypes.SqlJSONB `bun:"meta" json:"meta"`
Tags spectypes.SqlJSONB `bun:"tags" json:"tags"`
}
now := time.Now()
@@ -225,8 +225,8 @@ func TestMapToStruct_SqlNull_NilValues(t *testing.T) {
// Test that SqlNull types handle nil values correctly
type TestModel struct {
ID int64 `bun:"id,pk" json:"id"`
UpdatedAt common.SqlTimeStamp `bun:"updated_at" json:"updated_at"`
DeletedAt common.SqlTimeStamp `bun:"deleted_at" json:"deleted_at"`
UpdatedAt spectypes.SqlTimeStamp `bun:"updated_at" json:"updated_at"`
DeletedAt spectypes.SqlTimeStamp `bun:"deleted_at" json:"deleted_at"`
}
now := time.Now()

View File

@@ -0,0 +1,118 @@
package resolvespec
import (
"context"
"crypto/sha256"
"encoding/hex"
"encoding/json"
"fmt"
"strings"
"time"
"github.com/bitechdev/ResolveSpec/pkg/cache"
"github.com/bitechdev/ResolveSpec/pkg/common"
)
// queryCacheKey represents the components used to build a cache key for query total count
type queryCacheKey struct {
TableName string `json:"table_name"`
Filters []common.FilterOption `json:"filters"`
Sort []common.SortOption `json:"sort"`
CustomSQLWhere string `json:"custom_sql_where,omitempty"`
CustomSQLOr string `json:"custom_sql_or,omitempty"`
CursorForward string `json:"cursor_forward,omitempty"`
CursorBackward string `json:"cursor_backward,omitempty"`
}
// cachedTotal represents a cached total count
type cachedTotal struct {
Total int `json:"total"`
}
// buildQueryCacheKey builds a cache key from query parameters for total count caching
func buildQueryCacheKey(tableName string, filters []common.FilterOption, sort []common.SortOption, customWhere, customOr string) string {
key := queryCacheKey{
TableName: tableName,
Filters: filters,
Sort: sort,
CustomSQLWhere: customWhere,
CustomSQLOr: customOr,
}
// Serialize to JSON for consistent hashing
jsonData, err := json.Marshal(key)
if err != nil {
// Fallback to simple string concatenation if JSON fails
return hashString(fmt.Sprintf("%s_%v_%v_%s_%s", tableName, filters, sort, customWhere, customOr))
}
return hashString(string(jsonData))
}
// buildExtendedQueryCacheKey builds a cache key for extended query options with cursor pagination
func buildExtendedQueryCacheKey(tableName string, filters []common.FilterOption, sort []common.SortOption,
customWhere, customOr string, cursorFwd, cursorBwd string) string {
key := queryCacheKey{
TableName: tableName,
Filters: filters,
Sort: sort,
CustomSQLWhere: customWhere,
CustomSQLOr: customOr,
CursorForward: cursorFwd,
CursorBackward: cursorBwd,
}
// Serialize to JSON for consistent hashing
jsonData, err := json.Marshal(key)
if err != nil {
// Fallback to simple string concatenation if JSON fails
return hashString(fmt.Sprintf("%s_%v_%v_%s_%s_%s_%s",
tableName, filters, sort, customWhere, customOr, cursorFwd, cursorBwd))
}
return hashString(string(jsonData))
}
// hashString computes SHA256 hash of a string
func hashString(s string) string {
h := sha256.New()
h.Write([]byte(s))
return hex.EncodeToString(h.Sum(nil))
}
// getQueryTotalCacheKey returns a formatted cache key for storing/retrieving total count
func getQueryTotalCacheKey(hash string) string {
return fmt.Sprintf("query_total:%s", hash)
}
// buildCacheTags creates cache tags from schema and table name
func buildCacheTags(schema, tableName string) []string {
return []string{
fmt.Sprintf("schema:%s", strings.ToLower(schema)),
fmt.Sprintf("table:%s", strings.ToLower(tableName)),
}
}
// setQueryTotalCache stores a query total in the cache with schema and table tags
func setQueryTotalCache(ctx context.Context, cacheKey string, total int, schema, tableName string, ttl time.Duration) error {
c := cache.GetDefaultCache()
cacheData := cachedTotal{Total: total}
tags := buildCacheTags(schema, tableName)
return c.SetWithTags(ctx, cacheKey, cacheData, ttl, tags)
}
// invalidateCacheForTags removes all cached items matching the specified tags
func invalidateCacheForTags(ctx context.Context, tags []string) error {
c := cache.GetDefaultCache()
// Invalidate for each tag
for _, tag := range tags {
if err := c.DeleteByTag(ctx, tag); err != nil {
return err
}
}
return nil
}
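A short sketch of how the helpers above fit together; the ctx, hash, and total values are placeholders, and the authoritative wiring is in the handler changes below:
```go
// Sketch only: cache a query total under schema/table tags, then invalidate on write.
key := getQueryTotalCacheKey(hash) // hash from buildQueryCacheKey(...)
_ = setQueryTotalCache(ctx, key, total, "public", "users", 2*time.Minute)

// After a create/update/delete on public.users, drop every total cached for it.
_ = invalidateCacheForTags(ctx, buildCacheTags("public", "users"))
```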

View File

@@ -2,6 +2,7 @@ package resolvespec
import (
"context"
"database/sql"
"encoding/json"
"fmt"
"net/http"
@@ -330,19 +331,17 @@ func (h *Handler) handleRead(ctx context.Context, w common.ResponseWriter, id st
// Use extended cache key if cursors are present
var cacheKeyHash string
if len(options.CursorForward) > 0 || len(options.CursorBackward) > 0 {
cacheKeyHash = cache.BuildExtendedQueryCacheKey(
cacheKeyHash = buildExtendedQueryCacheKey(
tableName,
options.Filters,
options.Sort,
"", // No custom SQL WHERE in resolvespec
"", // No custom SQL OR in resolvespec
nil, // No expand options in resolvespec
false, // distinct not used here
"", // No custom SQL WHERE in resolvespec
"", // No custom SQL OR in resolvespec
options.CursorForward,
options.CursorBackward,
)
} else {
cacheKeyHash = cache.BuildQueryCacheKey(
cacheKeyHash = buildQueryCacheKey(
tableName,
options.Filters,
options.Sort,
@@ -350,10 +349,10 @@ func (h *Handler) handleRead(ctx context.Context, w common.ResponseWriter, id st
"", // No custom SQL OR in resolvespec
)
}
cacheKey := cache.GetQueryTotalCacheKey(cacheKeyHash)
cacheKey := getQueryTotalCacheKey(cacheKeyHash)
// Try to retrieve from cache
var cachedTotal cache.CachedTotal
var cachedTotal cachedTotal
err := cache.GetDefaultCache().Get(ctx, cacheKey, &cachedTotal)
if err == nil {
total = cachedTotal.Total
@@ -370,10 +369,9 @@ func (h *Handler) handleRead(ctx context.Context, w common.ResponseWriter, id st
total = count
logger.Debug("Total records (from query): %d", total)
// Store in cache
// Store in cache with schema and table tags
cacheTTL := time.Minute * 2 // Default 2 minutes TTL
cacheData := cache.CachedTotal{Total: total}
if err := cache.GetDefaultCache().Set(ctx, cacheKey, cacheData, cacheTTL); err != nil {
if err := setQueryTotalCache(ctx, cacheKey, total, schema, tableName, cacheTTL); err != nil {
logger.Warn("Failed to cache query total: %v", err)
// Don't fail the request if caching fails
} else {
@@ -463,6 +461,11 @@ func (h *Handler) handleCreate(ctx context.Context, w common.ResponseWriter, dat
return
}
logger.Info("Successfully created record with nested data, ID: %v", result.ID)
// Invalidate cache for this table
cacheTags := buildCacheTags(schema, tableName)
if err := invalidateCacheForTags(ctx, cacheTags); err != nil {
logger.Warn("Failed to invalidate cache for table %s: %v", tableName, err)
}
h.sendResponse(w, result.Data, nil)
return
}
@@ -479,6 +482,11 @@ func (h *Handler) handleCreate(ctx context.Context, w common.ResponseWriter, dat
return
}
logger.Info("Successfully created record, rows affected: %d", result.RowsAffected())
// Invalidate cache for this table
cacheTags := buildCacheTags(schema, tableName)
if err := invalidateCacheForTags(ctx, cacheTags); err != nil {
logger.Warn("Failed to invalidate cache for table %s: %v", tableName, err)
}
h.sendResponse(w, v, nil)
case []map[string]interface{}:
@@ -517,6 +525,11 @@ func (h *Handler) handleCreate(ctx context.Context, w common.ResponseWriter, dat
return
}
logger.Info("Successfully created %d records with nested data", len(results))
// Invalidate cache for this table
cacheTags := buildCacheTags(schema, tableName)
if err := invalidateCacheForTags(ctx, cacheTags); err != nil {
logger.Warn("Failed to invalidate cache for table %s: %v", tableName, err)
}
h.sendResponse(w, results, nil)
return
}
@@ -540,6 +553,11 @@ func (h *Handler) handleCreate(ctx context.Context, w common.ResponseWriter, dat
return
}
logger.Info("Successfully created %d records", len(v))
// Invalidate cache for this table
cacheTags := buildCacheTags(schema, tableName)
if err := invalidateCacheForTags(ctx, cacheTags); err != nil {
logger.Warn("Failed to invalidate cache for table %s: %v", tableName, err)
}
h.sendResponse(w, v, nil)
case []interface{}:
@@ -583,6 +601,11 @@ func (h *Handler) handleCreate(ctx context.Context, w common.ResponseWriter, dat
return
}
logger.Info("Successfully created %d records with nested data", len(results))
// Invalidate cache for this table
cacheTags := buildCacheTags(schema, tableName)
if err := invalidateCacheForTags(ctx, cacheTags); err != nil {
logger.Warn("Failed to invalidate cache for table %s: %v", tableName, err)
}
h.sendResponse(w, results, nil)
return
}
@@ -610,6 +633,11 @@ func (h *Handler) handleCreate(ctx context.Context, w common.ResponseWriter, dat
return
}
logger.Info("Successfully created %d records", len(v))
// Invalidate cache for this table
cacheTags := buildCacheTags(schema, tableName)
if err := invalidateCacheForTags(ctx, cacheTags); err != nil {
logger.Warn("Failed to invalidate cache for table %s: %v", tableName, err)
}
h.sendResponse(w, list, nil)
default:
@@ -660,6 +688,11 @@ func (h *Handler) handleUpdate(ctx context.Context, w common.ResponseWriter, url
return
}
logger.Info("Successfully updated record with nested data, rows: %d", result.AffectedRows)
// Invalidate cache for this table
cacheTags := buildCacheTags(schema, tableName)
if err := invalidateCacheForTags(ctx, cacheTags); err != nil {
logger.Warn("Failed to invalidate cache for table %s: %v", tableName, err)
}
h.sendResponse(w, result.Data, nil)
return
}
@@ -696,6 +729,11 @@ func (h *Handler) handleUpdate(ctx context.Context, w common.ResponseWriter, url
}
logger.Info("Successfully updated %d records", result.RowsAffected())
// Invalidate cache for this table
cacheTags := buildCacheTags(schema, tableName)
if err := invalidateCacheForTags(ctx, cacheTags); err != nil {
logger.Warn("Failed to invalidate cache for table %s: %v", tableName, err)
}
h.sendResponse(w, data, nil)
case []map[string]interface{}:
@@ -734,6 +772,11 @@ func (h *Handler) handleUpdate(ctx context.Context, w common.ResponseWriter, url
return
}
logger.Info("Successfully updated %d records with nested data", len(results))
// Invalidate cache for this table
cacheTags := buildCacheTags(schema, tableName)
if err := invalidateCacheForTags(ctx, cacheTags); err != nil {
logger.Warn("Failed to invalidate cache for table %s: %v", tableName, err)
}
h.sendResponse(w, results, nil)
return
}
@@ -757,6 +800,11 @@ func (h *Handler) handleUpdate(ctx context.Context, w common.ResponseWriter, url
return
}
logger.Info("Successfully updated %d records", len(updates))
// Invalidate cache for this table
cacheTags := buildCacheTags(schema, tableName)
if err := invalidateCacheForTags(ctx, cacheTags); err != nil {
logger.Warn("Failed to invalidate cache for table %s: %v", tableName, err)
}
h.sendResponse(w, updates, nil)
case []interface{}:
@@ -799,6 +847,11 @@ func (h *Handler) handleUpdate(ctx context.Context, w common.ResponseWriter, url
return
}
logger.Info("Successfully updated %d records with nested data", len(results))
// Invalidate cache for this table
cacheTags := buildCacheTags(schema, tableName)
if err := invalidateCacheForTags(ctx, cacheTags); err != nil {
logger.Warn("Failed to invalidate cache for table %s: %v", tableName, err)
}
h.sendResponse(w, results, nil)
return
}
@@ -826,6 +879,11 @@ func (h *Handler) handleUpdate(ctx context.Context, w common.ResponseWriter, url
return
}
logger.Info("Successfully updated %d records", len(list))
// Invalidate cache for this table
cacheTags := buildCacheTags(schema, tableName)
if err := invalidateCacheForTags(ctx, cacheTags); err != nil {
logger.Warn("Failed to invalidate cache for table %s: %v", tableName, err)
}
h.sendResponse(w, list, nil)
default:
@@ -872,6 +930,11 @@ func (h *Handler) handleDelete(ctx context.Context, w common.ResponseWriter, id
return
}
logger.Info("Successfully deleted %d records", len(v))
// Invalidate cache for this table
cacheTags := buildCacheTags(schema, tableName)
if err := invalidateCacheForTags(ctx, cacheTags); err != nil {
logger.Warn("Failed to invalidate cache for table %s: %v", tableName, err)
}
h.sendResponse(w, map[string]interface{}{"deleted": len(v)}, nil)
return
@@ -913,6 +976,11 @@ func (h *Handler) handleDelete(ctx context.Context, w common.ResponseWriter, id
return
}
logger.Info("Successfully deleted %d records", deletedCount)
// Invalidate cache for this table
cacheTags := buildCacheTags(schema, tableName)
if err := invalidateCacheForTags(ctx, cacheTags); err != nil {
logger.Warn("Failed to invalidate cache for table %s: %v", tableName, err)
}
h.sendResponse(w, map[string]interface{}{"deleted": deletedCount}, nil)
return
@@ -939,6 +1007,11 @@ func (h *Handler) handleDelete(ctx context.Context, w common.ResponseWriter, id
return
}
logger.Info("Successfully deleted %d records", deletedCount)
// Invalidate cache for this table
cacheTags := buildCacheTags(schema, tableName)
if err := invalidateCacheForTags(ctx, cacheTags); err != nil {
logger.Warn("Failed to invalidate cache for table %s: %v", tableName, err)
}
h.sendResponse(w, map[string]interface{}{"deleted": deletedCount}, nil)
return
@@ -957,7 +1030,29 @@ func (h *Handler) handleDelete(ctx context.Context, w common.ResponseWriter, id
return
}
query := h.db.NewDelete().Table(tableName).Where(fmt.Sprintf("%s = ?", common.QuoteIdent(reflection.GetPrimaryKeyName(model))), id)
// Get primary key name
pkName := reflection.GetPrimaryKeyName(model)
// First, fetch the record that will be deleted
modelType := reflect.TypeOf(model)
if modelType.Kind() == reflect.Ptr {
modelType = modelType.Elem()
}
recordToDelete := reflect.New(modelType).Interface()
selectQuery := h.db.NewSelect().Model(recordToDelete).Where(fmt.Sprintf("%s = ?", common.QuoteIdent(pkName)), id)
if err := selectQuery.ScanModel(ctx); err != nil {
if err == sql.ErrNoRows {
logger.Warn("Record not found for delete: %s = %s", pkName, id)
h.sendError(w, http.StatusNotFound, "not_found", "Record not found", err)
return
}
logger.Error("Error fetching record for delete: %v", err)
h.sendError(w, http.StatusInternalServerError, "fetch_error", "Error fetching record", err)
return
}
query := h.db.NewDelete().Table(tableName).Where(fmt.Sprintf("%s = ?", common.QuoteIdent(pkName)), id)
result, err := query.Exec(ctx)
if err != nil {
@@ -966,14 +1061,21 @@ func (h *Handler) handleDelete(ctx context.Context, w common.ResponseWriter, id
return
}
// Check if the record was actually deleted
if result.RowsAffected() == 0 {
logger.Warn("No record found to delete with ID: %s", id)
h.sendError(w, http.StatusNotFound, "not_found", "Record not found", nil)
logger.Warn("No rows deleted for ID: %s", id)
h.sendError(w, http.StatusNotFound, "not_found", "Record not found or already deleted", nil)
return
}
logger.Info("Successfully deleted record with ID: %s", id)
h.sendResponse(w, nil, nil)
// Return the deleted record data
// Invalidate cache for this table
cacheTags := buildCacheTags(schema, tableName)
if err := invalidateCacheForTags(ctx, cacheTags); err != nil {
logger.Warn("Failed to invalidate cache for table %s: %v", tableName, err)
}
h.sendResponse(w, recordToDelete, nil)
}
func (h *Handler) applyFilter(query common.SelectQuery, filter common.FilterOption) common.SelectQuery {

View File

@@ -1,4 +1,4 @@
package cache
package restheadspec
import (
"context"
@@ -7,56 +7,42 @@ import (
"encoding/json"
"fmt"
"strings"
"time"
"github.com/bitechdev/ResolveSpec/pkg/cache"
"github.com/bitechdev/ResolveSpec/pkg/common"
)
// QueryCacheKey represents the components used to build a cache key for query total count
type QueryCacheKey struct {
// expandOptionKey represents expand options for cache key
type expandOptionKey struct {
Relation string `json:"relation"`
Where string `json:"where,omitempty"`
}
// queryCacheKey represents the components used to build a cache key for query total count
type queryCacheKey struct {
TableName string `json:"table_name"`
Filters []common.FilterOption `json:"filters"`
Sort []common.SortOption `json:"sort"`
CustomSQLWhere string `json:"custom_sql_where,omitempty"`
CustomSQLOr string `json:"custom_sql_or,omitempty"`
Expand []ExpandOptionKey `json:"expand,omitempty"`
Expand []expandOptionKey `json:"expand,omitempty"`
Distinct bool `json:"distinct,omitempty"`
CursorForward string `json:"cursor_forward,omitempty"`
CursorBackward string `json:"cursor_backward,omitempty"`
}
// ExpandOptionKey represents expand options for cache key
type ExpandOptionKey struct {
Relation string `json:"relation"`
Where string `json:"where,omitempty"`
// cachedTotal represents a cached total count
type cachedTotal struct {
Total int `json:"total"`
}
// BuildQueryCacheKey builds a cache key from query parameters for total count caching
// This is used to cache the total count of records matching a query
func BuildQueryCacheKey(tableName string, filters []common.FilterOption, sort []common.SortOption, customWhere, customOr string) string {
key := QueryCacheKey{
TableName: tableName,
Filters: filters,
Sort: sort,
CustomSQLWhere: customWhere,
CustomSQLOr: customOr,
}
// Serialize to JSON for consistent hashing
jsonData, err := json.Marshal(key)
if err != nil {
// Fallback to simple string concatenation if JSON fails
return hashString(fmt.Sprintf("%s_%v_%v_%s_%s", tableName, filters, sort, customWhere, customOr))
}
return hashString(string(jsonData))
}
// BuildExtendedQueryCacheKey builds a cache key for extended query options (restheadspec)
// buildExtendedQueryCacheKey builds a cache key for extended query options (restheadspec)
// Includes expand, distinct, and cursor pagination options
func BuildExtendedQueryCacheKey(tableName string, filters []common.FilterOption, sort []common.SortOption,
func buildExtendedQueryCacheKey(tableName string, filters []common.FilterOption, sort []common.SortOption,
customWhere, customOr string, expandOpts []interface{}, distinct bool, cursorFwd, cursorBwd string) string {
key := QueryCacheKey{
key := queryCacheKey{
TableName: tableName,
Filters: filters,
Sort: sort,
@@ -69,11 +55,11 @@ func BuildExtendedQueryCacheKey(tableName string, filters []common.FilterOption,
// Convert expand options to cache key format
if len(expandOpts) > 0 {
key.Expand = make([]ExpandOptionKey, 0, len(expandOpts))
key.Expand = make([]expandOptionKey, 0, len(expandOpts))
for _, exp := range expandOpts {
// Type assert to get the expand option fields we care about for caching
if expMap, ok := exp.(map[string]interface{}); ok {
expKey := ExpandOptionKey{}
expKey := expandOptionKey{}
if rel, ok := expMap["relation"].(string); ok {
expKey.Relation = rel
}
@@ -83,7 +69,6 @@ func BuildExtendedQueryCacheKey(tableName string, filters []common.FilterOption,
key.Expand = append(key.Expand, expKey)
}
}
// Sort expand options for consistent hashing (already sorted by relation name above)
}
// Serialize to JSON for consistent hashing
@@ -104,24 +89,38 @@ func hashString(s string) string {
return hex.EncodeToString(h.Sum(nil))
}
// GetQueryTotalCacheKey returns a formatted cache key for storing/retrieving total count
func GetQueryTotalCacheKey(hash string) string {
// getQueryTotalCacheKey returns a formatted cache key for storing/retrieving total count
func getQueryTotalCacheKey(hash string) string {
return fmt.Sprintf("query_total:%s", hash)
}
// CachedTotal represents a cached total count
type CachedTotal struct {
Total int `json:"total"`
// buildCacheTags creates cache tags from schema and table name
func buildCacheTags(schema, tableName string) []string {
return []string{
fmt.Sprintf("schema:%s", strings.ToLower(schema)),
fmt.Sprintf("table:%s", strings.ToLower(tableName)),
}
}
// InvalidateCacheForTable removes all cached totals for a specific table
// This should be called when data in the table changes (insert/update/delete)
func InvalidateCacheForTable(ctx context.Context, tableName string) error {
cache := GetDefaultCache()
// setQueryTotalCache stores a query total in the cache with schema and table tags
func setQueryTotalCache(ctx context.Context, cacheKey string, total int, schema, tableName string, ttl time.Duration) error {
c := cache.GetDefaultCache()
cacheData := cachedTotal{Total: total}
tags := buildCacheTags(schema, tableName)
// Build a pattern to match all query totals for this table
// Note: This requires pattern matching support in the provider
pattern := fmt.Sprintf("query_total:*%s*", strings.ToLower(tableName))
return cache.DeleteByPattern(ctx, pattern)
return c.SetWithTags(ctx, cacheKey, cacheData, ttl, tags)
}
// invalidateCacheForTags removes all cached items matching the specified tags
func invalidateCacheForTags(ctx context.Context, tags []string) error {
c := cache.GetDefaultCache()
// Invalidate for each tag
for _, tag := range tags {
if err := c.DeleteByTag(ctx, tag); err != nil {
return err
}
}
return nil
}
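For orientation, here is a minimal sketch of how these helpers are meant to compose: the read path caches the freshly counted total under a key derived from the full query shape, and the write path drops everything tagged with the same schema/table pair. Only the helper names come from this diff; the wrapping function and the 2-minute TTL are illustrative assumptions.
```go
// Sketch only (not part of the diff): assumes package restheadspec and the
// unexported helpers defined above; the wrapping function is hypothetical.
func cacheTotalThenInvalidate(ctx context.Context, schema, tableName string,
	filters []common.FilterOption, sort []common.SortOption, total int) {
	// Read path: key the total by the full query shape, tag it by schema/table.
	hash := buildExtendedQueryCacheKey(tableName, filters, sort, "", "", nil, false, "", "")
	key := getQueryTotalCacheKey(hash)
	if err := setQueryTotalCache(ctx, key, total, schema, tableName, 2*time.Minute); err != nil {
		logger.Warn("Failed to cache query total: %v", err)
	}

	// Write path: after an insert/update/delete, drop every cached total
	// carrying this table's tags.
	if err := invalidateCacheForTags(ctx, buildCacheTags(schema, tableName)); err != nil {
		logger.Warn("Failed to invalidate cache for table %s: %v", tableName, err)
	}
}
```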

View File

@@ -2,6 +2,7 @@ package restheadspec
import (
"context"
"database/sql"
"encoding/json"
"fmt"
"net/http"
@@ -481,8 +482,10 @@ func (h *Handler) handleRead(ctx context.Context, w common.ResponseWriter, id st
// Apply custom SQL WHERE clause (AND condition)
if options.CustomSQLWhere != "" {
logger.Debug("Applying custom SQL WHERE: %s", options.CustomSQLWhere)
// Sanitize and allow preload table prefixes since custom SQL may reference multiple tables
sanitizedWhere := common.SanitizeWhereClause(options.CustomSQLWhere, reflection.ExtractTableNameOnly(tableName), &options.RequestOptions)
// First add table prefixes to unqualified columns (but skip columns inside function calls)
prefixedWhere := common.AddTablePrefixToColumns(options.CustomSQLWhere, reflection.ExtractTableNameOnly(tableName))
// Then sanitize and allow preload table prefixes since custom SQL may reference multiple tables
sanitizedWhere := common.SanitizeWhereClause(prefixedWhere, reflection.ExtractTableNameOnly(tableName), &options.RequestOptions)
if sanitizedWhere != "" {
query = query.Where(sanitizedWhere)
}
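For illustration, a hedged sketch of what the two-step rewrite is meant to do to a custom WHERE clause. The input and expected output shape are assumptions based on the comments above (AddTablePrefixToColumns is described as skipping columns inside function calls); only the function names and signatures come from this diff.
```go
// Hypothetical input on a "users" table; the output shape is an assumption.
raw := "status = 'active' AND lower(email) LIKE '%@corp.com'"
prefixed := common.AddTablePrefixToColumns(raw, "users")
// Expected shape: unqualified columns gain the table prefix, while columns
// inside function calls are left untouched, e.g.
//   users.status = 'active' AND lower(email) LIKE '%@corp.com'
sanitized := common.SanitizeWhereClause(prefixed, "users", &options.RequestOptions)
if sanitized != "" {
	query = query.Where(sanitized)
}
```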
@@ -491,8 +494,9 @@ func (h *Handler) handleRead(ctx context.Context, w common.ResponseWriter, id st
// Apply custom SQL WHERE clause (OR condition)
if options.CustomSQLOr != "" {
logger.Debug("Applying custom SQL OR: %s", options.CustomSQLOr)
customOr := common.AddTablePrefixToColumns(options.CustomSQLOr, reflection.ExtractTableNameOnly(tableName))
// Sanitize and allow preload table prefixes since custom SQL may reference multiple tables
sanitizedOr := common.SanitizeWhereClause(options.CustomSQLOr, reflection.ExtractTableNameOnly(tableName), &options.RequestOptions)
sanitizedOr := common.SanitizeWhereClause(customOr, reflection.ExtractTableNameOnly(tableName), &options.RequestOptions)
if sanitizedOr != "" {
query = query.WhereOr(sanitizedOr)
}
@@ -528,7 +532,7 @@ func (h *Handler) handleRead(ctx context.Context, w common.ResponseWriter, id st
var total int
if !options.SkipCount {
// Try to get from cache first (unless SkipCache is true)
var cachedTotal *cache.CachedTotal
var cachedTotalData *cachedTotal
var cacheKey string
if !options.SkipCache {
@@ -542,7 +546,7 @@ func (h *Handler) handleRead(ctx context.Context, w common.ResponseWriter, id st
}
}
cacheKeyHash := cache.BuildExtendedQueryCacheKey(
cacheKeyHash := buildExtendedQueryCacheKey(
tableName,
options.Filters,
options.Sort,
@@ -553,22 +557,22 @@ func (h *Handler) handleRead(ctx context.Context, w common.ResponseWriter, id st
options.CursorForward,
options.CursorBackward,
)
cacheKey = cache.GetQueryTotalCacheKey(cacheKeyHash)
cacheKey = getQueryTotalCacheKey(cacheKeyHash)
// Try to retrieve from cache
cachedTotal = &cache.CachedTotal{}
err := cache.GetDefaultCache().Get(ctx, cacheKey, cachedTotal)
cachedTotalData = &cachedTotal{}
err := cache.GetDefaultCache().Get(ctx, cacheKey, cachedTotalData)
if err == nil {
total = cachedTotal.Total
total = cachedTotalData.Total
logger.Debug("Total records (from cache): %d", total)
} else {
logger.Debug("Cache miss for query total")
cachedTotal = nil
cachedTotalData = nil
}
}
// If not in cache or cache skip, execute count query
if cachedTotal == nil {
if cachedTotalData == nil {
count, err := query.Count(ctx)
if err != nil {
logger.Error("Error counting records: %v", err)
@@ -578,11 +582,10 @@ func (h *Handler) handleRead(ctx context.Context, w common.ResponseWriter, id st
total = count
logger.Debug("Total records (from query): %d", total)
// Store in cache (if caching is enabled)
// Store in cache with schema and table tags (if caching is enabled)
if !options.SkipCache && cacheKey != "" {
cacheTTL := time.Minute * 2 // Default 2 minutes TTL
cacheData := &cache.CachedTotal{Total: total}
if err := cache.GetDefaultCache().Set(ctx, cacheKey, cacheData, cacheTTL); err != nil {
if err := setQueryTotalCache(ctx, cacheKey, total, schema, tableName, cacheTTL); err != nil {
logger.Warn("Failed to cache query total: %v", err)
// Don't fail the request if caching fails
} else {
@@ -661,6 +664,14 @@ func (h *Handler) handleRead(ctx context.Context, w common.ResponseWriter, id st
return
}
// Check if a specific ID was requested but no record was found
resultCount := reflection.Len(modelPtr)
if id != "" && resultCount == 0 {
logger.Warn("Record not found for ID: %s", id)
h.sendError(w, http.StatusNotFound, "not_found", "Record not found", nil)
return
}
limit := 0
if options.Limit != nil {
limit = *options.Limit
@@ -675,7 +686,7 @@ func (h *Handler) handleRead(ctx context.Context, w common.ResponseWriter, id st
metadata := &common.Metadata{
Total: int64(total),
Count: int64(reflection.Len(modelPtr)),
Count: int64(resultCount),
Filtered: int64(total),
Limit: limit,
Offset: offset,
@@ -850,7 +861,10 @@ func (h *Handler) applyPreloadWithRecursion(query common.SelectQuery, preload co
if len(preload.Where) > 0 {
// Build RequestOptions with all preloads to allow references to sibling relations
preloadOpts := &common.RequestOptions{Preload: allPreloads}
sanitizedWhere := common.SanitizeWhereClause(preload.Where, reflection.ExtractTableNameOnly(preload.Relation), preloadOpts)
// First add table prefixes to unqualified columns
prefixedWhere := common.AddTablePrefixToColumns(preload.Where, reflection.ExtractTableNameOnly(preload.Relation))
// Then sanitize and allow preload table prefixes
sanitizedWhere := common.SanitizeWhereClause(prefixedWhere, reflection.ExtractTableNameOnly(preload.Relation), preloadOpts)
if len(sanitizedWhere) > 0 {
sq = sq.Where(sanitizedWhere)
}
@@ -1140,6 +1154,11 @@ func (h *Handler) handleCreate(ctx context.Context, w common.ResponseWriter, dat
}
logger.Info("Successfully created %d record(s)", len(mergedResults))
// Invalidate cache for this table
cacheTags := buildCacheTags(schema, tableName)
if err := invalidateCacheForTags(ctx, cacheTags); err != nil {
logger.Warn("Failed to invalidate cache for table %s: %v", tableName, err)
}
h.sendResponseWithOptions(w, responseData, nil, &options)
}
@@ -1247,7 +1266,7 @@ func (h *Handler) handleUpdate(ctx context.Context, w common.ResponseWriter, id
}
// Create update query using Model() to preserve custom types and driver.Valuer interfaces
query := tx.NewUpdate().Model(modelInstance).Table(tableName)
query := tx.NewUpdate().Model(modelInstance)
query = query.Where(fmt.Sprintf("%s = ?", common.QuoteIdent(pkName)), targetID)
// Execute BeforeScan hooks - pass query chain so hooks can modify it
@@ -1311,6 +1330,11 @@ func (h *Handler) handleUpdate(ctx context.Context, w common.ResponseWriter, id
}
logger.Info("Successfully updated record with ID: %v", targetID)
// Invalidate cache for this table
cacheTags := buildCacheTags(schema, tableName)
if err := invalidateCacheForTags(ctx, cacheTags); err != nil {
logger.Warn("Failed to invalidate cache for table %s: %v", tableName, err)
}
h.sendResponseWithOptions(w, mergedData, nil, &options)
}
@@ -1379,6 +1403,11 @@ func (h *Handler) handleDelete(ctx context.Context, w common.ResponseWriter, id
return
}
logger.Info("Successfully deleted %d records", deletedCount)
// Invalidate cache for this table
cacheTags := buildCacheTags(schema, tableName)
if err := invalidateCacheForTags(ctx, cacheTags); err != nil {
logger.Warn("Failed to invalidate cache for table %s: %v", tableName, err)
}
h.sendResponse(w, map[string]interface{}{"deleted": deletedCount}, nil)
return
@@ -1447,6 +1476,11 @@ func (h *Handler) handleDelete(ctx context.Context, w common.ResponseWriter, id
return
}
logger.Info("Successfully deleted %d records", deletedCount)
// Invalidate cache for this table
cacheTags := buildCacheTags(schema, tableName)
if err := invalidateCacheForTags(ctx, cacheTags); err != nil {
logger.Warn("Failed to invalidate cache for table %s: %v", tableName, err)
}
h.sendResponse(w, map[string]interface{}{"deleted": deletedCount}, nil)
return
@@ -1501,6 +1535,11 @@ func (h *Handler) handleDelete(ctx context.Context, w common.ResponseWriter, id
return
}
logger.Info("Successfully deleted %d records", deletedCount)
// Invalidate cache for this table
cacheTags := buildCacheTags(schema, tableName)
if err := invalidateCacheForTags(ctx, cacheTags); err != nil {
logger.Warn("Failed to invalidate cache for table %s: %v", tableName, err)
}
h.sendResponse(w, map[string]interface{}{"deleted": deletedCount}, nil)
return
@@ -1514,7 +1553,34 @@ func (h *Handler) handleDelete(ctx context.Context, w common.ResponseWriter, id
}
// Single delete with URL ID
// Execute BeforeDelete hooks
if id == "" {
h.sendError(w, http.StatusBadRequest, "missing_id", "ID is required for delete", nil)
return
}
// Get primary key name
pkName := reflection.GetPrimaryKeyName(model)
// First, fetch the record that will be deleted
modelType := reflect.TypeOf(model)
if modelType.Kind() == reflect.Ptr {
modelType = modelType.Elem()
}
recordToDelete := reflect.New(modelType).Interface()
selectQuery := h.db.NewSelect().Model(recordToDelete).Where(fmt.Sprintf("%s = ?", common.QuoteIdent(pkName)), id)
if err := selectQuery.ScanModel(ctx); err != nil {
if err == sql.ErrNoRows {
logger.Warn("Record not found for delete: %s = %s", pkName, id)
h.sendError(w, http.StatusNotFound, "not_found", "Record not found", err)
return
}
logger.Error("Error fetching record for delete: %v", err)
h.sendError(w, http.StatusInternalServerError, "fetch_error", "Error fetching record", err)
return
}
// Execute BeforeDelete hooks with the record data
hookCtx := &HookContext{
Context: ctx,
Handler: h,
@@ -1525,6 +1591,7 @@ func (h *Handler) handleDelete(ctx context.Context, w common.ResponseWriter, id
ID: id,
Writer: w,
Tx: h.db,
Data: recordToDelete,
}
if err := h.hooks.Execute(BeforeDelete, hookCtx); err != nil {
@@ -1534,13 +1601,7 @@ func (h *Handler) handleDelete(ctx context.Context, w common.ResponseWriter, id
}
query := h.db.NewDelete().Table(tableName)
if id == "" {
h.sendError(w, http.StatusBadRequest, "missing_id", "ID is required for delete", nil)
return
}
query = query.Where(fmt.Sprintf("%s = ?", common.QuoteIdent(reflection.GetPrimaryKeyName(model))), id)
query = query.Where(fmt.Sprintf("%s = ?", common.QuoteIdent(pkName)), id)
// Execute BeforeScan hooks - pass query chain so hooks can modify it
hookCtx.Query = query
@@ -1562,11 +1623,15 @@ func (h *Handler) handleDelete(ctx context.Context, w common.ResponseWriter, id
return
}
// Execute AfterDelete hooks
responseData := map[string]interface{}{
"deleted": result.RowsAffected(),
// Check if the record was actually deleted
if result.RowsAffected() == 0 {
logger.Warn("No rows deleted for ID: %s", id)
h.sendError(w, http.StatusNotFound, "not_found", "Record not found or already deleted", nil)
return
}
hookCtx.Result = responseData
// Execute AfterDelete hooks with the deleted record data
hookCtx.Result = recordToDelete
hookCtx.Error = nil
if err := h.hooks.Execute(AfterDelete, hookCtx); err != nil {
@@ -1575,7 +1640,13 @@ func (h *Handler) handleDelete(ctx context.Context, w common.ResponseWriter, id
return
}
h.sendResponse(w, responseData, nil)
// Return the deleted record data
// Invalidate cache for this table
cacheTags := buildCacheTags(schema, tableName)
if err := invalidateCacheForTags(ctx, cacheTags); err != nil {
logger.Warn("Failed to invalidate cache for table %s: %v", tableName, err)
}
h.sendResponse(w, recordToDelete, nil)
}
// mergeRecordWithRequest merges a database record with the original request data
@@ -2071,14 +2142,20 @@ func (h *Handler) sendResponse(w common.ResponseWriter, data interface{}, metada
// sendResponseWithOptions sends a response with optional formatting
func (h *Handler) sendResponseWithOptions(w common.ResponseWriter, data interface{}, metadata *common.Metadata, options *ExtendedRequestOptions) {
w.SetHeader("Content-Type", "application/json")
if data == nil {
data = map[string]interface{}{}
w.WriteHeader(http.StatusPartialContent)
} else {
w.WriteHeader(http.StatusOK)
}
// Normalize single-record arrays to objects if requested
if options != nil && options.SingleRecordAsObject {
data = h.normalizeResultArray(data)
}
// Return data as-is without wrapping in common.Response
w.SetHeader("Content-Type", "application/json")
w.WriteHeader(http.StatusOK)
if err := w.WriteJSON(data); err != nil {
logger.Error("Failed to write JSON response: %v", err)
}
@@ -2088,7 +2165,7 @@ func (h *Handler) sendResponseWithOptions(w common.ResponseWriter, data interfac
// Returns the single element if data is a slice/array with exactly one element, otherwise returns data unchanged
func (h *Handler) normalizeResultArray(data interface{}) interface{} {
if data == nil {
return nil
return map[string]interface{}{}
}
// Use reflection to check if data is a slice or array
@@ -2097,18 +2174,41 @@ func (h *Handler) normalizeResultArray(data interface{}) interface{} {
dataValue = dataValue.Elem()
}
// Check if it's a slice or array with exactly one element
if (dataValue.Kind() == reflect.Slice || dataValue.Kind() == reflect.Array) && dataValue.Len() == 1 {
// Return the single element
return dataValue.Index(0).Interface()
// Check if it's a slice or array
if dataValue.Kind() == reflect.Slice || dataValue.Kind() == reflect.Array {
if dataValue.Len() == 1 {
// Return the single element
return dataValue.Index(0).Interface()
} else if dataValue.Len() == 0 {
// Return empty object instead of empty array
return map[string]interface{}{}
}
}
if dataValue.Kind() == reflect.String {
str := dataValue.String()
if str == "" || str == "null" {
return map[string]interface{}{}
}
}
return data
}
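A few illustrative cases for the branches above; the caller and the element type are hypothetical, the expected results follow directly from the code shown.
```go
// Hypothetical caller; the User type is illustrative only.
type User struct{ ID int }

_ = h.normalizeResultArray([]User{{ID: 1}}) // one-element slice -> that element
_ = h.normalizeResultArray([]User{})        // empty slice       -> map[string]interface{}{}
_ = h.normalizeResultArray(nil)             // nil               -> map[string]interface{}{}
_ = h.normalizeResultArray("null")          // "null" string     -> map[string]interface{}{}
```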
// sendFormattedResponse sends response with formatting options
func (h *Handler) sendFormattedResponse(w common.ResponseWriter, data interface{}, metadata *common.Metadata, options ExtendedRequestOptions) {
// Normalize single-record arrays to objects if requested
httpStatus := http.StatusOK
if data == nil {
data = map[string]interface{}{}
httpStatus = http.StatusPartialContent
} else {
dataLen := reflection.Len(data)
if dataLen == 0 {
httpStatus = http.StatusPartialContent
}
}
if options.SingleRecordAsObject {
data = h.normalizeResultArray(data)
}
@@ -2127,7 +2227,7 @@ func (h *Handler) sendFormattedResponse(w common.ResponseWriter, data interface{
switch options.ResponseFormat {
case "simple":
// Simple format: just return the data array
w.WriteHeader(http.StatusOK)
w.WriteHeader(httpStatus)
if err := w.WriteJSON(data); err != nil {
logger.Error("Failed to write JSON response: %v", err)
}
@@ -2139,7 +2239,7 @@ func (h *Handler) sendFormattedResponse(w common.ResponseWriter, data interface{
if metadata != nil {
response["count"] = metadata.Total
}
w.WriteHeader(http.StatusOK)
w.WriteHeader(httpStatus)
if err := w.WriteJSON(response); err != nil {
logger.Error("Failed to write JSON response: %v", err)
}
@@ -2150,7 +2250,7 @@ func (h *Handler) sendFormattedResponse(w common.ResponseWriter, data interface{
Data: data,
Metadata: metadata,
}
w.WriteHeader(http.StatusOK)
w.WriteHeader(httpStatus)
if err := w.WriteJSON(response); err != nil {
logger.Error("Failed to write JSON response: %v", err)
}

View File

@@ -935,7 +935,16 @@ func (h *Handler) addXFilesPreload(xfile *XFiles, options *ExtendedRequestOption
// Add WHERE clause if SQL conditions specified
whereConditions := make([]string, 0)
if len(xfile.SqlAnd) > 0 {
whereConditions = append(whereConditions, xfile.SqlAnd...)
// Process each SQL condition: add table prefixes and sanitize
for _, sqlCond := range xfile.SqlAnd {
// First add table prefixes to unqualified columns
prefixedCond := common.AddTablePrefixToColumns(sqlCond, xfile.TableName)
// Then sanitize the condition
sanitizedCond := common.SanitizeWhereClause(prefixedCond, xfile.TableName)
if sanitizedCond != "" {
whereConditions = append(whereConditions, sanitizedCond)
}
}
}
if len(whereConditions) > 0 {
preloadOpt.Where = strings.Join(whereConditions, " AND ")
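A hedged illustration of the result: each SqlAnd entry is prefixed and sanitized individually before being joined with " AND ". The table name and conditions are hypothetical; the field names come from this diff.
```go
// Hypothetical xfile with two SQL conditions on table "documents".
xfile := &XFiles{
	TableName: "documents",
	SqlAnd:    []string{"archived = false", "mimetype = 'application/pdf'"},
}
// After the loop above, assuming both conditions survive sanitization,
// preloadOpt.Where would have the shape:
//   documents.archived = false AND documents.mimetype = 'application/pdf'
_ = xfile
```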

View File

@@ -1,3 +1,4 @@
//go:build integration
// +build integration
package restheadspec
@@ -21,12 +22,12 @@ import (
// Test models
type TestUser struct {
ID uint `gorm:"primaryKey" json:"id"`
Name string `gorm:"not null" json:"name"`
Email string `gorm:"uniqueIndex;not null" json:"email"`
Age int `json:"age"`
Active bool `gorm:"default:true" json:"active"`
CreatedAt time.Time `json:"created_at"`
ID uint `gorm:"primaryKey" json:"id"`
Name string `gorm:"not null" json:"name"`
Email string `gorm:"uniqueIndex;not null" json:"email"`
Age int `json:"age"`
Active bool `gorm:"default:true" json:"active"`
CreatedAt time.Time `json:"created_at"`
Posts []TestPost `gorm:"foreignKey:UserID" json:"posts,omitempty"`
}
@@ -35,13 +36,13 @@ func (TestUser) TableName() string {
}
type TestPost struct {
ID uint `gorm:"primaryKey" json:"id"`
UserID uint `gorm:"not null" json:"user_id"`
Title string `gorm:"not null" json:"title"`
Content string `json:"content"`
Published bool `gorm:"default:false" json:"published"`
CreatedAt time.Time `json:"created_at"`
User *TestUser `gorm:"foreignKey:UserID" json:"user,omitempty"`
ID uint `gorm:"primaryKey" json:"id"`
UserID uint `gorm:"not null" json:"user_id"`
Title string `gorm:"not null" json:"title"`
Content string `json:"content"`
Published bool `gorm:"default:false" json:"published"`
CreatedAt time.Time `json:"created_at"`
User *TestUser `gorm:"foreignKey:UserID" json:"user,omitempty"`
Comments []TestComment `gorm:"foreignKey:PostID" json:"comments,omitempty"`
}
@@ -54,7 +55,7 @@ type TestComment struct {
PostID uint `gorm:"not null" json:"post_id"`
Content string `gorm:"not null" json:"content"`
CreatedAt time.Time `json:"created_at"`
Post *TestPost `gorm:"foreignKey:PostID" json:"post,omitempty"`
Post *TestPost `gorm:"foreignKey:PostID" json:"post,omitempty"`
}
func (TestComment) TableName() string {
@@ -401,7 +402,7 @@ func TestIntegration_GetMetadata(t *testing.T) {
muxRouter.ServeHTTP(w, req)
if w.Code != http.StatusOK {
if !(w.Code == http.StatusOK || w.Code == http.StatusPartialContent) {
t.Errorf("Expected status 200, got %d. Body: %s", w.Code, w.Body.String())
}
@@ -492,7 +493,7 @@ func TestIntegration_QueryParamsOverHeaders(t *testing.T) {
muxRouter.ServeHTTP(w, req)
if w.Code != http.StatusOK {
if !(w.Code == http.StatusOK || w.Code == http.StatusPartialContent) {
t.Errorf("Expected status 200, got %d", w.Code)
}

View File

@@ -296,7 +296,7 @@ func setColSecValue(fieldsrc reflect.Value, colsec ColumnSecurity, fieldTypeName
}
func (m *SecurityList) ApplyColumnSecurity(records reflect.Value, modelType reflect.Type, pUserID int, pSchema, pTablename string) (reflect.Value, error) {
defer logger.CatchPanic("ApplyColumnSecurity")
defer logger.CatchPanic("ApplyColumnSecurity")()
if m.ColumnSecurity == nil {
return records, fmt.Errorf("security not initialized")
@@ -437,7 +437,7 @@ func (m *SecurityList) LoadRowSecurity(ctx context.Context, pUserID int, pSchema
}
func (m *SecurityList) GetRowSecurityTemplate(pUserID int, pSchema, pTablename string) (RowSecurity, error) {
defer logger.CatchPanic("GetRowSecurityTemplate")
defer logger.CatchPanic("GetRowSecurityTemplate")()
if m.RowSecurity == nil {
return RowSecurity{}, fmt.Errorf("security not initialized")

View File

@@ -1,233 +1,314 @@
# Server Package
Graceful HTTP server with request draining and shutdown coordination.
Production-ready HTTP server manager with graceful shutdown, request draining, and comprehensive TLS/HTTPS support.
## Features
**Multiple Server Management** - Run multiple HTTP/HTTPS servers concurrently
**Graceful Shutdown** - Handles SIGINT/SIGTERM with request draining
**Automatic Request Rejection** - New requests get 503 during shutdown
**Health & Readiness Endpoints** - Kubernetes-ready health checks
**Shutdown Callbacks** - Register cleanup functions (DB, cache, metrics)
**Comprehensive TLS Support**:
- Certificate files (production)
- Self-signed certificates (development/testing)
- Let's Encrypt / AutoTLS (automatic certificate management)
**GZIP Compression** - Optional response compression
**Panic Recovery** - Automatic panic recovery middleware
**Configurable Timeouts** - Read, write, idle, drain, and shutdown timeouts
## Quick Start
### Single Server
```go
import "github.com/bitechdev/ResolveSpec/pkg/server"
// Create server
srv := server.NewGracefulServer(server.Config{
Addr: ":8080",
Handler: router,
// Create server manager
mgr := server.NewManager()
// Add server
_, err := mgr.Add(server.Config{
Name: "api-server",
Host: "localhost",
Port: 8080,
Handler: myRouter,
GZIP: true,
})
// Start server (blocks until shutdown signal)
if err := srv.ListenAndServe(); err != nil {
// Start and wait for shutdown signal
if err := mgr.ServeWithGracefulShutdown(); err != nil {
log.Fatal(err)
}
```
## Features
### Multiple Servers
✅ Graceful shutdown on SIGINT/SIGTERM
✅ Request draining (waits for in-flight requests)
✅ Automatic request rejection during shutdown
✅ Health and readiness endpoints
✅ Shutdown callbacks for cleanup
✅ Configurable timeouts
```go
mgr := server.NewManager()
// Public API
mgr.Add(server.Config{
Name: "public-api",
Port: 8080,
Handler: publicRouter,
})
// Admin API
mgr.Add(server.Config{
Name: "admin-api",
Port: 8081,
Handler: adminRouter,
})
// Start all and wait
mgr.ServeWithGracefulShutdown()
```
## HTTPS/TLS Configuration
### Option 1: Certificate Files (Production)
```go
mgr.Add(server.Config{
Name: "https-server",
Host: "0.0.0.0",
Port: 443,
Handler: handler,
SSLCert: "/etc/ssl/certs/server.crt",
SSLKey: "/etc/ssl/private/server.key",
})
```
### Option 2: Self-Signed Certificate (Development)
```go
mgr.Add(server.Config{
Name: "dev-server",
Host: "localhost",
Port: 8443,
Handler: handler,
SelfSignedSSL: true, // Auto-generates certificate
})
```
### Option 3: Let's Encrypt / AutoTLS (Production)
```go
mgr.Add(server.Config{
Name: "prod-server",
Host: "0.0.0.0",
Port: 443,
Handler: handler,
AutoTLS: true,
AutoTLSDomains: []string{"example.com", "www.example.com"},
AutoTLSEmail: "admin@example.com",
AutoTLSCacheDir: "./certs-cache", // Certificate cache directory
})
```
## Configuration
```go
config := server.Config{
// Server address
Addr: ":8080",
server.Config{
// Basic configuration
Name: "my-server", // Server name (required)
Host: "0.0.0.0", // Bind address
Port: 8080, // Port (required)
Handler: myRouter, // HTTP handler (required)
Description: "My API server", // Optional description
// HTTP handler
Handler: myRouter,
// Features
GZIP: true, // Enable GZIP compression
// Maximum time for graceful shutdown (default: 30s)
ShutdownTimeout: 30 * time.Second,
// TLS/HTTPS (choose one option)
SSLCert: "/path/to/cert.pem", // Certificate file
SSLKey: "/path/to/key.pem", // Key file
SelfSignedSSL: false, // Auto-generate self-signed cert
AutoTLS: false, // Let's Encrypt
AutoTLSDomains: []string{}, // Domains for AutoTLS
AutoTLSEmail: "", // Email for Let's Encrypt
AutoTLSCacheDir: "./certs-cache", // Cert cache directory
// Time to wait for in-flight requests (default: 25s)
DrainTimeout: 25 * time.Second,
// Request read timeout (default: 10s)
ReadTimeout: 10 * time.Second,
// Response write timeout (default: 10s)
WriteTimeout: 10 * time.Second,
// Idle connection timeout (default: 120s)
IdleTimeout: 120 * time.Second,
// Timeouts
ShutdownTimeout: 30 * time.Second, // Max shutdown time
DrainTimeout: 25 * time.Second, // Request drain timeout
ReadTimeout: 15 * time.Second, // Request read timeout
WriteTimeout: 15 * time.Second, // Response write timeout
IdleTimeout: 60 * time.Second, // Idle connection timeout
}
srv := server.NewGracefulServer(config)
```
## Shutdown Behavior
## Graceful Shutdown
**Signal received (SIGINT/SIGTERM):**
### Automatic (Recommended)
1. **Mark as shutting down** - New requests get 503
2. **Drain requests** - Wait up to `DrainTimeout` for in-flight requests
3. **Shutdown server** - Close listeners and connections
4. **Execute callbacks** - Run registered cleanup functions
```go
mgr := server.NewManager()
// Add servers...
// This blocks until SIGINT/SIGTERM
mgr.ServeWithGracefulShutdown()
```
Time Event
─────────────────────────────────────────
0s Signal received: SIGTERM
├─ Mark as shutting down
├─ Reject new requests (503)
└─ Start draining...
1s In-flight: 50 requests
2s In-flight: 32 requests
3s In-flight: 12 requests
4s In-flight: 3 requests
5s In-flight: 0 requests ✓
└─ All requests drained
### Manual Control
5s Execute shutdown callbacks
6s Shutdown complete
```go
mgr := server.NewManager()
// Add and start servers
mgr.StartAll()
// Later... stop gracefully
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()
if err := mgr.StopAllWithContext(ctx); err != nil {
log.Printf("Shutdown error: %v", err)
}
```
### Shutdown Callbacks
Register cleanup functions to run during shutdown:
```go
// Close database
mgr.RegisterShutdownCallback(func(ctx context.Context) error {
log.Println("Closing database...")
return db.Close()
})
// Flush metrics
mgr.RegisterShutdownCallback(func(ctx context.Context) error {
log.Println("Flushing metrics...")
return metrics.Flush(ctx)
})
// Close cache
mgr.RegisterShutdownCallback(func(ctx context.Context) error {
log.Println("Closing cache...")
return cache.Close()
})
```
## Health Checks
### Health Endpoint
Returns 200 when healthy, 503 when shutting down:
### Adding Health Endpoints
```go
router.HandleFunc("/health", srv.HealthCheckHandler())
instance, _ := mgr.Add(server.Config{
Name: "api-server",
Port: 8080,
Handler: router,
})
// Add health endpoints to your router
router.HandleFunc("/health", instance.HealthCheckHandler())
router.HandleFunc("/ready", instance.ReadinessHandler())
```
**Response (healthy):**
### Health Endpoint
Returns server health status:
**Healthy (200 OK):**
```json
{"status":"healthy"}
```
**Response (shutting down):**
**Shutting Down (503 Service Unavailable):**
```json
{"status":"shutting_down"}
```
### Readiness Endpoint
Includes in-flight request count:
Returns readiness with in-flight request count:
```go
router.HandleFunc("/ready", srv.ReadinessHandler())
```
**Response:**
**Ready (200 OK):**
```json
{"ready":true,"in_flight_requests":12}
```
**During shutdown:**
**Not Ready (503 Service Unavailable):**
```json
{"ready":false,"reason":"shutting_down"}
```
## Shutdown Callbacks
## Shutdown Behavior
Register cleanup functions to run during shutdown:
When a shutdown signal (SIGINT/SIGTERM) is received:
```go
// Close database
server.RegisterShutdownCallback(func(ctx context.Context) error {
logger.Info("Closing database connection...")
return db.Close()
})
1. **Mark as shutting down** → New requests get 503
2. **Execute callbacks** → Run cleanup functions
3. **Drain requests** → Wait up to `DrainTimeout` for in-flight requests
4. **Shutdown servers** → Close listeners and connections
// Flush metrics
server.RegisterShutdownCallback(func(ctx context.Context) error {
logger.Info("Flushing metrics...")
return metricsProvider.Flush(ctx)
})
```
Time Event
─────────────────────────────────────────
0s Signal received: SIGTERM
├─ Mark servers as shutting down
├─ Reject new requests (503)
└─ Execute shutdown callbacks
// Close cache
server.RegisterShutdownCallback(func(ctx context.Context) error {
logger.Info("Closing cache...")
return cache.Close()
})
1s Callbacks complete
└─ Start draining requests...
2s In-flight: 50 requests
3s In-flight: 32 requests
4s In-flight: 12 requests
5s In-flight: 3 requests
6s In-flight: 0 requests ✓
└─ All requests drained
6s Shutdown servers
7s All servers stopped ✓
```
## Complete Example
## Server Management
### Get Server Instance
```go
package main
import (
"context"
"log"
"net/http"
"time"
"github.com/bitechdev/ResolveSpec/pkg/middleware"
"github.com/bitechdev/ResolveSpec/pkg/metrics"
"github.com/bitechdev/ResolveSpec/pkg/server"
"github.com/gorilla/mux"
)
func main() {
// Initialize metrics
metricsProvider := metrics.NewPrometheusProvider()
metrics.SetProvider(metricsProvider)
// Create router
router := mux.NewRouter()
// Apply middleware
rateLimiter := middleware.NewRateLimiter(100, 20)
sizeLimiter := middleware.NewRequestSizeLimiter(middleware.Size10MB)
sanitizer := middleware.DefaultSanitizer()
router.Use(rateLimiter.Middleware)
router.Use(sizeLimiter.Middleware)
router.Use(sanitizer.Middleware)
router.Use(metricsProvider.Middleware)
// API routes
router.HandleFunc("/api/data", dataHandler)
// Create graceful server
srv := server.NewGracefulServer(server.Config{
Addr: ":8080",
Handler: router,
ShutdownTimeout: 30 * time.Second,
DrainTimeout: 25 * time.Second,
})
// Health checks
router.HandleFunc("/health", srv.HealthCheckHandler())
router.HandleFunc("/ready", srv.ReadinessHandler())
// Metrics endpoint
router.Handle("/metrics", metricsProvider.Handler())
// Register shutdown callbacks
server.RegisterShutdownCallback(func(ctx context.Context) error {
log.Println("Cleanup: Flushing metrics...")
return nil
})
server.RegisterShutdownCallback(func(ctx context.Context) error {
log.Println("Cleanup: Closing database...")
// return db.Close()
return nil
})
// Start server (blocks until shutdown)
log.Printf("Starting server on :8080")
if err := srv.ListenAndServe(); err != nil {
log.Fatal(err)
}
// Wait for shutdown to complete
srv.Wait()
log.Println("Server stopped")
instance, err := mgr.Get("api-server")
if err != nil {
log.Fatal(err)
}
func dataHandler(w http.ResponseWriter, r *http.Request) {
// Your handler logic
time.Sleep(100 * time.Millisecond) // Simulate work
w.WriteHeader(http.StatusOK)
w.Write([]byte(`{"message":"success"}`))
// Check status
fmt.Printf("Address: %s\n", instance.Addr())
fmt.Printf("Name: %s\n", instance.Name())
fmt.Printf("In-flight: %d\n", instance.InFlightRequests())
fmt.Printf("Shutting down: %v\n", instance.IsShuttingDown())
```
### List All Servers
```go
instances := mgr.List()
for _, instance := range instances {
fmt.Printf("Server: %s at %s\n", instance.Name(), instance.Addr())
}
```
### Remove Server
```go
// Stop and remove a server
if err := mgr.Remove("api-server"); err != nil {
log.Printf("Error removing server: %v", err)
}
```
### Restart All Servers
```go
// Gracefully restart all servers
if err := mgr.RestartAll(); err != nil {
log.Printf("Error restarting: %v", err)
}
```
@@ -250,23 +331,21 @@ spec:
ports:
- containerPort: 8080
# Liveness probe - is app running?
# Liveness probe
livenessProbe:
httpGet:
path: /health
port: 8080
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 5
# Readiness probe - can app handle traffic?
# Readiness probe
readinessProbe:
httpGet:
path: /ready
port: 8080
initialDelaySeconds: 5
periodSeconds: 5
timeoutSeconds: 3
# Graceful shutdown
lifecycle:
@@ -274,26 +353,12 @@ spec:
exec:
command: ["/bin/sh", "-c", "sleep 5"]
# Environment
env:
- name: SHUTDOWN_TIMEOUT
value: "30"
```
### Service
```yaml
apiVersion: v1
kind: Service
metadata:
name: myapp
spec:
selector:
app: myapp
ports:
- port: 80
targetPort: 8080
type: LoadBalancer
# Allow time for graceful shutdown
terminationGracePeriodSeconds: 35
```
## Docker Compose
@@ -312,8 +377,70 @@ services:
interval: 10s
timeout: 5s
retries: 3
start_period: 10s
stop_grace_period: 35s # Slightly longer than shutdown timeout
stop_grace_period: 35s
```
## Complete Example
```go
package main
import (
"context"
"log"
"net/http"
"time"
"github.com/bitechdev/ResolveSpec/pkg/server"
)
func main() {
// Create server manager
mgr := server.NewManager()
// Register shutdown callbacks
mgr.RegisterShutdownCallback(func(ctx context.Context) error {
log.Println("Cleanup: Closing database...")
// return db.Close()
return nil
})
// Create router
router := http.NewServeMux()
router.HandleFunc("/api/data", dataHandler)
// Add server
instance, err := mgr.Add(server.Config{
Name: "api-server",
Host: "0.0.0.0",
Port: 8080,
Handler: router,
GZIP: true,
ShutdownTimeout: 30 * time.Second,
DrainTimeout: 25 * time.Second,
})
if err != nil {
log.Fatal(err)
}
// Add health endpoints
router.HandleFunc("/health", instance.HealthCheckHandler())
router.HandleFunc("/ready", instance.ReadinessHandler())
// Start and wait for shutdown
log.Println("Starting server on :8080")
if err := mgr.ServeWithGracefulShutdown(); err != nil {
log.Printf("Server stopped: %v", err)
}
log.Println("Server shutdown complete")
}
func dataHandler(w http.ResponseWriter, r *http.Request) {
time.Sleep(100 * time.Millisecond) // Simulate work
w.WriteHeader(http.StatusOK)
w.Write([]byte(`{"message":"success"}`))
}
```
## Testing Graceful Shutdown
@@ -330,7 +457,7 @@ SERVER_PID=$!
# Wait for server to start
sleep 2
# Send some requests
# Send requests
for i in {1..10}; do
curl http://localhost:8080/api/data &
done
@@ -341,7 +468,7 @@ sleep 1
# Send shutdown signal
kill -TERM $SERVER_PID
# Try to send more requests (should get 503)
# Try more requests (should get 503)
curl -v http://localhost:8080/api/data
# Wait for server to stop
@@ -349,101 +476,13 @@ wait $SERVER_PID
echo "Server stopped gracefully"
```
### Expected Output
```
Starting server on :8080
Received signal: terminated, initiating graceful shutdown
Starting graceful shutdown...
Waiting for 8 in-flight requests to complete...
Waiting for 4 in-flight requests to complete...
Waiting for 1 in-flight requests to complete...
All requests drained in 2.3s
Cleanup: Flushing metrics...
Cleanup: Closing database...
Shutting down HTTP server...
Graceful shutdown complete
Server stopped
```
## Monitoring In-Flight Requests
```go
// Get current in-flight count
count := srv.InFlightRequests()
fmt.Printf("In-flight requests: %d\n", count)
// Check if shutting down
if srv.IsShuttingDown() {
fmt.Println("Server is shutting down")
}
```
## Advanced Usage
### Custom Shutdown Logic
```go
// Implement custom shutdown
go func() {
sigChan := make(chan os.Signal, 1)
signal.Notify(sigChan, os.Interrupt, syscall.SIGTERM)
<-sigChan
log.Println("Shutdown signal received")
// Custom pre-shutdown logic
log.Println("Running custom cleanup...")
// Shutdown with callbacks
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()
if err := srv.ShutdownWithCallbacks(ctx); err != nil {
log.Printf("Shutdown error: %v", err)
}
}()
// Start server
srv.server.ListenAndServe()
```
### Multiple Servers
```go
// HTTP server
httpSrv := server.NewGracefulServer(server.Config{
Addr: ":8080",
Handler: httpRouter,
})
// HTTPS server
httpsSrv := server.NewGracefulServer(server.Config{
Addr: ":8443",
Handler: httpsRouter,
})
// Start both
go httpSrv.ListenAndServe()
go httpsSrv.ListenAndServe()
// Shutdown both on signal
sigChan := make(chan os.Signal, 1)
signal.Notify(sigChan, os.Interrupt)
<-sigChan
ctx := context.Background()
httpSrv.Shutdown(ctx)
httpsSrv.Shutdown(ctx)
```
## Best Practices
1. **Set appropriate timeouts**
- `DrainTimeout` < `ShutdownTimeout`
- `ShutdownTimeout` < Kubernetes `terminationGracePeriodSeconds`
2. **Register cleanup callbacks** for:
2. **Use shutdown callbacks** for:
- Database connections
- Message queues
- Metrics flushing
@@ -458,7 +497,12 @@ httpsSrv.Shutdown(ctx)
- Set `preStop` hook in Kubernetes (5-10s delay)
- Allows load balancer to deregister before shutdown
5. **Monitoring**
5. **HTTPS in production**
- Use AutoTLS for public-facing services
- Use certificate files for enterprise PKI
- Use self-signed only for development/testing
6. **Monitoring**
- Track in-flight requests in metrics
- Alert on slow drains
- Monitor shutdown duration
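A hedged sketch of the timeout ordering from item 1; the values and the `router` variable are illustrative, the Config fields come from this package.
```go
// Illustrative only: DrainTimeout < ShutdownTimeout < terminationGracePeriodSeconds.
cfg := server.Config{
	Name:            "api-server",
	Port:            8080,
	Handler:         router,           // hypothetical router
	DrainTimeout:    25 * time.Second, // wait for in-flight requests first
	ShutdownTimeout: 30 * time.Second, // hard cap on the whole shutdown
}
// Kubernetes side (Deployment spec): terminationGracePeriodSeconds: 35
_ = cfg
```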
@@ -470,24 +514,63 @@ httpsSrv.Shutdown(ctx)
```go
// Increase drain timeout
config.DrainTimeout = 60 * time.Second
config.ShutdownTimeout = 65 * time.Second
```
### Requests Still Timing Out
### Requests Timing Out
```go
// Increase write timeout
config.WriteTimeout = 30 * time.Second
```
### Force Shutdown Not Working
The server will force shutdown after `ShutdownTimeout` even if requests are still in-flight. Adjust timeouts as needed.
### Debugging Shutdown
### Certificate Issues
```go
// Verify certificate files exist and are readable
if _, err := os.Stat(config.SSLCert); err != nil {
log.Fatalf("Certificate not found: %v", err)
}
// For AutoTLS, ensure:
// - Port 443 is accessible
// - Domains resolve to server IP
// - Cache directory is writable
```
### Debug Logging
```go
// Enable debug logging
import "github.com/bitechdev/ResolveSpec/pkg/logger"
// Enable debug logging
logger.SetLevel("debug")
```
## API Reference
### Manager Methods
- `NewManager()` - Create new server manager
- `Add(cfg Config)` - Register server instance
- `Get(name string)` - Get server by name
- `Remove(name string)` - Stop and remove server
- `StartAll()` - Start all registered servers
- `StopAll()` - Stop all servers gracefully
- `StopAllWithContext(ctx)` - Stop with timeout
- `RestartAll()` - Restart all servers
- `List()` - Get all server instances
- `ServeWithGracefulShutdown()` - Start and block until shutdown
- `RegisterShutdownCallback(cb)` - Register cleanup function
### Instance Methods
- `Start()` - Start the server
- `Stop(ctx)` - Stop gracefully
- `Addr()` - Get server address
- `Name()` - Get server name
- `HealthCheckHandler()` - Get health handler
- `ReadinessHandler()` - Get readiness handler
- `InFlightRequests()` - Get in-flight count
- `IsShuttingDown()` - Check shutdown status
- `Wait()` - Block until shutdown complete

294
pkg/server/example_test.go Normal file
View File

@@ -0,0 +1,294 @@
package server_test
import (
"context"
"fmt"
"net/http"
"time"
"github.com/bitechdev/ResolveSpec/pkg/server"
)
// ExampleManager_basic demonstrates basic server manager usage
func ExampleManager_basic() {
// Create a server manager
mgr := server.NewManager()
// Define a simple handler
handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusOK)
fmt.Fprintln(w, "Hello from server!")
})
// Add an HTTP server
_, err := mgr.Add(server.Config{
Name: "api-server",
Host: "localhost",
Port: 8080,
Handler: handler,
GZIP: true, // Enable GZIP compression
})
if err != nil {
panic(err)
}
// Start all servers
if err := mgr.StartAll(); err != nil {
panic(err)
}
// Server is now running...
// When done, stop gracefully
if err := mgr.StopAll(); err != nil {
panic(err)
}
}
// ExampleManager_https demonstrates HTTPS configurations
func ExampleManager_https() {
mgr := server.NewManager()
handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
fmt.Fprintln(w, "Secure connection!")
})
// Option 1: Use certificate files
_, err := mgr.Add(server.Config{
Name: "https-server-files",
Host: "localhost",
Port: 8443,
Handler: handler,
SSLCert: "/path/to/cert.pem",
SSLKey: "/path/to/key.pem",
})
if err != nil {
panic(err)
}
// Option 2: Self-signed certificate (for development)
_, err = mgr.Add(server.Config{
Name: "https-server-self-signed",
Host: "localhost",
Port: 8444,
Handler: handler,
SelfSignedSSL: true,
})
if err != nil {
panic(err)
}
// Option 3: Let's Encrypt / AutoTLS (for production)
_, err = mgr.Add(server.Config{
Name: "https-server-letsencrypt",
Host: "0.0.0.0",
Port: 443,
Handler: handler,
AutoTLS: true,
AutoTLSDomains: []string{"example.com", "www.example.com"},
AutoTLSEmail: "admin@example.com",
AutoTLSCacheDir: "./certs-cache",
})
if err != nil {
panic(err)
}
// Start all servers
if err := mgr.StartAll(); err != nil {
panic(err)
}
// Cleanup
mgr.StopAll()
}
// ExampleManager_gracefulShutdown demonstrates graceful shutdown with callbacks
func ExampleManager_gracefulShutdown() {
mgr := server.NewManager()
// Register shutdown callbacks for cleanup tasks
mgr.RegisterShutdownCallback(func(ctx context.Context) error {
fmt.Println("Closing database connections...")
// Close your database here
return nil
})
mgr.RegisterShutdownCallback(func(ctx context.Context) error {
fmt.Println("Flushing metrics...")
// Flush metrics here
return nil
})
// Add server with custom timeouts
handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
// Simulate some work
time.Sleep(100 * time.Millisecond)
fmt.Fprintln(w, "Done!")
})
_, err := mgr.Add(server.Config{
Name: "api-server",
Host: "localhost",
Port: 8080,
Handler: handler,
ShutdownTimeout: 30 * time.Second, // Max time for shutdown
DrainTimeout: 25 * time.Second, // Time to wait for in-flight requests
ReadTimeout: 10 * time.Second,
WriteTimeout: 10 * time.Second,
IdleTimeout: 120 * time.Second,
})
if err != nil {
panic(err)
}
// Start servers and block until shutdown signal (SIGINT/SIGTERM)
// This will automatically handle graceful shutdown with callbacks
if err := mgr.ServeWithGracefulShutdown(); err != nil {
fmt.Printf("Shutdown completed: %v\n", err)
}
}
// ExampleManager_healthChecks demonstrates health and readiness endpoints
func ExampleManager_healthChecks() {
mgr := server.NewManager()
// Create a router with health endpoints
mux := http.NewServeMux()
mux.HandleFunc("/api/data", func(w http.ResponseWriter, r *http.Request) {
fmt.Fprintln(w, "Data endpoint")
})
// Add server
instance, err := mgr.Add(server.Config{
Name: "api-server",
Host: "localhost",
Port: 8080,
Handler: mux,
})
if err != nil {
panic(err)
}
// Add health and readiness endpoints
mux.HandleFunc("/health", instance.HealthCheckHandler())
mux.HandleFunc("/ready", instance.ReadinessHandler())
// Start the server
if err := mgr.StartAll(); err != nil {
panic(err)
}
// Health check returns:
// - 200 OK with {"status":"healthy"} when healthy
// - 503 Service Unavailable with {"status":"shutting_down"} when shutting down
// Readiness check returns:
// - 200 OK with {"ready":true,"in_flight_requests":N} when ready
// - 503 Service Unavailable with {"ready":false,"reason":"shutting_down"} when shutting down
// Cleanup
mgr.StopAll()
}
// ExampleManager_multipleServers demonstrates running multiple servers
func ExampleManager_multipleServers() {
mgr := server.NewManager()
// Public API server
publicHandler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
fmt.Fprintln(w, "Public API")
})
_, err := mgr.Add(server.Config{
Name: "public-api",
Host: "0.0.0.0",
Port: 8080,
Handler: publicHandler,
GZIP: true,
})
if err != nil {
panic(err)
}
// Admin API server (different port)
adminHandler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
fmt.Fprintln(w, "Admin API")
})
_, err = mgr.Add(server.Config{
Name: "admin-api",
Host: "localhost",
Port: 8081,
Handler: adminHandler,
})
if err != nil {
panic(err)
}
// Metrics server (internal only)
metricsHandler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
fmt.Fprintln(w, "Metrics data")
})
_, err = mgr.Add(server.Config{
Name: "metrics",
Host: "127.0.0.1",
Port: 9090,
Handler: metricsHandler,
})
if err != nil {
panic(err)
}
// Start all servers at once
if err := mgr.StartAll(); err != nil {
panic(err)
}
// Get specific server instance
publicInstance, err := mgr.Get("public-api")
if err != nil {
panic(err)
}
fmt.Printf("Public API running on: %s\n", publicInstance.Addr())
// List all servers
instances := mgr.List()
fmt.Printf("Running %d servers\n", len(instances))
// Stop all servers gracefully (in parallel)
if err := mgr.StopAll(); err != nil {
panic(err)
}
}
// ExampleManager_monitoring demonstrates monitoring server state
func ExampleManager_monitoring() {
mgr := server.NewManager()
handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
time.Sleep(50 * time.Millisecond) // Simulate work
fmt.Fprintln(w, "Done")
})
instance, err := mgr.Add(server.Config{
Name: "api-server",
Host: "localhost",
Port: 8080,
Handler: handler,
})
if err != nil {
panic(err)
}
if err := mgr.StartAll(); err != nil {
panic(err)
}
// Check server status
fmt.Printf("Server address: %s\n", instance.Addr())
fmt.Printf("Server name: %s\n", instance.Name())
fmt.Printf("Is shutting down: %v\n", instance.IsShuttingDown())
fmt.Printf("In-flight requests: %d\n", instance.InFlightRequests())
// Cleanup
mgr.StopAll()
// Wait for complete shutdown
instance.Wait()
}

137
pkg/server/interfaces.go Normal file
View File

@@ -0,0 +1,137 @@
package server
import (
"context"
"net/http"
"time"
)
// Config holds the configuration for a single web server instance.
type Config struct {
Name string
Host string
Port int
Description string
// Handler is the http.Handler (e.g., a router) to be served.
Handler http.Handler
// GZIP compression support
GZIP bool
// TLS/HTTPS configuration options (mutually exclusive)
// Option 1: Provide certificate and key files directly
SSLCert string
SSLKey string
// Option 2: Use self-signed certificate (for development/testing)
// Generates a self-signed certificate automatically if no SSLCert/SSLKey is provided
SelfSignedSSL bool
// Option 3: Use Let's Encrypt / Certbot for automatic TLS
// AutoTLS enables automatic certificate management via Let's Encrypt
AutoTLS bool
// AutoTLSDomains specifies the domains for Let's Encrypt certificates
AutoTLSDomains []string
// AutoTLSCacheDir specifies where to cache certificates (default: "./certs-cache")
AutoTLSCacheDir string
// AutoTLSEmail is the email for Let's Encrypt registration (optional but recommended)
AutoTLSEmail string
// Graceful shutdown configuration
// ShutdownTimeout is the maximum time to wait for graceful shutdown
// Default: 30 seconds
ShutdownTimeout time.Duration
// DrainTimeout is the time to wait for in-flight requests to complete
// before forcing shutdown. Default: 25 seconds
DrainTimeout time.Duration
// ReadTimeout is the maximum duration for reading the entire request
// Default: 15 seconds
ReadTimeout time.Duration
// WriteTimeout is the maximum duration before timing out writes of the response
// Default: 15 seconds
WriteTimeout time.Duration
// IdleTimeout is the maximum amount of time to wait for the next request
// Default: 60 seconds
IdleTimeout time.Duration
}
// Instance defines the interface for a single server instance.
// It abstracts the underlying http.Server, allowing for easier management and testing.
type Instance interface {
// Start begins serving requests. This method should be non-blocking and
// run the server in a separate goroutine.
Start() error
// Stop gracefully shuts down the server without interrupting any active connections.
// It accepts a context to allow for a timeout.
Stop(ctx context.Context) error
// Addr returns the network address the server is listening on.
Addr() string
// Name returns the server instance name.
Name() string
// HealthCheckHandler returns a handler that responds to health checks.
// Returns 200 OK when healthy, 503 Service Unavailable when shutting down.
HealthCheckHandler() http.HandlerFunc
// ReadinessHandler returns a handler for readiness checks.
// Includes in-flight request count.
ReadinessHandler() http.HandlerFunc
// InFlightRequests returns the current number of in-flight requests.
InFlightRequests() int64
// IsShuttingDown returns true if the server is shutting down.
IsShuttingDown() bool
// Wait blocks until shutdown is complete.
Wait()
}
// Manager defines the interface for a server manager.
// It is responsible for managing the lifecycle of multiple server instances.
type Manager interface {
// Add registers a new server instance based on the provided configuration.
// The server is not started until StartAll or Start is called on the instance.
Add(cfg Config) (Instance, error)
// Get returns a server instance by its name.
Get(name string) (Instance, error)
// Remove stops and removes a server instance by its name.
Remove(name string) error
// StartAll starts all registered server instances that are not already running.
StartAll() error
// StopAll gracefully shuts down all running server instances.
// Executes shutdown callbacks and drains in-flight requests.
StopAll() error
// StopAllWithContext gracefully shuts down all running server instances with a context.
StopAllWithContext(ctx context.Context) error
// RestartAll gracefully restarts all running server instances.
RestartAll() error
// List returns all registered server instances.
List() []Instance
// ServeWithGracefulShutdown starts all servers and blocks until a shutdown signal is received.
// It handles SIGINT and SIGTERM signals and performs graceful shutdown with callbacks.
ServeWithGracefulShutdown() error
// RegisterShutdownCallback registers a callback to be called during shutdown.
// Useful for cleanup tasks like closing database connections, flushing metrics, etc.
RegisterShutdownCallback(cb ShutdownCallback)
}
// ShutdownCallback is a function called during graceful shutdown.
type ShutdownCallback func(context.Context) error
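A minimal sketch (not part of this file) of a callback that respects the context deadline passed in during shutdown; `closeResources` is a hypothetical cleanup function.
```go
// Sketch: a ShutdownCallback that stops waiting once the shutdown context expires.
var exampleCallback ShutdownCallback = func(ctx context.Context) error {
	done := make(chan error, 1)
	go func() { done <- closeResources() }() // closeResources is hypothetical
	select {
	case err := <-done:
		return err
	case <-ctx.Done():
		return ctx.Err()
	}
}
```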

601
pkg/server/manager.go Normal file
View File

@@ -0,0 +1,601 @@
package server
import (
"context"
"crypto/tls"
"fmt"
"net"
"net/http"
"os"
"os/signal"
"sync"
"sync/atomic"
"syscall"
"time"
"github.com/klauspost/compress/gzhttp"
"github.com/bitechdev/ResolveSpec/pkg/logger"
"github.com/bitechdev/ResolveSpec/pkg/middleware"
)
// gracefulServer wraps http.Server with graceful shutdown capabilities (internal type)
type gracefulServer struct {
server *http.Server
shutdownTimeout time.Duration
drainTimeout time.Duration
inFlightRequests atomic.Int64
isShuttingDown atomic.Bool
shutdownOnce sync.Once
shutdownComplete chan struct{}
}
// trackRequestsMiddleware tracks in-flight requests and blocks new requests during shutdown
func (gs *gracefulServer) trackRequestsMiddleware(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
// Check if shutting down
if gs.isShuttingDown.Load() {
http.Error(w, `{"error":"service_unavailable","message":"Server is shutting down"}`, http.StatusServiceUnavailable)
return
}
// Increment in-flight counter
gs.inFlightRequests.Add(1)
defer gs.inFlightRequests.Add(-1)
// Serve the request
next.ServeHTTP(w, r)
})
}
// shutdown performs graceful shutdown with request draining
func (gs *gracefulServer) shutdown(ctx context.Context) error {
var shutdownErr error
gs.shutdownOnce.Do(func() {
logger.Info("Starting graceful shutdown...")
// Mark as shutting down (new requests will be rejected)
gs.isShuttingDown.Store(true)
// Create context with timeout
shutdownCtx, cancel := context.WithTimeout(ctx, gs.shutdownTimeout)
defer cancel()
// Wait for in-flight requests to complete (with drain timeout)
drainCtx, drainCancel := context.WithTimeout(shutdownCtx, gs.drainTimeout)
defer drainCancel()
shutdownErr = gs.drainRequests(drainCtx)
if shutdownErr != nil {
logger.Error("Error draining requests: %v", shutdownErr)
}
// Shutdown the server
logger.Info("Shutting down HTTP server...")
if err := gs.server.Shutdown(shutdownCtx); err != nil {
logger.Error("Error shutting down server: %v", err)
if shutdownErr == nil {
shutdownErr = err
}
}
logger.Info("Graceful shutdown complete")
close(gs.shutdownComplete)
})
return shutdownErr
}
// drainRequests waits for in-flight requests to complete
func (gs *gracefulServer) drainRequests(ctx context.Context) error {
ticker := time.NewTicker(100 * time.Millisecond)
defer ticker.Stop()
startTime := time.Now()
for {
inFlight := gs.inFlightRequests.Load()
if inFlight == 0 {
logger.Info("All requests drained in %v", time.Since(startTime))
return nil
}
select {
case <-ctx.Done():
logger.Warn("Drain timeout exceeded with %d requests still in flight", inFlight)
return fmt.Errorf("drain timeout exceeded: %d requests still in flight", inFlight)
case <-ticker.C:
logger.Debug("Waiting for %d in-flight requests to complete...", inFlight)
}
}
}
// inFlightRequests returns the current number of in-flight requests
func (gs *gracefulServer) inFlightRequestsCount() int64 {
return gs.inFlightRequests.Load()
}
// isShutdown returns true if the server is shutting down
func (gs *gracefulServer) isShutdown() bool {
return gs.isShuttingDown.Load()
}
// wait blocks until shutdown is complete
func (gs *gracefulServer) wait() {
<-gs.shutdownComplete
}
// healthCheckHandler returns a handler that responds to health checks
func (gs *gracefulServer) healthCheckHandler() http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
if gs.isShutdown() {
http.Error(w, `{"status":"shutting_down"}`, http.StatusServiceUnavailable)
return
}
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusOK)
_, err := w.Write([]byte(`{"status":"healthy"}`))
if err != nil {
logger.Warn("Failed to write health check response: %v", err)
}
}
}
// readinessHandler returns a handler for readiness checks
func (gs *gracefulServer) readinessHandler() http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
if gs.isShutdown() {
http.Error(w, `{"ready":false,"reason":"shutting_down"}`, http.StatusServiceUnavailable)
return
}
inFlight := gs.inFlightRequestsCount()
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusOK)
fmt.Fprintf(w, `{"ready":true,"in_flight_requests":%d}`, inFlight)
}
}
// serverManager manages a collection of server instances with graceful shutdown support.
type serverManager struct {
instances map[string]Instance
mu sync.RWMutex
shutdownCallbacks []ShutdownCallback
callbacksMu sync.Mutex
}
// NewManager creates a new server manager.
func NewManager() Manager {
return &serverManager{
instances: make(map[string]Instance),
shutdownCallbacks: make([]ShutdownCallback, 0),
}
}
// Add registers a new server instance.
func (sm *serverManager) Add(cfg Config) (Instance, error) {
sm.mu.Lock()
defer sm.mu.Unlock()
if cfg.Name == "" {
return nil, fmt.Errorf("server name cannot be empty")
}
if _, exists := sm.instances[cfg.Name]; exists {
return nil, fmt.Errorf("server with name '%s' already exists", cfg.Name)
}
instance, err := newInstance(cfg)
if err != nil {
return nil, err
}
sm.instances[cfg.Name] = instance
return instance, nil
}
// Get returns a server instance by its name.
func (sm *serverManager) Get(name string) (Instance, error) {
sm.mu.RLock()
defer sm.mu.RUnlock()
instance, exists := sm.instances[name]
if !exists {
return nil, fmt.Errorf("server with name '%s' not found", name)
}
return instance, nil
}
// Remove stops and removes a server instance by its name.
func (sm *serverManager) Remove(name string) error {
sm.mu.Lock()
defer sm.mu.Unlock()
instance, exists := sm.instances[name]
if !exists {
return fmt.Errorf("server with name '%s' not found", name)
}
// Stop the server if it's running. Prefer the server's configured shutdownTimeout
// when available, and fall back to a sensible default.
timeout := 10 * time.Second
if si, ok := instance.(*serverInstance); ok && si.gracefulServer != nil && si.gracefulServer.shutdownTimeout > 0 {
timeout = si.gracefulServer.shutdownTimeout
}
ctx, cancel := context.WithTimeout(context.Background(), timeout)
defer cancel()
if err := instance.Stop(ctx); err != nil {
logger.Warn("Failed to gracefully stop server '%s' on remove: %v", name, err)
}
delete(sm.instances, name)
return nil
}
// StartAll starts all registered server instances.
func (sm *serverManager) StartAll() error {
sm.mu.RLock()
defer sm.mu.RUnlock()
var startErrors []error
for name, instance := range sm.instances {
if err := instance.Start(); err != nil {
startErrors = append(startErrors, fmt.Errorf("failed to start server '%s': %w", name, err))
}
}
if len(startErrors) > 0 {
return fmt.Errorf("encountered errors while starting servers: %v", startErrors)
}
return nil
}
// StopAll gracefully shuts down all running server instances.
func (sm *serverManager) StopAll() error {
return sm.StopAllWithContext(context.Background())
}
// StopAllWithContext gracefully shuts down all running server instances with a context.
func (sm *serverManager) StopAllWithContext(ctx context.Context) error {
sm.mu.RLock()
instancesToStop := make([]Instance, 0, len(sm.instances))
for _, instance := range sm.instances {
instancesToStop = append(instancesToStop, instance)
}
sm.mu.RUnlock()
logger.Info("Shutting down all servers...")
// Execute shutdown callbacks first
sm.callbacksMu.Lock()
callbacks := make([]ShutdownCallback, len(sm.shutdownCallbacks))
copy(callbacks, sm.shutdownCallbacks)
sm.callbacksMu.Unlock()
if len(callbacks) > 0 {
logger.Info("Executing %d shutdown callbacks...", len(callbacks))
for i, cb := range callbacks {
if err := cb(ctx); err != nil {
logger.Error("Shutdown callback %d failed: %v", i+1, err)
}
}
}
// Stop all instances in parallel
var shutdownErrors []error
var wg sync.WaitGroup
var errorsMu sync.Mutex
for _, instance := range instancesToStop {
wg.Add(1)
go func(inst Instance) {
defer wg.Done()
shutdownCtx, cancel := context.WithTimeout(ctx, 15*time.Second)
defer cancel()
if err := inst.Stop(shutdownCtx); err != nil {
errorsMu.Lock()
shutdownErrors = append(shutdownErrors, fmt.Errorf("failed to stop server '%s': %w", inst.Name(), err))
errorsMu.Unlock()
}
}(instance)
}
wg.Wait()
if len(shutdownErrors) > 0 {
return fmt.Errorf("encountered errors while stopping servers: %v", shutdownErrors)
}
logger.Info("All servers stopped gracefully.")
return nil
}
// RestartAll gracefully restarts all running server instances.
func (sm *serverManager) RestartAll() error {
logger.Info("Restarting all servers...")
if err := sm.StopAll(); err != nil {
return fmt.Errorf("failed to stop servers during restart: %w", err)
}
// Retry starting all servers with exponential backoff instead of a fixed sleep.
const (
maxAttempts = 5
initialBackoff = 100 * time.Millisecond
maxBackoff = 2 * time.Second
)
var lastErr error
backoff := initialBackoff
for attempt := 1; attempt <= maxAttempts; attempt++ {
if err := sm.StartAll(); err != nil {
lastErr = err
if attempt == maxAttempts {
break
}
logger.Warn("Attempt %d to start servers during restart failed: %v; retrying in %s", attempt, err, backoff)
time.Sleep(backoff)
backoff *= 2
if backoff > maxBackoff {
backoff = maxBackoff
}
continue
}
logger.Info("All servers restarted successfully.")
return nil
}
return fmt.Errorf("failed to start servers during restart after %d attempts: %w", maxAttempts, lastErr)
}
// List returns all registered server instances.
func (sm *serverManager) List() []Instance {
sm.mu.RLock()
defer sm.mu.RUnlock()
instances := make([]Instance, 0, len(sm.instances))
for _, instance := range sm.instances {
instances = append(instances, instance)
}
return instances
}
// RegisterShutdownCallback registers a callback to be called during shutdown.
func (sm *serverManager) RegisterShutdownCallback(cb ShutdownCallback) {
sm.callbacksMu.Lock()
defer sm.callbacksMu.Unlock()
sm.shutdownCallbacks = append(sm.shutdownCallbacks, cb)
}
// ServeWithGracefulShutdown starts all servers and blocks until a shutdown signal is received.
func (sm *serverManager) ServeWithGracefulShutdown() error {
// Start all servers
if err := sm.StartAll(); err != nil {
return fmt.Errorf("failed to start servers: %w", err)
}
logger.Info("All servers started. Waiting for shutdown signal...")
// Wait for interrupt signal
sigChan := make(chan os.Signal, 1)
signal.Notify(sigChan, os.Interrupt, syscall.SIGTERM, syscall.SIGINT)
sig := <-sigChan
logger.Info("Received signal: %v, initiating graceful shutdown", sig)
// Create context with timeout for shutdown
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()
return sm.StopAllWithContext(ctx)
}
// serverInstance is a concrete implementation of the Instance interface.
// It wraps gracefulServer to provide graceful shutdown capabilities.
type serverInstance struct {
cfg Config
gracefulServer *gracefulServer
certFile string // Path to certificate file (may be persistent for self-signed)
keyFile string // Path to key file (may be persistent for self-signed)
mu sync.RWMutex
running bool
serverErr chan error
}
// newInstance creates a new, unstarted server instance from a config.
func newInstance(cfg Config) (*serverInstance, error) {
if cfg.Handler == nil {
return nil, fmt.Errorf("handler cannot be nil")
}
// Set default timeouts
if cfg.ShutdownTimeout == 0 {
cfg.ShutdownTimeout = 30 * time.Second
}
if cfg.DrainTimeout == 0 {
cfg.DrainTimeout = 25 * time.Second
}
if cfg.ReadTimeout == 0 {
cfg.ReadTimeout = 15 * time.Second
}
if cfg.WriteTimeout == 0 {
cfg.WriteTimeout = 15 * time.Second
}
if cfg.IdleTimeout == 0 {
cfg.IdleTimeout = 60 * time.Second
}
addr := fmt.Sprintf("%s:%d", cfg.Host, cfg.Port)
var handler = cfg.Handler
// Wrap with GZIP handler if enabled
if cfg.GZIP {
gz, err := gzhttp.NewWrapper()
if err != nil {
return nil, fmt.Errorf("failed to create GZIP wrapper: %w", err)
}
handler = gz(handler)
}
// Wrap with the panic recovery middleware
handler = middleware.PanicRecovery(handler)
// Configure TLS if any TLS option is enabled
tlsConfig, certFile, keyFile, err := configureTLS(cfg)
if err != nil {
return nil, fmt.Errorf("failed to configure TLS: %w", err)
}
// Create gracefulServer
gracefulSrv := &gracefulServer{
server: &http.Server{
Addr: addr,
Handler: handler,
ReadTimeout: cfg.ReadTimeout,
WriteTimeout: cfg.WriteTimeout,
IdleTimeout: cfg.IdleTimeout,
TLSConfig: tlsConfig,
},
shutdownTimeout: cfg.ShutdownTimeout,
drainTimeout: cfg.DrainTimeout,
shutdownComplete: make(chan struct{}),
}
return &serverInstance{
cfg: cfg,
gracefulServer: gracefulSrv,
certFile: certFile,
keyFile: keyFile,
serverErr: make(chan error, 1),
}, nil
}
// Start begins serving requests in a new goroutine.
func (s *serverInstance) Start() error {
s.mu.Lock()
defer s.mu.Unlock()
if s.running {
return fmt.Errorf("server '%s' is already running", s.cfg.Name)
}
// Determine if we're using TLS
useTLS := s.cfg.SSLCert != "" || s.cfg.SSLKey != "" || s.cfg.SelfSignedSSL || s.cfg.AutoTLS
// Wrap handler with request tracking
s.gracefulServer.server.Handler = s.gracefulServer.trackRequestsMiddleware(s.gracefulServer.server.Handler)
go func() {
defer func() {
s.mu.Lock()
s.running = false
s.mu.Unlock()
logger.Info("Server '%s' stopped.", s.cfg.Name)
}()
var err error
protocol := "HTTP"
if useTLS {
protocol = "HTTPS"
logger.Info("Starting %s server '%s' on %s", protocol, s.cfg.Name, s.Addr())
// For AutoTLS, we need to use a TLS listener
if s.cfg.AutoTLS {
// Create listener
ln, lnErr := net.Listen("tcp", s.gracefulServer.server.Addr)
if lnErr != nil {
err = fmt.Errorf("failed to create listener: %w", lnErr)
} else {
// Wrap with TLS
tlsListener := tls.NewListener(ln, s.gracefulServer.server.TLSConfig)
err = s.gracefulServer.server.Serve(tlsListener)
}
} else {
// Use certificate files (regular SSL or self-signed)
err = s.gracefulServer.server.ListenAndServeTLS(s.certFile, s.keyFile)
}
} else {
logger.Info("Starting %s server '%s' on %s", protocol, s.cfg.Name, s.Addr())
err = s.gracefulServer.server.ListenAndServe()
}
// If the server stopped for a reason other than a graceful shutdown, log and report the error.
if err != nil && err != http.ErrServerClosed {
logger.Error("Server '%s' failed: %v", s.cfg.Name, err)
select {
case s.serverErr <- err:
default:
}
}
}()
s.running = true
// A small delay to allow the goroutine to start and potentially fail on binding.
time.Sleep(50 * time.Millisecond)
// Check if the server failed to start
select {
case err := <-s.serverErr:
s.running = false
return err
default:
}
return nil
}
// Stop gracefully shuts down the server.
func (s *serverInstance) Stop(ctx context.Context) error {
s.mu.Lock()
defer s.mu.Unlock()
if !s.running {
return nil // Already stopped
}
logger.Info("Gracefully shutting down server '%s'...", s.cfg.Name)
err := s.gracefulServer.shutdown(ctx)
if err == nil {
s.running = false
}
return err
}
// Addr returns the network address the server is listening on.
func (s *serverInstance) Addr() string {
return s.gracefulServer.server.Addr
}
// Name returns the server instance name.
func (s *serverInstance) Name() string {
return s.cfg.Name
}
// HealthCheckHandler returns a handler that responds to health checks.
func (s *serverInstance) HealthCheckHandler() http.HandlerFunc {
return s.gracefulServer.healthCheckHandler()
}
// ReadinessHandler returns a handler for readiness checks.
func (s *serverInstance) ReadinessHandler() http.HandlerFunc {
return s.gracefulServer.readinessHandler()
}
// InFlightRequests returns the current number of in-flight requests.
func (s *serverInstance) InFlightRequests() int64 {
return s.gracefulServer.inFlightRequestsCount()
}
// IsShuttingDown returns true if the server is shutting down.
func (s *serverInstance) IsShuttingDown() bool {
return s.gracefulServer.isShutdown()
}
// Wait blocks until shutdown is complete.
func (s *serverInstance) Wait() {
s.gracefulServer.wait()
}
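A minimal usage sketch of the manager API shown above (the import path `github.com/bitechdev/ResolveSpec/pkg/server` is assumed from the file location; the handler and port are placeholders):

```go
package main

import (
	"context"
	"net/http"

	"github.com/bitechdev/ResolveSpec/pkg/server"
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/hello", func(w http.ResponseWriter, r *http.Request) {
		_, _ = w.Write([]byte("ok"))
	})

	sm := server.NewManager()

	// Register an HTTP instance; zero-value timeouts fall back to the
	// defaults applied in newInstance.
	if _, err := sm.Add(server.Config{
		Name:    "api",
		Host:    "0.0.0.0",
		Port:    8080,
		Handler: mux,
	}); err != nil {
		panic(err)
	}

	// Cleanup to run before servers are stopped (close DB pools, flush metrics, ...).
	sm.RegisterShutdownCallback(func(ctx context.Context) error {
		return nil
	})

	// Starts all servers, blocks until SIGINT/SIGTERM, then drains in-flight
	// requests and shuts everything down.
	if err := sm.ServeWithGracefulShutdown(); err != nil {
		panic(err)
	}
}
```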

399
pkg/server/manager_test.go Normal file
View File

@@ -0,0 +1,399 @@
package server
import (
"context"
"fmt"
"io"
"net"
"net/http"
"os"
"path/filepath"
"sync"
"testing"
"time"
"github.com/bitechdev/ResolveSpec/pkg/logger"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
// getFreePort asks the kernel for a free open port that is ready to use.
func getFreePort(t *testing.T) int {
t.Helper()
addr, err := net.ResolveTCPAddr("tcp", "localhost:0")
require.NoError(t, err)
l, err := net.ListenTCP("tcp", addr)
require.NoError(t, err)
defer l.Close()
return l.Addr().(*net.TCPAddr).Port
}
func TestServerManagerLifecycle(t *testing.T) {
// Initialize logger for test output
logger.Init(true)
// Create a new server manager
sm := NewManager()
// Define a simple test handler
testHandler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusOK)
w.Write([]byte("Hello, World!"))
})
// Get a free port for the server to listen on to avoid conflicts
testPort := getFreePort(t)
// Add a new server configuration
serverConfig := Config{
Name: "TestServer",
Host: "localhost",
Port: testPort,
Handler: testHandler,
}
instance, err := sm.Add(serverConfig)
require.NoError(t, err, "should be able to add a new server")
require.NotNil(t, instance, "added instance should not be nil")
// --- Test StartAll ---
err = sm.StartAll()
require.NoError(t, err, "StartAll should not return an error")
// Give the server a moment to start up
time.Sleep(100 * time.Millisecond)
// --- Verify Server is Running ---
client := &http.Client{Timeout: 2 * time.Second}
url := fmt.Sprintf("http://localhost:%d", testPort)
resp, err := client.Get(url)
require.NoError(t, err, "should be able to make a request to the running server")
assert.Equal(t, http.StatusOK, resp.StatusCode, "expected status OK from the test server")
body, err := io.ReadAll(resp.Body)
require.NoError(t, err)
resp.Body.Close()
assert.Equal(t, "Hello, World!", string(body), "response body should match expected value")
// --- Test Get ---
retrievedInstance, err := sm.Get("TestServer")
require.NoError(t, err, "should be able to get server by name")
assert.Equal(t, instance.Addr(), retrievedInstance.Addr(), "retrieved instance should be the same")
// --- Test List ---
instanceList := sm.List()
require.Len(t, instanceList, 1, "list should contain one instance")
assert.Equal(t, instance.Addr(), instanceList[0].Addr(), "listed instance should be the same")
// --- Test StopAll ---
err = sm.StopAll()
require.NoError(t, err, "StopAll should not return an error")
// Give the server a moment to shut down
time.Sleep(100 * time.Millisecond)
// --- Verify Server is Stopped ---
_, err = client.Get(url)
require.Error(t, err, "should not be able to make a request to a stopped server")
// --- Test Remove ---
err = sm.Remove("TestServer")
require.NoError(t, err, "should be able to remove a server")
_, err = sm.Get("TestServer")
require.Error(t, err, "should not be able to get a removed server")
}
func TestManagerErrorCases(t *testing.T) {
logger.Init(true)
sm := NewManager()
testPort := getFreePort(t)
// --- Test Add Duplicate Name ---
config1 := Config{Name: "Duplicate", Host: "localhost", Port: testPort, Handler: http.NewServeMux()}
_, err := sm.Add(config1)
require.NoError(t, err)
config2 := Config{Name: "Duplicate", Host: "localhost", Port: getFreePort(t), Handler: http.NewServeMux()}
_, err = sm.Add(config2)
require.Error(t, err, "should not be able to add a server with a duplicate name")
// --- Test Get Non-existent ---
_, err = sm.Get("NonExistent")
require.Error(t, err, "should get an error for a non-existent server")
// --- Test Add with Nil Handler ---
config3 := Config{Name: "NilHandler", Host: "localhost", Port: getFreePort(t), Handler: nil}
_, err = sm.Add(config3)
require.Error(t, err, "should not be able to add a server with a nil handler")
}
func TestGracefulShutdown(t *testing.T) {
logger.Init(true)
sm := NewManager()
requestsHandled := 0
var requestsMu sync.Mutex
handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
requestsMu.Lock()
requestsHandled++
requestsMu.Unlock()
time.Sleep(100 * time.Millisecond)
w.WriteHeader(http.StatusOK)
})
testPort := getFreePort(t)
instance, err := sm.Add(Config{
Name: "TestServer",
Host: "localhost",
Port: testPort,
Handler: handler,
DrainTimeout: 2 * time.Second,
})
require.NoError(t, err)
err = sm.StartAll()
require.NoError(t, err)
// Give server time to start
time.Sleep(100 * time.Millisecond)
// Send some concurrent requests
var wg sync.WaitGroup
for i := 0; i < 5; i++ {
wg.Add(1)
go func() {
defer wg.Done()
client := &http.Client{Timeout: 5 * time.Second}
url := fmt.Sprintf("http://localhost:%d", testPort)
resp, err := client.Get(url)
if err == nil {
resp.Body.Close()
}
}()
}
// Wait a bit for requests to start
time.Sleep(50 * time.Millisecond)
// Check in-flight requests
inFlight := instance.InFlightRequests()
assert.Greater(t, inFlight, int64(0), "Should have in-flight requests")
// Stop the server
err = sm.StopAll()
require.NoError(t, err)
// Wait for all requests to complete
wg.Wait()
// Verify all requests were handled
requestsMu.Lock()
handled := requestsHandled
requestsMu.Unlock()
assert.GreaterOrEqual(t, handled, 1, "At least some requests should have been handled")
// Verify no in-flight requests
assert.Equal(t, int64(0), instance.InFlightRequests(), "Should have no in-flight requests after shutdown")
}
func TestHealthAndReadinessEndpoints(t *testing.T) {
logger.Init(true)
sm := NewManager()
mux := http.NewServeMux()
testPort := getFreePort(t)
instance, err := sm.Add(Config{
Name: "TestServer",
Host: "localhost",
Port: testPort,
Handler: mux,
})
require.NoError(t, err)
// Add health and readiness endpoints
mux.HandleFunc("/health", instance.HealthCheckHandler())
mux.HandleFunc("/ready", instance.ReadinessHandler())
err = sm.StartAll()
require.NoError(t, err)
time.Sleep(100 * time.Millisecond)
client := &http.Client{Timeout: 2 * time.Second}
baseURL := fmt.Sprintf("http://localhost:%d", testPort)
// Test health endpoint
resp, err := client.Get(baseURL + "/health")
require.NoError(t, err)
assert.Equal(t, http.StatusOK, resp.StatusCode)
body, _ := io.ReadAll(resp.Body)
resp.Body.Close()
assert.Contains(t, string(body), "healthy")
// Test readiness endpoint
resp, err = client.Get(baseURL + "/ready")
require.NoError(t, err)
assert.Equal(t, http.StatusOK, resp.StatusCode)
body, _ = io.ReadAll(resp.Body)
resp.Body.Close()
assert.Contains(t, string(body), "ready")
assert.Contains(t, string(body), "in_flight_requests")
// Stop the server
sm.StopAll()
}
func TestRequestRejectionDuringShutdown(t *testing.T) {
logger.Init(true)
sm := NewManager()
handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
time.Sleep(50 * time.Millisecond)
w.WriteHeader(http.StatusOK)
})
testPort := getFreePort(t)
_, err := sm.Add(Config{
Name: "TestServer",
Host: "localhost",
Port: testPort,
Handler: handler,
DrainTimeout: 1 * time.Second,
})
require.NoError(t, err)
err = sm.StartAll()
require.NoError(t, err)
time.Sleep(100 * time.Millisecond)
// Start shutdown in background
go func() {
time.Sleep(50 * time.Millisecond)
sm.StopAll()
}()
// Give shutdown time to start
time.Sleep(100 * time.Millisecond)
// Try to make a request after shutdown started
client := &http.Client{Timeout: 2 * time.Second}
url := fmt.Sprintf("http://localhost:%d", testPort)
resp, err := client.Get(url)
// The request should either fail (connection refused) or get 503
if err == nil {
assert.Equal(t, http.StatusServiceUnavailable, resp.StatusCode, "Should get 503 during shutdown")
resp.Body.Close()
}
}
func TestShutdownCallbacks(t *testing.T) {
logger.Init(true)
sm := NewManager()
callbackExecuted := false
var callbackMu sync.Mutex
sm.RegisterShutdownCallback(func(ctx context.Context) error {
callbackMu.Lock()
callbackExecuted = true
callbackMu.Unlock()
return nil
})
testPort := getFreePort(t)
_, err := sm.Add(Config{
Name: "TestServer",
Host: "localhost",
Port: testPort,
Handler: http.NewServeMux(),
})
require.NoError(t, err)
err = sm.StartAll()
require.NoError(t, err)
time.Sleep(100 * time.Millisecond)
err = sm.StopAll()
require.NoError(t, err)
callbackMu.Lock()
executed := callbackExecuted
callbackMu.Unlock()
assert.True(t, executed, "Shutdown callback should have been executed")
}
func TestSelfSignedSSLCertificateReuse(t *testing.T) {
logger.Init(true)
// Get expected cert directory location
cacheDir, err := os.UserCacheDir()
require.NoError(t, err)
certDir := filepath.Join(cacheDir, "resolvespec", "certs")
host := "localhost"
certFile := filepath.Join(certDir, fmt.Sprintf("%s-cert.pem", host))
keyFile := filepath.Join(certDir, fmt.Sprintf("%s-key.pem", host))
// Clean up any existing cert files from previous tests
os.Remove(certFile)
os.Remove(keyFile)
// First server creation - should generate new certificates
sm1 := NewManager()
testPort1 := getFreePort(t)
_, err = sm1.Add(Config{
Name: "SSLTestServer1",
Host: host,
Port: testPort1,
Handler: http.NewServeMux(),
SelfSignedSSL: true,
ShutdownTimeout: 5 * time.Second,
})
require.NoError(t, err)
// Verify certificates were created
_, err = os.Stat(certFile)
require.NoError(t, err, "certificate file should exist after first creation")
_, err = os.Stat(keyFile)
require.NoError(t, err, "key file should exist after first creation")
// Get modification time of cert file
info1, err := os.Stat(certFile)
require.NoError(t, err)
modTime1 := info1.ModTime()
// Wait a bit to ensure different modification times
time.Sleep(100 * time.Millisecond)
// Second server creation - should reuse existing certificates
sm2 := NewManager()
testPort2 := getFreePort(t)
_, err = sm2.Add(Config{
Name: "SSLTestServer2",
Host: host,
Port: testPort2,
Handler: http.NewServeMux(),
SelfSignedSSL: true,
ShutdownTimeout: 5 * time.Second,
})
require.NoError(t, err)
// Get modification time of cert file after second creation
info2, err := os.Stat(certFile)
require.NoError(t, err)
modTime2 := info2.ModTime()
// Verify the certificate was reused (same modification time)
assert.Equal(t, modTime1, modTime2, "certificate should be reused, not regenerated")
// Clean up
sm1.StopAll()
sm2.StopAll()
}

View File

@@ -1,296 +0,0 @@
package server
import (
"context"
"fmt"
"net/http"
"os"
"os/signal"
"sync"
"sync/atomic"
"syscall"
"time"
"github.com/bitechdev/ResolveSpec/pkg/logger"
)
// GracefulServer wraps http.Server with graceful shutdown capabilities
type GracefulServer struct {
server *http.Server
shutdownTimeout time.Duration
drainTimeout time.Duration
inFlightRequests atomic.Int64
isShuttingDown atomic.Bool
shutdownOnce sync.Once
shutdownComplete chan struct{}
}
// Config holds configuration for the graceful server
type Config struct {
// Addr is the server address (e.g., ":8080")
Addr string
// Handler is the HTTP handler
Handler http.Handler
// ShutdownTimeout is the maximum time to wait for graceful shutdown
// Default: 30 seconds
ShutdownTimeout time.Duration
// DrainTimeout is the time to wait for in-flight requests to complete
// before forcing shutdown. Default: 25 seconds
DrainTimeout time.Duration
// ReadTimeout is the maximum duration for reading the entire request
ReadTimeout time.Duration
// WriteTimeout is the maximum duration before timing out writes of the response
WriteTimeout time.Duration
// IdleTimeout is the maximum amount of time to wait for the next request
IdleTimeout time.Duration
}
// NewGracefulServer creates a new graceful server
func NewGracefulServer(config Config) *GracefulServer {
if config.ShutdownTimeout == 0 {
config.ShutdownTimeout = 30 * time.Second
}
if config.DrainTimeout == 0 {
config.DrainTimeout = 25 * time.Second
}
if config.ReadTimeout == 0 {
config.ReadTimeout = 10 * time.Second
}
if config.WriteTimeout == 0 {
config.WriteTimeout = 10 * time.Second
}
if config.IdleTimeout == 0 {
config.IdleTimeout = 120 * time.Second
}
gs := &GracefulServer{
server: &http.Server{
Addr: config.Addr,
Handler: config.Handler,
ReadTimeout: config.ReadTimeout,
WriteTimeout: config.WriteTimeout,
IdleTimeout: config.IdleTimeout,
},
shutdownTimeout: config.ShutdownTimeout,
drainTimeout: config.DrainTimeout,
shutdownComplete: make(chan struct{}),
}
return gs
}
// TrackRequestsMiddleware tracks in-flight requests and blocks new requests during shutdown
func (gs *GracefulServer) TrackRequestsMiddleware(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
// Check if shutting down
if gs.isShuttingDown.Load() {
http.Error(w, `{"error":"service_unavailable","message":"Server is shutting down"}`, http.StatusServiceUnavailable)
return
}
// Increment in-flight counter
gs.inFlightRequests.Add(1)
defer gs.inFlightRequests.Add(-1)
// Serve the request
next.ServeHTTP(w, r)
})
}
// ListenAndServe starts the server and handles graceful shutdown
func (gs *GracefulServer) ListenAndServe() error {
// Wrap handler with request tracking
gs.server.Handler = gs.TrackRequestsMiddleware(gs.server.Handler)
// Start server in goroutine
serverErr := make(chan error, 1)
go func() {
logger.Info("Starting server on %s", gs.server.Addr)
if err := gs.server.ListenAndServe(); err != nil && err != http.ErrServerClosed {
serverErr <- err
}
close(serverErr)
}()
// Wait for interrupt signal
sigChan := make(chan os.Signal, 1)
signal.Notify(sigChan, os.Interrupt, syscall.SIGTERM, syscall.SIGINT)
select {
case err := <-serverErr:
return err
case sig := <-sigChan:
logger.Info("Received signal: %v, initiating graceful shutdown", sig)
return gs.Shutdown(context.Background())
}
}
// Shutdown performs graceful shutdown with request draining
func (gs *GracefulServer) Shutdown(ctx context.Context) error {
var shutdownErr error
gs.shutdownOnce.Do(func() {
logger.Info("Starting graceful shutdown...")
// Mark as shutting down (new requests will be rejected)
gs.isShuttingDown.Store(true)
// Create context with timeout
shutdownCtx, cancel := context.WithTimeout(ctx, gs.shutdownTimeout)
defer cancel()
// Wait for in-flight requests to complete (with drain timeout)
drainCtx, drainCancel := context.WithTimeout(shutdownCtx, gs.drainTimeout)
defer drainCancel()
shutdownErr = gs.drainRequests(drainCtx)
if shutdownErr != nil {
logger.Error("Error draining requests: %v", shutdownErr)
}
// Shutdown the server
logger.Info("Shutting down HTTP server...")
if err := gs.server.Shutdown(shutdownCtx); err != nil {
logger.Error("Error shutting down server: %v", err)
if shutdownErr == nil {
shutdownErr = err
}
}
logger.Info("Graceful shutdown complete")
close(gs.shutdownComplete)
})
return shutdownErr
}
// drainRequests waits for in-flight requests to complete
func (gs *GracefulServer) drainRequests(ctx context.Context) error {
ticker := time.NewTicker(100 * time.Millisecond)
defer ticker.Stop()
startTime := time.Now()
for {
inFlight := gs.inFlightRequests.Load()
if inFlight == 0 {
logger.Info("All requests drained in %v", time.Since(startTime))
return nil
}
select {
case <-ctx.Done():
logger.Warn("Drain timeout exceeded with %d requests still in flight", inFlight)
return fmt.Errorf("drain timeout exceeded: %d requests still in flight", inFlight)
case <-ticker.C:
logger.Debug("Waiting for %d in-flight requests to complete...", inFlight)
}
}
}
// InFlightRequests returns the current number of in-flight requests
func (gs *GracefulServer) InFlightRequests() int64 {
return gs.inFlightRequests.Load()
}
// IsShuttingDown returns true if the server is shutting down
func (gs *GracefulServer) IsShuttingDown() bool {
return gs.isShuttingDown.Load()
}
// Wait blocks until shutdown is complete
func (gs *GracefulServer) Wait() {
<-gs.shutdownComplete
}
// HealthCheckHandler returns a handler that responds to health checks
// Returns 200 OK when healthy, 503 Service Unavailable when shutting down
func (gs *GracefulServer) HealthCheckHandler() http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
if gs.IsShuttingDown() {
http.Error(w, `{"status":"shutting_down"}`, http.StatusServiceUnavailable)
return
}
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusOK)
_, err := w.Write([]byte(`{"status":"healthy"}`))
if err != nil {
logger.Warn("Failed to write. %v", err)
}
}
}
// ReadinessHandler returns a handler for readiness checks
// Includes in-flight request count
func (gs *GracefulServer) ReadinessHandler() http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
if gs.IsShuttingDown() {
http.Error(w, `{"ready":false,"reason":"shutting_down"}`, http.StatusServiceUnavailable)
return
}
inFlight := gs.InFlightRequests()
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusOK)
fmt.Fprintf(w, `{"ready":true,"in_flight_requests":%d}`, inFlight)
}
}
// ShutdownCallback is a function called during shutdown
type ShutdownCallback func(context.Context) error
// shutdownCallbacks stores registered shutdown callbacks
var (
shutdownCallbacks []ShutdownCallback
shutdownCallbacksMu sync.Mutex
)
// RegisterShutdownCallback registers a callback to be called during shutdown
// Useful for cleanup tasks like closing database connections, flushing metrics, etc.
func RegisterShutdownCallback(cb ShutdownCallback) {
shutdownCallbacksMu.Lock()
defer shutdownCallbacksMu.Unlock()
shutdownCallbacks = append(shutdownCallbacks, cb)
}
// executeShutdownCallbacks runs all registered shutdown callbacks
func executeShutdownCallbacks(ctx context.Context) error {
shutdownCallbacksMu.Lock()
callbacks := make([]ShutdownCallback, len(shutdownCallbacks))
copy(callbacks, shutdownCallbacks)
shutdownCallbacksMu.Unlock()
var errors []error
for i, cb := range callbacks {
logger.Debug("Executing shutdown callback %d/%d", i+1, len(callbacks))
if err := cb(ctx); err != nil {
logger.Error("Shutdown callback %d failed: %v", i+1, err)
errors = append(errors, err)
}
}
if len(errors) > 0 {
return fmt.Errorf("shutdown callbacks failed: %v", errors)
}
return nil
}
// ShutdownWithCallbacks performs shutdown and executes all registered callbacks
func (gs *GracefulServer) ShutdownWithCallbacks(ctx context.Context) error {
// Execute callbacks first
if err := executeShutdownCallbacks(ctx); err != nil {
logger.Error("Error executing shutdown callbacks: %v", err)
}
// Then shutdown the server
return gs.Shutdown(ctx)
}

View File

@@ -1,231 +0,0 @@
package server
import (
"context"
"net/http"
"net/http/httptest"
"sync"
"testing"
"time"
)
func TestGracefulServerTrackRequests(t *testing.T) {
srv := NewGracefulServer(Config{
Addr: ":0",
Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
time.Sleep(100 * time.Millisecond)
w.WriteHeader(http.StatusOK)
}),
})
handler := srv.TrackRequestsMiddleware(srv.server.Handler)
// Start some requests
var wg sync.WaitGroup
for i := 0; i < 5; i++ {
wg.Add(1)
go func() {
defer wg.Done()
req := httptest.NewRequest("GET", "/test", nil)
w := httptest.NewRecorder()
handler.ServeHTTP(w, req)
}()
}
// Wait a bit for requests to start
time.Sleep(10 * time.Millisecond)
// Check in-flight count
inFlight := srv.InFlightRequests()
if inFlight == 0 {
t.Error("Should have in-flight requests")
}
// Wait for all requests to complete
wg.Wait()
// Check that counter is back to zero
inFlight = srv.InFlightRequests()
if inFlight != 0 {
t.Errorf("In-flight requests should be 0, got %d", inFlight)
}
}
func TestGracefulServerRejectsRequestsDuringShutdown(t *testing.T) {
srv := NewGracefulServer(Config{
Addr: ":0",
Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusOK)
}),
})
handler := srv.TrackRequestsMiddleware(srv.server.Handler)
// Mark as shutting down
srv.isShuttingDown.Store(true)
// Try to make a request
req := httptest.NewRequest("GET", "/test", nil)
w := httptest.NewRecorder()
handler.ServeHTTP(w, req)
// Should get 503
if w.Code != http.StatusServiceUnavailable {
t.Errorf("Expected 503, got %d", w.Code)
}
}
func TestHealthCheckHandler(t *testing.T) {
srv := NewGracefulServer(Config{
Addr: ":0",
Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {}),
})
handler := srv.HealthCheckHandler()
// Healthy
t.Run("Healthy", func(t *testing.T) {
req := httptest.NewRequest("GET", "/health", nil)
w := httptest.NewRecorder()
handler.ServeHTTP(w, req)
if w.Code != http.StatusOK {
t.Errorf("Expected 200, got %d", w.Code)
}
if w.Body.String() != `{"status":"healthy"}` {
t.Errorf("Unexpected body: %s", w.Body.String())
}
})
// Shutting down
t.Run("ShuttingDown", func(t *testing.T) {
srv.isShuttingDown.Store(true)
req := httptest.NewRequest("GET", "/health", nil)
w := httptest.NewRecorder()
handler.ServeHTTP(w, req)
if w.Code != http.StatusServiceUnavailable {
t.Errorf("Expected 503, got %d", w.Code)
}
})
}
func TestReadinessHandler(t *testing.T) {
srv := NewGracefulServer(Config{
Addr: ":0",
Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {}),
})
handler := srv.ReadinessHandler()
// Ready with no in-flight requests
t.Run("Ready", func(t *testing.T) {
req := httptest.NewRequest("GET", "/ready", nil)
w := httptest.NewRecorder()
handler.ServeHTTP(w, req)
if w.Code != http.StatusOK {
t.Errorf("Expected 200, got %d", w.Code)
}
body := w.Body.String()
if body != `{"ready":true,"in_flight_requests":0}` {
t.Errorf("Unexpected body: %s", body)
}
})
// Not ready during shutdown
t.Run("NotReady", func(t *testing.T) {
srv.isShuttingDown.Store(true)
req := httptest.NewRequest("GET", "/ready", nil)
w := httptest.NewRecorder()
handler.ServeHTTP(w, req)
if w.Code != http.StatusServiceUnavailable {
t.Errorf("Expected 503, got %d", w.Code)
}
})
}
func TestShutdownCallbacks(t *testing.T) {
callbackExecuted := false
RegisterShutdownCallback(func(ctx context.Context) error {
callbackExecuted = true
return nil
})
ctx := context.Background()
err := executeShutdownCallbacks(ctx)
if err != nil {
t.Errorf("executeShutdownCallbacks() error = %v", err)
}
if !callbackExecuted {
t.Error("Shutdown callback was not executed")
}
// Reset for other tests
shutdownCallbacks = nil
}
func TestDrainRequests(t *testing.T) {
srv := NewGracefulServer(Config{
Addr: ":0",
Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {}),
DrainTimeout: 1 * time.Second,
})
// Simulate in-flight requests
srv.inFlightRequests.Add(3)
// Start draining in background
go func() {
time.Sleep(100 * time.Millisecond)
// Simulate requests completing
srv.inFlightRequests.Add(-3)
}()
ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
defer cancel()
err := srv.drainRequests(ctx)
if err != nil {
t.Errorf("drainRequests() error = %v", err)
}
if srv.InFlightRequests() != 0 {
t.Errorf("In-flight requests should be 0, got %d", srv.InFlightRequests())
}
}
func TestDrainRequestsTimeout(t *testing.T) {
srv := NewGracefulServer(Config{
Addr: ":0",
Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {}),
DrainTimeout: 100 * time.Millisecond,
})
// Simulate in-flight requests that don't complete
srv.inFlightRequests.Add(5)
ctx, cancel := context.WithTimeout(context.Background(), 200*time.Millisecond)
defer cancel()
err := srv.drainRequests(ctx)
if err == nil {
t.Error("drainRequests() should timeout with error")
}
// Cleanup
srv.inFlightRequests.Add(-5)
}
func TestGetClientIP(t *testing.T) {
// This test is in ratelimit_test.go since getClientIP is used by rate limiter
// Including here for completeness of server tests
}

294
pkg/server/tls.go Normal file
View File

@@ -0,0 +1,294 @@
package server
import (
"crypto/ecdsa"
"crypto/elliptic"
"crypto/rand"
"crypto/tls"
"crypto/x509"
"crypto/x509/pkix"
"encoding/pem"
"fmt"
"math/big"
"net"
"os"
"path/filepath"
"sync"
"time"
"golang.org/x/crypto/acme/autocert"
)
// certGenerationMutex protects concurrent certificate generation for the same host
var certGenerationMutex sync.Mutex
// generateSelfSignedCert generates a self-signed certificate for the given host.
// Returns the certificate and private key in PEM format.
func generateSelfSignedCert(host string) (certPEM, keyPEM []byte, err error) {
// Generate private key
priv, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
if err != nil {
return nil, nil, fmt.Errorf("failed to generate private key: %w", err)
}
// Create certificate template
notBefore := time.Now()
notAfter := notBefore.Add(365 * 24 * time.Hour) // Valid for 1 year
serialNumber, err := rand.Int(rand.Reader, new(big.Int).Lsh(big.NewInt(1), 128))
if err != nil {
return nil, nil, fmt.Errorf("failed to generate serial number: %w", err)
}
template := x509.Certificate{
SerialNumber: serialNumber,
Subject: pkix.Name{
Organization: []string{"ResolveSpec Self-Signed"},
CommonName: host,
},
NotBefore: notBefore,
NotAfter: notAfter,
KeyUsage: x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
BasicConstraintsValid: true,
}
// Add host as DNS name or IP address
if ip := net.ParseIP(host); ip != nil {
template.IPAddresses = []net.IP{ip}
} else {
template.DNSNames = []string{host}
}
// Create certificate
derBytes, err := x509.CreateCertificate(rand.Reader, &template, &template, &priv.PublicKey, priv)
if err != nil {
return nil, nil, fmt.Errorf("failed to create certificate: %w", err)
}
// Encode certificate to PEM
certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: derBytes})
// Encode private key to PEM
privBytes, err := x509.MarshalECPrivateKey(priv)
if err != nil {
return nil, nil, fmt.Errorf("failed to marshal private key: %w", err)
}
keyPEM = pem.EncodeToMemory(&pem.Block{Type: "EC PRIVATE KEY", Bytes: privBytes})
return certPEM, keyPEM, nil
}
// sanitizeHostname converts a hostname to a safe filename by replacing invalid characters.
func sanitizeHostname(host string) string {
// Replace any character that's not alphanumeric, dot, or dash with underscore
safe := ""
for _, r := range host {
if (r >= 'a' && r <= 'z') || (r >= 'A' && r <= 'Z') || (r >= '0' && r <= '9') || r == '.' || r == '-' {
safe += string(r)
} else {
safe += "_"
}
}
return safe
}
// getCertDirectory returns the directory path for storing self-signed certificates.
// Creates the directory if it doesn't exist.
func getCertDirectory() (string, error) {
// Use a consistent directory in the user's cache directory
cacheDir, err := os.UserCacheDir()
if err != nil {
// Fallback to current directory if cache dir is not available
cacheDir = "."
}
certDir := filepath.Join(cacheDir, "resolvespec", "certs")
// Create directory if it doesn't exist
if err := os.MkdirAll(certDir, 0700); err != nil {
return "", fmt.Errorf("failed to create certificate directory: %w", err)
}
return certDir, nil
}
// isCertificateValid checks if a certificate file exists and is not expired.
func isCertificateValid(certFile string) bool {
// Check if file exists
certData, err := os.ReadFile(certFile)
if err != nil {
return false
}
// Parse certificate
block, _ := pem.Decode(certData)
if block == nil {
return false
}
cert, err := x509.ParseCertificate(block.Bytes)
if err != nil {
return false
}
// Check if certificate is expired or will expire in the next 30 days
now := time.Now()
expiryThreshold := now.Add(30 * 24 * time.Hour)
if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
return false
}
// Renew if expiring soon
if expiryThreshold.After(cert.NotAfter) {
return false
}
return true
}
// saveCertToFiles saves certificate and key PEM data to persistent files.
// Returns the file paths for the certificate and key.
func saveCertToFiles(certPEM, keyPEM []byte, host string) (certFile, keyFile string, err error) {
// Get certificate directory
certDir, err := getCertDirectory()
if err != nil {
return "", "", err
}
// Sanitize hostname for safe file naming
safeHost := sanitizeHostname(host)
// Use consistent file names based on host
certFile = filepath.Join(certDir, fmt.Sprintf("%s-cert.pem", safeHost))
keyFile = filepath.Join(certDir, fmt.Sprintf("%s-key.pem", safeHost))
// Write certificate
if err := os.WriteFile(certFile, certPEM, 0600); err != nil {
return "", "", fmt.Errorf("failed to write certificate: %w", err)
}
// Write key
if err := os.WriteFile(keyFile, keyPEM, 0600); err != nil {
return "", "", fmt.Errorf("failed to write private key: %w", err)
}
return certFile, keyFile, nil
}
// setupAutoTLS configures automatic TLS certificate management using Let's Encrypt.
// Returns a TLS config that can be used with http.Server.
func setupAutoTLS(domains []string, email, cacheDir string) (*tls.Config, error) {
if len(domains) == 0 {
return nil, fmt.Errorf("at least one domain must be specified for AutoTLS")
}
// Set default cache directory
if cacheDir == "" {
cacheDir = "./certs-cache"
}
// Create cache directory if it doesn't exist
if err := os.MkdirAll(cacheDir, 0700); err != nil {
return nil, fmt.Errorf("failed to create certificate cache directory: %w", err)
}
// Create autocert manager
m := &autocert.Manager{
Prompt: autocert.AcceptTOS,
Cache: autocert.DirCache(cacheDir),
HostPolicy: autocert.HostWhitelist(domains...),
Email: email,
}
// Create TLS config
tlsConfig := m.TLSConfig()
tlsConfig.MinVersion = tls.VersionTLS13
return tlsConfig, nil
}
// configureTLS configures TLS for the server based on the provided configuration.
// Returns the TLS config and certificate/key file paths (if applicable).
func configureTLS(cfg Config) (tlsConfig *tls.Config, certFile string, keyFile string, err error) {
// Option 1: Certificate files provided
if cfg.SSLCert != "" && cfg.SSLKey != "" {
// Validate that files exist
if _, err := os.Stat(cfg.SSLCert); os.IsNotExist(err) {
return nil, "", "", fmt.Errorf("SSL certificate file not found: %s", cfg.SSLCert)
}
if _, err := os.Stat(cfg.SSLKey); os.IsNotExist(err) {
return nil, "", "", fmt.Errorf("SSL key file not found: %s", cfg.SSLKey)
}
// Return basic TLS config - cert/key will be loaded by ListenAndServeTLS
tlsConfig := &tls.Config{
MinVersion: tls.VersionTLS12,
}
return tlsConfig, cfg.SSLCert, cfg.SSLKey, nil
}
// Option 2: Auto TLS (Let's Encrypt)
if cfg.AutoTLS {
tlsConfig, err := setupAutoTLS(cfg.AutoTLSDomains, cfg.AutoTLSEmail, cfg.AutoTLSCacheDir)
if err != nil {
return nil, "", "", fmt.Errorf("failed to setup AutoTLS: %w", err)
}
return tlsConfig, "", "", nil
}
// Option 3: Self-signed certificate
if cfg.SelfSignedSSL {
host := cfg.Host
if host == "" || host == "0.0.0.0" {
host = "localhost"
}
// Sanitize hostname for safe file naming
safeHost := sanitizeHostname(host)
// Lock to prevent concurrent certificate generation for the same host
certGenerationMutex.Lock()
defer certGenerationMutex.Unlock()
// Get certificate directory
certDir, err := getCertDirectory()
if err != nil {
return nil, "", "", fmt.Errorf("failed to get certificate directory: %w", err)
}
// Check for existing valid certificates
certFile := filepath.Join(certDir, fmt.Sprintf("%s-cert.pem", safeHost))
keyFile := filepath.Join(certDir, fmt.Sprintf("%s-key.pem", safeHost))
// If valid certificates exist, reuse them
if isCertificateValid(certFile) {
// Verify key file also exists
if _, err := os.Stat(keyFile); err == nil {
tlsConfig := &tls.Config{
MinVersion: tls.VersionTLS12,
}
return tlsConfig, certFile, keyFile, nil
}
}
// Generate new certificates
certPEM, keyPEM, err := generateSelfSignedCert(host)
if err != nil {
return nil, "", "", fmt.Errorf("failed to generate self-signed certificate: %w", err)
}
certFile, keyFile, err = saveCertToFiles(certPEM, keyPEM, host)
if err != nil {
return nil, "", "", fmt.Errorf("failed to save self-signed certificate: %w", err)
}
tlsConfig := &tls.Config{
MinVersion: tls.VersionTLS12,
}
return tlsConfig, certFile, keyFile, nil
}
return nil, "", "", nil
}
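The three TLS paths handled by `configureTLS` map onto `Config` fields as sketched below. These are illustrative fragments (reusing the `mux` handler and `server` import from the earlier sketch); the field names come from the code above, while the paths, domain, and email address are placeholders:

```go
// Option 1: existing certificate files, loaded later by ListenAndServeTLS.
cfgFiles := server.Config{
	Name: "api-tls", Host: "0.0.0.0", Port: 8443, Handler: mux,
	SSLCert: "/etc/ssl/api-cert.pem", // placeholder path
	SSLKey:  "/etc/ssl/api-key.pem",  // placeholder path
}

// Option 2: automatic certificates from Let's Encrypt via autocert.
cfgAuto := server.Config{
	Name: "api-acme", Host: "0.0.0.0", Port: 443, Handler: mux,
	AutoTLS:         true,
	AutoTLSDomains:  []string{"api.example.com"}, // placeholder domain
	AutoTLSEmail:    "ops@example.com",           // placeholder email
	AutoTLSCacheDir: "./certs-cache",
}

// Option 3: persistent self-signed certificate, generated once and reused
// from the user cache directory on later starts.
cfgDev := server.Config{
	Name: "api-dev", Host: "localhost", Port: 8443, Handler: mux,
	SelfSignedSSL: true,
}
```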

View File

@@ -1,5 +1,5 @@
// Package common provides nullable SQL types with automatic casting and conversion methods.
package common
// Package spectypes provides nullable SQL types with automatic casting and conversion methods.
package spectypes
import (
"database/sql"

View File

@@ -1,4 +1,4 @@
package common
package spectypes
import (
"database/sql/driver"

View File

@@ -465,7 +465,7 @@ func processRequest(ctx context.Context) {
1. **Check collector is running:**
```bash
docker-compose ps
podman compose ps
```
2. **Verify endpoint:**
@@ -476,7 +476,7 @@ func processRequest(ctx context.Context) {
3. **Check logs:**
```bash
docker-compose logs otel-collector
podman compose logs otel-collector
```
### Disable Tracing

726
pkg/websocketspec/README.md Normal file
View File

@@ -0,0 +1,726 @@
# WebSocketSpec - Real-Time WebSocket API Framework
WebSocketSpec provides a WebSocket-based API specification for real-time, bidirectional communication with full CRUD operations, subscriptions, and lifecycle hooks.
## Table of Contents
- [Features](#features)
- [Installation](#installation)
- [Quick Start](#quick-start)
- [Message Protocol](#message-protocol)
- [CRUD Operations](#crud-operations)
- [Subscriptions](#subscriptions)
- [Lifecycle Hooks](#lifecycle-hooks)
- [Client Examples](#client-examples)
- [Authentication](#authentication)
- [Error Handling](#error-handling)
- [Best Practices](#best-practices)
## Features
- **Real-Time Bidirectional Communication**: WebSocket-based persistent connections
- **Full CRUD Operations**: Create, Read, Update, Delete with rich query options
- **Real-Time Subscriptions**: Subscribe to entity changes with filter support
- **Automatic Notifications**: Server pushes updates to subscribed clients
- **Lifecycle Hooks**: Before/after hooks for all operations
- **Database Agnostic**: Works with GORM and Bun ORM through adapters
- **Connection Management**: Automatic connection tracking and cleanup
- **Request/Response Correlation**: Message IDs for tracking requests
- **Filter & Sort**: Advanced filtering, sorting, pagination, and preloading
## Installation
```bash
go get github.com/bitechdev/ResolveSpec
```
## Quick Start
### Server Setup
```go
package main
import (
"net/http"
"github.com/bitechdev/ResolveSpec/pkg/websocketspec"
"gorm.io/driver/postgres"
"gorm.io/gorm"
)
func main() {
// Connect to database
db, _ := gorm.Open(postgres.Open("your-connection-string"), &gorm.Config{})
// Create WebSocket handler
handler := websocketspec.NewHandlerWithGORM(db)
// Register models
handler.Registry.RegisterModel("public.users", &User{})
handler.Registry.RegisterModel("public.posts", &Post{})
// Setup WebSocket endpoint
http.HandleFunc("/ws", handler.HandleWebSocket)
// Start server
http.ListenAndServe(":8080", nil)
}
type User struct {
ID uint `json:"id" gorm:"primaryKey"`
Name string `json:"name"`
Email string `json:"email"`
Status string `json:"status"`
}
type Post struct {
ID uint `json:"id" gorm:"primaryKey"`
Title string `json:"title"`
Content string `json:"content"`
UserID uint `json:"user_id"`
}
```
### Client Setup (JavaScript)
```javascript
const ws = new WebSocket("ws://localhost:8080/ws");
ws.onopen = () => {
console.log("Connected to WebSocket");
};
ws.onmessage = (event) => {
const message = JSON.parse(event.data);
console.log("Received:", message);
};
ws.onerror = (error) => {
console.error("WebSocket error:", error);
};
```
## Message Protocol
All messages are JSON-encoded with the following structure:
```typescript
interface Message {
id: string; // Unique message ID for correlation
type: "request" | "response" | "notification" | "subscription";
operation?: "read" | "create" | "update" | "delete" | "subscribe" | "unsubscribe" | "meta";
schema?: string; // Database schema
entity: string; // Table/model name
record_id?: string; // For single-record operations
data?: any; // Request/response payload
options?: QueryOptions; // Filters, sorting, pagination
subscription_id?: string; // For subscription messages
success?: boolean; // Response success indicator
error?: ErrorInfo; // Error details
metadata?: Record<string, any>; // Additional metadata
timestamp?: string; // Message timestamp
}
interface QueryOptions {
filters?: FilterOption[];
columns?: string[];
preload?: PreloadOption[];
sort?: SortOption[];
limit?: number;
offset?: number;
}
```
## CRUD Operations
### CREATE - Create New Records
**Request:**
```json
{
"id": "msg-1",
"type": "request",
"operation": "create",
"schema": "public",
"entity": "users",
"data": {
"name": "John Doe",
"email": "john@example.com",
"status": "active"
}
}
```
**Response:**
```json
{
"id": "msg-1",
"type": "response",
"success": true,
"data": {
"id": 123,
"name": "John Doe",
"email": "john@example.com",
"status": "active"
},
"timestamp": "2025-12-12T10:30:00Z"
}
```
### READ - Query Records
**Read Multiple Records:**
```json
{
"id": "msg-2",
"type": "request",
"operation": "read",
"schema": "public",
"entity": "users",
"options": {
"filters": [
{"column": "status", "operator": "eq", "value": "active"}
],
"columns": ["id", "name", "email"],
"sort": [
{"column": "name", "direction": "asc"}
],
"limit": 10,
"offset": 0
}
}
```
**Read Single Record:**
```json
{
"id": "msg-3",
"type": "request",
"operation": "read",
"schema": "public",
"entity": "users",
"record_id": "123"
}
```
**Response:**
```json
{
"id": "msg-2",
"type": "response",
"success": true,
"data": [
{"id": 1, "name": "Alice", "email": "alice@example.com"},
{"id": 2, "name": "Bob", "email": "bob@example.com"}
],
"metadata": {
"total": 50,
"count": 2
},
"timestamp": "2025-12-12T10:30:00Z"
}
```
### UPDATE - Update Records
```json
{
"id": "msg-4",
"type": "request",
"operation": "update",
"schema": "public",
"entity": "users",
"record_id": "123",
"data": {
"name": "John Updated",
"email": "john.updated@example.com"
}
}
```
### DELETE - Delete Records
```json
{
"id": "msg-5",
"type": "request",
"operation": "delete",
"schema": "public",
"entity": "users",
"record_id": "123"
}
```
## Subscriptions
Subscriptions allow clients to receive real-time notifications when entities change.
### Subscribe to Changes
```json
{
"id": "sub-1",
"type": "subscription",
"operation": "subscribe",
"schema": "public",
"entity": "users",
"options": {
"filters": [
{"column": "status", "operator": "eq", "value": "active"}
]
}
}
```
**Response:**
```json
{
"id": "sub-1",
"type": "response",
"success": true,
"data": {
"subscription_id": "sub-abc123",
"schema": "public",
"entity": "users"
},
"timestamp": "2025-12-12T10:30:00Z"
}
```
### Receive Notifications
When a subscribed entity changes, clients automatically receive notifications:
```json
{
"type": "notification",
"operation": "create",
"subscription_id": "sub-abc123",
"schema": "public",
"entity": "users",
"data": {
"id": 124,
"name": "Jane Smith",
"email": "jane@example.com",
"status": "active"
},
"timestamp": "2025-12-12T10:35:00Z"
}
```
**Notification Operations:**
- `create` - New record created
- `update` - Record updated
- `delete` - Record deleted
### Unsubscribe
```json
{
"id": "unsub-1",
"type": "subscription",
"operation": "unsubscribe",
"subscription_id": "sub-abc123"
}
```
## Lifecycle Hooks
Hooks allow you to intercept and modify operations at various points in the lifecycle.
### Available Hook Types
- **BeforeRead** / **AfterRead**
- **BeforeCreate** / **AfterCreate**
- **BeforeUpdate** / **AfterUpdate**
- **BeforeDelete** / **AfterDelete**
- **BeforeSubscribe** / **AfterSubscribe**
- **BeforeConnect** / **AfterConnect**
### Hook Example
```go
handler := websocketspec.NewHandlerWithGORM(db)
// Authorization hook
handler.Hooks().RegisterBefore(websocketspec.OperationRead, func(ctx *websocketspec.HookContext) error {
// Check permissions
userID, _ := ctx.Connection.GetMetadata("user_id")
if userID == nil {
return fmt.Errorf("unauthorized: user not authenticated")
}
// Add filter to only show user's own records
if ctx.Entity == "posts" {
ctx.Options.Filters = append(ctx.Options.Filters, common.FilterOption{
Column: "user_id",
Operator: "eq",
Value: userID,
})
}
return nil
})
// Logging hook
handler.Hooks().RegisterAfter(websocketspec.OperationCreate, func(ctx *websocketspec.HookContext) error {
log.Printf("Created %s in %s.%s", ctx.Result, ctx.Schema, ctx.Entity)
return nil
})
// Validation hook
handler.Hooks().RegisterBefore(websocketspec.OperationCreate, func(ctx *websocketspec.HookContext) error {
// Validate data before creation
if data, ok := ctx.Data.(map[string]interface{}); ok {
if email, exists := data["email"]; !exists || email == "" {
return fmt.Errorf("email is required")
}
}
return nil
})
```
## Client Examples
### JavaScript/TypeScript Client
```typescript
class WebSocketClient {
private ws: WebSocket;
private messageHandlers: Map<string, (data: any) => void> = new Map();
private subscriptions: Map<string, (data: any) => void> = new Map();
constructor(url: string) {
this.ws = new WebSocket(url);
this.ws.onmessage = (event) => this.handleMessage(event);
}
// Send request and wait for response
async request(operation: string, entity: string, options?: any): Promise<any> {
const id = this.generateId();
return new Promise((resolve, reject) => {
this.messageHandlers.set(id, (data) => {
if (data.success) {
resolve(data.data);
} else {
reject(data.error);
}
});
this.ws.send(JSON.stringify({
id,
type: "request",
operation,
entity,
...options
}));
});
}
// Subscribe to entity changes
async subscribe(entity: string, filters?: any[], callback?: (data: any) => void): Promise<string> {
const id = this.generateId();
return new Promise((resolve, reject) => {
this.messageHandlers.set(id, (data) => {
if (data.success) {
const subId = data.data.subscription_id;
if (callback) {
this.subscriptions.set(subId, callback);
}
resolve(subId);
} else {
reject(data.error);
}
});
this.ws.send(JSON.stringify({
id,
type: "subscription",
operation: "subscribe",
entity,
options: { filters }
}));
});
}
private handleMessage(event: MessageEvent) {
const message = JSON.parse(event.data);
if (message.type === "response") {
const handler = this.messageHandlers.get(message.id);
if (handler) {
handler(message);
this.messageHandlers.delete(message.id);
}
} else if (message.type === "notification") {
const callback = this.subscriptions.get(message.subscription_id);
if (callback) {
callback(message);
}
}
}
private generateId(): string {
return `msg-${Date.now()}-${Math.random().toString(36).substr(2, 9)}`;
}
}
// Usage
const client = new WebSocketClient("ws://localhost:8080/ws");
// Read users
const users = await client.request("read", "users", {
options: {
filters: [{ column: "status", operator: "eq", value: "active" }],
limit: 10
}
});
// Subscribe to user changes
await client.subscribe("users",
[{ column: "status", operator: "eq", value: "active" }],
(notification) => {
console.log("User changed:", notification.operation, notification.data);
}
);
// Create user
const newUser = await client.request("create", "users", {
data: {
name: "Alice",
email: "alice@example.com",
status: "active"
}
});
```
### Python Client Example
```python
import asyncio
import websockets
import json
import uuid

class WebSocketClient:
    def __init__(self, url):
        self.url = url
        self.ws = None
        self.handlers = {}
        self.subscriptions = {}

    async def connect(self):
        self.ws = await websockets.connect(self.url)
        asyncio.create_task(self.listen())

    async def listen(self):
        async for message in self.ws:
            data = json.loads(message)
            if data["type"] == "response":
                handler = self.handlers.get(data["id"])
                if handler:
                    handler(data)
                    del self.handlers[data["id"]]
            elif data["type"] == "notification":
                callback = self.subscriptions.get(data["subscription_id"])
                if callback:
                    callback(data)

    async def request(self, operation, entity, **kwargs):
        msg_id = str(uuid.uuid4())
        future = asyncio.Future()
        self.handlers[msg_id] = lambda data: future.set_result(data)
        await self.ws.send(json.dumps({
            "id": msg_id,
            "type": "request",
            "operation": operation,
            "entity": entity,
            **kwargs
        }))
        result = await future
        if result["success"]:
            return result["data"]
        else:
            raise Exception(result["error"]["message"])

    async def subscribe(self, entity, callback, filters=None):
        msg_id = str(uuid.uuid4())
        future = asyncio.Future()
        self.handlers[msg_id] = lambda data: future.set_result(data)
        await self.ws.send(json.dumps({
            "id": msg_id,
            "type": "subscription",
            "operation": "subscribe",
            "entity": entity,
            "options": {"filters": filters} if filters else {}
        }))
        result = await future
        if result["success"]:
            sub_id = result["data"]["subscription_id"]
            self.subscriptions[sub_id] = callback
            return sub_id
        else:
            raise Exception(result["error"]["message"])

# Usage
async def main():
    client = WebSocketClient("ws://localhost:8080/ws")
    await client.connect()

    # Read users
    users = await client.request("read", "users",
        options={
            "filters": [{"column": "status", "operator": "eq", "value": "active"}],
            "limit": 10
        }
    )
    print("Users:", users)

    # Subscribe to changes
    def on_user_change(notification):
        print(f"User {notification['operation']}: {notification['data']}")

    await client.subscribe("users", on_user_change,
        filters=[{"column": "status", "operator": "eq", "value": "active"}]
    )

    # Keep the loop alive so subscription notifications can arrive
    await asyncio.sleep(60)

asyncio.run(main())
```
## Authentication
Implement authentication using hooks:
```go
handler := websocketspec.NewHandlerWithGORM(db)
// Authentication on connection
handler.Hooks().Register(websocketspec.BeforeConnect, func(ctx *websocketspec.HookContext) error {
// Extract and validate the auth token; extractToken and validateToken are
// placeholders for your own logic (e.g. read the token from the upgrade
// request's query parameters or headers)
token := extractToken(ctx)
user, err := validateToken(token)
if err != nil {
return fmt.Errorf("authentication failed: %w", err)
}
// Store user info in connection metadata
ctx.Connection.SetMetadata("user", user)
ctx.Connection.SetMetadata("user_id", user.ID)
return nil
})
// Check permissions for each operation
handler.Hooks().RegisterBefore(websocketspec.OperationRead, func(ctx *websocketspec.HookContext) error {
userID, ok := ctx.Connection.GetMetadata("user_id")
if !ok {
return fmt.Errorf("unauthorized")
}
// Add user-specific filters
if ctx.Entity == "orders" {
ctx.Options.Filters = append(ctx.Options.Filters, common.FilterOption{
Column: "user_id",
Operator: "eq",
Value: userID,
})
}
return nil
})
```
## Error Handling
Errors are returned in a consistent format:
```json
{
"id": "msg-1",
"type": "response",
"success": false,
"error": {
"code": "validation_error",
"message": "Email is required",
"details": {
"field": "email"
}
},
"timestamp": "2025-12-12T10:30:00Z"
}
```
**Common Error Codes:**
- `invalid_message` - Message format is invalid
- `model_not_found` - Entity not registered
- `invalid_model` - Model validation failed
- `read_error` - Read operation failed
- `create_error` - Create operation failed
- `update_error` - Update operation failed
- `delete_error` - Delete operation failed
- `hook_error` - Hook execution failed
- `unauthorized` - Authentication/authorization failed
## Best Practices
1. **Always Use Message IDs**: Correlate requests with responses using unique IDs
2. **Handle Reconnections**: Implement automatic reconnection logic on the client
3. **Validate Data**: Use before-hooks to validate data before operations
4. **Limit Subscriptions**: Implement limits on subscriptions per connection (see the hook sketch after this list)
5. **Use Filters**: Apply filters to subscriptions to reduce unnecessary notifications
6. **Implement Authentication**: Always validate users before processing operations
7. **Handle Errors Gracefully**: Display user-friendly error messages
8. **Clean Up**: Unsubscribe when components unmount or disconnect
9. **Rate Limiting**: Implement rate limiting to prevent abuse
10. **Monitor Connections**: Track active connections and subscriptions
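For example, practice 4 (limiting subscriptions) can be enforced with the hook and metadata APIs shown in this document. The sketch below is illustrative rather than a built-in feature: it keeps a per-connection counter under a hypothetical `sub_count` metadata key and rejects new subscriptions once a limit is reached.

```go
const maxSubscriptionsPerConn = 10

// Reject new subscriptions once the per-connection limit is reached.
handler.Hooks().Register(websocketspec.BeforeSubscribe, func(ctx *websocketspec.HookContext) error {
    count := 0
    if v, ok := ctx.Connection.GetMetadata("sub_count"); ok {
        count, _ = v.(int)
    }
    if count >= maxSubscriptionsPerConn {
        return fmt.Errorf("subscription limit reached (%d)", maxSubscriptionsPerConn)
    }
    return nil
})

// Keep the counter in sync as subscriptions are created and removed.
handler.Hooks().Register(websocketspec.AfterSubscribe, func(ctx *websocketspec.HookContext) error {
    count := 0
    if v, ok := ctx.Connection.GetMetadata("sub_count"); ok {
        count, _ = v.(int)
    }
    ctx.Connection.SetMetadata("sub_count", count+1)
    return nil
})

handler.Hooks().Register(websocketspec.AfterUnsubscribe, func(ctx *websocketspec.HookContext) error {
    if v, ok := ctx.Connection.GetMetadata("sub_count"); ok {
        if count, _ := v.(int); count > 0 {
            ctx.Connection.SetMetadata("sub_count", count-1)
        }
    }
    return nil
})
```

Each connection's messages are handled on its own read pump, so the read-modify-write on the counter is adequate for a sketch; a production implementation might track the count in the subscription manager instead.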
## Filter Operators
Supported filter operators (a usage sketch follows the list):
- `eq` - Equal (=)
- `neq` - Not Equal (!=)
- `gt` - Greater Than (>)
- `gte` - Greater Than or Equal (>=)
- `lt` - Less Than (<)
- `lte` - Less Than or Equal (<=)
- `like` - LIKE (case-sensitive)
- `ilike` - ILIKE (case-insensitive)
- `in` - IN (array of values)
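On the wire these operators travel in `options.filters`; server-side hooks use the same shape through `common.FilterOption`, as the Authentication section shows. A small illustrative sketch with hypothetical columns:

```go
// Equivalent to the JSON a client would send, e.g.
//   [{"column": "status", "operator": "eq", "value": "active"}, ...]
filters := []common.FilterOption{
    {Column: "status", Operator: "eq", Value: "active"},
    {Column: "age", Operator: "gte", Value: 18},
    {Column: "role", Operator: "in", Value: []string{"admin", "editor"}},
}
```

A before-read hook can append such filters to `ctx.Options.Filters` to scope queries per user, exactly as the Authentication example does with its `user_id` filter.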
## Performance Considerations
- **Persistent Connections**: a single WebSocket connection is reused for many requests, avoiding per-request HTTP overhead
- **Subscription Filtering**: Only matching updates are sent to clients
- **Efficient Queries**: Uses database adapters for optimized queries
- **Message Batching**: Multiple messages can be sent in one write
- **Keepalive**: Automatic ping/pong for connection health
## Comparison with Other Specs
| Feature | WebSocketSpec | RestHeadSpec | ResolveSpec |
|---------|--------------|--------------|-------------|
| Protocol | WebSocket | HTTP/REST | HTTP/REST |
| Real-time | ✅ Yes | ❌ No | ❌ No |
| Subscriptions | ✅ Yes | ❌ No | ❌ No |
| Bidirectional | ✅ Yes | ❌ No | ❌ No |
| Query Options | In Message | In Headers | In Body |
| Overhead | Low | Medium | Medium |
| Use Case | Real-time apps | Traditional APIs | Body-based APIs |
## License
MIT License - See LICENSE file for details

View File

@@ -0,0 +1,380 @@
package websocketspec
import (
"context"
"encoding/json"
"fmt"
"sync"
"time"
"github.com/gorilla/websocket"
"github.com/bitechdev/ResolveSpec/pkg/logger"
)
// Connection represents a WebSocket connection with its state
type Connection struct {
// ID is a unique identifier for this connection
ID string
// ws is the underlying WebSocket connection
ws *websocket.Conn
// send is a channel for outbound messages
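// It is buffered; when the buffer fills, the connection manager treats the
// connection as slow (see ConnectionManager.Run) and Send returns an error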
send chan []byte
// subscriptions holds active subscriptions for this connection
subscriptions map[string]*Subscription
// mu protects subscriptions map
mu sync.RWMutex
// ctx is the connection context
ctx context.Context
// cancel cancels the connection context
cancel context.CancelFunc
// handler is the WebSocket handler
handler *Handler
// metadata stores connection-specific metadata (e.g., user info, auth state)
metadata map[string]interface{}
// metaMu protects metadata map
metaMu sync.RWMutex
// closedOnce ensures cleanup happens only once
closedOnce sync.Once
}
// ConnectionManager manages all active WebSocket connections
type ConnectionManager struct {
// connections holds all active connections
connections map[string]*Connection
// mu protects the connections map
mu sync.RWMutex
// register channel for new connections
register chan *Connection
// unregister channel for closing connections
unregister chan *Connection
// broadcast channel for broadcasting messages
broadcast chan *BroadcastMessage
// ctx is the manager context
ctx context.Context
// cancel cancels the manager context
cancel context.CancelFunc
}
// BroadcastMessage represents a message to broadcast to multiple connections
type BroadcastMessage struct {
// Message is the message to broadcast
Message []byte
// Filter is an optional function to filter which connections receive the message
Filter func(*Connection) bool
}
// NewConnection creates a new WebSocket connection
func NewConnection(id string, ws *websocket.Conn, handler *Handler) *Connection {
ctx, cancel := context.WithCancel(context.Background())
return &Connection{
ID: id,
ws: ws,
send: make(chan []byte, 256),
subscriptions: make(map[string]*Subscription),
ctx: ctx,
cancel: cancel,
handler: handler,
metadata: make(map[string]interface{}),
}
}
// NewConnectionManager creates a new connection manager
func NewConnectionManager(ctx context.Context) *ConnectionManager {
ctx, cancel := context.WithCancel(ctx)
return &ConnectionManager{
connections: make(map[string]*Connection),
register: make(chan *Connection),
unregister: make(chan *Connection),
broadcast: make(chan *BroadcastMessage),
ctx: ctx,
cancel: cancel,
}
}
// Run starts the connection manager event loop
func (cm *ConnectionManager) Run() {
for {
select {
case conn := <-cm.register:
cm.mu.Lock()
cm.connections[conn.ID] = conn
count := len(cm.connections)
cm.mu.Unlock()
logger.Info("[WebSocketSpec] Connection registered: %s (total: %d)", conn.ID, count)
case conn := <-cm.unregister:
cm.mu.Lock()
if _, ok := cm.connections[conn.ID]; ok {
delete(cm.connections, conn.ID)
close(conn.send)
count := len(cm.connections)
cm.mu.Unlock()
logger.Info("[WebSocketSpec] Connection unregistered: %s (total: %d)", conn.ID, count)
} else {
cm.mu.Unlock()
}
case msg := <-cm.broadcast:
cm.mu.RLock()
for _, conn := range cm.connections {
if msg.Filter == nil || msg.Filter(conn) {
select {
case conn.send <- msg.Message:
default:
// Channel full, connection is slow - close it.
// Hand the connection to the unregister channel from a separate goroutine:
// Run is the only receiver, so a synchronous send here would deadlock.
logger.Warn("[WebSocketSpec] Connection %s send buffer full, closing", conn.ID)
go func(slow *Connection) { cm.unregister <- slow }(conn)
}
}
}
cm.mu.RUnlock()
case <-cm.ctx.Done():
logger.Info("[WebSocketSpec] Connection manager shutting down")
return
}
}
}
// Register registers a new connection
func (cm *ConnectionManager) Register(conn *Connection) {
cm.register <- conn
}
// Unregister removes a connection
func (cm *ConnectionManager) Unregister(conn *Connection) {
cm.unregister <- conn
}
// Broadcast sends a message to all connections matching the filter
func (cm *ConnectionManager) Broadcast(message []byte, filter func(*Connection) bool) {
cm.broadcast <- &BroadcastMessage{
Message: message,
Filter: filter,
}
}
// Count returns the number of active connections
func (cm *ConnectionManager) Count() int {
cm.mu.RLock()
defer cm.mu.RUnlock()
return len(cm.connections)
}
// GetConnection retrieves a connection by ID
func (cm *ConnectionManager) GetConnection(id string) (*Connection, bool) {
cm.mu.RLock()
defer cm.mu.RUnlock()
conn, ok := cm.connections[id]
return conn, ok
}
// Shutdown gracefully shuts down the connection manager
func (cm *ConnectionManager) Shutdown() {
cm.cancel()
// Close all connections
cm.mu.Lock()
for _, conn := range cm.connections {
conn.Close()
}
cm.mu.Unlock()
}
// ReadPump reads messages from the WebSocket connection
func (c *Connection) ReadPump() {
defer func() {
c.handler.connManager.Unregister(c)
c.Close()
}()
// Configure read parameters
_ = c.ws.SetReadDeadline(time.Now().Add(60 * time.Second))
c.ws.SetPongHandler(func(string) error {
_ = c.ws.SetReadDeadline(time.Now().Add(60 * time.Second))
return nil
})
for {
_, message, err := c.ws.ReadMessage()
if err != nil {
if websocket.IsUnexpectedCloseError(err, websocket.CloseGoingAway, websocket.CloseAbnormalClosure) {
logger.Error("[WebSocketSpec] Connection %s read error: %v", c.ID, err)
}
break
}
// Parse and handle the message
c.handleMessage(message)
}
}
// WritePump writes messages to the WebSocket connection
func (c *Connection) WritePump() {
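// Ping every 54s, comfortably inside the 60s read deadline that ReadPump
// refreshes on each pong, so healthy connections are kept alive.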
ticker := time.NewTicker(54 * time.Second)
defer func() {
ticker.Stop()
c.Close()
}()
for {
select {
case message, ok := <-c.send:
_ = c.ws.SetWriteDeadline(time.Now().Add(10 * time.Second))
if !ok {
// Channel closed
_ = c.ws.WriteMessage(websocket.CloseMessage, []byte{})
return
}
w, err := c.ws.NextWriter(websocket.TextMessage)
if err != nil {
return
}
_, _ = w.Write(message)
// Write any queued messages
n := len(c.send)
for i := 0; i < n; i++ {
_, _ = w.Write([]byte{'\n'})
_, _ = w.Write(<-c.send)
}
if err := w.Close(); err != nil {
return
}
case <-ticker.C:
_ = c.ws.SetWriteDeadline(time.Now().Add(10 * time.Second))
if err := c.ws.WriteMessage(websocket.PingMessage, nil); err != nil {
return
}
case <-c.ctx.Done():
return
}
}
}
// Send sends a message to this connection
func (c *Connection) Send(message []byte) error {
select {
case c.send <- message:
return nil
case <-c.ctx.Done():
return fmt.Errorf("connection closed")
default:
return fmt.Errorf("send buffer full")
}
}
// SendJSON sends a JSON-encoded message to this connection
func (c *Connection) SendJSON(v interface{}) error {
data, err := json.Marshal(v)
if err != nil {
return fmt.Errorf("failed to marshal message: %w", err)
}
return c.Send(data)
}
// Close closes the connection
func (c *Connection) Close() {
c.closedOnce.Do(func() {
if c.cancel != nil {
c.cancel()
}
if c.ws != nil {
c.ws.Close()
}
// Clean up subscriptions
c.mu.Lock()
for subID := range c.subscriptions {
if c.handler != nil && c.handler.subscriptionManager != nil {
c.handler.subscriptionManager.Unsubscribe(subID)
}
}
c.subscriptions = make(map[string]*Subscription)
c.mu.Unlock()
logger.Info("[WebSocketSpec] Connection %s closed", c.ID)
})
}
// AddSubscription adds a subscription to this connection
func (c *Connection) AddSubscription(sub *Subscription) {
c.mu.Lock()
defer c.mu.Unlock()
c.subscriptions[sub.ID] = sub
}
// RemoveSubscription removes a subscription from this connection
func (c *Connection) RemoveSubscription(subID string) {
c.mu.Lock()
defer c.mu.Unlock()
delete(c.subscriptions, subID)
}
// GetSubscription retrieves a subscription by ID
func (c *Connection) GetSubscription(subID string) (*Subscription, bool) {
c.mu.RLock()
defer c.mu.RUnlock()
sub, ok := c.subscriptions[subID]
return sub, ok
}
// SetMetadata sets metadata for this connection
func (c *Connection) SetMetadata(key string, value interface{}) {
c.metaMu.Lock()
defer c.metaMu.Unlock()
c.metadata[key] = value
}
// GetMetadata retrieves metadata for this connection
func (c *Connection) GetMetadata(key string) (interface{}, bool) {
c.metaMu.RLock()
defer c.metaMu.RUnlock()
val, ok := c.metadata[key]
return val, ok
}
// handleMessage processes an incoming message
func (c *Connection) handleMessage(data []byte) {
msg, err := ParseMessage(data)
if err != nil {
logger.Error("[WebSocketSpec] Failed to parse message: %v", err)
errResp := NewErrorResponse("", "invalid_message", "Failed to parse message")
_ = c.SendJSON(errResp)
return
}
if !msg.IsValid() {
logger.Error("[WebSocketSpec] Invalid message received")
errResp := NewErrorResponse(msg.ID, "invalid_message", "Message validation failed")
_ = c.SendJSON(errResp)
return
}
// Route message to appropriate handler
c.handler.HandleMessage(c, msg)
}

View File

@@ -0,0 +1,596 @@
package websocketspec
import (
"context"
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
// Helper function to create a test connection with proper initialization
func createTestConnection(id string) *Connection {
ctx, cancel := context.WithCancel(context.Background())
return &Connection{
ID: id,
send: make(chan []byte, 256),
subscriptions: make(map[string]*Subscription),
metadata: make(map[string]interface{}),
ctx: ctx,
cancel: cancel,
}
}
func TestNewConnectionManager(t *testing.T) {
ctx := context.Background()
cm := NewConnectionManager(ctx)
assert.NotNil(t, cm)
assert.NotNil(t, cm.connections)
assert.NotNil(t, cm.register)
assert.NotNil(t, cm.unregister)
assert.NotNil(t, cm.broadcast)
assert.Equal(t, 0, cm.Count())
}
func TestConnectionManager_Count(t *testing.T) {
ctx := context.Background()
cm := NewConnectionManager(ctx)
// Start manager
go cm.Run()
defer func() {
// Cancel context without calling Shutdown which tries to close connections
cm.cancel()
}()
// Initially empty
assert.Equal(t, 0, cm.Count())
// Add a connection
conn := createTestConnection("conn-1")
cm.Register(conn)
time.Sleep(10 * time.Millisecond) // Give time for registration
assert.Equal(t, 1, cm.Count())
}
func TestConnectionManager_Register(t *testing.T) {
ctx := context.Background()
cm := NewConnectionManager(ctx)
// Start manager
go cm.Run()
defer cm.cancel()
conn := createTestConnection("conn-1")
cm.Register(conn)
time.Sleep(10 * time.Millisecond)
// Verify connection was registered
retrievedConn, exists := cm.GetConnection("conn-1")
assert.True(t, exists)
assert.Equal(t, "conn-1", retrievedConn.ID)
}
func TestConnectionManager_Unregister(t *testing.T) {
ctx := context.Background()
cm := NewConnectionManager(ctx)
// Start manager
go cm.Run()
defer cm.cancel()
conn := &Connection{
ID: "conn-1",
send: make(chan []byte, 256),
subscriptions: make(map[string]*Subscription),
}
cm.Register(conn)
time.Sleep(10 * time.Millisecond)
assert.Equal(t, 1, cm.Count())
cm.Unregister(conn)
time.Sleep(10 * time.Millisecond)
assert.Equal(t, 0, cm.Count())
// Verify connection was removed
_, exists := cm.GetConnection("conn-1")
assert.False(t, exists)
}
func TestConnectionManager_GetConnection(t *testing.T) {
ctx := context.Background()
cm := NewConnectionManager(ctx)
// Start manager
go cm.Run()
defer cm.cancel()
// Non-existent connection
_, exists := cm.GetConnection("non-existent")
assert.False(t, exists)
// Register connection
conn := &Connection{
ID: "conn-1",
send: make(chan []byte, 256),
subscriptions: make(map[string]*Subscription),
}
cm.Register(conn)
time.Sleep(10 * time.Millisecond)
// Get existing connection
retrievedConn, exists := cm.GetConnection("conn-1")
assert.True(t, exists)
assert.Equal(t, "conn-1", retrievedConn.ID)
}
func TestConnectionManager_MultipleConnections(t *testing.T) {
ctx := context.Background()
cm := NewConnectionManager(ctx)
// Start manager
go cm.Run()
defer cm.cancel()
// Register multiple connections
conn1 := &Connection{ID: "conn-1", send: make(chan []byte, 256), subscriptions: make(map[string]*Subscription)}
conn2 := &Connection{ID: "conn-2", send: make(chan []byte, 256), subscriptions: make(map[string]*Subscription)}
conn3 := &Connection{ID: "conn-3", send: make(chan []byte, 256), subscriptions: make(map[string]*Subscription)}
cm.Register(conn1)
cm.Register(conn2)
cm.Register(conn3)
time.Sleep(10 * time.Millisecond)
assert.Equal(t, 3, cm.Count())
// Verify all connections exist
_, exists := cm.GetConnection("conn-1")
assert.True(t, exists)
_, exists = cm.GetConnection("conn-2")
assert.True(t, exists)
_, exists = cm.GetConnection("conn-3")
assert.True(t, exists)
// Unregister one
cm.Unregister(conn2)
time.Sleep(10 * time.Millisecond)
assert.Equal(t, 2, cm.Count())
// Verify conn-2 is gone but others remain
_, exists = cm.GetConnection("conn-2")
assert.False(t, exists)
_, exists = cm.GetConnection("conn-1")
assert.True(t, exists)
_, exists = cm.GetConnection("conn-3")
assert.True(t, exists)
}
func TestConnectionManager_Shutdown(t *testing.T) {
ctx := context.Background()
cm := NewConnectionManager(ctx)
// Start manager
go cm.Run()
// Register connections
conn1 := &Connection{
ID: "conn-1",
send: make(chan []byte, 256),
subscriptions: make(map[string]*Subscription),
ctx: context.Background(),
}
conn1.ctx, conn1.cancel = context.WithCancel(context.Background())
cm.Register(conn1)
time.Sleep(10 * time.Millisecond)
assert.Equal(t, 1, cm.Count())
// Shutdown
cm.Shutdown()
time.Sleep(10 * time.Millisecond)
// Verify context was cancelled
select {
case <-cm.ctx.Done():
// Expected
case <-time.After(100 * time.Millisecond):
t.Fatal("Context not cancelled after shutdown")
}
}
func TestConnection_SetMetadata(t *testing.T) {
conn := &Connection{
metadata: make(map[string]interface{}),
}
conn.SetMetadata("user_id", 123)
conn.SetMetadata("username", "john")
// Verify metadata was set
userID, exists := conn.GetMetadata("user_id")
assert.True(t, exists)
assert.Equal(t, 123, userID)
username, exists := conn.GetMetadata("username")
assert.True(t, exists)
assert.Equal(t, "john", username)
}
func TestConnection_GetMetadata(t *testing.T) {
conn := &Connection{
metadata: map[string]interface{}{
"user_id": 123,
"role": "admin",
},
}
// Get existing metadata
userID, exists := conn.GetMetadata("user_id")
assert.True(t, exists)
assert.Equal(t, 123, userID)
// Get non-existent metadata
_, exists = conn.GetMetadata("non_existent")
assert.False(t, exists)
}
func TestConnection_AddSubscription(t *testing.T) {
conn := &Connection{
subscriptions: make(map[string]*Subscription),
}
sub := &Subscription{
ID: "sub-1",
ConnectionID: "conn-1",
Entity: "users",
Active: true,
}
conn.AddSubscription(sub)
// Verify subscription was added
retrievedSub, exists := conn.GetSubscription("sub-1")
assert.True(t, exists)
assert.Equal(t, "sub-1", retrievedSub.ID)
}
func TestConnection_RemoveSubscription(t *testing.T) {
sub := &Subscription{
ID: "sub-1",
ConnectionID: "conn-1",
Entity: "users",
Active: true,
}
conn := &Connection{
subscriptions: map[string]*Subscription{
"sub-1": sub,
},
}
// Verify subscription exists
_, exists := conn.GetSubscription("sub-1")
assert.True(t, exists)
// Remove subscription
conn.RemoveSubscription("sub-1")
// Verify subscription was removed
_, exists = conn.GetSubscription("sub-1")
assert.False(t, exists)
}
func TestConnection_GetSubscription(t *testing.T) {
sub1 := &Subscription{ID: "sub-1", Entity: "users"}
sub2 := &Subscription{ID: "sub-2", Entity: "posts"}
conn := &Connection{
subscriptions: map[string]*Subscription{
"sub-1": sub1,
"sub-2": sub2,
},
}
// Get existing subscription
retrievedSub, exists := conn.GetSubscription("sub-1")
assert.True(t, exists)
assert.Equal(t, "sub-1", retrievedSub.ID)
// Get non-existent subscription
_, exists = conn.GetSubscription("non-existent")
assert.False(t, exists)
}
func TestConnection_MultipleSubscriptions(t *testing.T) {
conn := &Connection{
subscriptions: make(map[string]*Subscription),
}
sub1 := &Subscription{ID: "sub-1", Entity: "users"}
sub2 := &Subscription{ID: "sub-2", Entity: "posts"}
sub3 := &Subscription{ID: "sub-3", Entity: "comments"}
conn.AddSubscription(sub1)
conn.AddSubscription(sub2)
conn.AddSubscription(sub3)
// Verify all subscriptions exist
_, exists := conn.GetSubscription("sub-1")
assert.True(t, exists)
_, exists = conn.GetSubscription("sub-2")
assert.True(t, exists)
_, exists = conn.GetSubscription("sub-3")
assert.True(t, exists)
// Remove one subscription
conn.RemoveSubscription("sub-2")
// Verify sub-2 is gone but others remain
_, exists = conn.GetSubscription("sub-2")
assert.False(t, exists)
_, exists = conn.GetSubscription("sub-1")
assert.True(t, exists)
_, exists = conn.GetSubscription("sub-3")
assert.True(t, exists)
}
func TestBroadcastMessage_Structure(t *testing.T) {
msg := &BroadcastMessage{
Message: []byte("test message"),
Filter: func(conn *Connection) bool {
return true
},
}
assert.NotNil(t, msg.Message)
assert.NotNil(t, msg.Filter)
assert.Equal(t, "test message", string(msg.Message))
}
func TestBroadcastMessage_Filter(t *testing.T) {
// Filter that only allows specific connection
filter := func(conn *Connection) bool {
return conn.ID == "conn-1"
}
msg := &BroadcastMessage{
Message: []byte("test"),
Filter: filter,
}
conn1 := &Connection{ID: "conn-1"}
conn2 := &Connection{ID: "conn-2"}
assert.True(t, msg.Filter(conn1))
assert.False(t, msg.Filter(conn2))
}
func TestConnectionManager_Broadcast(t *testing.T) {
ctx := context.Background()
cm := NewConnectionManager(ctx)
// Start manager
go cm.Run()
defer cm.cancel()
// Register connections
conn1 := &Connection{ID: "conn-1", send: make(chan []byte, 256), subscriptions: make(map[string]*Subscription)}
conn2 := &Connection{ID: "conn-2", send: make(chan []byte, 256), subscriptions: make(map[string]*Subscription)}
cm.Register(conn1)
cm.Register(conn2)
time.Sleep(10 * time.Millisecond)
// Broadcast message
message := []byte("test broadcast")
cm.Broadcast(message, nil)
time.Sleep(10 * time.Millisecond)
// Verify both connections received the message
select {
case msg := <-conn1.send:
assert.Equal(t, message, msg)
case <-time.After(100 * time.Millisecond):
t.Fatal("conn1 did not receive message")
}
select {
case msg := <-conn2.send:
assert.Equal(t, message, msg)
case <-time.After(100 * time.Millisecond):
t.Fatal("conn2 did not receive message")
}
}
func TestConnectionManager_BroadcastWithFilter(t *testing.T) {
ctx := context.Background()
cm := NewConnectionManager(ctx)
// Start manager
go cm.Run()
defer cm.cancel()
// Register connections with metadata
conn1 := &Connection{
ID: "conn-1",
send: make(chan []byte, 256),
subscriptions: make(map[string]*Subscription),
metadata: map[string]interface{}{"role": "admin"},
}
conn2 := &Connection{
ID: "conn-2",
send: make(chan []byte, 256),
subscriptions: make(map[string]*Subscription),
metadata: map[string]interface{}{"role": "user"},
}
cm.Register(conn1)
cm.Register(conn2)
time.Sleep(10 * time.Millisecond)
// Broadcast only to admins
filter := func(conn *Connection) bool {
role, _ := conn.GetMetadata("role")
return role == "admin"
}
message := []byte("admin message")
cm.Broadcast(message, filter)
time.Sleep(10 * time.Millisecond)
// Verify only conn1 received the message
select {
case msg := <-conn1.send:
assert.Equal(t, message, msg)
case <-time.After(100 * time.Millisecond):
t.Fatal("conn1 (admin) did not receive message")
}
// Verify conn2 did not receive the message
select {
case <-conn2.send:
t.Fatal("conn2 (user) should not have received admin message")
case <-time.After(50 * time.Millisecond):
// Expected - no message
}
}
func TestConnection_ConcurrentMetadataAccess(t *testing.T) {
// This test verifies that concurrent metadata access doesn't cause race conditions
// Run with: go test -race
conn := &Connection{
metadata: make(map[string]interface{}),
}
done := make(chan bool)
// Goroutine 1: Write metadata
go func() {
for i := 0; i < 100; i++ {
conn.SetMetadata("key", i)
}
done <- true
}()
// Goroutine 2: Read metadata
go func() {
for i := 0; i < 100; i++ {
conn.GetMetadata("key")
}
done <- true
}()
// Wait for completion
<-done
<-done
}
func TestConnection_ConcurrentSubscriptionAccess(t *testing.T) {
// This test verifies that concurrent subscription access doesn't cause race conditions
// Run with: go test -race
conn := &Connection{
subscriptions: make(map[string]*Subscription),
}
done := make(chan bool)
// Goroutine 1: Add subscriptions
go func() {
for i := 0; i < 100; i++ {
sub := &Subscription{ID: "sub-" + string(rune(i)), Entity: "users"}
conn.AddSubscription(sub)
}
done <- true
}()
// Goroutine 2: Get subscriptions
go func() {
for i := 0; i < 100; i++ {
conn.GetSubscription("sub-" + string(rune(i)))
}
done <- true
}()
// Wait for completion
<-done
<-done
}
func TestConnectionManager_CompleteLifecycle(t *testing.T) {
ctx := context.Background()
cm := NewConnectionManager(ctx)
// Start manager
go cm.Run()
defer cm.cancel()
// Create and register connection
conn := &Connection{
ID: "conn-1",
send: make(chan []byte, 256),
subscriptions: make(map[string]*Subscription),
metadata: make(map[string]interface{}),
}
// Set metadata
conn.SetMetadata("user_id", 123)
// Add subscriptions
sub1 := &Subscription{ID: "sub-1", Entity: "users"}
sub2 := &Subscription{ID: "sub-2", Entity: "posts"}
conn.AddSubscription(sub1)
conn.AddSubscription(sub2)
// Register connection
cm.Register(conn)
time.Sleep(10 * time.Millisecond)
assert.Equal(t, 1, cm.Count())
// Verify connection exists
retrievedConn, exists := cm.GetConnection("conn-1")
require.True(t, exists)
assert.Equal(t, "conn-1", retrievedConn.ID)
// Verify metadata
userID, exists := retrievedConn.GetMetadata("user_id")
assert.True(t, exists)
assert.Equal(t, 123, userID)
// Verify subscriptions
_, exists = retrievedConn.GetSubscription("sub-1")
assert.True(t, exists)
_, exists = retrievedConn.GetSubscription("sub-2")
assert.True(t, exists)
// Broadcast message
message := []byte("test message")
cm.Broadcast(message, nil)
time.Sleep(10 * time.Millisecond)
select {
case msg := <-retrievedConn.send:
assert.Equal(t, message, msg)
case <-time.After(100 * time.Millisecond):
t.Fatal("Connection did not receive broadcast")
}
// Unregister connection
cm.Unregister(conn)
time.Sleep(10 * time.Millisecond)
assert.Equal(t, 0, cm.Count())
// Verify connection is gone
_, exists = cm.GetConnection("conn-1")
assert.False(t, exists)
}

View File

@@ -0,0 +1,237 @@
package websocketspec_test
import (
"fmt"
"log"
"net/http"
"github.com/bitechdev/ResolveSpec/pkg/websocketspec"
"gorm.io/driver/postgres"
"gorm.io/gorm"
)
// User model example
type User struct {
ID uint `json:"id" gorm:"primaryKey"`
Name string `json:"name"`
Email string `json:"email"`
Status string `json:"status"`
}
// Post model example
type Post struct {
ID uint `json:"id" gorm:"primaryKey"`
Title string `json:"title"`
Content string `json:"content"`
UserID uint `json:"user_id"`
User *User `json:"user,omitempty" gorm:"foreignKey:UserID"`
}
// Example_basicSetup demonstrates basic WebSocketSpec setup
func Example_basicSetup() {
// Connect to database
db, err := gorm.Open(postgres.Open("your-connection-string"), &gorm.Config{})
if err != nil {
log.Fatal(err)
}
// Create WebSocket handler
handler := websocketspec.NewHandlerWithGORM(db)
// Register models
handler.Registry().RegisterModel("public.users", &User{})
handler.Registry().RegisterModel("public.posts", &Post{})
// Setup WebSocket endpoint
http.HandleFunc("/ws", handler.HandleWebSocket)
// Start server
log.Println("WebSocket server starting on :8080")
if err := http.ListenAndServe(":8080", nil); err != nil {
log.Fatal(err)
}
}
// Example_withHooks demonstrates using lifecycle hooks
func Example_withHooks() {
db, _ := gorm.Open(postgres.Open("your-connection-string"), &gorm.Config{})
handler := websocketspec.NewHandlerWithGORM(db)
// Register models
handler.Registry().RegisterModel("public.users", &User{})
// Add authentication hook
handler.Hooks().Register(websocketspec.BeforeConnect, func(ctx *websocketspec.HookContext) error {
// Validate authentication token
// (In real implementation, extract from query params or headers)
userID := uint(123) // From token
// Store in connection metadata
ctx.Connection.SetMetadata("user_id", userID)
log.Printf("User %d connected", userID)
return nil
})
// Add authorization hook for read operations
handler.Hooks().RegisterBefore(websocketspec.OperationRead, func(ctx *websocketspec.HookContext) error {
userID, ok := ctx.Connection.GetMetadata("user_id")
if !ok {
return fmt.Errorf("unauthorized: not authenticated")
}
log.Printf("User %v reading %s.%s", userID, ctx.Schema, ctx.Entity)
// Add filter to only show user's own records
if ctx.Entity == "posts" {
// ctx.Options.Filters = append(ctx.Options.Filters, common.FilterOption{
// Column: "user_id",
// Operator: "eq",
// Value: userID,
// })
}
return nil
})
// Add logging hook after create
handler.Hooks().RegisterAfter(websocketspec.OperationCreate, func(ctx *websocketspec.HookContext) error {
userID, _ := ctx.Connection.GetMetadata("user_id")
log.Printf("User %v created record in %s.%s", userID, ctx.Schema, ctx.Entity)
return nil
})
// Add validation hook before create
handler.Hooks().RegisterBefore(websocketspec.OperationCreate, func(ctx *websocketspec.HookContext) error {
// Validate required fields
if data, ok := ctx.Data.(map[string]interface{}); ok {
if ctx.Entity == "users" {
if email, exists := data["email"]; !exists || email == "" {
return fmt.Errorf("validation error: email is required")
}
if name, exists := data["name"]; !exists || name == "" {
return fmt.Errorf("validation error: name is required")
}
}
}
return nil
})
// Add limit hook for subscriptions
handler.Hooks().Register(websocketspec.BeforeSubscribe, func(ctx *websocketspec.HookContext) error {
// Limit subscriptions per connection
maxSubscriptions := 10
// Note: In a real implementation, you would count subscriptions using the connection's methods
// currentCount := len(ctx.Connection.subscriptions) // subscriptions is private
// For demonstration purposes, we'll just log
log.Printf("Creating subscription (max: %d)", maxSubscriptions)
return nil
})
http.HandleFunc("/ws", handler.HandleWebSocket)
log.Println("Server with hooks starting on :8080")
http.ListenAndServe(":8080", nil)
}
// Example_monitoring demonstrates monitoring connections and subscriptions
func Example_monitoring() {
db, _ := gorm.Open(postgres.Open("your-connection-string"), &gorm.Config{})
handler := websocketspec.NewHandlerWithGORM(db)
handler.Registry().RegisterModel("public.users", &User{})
// Add connection tracking
handler.Hooks().Register(websocketspec.AfterConnect, func(ctx *websocketspec.HookContext) error {
count := handler.GetConnectionCount()
log.Printf("Client connected. Total connections: %d", count)
return nil
})
handler.Hooks().Register(websocketspec.AfterDisconnect, func(ctx *websocketspec.HookContext) error {
count := handler.GetConnectionCount()
log.Printf("Client disconnected. Total connections: %d", count)
return nil
})
// Add subscription tracking
handler.Hooks().Register(websocketspec.AfterSubscribe, func(ctx *websocketspec.HookContext) error {
count := handler.GetSubscriptionCount()
log.Printf("New subscription. Total subscriptions: %d", count)
return nil
})
// Monitoring endpoint
http.HandleFunc("/stats", func(w http.ResponseWriter, r *http.Request) {
fmt.Fprintf(w, "Active Connections: %d\n", handler.GetConnectionCount())
fmt.Fprintf(w, "Active Subscriptions: %d\n", handler.GetSubscriptionCount())
})
http.HandleFunc("/ws", handler.HandleWebSocket)
log.Println("Server with monitoring starting on :8080")
http.ListenAndServe(":8080", nil)
}
// Example_clientSide shows client-side usage example
func Example_clientSide() {
// This is JavaScript code for documentation purposes
jsCode := `
// JavaScript WebSocket Client Example
const ws = new WebSocket("ws://localhost:8080/ws");
ws.onopen = () => {
console.log("Connected to WebSocket");
// Read users
ws.send(JSON.stringify({
id: "msg-1",
type: "request",
operation: "read",
schema: "public",
entity: "users",
options: {
filters: [{column: "status", operator: "eq", value: "active"}],
limit: 10
}
}));
// Subscribe to user changes
ws.send(JSON.stringify({
id: "sub-1",
type: "subscription",
operation: "subscribe",
schema: "public",
entity: "users",
options: {
filters: [{column: "status", operator: "eq", value: "active"}]
}
}));
};
ws.onmessage = (event) => {
const message = JSON.parse(event.data);
if (message.type === "response") {
if (message.success) {
console.log("Response:", message.data);
} else {
console.error("Error:", message.error);
}
} else if (message.type === "notification") {
console.log("Notification:", message.operation, message.data);
}
};
ws.onerror = (error) => {
console.error("WebSocket error:", error);
};
ws.onclose = () => {
console.log("WebSocket connection closed");
// Implement reconnection logic here
};
`
fmt.Println(jsCode)
}

View File

@@ -0,0 +1,737 @@
package websocketspec
import (
"context"
"encoding/json"
"fmt"
"net/http"
"reflect"
"time"
"github.com/google/uuid"
"github.com/gorilla/websocket"
"github.com/bitechdev/ResolveSpec/pkg/common"
"github.com/bitechdev/ResolveSpec/pkg/logger"
"github.com/bitechdev/ResolveSpec/pkg/reflection"
)
// Handler handles WebSocket connections and messages
type Handler struct {
db common.Database
registry common.ModelRegistry
hooks *HookRegistry
connManager *ConnectionManager
subscriptionManager *SubscriptionManager
upgrader websocket.Upgrader
ctx context.Context
}
// NewHandler creates a new WebSocket handler
func NewHandler(db common.Database, registry common.ModelRegistry) *Handler {
ctx := context.Background()
handler := &Handler{
db: db,
registry: registry,
hooks: NewHookRegistry(),
connManager: NewConnectionManager(ctx),
subscriptionManager: NewSubscriptionManager(),
upgrader: websocket.Upgrader{
ReadBufferSize: 1024,
WriteBufferSize: 1024,
CheckOrigin: func(r *http.Request) bool {
// TODO: Implement proper origin checking
return true
},
},
ctx: ctx,
}
// Start connection manager
go handler.connManager.Run()
return handler
}
// GetRelationshipInfo implements the RelationshipInfoProvider interface
// This is a placeholder implementation - full relationship support can be added later
func (h *Handler) GetRelationshipInfo(modelType reflect.Type, relationName string) *common.RelationshipInfo {
// TODO: Implement full relationship detection similar to restheadspec
return nil
}
// GetDatabase returns the underlying database connection
// Implements common.SpecHandler interface
func (h *Handler) GetDatabase() common.Database {
return h.db
}
// Hooks returns the hook registry for this handler
func (h *Handler) Hooks() *HookRegistry {
return h.hooks
}
// Registry returns the model registry for this handler
func (h *Handler) Registry() common.ModelRegistry {
return h.registry
}
// HandleWebSocket upgrades HTTP connection to WebSocket
func (h *Handler) HandleWebSocket(w http.ResponseWriter, r *http.Request) {
// Upgrade connection
ws, err := h.upgrader.Upgrade(w, r, nil)
if err != nil {
logger.Error("[WebSocketSpec] Failed to upgrade connection: %v", err)
return
}
// Create connection
connID := uuid.New().String()
conn := NewConnection(connID, ws, h)
// Execute before connect hook
hookCtx := &HookContext{
Context: r.Context(),
Handler: h,
Connection: conn,
}
if err := h.hooks.Execute(BeforeConnect, hookCtx); err != nil {
logger.Error("[WebSocketSpec] BeforeConnect hook failed: %v", err)
ws.Close()
return
}
// Register connection
h.connManager.Register(conn)
// Execute after connect hook
_ = h.hooks.Execute(AfterConnect, hookCtx)
// Start read/write pumps
go conn.WritePump()
go conn.ReadPump()
logger.Info("[WebSocketSpec] WebSocket connection established: %s", connID)
}
// HandleMessage routes incoming messages to appropriate handlers
func (h *Handler) HandleMessage(conn *Connection, msg *Message) {
switch msg.Type {
case MessageTypeRequest:
h.handleRequest(conn, msg)
case MessageTypeSubscription:
h.handleSubscription(conn, msg)
case MessageTypePing:
h.handlePing(conn, msg)
default:
errResp := NewErrorResponse(msg.ID, "invalid_message_type", fmt.Sprintf("Unknown message type: %s", msg.Type))
_ = conn.SendJSON(errResp)
}
}
// handleRequest processes a request message
func (h *Handler) handleRequest(conn *Connection, msg *Message) {
ctx := conn.ctx
schema := msg.Schema
entity := msg.Entity
recordID := msg.RecordID
// Get model from registry
model, err := h.registry.GetModelByEntity(schema, entity)
if err != nil {
logger.Error("[WebSocketSpec] Model not found for %s.%s: %v", schema, entity, err)
errResp := NewErrorResponse(msg.ID, "model_not_found", fmt.Sprintf("Model not found: %s.%s", schema, entity))
_ = conn.SendJSON(errResp)
return
}
// Validate and unwrap model
result, err := common.ValidateAndUnwrapModel(model)
if err != nil {
logger.Error("[WebSocketSpec] Model validation failed for %s.%s: %v", schema, entity, err)
errResp := NewErrorResponse(msg.ID, "invalid_model", err.Error())
_ = conn.SendJSON(errResp)
return
}
model = result.Model
modelPtr := result.ModelPtr
tableName := h.getTableName(schema, entity, model)
// Create hook context
hookCtx := &HookContext{
Context: ctx,
Handler: h,
Connection: conn,
Message: msg,
Schema: schema,
Entity: entity,
TableName: tableName,
Model: model,
ModelPtr: modelPtr,
Options: msg.Options,
ID: recordID,
Data: msg.Data,
Metadata: make(map[string]interface{}),
}
// Route to operation handler
switch msg.Operation {
case OperationRead:
h.handleRead(conn, msg, hookCtx)
case OperationCreate:
h.handleCreate(conn, msg, hookCtx)
case OperationUpdate:
h.handleUpdate(conn, msg, hookCtx)
case OperationDelete:
h.handleDelete(conn, msg, hookCtx)
case OperationMeta:
h.handleMeta(conn, msg, hookCtx)
default:
errResp := NewErrorResponse(msg.ID, "invalid_operation", fmt.Sprintf("Unknown operation: %s", msg.Operation))
_ = conn.SendJSON(errResp)
}
}
// handleRead processes a read operation
func (h *Handler) handleRead(conn *Connection, msg *Message, hookCtx *HookContext) {
// Execute before hook
if err := h.hooks.Execute(BeforeRead, hookCtx); err != nil {
logger.Error("[WebSocketSpec] BeforeRead hook failed: %v", err)
errResp := NewErrorResponse(msg.ID, "hook_error", err.Error())
_ = conn.SendJSON(errResp)
return
}
// Perform read operation
var data interface{}
var metadata map[string]interface{}
var err error
if hookCtx.ID != "" {
// Read single record by ID
data, err = h.readByID(hookCtx)
metadata = map[string]interface{}{"total": 1}
} else {
// Read multiple records
data, metadata, err = h.readMultiple(hookCtx)
}
if err != nil {
logger.Error("[WebSocketSpec] Read operation failed: %v", err)
errResp := NewErrorResponse(msg.ID, "read_error", err.Error())
_ = conn.SendJSON(errResp)
return
}
// Update hook context with result
hookCtx.Result = data
// Execute after hook
if err := h.hooks.Execute(AfterRead, hookCtx); err != nil {
logger.Error("[WebSocketSpec] AfterRead hook failed: %v", err)
errResp := NewErrorResponse(msg.ID, "hook_error", err.Error())
_ = conn.SendJSON(errResp)
return
}
// Send response
resp := NewResponseMessage(msg.ID, true, hookCtx.Result)
resp.Metadata = metadata
_ = conn.SendJSON(resp)
}
// handleCreate processes a create operation
func (h *Handler) handleCreate(conn *Connection, msg *Message, hookCtx *HookContext) {
// Execute before hook
if err := h.hooks.Execute(BeforeCreate, hookCtx); err != nil {
logger.Error("[WebSocketSpec] BeforeCreate hook failed: %v", err)
errResp := NewErrorResponse(msg.ID, "hook_error", err.Error())
_ = conn.SendJSON(errResp)
return
}
// Perform create operation
data, err := h.create(hookCtx)
if err != nil {
logger.Error("[WebSocketSpec] Create operation failed: %v", err)
errResp := NewErrorResponse(msg.ID, "create_error", err.Error())
_ = conn.SendJSON(errResp)
return
}
// Update hook context
hookCtx.Result = data
// Execute after hook
if err := h.hooks.Execute(AfterCreate, hookCtx); err != nil {
logger.Error("[WebSocketSpec] AfterCreate hook failed: %v", err)
errResp := NewErrorResponse(msg.ID, "hook_error", err.Error())
_ = conn.SendJSON(errResp)
return
}
// Send response
resp := NewResponseMessage(msg.ID, true, hookCtx.Result)
_ = conn.SendJSON(resp)
// Notify subscribers
h.notifySubscribers(hookCtx.Schema, hookCtx.Entity, OperationCreate, data)
}
// handleUpdate processes an update operation
func (h *Handler) handleUpdate(conn *Connection, msg *Message, hookCtx *HookContext) {
// Execute before hook
if err := h.hooks.Execute(BeforeUpdate, hookCtx); err != nil {
logger.Error("[WebSocketSpec] BeforeUpdate hook failed: %v", err)
errResp := NewErrorResponse(msg.ID, "hook_error", err.Error())
_ = conn.SendJSON(errResp)
return
}
// Perform update operation
data, err := h.update(hookCtx)
if err != nil {
logger.Error("[WebSocketSpec] Update operation failed: %v", err)
errResp := NewErrorResponse(msg.ID, "update_error", err.Error())
_ = conn.SendJSON(errResp)
return
}
// Update hook context
hookCtx.Result = data
// Execute after hook
if err := h.hooks.Execute(AfterUpdate, hookCtx); err != nil {
logger.Error("[WebSocketSpec] AfterUpdate hook failed: %v", err)
errResp := NewErrorResponse(msg.ID, "hook_error", err.Error())
_ = conn.SendJSON(errResp)
return
}
// Send response
resp := NewResponseMessage(msg.ID, true, hookCtx.Result)
_ = conn.SendJSON(resp)
// Notify subscribers
h.notifySubscribers(hookCtx.Schema, hookCtx.Entity, OperationUpdate, data)
}
// handleDelete processes a delete operation
func (h *Handler) handleDelete(conn *Connection, msg *Message, hookCtx *HookContext) {
// Execute before hook
if err := h.hooks.Execute(BeforeDelete, hookCtx); err != nil {
logger.Error("[WebSocketSpec] BeforeDelete hook failed: %v", err)
errResp := NewErrorResponse(msg.ID, "hook_error", err.Error())
_ = conn.SendJSON(errResp)
return
}
// Perform delete operation
err := h.delete(hookCtx)
if err != nil {
logger.Error("[WebSocketSpec] Delete operation failed: %v", err)
errResp := NewErrorResponse(msg.ID, "delete_error", err.Error())
_ = conn.SendJSON(errResp)
return
}
// Execute after hook
if err := h.hooks.Execute(AfterDelete, hookCtx); err != nil {
logger.Error("[WebSocketSpec] AfterDelete hook failed: %v", err)
errResp := NewErrorResponse(msg.ID, "hook_error", err.Error())
_ = conn.SendJSON(errResp)
return
}
// Send response
resp := NewResponseMessage(msg.ID, true, map[string]interface{}{"deleted": true})
_ = conn.SendJSON(resp)
// Notify subscribers
h.notifySubscribers(hookCtx.Schema, hookCtx.Entity, OperationDelete, map[string]interface{}{"id": hookCtx.ID})
}
// handleMeta processes a metadata request
func (h *Handler) handleMeta(conn *Connection, msg *Message, hookCtx *HookContext) {
metadata := h.getMetadata(hookCtx.Schema, hookCtx.Entity, hookCtx.Model)
resp := NewResponseMessage(msg.ID, true, metadata)
_ = conn.SendJSON(resp)
}
// handleSubscription processes subscription messages
func (h *Handler) handleSubscription(conn *Connection, msg *Message) {
switch msg.Operation {
case OperationSubscribe:
h.handleSubscribe(conn, msg)
case OperationUnsubscribe:
h.handleUnsubscribe(conn, msg)
default:
errResp := NewErrorResponse(msg.ID, "invalid_subscription_operation", fmt.Sprintf("Unknown subscription operation: %s", msg.Operation))
_ = conn.SendJSON(errResp)
}
}
// handleSubscribe creates a new subscription
func (h *Handler) handleSubscribe(conn *Connection, msg *Message) {
// Generate subscription ID
subID := uuid.New().String()
// Create hook context
hookCtx := &HookContext{
Context: conn.ctx,
Handler: h,
Connection: conn,
Message: msg,
Schema: msg.Schema,
Entity: msg.Entity,
Options: msg.Options,
Metadata: make(map[string]interface{}),
}
// Execute before hook
if err := h.hooks.Execute(BeforeSubscribe, hookCtx); err != nil {
logger.Error("[WebSocketSpec] BeforeSubscribe hook failed: %v", err)
errResp := NewErrorResponse(msg.ID, "hook_error", err.Error())
_ = conn.SendJSON(errResp)
return
}
// Create subscription
sub := h.subscriptionManager.Subscribe(subID, conn.ID, msg.Schema, msg.Entity, msg.Options)
conn.AddSubscription(sub)
// Update hook context
hookCtx.Subscription = sub
// Execute after hook
_ = h.hooks.Execute(AfterSubscribe, hookCtx)
// Send response
resp := NewResponseMessage(msg.ID, true, map[string]interface{}{
"subscription_id": subID,
"schema": msg.Schema,
"entity": msg.Entity,
})
_ = conn.SendJSON(resp)
logger.Info("[WebSocketSpec] Subscription created: %s for %s.%s (conn: %s)", subID, msg.Schema, msg.Entity, conn.ID)
}
// handleUnsubscribe removes a subscription
func (h *Handler) handleUnsubscribe(conn *Connection, msg *Message) {
subID := msg.SubscriptionID
if subID == "" {
errResp := NewErrorResponse(msg.ID, "missing_subscription_id", "Subscription ID is required for unsubscribe")
_ = conn.SendJSON(errResp)
return
}
// Get subscription
sub, exists := conn.GetSubscription(subID)
if !exists {
errResp := NewErrorResponse(msg.ID, "subscription_not_found", fmt.Sprintf("Subscription not found: %s", subID))
_ = conn.SendJSON(errResp)
return
}
// Create hook context
hookCtx := &HookContext{
Context: conn.ctx,
Handler: h,
Connection: conn,
Message: msg,
Subscription: sub,
Metadata: make(map[string]interface{}),
}
// Execute before hook
if err := h.hooks.Execute(BeforeUnsubscribe, hookCtx); err != nil {
logger.Error("[WebSocketSpec] BeforeUnsubscribe hook failed: %v", err)
errResp := NewErrorResponse(msg.ID, "hook_error", err.Error())
_ = conn.SendJSON(errResp)
return
}
// Remove subscription
h.subscriptionManager.Unsubscribe(subID)
conn.RemoveSubscription(subID)
// Execute after hook
_ = h.hooks.Execute(AfterUnsubscribe, hookCtx)
// Send response
resp := NewResponseMessage(msg.ID, true, map[string]interface{}{
"unsubscribed": true,
"subscription_id": subID,
})
_ = conn.SendJSON(resp)
}
// handlePing responds to ping messages
func (h *Handler) handlePing(conn *Connection, msg *Message) {
pong := &Message{
ID: msg.ID,
Type: MessageTypePong,
Timestamp: time.Now(),
}
_ = conn.SendJSON(pong)
}
// notifySubscribers sends notifications to all subscribers of an entity
func (h *Handler) notifySubscribers(schema, entity string, operation OperationType, data interface{}) {
subscriptions := h.subscriptionManager.GetSubscriptionsByEntity(schema, entity)
if len(subscriptions) == 0 {
return
}
for _, sub := range subscriptions {
// Check if data matches subscription filters
if !sub.MatchesFilters(data) {
continue
}
// Get connection
conn, exists := h.connManager.GetConnection(sub.ConnectionID)
if !exists {
continue
}
// Send notification
notification := NewNotificationMessage(sub.ID, operation, schema, entity, data)
if err := conn.SendJSON(notification); err != nil {
logger.Error("[WebSocketSpec] Failed to send notification to connection %s: %v", conn.ID, err)
}
}
}
// CRUD operation implementations
func (h *Handler) readByID(hookCtx *HookContext) (interface{}, error) {
query := h.db.NewSelect().Model(hookCtx.ModelPtr).Table(hookCtx.TableName)
// Add ID filter
pkName := reflection.GetPrimaryKeyName(hookCtx.Model)
query = query.Where(fmt.Sprintf("%s = ?", pkName), hookCtx.ID)
// Apply columns
if hookCtx.Options != nil && len(hookCtx.Options.Columns) > 0 {
query = query.Column(hookCtx.Options.Columns...)
}
// Apply preloads (simplified for now)
if hookCtx.Options != nil {
for i := range hookCtx.Options.Preload {
query = query.PreloadRelation(hookCtx.Options.Preload[i].Relation)
}
}
// Execute query
if err := query.ScanModel(hookCtx.Context); err != nil {
return nil, fmt.Errorf("failed to read record: %w", err)
}
return hookCtx.ModelPtr, nil
}
func (h *Handler) readMultiple(hookCtx *HookContext) (data interface{}, metadata map[string]interface{}, err error) {
query := h.db.NewSelect().Model(hookCtx.ModelPtr).Table(hookCtx.TableName)
// Apply options (simplified implementation)
if hookCtx.Options != nil {
// Apply filters
for _, filter := range hookCtx.Options.Filters {
query = query.Where(fmt.Sprintf("%s %s ?", filter.Column, h.getOperatorSQL(filter.Operator)), filter.Value)
}
// Apply sorting
for _, sort := range hookCtx.Options.Sort {
direction := "ASC"
if sort.Direction == "desc" {
direction = "DESC"
}
query = query.Order(fmt.Sprintf("%s %s", sort.Column, direction))
}
// Apply limit and offset
if hookCtx.Options.Limit != nil {
query = query.Limit(*hookCtx.Options.Limit)
}
if hookCtx.Options.Offset != nil {
query = query.Offset(*hookCtx.Options.Offset)
}
// Apply preloads
for i := range hookCtx.Options.Preload {
query = query.PreloadRelation(hookCtx.Options.Preload[i].Relation)
}
// Apply columns
if len(hookCtx.Options.Columns) > 0 {
query = query.Column(hookCtx.Options.Columns...)
}
}
// Execute query
if err := query.ScanModel(hookCtx.Context); err != nil {
return nil, nil, fmt.Errorf("failed to read records: %w", err)
}
// Get count
metadata = make(map[string]interface{})
countQuery := h.db.NewSelect().Model(hookCtx.ModelPtr).Table(hookCtx.TableName)
if hookCtx.Options != nil {
for _, filter := range hookCtx.Options.Filters {
countQuery = countQuery.Where(fmt.Sprintf("%s %s ?", filter.Column, h.getOperatorSQL(filter.Operator)), filter.Value)
}
}
count, _ := countQuery.Count(hookCtx.Context)
metadata["total"] = count
metadata["count"] = reflection.Len(hookCtx.ModelPtr)
return hookCtx.ModelPtr, metadata, nil
}
func (h *Handler) create(hookCtx *HookContext) (interface{}, error) {
// Marshal and unmarshal data into model
dataBytes, err := json.Marshal(hookCtx.Data)
if err != nil {
return nil, fmt.Errorf("failed to marshal data: %w", err)
}
if err := json.Unmarshal(dataBytes, hookCtx.ModelPtr); err != nil {
return nil, fmt.Errorf("failed to unmarshal data into model: %w", err)
}
// Insert record
query := h.db.NewInsert().Model(hookCtx.ModelPtr).Table(hookCtx.TableName)
if _, err := query.Exec(hookCtx.Context); err != nil {
return nil, fmt.Errorf("failed to create record: %w", err)
}
return hookCtx.ModelPtr, nil
}
func (h *Handler) update(hookCtx *HookContext) (interface{}, error) {
// Marshal and unmarshal data into model
dataBytes, err := json.Marshal(hookCtx.Data)
if err != nil {
return nil, fmt.Errorf("failed to marshal data: %w", err)
}
if err := json.Unmarshal(dataBytes, hookCtx.ModelPtr); err != nil {
return nil, fmt.Errorf("failed to unmarshal data into model: %w", err)
}
// Update record
query := h.db.NewUpdate().Model(hookCtx.ModelPtr).Table(hookCtx.TableName)
// Add ID filter
pkName := reflection.GetPrimaryKeyName(hookCtx.Model)
query = query.Where(fmt.Sprintf("%s = ?", pkName), hookCtx.ID)
if _, err := query.Exec(hookCtx.Context); err != nil {
return nil, fmt.Errorf("failed to update record: %w", err)
}
// Fetch updated record
return h.readByID(hookCtx)
}
func (h *Handler) delete(hookCtx *HookContext) error {
query := h.db.NewDelete().Model(hookCtx.ModelPtr).Table(hookCtx.TableName)
// Add ID filter
pkName := reflection.GetPrimaryKeyName(hookCtx.Model)
query = query.Where(fmt.Sprintf("%s = ?", pkName), hookCtx.ID)
if _, err := query.Exec(hookCtx.Context); err != nil {
return fmt.Errorf("failed to delete record: %w", err)
}
return nil
}
// Helper methods
func (h *Handler) getTableName(schema, entity string, model interface{}) string {
// Use entity as table name
tableName := entity
if schema != "" {
tableName = schema + "." + tableName
}
return tableName
}
func (h *Handler) getMetadata(schema, entity string, model interface{}) map[string]interface{} {
metadata := make(map[string]interface{})
metadata["schema"] = schema
metadata["entity"] = entity
metadata["table_name"] = h.getTableName(schema, entity, model)
// Get fields from model using reflection
columns := reflection.GetModelColumns(model)
metadata["columns"] = columns
metadata["primary_key"] = reflection.GetPrimaryKeyName(model)
return metadata
}
// getOperatorSQL converts filter operator to SQL operator
func (h *Handler) getOperatorSQL(operator string) string {
switch operator {
case "eq":
return "="
case "neq":
return "!="
case "gt":
return ">"
case "gte":
return ">="
case "lt":
return "<"
case "lte":
return "<="
case "like":
return "LIKE"
case "ilike":
return "ILIKE"
case "in":
return "IN"
default:
return "="
}
}
// Shutdown gracefully shuts down the handler
func (h *Handler) Shutdown() {
h.connManager.Shutdown()
}
// GetConnectionCount returns the number of active connections
func (h *Handler) GetConnectionCount() int {
return h.connManager.Count()
}
// GetSubscriptionCount returns the number of active subscriptions
func (h *Handler) GetSubscriptionCount() int {
return h.subscriptionManager.Count()
}
// BroadcastMessage sends a message to all connections matching the filter
func (h *Handler) BroadcastMessage(message interface{}, filter func(*Connection) bool) error {
data, err := json.Marshal(message)
if err != nil {
return fmt.Errorf("failed to marshal message: %w", err)
}
h.connManager.Broadcast(data, filter)
return nil
}
// GetConnection retrieves a connection by ID
func (h *Handler) GetConnection(id string) (*Connection, bool) {
return h.connManager.GetConnection(id)
}

View File

@@ -0,0 +1,855 @@
package websocketspec
import (
"context"
"encoding/json"
"testing"
"github.com/bitechdev/ResolveSpec/pkg/common"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/mock"
"github.com/stretchr/testify/require"
)
// MockDatabase is a mock implementation of common.Database for testing
type MockDatabase struct {
mock.Mock
}
func (m *MockDatabase) NewSelect() common.SelectQuery {
args := m.Called()
return args.Get(0).(common.SelectQuery)
}
func (m *MockDatabase) NewInsert() common.InsertQuery {
args := m.Called()
return args.Get(0).(common.InsertQuery)
}
func (m *MockDatabase) NewUpdate() common.UpdateQuery {
args := m.Called()
return args.Get(0).(common.UpdateQuery)
}
func (m *MockDatabase) NewDelete() common.DeleteQuery {
args := m.Called()
return args.Get(0).(common.DeleteQuery)
}
func (m *MockDatabase) Close() error {
args := m.Called()
return args.Error(0)
}
func (m *MockDatabase) Exec(ctx context.Context, query string, args ...interface{}) (common.Result, error) {
callArgs := m.Called(ctx, query, args)
if callArgs.Get(0) == nil {
return nil, callArgs.Error(1)
}
return callArgs.Get(0).(common.Result), callArgs.Error(1)
}
func (m *MockDatabase) Query(ctx context.Context, dest interface{}, query string, args ...interface{}) error {
callArgs := m.Called(ctx, dest, query, args)
return callArgs.Error(0)
}
func (m *MockDatabase) BeginTx(ctx context.Context) (common.Database, error) {
args := m.Called(ctx)
if args.Get(0) == nil {
return nil, args.Error(1)
}
return args.Get(0).(common.Database), args.Error(1)
}
func (m *MockDatabase) CommitTx(ctx context.Context) error {
args := m.Called(ctx)
return args.Error(0)
}
func (m *MockDatabase) RollbackTx(ctx context.Context) error {
args := m.Called(ctx)
return args.Error(0)
}
func (m *MockDatabase) RunInTransaction(ctx context.Context, fn func(common.Database) error) error {
args := m.Called(ctx, fn)
return args.Error(0)
}
func (m *MockDatabase) GetUnderlyingDB() interface{} {
args := m.Called()
return args.Get(0)
}
// MockSelectQuery is a mock implementation of common.SelectQuery
type MockSelectQuery struct {
mock.Mock
}
func (m *MockSelectQuery) Model(model interface{}) common.SelectQuery {
args := m.Called(model)
return args.Get(0).(common.SelectQuery)
}
func (m *MockSelectQuery) Table(table string) common.SelectQuery {
args := m.Called(table)
return args.Get(0).(common.SelectQuery)
}
func (m *MockSelectQuery) Column(columns ...string) common.SelectQuery {
args := m.Called(columns)
return args.Get(0).(common.SelectQuery)
}
func (m *MockSelectQuery) Where(query string, args ...interface{}) common.SelectQuery {
callArgs := m.Called(query, args)
return callArgs.Get(0).(common.SelectQuery)
}
func (m *MockSelectQuery) WhereIn(column string, values interface{}) common.SelectQuery {
args := m.Called(column, values)
return args.Get(0).(common.SelectQuery)
}
func (m *MockSelectQuery) Order(order string) common.SelectQuery {
args := m.Called(order)
return args.Get(0).(common.SelectQuery)
}
func (m *MockSelectQuery) Limit(limit int) common.SelectQuery {
args := m.Called(limit)
return args.Get(0).(common.SelectQuery)
}
func (m *MockSelectQuery) Offset(offset int) common.SelectQuery {
args := m.Called(offset)
return args.Get(0).(common.SelectQuery)
}
func (m *MockSelectQuery) PreloadRelation(relation string, apply ...func(common.SelectQuery) common.SelectQuery) common.SelectQuery {
args := m.Called(relation, apply)
return args.Get(0).(common.SelectQuery)
}
func (m *MockSelectQuery) Preload(relation string, conditions ...interface{}) common.SelectQuery {
args := m.Called(relation, conditions)
return args.Get(0).(common.SelectQuery)
}
func (m *MockSelectQuery) ColumnExpr(query string, args ...interface{}) common.SelectQuery {
callArgs := m.Called(query, args)
return callArgs.Get(0).(common.SelectQuery)
}
func (m *MockSelectQuery) WhereOr(query string, args ...interface{}) common.SelectQuery {
callArgs := m.Called(query, args)
return callArgs.Get(0).(common.SelectQuery)
}
func (m *MockSelectQuery) Join(query string, args ...interface{}) common.SelectQuery {
callArgs := m.Called(query, args)
return callArgs.Get(0).(common.SelectQuery)
}
func (m *MockSelectQuery) LeftJoin(query string, args ...interface{}) common.SelectQuery {
callArgs := m.Called(query, args)
return callArgs.Get(0).(common.SelectQuery)
}
func (m *MockSelectQuery) JoinRelation(relation string, apply ...func(common.SelectQuery) common.SelectQuery) common.SelectQuery {
args := m.Called(relation, apply)
return args.Get(0).(common.SelectQuery)
}
func (m *MockSelectQuery) OrderExpr(order string, args ...interface{}) common.SelectQuery {
callArgs := m.Called(order, args)
return callArgs.Get(0).(common.SelectQuery)
}
func (m *MockSelectQuery) Group(group string) common.SelectQuery {
args := m.Called(group)
return args.Get(0).(common.SelectQuery)
}
func (m *MockSelectQuery) Having(having string, args ...interface{}) common.SelectQuery {
callArgs := m.Called(having, args)
return callArgs.Get(0).(common.SelectQuery)
}
func (m *MockSelectQuery) Scan(ctx context.Context, dest interface{}) error {
args := m.Called(ctx, dest)
return args.Error(0)
}
func (m *MockSelectQuery) ScanModel(ctx context.Context) error {
args := m.Called(ctx)
return args.Error(0)
}
func (m *MockSelectQuery) Count(ctx context.Context) (int, error) {
args := m.Called(ctx)
return args.Int(0), args.Error(1)
}
func (m *MockSelectQuery) Exists(ctx context.Context) (bool, error) {
args := m.Called(ctx)
return args.Bool(0), args.Error(1)
}
// MockInsertQuery is a mock implementation of common.InsertQuery
type MockInsertQuery struct {
mock.Mock
}
func (m *MockInsertQuery) Model(model interface{}) common.InsertQuery {
args := m.Called(model)
return args.Get(0).(common.InsertQuery)
}
func (m *MockInsertQuery) Table(table string) common.InsertQuery {
args := m.Called(table)
return args.Get(0).(common.InsertQuery)
}
func (m *MockInsertQuery) Value(column string, value interface{}) common.InsertQuery {
args := m.Called(column, value)
return args.Get(0).(common.InsertQuery)
}
func (m *MockInsertQuery) OnConflict(action string) common.InsertQuery {
args := m.Called(action)
return args.Get(0).(common.InsertQuery)
}
func (m *MockInsertQuery) Returning(columns ...string) common.InsertQuery {
args := m.Called(columns)
return args.Get(0).(common.InsertQuery)
}
func (m *MockInsertQuery) Exec(ctx context.Context) (common.Result, error) {
args := m.Called(ctx)
if args.Get(0) == nil {
return nil, args.Error(1)
}
return args.Get(0).(common.Result), args.Error(1)
}
// MockUpdateQuery is a mock implementation of common.UpdateQuery
type MockUpdateQuery struct {
mock.Mock
}
func (m *MockUpdateQuery) Model(model interface{}) common.UpdateQuery {
args := m.Called(model)
return args.Get(0).(common.UpdateQuery)
}
func (m *MockUpdateQuery) Table(table string) common.UpdateQuery {
args := m.Called(table)
return args.Get(0).(common.UpdateQuery)
}
func (m *MockUpdateQuery) Set(column string, value interface{}) common.UpdateQuery {
args := m.Called(column, value)
return args.Get(0).(common.UpdateQuery)
}
func (m *MockUpdateQuery) SetMap(values map[string]interface{}) common.UpdateQuery {
args := m.Called(values)
return args.Get(0).(common.UpdateQuery)
}
func (m *MockUpdateQuery) Where(query string, args ...interface{}) common.UpdateQuery {
callArgs := m.Called(query, args)
return callArgs.Get(0).(common.UpdateQuery)
}
func (m *MockUpdateQuery) Exec(ctx context.Context) (common.Result, error) {
args := m.Called(ctx)
if args.Get(0) == nil {
return nil, args.Error(1)
}
return args.Get(0).(common.Result), args.Error(1)
}
// MockDeleteQuery is a mock implementation of common.DeleteQuery
type MockDeleteQuery struct {
mock.Mock
}
func (m *MockDeleteQuery) Model(model interface{}) common.DeleteQuery {
args := m.Called(model)
return args.Get(0).(common.DeleteQuery)
}
func (m *MockDeleteQuery) Table(table string) common.DeleteQuery {
args := m.Called(table)
return args.Get(0).(common.DeleteQuery)
}
func (m *MockDeleteQuery) Where(query string, args ...interface{}) common.DeleteQuery {
callArgs := m.Called(query, args)
return callArgs.Get(0).(common.DeleteQuery)
}
func (m *MockDeleteQuery) Exec(ctx context.Context) (common.Result, error) {
args := m.Called(ctx)
if args.Get(0) == nil {
return nil, args.Error(1)
}
return args.Get(0).(common.Result), args.Error(1)
}
// MockModelRegistry is a mock implementation of common.ModelRegistry
type MockModelRegistry struct {
mock.Mock
}
func (m *MockModelRegistry) RegisterModel(key string, model interface{}) error {
args := m.Called(key, model)
return args.Error(0)
}
func (m *MockModelRegistry) GetModel(key string) (interface{}, error) {
args := m.Called(key)
if args.Get(0) == nil {
return nil, args.Error(1)
}
return args.Get(0), args.Error(1)
}
func (m *MockModelRegistry) GetAllModels() map[string]interface{} {
args := m.Called()
return args.Get(0).(map[string]interface{})
}
func (m *MockModelRegistry) GetModelByEntity(schema, entity string) (interface{}, error) {
args := m.Called(schema, entity)
if args.Get(0) == nil {
return nil, args.Error(1)
}
return args.Get(0), args.Error(1)
}
// Test model
type TestUser struct {
ID uint `json:"id" gorm:"primaryKey"`
Name string `json:"name"`
Email string `json:"email"`
Status string `json:"status"`
}
func TestNewHandler(t *testing.T) {
mockDB := &MockDatabase{}
mockRegistry := &MockModelRegistry{}
handler := NewHandler(mockDB, mockRegistry)
defer handler.Shutdown()
assert.NotNil(t, handler)
assert.NotNil(t, handler.db)
assert.NotNil(t, handler.registry)
assert.NotNil(t, handler.hooks)
assert.NotNil(t, handler.connManager)
assert.NotNil(t, handler.subscriptionManager)
assert.NotNil(t, handler.upgrader)
}
func TestHandler_Hooks(t *testing.T) {
mockDB := &MockDatabase{}
mockRegistry := &MockModelRegistry{}
handler := NewHandler(mockDB, mockRegistry)
defer handler.Shutdown()
hooks := handler.Hooks()
assert.NotNil(t, hooks)
assert.IsType(t, &HookRegistry{}, hooks)
}
func TestHandler_Registry(t *testing.T) {
mockDB := &MockDatabase{}
mockRegistry := &MockModelRegistry{}
handler := NewHandler(mockDB, mockRegistry)
defer handler.Shutdown()
registry := handler.Registry()
assert.NotNil(t, registry)
assert.Equal(t, mockRegistry, registry)
}
func TestHandler_GetDatabase(t *testing.T) {
mockDB := &MockDatabase{}
mockRegistry := &MockModelRegistry{}
handler := NewHandler(mockDB, mockRegistry)
defer handler.Shutdown()
db := handler.GetDatabase()
assert.NotNil(t, db)
assert.Equal(t, mockDB, db)
}
func TestHandler_GetConnectionCount(t *testing.T) {
mockDB := &MockDatabase{}
mockRegistry := &MockModelRegistry{}
handler := NewHandler(mockDB, mockRegistry)
defer handler.Shutdown()
count := handler.GetConnectionCount()
assert.Equal(t, 0, count)
}
func TestHandler_GetSubscriptionCount(t *testing.T) {
mockDB := &MockDatabase{}
mockRegistry := &MockModelRegistry{}
handler := NewHandler(mockDB, mockRegistry)
defer handler.Shutdown()
count := handler.GetSubscriptionCount()
assert.Equal(t, 0, count)
}
func TestHandler_GetConnection(t *testing.T) {
mockDB := &MockDatabase{}
mockRegistry := &MockModelRegistry{}
handler := NewHandler(mockDB, mockRegistry)
defer handler.Shutdown()
// Non-existent connection
_, exists := handler.GetConnection("non-existent")
assert.False(t, exists)
}
func TestHandler_HandleMessage_InvalidType(t *testing.T) {
mockDB := &MockDatabase{}
mockRegistry := &MockModelRegistry{}
handler := NewHandler(mockDB, mockRegistry)
defer handler.Shutdown()
conn := &Connection{
ID: "conn-1",
send: make(chan []byte, 256),
subscriptions: make(map[string]*Subscription),
ctx: context.Background(),
}
msg := &Message{
ID: "msg-1",
Type: MessageType("invalid"),
}
handler.HandleMessage(conn, msg)
// Should send error response
select {
case data := <-conn.send:
var response ResponseMessage
err := json.Unmarshal(data, &response)
require.NoError(t, err)
assert.False(t, response.Success)
assert.NotNil(t, response.Error)
default:
t.Fatal("Expected error response")
}
}
func TestHandler_GetOperatorSQL(t *testing.T) {
mockDB := &MockDatabase{}
mockRegistry := &MockModelRegistry{}
handler := NewHandler(mockDB, mockRegistry)
defer handler.Shutdown()
tests := []struct {
operator string
expected string
}{
{"eq", "="},
{"neq", "!="},
{"gt", ">"},
{"gte", ">="},
{"lt", "<"},
{"lte", "<="},
{"like", "LIKE"},
{"ilike", "ILIKE"},
{"in", "IN"},
{"unknown", "="}, // default
}
for _, tt := range tests {
t.Run(tt.operator, func(t *testing.T) {
result := handler.getOperatorSQL(tt.operator)
assert.Equal(t, tt.expected, result)
})
}
}
func TestHandler_GetTableName(t *testing.T) {
mockDB := &MockDatabase{}
mockRegistry := &MockModelRegistry{}
handler := NewHandler(mockDB, mockRegistry)
defer handler.Shutdown()
tests := []struct {
name string
schema string
entity string
expected string
}{
{
name: "With schema",
schema: "public",
entity: "users",
expected: "public.users",
},
{
name: "Without schema",
schema: "",
entity: "users",
expected: "users",
},
{
name: "Different schema",
schema: "custom",
entity: "posts",
expected: "custom.posts",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result := handler.getTableName(tt.schema, tt.entity, &TestUser{})
assert.Equal(t, tt.expected, result)
})
}
}
func TestHandler_GetMetadata(t *testing.T) {
mockDB := &MockDatabase{}
mockRegistry := &MockModelRegistry{}
handler := NewHandler(mockDB, mockRegistry)
defer handler.Shutdown()
metadata := handler.getMetadata("public", "users", &TestUser{})
assert.NotNil(t, metadata)
assert.Equal(t, "public", metadata["schema"])
assert.Equal(t, "users", metadata["entity"])
assert.Equal(t, "public.users", metadata["table_name"])
assert.NotNil(t, metadata["columns"])
assert.NotNil(t, metadata["primary_key"])
}
func TestHandler_NotifySubscribers(t *testing.T) {
mockDB := &MockDatabase{}
mockRegistry := &MockModelRegistry{}
handler := NewHandler(mockDB, mockRegistry)
defer handler.Shutdown()
// Create connection
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
conn := &Connection{
ID: "conn-1",
send: make(chan []byte, 256),
subscriptions: make(map[string]*Subscription),
handler: handler,
ctx: ctx,
cancel: cancel,
}
// Register connection
handler.connManager.connections["conn-1"] = conn
// Create subscription
sub := handler.subscriptionManager.Subscribe("sub-1", "conn-1", "public", "users", nil)
conn.AddSubscription(sub)
// Notify subscribers
data := map[string]interface{}{"id": 1, "name": "John"}
handler.notifySubscribers("public", "users", OperationCreate, data)
// Verify notification was sent
select {
case msg := <-conn.send:
assert.NotEmpty(t, msg)
default:
t.Fatal("Expected notification to be sent")
}
}
func TestHandler_NotifySubscribers_NoSubscribers(t *testing.T) {
mockDB := &MockDatabase{}
mockRegistry := &MockModelRegistry{}
handler := NewHandler(mockDB, mockRegistry)
defer handler.Shutdown()
// Notify with no subscribers - should not panic
data := map[string]interface{}{"id": 1, "name": "John"}
handler.notifySubscribers("public", "users", OperationCreate, data)
// No assertions needed - just checking it doesn't panic
}
func TestHandler_NotifySubscribers_ConnectionNotFound(t *testing.T) {
mockDB := &MockDatabase{}
mockRegistry := &MockModelRegistry{}
handler := NewHandler(mockDB, mockRegistry)
defer handler.Shutdown()
// Create subscription without connection
handler.subscriptionManager.Subscribe("sub-1", "conn-1", "public", "users", nil)
// Notify - should handle gracefully when connection not found
data := map[string]interface{}{"id": 1, "name": "John"}
handler.notifySubscribers("public", "users", OperationCreate, data)
// No assertions needed - just checking it doesn't panic
}
func TestHandler_HooksIntegration(t *testing.T) {
mockDB := &MockDatabase{}
mockRegistry := &MockModelRegistry{}
handler := NewHandler(mockDB, mockRegistry)
defer handler.Shutdown()
beforeCalled := false
afterCalled := false
// Register hooks
handler.Hooks().RegisterBefore(OperationCreate, func(ctx *HookContext) error {
beforeCalled = true
return nil
})
handler.Hooks().RegisterAfter(OperationCreate, func(ctx *HookContext) error {
afterCalled = true
return nil
})
// Verify hooks are registered
assert.True(t, handler.Hooks().HasHooks(BeforeCreate))
assert.True(t, handler.Hooks().HasHooks(AfterCreate))
// Execute hooks
ctx := &HookContext{Context: context.Background()}
handler.Hooks().Execute(BeforeCreate, ctx)
handler.Hooks().Execute(AfterCreate, ctx)
assert.True(t, beforeCalled)
assert.True(t, afterCalled)
}
func TestHandler_Shutdown(t *testing.T) {
mockDB := &MockDatabase{}
mockRegistry := &MockModelRegistry{}
handler := NewHandler(mockDB, mockRegistry)
defer handler.Shutdown()
// Shutdown should not panic
handler.Shutdown()
// Verify context was cancelled
select {
case <-handler.connManager.ctx.Done():
// Expected
default:
t.Fatal("Connection manager context not cancelled after shutdown")
}
}
func TestHandler_SubscriptionLifecycle(t *testing.T) {
mockDB := &MockDatabase{}
mockRegistry := &MockModelRegistry{}
handler := NewHandler(mockDB, mockRegistry)
defer handler.Shutdown()
// Create connection
conn := &Connection{
ID: "conn-1",
send: make(chan []byte, 256),
subscriptions: make(map[string]*Subscription),
ctx: context.Background(),
handler: handler,
}
// Create subscription message
msg := &Message{
ID: "sub-msg-1",
Type: MessageTypeSubscription,
Operation: OperationSubscribe,
Schema: "public",
Entity: "users",
}
// Handle subscribe
handler.handleSubscribe(conn, msg)
// Verify subscription was created
assert.Equal(t, 1, handler.GetSubscriptionCount())
assert.Equal(t, 1, len(conn.subscriptions))
// Verify response was sent
select {
case data := <-conn.send:
assert.NotEmpty(t, data)
default:
t.Fatal("Expected subscription response")
}
}
func TestHandler_UnsubscribeLifecycle(t *testing.T) {
mockDB := &MockDatabase{}
mockRegistry := &MockModelRegistry{}
handler := NewHandler(mockDB, mockRegistry)
defer handler.Shutdown()
// Create connection
conn := &Connection{
ID: "conn-1",
send: make(chan []byte, 256),
subscriptions: make(map[string]*Subscription),
ctx: context.Background(),
handler: handler,
}
// Create subscription
sub := handler.subscriptionManager.Subscribe("sub-1", "conn-1", "public", "users", nil)
conn.AddSubscription(sub)
assert.Equal(t, 1, handler.GetSubscriptionCount())
// Create unsubscribe message
msg := &Message{
ID: "unsub-msg-1",
Type: MessageTypeSubscription,
Operation: OperationUnsubscribe,
SubscriptionID: "sub-1",
}
// Handle unsubscribe
handler.handleUnsubscribe(conn, msg)
// Verify subscription was removed
assert.Equal(t, 0, handler.GetSubscriptionCount())
assert.Equal(t, 0, len(conn.subscriptions))
// Verify response was sent
select {
case data := <-conn.send:
assert.NotEmpty(t, data)
default:
t.Fatal("Expected unsubscribe response")
}
}
func TestHandler_HandlePing(t *testing.T) {
mockDB := &MockDatabase{}
mockRegistry := &MockModelRegistry{}
handler := NewHandler(mockDB, mockRegistry)
defer handler.Shutdown()
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
conn := &Connection{
ID: "conn-1",
send: make(chan []byte, 256),
subscriptions: make(map[string]*Subscription),
ctx: ctx,
cancel: cancel,
}
msg := &Message{
ID: "ping-1",
Type: MessageTypePing,
}
handler.handlePing(conn, msg)
// Verify pong was sent
select {
case data := <-conn.send:
assert.NotEmpty(t, data)
default:
t.Fatal("Expected pong response")
}
}
func TestHandler_CompleteWorkflow(t *testing.T) {
mockDB := &MockDatabase{}
mockRegistry := &MockModelRegistry{}
handler := NewHandler(mockDB, mockRegistry)
defer handler.Shutdown()
// Setup hooks (these are registered but not called in this test workflow)
handler.Hooks().RegisterBefore(OperationCreate, func(ctx *HookContext) error {
return nil
})
handler.Hooks().RegisterAfter(OperationCreate, func(ctx *HookContext) error {
return nil
})
// Create connection
conn := &Connection{
ID: "conn-1",
send: make(chan []byte, 256),
subscriptions: make(map[string]*Subscription),
ctx: context.Background(),
handler: handler,
metadata: make(map[string]interface{}),
}
// Register connection
handler.connManager.connections["conn-1"] = conn
// Set user metadata
conn.SetMetadata("user_id", 123)
// Create subscription
subMsg := &Message{
ID: "sub-1",
Type: MessageTypeSubscription,
Operation: OperationSubscribe,
Schema: "public",
Entity: "users",
}
handler.handleSubscribe(conn, subMsg)
assert.Equal(t, 1, handler.GetSubscriptionCount())
// Clear send channel
select {
case <-conn.send:
default:
}
// Send ping
pingMsg := &Message{
ID: "ping-1",
Type: MessageTypePing,
}
handler.handlePing(conn, pingMsg)
// Verify pong was sent
select {
case <-conn.send:
// Expected
default:
t.Fatal("Expected pong response")
}
// Verify metadata
userID, exists := conn.GetMetadata("user_id")
assert.True(t, exists)
assert.Equal(t, 123, userID)
// Verify hooks were registered
assert.True(t, handler.Hooks().HasHooks(BeforeCreate))
assert.True(t, handler.Hooks().HasHooks(AfterCreate))
}
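The mocks above implement the common.Database and query interfaces with testify/mock, so tests wire expectations with On/Return before exercising handler code. A minimal sketch of how they could be chained together (the test name and the expectations below are hypothetical, not part of this diff):

func TestExample_MockSelectUsage(t *testing.T) {
mockDB := &MockDatabase{}
mockSelect := &MockSelectQuery{}
// NewSelect hands back the mock query; builder methods return it again so calls can chain.
mockDB.On("NewSelect").Return(mockSelect)
mockSelect.On("Table", "public.users").Return(mockSelect)
mockSelect.On("Where", "status = ?", mock.Anything).Return(mockSelect)
mockSelect.On("Scan", mock.Anything, mock.Anything).Return(nil)
// Exercise the chain the way handler code would.
var users []TestUser
err := mockDB.NewSelect().Table("public.users").Where("status = ?", "active").Scan(context.Background(), &users)
require.NoError(t, err)
mockDB.AssertExpectations(t)
mockSelect.AssertExpectations(t)
}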

pkg/websocketspec/hooks.go Normal file

@@ -0,0 +1,193 @@
package websocketspec
import (
"context"
"github.com/bitechdev/ResolveSpec/pkg/common"
)
// HookType represents the type of lifecycle hook
type HookType string
const (
// BeforeRead is called before a read operation
BeforeRead HookType = "before_read"
// AfterRead is called after a read operation
AfterRead HookType = "after_read"
// BeforeCreate is called before a create operation
BeforeCreate HookType = "before_create"
// AfterCreate is called after a create operation
AfterCreate HookType = "after_create"
// BeforeUpdate is called before an update operation
BeforeUpdate HookType = "before_update"
// AfterUpdate is called after an update operation
AfterUpdate HookType = "after_update"
// BeforeDelete is called before a delete operation
BeforeDelete HookType = "before_delete"
// AfterDelete is called after a delete operation
AfterDelete HookType = "after_delete"
// BeforeSubscribe is called before creating a subscription
BeforeSubscribe HookType = "before_subscribe"
// AfterSubscribe is called after creating a subscription
AfterSubscribe HookType = "after_subscribe"
// BeforeUnsubscribe is called before removing a subscription
BeforeUnsubscribe HookType = "before_unsubscribe"
// AfterUnsubscribe is called after removing a subscription
AfterUnsubscribe HookType = "after_unsubscribe"
// BeforeConnect is called when a new connection is established
BeforeConnect HookType = "before_connect"
// AfterConnect is called after a connection is established
AfterConnect HookType = "after_connect"
// BeforeDisconnect is called before a connection is closed
BeforeDisconnect HookType = "before_disconnect"
// AfterDisconnect is called after a connection is closed
AfterDisconnect HookType = "after_disconnect"
)
// HookContext contains context information for hook execution
type HookContext struct {
// Context is the request context
Context context.Context
// Handler provides access to the handler, database, and registry
Handler *Handler
// Connection is the WebSocket connection
Connection *Connection
// Message is the original message
Message *Message
// Schema is the database schema
Schema string
// Entity is the table/model name
Entity string
// TableName is the actual database table name
TableName string
// Model is the registered model instance
Model interface{}
// ModelPtr is a pointer to the model for queries
ModelPtr interface{}
// Options contains the parsed request options
Options *common.RequestOptions
// ID is the record ID for single-record operations
ID string
// Data is the request data (for create/update operations)
Data interface{}
// Result is the operation result (for after hooks)
Result interface{}
// Subscription is the subscription being created/removed
Subscription *Subscription
// Error is any error that occurred (for after hooks)
Error error
// Metadata is additional context data
Metadata map[string]interface{}
}
// HookFunc is a function that processes a hook
type HookFunc func(*HookContext) error
// HookRegistry manages lifecycle hooks
type HookRegistry struct {
hooks map[HookType][]HookFunc
}
// NewHookRegistry creates a new hook registry
func NewHookRegistry() *HookRegistry {
return &HookRegistry{
hooks: make(map[HookType][]HookFunc),
}
}
// Register registers a hook function for a specific hook type
func (hr *HookRegistry) Register(hookType HookType, fn HookFunc) {
hr.hooks[hookType] = append(hr.hooks[hookType], fn)
}
// RegisterBefore registers a hook that runs before an operation.
// Convenience method mapping an OperationType to its Before* hook type
// (BeforeRead, BeforeCreate, BeforeUpdate, BeforeDelete, BeforeSubscribe, BeforeUnsubscribe).
func (hr *HookRegistry) RegisterBefore(operation OperationType, fn HookFunc) {
switch operation {
case OperationRead:
hr.Register(BeforeRead, fn)
case OperationCreate:
hr.Register(BeforeCreate, fn)
case OperationUpdate:
hr.Register(BeforeUpdate, fn)
case OperationDelete:
hr.Register(BeforeDelete, fn)
case OperationSubscribe:
hr.Register(BeforeSubscribe, fn)
case OperationUnsubscribe:
hr.Register(BeforeUnsubscribe, fn)
}
}
// RegisterAfter registers a hook that runs after an operation.
// Convenience method mapping an OperationType to its After* hook type
// (AfterRead, AfterCreate, AfterUpdate, AfterDelete, AfterSubscribe, AfterUnsubscribe).
func (hr *HookRegistry) RegisterAfter(operation OperationType, fn HookFunc) {
switch operation {
case OperationRead:
hr.Register(AfterRead, fn)
case OperationCreate:
hr.Register(AfterCreate, fn)
case OperationUpdate:
hr.Register(AfterUpdate, fn)
case OperationDelete:
hr.Register(AfterDelete, fn)
case OperationSubscribe:
hr.Register(AfterSubscribe, fn)
case OperationUnsubscribe:
hr.Register(AfterUnsubscribe, fn)
}
}
// Execute runs all hooks for a specific type
func (hr *HookRegistry) Execute(hookType HookType, ctx *HookContext) error {
hooks, exists := hr.hooks[hookType]
if !exists {
return nil
}
for _, hook := range hooks {
if err := hook(ctx); err != nil {
return err
}
}
return nil
}
// HasHooks checks if any hooks are registered for a hook type
func (hr *HookRegistry) HasHooks(hookType HookType) bool {
hooks, exists := hr.hooks[hookType]
return exists && len(hooks) > 0
}
// Clear removes all hooks of a specific type
func (hr *HookRegistry) Clear(hookType HookType) {
delete(hr.hooks, hookType)
}
// ClearAll removes all registered hooks
func (hr *HookRegistry) ClearAll() {
hr.hooks = make(map[HookType][]HookFunc)
}
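HookRegistry is consumed through Register/RegisterBefore/RegisterAfter and Execute. A minimal usage sketch, assuming the same package plus an errors import (the function name and the metadata key are hypothetical):

func exampleRegisterHooks(hr *HookRegistry) error {
// Reject creates that arrive without an authenticated user in the hook metadata.
hr.RegisterBefore(OperationCreate, func(ctx *HookContext) error {
if _, ok := ctx.Metadata["user_id"]; !ok {
return errors.New("unauthorized")
}
return nil
})
// After-hooks see the created record via ctx.Result.
hr.RegisterAfter(OperationCreate, func(ctx *HookContext) error {
return nil
})
hctx := &HookContext{Context: context.Background(), Metadata: map[string]interface{}{"user_id": 1}}
return hr.Execute(BeforeCreate, hctx)
}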


@@ -0,0 +1,547 @@
package websocketspec
import (
"context"
"errors"
"testing"
"github.com/bitechdev/ResolveSpec/pkg/common"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestHookType_Constants(t *testing.T) {
assert.Equal(t, HookType("before_read"), BeforeRead)
assert.Equal(t, HookType("after_read"), AfterRead)
assert.Equal(t, HookType("before_create"), BeforeCreate)
assert.Equal(t, HookType("after_create"), AfterCreate)
assert.Equal(t, HookType("before_update"), BeforeUpdate)
assert.Equal(t, HookType("after_update"), AfterUpdate)
assert.Equal(t, HookType("before_delete"), BeforeDelete)
assert.Equal(t, HookType("after_delete"), AfterDelete)
assert.Equal(t, HookType("before_subscribe"), BeforeSubscribe)
assert.Equal(t, HookType("after_subscribe"), AfterSubscribe)
assert.Equal(t, HookType("before_unsubscribe"), BeforeUnsubscribe)
assert.Equal(t, HookType("after_unsubscribe"), AfterUnsubscribe)
assert.Equal(t, HookType("before_connect"), BeforeConnect)
assert.Equal(t, HookType("after_connect"), AfterConnect)
assert.Equal(t, HookType("before_disconnect"), BeforeDisconnect)
assert.Equal(t, HookType("after_disconnect"), AfterDisconnect)
}
func TestNewHookRegistry(t *testing.T) {
hr := NewHookRegistry()
assert.NotNil(t, hr)
assert.NotNil(t, hr.hooks)
assert.Empty(t, hr.hooks)
}
func TestHookRegistry_Register(t *testing.T) {
hr := NewHookRegistry()
hookCalled := false
hook := func(ctx *HookContext) error {
hookCalled = true
return nil
}
hr.Register(BeforeRead, hook)
// Verify hook was registered
assert.True(t, hr.HasHooks(BeforeRead))
// Execute hook
ctx := &HookContext{Context: context.Background()}
err := hr.Execute(BeforeRead, ctx)
require.NoError(t, err)
assert.True(t, hookCalled)
}
func TestHookRegistry_Register_MultipleHooks(t *testing.T) {
hr := NewHookRegistry()
callOrder := []int{}
hook1 := func(ctx *HookContext) error {
callOrder = append(callOrder, 1)
return nil
}
hook2 := func(ctx *HookContext) error {
callOrder = append(callOrder, 2)
return nil
}
hook3 := func(ctx *HookContext) error {
callOrder = append(callOrder, 3)
return nil
}
hr.Register(BeforeRead, hook1)
hr.Register(BeforeRead, hook2)
hr.Register(BeforeRead, hook3)
// Execute hooks
ctx := &HookContext{Context: context.Background()}
err := hr.Execute(BeforeRead, ctx)
require.NoError(t, err)
// Verify hooks were called in order
assert.Equal(t, []int{1, 2, 3}, callOrder)
}
func TestHookRegistry_RegisterBefore(t *testing.T) {
hr := NewHookRegistry()
tests := []struct {
operation OperationType
hookType HookType
}{
{OperationRead, BeforeRead},
{OperationCreate, BeforeCreate},
{OperationUpdate, BeforeUpdate},
{OperationDelete, BeforeDelete},
{OperationSubscribe, BeforeSubscribe},
{OperationUnsubscribe, BeforeUnsubscribe},
}
for _, tt := range tests {
t.Run(string(tt.operation), func(t *testing.T) {
hookCalled := false
hook := func(ctx *HookContext) error {
hookCalled = true
return nil
}
hr.RegisterBefore(tt.operation, hook)
assert.True(t, hr.HasHooks(tt.hookType))
ctx := &HookContext{Context: context.Background()}
err := hr.Execute(tt.hookType, ctx)
require.NoError(t, err)
assert.True(t, hookCalled)
// Clean up for next test
hr.Clear(tt.hookType)
})
}
}
func TestHookRegistry_RegisterAfter(t *testing.T) {
hr := NewHookRegistry()
tests := []struct {
operation OperationType
hookType HookType
}{
{OperationRead, AfterRead},
{OperationCreate, AfterCreate},
{OperationUpdate, AfterUpdate},
{OperationDelete, AfterDelete},
{OperationSubscribe, AfterSubscribe},
{OperationUnsubscribe, AfterUnsubscribe},
}
for _, tt := range tests {
t.Run(string(tt.operation), func(t *testing.T) {
hookCalled := false
hook := func(ctx *HookContext) error {
hookCalled = true
return nil
}
hr.RegisterAfter(tt.operation, hook)
assert.True(t, hr.HasHooks(tt.hookType))
ctx := &HookContext{Context: context.Background()}
err := hr.Execute(tt.hookType, ctx)
require.NoError(t, err)
assert.True(t, hookCalled)
// Clean up for next test
hr.Clear(tt.hookType)
})
}
}
func TestHookRegistry_Execute_NoHooks(t *testing.T) {
hr := NewHookRegistry()
ctx := &HookContext{Context: context.Background()}
err := hr.Execute(BeforeRead, ctx)
// Should not error when no hooks registered
assert.NoError(t, err)
}
func TestHookRegistry_Execute_HookReturnsError(t *testing.T) {
hr := NewHookRegistry()
expectedErr := errors.New("hook error")
hook := func(ctx *HookContext) error {
return expectedErr
}
hr.Register(BeforeRead, hook)
ctx := &HookContext{Context: context.Background()}
err := hr.Execute(BeforeRead, ctx)
assert.Error(t, err)
assert.Equal(t, expectedErr, err)
}
func TestHookRegistry_Execute_FirstHookErrors(t *testing.T) {
hr := NewHookRegistry()
hook1Called := false
hook2Called := false
hook1 := func(ctx *HookContext) error {
hook1Called = true
return errors.New("hook1 error")
}
hook2 := func(ctx *HookContext) error {
hook2Called = true
return nil
}
hr.Register(BeforeRead, hook1)
hr.Register(BeforeRead, hook2)
ctx := &HookContext{Context: context.Background()}
err := hr.Execute(BeforeRead, ctx)
assert.Error(t, err)
assert.True(t, hook1Called)
assert.False(t, hook2Called) // Should not be called after first error
}
func TestHookRegistry_HasHooks(t *testing.T) {
hr := NewHookRegistry()
assert.False(t, hr.HasHooks(BeforeRead))
hr.Register(BeforeRead, func(ctx *HookContext) error { return nil })
assert.True(t, hr.HasHooks(BeforeRead))
assert.False(t, hr.HasHooks(AfterRead))
}
func TestHookRegistry_Clear(t *testing.T) {
hr := NewHookRegistry()
hr.Register(BeforeRead, func(ctx *HookContext) error { return nil })
hr.Register(BeforeRead, func(ctx *HookContext) error { return nil })
assert.True(t, hr.HasHooks(BeforeRead))
hr.Clear(BeforeRead)
assert.False(t, hr.HasHooks(BeforeRead))
}
func TestHookRegistry_ClearAll(t *testing.T) {
hr := NewHookRegistry()
hr.Register(BeforeRead, func(ctx *HookContext) error { return nil })
hr.Register(AfterRead, func(ctx *HookContext) error { return nil })
hr.Register(BeforeCreate, func(ctx *HookContext) error { return nil })
assert.True(t, hr.HasHooks(BeforeRead))
assert.True(t, hr.HasHooks(AfterRead))
assert.True(t, hr.HasHooks(BeforeCreate))
hr.ClearAll()
assert.False(t, hr.HasHooks(BeforeRead))
assert.False(t, hr.HasHooks(AfterRead))
assert.False(t, hr.HasHooks(BeforeCreate))
}
func TestHookContext_Structure(t *testing.T) {
ctx := &HookContext{
Context: context.Background(),
Schema: "public",
Entity: "users",
TableName: "public.users",
ID: "123",
Data: map[string]interface{}{
"name": "John",
},
Options: &common.RequestOptions{
Filters: []common.FilterOption{
{Column: "status", Operator: "eq", Value: "active"},
},
},
Metadata: map[string]interface{}{
"user_id": 456,
},
}
assert.NotNil(t, ctx.Context)
assert.Equal(t, "public", ctx.Schema)
assert.Equal(t, "users", ctx.Entity)
assert.Equal(t, "public.users", ctx.TableName)
assert.Equal(t, "123", ctx.ID)
assert.NotNil(t, ctx.Data)
assert.NotNil(t, ctx.Options)
assert.NotNil(t, ctx.Metadata)
}
func TestHookContext_ModifyData(t *testing.T) {
hr := NewHookRegistry()
// Hook that modifies data
hook := func(ctx *HookContext) error {
if data, ok := ctx.Data.(map[string]interface{}); ok {
data["modified"] = true
}
return nil
}
hr.Register(BeforeCreate, hook)
ctx := &HookContext{
Context: context.Background(),
Data: map[string]interface{}{
"name": "John",
},
}
err := hr.Execute(BeforeCreate, ctx)
require.NoError(t, err)
// Verify data was modified
data := ctx.Data.(map[string]interface{})
assert.True(t, data["modified"].(bool))
}
func TestHookContext_ModifyOptions(t *testing.T) {
hr := NewHookRegistry()
// Hook that adds a filter
hook := func(ctx *HookContext) error {
if ctx.Options == nil {
ctx.Options = &common.RequestOptions{}
}
ctx.Options.Filters = append(ctx.Options.Filters, common.FilterOption{
Column: "user_id",
Operator: "eq",
Value: 123,
})
return nil
}
hr.Register(BeforeRead, hook)
ctx := &HookContext{
Context: context.Background(),
Options: &common.RequestOptions{},
}
err := hr.Execute(BeforeRead, ctx)
require.NoError(t, err)
// Verify filter was added
assert.Len(t, ctx.Options.Filters, 1)
assert.Equal(t, "user_id", ctx.Options.Filters[0].Column)
}
func TestHookContext_UseMetadata(t *testing.T) {
hr := NewHookRegistry()
// Hook that stores data in metadata
hook := func(ctx *HookContext) error {
ctx.Metadata["processed"] = true
ctx.Metadata["timestamp"] = "2024-01-01"
return nil
}
hr.Register(BeforeCreate, hook)
ctx := &HookContext{
Context: context.Background(),
Metadata: make(map[string]interface{}),
}
err := hr.Execute(BeforeCreate, ctx)
require.NoError(t, err)
// Verify metadata was set
assert.True(t, ctx.Metadata["processed"].(bool))
assert.Equal(t, "2024-01-01", ctx.Metadata["timestamp"])
}
func TestHookRegistry_Authentication_Example(t *testing.T) {
hr := NewHookRegistry()
// Authentication hook
authHook := func(ctx *HookContext) error {
// Simulate getting user from connection metadata
userID := 123
ctx.Metadata["user_id"] = userID
return nil
}
// Authorization hook that uses auth data
authzHook := func(ctx *HookContext) error {
userID, ok := ctx.Metadata["user_id"]
if !ok {
return errors.New("unauthorized: not authenticated")
}
// Add filter to only show user's own records
if ctx.Options == nil {
ctx.Options = &common.RequestOptions{}
}
ctx.Options.Filters = append(ctx.Options.Filters, common.FilterOption{
Column: "user_id",
Operator: "eq",
Value: userID,
})
return nil
}
hr.Register(BeforeConnect, authHook)
hr.Register(BeforeRead, authzHook)
// Simulate connection
ctx1 := &HookContext{
Context: context.Background(),
Metadata: make(map[string]interface{}),
}
err := hr.Execute(BeforeConnect, ctx1)
require.NoError(t, err)
assert.Equal(t, 123, ctx1.Metadata["user_id"])
// Simulate read with authorization
ctx2 := &HookContext{
Context: context.Background(),
Metadata: map[string]interface{}{"user_id": 123},
Options: &common.RequestOptions{},
}
err = hr.Execute(BeforeRead, ctx2)
require.NoError(t, err)
assert.Len(t, ctx2.Options.Filters, 1)
assert.Equal(t, "user_id", ctx2.Options.Filters[0].Column)
}
func TestHookRegistry_Validation_Example(t *testing.T) {
hr := NewHookRegistry()
// Validation hook
validationHook := func(ctx *HookContext) error {
data, ok := ctx.Data.(map[string]interface{})
if !ok {
return errors.New("invalid data format")
}
if ctx.Entity == "users" {
email, hasEmail := data["email"]
if !hasEmail || email == "" {
return errors.New("validation error: email is required")
}
name, hasName := data["name"]
if !hasName || name == "" {
return errors.New("validation error: name is required")
}
}
return nil
}
hr.Register(BeforeCreate, validationHook)
// Test with valid data
ctx1 := &HookContext{
Context: context.Background(),
Entity: "users",
Data: map[string]interface{}{
"name": "John Doe",
"email": "john@example.com",
},
}
err := hr.Execute(BeforeCreate, ctx1)
assert.NoError(t, err)
// Test with missing email
ctx2 := &HookContext{
Context: context.Background(),
Entity: "users",
Data: map[string]interface{}{
"name": "John Doe",
},
}
err = hr.Execute(BeforeCreate, ctx2)
assert.Error(t, err)
assert.Contains(t, err.Error(), "email is required")
// Test with missing name
ctx3 := &HookContext{
Context: context.Background(),
Entity: "users",
Data: map[string]interface{}{
"email": "john@example.com",
},
}
err = hr.Execute(BeforeCreate, ctx3)
assert.Error(t, err)
assert.Contains(t, err.Error(), "name is required")
}
func TestHookRegistry_Logging_Example(t *testing.T) {
hr := NewHookRegistry()
logEntries := []string{}
// Logging hook for create operations
loggingHook := func(ctx *HookContext) error {
logEntries = append(logEntries, "Created record in "+ctx.Entity)
return nil
}
hr.Register(AfterCreate, loggingHook)
ctx := &HookContext{
Context: context.Background(),
Entity: "users",
Result: map[string]interface{}{"id": 1, "name": "John"},
}
err := hr.Execute(AfterCreate, ctx)
require.NoError(t, err)
assert.Len(t, logEntries, 1)
assert.Equal(t, "Created record in users", logEntries[0])
}
func TestHookRegistry_ConcurrentExecution(t *testing.T) {
hr := NewHookRegistry()
// This test verifies that concurrent hook executions don't cause race conditions.
// Run with: go test -race
// The counter is guarded by a mutex so the test itself is race-free.
var mu sync.Mutex
counter := 0
hook := func(ctx *HookContext) error {
mu.Lock()
counter++
mu.Unlock()
return nil
}
hr.Register(BeforeRead, hook)
done := make(chan bool)
// Execute hooks concurrently
for i := 0; i < 10; i++ {
go func() {
ctx := &HookContext{Context: context.Background()}
hr.Execute(BeforeRead, ctx)
done <- true
}()
}
// Wait for all executions
for i := 0; i < 10; i++ {
<-done
}
assert.Equal(t, 10, counter)
}


@@ -0,0 +1,240 @@
package websocketspec
import (
"encoding/json"
"time"
"github.com/bitechdev/ResolveSpec/pkg/common"
)
// MessageType represents the type of WebSocket message
type MessageType string
const (
// MessageTypeRequest is a client request message
MessageTypeRequest MessageType = "request"
// MessageTypeResponse is a server response message
MessageTypeResponse MessageType = "response"
// MessageTypeNotification is a server-initiated notification
MessageTypeNotification MessageType = "notification"
// MessageTypeSubscription is a subscription control message
MessageTypeSubscription MessageType = "subscription"
// MessageTypeError is an error message
MessageTypeError MessageType = "error"
// MessageTypePing is a keepalive ping message
MessageTypePing MessageType = "ping"
// MessageTypePong is a keepalive pong response
MessageTypePong MessageType = "pong"
)
// OperationType represents the operation to perform
type OperationType string
const (
// OperationRead retrieves records
OperationRead OperationType = "read"
// OperationCreate creates a new record
OperationCreate OperationType = "create"
// OperationUpdate updates an existing record
OperationUpdate OperationType = "update"
// OperationDelete deletes a record
OperationDelete OperationType = "delete"
// OperationSubscribe subscribes to entity changes
OperationSubscribe OperationType = "subscribe"
// OperationUnsubscribe unsubscribes from entity changes
OperationUnsubscribe OperationType = "unsubscribe"
// OperationMeta retrieves metadata about an entity
OperationMeta OperationType = "meta"
)
// Message represents a WebSocket message
type Message struct {
// ID is a unique identifier for request/response correlation
ID string `json:"id,omitempty"`
// Type is the message type
Type MessageType `json:"type"`
// Operation is the operation to perform
Operation OperationType `json:"operation,omitempty"`
// Schema is the database schema name
Schema string `json:"schema,omitempty"`
// Entity is the table/model name
Entity string `json:"entity,omitempty"`
// RecordID is the ID for single-record operations (update, delete, read by ID)
RecordID string `json:"record_id,omitempty"`
// Data contains the request/response payload
Data interface{} `json:"data,omitempty"`
// Options contains query options (filters, sorting, pagination, etc.)
Options *common.RequestOptions `json:"options,omitempty"`
// SubscriptionID is the subscription identifier
SubscriptionID string `json:"subscription_id,omitempty"`
// Success indicates if the operation was successful
Success bool `json:"success,omitempty"`
// Error contains error information
Error *ErrorInfo `json:"error,omitempty"`
// Metadata contains additional response metadata
Metadata map[string]interface{} `json:"metadata,omitempty"`
// Timestamp is when the message was created
Timestamp time.Time `json:"timestamp,omitempty"`
}
// ErrorInfo contains error details
type ErrorInfo struct {
// Code is the error code
Code string `json:"code"`
// Message is a human-readable error message
Message string `json:"message"`
// Details contains additional error context
Details map[string]interface{} `json:"details,omitempty"`
}
// RequestMessage represents a client request
type RequestMessage struct {
ID string `json:"id"`
Type MessageType `json:"type"`
Operation OperationType `json:"operation"`
Schema string `json:"schema,omitempty"`
Entity string `json:"entity"`
RecordID string `json:"record_id,omitempty"`
Data interface{} `json:"data,omitempty"`
Options *common.RequestOptions `json:"options,omitempty"`
}
// ResponseMessage represents a server response
type ResponseMessage struct {
ID string `json:"id"`
Type MessageType `json:"type"`
Success bool `json:"success"`
Data interface{} `json:"data,omitempty"`
Error *ErrorInfo `json:"error,omitempty"`
Metadata map[string]interface{} `json:"metadata,omitempty"`
Timestamp time.Time `json:"timestamp"`
}
// NotificationMessage represents a server-initiated notification
type NotificationMessage struct {
Type MessageType `json:"type"`
Operation OperationType `json:"operation"`
SubscriptionID string `json:"subscription_id"`
Schema string `json:"schema"`
Entity string `json:"entity"`
Data interface{} `json:"data"`
Timestamp time.Time `json:"timestamp"`
}
// SubscriptionMessage represents a subscription control message
type SubscriptionMessage struct {
ID string `json:"id"`
Type MessageType `json:"type"`
Operation OperationType `json:"operation"` // subscribe or unsubscribe
Schema string `json:"schema,omitempty"`
Entity string `json:"entity"`
Options *common.RequestOptions `json:"options,omitempty"` // Filters for subscription
SubscriptionID string `json:"subscription_id,omitempty"` // For unsubscribe
}
// NewRequestMessage creates a new request message
func NewRequestMessage(id string, operation OperationType, schema, entity string) *RequestMessage {
return &RequestMessage{
ID: id,
Type: MessageTypeRequest,
Operation: operation,
Schema: schema,
Entity: entity,
}
}
// NewResponseMessage creates a new response message
func NewResponseMessage(id string, success bool, data interface{}) *ResponseMessage {
return &ResponseMessage{
ID: id,
Type: MessageTypeResponse,
Success: success,
Data: data,
Timestamp: time.Now(),
}
}
// NewErrorResponse creates an error response message
func NewErrorResponse(id string, code, message string) *ResponseMessage {
return &ResponseMessage{
ID: id,
Type: MessageTypeResponse,
Success: false,
Error: &ErrorInfo{
Code: code,
Message: message,
},
Timestamp: time.Now(),
}
}
// NewNotificationMessage creates a new notification message
func NewNotificationMessage(subscriptionID string, operation OperationType, schema, entity string, data interface{}) *NotificationMessage {
return &NotificationMessage{
Type: MessageTypeNotification,
Operation: operation,
SubscriptionID: subscriptionID,
Schema: schema,
Entity: entity,
Data: data,
Timestamp: time.Now(),
}
}
// ParseMessage parses a JSON message into a Message struct
func ParseMessage(data []byte) (*Message, error) {
var msg Message
if err := json.Unmarshal(data, &msg); err != nil {
return nil, err
}
return &msg, nil
}
// ToJSON converts a message to JSON bytes
func (m *Message) ToJSON() ([]byte, error) {
return json.Marshal(m)
}
// ToJSON converts a response message to JSON bytes
func (r *ResponseMessage) ToJSON() ([]byte, error) {
return json.Marshal(r)
}
// ToJSON converts a notification message to JSON bytes
func (n *NotificationMessage) ToJSON() ([]byte, error) {
return json.Marshal(n)
}
// IsValid checks if a message is valid
func (m *Message) IsValid() bool {
// Type must be set
if m.Type == "" {
return false
}
// Request messages must have an ID, operation, and entity
if m.Type == MessageTypeRequest {
return m.ID != "" && m.Operation != "" && m.Entity != ""
}
// Subscription messages must have an ID and operation
if m.Type == MessageTypeSubscription {
return m.ID != "" && m.Operation != ""
}
return true
}
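The constructors and ParseMessage above cover the full request/response cycle. A minimal round-trip sketch, assuming the same package (the function name, error code, and payload are hypothetical):

func exampleMessageRoundTrip() ([]byte, error) {
// Client side: build and serialize a read request.
req := NewRequestMessage("msg-1", OperationRead, "public", "users")
payload, err := json.Marshal(req)
if err != nil {
return nil, err
}
// Server side: parse and validate before dispatching.
msg, err := ParseMessage(payload)
if err != nil || !msg.IsValid() {
return NewErrorResponse(req.ID, "bad_request", "invalid message").ToJSON()
}
// Correlate the success response by the request ID.
return NewResponseMessage(msg.ID, true, map[string]interface{}{"count": 0}).ToJSON()
}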


@@ -0,0 +1,414 @@
package websocketspec
import (
"encoding/json"
"testing"
"time"
"github.com/bitechdev/ResolveSpec/pkg/common"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestMessageType_Constants(t *testing.T) {
assert.Equal(t, MessageType("request"), MessageTypeRequest)
assert.Equal(t, MessageType("response"), MessageTypeResponse)
assert.Equal(t, MessageType("notification"), MessageTypeNotification)
assert.Equal(t, MessageType("subscription"), MessageTypeSubscription)
assert.Equal(t, MessageType("error"), MessageTypeError)
assert.Equal(t, MessageType("ping"), MessageTypePing)
assert.Equal(t, MessageType("pong"), MessageTypePong)
}
func TestOperationType_Constants(t *testing.T) {
assert.Equal(t, OperationType("read"), OperationRead)
assert.Equal(t, OperationType("create"), OperationCreate)
assert.Equal(t, OperationType("update"), OperationUpdate)
assert.Equal(t, OperationType("delete"), OperationDelete)
assert.Equal(t, OperationType("subscribe"), OperationSubscribe)
assert.Equal(t, OperationType("unsubscribe"), OperationUnsubscribe)
assert.Equal(t, OperationType("meta"), OperationMeta)
}
func TestParseMessage_ValidRequestMessage(t *testing.T) {
jsonData := `{
"id": "msg-1",
"type": "request",
"operation": "read",
"schema": "public",
"entity": "users",
"record_id": "123",
"options": {
"filters": [
{"column": "status", "operator": "eq", "value": "active"}
],
"limit": 10
}
}`
msg, err := ParseMessage([]byte(jsonData))
require.NoError(t, err)
assert.NotNil(t, msg)
assert.Equal(t, "msg-1", msg.ID)
assert.Equal(t, MessageTypeRequest, msg.Type)
assert.Equal(t, OperationRead, msg.Operation)
assert.Equal(t, "public", msg.Schema)
assert.Equal(t, "users", msg.Entity)
assert.Equal(t, "123", msg.RecordID)
assert.NotNil(t, msg.Options)
assert.Equal(t, 10, *msg.Options.Limit)
}
func TestParseMessage_ValidSubscriptionMessage(t *testing.T) {
jsonData := `{
"id": "sub-1",
"type": "subscription",
"operation": "subscribe",
"schema": "public",
"entity": "users"
}`
msg, err := ParseMessage([]byte(jsonData))
require.NoError(t, err)
assert.NotNil(t, msg)
assert.Equal(t, "sub-1", msg.ID)
assert.Equal(t, MessageTypeSubscription, msg.Type)
assert.Equal(t, OperationSubscribe, msg.Operation)
assert.Equal(t, "public", msg.Schema)
assert.Equal(t, "users", msg.Entity)
}
func TestParseMessage_InvalidJSON(t *testing.T) {
jsonData := `{invalid json}`
msg, err := ParseMessage([]byte(jsonData))
assert.Error(t, err)
assert.Nil(t, msg)
}
func TestParseMessage_EmptyData(t *testing.T) {
msg, err := ParseMessage([]byte{})
assert.Error(t, err)
assert.Nil(t, msg)
}
func TestMessage_IsValid_ValidRequestMessage(t *testing.T) {
msg := &Message{
ID: "msg-1",
Type: MessageTypeRequest,
Operation: OperationRead,
Entity: "users",
}
assert.True(t, msg.IsValid())
}
func TestMessage_IsValid_InvalidRequestMessage_NoID(t *testing.T) {
msg := &Message{
Type: MessageTypeRequest,
Operation: OperationRead,
Entity: "users",
}
assert.False(t, msg.IsValid())
}
func TestMessage_IsValid_InvalidRequestMessage_NoOperation(t *testing.T) {
msg := &Message{
ID: "msg-1",
Type: MessageTypeRequest,
Entity: "users",
}
assert.False(t, msg.IsValid())
}
func TestMessage_IsValid_InvalidRequestMessage_NoEntity(t *testing.T) {
msg := &Message{
ID: "msg-1",
Type: MessageTypeRequest,
Operation: OperationRead,
}
assert.False(t, msg.IsValid())
}
func TestMessage_IsValid_ValidSubscriptionMessage(t *testing.T) {
msg := &Message{
ID: "sub-1",
Type: MessageTypeSubscription,
Operation: OperationSubscribe,
}
assert.True(t, msg.IsValid())
}
func TestMessage_IsValid_InvalidSubscriptionMessage_NoID(t *testing.T) {
msg := &Message{
Type: MessageTypeSubscription,
Operation: OperationSubscribe,
}
assert.False(t, msg.IsValid())
}
func TestMessage_IsValid_InvalidSubscriptionMessage_NoOperation(t *testing.T) {
msg := &Message{
ID: "sub-1",
Type: MessageTypeSubscription,
}
assert.False(t, msg.IsValid())
}
func TestMessage_IsValid_NoType(t *testing.T) {
msg := &Message{
ID: "msg-1",
}
assert.False(t, msg.IsValid())
}
func TestMessage_IsValid_ResponseMessage(t *testing.T) {
msg := &Message{
Type: MessageTypeResponse,
}
// Response messages don't require specific fields
assert.True(t, msg.IsValid())
}
func TestMessage_IsValid_NotificationMessage(t *testing.T) {
msg := &Message{
Type: MessageTypeNotification,
}
// Notification messages don't require specific fields
assert.True(t, msg.IsValid())
}
func TestMessage_ToJSON(t *testing.T) {
msg := &Message{
ID: "msg-1",
Type: MessageTypeRequest,
Operation: OperationRead,
Entity: "users",
}
jsonData, err := msg.ToJSON()
require.NoError(t, err)
assert.NotEmpty(t, jsonData)
// Parse back to verify
var parsed map[string]interface{}
err = json.Unmarshal(jsonData, &parsed)
require.NoError(t, err)
assert.Equal(t, "msg-1", parsed["id"])
assert.Equal(t, "request", parsed["type"])
assert.Equal(t, "read", parsed["operation"])
assert.Equal(t, "users", parsed["entity"])
}
func TestNewRequestMessage(t *testing.T) {
msg := NewRequestMessage("msg-1", OperationRead, "public", "users")
assert.Equal(t, "msg-1", msg.ID)
assert.Equal(t, MessageTypeRequest, msg.Type)
assert.Equal(t, OperationRead, msg.Operation)
assert.Equal(t, "public", msg.Schema)
assert.Equal(t, "users", msg.Entity)
}
func TestNewResponseMessage(t *testing.T) {
data := map[string]interface{}{"id": 1, "name": "John"}
msg := NewResponseMessage("msg-1", true, data)
assert.Equal(t, "msg-1", msg.ID)
assert.Equal(t, MessageTypeResponse, msg.Type)
assert.True(t, msg.Success)
assert.Equal(t, data, msg.Data)
assert.False(t, msg.Timestamp.IsZero())
}
func TestNewErrorResponse(t *testing.T) {
msg := NewErrorResponse("msg-1", "validation_error", "Email is required")
assert.Equal(t, "msg-1", msg.ID)
assert.Equal(t, MessageTypeResponse, msg.Type)
assert.False(t, msg.Success)
assert.Nil(t, msg.Data)
assert.NotNil(t, msg.Error)
assert.Equal(t, "validation_error", msg.Error.Code)
assert.Equal(t, "Email is required", msg.Error.Message)
assert.False(t, msg.Timestamp.IsZero())
}
func TestNewNotificationMessage(t *testing.T) {
data := map[string]interface{}{"id": 1, "name": "John"}
msg := NewNotificationMessage("sub-123", OperationCreate, "public", "users", data)
assert.Equal(t, MessageTypeNotification, msg.Type)
assert.Equal(t, OperationCreate, msg.Operation)
assert.Equal(t, "sub-123", msg.SubscriptionID)
assert.Equal(t, "public", msg.Schema)
assert.Equal(t, "users", msg.Entity)
assert.Equal(t, data, msg.Data)
assert.False(t, msg.Timestamp.IsZero())
}
func TestResponseMessage_ToJSON(t *testing.T) {
resp := NewResponseMessage("msg-1", true, map[string]interface{}{"test": "data"})
jsonData, err := resp.ToJSON()
require.NoError(t, err)
assert.NotEmpty(t, jsonData)
// Verify JSON structure
var parsed map[string]interface{}
err = json.Unmarshal(jsonData, &parsed)
require.NoError(t, err)
assert.Equal(t, "msg-1", parsed["id"])
assert.Equal(t, "response", parsed["type"])
assert.True(t, parsed["success"].(bool))
}
func TestNotificationMessage_ToJSON(t *testing.T) {
notif := NewNotificationMessage("sub-123", OperationUpdate, "public", "users", map[string]interface{}{"id": 1})
jsonData, err := notif.ToJSON()
require.NoError(t, err)
assert.NotEmpty(t, jsonData)
// Verify JSON structure
var parsed map[string]interface{}
err = json.Unmarshal(jsonData, &parsed)
require.NoError(t, err)
assert.Equal(t, "notification", parsed["type"])
assert.Equal(t, "update", parsed["operation"])
assert.Equal(t, "sub-123", parsed["subscription_id"])
}
func TestErrorInfo_Structure(t *testing.T) {
err := &ErrorInfo{
Code: "validation_error",
Message: "Invalid input",
Details: map[string]interface{}{
"field": "email",
"value": "invalid",
},
}
assert.Equal(t, "validation_error", err.Code)
assert.Equal(t, "Invalid input", err.Message)
assert.NotNil(t, err.Details)
assert.Equal(t, "email", err.Details["field"])
}
func TestMessage_WithOptions(t *testing.T) {
limit := 10
offset := 0
msg := &Message{
ID: "msg-1",
Type: MessageTypeRequest,
Operation: OperationRead,
Entity: "users",
Options: &common.RequestOptions{
Filters: []common.FilterOption{
{Column: "status", Operator: "eq", Value: "active"},
},
Columns: []string{"id", "name", "email"},
Sort: []common.SortOption{
{Column: "name", Direction: "asc"},
},
Limit: &limit,
Offset: &offset,
},
}
assert.True(t, msg.IsValid())
assert.NotNil(t, msg.Options)
assert.Len(t, msg.Options.Filters, 1)
assert.Equal(t, "status", msg.Options.Filters[0].Column)
assert.Len(t, msg.Options.Columns, 3)
assert.Len(t, msg.Options.Sort, 1)
assert.Equal(t, 10, *msg.Options.Limit)
}
func TestMessage_CompleteRequestFlow(t *testing.T) {
// Create a request message
req := NewRequestMessage("msg-123", OperationCreate, "public", "users")
req.Data = map[string]interface{}{
"name": "John Doe",
"email": "john@example.com",
"status": "active",
}
// Convert to JSON
reqJSON, err := json.Marshal(req)
require.NoError(t, err)
// Parse back
parsed, err := ParseMessage(reqJSON)
require.NoError(t, err)
assert.True(t, parsed.IsValid())
assert.Equal(t, "msg-123", parsed.ID)
assert.Equal(t, MessageTypeRequest, parsed.Type)
assert.Equal(t, OperationCreate, parsed.Operation)
// Create success response
resp := NewResponseMessage("msg-123", true, map[string]interface{}{
"id": 1,
"name": "John Doe",
"email": "john@example.com",
"status": "active",
})
respJSON, err := resp.ToJSON()
require.NoError(t, err)
assert.NotEmpty(t, respJSON)
}
func TestMessage_TimestampSerialization(t *testing.T) {
now := time.Now()
msg := &Message{
ID: "msg-1",
Type: MessageTypeResponse,
Timestamp: now,
}
jsonData, err := msg.ToJSON()
require.NoError(t, err)
// Parse back
parsed, err := ParseMessage(jsonData)
require.NoError(t, err)
// Timestamps should be approximately equal (within a second due to serialization)
assert.WithinDuration(t, now, parsed.Timestamp, time.Second)
}
func TestMessage_WithMetadata(t *testing.T) {
msg := &Message{
ID: "msg-1",
Type: MessageTypeResponse,
Success: true,
Data: []interface{}{},
Metadata: map[string]interface{}{
"total": 100,
"count": 10,
"page": 1,
},
}
jsonData, err := msg.ToJSON()
require.NoError(t, err)
parsed, err := ParseMessage(jsonData)
require.NoError(t, err)
assert.NotNil(t, parsed.Metadata)
assert.Equal(t, float64(100), parsed.Metadata["total"]) // JSON numbers are float64
assert.Equal(t, float64(10), parsed.Metadata["count"])
}


@@ -0,0 +1,192 @@
package websocketspec
import (
"sync"
"github.com/bitechdev/ResolveSpec/pkg/common"
"github.com/bitechdev/ResolveSpec/pkg/logger"
)
// Subscription represents a subscription to entity changes
type Subscription struct {
// ID is the unique subscription identifier
ID string
// ConnectionID is the ID of the connection that owns this subscription
ConnectionID string
// Schema is the database schema
Schema string
// Entity is the table/model name
Entity string
// Options contains filters and other query options
Options *common.RequestOptions
// Active indicates if the subscription is active
Active bool
}
// SubscriptionManager manages all subscriptions
type SubscriptionManager struct {
// subscriptions maps subscription ID to subscription
subscriptions map[string]*Subscription
// entitySubscriptions maps "schema.entity" to list of subscription IDs
entitySubscriptions map[string][]string
// mu protects the maps
mu sync.RWMutex
}
// NewSubscriptionManager creates a new subscription manager
func NewSubscriptionManager() *SubscriptionManager {
return &SubscriptionManager{
subscriptions: make(map[string]*Subscription),
entitySubscriptions: make(map[string][]string),
}
}
// Subscribe creates a new subscription
func (sm *SubscriptionManager) Subscribe(id, connID, schema, entity string, options *common.RequestOptions) *Subscription {
sm.mu.Lock()
defer sm.mu.Unlock()
sub := &Subscription{
ID: id,
ConnectionID: connID,
Schema: schema,
Entity: entity,
Options: options,
Active: true,
}
// Store subscription
sm.subscriptions[id] = sub
// Index by entity
key := makeEntityKey(schema, entity)
sm.entitySubscriptions[key] = append(sm.entitySubscriptions[key], id)
logger.Info("[WebSocketSpec] Subscription created: %s for %s.%s (conn: %s)", id, schema, entity, connID)
return sub
}
// Unsubscribe removes a subscription
func (sm *SubscriptionManager) Unsubscribe(subID string) bool {
sm.mu.Lock()
defer sm.mu.Unlock()
sub, exists := sm.subscriptions[subID]
if !exists {
return false
}
// Remove from entity index
key := makeEntityKey(sub.Schema, sub.Entity)
if subs, ok := sm.entitySubscriptions[key]; ok {
newSubs := make([]string, 0, len(subs)-1)
for _, id := range subs {
if id != subID {
newSubs = append(newSubs, id)
}
}
if len(newSubs) > 0 {
sm.entitySubscriptions[key] = newSubs
} else {
delete(sm.entitySubscriptions, key)
}
}
// Remove subscription
delete(sm.subscriptions, subID)
logger.Info("[WebSocketSpec] Subscription removed: %s", subID)
return true
}
// GetSubscription retrieves a subscription by ID
func (sm *SubscriptionManager) GetSubscription(subID string) (*Subscription, bool) {
sm.mu.RLock()
defer sm.mu.RUnlock()
sub, ok := sm.subscriptions[subID]
return sub, ok
}
// GetSubscriptionsByEntity retrieves all subscriptions for an entity
func (sm *SubscriptionManager) GetSubscriptionsByEntity(schema, entity string) []*Subscription {
sm.mu.RLock()
defer sm.mu.RUnlock()
key := makeEntityKey(schema, entity)
subIDs, ok := sm.entitySubscriptions[key]
if !ok {
return nil
}
result := make([]*Subscription, 0, len(subIDs))
for _, subID := range subIDs {
if sub, ok := sm.subscriptions[subID]; ok && sub.Active {
result = append(result, sub)
}
}
return result
}
// GetSubscriptionsByConnection retrieves all subscriptions for a connection
func (sm *SubscriptionManager) GetSubscriptionsByConnection(connID string) []*Subscription {
sm.mu.RLock()
defer sm.mu.RUnlock()
result := make([]*Subscription, 0)
for _, sub := range sm.subscriptions {
if sub.ConnectionID == connID && sub.Active {
result = append(result, sub)
}
}
return result
}
// Count returns the total number of active subscriptions
func (sm *SubscriptionManager) Count() int {
sm.mu.RLock()
defer sm.mu.RUnlock()
return len(sm.subscriptions)
}
// CountForEntity returns the number of subscriptions for a specific entity
func (sm *SubscriptionManager) CountForEntity(schema, entity string) int {
sm.mu.RLock()
defer sm.mu.RUnlock()
key := makeEntityKey(schema, entity)
return len(sm.entitySubscriptions[key])
}
// MatchesFilters checks if data matches the subscription's filters
func (s *Subscription) MatchesFilters(data interface{}) bool {
// If no filters, match everything
if s.Options == nil || len(s.Options.Filters) == 0 {
return true
}
// TODO: Implement filter matching logic
// For now, return true (send all notifications)
// In a full implementation, you would:
// 1. Convert data to a map
// 2. Evaluate each filter against the data
// 3. Return true only if all filters match
return true
}
// makeEntityKey creates a key for entity indexing
func makeEntityKey(schema, entity string) string {
if schema == "" {
return entity
}
return schema + "." + entity
}
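MatchesFilters is currently a fail-open stub. One possible shape for the matching logic its TODO describes, as a hedged sketch that only handles map payloads and the "eq" operator (the helper name is hypothetical and is not part of this diff):

func matchesEqFilters(sub *Subscription, data interface{}) bool {
if sub.Options == nil || len(sub.Options.Filters) == 0 {
return true
}
record, ok := data.(map[string]interface{})
if !ok {
// Unknown payload shape: fail open, matching the current stub's behaviour.
return true
}
for _, f := range sub.Options.Filters {
if f.Operator != "eq" {
// Other operators are not evaluated in this sketch.
continue
}
// Plain interface equality; comparable scalar values are assumed.
if record[f.Column] != f.Value {
return false
}
}
return true
}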


@@ -0,0 +1,434 @@
package websocketspec
import (
"testing"
"github.com/bitechdev/ResolveSpec/pkg/common"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestNewSubscriptionManager(t *testing.T) {
sm := NewSubscriptionManager()
assert.NotNil(t, sm)
assert.NotNil(t, sm.subscriptions)
assert.NotNil(t, sm.entitySubscriptions)
assert.Equal(t, 0, sm.Count())
}
func TestSubscriptionManager_Subscribe(t *testing.T) {
sm := NewSubscriptionManager()
// Create a subscription
sub := sm.Subscribe("sub-1", "conn-1", "public", "users", nil)
assert.NotNil(t, sub)
assert.Equal(t, "sub-1", sub.ID)
assert.Equal(t, "conn-1", sub.ConnectionID)
assert.Equal(t, "public", sub.Schema)
assert.Equal(t, "users", sub.Entity)
assert.True(t, sub.Active)
assert.Equal(t, 1, sm.Count())
}
func TestSubscriptionManager_Subscribe_WithOptions(t *testing.T) {
sm := NewSubscriptionManager()
options := &common.RequestOptions{
Filters: []common.FilterOption{
{Column: "status", Operator: "eq", Value: "active"},
},
}
sub := sm.Subscribe("sub-1", "conn-1", "public", "users", options)
assert.NotNil(t, sub)
assert.NotNil(t, sub.Options)
assert.Len(t, sub.Options.Filters, 1)
assert.Equal(t, "status", sub.Options.Filters[0].Column)
}
func TestSubscriptionManager_Subscribe_MultipleSubscriptions(t *testing.T) {
sm := NewSubscriptionManager()
sub1 := sm.Subscribe("sub-1", "conn-1", "public", "users", nil)
sub2 := sm.Subscribe("sub-2", "conn-1", "public", "posts", nil)
sub3 := sm.Subscribe("sub-3", "conn-2", "public", "users", nil)
assert.NotNil(t, sub1)
assert.NotNil(t, sub2)
assert.NotNil(t, sub3)
assert.Equal(t, 3, sm.Count())
}
func TestSubscriptionManager_Unsubscribe(t *testing.T) {
sm := NewSubscriptionManager()
sm.Subscribe("sub-1", "conn-1", "public", "users", nil)
assert.Equal(t, 1, sm.Count())
// Unsubscribe
ok := sm.Unsubscribe("sub-1")
assert.True(t, ok)
assert.Equal(t, 0, sm.Count())
}
func TestSubscriptionManager_Unsubscribe_NonExistent(t *testing.T) {
sm := NewSubscriptionManager()
ok := sm.Unsubscribe("non-existent")
assert.False(t, ok)
}
func TestSubscriptionManager_Unsubscribe_MultipleSubscriptions(t *testing.T) {
sm := NewSubscriptionManager()
sm.Subscribe("sub-1", "conn-1", "public", "users", nil)
sm.Subscribe("sub-2", "conn-1", "public", "posts", nil)
sm.Subscribe("sub-3", "conn-2", "public", "users", nil)
assert.Equal(t, 3, sm.Count())
// Unsubscribe one
ok := sm.Unsubscribe("sub-2")
assert.True(t, ok)
assert.Equal(t, 2, sm.Count())
// Verify the right subscription was removed
_, exists := sm.GetSubscription("sub-2")
assert.False(t, exists)
_, exists = sm.GetSubscription("sub-1")
assert.True(t, exists)
_, exists = sm.GetSubscription("sub-3")
assert.True(t, exists)
}
func TestSubscriptionManager_GetSubscription(t *testing.T) {
sm := NewSubscriptionManager()
sm.Subscribe("sub-1", "conn-1", "public", "users", nil)
// Get existing subscription
sub, exists := sm.GetSubscription("sub-1")
assert.True(t, exists)
assert.NotNil(t, sub)
assert.Equal(t, "sub-1", sub.ID)
}
func TestSubscriptionManager_GetSubscription_NonExistent(t *testing.T) {
sm := NewSubscriptionManager()
sub, exists := sm.GetSubscription("non-existent")
assert.False(t, exists)
assert.Nil(t, sub)
}
func TestSubscriptionManager_GetSubscriptionsByEntity(t *testing.T) {
sm := NewSubscriptionManager()
sm.Subscribe("sub-1", "conn-1", "public", "users", nil)
sm.Subscribe("sub-2", "conn-2", "public", "users", nil)
sm.Subscribe("sub-3", "conn-1", "public", "posts", nil)
// Get subscriptions for users entity
subs := sm.GetSubscriptionsByEntity("public", "users")
assert.Len(t, subs, 2)
// Verify subscription IDs
ids := make([]string, len(subs))
for i, sub := range subs {
ids[i] = sub.ID
}
assert.Contains(t, ids, "sub-1")
assert.Contains(t, ids, "sub-2")
}
func TestSubscriptionManager_GetSubscriptionsByEntity_NoSchema(t *testing.T) {
sm := NewSubscriptionManager()
sm.Subscribe("sub-1", "conn-1", "", "users", nil)
sm.Subscribe("sub-2", "conn-2", "", "users", nil)
// Get subscriptions without schema
subs := sm.GetSubscriptionsByEntity("", "users")
assert.Len(t, subs, 2)
}
func TestSubscriptionManager_GetSubscriptionsByEntity_NoResults(t *testing.T) {
sm := NewSubscriptionManager()
sm.Subscribe("sub-1", "conn-1", "public", "users", nil)
// Get subscriptions for non-existent entity
subs := sm.GetSubscriptionsByEntity("public", "posts")
assert.Nil(t, subs)
}
func TestSubscriptionManager_GetSubscriptionsByConnection(t *testing.T) {
sm := NewSubscriptionManager()
sm.Subscribe("sub-1", "conn-1", "public", "users", nil)
sm.Subscribe("sub-2", "conn-1", "public", "posts", nil)
sm.Subscribe("sub-3", "conn-2", "public", "users", nil)
// Get subscriptions for connection 1
subs := sm.GetSubscriptionsByConnection("conn-1")
assert.Len(t, subs, 2)
// Verify subscription IDs
ids := make([]string, len(subs))
for i, sub := range subs {
ids[i] = sub.ID
}
assert.Contains(t, ids, "sub-1")
assert.Contains(t, ids, "sub-2")
}
func TestSubscriptionManager_GetSubscriptionsByConnection_NoResults(t *testing.T) {
sm := NewSubscriptionManager()
sm.Subscribe("sub-1", "conn-1", "public", "users", nil)
// Get subscriptions for non-existent connection
subs := sm.GetSubscriptionsByConnection("conn-2")
assert.Empty(t, subs)
}
func TestSubscriptionManager_Count(t *testing.T) {
sm := NewSubscriptionManager()
assert.Equal(t, 0, sm.Count())
sm.Subscribe("sub-1", "conn-1", "public", "users", nil)
assert.Equal(t, 1, sm.Count())
sm.Subscribe("sub-2", "conn-1", "public", "posts", nil)
assert.Equal(t, 2, sm.Count())
sm.Unsubscribe("sub-1")
assert.Equal(t, 1, sm.Count())
sm.Unsubscribe("sub-2")
assert.Equal(t, 0, sm.Count())
}
func TestSubscriptionManager_CountForEntity(t *testing.T) {
sm := NewSubscriptionManager()
sm.Subscribe("sub-1", "conn-1", "public", "users", nil)
sm.Subscribe("sub-2", "conn-2", "public", "users", nil)
sm.Subscribe("sub-3", "conn-1", "public", "posts", nil)
assert.Equal(t, 2, sm.CountForEntity("public", "users"))
assert.Equal(t, 1, sm.CountForEntity("public", "posts"))
assert.Equal(t, 0, sm.CountForEntity("public", "orders"))
}
func TestSubscriptionManager_UnsubscribeUpdatesEntityIndex(t *testing.T) {
sm := NewSubscriptionManager()
sm.Subscribe("sub-1", "conn-1", "public", "users", nil)
sm.Subscribe("sub-2", "conn-2", "public", "users", nil)
assert.Equal(t, 2, sm.CountForEntity("public", "users"))
// Unsubscribe one
sm.Unsubscribe("sub-1")
assert.Equal(t, 1, sm.CountForEntity("public", "users"))
// Unsubscribe the other
sm.Unsubscribe("sub-2")
assert.Equal(t, 0, sm.CountForEntity("public", "users"))
}
func TestSubscription_MatchesFilters_NoFilters(t *testing.T) {
sub := &Subscription{
ID: "sub-1",
ConnectionID: "conn-1",
Schema: "public",
Entity: "users",
Options: nil,
Active: true,
}
data := map[string]interface{}{
"id": 1,
"name": "John",
"status": "active",
}
// Should match when no filters are specified
assert.True(t, sub.MatchesFilters(data))
}
func TestSubscription_MatchesFilters_WithFilters(t *testing.T) {
sub := &Subscription{
ID: "sub-1",
ConnectionID: "conn-1",
Schema: "public",
Entity: "users",
Options: &common.RequestOptions{
Filters: []common.FilterOption{
{Column: "status", Operator: "eq", Value: "active"},
},
},
Active: true,
}
data := map[string]interface{}{
"id": 1,
"name": "John",
"status": "active",
}
// Current implementation returns true for all data
// This test documents the expected behavior
assert.True(t, sub.MatchesFilters(data))
}
func TestSubscription_MatchesFilters_EmptyFiltersArray(t *testing.T) {
sub := &Subscription{
ID: "sub-1",
ConnectionID: "conn-1",
Schema: "public",
Entity: "users",
Options: &common.RequestOptions{
Filters: []common.FilterOption{},
},
Active: true,
}
data := map[string]interface{}{
"id": 1,
"name": "John",
}
// Should match when filters array is empty
assert.True(t, sub.MatchesFilters(data))
}
func TestMakeEntityKey(t *testing.T) {
tests := []struct {
name string
schema string
entity string
expected string
}{
{
name: "With schema",
schema: "public",
entity: "users",
expected: "public.users",
},
{
name: "Without schema",
schema: "",
entity: "users",
expected: "users",
},
{
name: "Different schema",
schema: "custom",
entity: "posts",
expected: "custom.posts",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result := makeEntityKey(tt.schema, tt.entity)
assert.Equal(t, tt.expected, result)
})
}
}
func TestSubscriptionManager_ConcurrentOperations(t *testing.T) {
sm := NewSubscriptionManager()
// This test verifies that concurrent operations don't cause race conditions
// Run with: go test -race
done := make(chan bool)
// Goroutine 1: Subscribe
go func() {
for i := 0; i < 100; i++ {
sm.Subscribe("sub-"+string(rune(i)), "conn-1", "public", "users", nil)
}
done <- true
}()
// Goroutine 2: Get subscriptions
go func() {
for i := 0; i < 100; i++ {
sm.GetSubscriptionsByEntity("public", "users")
}
done <- true
}()
// Goroutine 3: Count
go func() {
for i := 0; i < 100; i++ {
sm.Count()
}
done <- true
}()
// Wait for all goroutines
<-done
<-done
<-done
}
func TestSubscriptionManager_CompleteLifecycle(t *testing.T) {
sm := NewSubscriptionManager()
// Create subscriptions
options := &common.RequestOptions{
Filters: []common.FilterOption{
{Column: "status", Operator: "eq", Value: "active"},
},
}
sub1 := sm.Subscribe("sub-1", "conn-1", "public", "users", options)
require.NotNil(t, sub1)
assert.Equal(t, 1, sm.Count())
sub2 := sm.Subscribe("sub-2", "conn-1", "public", "posts", nil)
require.NotNil(t, sub2)
assert.Equal(t, 2, sm.Count())
// Get by entity
userSubs := sm.GetSubscriptionsByEntity("public", "users")
assert.Len(t, userSubs, 1)
assert.Equal(t, "sub-1", userSubs[0].ID)
// Get by connection
connSubs := sm.GetSubscriptionsByConnection("conn-1")
assert.Len(t, connSubs, 2)
// Get specific subscription
sub, exists := sm.GetSubscription("sub-1")
assert.True(t, exists)
assert.Equal(t, "sub-1", sub.ID)
assert.NotNil(t, sub.Options)
// Count by entity
assert.Equal(t, 1, sm.CountForEntity("public", "users"))
assert.Equal(t, 1, sm.CountForEntity("public", "posts"))
// Unsubscribe
ok := sm.Unsubscribe("sub-1")
assert.True(t, ok)
assert.Equal(t, 1, sm.Count())
assert.Equal(t, 0, sm.CountForEntity("public", "users"))
// Verify subscription is gone
_, exists = sm.GetSubscription("sub-1")
assert.False(t, exists)
// Unsubscribe second subscription
ok = sm.Unsubscribe("sub-2")
assert.True(t, ok)
assert.Equal(t, 0, sm.Count())
}

View File

@@ -0,0 +1,332 @@
// Package websocketspec provides a WebSocket-based API specification for real-time
// CRUD operations with bidirectional communication and subscription support.
//
// # Key Features
//
// - Real-time bidirectional communication over WebSocket
// - CRUD operations (Create, Read, Update, Delete)
// - Real-time subscriptions with filtering
// - Lifecycle hooks for all operations
// - Database-agnostic: Works with GORM and Bun ORM through adapters
// - Automatic change notifications to subscribers
// - Connection and subscription management
//
// # Message Protocol
//
// WebSocketSpec uses JSON messages for communication:
//
// {
// "id": "unique-message-id",
// "type": "request|response|notification|subscription",
// "operation": "read|create|update|delete|subscribe|unsubscribe",
// "schema": "public",
// "entity": "users",
// "data": {...},
// "options": {
// "filters": [...],
// "columns": [...],
// "preload": [...],
// "sort": [...],
// "limit": 10
// }
// }
//
// # Usage Example
//
// // Create handler with GORM
// handler := websocketspec.NewHandlerWithGORM(db)
//
// // Register models
// handler.Registry.RegisterModel("public.users", &User{})
//
// // Setup WebSocket endpoint
// http.HandleFunc("/ws", handler.HandleWebSocket)
//
// // Start server
// http.ListenAndServe(":8080", nil)
//
// # Client Example
//
// // Connect to WebSocket
// const ws = new WebSocket("ws://localhost:8080/ws")
//
// // Send read request
// ws.send(JSON.stringify({
// id: "msg-1",
// type: "request",
// operation: "read",
// entity: "users",
// options: {
// filters: [{column: "status", operator: "eq", value: "active"}],
// limit: 10
// }
// }))
//
// // Subscribe to changes
// ws.send(JSON.stringify({
// id: "msg-2",
// type: "subscription",
// operation: "subscribe",
// entity: "users",
// options: {
// filters: [{column: "status", operator: "eq", value: "active"}]
// }
// }))
package websocketspec
import (
"github.com/uptrace/bun"
"gorm.io/gorm"
"github.com/bitechdev/ResolveSpec/pkg/common"
"github.com/bitechdev/ResolveSpec/pkg/common/adapters/database"
"github.com/bitechdev/ResolveSpec/pkg/modelregistry"
)
// NewHandlerWithGORM creates a new Handler with GORM adapter
func NewHandlerWithGORM(db *gorm.DB) *Handler {
gormAdapter := database.NewGormAdapter(db)
registry := modelregistry.NewModelRegistry()
return NewHandler(gormAdapter, registry)
}
// NewHandlerWithBun creates a new Handler with Bun adapter
func NewHandlerWithBun(db *bun.DB) *Handler {
bunAdapter := database.NewBunAdapter(db)
registry := modelregistry.NewModelRegistry()
return NewHandler(bunAdapter, registry)
}
// NewHandlerWithDatabase creates a new Handler with a custom database adapter
func NewHandlerWithDatabase(db common.Database, registry common.ModelRegistry) *Handler {
return NewHandler(db, registry)
}
// Example usage functions for documentation:
// ExampleWithGORM shows how to use WebSocketSpec with GORM
func ExampleWithGORM(db *gorm.DB) {
// Create handler using GORM
handler := NewHandlerWithGORM(db)
// Register models
_ = handler.Registry().RegisterModel("public.users", &struct{}{})
// Register hooks (optional)
handler.Hooks().RegisterBefore(OperationRead, func(ctx *HookContext) error {
// Add custom logic before read operations
return nil
})
// Setup WebSocket endpoint
// http.HandleFunc("/ws", handler.HandleWebSocket)
// Start server
// http.ListenAndServe(":8080", nil)
}
// ExampleWithBun shows how to use WebSocketSpec with Bun ORM
func ExampleWithBun(bunDB *bun.DB) {
// Create handler using Bun
handler := NewHandlerWithBun(bunDB)
// Register models
_ = handler.Registry().RegisterModel("public.users", &struct{}{})
// Setup WebSocket endpoint
// http.HandleFunc("/ws", handler.HandleWebSocket)
}
// ExampleWithHooks shows how to use lifecycle hooks
func ExampleWithHooks(db *gorm.DB) {
handler := NewHandlerWithGORM(db)
// Register a before-read hook for authorization
handler.Hooks().RegisterBefore(OperationRead, func(ctx *HookContext) error {
// Check if user has permission to read this entity
// return fmt.Errorf("unauthorized") if not allowed
return nil
})
// Register an after-create hook for logging
handler.Hooks().RegisterAfter(OperationCreate, func(ctx *HookContext) error {
// Log the created record
// logger.Info("Created record: %v", ctx.Result)
return nil
})
// Register a before-subscribe hook to limit subscriptions
handler.Hooks().Register(BeforeSubscribe, func(ctx *HookContext) error {
// Limit number of subscriptions per connection
// if len(ctx.Connection.subscriptions) >= 10 {
// return fmt.Errorf("maximum subscriptions reached")
// }
return nil
})
}
// ExampleWithSubscriptions shows subscription usage
func ExampleWithSubscriptions() {
// Client-side JavaScript example:
/*
const ws = new WebSocket("ws://localhost:8080/ws");
// Subscribe to user changes
ws.send(JSON.stringify({
id: "sub-1",
type: "subscription",
operation: "subscribe",
schema: "public",
entity: "users",
options: {
filters: [
{column: "status", operator: "eq", value: "active"}
]
}
}));
// Handle notifications
ws.onmessage = (event) => {
const msg = JSON.parse(event.data);
if (msg.type === "notification") {
console.log("User changed:", msg.data);
console.log("Operation:", msg.operation); // create, update, or delete
}
};
// Unsubscribe
ws.send(JSON.stringify({
id: "unsub-1",
type: "subscription",
operation: "unsubscribe",
subscription_id: "sub-abc123"
}));
*/
}
// ExampleCRUDOperations shows basic CRUD operations
func ExampleCRUDOperations() {
// Client-side JavaScript example:
/*
const ws = new WebSocket("ws://localhost:8080/ws");
// CREATE - Create a new user
ws.send(JSON.stringify({
id: "create-1",
type: "request",
operation: "create",
schema: "public",
entity: "users",
data: {
name: "John Doe",
email: "john@example.com",
status: "active"
}
}));
// READ - Get all active users
ws.send(JSON.stringify({
id: "read-1",
type: "request",
operation: "read",
schema: "public",
entity: "users",
options: {
filters: [{column: "status", operator: "eq", value: "active"}],
columns: ["id", "name", "email"],
sort: [{column: "name", direction: "asc"}],
limit: 10
}
}));
// READ BY ID - Get a specific user
ws.send(JSON.stringify({
id: "read-2",
type: "request",
operation: "read",
schema: "public",
entity: "users",
record_id: "123"
}));
// UPDATE - Update a user
ws.send(JSON.stringify({
id: "update-1",
type: "request",
operation: "update",
schema: "public",
entity: "users",
record_id: "123",
data: {
name: "John Updated",
email: "john.updated@example.com"
}
}));
// DELETE - Delete a user
ws.send(JSON.stringify({
id: "delete-1",
type: "request",
operation: "delete",
schema: "public",
entity: "users",
record_id: "123"
}));
// Handle responses
ws.onmessage = (event) => {
const response = JSON.parse(event.data);
if (response.type === "response") {
if (response.success) {
console.log("Operation successful:", response.data);
} else {
console.error("Operation failed:", response.error);
}
}
};
*/
}
// ExampleAuthentication shows how to implement authentication
func ExampleAuthentication() {
// Server-side example with authentication hook:
/*
handler := NewHandlerWithGORM(db)
// Register before-connect hook for authentication
handler.Hooks().Register(BeforeConnect, func(ctx *HookContext) error {
// Extract token from query params or headers
r := ctx.Connection.ws.UnderlyingConn().RemoteAddr()
// Validate token
// token := extractToken(r)
// user, err := validateToken(token)
// if err != nil {
// return fmt.Errorf("authentication failed: %w", err)
// }
// Store user info in connection metadata
// ctx.Connection.SetMetadata("user", user)
// ctx.Connection.SetMetadata("user_id", user.ID)
return nil
})
// Use connection metadata in other hooks
handler.Hooks().RegisterBefore(OperationRead, func(ctx *HookContext) error {
// Get user from connection metadata
// userID, _ := ctx.Connection.GetMetadata("user_id")
// Add filter to only show user's own records
// if ctx.Entity == "orders" {
// ctx.Options.Filters = append(ctx.Options.Filters, common.FilterOption{
// Column: "user_id",
// Operator: "eq",
// Value: userID,
// })
// }
return nil
})
*/
}

resolvespec-js/WEBSOCKET.md Normal file
View File

@@ -0,0 +1,530 @@
# WebSocketSpec JavaScript Client
A TypeScript/JavaScript client for connecting to WebSocketSpec servers with full support for real-time subscriptions, CRUD operations, and automatic reconnection.
## Installation
```bash
npm install @warkypublic/resolvespec-js
# or
yarn add @warkypublic/resolvespec-js
# or
pnpm add @warkypublic/resolvespec-js
```
## Quick Start
```typescript
import { WebSocketClient } from '@warkypublic/resolvespec-js';
// Create client
const client = new WebSocketClient({
url: 'ws://localhost:8080/ws',
reconnect: true,
debug: true
});
// Connect
await client.connect();
// Read records
const users = await client.read('users', {
schema: 'public',
filters: [
{ column: 'status', operator: 'eq', value: 'active' }
],
limit: 10
});
// Subscribe to changes
const subscriptionId = await client.subscribe('users', (notification) => {
console.log('User changed:', notification.operation, notification.data);
}, { schema: 'public' });
// Clean up
await client.unsubscribe(subscriptionId);
client.disconnect();
```
## Features
- **Real-Time Updates**: Subscribe to entity changes and receive instant notifications
- **Full CRUD Support**: Create, read, update, and delete operations
- **TypeScript Support**: Full type definitions included
- **Auto Reconnection**: Automatic reconnection with configurable retry logic
- **Heartbeat**: Built-in keepalive mechanism
- **Event System**: Listen to connection, error, and message events
- **Promise-based API**: All async operations return promises
- **Filter & Sort**: Advanced querying with filters, sorting, and pagination
- **Preloading**: Load related entities in a single query
## Configuration
```typescript
const client = new WebSocketClient({
url: 'ws://localhost:8080/ws', // WebSocket server URL
reconnect: true, // Enable auto-reconnection
reconnectInterval: 3000, // Reconnection delay (ms)
maxReconnectAttempts: 10, // Max reconnection attempts
heartbeatInterval: 30000, // Heartbeat interval (ms)
debug: false // Enable debug logging
});
```
## API Reference
### Connection Management
#### `connect(): Promise<void>`
Connect to the WebSocket server.
```typescript
await client.connect();
```
#### `disconnect(): void`
Disconnect from the server.
```typescript
client.disconnect();
```
#### `isConnected(): boolean`
Check if currently connected.
```typescript
if (client.isConnected()) {
console.log('Connected!');
}
```
#### `getState(): ConnectionState`
Get current connection state: `'connecting'`, `'connected'`, `'disconnecting'`, `'disconnected'`, or `'reconnecting'`.
```typescript
const state = client.getState();
console.log('State:', state);
```
### CRUD Operations
#### `read<T>(entity: string, options?): Promise<T>`
Read records from an entity.
```typescript
// Read all active users
const users = await client.read('users', {
schema: 'public',
filters: [
{ column: 'status', operator: 'eq', value: 'active' }
],
columns: ['id', 'name', 'email'],
sort: [
{ column: 'name', direction: 'asc' }
],
limit: 10,
offset: 0
});
// Read single record by ID
const user = await client.read('users', {
schema: 'public',
record_id: '123'
});
// Read with preloading
const posts = await client.read('posts', {
schema: 'public',
preload: [
{
relation: 'user',
columns: ['id', 'name', 'email']
},
{
relation: 'comments',
filters: [
{ column: 'status', operator: 'eq', value: 'approved' }
]
}
]
});
```
#### `create<T>(entity: string, data: any, options?): Promise<T>`
Create a new record.
```typescript
const newUser = await client.create('users', {
name: 'John Doe',
email: 'john@example.com',
status: 'active'
}, {
schema: 'public'
});
```
#### `update<T>(entity: string, id: string, data: any, options?): Promise<T>`
Update an existing record.
```typescript
const updatedUser = await client.update('users', '123', {
name: 'John Updated',
email: 'john.new@example.com'
}, {
schema: 'public'
});
```
#### `delete(entity: string, id: string, options?): Promise<void>`
Delete a record.
```typescript
await client.delete('users', '123', {
schema: 'public'
});
```
#### `meta<T>(entity: string, options?): Promise<T>`
Get metadata for an entity.
```typescript
const metadata = await client.meta('users', {
schema: 'public'
});
console.log('Columns:', metadata.columns);
console.log('Primary key:', metadata.primary_key);
```
### Subscriptions
#### `subscribe(entity: string, callback: Function, options?): Promise<string>`
Subscribe to entity changes.
```typescript
const subscriptionId = await client.subscribe(
'users',
(notification) => {
console.log('Operation:', notification.operation); // 'create', 'update', or 'delete'
console.log('Data:', notification.data);
console.log('Timestamp:', notification.timestamp);
},
{
schema: 'public',
filters: [
{ column: 'status', operator: 'eq', value: 'active' }
]
}
);
```
#### `unsubscribe(subscriptionId: string): Promise<void>`
Unsubscribe from entity changes.
```typescript
await client.unsubscribe(subscriptionId);
```
#### `getSubscriptions(): Subscription[]`
Get list of active subscriptions.
```typescript
const subscriptions = client.getSubscriptions();
console.log('Active subscriptions:', subscriptions.length);
```
### Event Handling
#### `on(event: string, callback: Function): void`
Add event listener.
```typescript
// Connection events
client.on('connect', () => {
console.log('Connected!');
});
client.on('disconnect', (event) => {
console.log('Disconnected:', event.code, event.reason);
});
client.on('error', (error) => {
console.error('Error:', error);
});
// State changes
client.on('stateChange', (state) => {
console.log('State:', state);
});
// All messages
client.on('message', (message) => {
console.log('Message:', message);
});
```
#### `off(event: string): void`
Remove event listener.
```typescript
client.off('connect');
```
## Filter Operators
- `eq` - Equal (=)
- `neq` - Not Equal (!=)
- `gt` - Greater Than (>)
- `gte` - Greater Than or Equal (>=)
- `lt` - Less Than (<)
- `lte` - Less Than or Equal (<=)
- `like` - LIKE (case-sensitive)
- `ilike` - ILIKE (case-insensitive)
- `in` - IN (array of values)
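Operators can be mixed freely within a single request. A minimal sketch, assuming the example `users` columns used elsewhere in this document and that multiple filters are combined with AND:
```typescript
const results = await client.read('users', {
  schema: 'public',
  filters: [
    { column: 'status', operator: 'in', value: ['active', 'pending'] }, // IN (...)
    { column: 'email', operator: 'ilike', value: '%@example.com' },     // case-insensitive LIKE
    { column: 'age', operator: 'gte', value: 18 }                       // >=
  ],
  limit: 50
});
```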
## Examples
### Basic CRUD
```typescript
const client = new WebSocketClient({ url: 'ws://localhost:8080/ws' });
await client.connect();
// Create
const user = await client.create('users', {
name: 'Alice',
email: 'alice@example.com'
});
// Read
const users = await client.read('users', {
filters: [{ column: 'status', operator: 'eq', value: 'active' }]
});
// Update
await client.update('users', user.id, { name: 'Alice Updated' });
// Delete
await client.delete('users', user.id);
client.disconnect();
```
### Real-Time Subscriptions
```typescript
const client = new WebSocketClient({ url: 'ws://localhost:8080/ws' });
await client.connect();
// Subscribe to all user changes
const subId = await client.subscribe('users', (notification) => {
switch (notification.operation) {
case 'create':
console.log('New user:', notification.data);
break;
case 'update':
console.log('User updated:', notification.data);
break;
case 'delete':
console.log('User deleted:', notification.data);
break;
}
});
// Later: unsubscribe
await client.unsubscribe(subId);
```
### React Integration
```typescript
import { useEffect, useState } from 'react';
import { WebSocketClient } from '@warkypublic/resolvespec-js';
function useWebSocket(url: string) {
const [client] = useState(() => new WebSocketClient({ url }));
const [isConnected, setIsConnected] = useState(false);
useEffect(() => {
client.on('connect', () => setIsConnected(true));
client.on('disconnect', () => setIsConnected(false));
client.connect();
return () => client.disconnect();
}, [client]);
return { client, isConnected };
}
function UsersComponent() {
const { client, isConnected } = useWebSocket('ws://localhost:8080/ws');
const [users, setUsers] = useState([]);
useEffect(() => {
if (!isConnected) return;
const loadUsers = async () => {
// Subscribe to changes
await client.subscribe('users', (notification) => {
if (notification.operation === 'create') {
setUsers(prev => [...prev, notification.data]);
} else if (notification.operation === 'update') {
setUsers(prev => prev.map(u =>
u.id === notification.data.id ? notification.data : u
));
} else if (notification.operation === 'delete') {
setUsers(prev => prev.filter(u => u.id !== notification.data.id));
}
});
// Load initial data
const data = await client.read('users');
setUsers(data);
};
loadUsers();
}, [client, isConnected]);
return (
<div>
<h2>Users {isConnected ? '🟢' : '🔴'}</h2>
{users.map(user => (
<div key={user.id}>{user.name}</div>
))}
</div>
);
}
```
### TypeScript with Typed Models
```typescript
interface User {
id: number;
name: string;
email: string;
status: 'active' | 'inactive';
}
interface Post {
id: number;
title: string;
content: string;
user_id: number;
user?: User;
}
const client = new WebSocketClient({ url: 'ws://localhost:8080/ws' });
await client.connect();
// Type-safe operations
const users = await client.read<User[]>('users', {
filters: [{ column: 'status', operator: 'eq', value: 'active' }]
});
const newUser = await client.create<User>('users', {
name: 'Bob',
email: 'bob@example.com',
status: 'active'
});
// Type-safe subscriptions
await client.subscribe(
'posts',
(notification) => {
const post = notification.data as Post;
console.log('Post:', post.title);
}
);
```
### Error Handling
```typescript
const client = new WebSocketClient({
url: 'ws://localhost:8080/ws',
reconnect: true,
maxReconnectAttempts: 5
});
client.on('error', (error) => {
console.error('Connection error:', error);
});
client.on('stateChange', (state) => {
console.log('State:', state);
if (state === 'reconnecting') {
console.log('Attempting to reconnect...');
}
});
try {
await client.connect();
try {
const user = await client.read('users', { record_id: '999' });
} catch (error) {
console.error('Record not found:', error);
}
try {
await client.create('users', { /* invalid data */ });
} catch (error) {
console.error('Validation failed:', error);
}
} catch (error) {
console.error('Connection failed:', error);
}
```
### Multiple Subscriptions
```typescript
const client = new WebSocketClient({ url: 'ws://localhost:8080/ws' });
await client.connect();
// Subscribe to multiple entities
const userSub = await client.subscribe('users', (n) => {
console.log('[Users]', n.operation, n.data);
});
const postSub = await client.subscribe('posts', (n) => {
console.log('[Posts]', n.operation, n.data);
}, {
filters: [{ column: 'status', operator: 'eq', value: 'published' }]
});
const commentSub = await client.subscribe('comments', (n) => {
console.log('[Comments]', n.operation, n.data);
});
// Check active subscriptions
console.log('Active:', client.getSubscriptions().length);
// Clean up
await client.unsubscribe(userSub);
await client.unsubscribe(postSub);
await client.unsubscribe(commentSub);
```
## Best Practices
1. **Always Clean Up**: Call `disconnect()` when done to close the connection properly
2. **Use TypeScript**: Leverage type definitions for better type safety
3. **Handle Errors**: Always wrap operations in try-catch blocks
4. **Limit Subscriptions**: Keep the number of subscriptions per connection small; one filtered subscription is usually cheaper than several broad ones
5. **Use Filters**: Apply filters to subscriptions to reduce unnecessary notifications
6. **Connection State**: Check `isConnected()` before operations
7. **Event Listeners**: Remove event listeners when no longer needed with `off()`
8. **Reconnection**: Enable auto-reconnection for production apps
## Browser Support
- Chrome/Edge 88+
- Firefox 85+
- Safari 14+
- Node.js 14.16+
## License
MIT

View File

@@ -0,0 +1,7 @@
// Types
export * from './types';
export * from './websocket-types';
// WebSocket Client
export { WebSocketClient } from './websocket-client';
export { default } from './websocket-client';

View File

@@ -0,0 +1,487 @@
import { v4 as uuidv4 } from 'uuid';
import type {
WebSocketClientConfig,
WSMessage,
WSRequestMessage,
WSResponseMessage,
WSNotificationMessage,
WSOperation,
WSOptions,
Subscription,
SubscriptionOptions,
ConnectionState,
WebSocketClientEvents
} from './websocket-types';
export class WebSocketClient {
private ws: WebSocket | null = null;
private config: Required<WebSocketClientConfig>;
private messageHandlers: Map<string, (message: WSResponseMessage) => void> = new Map();
private subscriptions: Map<string, Subscription> = new Map();
private eventListeners: Partial<WebSocketClientEvents> = {};
private state: ConnectionState = 'disconnected';
private reconnectAttempts = 0;
private reconnectTimer: ReturnType<typeof setTimeout> | null = null;
private heartbeatTimer: ReturnType<typeof setInterval> | null = null;
private isManualClose = false;
constructor(config: WebSocketClientConfig) {
this.config = {
url: config.url,
reconnect: config.reconnect ?? true,
reconnectInterval: config.reconnectInterval ?? 3000,
maxReconnectAttempts: config.maxReconnectAttempts ?? 10,
heartbeatInterval: config.heartbeatInterval ?? 30000,
debug: config.debug ?? false
};
}
/**
* Connect to WebSocket server
*/
async connect(): Promise<void> {
if (this.ws?.readyState === WebSocket.OPEN) {
this.log('Already connected');
return;
}
this.isManualClose = false;
this.setState('connecting');
return new Promise((resolve, reject) => {
try {
this.ws = new WebSocket(this.config.url);
this.ws.onopen = () => {
this.log('Connected to WebSocket server');
this.setState('connected');
this.reconnectAttempts = 0;
this.startHeartbeat();
this.emit('connect');
resolve();
};
this.ws.onmessage = (event) => {
this.handleMessage(event.data);
};
this.ws.onerror = (event) => {
this.log('WebSocket error:', event);
const error = new Error('WebSocket connection error');
this.emit('error', error);
reject(error);
};
this.ws.onclose = (event) => {
this.log('WebSocket closed:', event.code, event.reason);
this.stopHeartbeat();
this.setState('disconnected');
this.emit('disconnect', event);
// Attempt reconnection if enabled and not manually closed
if (this.config.reconnect && !this.isManualClose && this.reconnectAttempts < this.config.maxReconnectAttempts) {
this.reconnectAttempts++;
this.log(`Reconnection attempt ${this.reconnectAttempts}/${this.config.maxReconnectAttempts}`);
this.setState('reconnecting');
this.reconnectTimer = setTimeout(() => {
this.connect().catch((err) => {
this.log('Reconnection failed:', err);
});
}, this.config.reconnectInterval);
}
};
} catch (error) {
reject(error);
}
});
}
/**
* Disconnect from WebSocket server
*/
disconnect(): void {
this.isManualClose = true;
if (this.reconnectTimer) {
clearTimeout(this.reconnectTimer);
this.reconnectTimer = null;
}
this.stopHeartbeat();
if (this.ws) {
this.setState('disconnecting');
this.ws.close();
this.ws = null;
}
this.setState('disconnected');
this.messageHandlers.clear();
}
/**
* Send a CRUD request and wait for response
*/
async request<T = any>(
operation: WSOperation,
entity: string,
options?: {
schema?: string;
record_id?: string;
data?: any;
options?: WSOptions;
}
): Promise<T> {
this.ensureConnected();
const id = uuidv4();
const message: WSRequestMessage = {
id,
type: 'request',
operation,
entity,
schema: options?.schema,
record_id: options?.record_id,
data: options?.data,
options: options?.options
};
return new Promise((resolve, reject) => {
// Set up response handler
this.messageHandlers.set(id, (response: WSResponseMessage) => {
if (response.success) {
resolve(response.data);
} else {
reject(new Error(response.error?.message || 'Request failed'));
}
});
// Send message
this.send(message);
// Timeout after 30 seconds
setTimeout(() => {
if (this.messageHandlers.has(id)) {
this.messageHandlers.delete(id);
reject(new Error('Request timeout'));
}
}, 30000);
});
}
/**
* Read records
*/
async read<T = any>(entity: string, options?: {
schema?: string;
record_id?: string;
filters?: import('./types').FilterOption[];
columns?: string[];
sort?: import('./types').SortOption[];
preload?: import('./types').PreloadOption[];
limit?: number;
offset?: number;
}): Promise<T> {
return this.request<T>('read', entity, {
schema: options?.schema,
record_id: options?.record_id,
options: {
filters: options?.filters,
columns: options?.columns,
sort: options?.sort,
preload: options?.preload,
limit: options?.limit,
offset: options?.offset
}
});
}
/**
* Create a record
*/
async create<T = any>(entity: string, data: any, options?: {
schema?: string;
}): Promise<T> {
return this.request<T>('create', entity, {
schema: options?.schema,
data
});
}
/**
* Update a record
*/
async update<T = any>(entity: string, id: string, data: any, options?: {
schema?: string;
}): Promise<T> {
return this.request<T>('update', entity, {
schema: options?.schema,
record_id: id,
data
});
}
/**
* Delete a record
*/
async delete(entity: string, id: string, options?: {
schema?: string;
}): Promise<void> {
await this.request('delete', entity, {
schema: options?.schema,
record_id: id
});
}
/**
* Get metadata for an entity
*/
async meta<T = any>(entity: string, options?: {
schema?: string;
}): Promise<T> {
return this.request<T>('meta', entity, {
schema: options?.schema
});
}
/**
* Subscribe to entity changes
*/
async subscribe(
entity: string,
callback: (notification: WSNotificationMessage) => void,
options?: {
schema?: string;
filters?: import('./types').FilterOption[];
}
): Promise<string> {
this.ensureConnected();
const id = uuidv4();
const message: WSMessage = {
id,
type: 'subscription',
operation: 'subscribe',
entity,
schema: options?.schema,
options: {
filters: options?.filters
}
};
return new Promise((resolve, reject) => {
this.messageHandlers.set(id, (response: WSResponseMessage) => {
if (response.success && response.data?.subscription_id) {
const subscriptionId = response.data.subscription_id;
// Store subscription
this.subscriptions.set(subscriptionId, {
id: subscriptionId,
entity,
schema: options?.schema,
options: { filters: options?.filters },
callback
});
this.log(`Subscribed to ${entity} with ID: ${subscriptionId}`);
resolve(subscriptionId);
} else {
reject(new Error(response.error?.message || 'Subscription failed'));
}
});
this.send(message);
// Timeout
setTimeout(() => {
if (this.messageHandlers.has(id)) {
this.messageHandlers.delete(id);
reject(new Error('Subscription timeout'));
}
}, 10000);
});
}
/**
* Unsubscribe from entity changes
*/
async unsubscribe(subscriptionId: string): Promise<void> {
this.ensureConnected();
const id = uuidv4();
const message: WSMessage = {
id,
type: 'subscription',
operation: 'unsubscribe',
subscription_id: subscriptionId
};
return new Promise((resolve, reject) => {
this.messageHandlers.set(id, (response: WSResponseMessage) => {
if (response.success) {
this.subscriptions.delete(subscriptionId);
this.log(`Unsubscribed from ${subscriptionId}`);
resolve();
} else {
reject(new Error(response.error?.message || 'Unsubscribe failed'));
}
});
this.send(message);
// Timeout
setTimeout(() => {
if (this.messageHandlers.has(id)) {
this.messageHandlers.delete(id);
reject(new Error('Unsubscribe timeout'));
}
}, 10000);
});
}
/**
* Get list of active subscriptions
*/
getSubscriptions(): Subscription[] {
return Array.from(this.subscriptions.values());
}
/**
* Get connection state
*/
getState(): ConnectionState {
return this.state;
}
/**
* Check if connected
*/
isConnected(): boolean {
return this.ws?.readyState === WebSocket.OPEN;
}
/**
* Add event listener
*/
on<K extends keyof WebSocketClientEvents>(event: K, callback: WebSocketClientEvents[K]): void {
this.eventListeners[event] = callback as any;
}
/**
* Remove event listener
*/
off<K extends keyof WebSocketClientEvents>(event: K): void {
delete this.eventListeners[event];
}
// Private methods
private handleMessage(data: string): void {
try {
const message: WSMessage = JSON.parse(data);
this.log('Received message:', message);
this.emit('message', message);
// Handle different message types
switch (message.type) {
case 'response':
this.handleResponse(message as WSResponseMessage);
break;
case 'notification':
this.handleNotification(message as WSNotificationMessage);
break;
case 'pong':
// Heartbeat response
break;
default:
this.log('Unknown message type:', message.type);
}
} catch (error) {
this.log('Error parsing message:', error);
}
}
private handleResponse(message: WSResponseMessage): void {
const handler = this.messageHandlers.get(message.id);
if (handler) {
handler(message);
this.messageHandlers.delete(message.id);
}
}
private handleNotification(message: WSNotificationMessage): void {
const subscription = this.subscriptions.get(message.subscription_id);
if (subscription?.callback) {
subscription.callback(message);
}
}
private send(message: WSMessage): void {
if (!this.ws || this.ws.readyState !== WebSocket.OPEN) {
throw new Error('WebSocket is not connected');
}
const data = JSON.stringify(message);
this.log('Sending message:', message);
this.ws.send(data);
}
private startHeartbeat(): void {
if (this.heartbeatTimer) {
return;
}
this.heartbeatTimer = setInterval(() => {
if (this.isConnected()) {
const pingMessage: WSMessage = {
id: uuidv4(),
type: 'ping'
};
this.send(pingMessage);
}
}, this.config.heartbeatInterval);
}
private stopHeartbeat(): void {
if (this.heartbeatTimer) {
clearInterval(this.heartbeatTimer);
this.heartbeatTimer = null;
}
}
private setState(state: ConnectionState): void {
if (this.state !== state) {
this.state = state;
this.emit('stateChange', state);
}
}
private ensureConnected(): void {
if (!this.isConnected()) {
throw new Error('WebSocket is not connected. Call connect() first.');
}
}
private emit<K extends keyof WebSocketClientEvents>(
event: K,
...args: Parameters<WebSocketClientEvents[K]>
): void {
const listener = this.eventListeners[event];
if (listener) {
(listener as any)(...args);
}
}
private log(...args: any[]): void {
if (this.config.debug) {
console.log('[WebSocketClient]', ...args);
}
}
}
export default WebSocketClient;

View File

@@ -0,0 +1,427 @@
import { WebSocketClient } from './websocket-client';
import type { WSNotificationMessage } from './websocket-types';
/**
* Example 1: Basic Usage
*/
export async function basicUsageExample() {
// Create client
const client = new WebSocketClient({
url: 'ws://localhost:8080/ws',
reconnect: true,
debug: true
});
// Connect
await client.connect();
// Read users
const users = await client.read('users', {
schema: 'public',
filters: [
{ column: 'status', operator: 'eq', value: 'active' }
],
limit: 10,
sort: [
{ column: 'name', direction: 'asc' }
]
});
console.log('Users:', users);
// Create a user
const newUser = await client.create('users', {
name: 'John Doe',
email: 'john@example.com',
status: 'active'
}, { schema: 'public' });
console.log('Created user:', newUser);
// Update user
const updatedUser = await client.update('users', '123', {
name: 'John Updated'
}, { schema: 'public' });
console.log('Updated user:', updatedUser);
// Delete user
await client.delete('users', '123', { schema: 'public' });
// Disconnect
client.disconnect();
}
/**
* Example 2: Real-time Subscriptions
*/
export async function subscriptionExample() {
const client = new WebSocketClient({
url: 'ws://localhost:8080/ws',
debug: true
});
await client.connect();
// Subscribe to user changes
const subscriptionId = await client.subscribe(
'users',
(notification: WSNotificationMessage) => {
console.log('User changed:', notification.operation, notification.data);
switch (notification.operation) {
case 'create':
console.log('New user created:', notification.data);
break;
case 'update':
console.log('User updated:', notification.data);
break;
case 'delete':
console.log('User deleted:', notification.data);
break;
}
},
{
schema: 'public',
filters: [
{ column: 'status', operator: 'eq', value: 'active' }
]
}
);
console.log('Subscribed with ID:', subscriptionId);
// Later: unsubscribe
setTimeout(async () => {
await client.unsubscribe(subscriptionId);
console.log('Unsubscribed');
client.disconnect();
}, 60000);
}
/**
* Example 3: Event Handling
*/
export async function eventHandlingExample() {
const client = new WebSocketClient({
url: 'ws://localhost:8080/ws'
});
// Listen to connection events
client.on('connect', () => {
console.log('Connected!');
});
client.on('disconnect', (event) => {
console.log('Disconnected:', event.code, event.reason);
});
client.on('error', (error) => {
console.error('WebSocket error:', error);
});
client.on('stateChange', (state) => {
console.log('State changed to:', state);
});
client.on('message', (message) => {
console.log('Received message:', message);
});
await client.connect();
// Your operations here...
}
/**
* Example 4: Multiple Subscriptions
*/
export async function multipleSubscriptionsExample() {
const client = new WebSocketClient({
url: 'ws://localhost:8080/ws',
debug: true
});
await client.connect();
// Subscribe to users
const userSubId = await client.subscribe(
'users',
(notification) => {
console.log('[Users]', notification.operation, notification.data);
},
{ schema: 'public' }
);
// Subscribe to posts
const postSubId = await client.subscribe(
'posts',
(notification) => {
console.log('[Posts]', notification.operation, notification.data);
},
{
schema: 'public',
filters: [
{ column: 'status', operator: 'eq', value: 'published' }
]
}
);
// Subscribe to comments
const commentSubId = await client.subscribe(
'comments',
(notification) => {
console.log('[Comments]', notification.operation, notification.data);
},
{ schema: 'public' }
);
console.log('Active subscriptions:', client.getSubscriptions());
// Clean up after 60 seconds
setTimeout(async () => {
await client.unsubscribe(userSubId);
await client.unsubscribe(postSubId);
await client.unsubscribe(commentSubId);
client.disconnect();
}, 60000);
}
/**
* Example 5: Advanced Queries
*/
export async function advancedQueriesExample() {
const client = new WebSocketClient({
url: 'ws://localhost:8080/ws'
});
await client.connect();
// Complex query with filters, sorting, pagination, and preloading
const posts = await client.read('posts', {
schema: 'public',
filters: [
{ column: 'status', operator: 'eq', value: 'published' },
{ column: 'views', operator: 'gte', value: 100 }
],
columns: ['id', 'title', 'content', 'user_id', 'created_at'],
sort: [
{ column: 'created_at', direction: 'desc' },
{ column: 'views', direction: 'desc' }
],
preload: [
{
relation: 'user',
columns: ['id', 'name', 'email']
},
{
relation: 'comments',
columns: ['id', 'content', 'user_id'],
filters: [
{ column: 'status', operator: 'eq', value: 'approved' }
]
}
],
limit: 20,
offset: 0
});
console.log('Posts:', posts);
// Get single record by ID
const post = await client.read('posts', {
schema: 'public',
record_id: '123'
});
console.log('Single post:', post);
client.disconnect();
}
/**
* Example 6: Error Handling
*/
export async function errorHandlingExample() {
const client = new WebSocketClient({
url: 'ws://localhost:8080/ws',
reconnect: true,
maxReconnectAttempts: 5
});
client.on('error', (error) => {
console.error('Connection error:', error);
});
client.on('stateChange', (state) => {
console.log('Connection state:', state);
});
try {
await client.connect();
try {
// Try to read non-existent entity
await client.read('nonexistent', { schema: 'public' });
} catch (error) {
console.error('Read error:', error);
}
try {
// Try to create invalid record
await client.create('users', {
// Missing required fields
}, { schema: 'public' });
} catch (error) {
console.error('Create error:', error);
}
} catch (error) {
console.error('Connection failed:', error);
} finally {
client.disconnect();
}
}
/**
* Example 7: React Integration
*/
export function reactIntegrationExample() {
const exampleCode = `
import { useEffect, useState } from 'react';
import { WebSocketClient } from '@warkypublic/resolvespec-js';
export function useWebSocket(url: string) {
const [client] = useState(() => new WebSocketClient({ url }));
const [isConnected, setIsConnected] = useState(false);
useEffect(() => {
client.on('connect', () => setIsConnected(true));
client.on('disconnect', () => setIsConnected(false));
client.connect();
return () => {
client.disconnect();
};
}, [client]);
return { client, isConnected };
}
export function UsersComponent() {
const { client, isConnected } = useWebSocket('ws://localhost:8080/ws');
const [users, setUsers] = useState([]);
useEffect(() => {
if (!isConnected) return;
// Subscribe to user changes
const subscribeToUsers = async () => {
const subId = await client.subscribe('users', (notification) => {
if (notification.operation === 'create') {
setUsers(prev => [...prev, notification.data]);
} else if (notification.operation === 'update') {
setUsers(prev => prev.map(u =>
u.id === notification.data.id ? notification.data : u
));
} else if (notification.operation === 'delete') {
setUsers(prev => prev.filter(u => u.id !== notification.data.id));
}
}, { schema: 'public' });
// Load initial users
const initialUsers = await client.read('users', {
schema: 'public',
filters: [{ column: 'status', operator: 'eq', value: 'active' }]
});
setUsers(initialUsers);
return () => client.unsubscribe(subId);
};
subscribeToUsers();
}, [client, isConnected]);
const createUser = async (name: string, email: string) => {
await client.create('users', { name, email, status: 'active' }, {
schema: 'public'
});
};
return (
<div>
<h2>Users ({users.length})</h2>
{isConnected ? '🟢 Connected' : '🔴 Disconnected'}
{/* Render users... */}
</div>
);
}
`;
console.log(exampleCode);
}
/**
* Example 8: TypeScript with Typed Models
*/
export async function typedModelsExample() {
// Define your models
interface User {
id: number;
name: string;
email: string;
status: 'active' | 'inactive';
created_at: string;
}
interface Post {
id: number;
title: string;
content: string;
user_id: number;
status: 'draft' | 'published';
views: number;
user?: User;
}
const client = new WebSocketClient({
url: 'ws://localhost:8080/ws'
});
await client.connect();
// Type-safe operations
const users = await client.read<User[]>('users', {
schema: 'public',
filters: [{ column: 'status', operator: 'eq', value: 'active' }]
});
const newUser = await client.create<User>('users', {
name: 'Alice',
email: 'alice@example.com',
status: 'active'
}, { schema: 'public' });
const posts = await client.read<Post[]>('posts', {
schema: 'public',
preload: [
{
relation: 'user',
columns: ['id', 'name', 'email']
}
]
});
// Type-safe subscriptions
await client.subscribe(
'users',
(notification) => {
const user = notification.data as User;
console.log('User changed:', user.name, user.email);
},
{ schema: 'public' }
);
client.disconnect();
}

View File

@@ -0,0 +1,110 @@
// WebSocket Message Types
export type MessageType = 'request' | 'response' | 'notification' | 'subscription' | 'error' | 'ping' | 'pong';
export type WSOperation = 'read' | 'create' | 'update' | 'delete' | 'subscribe' | 'unsubscribe' | 'meta';
// Re-export common types
export type { FilterOption, SortOption, PreloadOption, Operator, SortDirection } from './types';
export interface WSOptions {
filters?: import('./types').FilterOption[];
columns?: string[];
preload?: import('./types').PreloadOption[];
sort?: import('./types').SortOption[];
limit?: number;
offset?: number;
}
export interface WSMessage {
id?: string;
type: MessageType;
operation?: WSOperation;
schema?: string;
entity?: string;
record_id?: string;
data?: any;
options?: WSOptions;
subscription_id?: string;
success?: boolean;
error?: WSErrorInfo;
metadata?: Record<string, any>;
timestamp?: string;
}
export interface WSErrorInfo {
code: string;
message: string;
details?: Record<string, any>;
}
export interface WSRequestMessage {
id: string;
type: 'request';
operation: WSOperation;
schema?: string;
entity: string;
record_id?: string;
data?: any;
options?: WSOptions;
}
export interface WSResponseMessage {
id: string;
type: 'response';
success: boolean;
data?: any;
error?: WSErrorInfo;
metadata?: Record<string, any>;
timestamp: string;
}
export interface WSNotificationMessage {
type: 'notification';
operation: WSOperation;
subscription_id: string;
schema?: string;
entity: string;
data: any;
timestamp: string;
}
export interface WSSubscriptionMessage {
id: string;
type: 'subscription';
operation: 'subscribe' | 'unsubscribe';
schema?: string;
entity: string;
options?: WSOptions;
subscription_id?: string;
}
export interface SubscriptionOptions {
filters?: import('./types').FilterOption[];
onNotification?: (notification: WSNotificationMessage) => void;
}
export interface WebSocketClientConfig {
url: string;
reconnect?: boolean;
reconnectInterval?: number;
maxReconnectAttempts?: number;
heartbeatInterval?: number;
debug?: boolean;
}
export interface Subscription {
id: string;
entity: string;
schema?: string;
options?: WSOptions;
callback?: (notification: WSNotificationMessage) => void;
}
export type ConnectionState = 'connecting' | 'connected' | 'disconnecting' | 'disconnected' | 'reconnecting';
export interface WebSocketClientEvents {
connect: () => void;
disconnect: (event: CloseEvent) => void;
error: (error: Error) => void;
message: (message: WSMessage) => void;
stateChange: (state: ConnectionState) => void;
}

View File

@@ -14,33 +14,33 @@ NC='\033[0m' # No Color
echo -e "${GREEN}=== ResolveSpec Integration Tests ===${NC}\n"
# Check if docker-compose is available
if ! command -v docker-compose &> /dev/null; then
echo -e "${RED}Error: docker-compose is not installed${NC}"
echo "Please install docker-compose or run PostgreSQL manually"
# Check if podman compose is available
if ! command -v podman &> /dev/null; then
echo -e "${RED}Error: podman is not installed${NC}"
echo "Please install podman or run PostgreSQL manually"
echo "See INTEGRATION_TESTS.md for details"
exit 1
fi
# Clean up any existing containers and networks from previous runs
echo -e "${YELLOW}Cleaning up existing containers and networks...${NC}"
docker-compose down -v 2>/dev/null || true
podman compose down -v 2>/dev/null || true
# Start PostgreSQL
echo -e "${YELLOW}Starting PostgreSQL...${NC}"
docker-compose up -d postgres-test
podman compose up -d postgres-test
# Wait for PostgreSQL to be ready
echo -e "${YELLOW}Waiting for PostgreSQL to be ready...${NC}"
max_attempts=30
attempt=0
while ! docker-compose exec -T postgres-test pg_isready -U postgres > /dev/null 2>&1; do
while ! podman compose exec -T postgres-test pg_isready -U postgres > /dev/null 2>&1; do
attempt=$((attempt + 1))
if [ $attempt -ge $max_attempts ]; then
echo -e "${RED}Error: PostgreSQL failed to start after ${max_attempts} seconds${NC}"
docker-compose logs postgres-test
docker-compose down
podman compose logs postgres-test
podman compose down
exit 1
fi
sleep 1
@@ -51,8 +51,8 @@ echo -e "\n${GREEN}PostgreSQL is ready!${NC}\n"
# Create test databases
echo -e "${YELLOW}Creating test databases...${NC}"
docker-compose exec -T postgres-test psql -U postgres -c "CREATE DATABASE resolvespec_test;" 2>/dev/null || echo " resolvespec_test already exists"
docker-compose exec -T postgres-test psql -U postgres -c "CREATE DATABASE restheadspec_test;" 2>/dev/null || echo " restheadspec_test already exists"
podman compose exec -T postgres-test psql -U postgres -c "CREATE DATABASE resolvespec_test;" 2>/dev/null || echo " resolvespec_test already exists"
podman compose exec -T postgres-test psql -U postgres -c "CREATE DATABASE restheadspec_test;" 2>/dev/null || echo " restheadspec_test already exists"
echo -e "${GREEN}Test databases ready!${NC}\n"
# Determine which tests to run
@@ -79,6 +79,6 @@ fi
# Cleanup
echo -e "\n${YELLOW}Stopping PostgreSQL...${NC}"
docker-compose down
podman compose down
exit $EXIT_CODE

View File

@@ -19,14 +19,14 @@ Integration tests validate the full functionality of both `pkg/resolvespec` and
- Go 1.19 or later
- PostgreSQL 12 or later
- Docker and Docker Compose (optional, for easy setup)
- Podman and Podman Compose (optional, for easy setup)
## Quick Start with Docker
## Quick Start with Podman
### 1. Start PostgreSQL with Docker Compose
### 1. Start PostgreSQL with Podman Compose
```bash
docker-compose up -d postgres-test
podman compose up -d postgres-test
```
This starts a PostgreSQL container with the following default settings:
@@ -52,7 +52,7 @@ go test -tags=integration ./pkg/restheadspec -v
### 3. Stop PostgreSQL
```bash
docker-compose down
podman compose down
```
## Manual PostgreSQL Setup
@@ -161,7 +161,7 @@ If you see "connection refused" errors:
1. Check that PostgreSQL is running:
```bash
docker-compose ps
podman compose ps
```
2. Verify connection parameters:
@@ -194,10 +194,10 @@ Each test automatically cleans up its data using `TRUNCATE`. If you need a fresh
```bash
# Stop and remove containers (removes data)
docker-compose down -v
podman compose down -v
# Restart
docker-compose up -d postgres-test
podman compose up -d postgres-test
```
## CI/CD Integration

View File

@@ -119,13 +119,13 @@ Integration tests require a PostgreSQL database and use the `// +build integrati
- PostgreSQL 12+ installed and running
- Create test databases manually (see below)
### Setup with Docker
### Setup with Podman
1. **Start PostgreSQL**:
```bash
make docker-up
# or
docker-compose up -d postgres-test
podman compose up -d postgres-test
```
2. **Run Tests**:
@@ -141,10 +141,10 @@ Integration tests require a PostgreSQL database and use the `// +build integrati
```bash
make docker-down
# or
docker-compose down
podman compose down
```
### Setup without Docker
### Setup without Podman
1. **Create Databases**:
```sql
@@ -289,8 +289,8 @@ go test -tags=integration ./pkg/resolvespec -v
**Problem**: "connection refused" or "database does not exist"
**Solutions**:
1. Check PostgreSQL is running: `docker-compose ps`
2. Verify databases exist: `docker-compose exec postgres-test psql -U postgres -l`
1. Check PostgreSQL is running: `podman compose ps`
2. Verify databases exist: `podman compose exec postgres-test psql -U postgres -l`
3. Check environment variable: `echo $TEST_DATABASE_URL`
4. Recreate databases: `make clean && make docker-up`