Compare commits

...

23 Commits

Author SHA1 Message Date
Hein
ed67caf055 fix: reasheadspec customsql calls AddTablePrefixToColumns
2025-12-23 14:17:02 +02:00
Hein
63ed62a9a3 fix: Stupid logic error.
Co-authored-by: IvanX006 <ivan@bitechsystems.co.za>
Co-authored-by: Warkanum <HEIN.PUTH@GMAIL.COM>
Co-authored-by: Hein <hein@bitechsystems.co.za>
2025-12-19 16:52:34 +02:00
Hein
0525323a47 Fixed tests failing due to reponse header status
Co-authored-by: IvanX006 <ivan@bitechsystems.co.za>
Co-authored-by: Warkanum <HEIN.PUTH@GMAIL.COM>
Co-authored-by: Hein <hein@bitechsystems.co.za>
2025-12-19 16:50:16 +02:00
Hein Puth (Warkanum)
c3443f702e Merge pull request #4 from bitechdev/fix-dockers
Fixed Attempt to Fix Docker / Podman
2025-12-19 16:42:38 +02:00
Hein
45c463c117 Fixed Attempt to Fix Docker / Podman
Co-authored-by: IvanX006 <ivan@bitechsystems.co.za>
Co-authored-by: Warkanum <HEIN.PUTH@GMAIL.COM>
Co-authored-by: Hein <hein@bitechsystems.co.za>
2025-12-19 16:42:01 +02:00
Hein
84d673ce14 Added OpenAPI UI Routes
Co-authored-by: IvanX006 <ivan@bitechsystems.co.za>
Co-authored-by: Warkanum <HEIN.PUTH@GMAIL.COM>
Co-authored-by: Hein <hein@bitechsystems.co.za>
2025-12-19 16:32:14 +02:00
Hein
02fbdbd651 Cache package is pure infrastructure. Cache invalidates on create/delete from the API
2025-12-18 16:30:38 +02:00
Hein
97988e3b5e Updated bun version 2025-12-18 15:54:00 +02:00
Hein
c9838ad9d2 Bun bugfix 2025-12-18 15:22:58 +02:00
Hein
c5c0608f63 StatusPartialContent is better since we need to result to see. 2025-12-18 14:48:14 +02:00
Hein
39c3f05d21 StatusNoContent for zero length data 2025-12-18 13:34:07 +02:00
Hein
4ecd1ac17e Fixed to StatusNoContent 2025-12-18 13:20:39 +02:00
Hein
2b1aea0338 Fix null interface issue and added partial content response when content is empty 2025-12-18 13:19:57 +02:00
Hein
1e749efeb3 Fixes for not found records 2025-12-18 13:08:26 +02:00
Hein
09be676096 Resolvespec delete returns deleted record 2025-12-18 12:52:47 +02:00
Hein
e8350a70be Fixed delete record to return the record 2025-12-18 12:49:37 +02:00
Hein
5937b9eab5 Fixed the double table on update
2025-12-18 12:14:39 +02:00
Hein
7c861c708e [breaking] Another breaking change datatypes -> spectypes 2025-12-18 11:49:35 +02:00
Hein
77f39af2f9 [breaking] Moved sql types to datatypes 2025-12-18 11:43:19 +02:00
Hein
fbc1471581 Fixed panic caused by model type not being pointer in rest header spec. 2025-12-18 11:21:59 +02:00
Hein
9351093e2a Fixed order by. Added OrderExpr to database interface
2025-12-17 16:50:33 +02:00
Hein
932f12ab0a Update handler fixes for Utils bug
2025-12-12 17:01:37 +02:00
Hein
b22792bad6 Optional check for bun 2025-12-12 14:49:52 +02:00
35 changed files with 3107 additions and 401 deletions

View File

@@ -16,7 +16,7 @@ test: test-unit test-integration
# Start PostgreSQL for integration tests
docker-up:
@echo "Starting PostgreSQL container..."
@docker-compose up -d postgres-test
@podman compose up -d postgres-test
@echo "Waiting for PostgreSQL to be ready..."
@sleep 5
@echo "PostgreSQL is ready!"
@@ -24,12 +24,12 @@ docker-up:
# Stop PostgreSQL container
docker-down:
@echo "Stopping PostgreSQL container..."
@docker-compose down
@podman compose down
# Clean up Docker volumes and test data
clean:
@echo "Cleaning up..."
@docker-compose down -v
@podman compose down -v
@echo "Cleanup complete!"
# Run integration tests with Docker (full workflow)

68
go.mod
View File

@@ -11,15 +11,17 @@ require (
github.com/glebarez/sqlite v1.11.0
github.com/google/uuid v1.6.0
github.com/gorilla/mux v1.8.1
github.com/jackc/pgx/v5 v5.6.0
github.com/prometheus/client_golang v1.23.2
github.com/redis/go-redis/v9 v9.17.1
github.com/spf13/viper v1.21.0
github.com/stretchr/testify v1.11.1
github.com/testcontainers/testcontainers-go v0.40.0
github.com/tidwall/gjson v1.18.0
github.com/tidwall/sjson v1.2.5
github.com/uptrace/bun v1.2.15
github.com/uptrace/bun v1.2.16
github.com/uptrace/bun/dialect/sqlitedialect v1.2.15
github.com/uptrace/bun/dialect/sqlitedialect v1.2.16
github.com/uptrace/bun/driver/sqliteshim v1.2.15
github.com/uptrace/bun/driver/sqliteshim v1.2.16
github.com/uptrace/bunrouter v1.0.23
go.opentelemetry.io/otel v1.38.0
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.38.0
@@ -33,36 +35,68 @@ require (
)
require (
dario.cat/mergo v1.0.2 // indirect
github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1 // indirect
github.com/Microsoft/go-winio v0.6.2 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/cenkalti/backoff/v4 v4.3.0 // indirect
github.com/cenkalti/backoff/v5 v5.0.3 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/containerd/errdefs v1.0.0 // indirect
github.com/containerd/errdefs/pkg v0.3.0 // indirect
github.com/containerd/log v0.1.0 // indirect
github.com/containerd/platforms v0.2.1 // indirect
github.com/cpuguy83/dockercfg v0.3.2 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f // indirect
github.com/distribution/reference v0.6.0 // indirect
github.com/docker/docker v28.5.1+incompatible // indirect
github.com/docker/go-connections v0.6.0 // indirect
github.com/docker/go-units v0.5.0 // indirect
github.com/dustin/go-humanize v1.0.1 // indirect
github.com/ebitengine/purego v0.8.4 // indirect
github.com/felixge/httpsnoop v1.0.4 // indirect
github.com/fsnotify/fsnotify v1.9.0 // indirect
github.com/glebarez/go-sqlite v1.21.2 // indirect
github.com/go-logr/logr v1.4.3 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
github.com/go-ole/go-ole v1.2.6 // indirect
github.com/go-viper/mapstructure/v2 v2.4.0 // indirect
github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.2 // indirect
github.com/jackc/pgpassfile v1.0.0 // indirect
github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761 // indirect
github.com/jackc/pgx/v5 v5.6.0 // indirect
github.com/jackc/puddle/v2 v2.2.2 // indirect
github.com/jinzhu/inflection v1.0.0 // indirect
github.com/jinzhu/now v1.1.5 // indirect
github.com/klauspost/compress v1.18.0 // indirect
github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0 // indirect
github.com/magiconair/properties v1.8.10 // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
github.com/mattn/go-sqlite3 v1.14.28 // indirect
github.com/mattn/go-sqlite3 v1.14.32 // indirect
github.com/moby/docker-image-spec v1.3.1 // indirect
github.com/moby/go-archive v0.1.0 // indirect
github.com/moby/patternmatcher v0.6.0 // indirect
github.com/moby/sys/sequential v0.6.0 // indirect
github.com/moby/sys/user v0.4.0 // indirect
github.com/moby/sys/userns v0.1.0 // indirect
github.com/moby/term v0.5.0 // indirect
github.com/morikuni/aec v1.0.0 // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/ncruces/go-strftime v0.1.9 // indirect
github.com/ncruces/go-strftime v1.0.0 // indirect
github.com/opencontainers/go-digest v1.0.0 // indirect
github.com/opencontainers/image-spec v1.1.1 // indirect
github.com/pelletier/go-toml/v2 v2.2.4 // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c // indirect
github.com/prometheus/client_model v0.6.2 // indirect
github.com/prometheus/common v0.66.1 // indirect
github.com/prometheus/procfs v0.16.1 // indirect
github.com/puzpuzpuz/xsync/v3 v3.5.1 // indirect
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec // indirect
github.com/sagikazarmark/locafero v0.11.0 // indirect
github.com/shirou/gopsutil/v4 v4.25.6 // indirect
github.com/sirupsen/logrus v1.9.3 // indirect
github.com/sourcegraph/conc v0.3.1-0.20240121214520-5f936abd7ae8 // indirect
github.com/spf13/afero v1.15.0 // indirect
github.com/spf13/cast v1.10.0 // indirect
@@ -70,28 +104,34 @@ require (
github.com/subosito/gotenv v1.6.0 // indirect
github.com/tidwall/match v1.1.1 // indirect
github.com/tidwall/pretty v1.2.0 // indirect
github.com/tklauser/go-sysconf v0.3.12 // indirect
github.com/tklauser/numcpus v0.6.1 // indirect
github.com/tmthrgd/go-hex v0.0.0-20190904060850-447a3041c3bc // indirect
github.com/vmihailenco/msgpack/v5 v5.4.1 // indirect
github.com/vmihailenco/tagparser/v2 v2.0.0 // indirect
github.com/yusufpapurcu/wmi v1.2.4 // indirect
go.opentelemetry.io/auto/sdk v1.1.0 // indirect
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.49.0 // indirect
go.opentelemetry.io/otel/metric v1.38.0 // indirect
go.opentelemetry.io/proto/otlp v1.7.1 // indirect
go.uber.org/multierr v1.10.0 // indirect
go.yaml.in/yaml/v2 v2.4.2 // indirect
go.yaml.in/yaml/v3 v3.0.4 // indirect
golang.org/x/crypto v0.41.0 // indirect
golang.org/x/crypto v0.43.0 // indirect
golang.org/x/exp v0.0.0-20250711185948-6ae5c78190dc // indirect
golang.org/x/exp v0.0.0-20251113190631-e25ba8c21ef6 // indirect
golang.org/x/net v0.43.0 // indirect
golang.org/x/net v0.45.0 // indirect
golang.org/x/sync v0.16.0 // indirect
golang.org/x/sync v0.18.0 // indirect
golang.org/x/sys v0.35.0 // indirect
golang.org/x/sys v0.38.0 // indirect
golang.org/x/text v0.28.0 // indirect
golang.org/x/text v0.30.0 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20250825161204-c5933d9347a5 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20250825161204-c5933d9347a5 // indirect
google.golang.org/grpc v1.75.0 // indirect
google.golang.org/protobuf v1.36.8 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
modernc.org/libc v1.66.3 // indirect
modernc.org/libc v1.67.0 // indirect
modernc.org/mathutil v1.7.1 // indirect
modernc.org/memory v1.11.0 // indirect
modernc.org/sqlite v1.38.0 // indirect
modernc.org/sqlite v1.40.1 // indirect
)
replace github.com/uptrace/bun => github.com/warkanum/bun v1.2.17
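Note that go.mod now pulls in github.com/testcontainers/testcontainers-go v0.40.0 as a direct dependency and redirects github.com/uptrace/bun to the warkanum/bun fork at v1.2.17. A minimal sketch of how an integration test might start the Postgres container through testcontainers-go instead of the Makefile's podman targets — the image tag, credentials, database name, and helper name below are assumptions for illustration, not code from this changeset:

// integration_postgres_sketch_test.go — hypothetical helper, not part of this diff.
package integration

import (
	"context"
	"fmt"
	"testing"

	"github.com/testcontainers/testcontainers-go"
	"github.com/testcontainers/testcontainers-go/wait"
)

// startPostgres spins up a disposable PostgreSQL container and returns a DSN for it.
func startPostgres(t *testing.T) string {
	ctx := context.Background()
	req := testcontainers.ContainerRequest{
		Image:        "postgres:16-alpine", // assumed image; the repo's compose file may pin a different one
		ExposedPorts: []string{"5432/tcp"},
		Env: map[string]string{
			"POSTGRES_USER":     "test",
			"POSTGRES_PASSWORD": "test",
			"POSTGRES_DB":       "resolvespec_test", // assumed database name
		},
		WaitingFor: wait.ForListeningPort("5432/tcp"),
	}
	pg, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{
		ContainerRequest: req,
		Started:          true,
	})
	if err != nil {
		t.Fatalf("start postgres: %v", err)
	}
	t.Cleanup(func() { _ = pg.Terminate(ctx) })

	host, _ := pg.Host(ctx)
	port, _ := pg.MappedPort(ctx, "5432")
	return fmt.Sprintf("postgres://test:test@%s:%s/resolvespec_test?sslmode=disable", host, port.Port())
}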

170
go.sum
View File

@@ -1,5 +1,13 @@
dario.cat/mergo v1.0.2 h1:85+piFYR1tMbRrLcDwR18y4UKJ3aH1Tbzi24VRW1TK8=
dario.cat/mergo v1.0.2/go.mod h1:E/hbnu0NxMFBjpMIE34DRGLWqDy0g5FuKDhCb31ngxA=
github.com/AdaLogics/go-fuzz-headers v0.0.0-20240806141605-e8a1dd7889d6 h1:He8afgbRMd7mFxO99hRNu+6tazq8nFF9lIwo9JFroBk=
github.com/AdaLogics/go-fuzz-headers v0.0.0-20240806141605-e8a1dd7889d6/go.mod h1:8o94RPi1/7XTJvwPpRSzSUedZrtlirdB3r9Z20bi2f8=
github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1 h1:UQHMgLO+TxOElx5B5HZ4hJQsoJ/PvUvKRhJHDQXO8P8=
github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1/go.mod h1:xomTg63KZ2rFqZQzSB4Vz2SUXa1BpHTVz9L5PTmPC4E=
github.com/DATA-DOG/go-sqlmock v1.5.2 h1:OcvFkGmslmlZibjAjaHm3L//6LiuBgolP7OputlJIzU=
github.com/DATA-DOG/go-sqlmock v1.5.2/go.mod h1:88MAG/4G7SMwSE3CeA0ZKzrT5CiOU3OJ+JlNzwDqpNU=
github.com/Microsoft/go-winio v0.6.2 h1:F2VQgta7ecxGYO8k3ZZz3RS8fVIXVxONVUPlNERoyfY=
github.com/Microsoft/go-winio v0.6.2/go.mod h1:yd8OoFMLzJbo9gZq8j5qaps8bJ9aShtEA8Ipt1oGCvU=
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
github.com/bradfitz/gomemcache v0.0.0-20250403215159-8d39553ac7cf h1:TqhNAT4zKbTdLa62d2HDBFdvgSbIGB3eJE8HqhgiL9I=
@@ -8,17 +16,43 @@ github.com/bsm/ginkgo/v2 v2.12.0 h1:Ny8MWAHyOepLGlLKYmXG4IEkioBysk6GpaRTLC8zwWs=
github.com/bsm/ginkgo/v2 v2.12.0/go.mod h1:SwYbGRRDovPVboqFv0tPTcG1sN61LM1Z4ARdbAV9g4c=
github.com/bsm/gomega v1.27.10 h1:yeMWxP2pV2fG3FgAODIY8EiRE3dy0aeFYt4l7wh6yKA=
github.com/bsm/gomega v1.27.10/go.mod h1:JyEr/xRbxbtgWNi8tIEVPUYZ5Dzef52k01W3YH0H+O0=
github.com/cenkalti/backoff/v4 v4.3.0 h1:MyRJ/UdXutAwSAT+s3wNd7MfTIcy71VQueUuFK343L8=
github.com/cenkalti/backoff/v4 v4.3.0/go.mod h1:Y3VNntkOUPxTVeUxJ/G5vcM//AlwfmyYozVcomhLiZE=
github.com/cenkalti/backoff/v5 v5.0.3 h1:ZN+IMa753KfX5hd8vVaMixjnqRZ3y8CuJKRKj1xcsSM=
github.com/cenkalti/backoff/v5 v5.0.3/go.mod h1:rkhZdG3JZukswDf7f0cwqPNk4K0sa+F97BxZthm/crw=
github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/containerd/errdefs v1.0.0 h1:tg5yIfIlQIrxYtu9ajqY42W3lpS19XqdxRQeEwYG8PI=
github.com/containerd/errdefs v1.0.0/go.mod h1:+YBYIdtsnF4Iw6nWZhJcqGSg/dwvV7tyJ/kCkyJ2k+M=
github.com/containerd/errdefs/pkg v0.3.0 h1:9IKJ06FvyNlexW690DXuQNx2KA2cUJXx151Xdx3ZPPE=
github.com/containerd/errdefs/pkg v0.3.0/go.mod h1:NJw6s9HwNuRhnjJhM7pylWwMyAkmCQvQ4GpJHEqRLVk=
github.com/containerd/log v0.1.0 h1:TCJt7ioM2cr/tfR8GPbGf9/VRAX8D2B4PjzCpfX540I=
github.com/containerd/log v0.1.0/go.mod h1:VRRf09a7mHDIRezVKTRCrOq78v577GXq3bSa3EhrzVo=
github.com/containerd/platforms v0.2.1 h1:zvwtM3rz2YHPQsF2CHYM8+KtB5dvhISiXh5ZpSBQv6A=
github.com/containerd/platforms v0.2.1/go.mod h1:XHCb+2/hzowdiut9rkudds9bE5yJ7npe7dG/wG+uFPw=
github.com/cpuguy83/dockercfg v0.3.2 h1:DlJTyZGBDlXqUZ2Dk2Q3xHs/FtnooJJVaad2S9GKorA=
github.com/cpuguy83/dockercfg v0.3.2/go.mod h1:sugsbF4//dDlL/i+S+rtpIWp+5h0BHJHfjj5/jFyUJc=
github.com/creack/pty v1.1.18 h1:n56/Zwd5o6whRC5PMGretI4IdRLlmBXYNjScPaBgsbY=
github.com/creack/pty v1.1.18/go.mod h1:MOBLtS5ELjhRRrroQr9kyvTxUAFNvYEK993ew/Vr4O4=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f h1:lO4WD4F/rVNCu3HqELle0jiPLLBs70cWOduZpkS1E78=
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f/go.mod h1:cuUVRXasLTGF7a8hSLbxyZXjz+1KgoB3wDUb6vlszIc=
github.com/distribution/reference v0.6.0 h1:0IXCQ5g4/QMHHkarYzh5l+u8T3t73zM5QvfrDyIgxBk=
github.com/distribution/reference v0.6.0/go.mod h1:BbU0aIcezP1/5jX/8MP0YiH4SdvB5Y4f/wlDRiLyi3E=
github.com/docker/docker v28.5.1+incompatible h1:Bm8DchhSD2J6PsFzxC35TZo4TLGR2PdW/E69rU45NhM=
github.com/docker/docker v28.5.1+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
github.com/docker/go-connections v0.6.0 h1:LlMG9azAe1TqfR7sO+NJttz1gy6KO7VJBh+pMmjSD94=
github.com/docker/go-connections v0.6.0/go.mod h1:AahvXYshr6JgfUJGdDCs2b5EZG/vmaMAntpSFH5BFKE=
github.com/docker/go-units v0.5.0 h1:69rxXcBk27SvSaaxTtLh/8llcHD8vYHT7WSdRZ/jvr4=
github.com/docker/go-units v0.5.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
github.com/dustin/go-humanize v1.0.1 h1:GzkhY7T5VNhEkwH0PVJgjz+fX1rhBrR7pRT3mDkpeCY=
github.com/dustin/go-humanize v1.0.1/go.mod h1:Mu1zIs6XwVuF/gI1OepvI0qD18qycQx+mFykh5fBlto=
github.com/ebitengine/purego v0.8.4 h1:CF7LEKg5FFOsASUj0+QwaXf8Ht6TlFxg09+S9wz0omw=
github.com/ebitengine/purego v0.8.4/go.mod h1:iIjxzd6CiRiOG0UyXP+V1+jWqUXVjPKLAI0mRfJZTmQ=
github.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg=
github.com/felixge/httpsnoop v1.0.4/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U=
github.com/frankban/quicktest v1.14.6 h1:7Xjx+VpznH+oBnejlPUj8oUpdxnVs4f8XU8WnHkI4W8=
github.com/frankban/quicktest v1.14.6/go.mod h1:4ptaffx2x8+WTWXmUCuVU6aPUX1/Mz7zb5vbUoiM6w0=
github.com/fsnotify/fsnotify v1.9.0 h1:2Ml+OJNzbYCTzsxtv8vKSFD9PbJjmhYF14k/jKC7S9k=
@@ -36,10 +70,13 @@ github.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI=
github.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=
github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=
github.com/go-ole/go-ole v1.2.6 h1:/Fpf6oFPoeFik9ty7siob0G6Ke8QvQEuVcuChpwXzpY=
github.com/go-ole/go-ole v1.2.6/go.mod h1:pprOEPIfldk/42T2oK7lQ4v4JSDwmV0As9GaiUsvbm0=
github.com/go-viper/mapstructure/v2 v2.4.0 h1:EBsztssimR/CONLSZZ04E8qAkxNYq4Qp9LvH92wZUgs=
github.com/go-viper/mapstructure/v2 v2.4.0/go.mod h1:oJDH3BJKyqBA2TXFhDsKDGDTlndYOZ6rGS0BRZIxGhM=
github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek=
github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps=
github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
github.com/google/pprof v0.0.0-20250317173921-a4b03ec1a45e h1:ijClszYn+mADRFY17kjQEVQ1XRhq2/JR1M3sGqeJoxs=
@@ -50,6 +87,8 @@ github.com/gorilla/mux v1.8.1 h1:TuBL49tXwgrFYWhqrNgrUNEY92u81SPhu7sTdzQEiWY=
github.com/gorilla/mux v1.8.1/go.mod h1:AKf9I4AEqPTmMytcMc0KkNouC66V3BtZ4qD5fmWSiMQ=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.2 h1:8Tjv8EJ+pM1xP8mK6egEbD1OgnVTyacbefKhmbLhIhU=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.2/go.mod h1:pkJQ2tZHJ0aFOVEEot6oZmaVEZcRme73eIFmhiVuRWs=
github.com/hashicorp/golang-lru/v2 v2.0.7 h1:a+bsQ5rvGLjzHuww6tVxozPZFVghXaHOwFs4luLUK2k=
github.com/hashicorp/golang-lru/v2 v2.0.7/go.mod h1:QeFd9opnmA6QUJc5vARoKUSoFhyfM2/ZepoAG6RGpeM=
github.com/jackc/pgpassfile v1.0.0 h1:/6Hmqy13Ss2zCq62VdNG8tM1wchn8zjSGOBJ6icpsIM=
github.com/jackc/pgpassfile v1.0.0/go.mod h1:CEx0iS5ambNFdcRtxPj5JhEz+xB6uRky5eyVu/W2HEg=
github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761 h1:iCEnooe7UlwOQYpKFhBabPMi4aNAfoODPEFNiAnClxo=
@@ -71,14 +110,40 @@ github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc=
github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=
github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0 h1:6E+4a0GO5zZEnZ81pIr0yLvtUWk2if982qA3F3QD6H4=
github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0/go.mod h1:zJYVVT2jmtg6P3p1VtQj7WsuWi/y4VnjVBn7F8KPB3I=
github.com/magiconair/properties v1.8.10 h1:s31yESBquKXCV9a/ScB3ESkOjUYYv+X0rg8SYxI99mE=
github.com/magiconair/properties v1.8.10/go.mod h1:Dhd985XPs7jluiymwWYZ0G4Z61jb3vdS329zhj2hYo0=
github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/mattn/go-sqlite3 v1.14.28 h1:ThEiQrnbtumT+QMknw63Befp/ce/nUPgBPMlRFEum7A=
github.com/mattn/go-sqlite3 v1.14.32 h1:JD12Ag3oLy1zQA+BNn74xRgaBbdhbNIDYvQUEuuErjs=
github.com/mattn/go-sqlite3 v1.14.28/go.mod h1:Uh1q+B4BYcTPb+yiD3kU8Ct7aC0hY9fxUwlHK0RXw+Y=
github.com/mattn/go-sqlite3 v1.14.32/go.mod h1:Uh1q+B4BYcTPb+yiD3kU8Ct7aC0hY9fxUwlHK0RXw+Y=
github.com/moby/docker-image-spec v1.3.1 h1:jMKff3w6PgbfSa69GfNg+zN/XLhfXJGnEx3Nl2EsFP0=
github.com/moby/docker-image-spec v1.3.1/go.mod h1:eKmb5VW8vQEh/BAr2yvVNvuiJuY6UIocYsFu/DxxRpo=
github.com/moby/go-archive v0.1.0 h1:Kk/5rdW/g+H8NHdJW2gsXyZ7UnzvJNOy6VKJqueWdcQ=
github.com/moby/go-archive v0.1.0/go.mod h1:G9B+YoujNohJmrIYFBpSd54GTUB4lt9S+xVQvsJyFuo=
github.com/moby/patternmatcher v0.6.0 h1:GmP9lR19aU5GqSSFko+5pRqHi+Ohk1O69aFiKkVGiPk=
github.com/moby/patternmatcher v0.6.0/go.mod h1:hDPoyOpDY7OrrMDLaYoY3hf52gNCR/YOUYxkhApJIxc=
github.com/moby/sys/atomicwriter v0.1.0 h1:kw5D/EqkBwsBFi0ss9v1VG3wIkVhzGvLklJ+w3A14Sw=
github.com/moby/sys/atomicwriter v0.1.0/go.mod h1:Ul8oqv2ZMNHOceF643P6FKPXeCmYtlQMvpizfsSoaWs=
github.com/moby/sys/sequential v0.6.0 h1:qrx7XFUd/5DxtqcoH1h438hF5TmOvzC/lspjy7zgvCU=
github.com/moby/sys/sequential v0.6.0/go.mod h1:uyv8EUTrca5PnDsdMGXhZe6CCe8U/UiTWd+lL+7b/Ko=
github.com/moby/sys/user v0.4.0 h1:jhcMKit7SA80hivmFJcbB1vqmw//wU61Zdui2eQXuMs=
github.com/moby/sys/user v0.4.0/go.mod h1:bG+tYYYJgaMtRKgEmuueC0hJEAZWwtIbZTB+85uoHjs=
github.com/moby/sys/userns v0.1.0 h1:tVLXkFOxVu9A64/yh59slHVv9ahO9UIev4JZusOLG/g=
github.com/moby/sys/userns v0.1.0/go.mod h1:IHUYgu/kao6N8YZlp9Cf444ySSvCmDlmzUcYfDHOl28=
github.com/moby/term v0.5.0 h1:xt8Q1nalod/v7BqbG21f8mQPqH+xAaC9C3N3wfWbVP0=
github.com/moby/term v0.5.0/go.mod h1:8FzsFHVUBGZdbDsJw/ot+X+d5HLUbvklYLJ9uGfcI3Y=
github.com/morikuni/aec v1.0.0 h1:nP9CBfwrvYnBRgY6qfDQkygYDmYwOilePFkwzv4dU8A=
github.com/morikuni/aec v1.0.0/go.mod h1:BbKIizmSmc5MMPqRYbxO4ZU0S0+P200+tUnFx7PXmsc=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/ncruces/go-strftime v0.1.9 h1:bY0MQC28UADQmHmaF5dgpLmImcShSi2kHU9XLdhx/f4=
github.com/ncruces/go-strftime v1.0.0 h1:HMFp8mLCTPp341M/ZnA4qaf7ZlsbTc+miZjCLOFAw7w=
github.com/ncruces/go-strftime v0.1.9/go.mod h1:Fwc5htZGVVkseilnfgOVb9mKy6w1naJmn9CehxcKcls=
github.com/ncruces/go-strftime v1.0.0/go.mod h1:Fwc5htZGVVkseilnfgOVb9mKy6w1naJmn9CehxcKcls=
github.com/opencontainers/go-digest v1.0.0 h1:apOUWs51W5PlhuyGyz9FCeeBIOUDA/6nW8Oi/yOhh5U=
github.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3IKzErnv2BNG4W4MAM=
github.com/opencontainers/image-spec v1.1.1 h1:y0fUlFfIZhPF1W537XOLg0/fcx6zcHCJwooC2xJA040=
github.com/opencontainers/image-spec v1.1.1/go.mod h1:qpqAh3Dmcf36wStyyWU+kCeDgrGnAve2nCC8+7h8Q0M=
github.com/pelletier/go-toml/v2 v2.2.4 h1:mye9XuhQ6gvn5h28+VilKrrPoQVanw5PMw/TB0t5Ec4=
github.com/pelletier/go-toml/v2 v2.2.4/go.mod h1:2gIqNv+qfxSVS7cM2xJQKtLSTLUE9V8t9Stt+h56mCY=
github.com/pingcap/errors v0.11.4 h1:lFuQV/oaUMGcD2tqt+01ROSmJs75VG1ToEOkZIZ4nE4=
@@ -87,6 +152,8 @@ github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c h1:ncq/mPwQF4JjgDlrVEn3C11VoGHZN7m8qihwgMEtzYw=
github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c/go.mod h1:OmDBASR4679mdNQnz2pUhc2G8CO2JrUAVFDRBDP/hJE=
github.com/prometheus/client_golang v1.23.2 h1:Je96obch5RDVy3FDMndoUsjAhG5Edi49h0RJWRi/o0o=
github.com/prometheus/client_golang v1.23.2/go.mod h1:Tb1a6LWHB3/SPIzCoaDXI4I8UHKeFTEQ1YCr+0Gyqmg=
github.com/prometheus/client_model v0.6.2 h1:oBsgwpGs7iVziMvrGhE53c/GrLUsZdHnqNwqPLxwZyk=
@@ -105,6 +172,10 @@ github.com/rogpeppe/go-internal v1.13.1 h1:KvO1DLK/DRN07sQ1LQKScxyZJuNnedQ5/wKSR
github.com/rogpeppe/go-internal v1.13.1/go.mod h1:uMEvuHeurkdAXX61udpOXGD/AzZDWNMNyH2VO9fmH0o=
github.com/sagikazarmark/locafero v0.11.0 h1:1iurJgmM9G3PA/I+wWYIOw/5SyBtxapeHDcg+AAIFXc=
github.com/sagikazarmark/locafero v0.11.0/go.mod h1:nVIGvgyzw595SUSUE6tvCp3YYTeHs15MvlmU87WwIik=
github.com/shirou/gopsutil/v4 v4.25.6 h1:kLysI2JsKorfaFPcYmcJqbzROzsBWEOAtw6A7dIfqXs=
github.com/shirou/gopsutil/v4 v4.25.6/go.mod h1:PfybzyydfZcN+JMMjkF6Zb8Mq1A/VcogFFg7hj50W9c=
github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ=
github.com/sirupsen/logrus v1.9.3/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=
github.com/sourcegraph/conc v0.3.1-0.20240121214520-5f936abd7ae8 h1:+jumHNA0Wrelhe64i8F6HNlS8pkoyMv5sreGx2Ry5Rw=
github.com/sourcegraph/conc v0.3.1-0.20240121214520-5f936abd7ae8/go.mod h1:3n1Cwaq1E1/1lhQhtRK2ts/ZwZEhjcQeJQ1RuC6Q/8U=
github.com/spf13/afero v1.15.0 h1:b/YBCLWAJdFWJTN9cLhiXXcD7mzKn9Dm86dNnfyQw1I=
@@ -116,12 +187,16 @@ github.com/spf13/pflag v1.0.10/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3A
github.com/spf13/viper v1.21.0 h1:x5S+0EU27Lbphp4UKm1C+1oQO+rKx36vfCoaVebLFSU=
github.com/spf13/viper v1.21.0/go.mod h1:P0lhsswPGWD/1lZJ9ny3fYnVqxiegrlNrEmgLjbTCAY=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.5.2 h1:xuMeJ0Sdp5ZMRXx/aWO6RZxdr3beISkG5/G/aIRr3pY=
github.com/stretchr/objx v0.5.2/go.mod h1:FRsXN1f5AsAjCGJKqEizvkpNtU+EGNCLh3NxZ/8L+MA=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
github.com/subosito/gotenv v1.6.0 h1:9NlTDc1FTs4qu0DDq7AEtTPNw6SVm7uBMsUCUjABIf8=
github.com/subosito/gotenv v1.6.0/go.mod h1:Dk4QP5c2W3ibzajGcXpNraDfq2IrhjMIvMSWPKKo0FU=
github.com/testcontainers/testcontainers-go v0.40.0 h1:pSdJYLOVgLE8YdUY2FHQ1Fxu+aMnb6JfVz1mxk7OeMU=
github.com/testcontainers/testcontainers-go v0.40.0/go.mod h1:FSXV5KQtX2HAMlm7U3APNyLkkap35zNLxukw9oBi/MY=
github.com/tidwall/gjson v1.14.2/go.mod h1:/wbyibRr2FHMks5tjHJ5F8dMZh3AcwJEMf5vlfC0lxk=
github.com/tidwall/gjson v1.18.0 h1:FIDeeyB800efLX89e5a8Y0BNH+LOngJyGrIWxG2FKQY=
github.com/tidwall/gjson v1.18.0/go.mod h1:/wbyibRr2FHMks5tjHJ5F8dMZh3AcwJEMf5vlfC0lxk=
@@ -131,28 +206,38 @@ github.com/tidwall/pretty v1.2.0 h1:RWIZEg2iJ8/g6fDDYzMpobmaoGh5OLl4AXtGUGPcqCs=
github.com/tidwall/pretty v1.2.0/go.mod h1:ITEVvHYasfjBbM0u2Pg8T2nJnzm8xPwvNhhsoaGGjNU=
github.com/tidwall/sjson v1.2.5 h1:kLy8mja+1c9jlljvWTlSazM7cKDRfJuR/bOJhcY5NcY=
github.com/tidwall/sjson v1.2.5/go.mod h1:Fvgq9kS/6ociJEDnK0Fk1cpYF4FIW6ZF7LAe+6jwd28=
github.com/tklauser/go-sysconf v0.3.12 h1:0QaGUFOdQaIVdPgfITYzaTegZvdCjmYO52cSFAEVmqU=
github.com/tklauser/go-sysconf v0.3.12/go.mod h1:Ho14jnntGE1fpdOqQEEaiKRpvIavV0hSfmBq8nJbHYI=
github.com/tklauser/numcpus v0.6.1 h1:ng9scYS7az0Bk4OZLvrNXNSAO2Pxr1XXRAPyjhIx+Fk=
github.com/tklauser/numcpus v0.6.1/go.mod h1:1XfjsgE2zo8GVw7POkMbHENHzVg3GzmoZ9fESEdAacY=
github.com/tmthrgd/go-hex v0.0.0-20190904060850-447a3041c3bc h1:9lRDQMhESg+zvGYmW5DyG0UqvY96Bu5QYsTLvCHdrgo=
github.com/tmthrgd/go-hex v0.0.0-20190904060850-447a3041c3bc/go.mod h1:bciPuU6GHm1iF1pBvUfxfsH0Wmnc2VbpgvbI9ZWuIRs=
github.com/uptrace/bun v1.2.15 h1:Ut68XRBLDgp9qG9QBMa9ELWaZOmzHNdczHQdrOZbEFE=
github.com/uptrace/bun v1.2.15/go.mod h1:Eghz7NonZMiTX/Z6oKYytJ0oaMEJ/eq3kEV4vSqG038=
github.com/uptrace/bun/dialect/sqlitedialect v1.2.15 h1:7upGMVjFRB1oI78GQw6ruNLblYn5CR+kxqcbbeBBils=
github.com/uptrace/bun/dialect/sqlitedialect v1.2.15/go.mod h1:c7YIDaPNS2CU2uI1p7umFuFWkuKbDcPDDvp+DLHZnkI=
github.com/uptrace/bun/driver/sqliteshim v1.2.15 h1:M/rZJSjOPV4OmfTVnDPtL+wJmdMTqDUn8cuk5ycfABA=
github.com/uptrace/bun/driver/sqliteshim v1.2.15/go.mod h1:YqwxFyvM992XOCpGJtXyKPkgkb+aZpIIMzGbpaw1hIk=
github.com/uptrace/bun/dialect/sqlitedialect v1.2.16 h1:6wVAiYLj1pMibRthGwy4wDLa3D5AQo32Y8rvwPd8CQ0=
github.com/uptrace/bun/dialect/sqlitedialect v1.2.16/go.mod h1:Z7+5qK8CGZkDQiPMu+LSdVuDuR1I5jcwtkB1Pi3F82E=
github.com/uptrace/bun/driver/sqliteshim v1.2.16 h1:M6Dh5kkDWFbUWBrOsIE1g1zdZ5JbSytTD4piFRBOUAI=
github.com/uptrace/bun/driver/sqliteshim v1.2.16/go.mod h1:iKdJ06P3XS+pwKcONjSIK07bbhksH3lWsw3mpfr0+bY=
github.com/uptrace/bunrouter v1.0.23 h1:Bi7NKw3uCQkcA/GUCtDNPq5LE5UdR9pe+UyWbjHB/wU=
github.com/uptrace/bunrouter v1.0.23/go.mod h1:O3jAcl+5qgnF+ejhgkmbceEk0E/mqaK+ADOocdNpY8M=
github.com/vmihailenco/msgpack/v5 v5.4.1 h1:cQriyiUvjTwOHg8QZaPihLWeRAAVoCpE00IUPn0Bjt8=
github.com/vmihailenco/msgpack/v5 v5.4.1/go.mod h1:GaZTsDaehaPpQVyxrf5mtQlH+pc21PIudVV/E3rRQok=
github.com/vmihailenco/tagparser/v2 v2.0.0 h1:y09buUbR+b5aycVFQs/g70pqKVZNBmxwAhO7/IwNM9g=
github.com/vmihailenco/tagparser/v2 v2.0.0/go.mod h1:Wri+At7QHww0WTrCBeu4J6bNtoV6mEfg5OIWRZA9qds=
github.com/warkanum/bun v1.2.17 h1:HP8eTuKSNcqMDhhIPFxEbgV/yct6RR0/c3qHH3PNZUA=
github.com/warkanum/bun v1.2.17/go.mod h1:jMoNg2n56ckaawi/O/J92BHaECmrz6IRjuMWqlMaMTM=
github.com/yusufpapurcu/wmi v1.2.4 h1:zFUKzehAFReQwLys1b/iSMl+JQGSCSjtVqQn9bBrPo0=
github.com/yusufpapurcu/wmi v1.2.4/go.mod h1:SBZ9tNy3G9/m5Oi98Zks0QjeHVDvuK0qfxQmPyzfmi0=
go.opentelemetry.io/auto/sdk v1.1.0 h1:cH53jehLUN6UFLY71z+NDOiNJqDdPRaXzTel0sJySYA=
go.opentelemetry.io/auto/sdk v1.1.0/go.mod h1:3wSPjt5PWp2RhlCcmmOial7AvC4DQqZb7a7wCow3W8A=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.49.0 h1:jq9TW8u3so/bN+JPT166wjOI6/vQPF6Xe7nMNIltagk=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.49.0/go.mod h1:p8pYQP+m5XfbZm9fxtSKAbM6oIllS7s2AfxrChvc7iw=
go.opentelemetry.io/otel v1.38.0 h1:RkfdswUDRimDg0m2Az18RKOsnI8UDzppJAtj01/Ymk8=
go.opentelemetry.io/otel v1.38.0/go.mod h1:zcmtmQ1+YmQM9wrNsTGV/q/uyusom3P8RxwExxkZhjM=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.38.0 h1:GqRJVj7UmLjCVyVJ3ZFLdPRmhDUp2zFmQe3RHIOsw24=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.38.0/go.mod h1:ri3aaHSmCTVYu2AWv44YMauwAQc0aqI9gHKIcSbI1pU=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.38.0 h1:lwI4Dc5leUqENgGuQImwLo4WnuXFPetmPpkLi2IrX54=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.38.0/go.mod h1:Kz/oCE7z5wuyhPxsXDuaPteSWqjSBD5YaSdbxZYGbGk=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.19.0 h1:IeMeyr1aBvBiPVYihXIaeIZba6b8E1bYp7lbdxK8CQg=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.19.0/go.mod h1:oVdCUtjq9MK9BlS7TtucsQwUcXcymNiEDjgDD2jMtZU=
go.opentelemetry.io/otel/metric v1.38.0 h1:Kl6lzIYGAh5M159u9NgiRkmoMKjvbsKtYRwgfrA6WpA=
go.opentelemetry.io/otel/metric v1.38.0/go.mod h1:kB5n/QoRM8YwmUahxvI3bO34eVtQf2i4utNVLr9gEmI=
go.opentelemetry.io/otel/sdk v1.38.0 h1:l48sr5YbNf2hpCUj/FoGhW9yDkl+Ma+LrVl8qaM5b+E=
@@ -173,25 +258,34 @@ go.yaml.in/yaml/v2 v2.4.2 h1:DzmwEr2rDGHl7lsFgAHxmNz/1NlQ7xLIrlN2h5d1eGI=
go.yaml.in/yaml/v2 v2.4.2/go.mod h1:081UH+NErpNdqlCXm3TtEran0rJZGxAYx9hb/ELlsPU=
go.yaml.in/yaml/v3 v3.0.4 h1:tfq32ie2Jv2UxXFdLJdh3jXuOzWiL1fo0bu/FbuKpbc=
go.yaml.in/yaml/v3 v3.0.4/go.mod h1:DhzuOOF2ATzADvBadXxruRBLzYTpT36CKvDb3+aBEFg=
golang.org/x/crypto v0.41.0 h1:WKYxWedPGCTVVl5+WHSSrOBT0O8lx32+zxmHxijgXp4=
golang.org/x/crypto v0.43.0 h1:dduJYIi3A3KOfdGOHX8AVZ/jGiyPa3IbBozJ5kNuE04=
golang.org/x/crypto v0.41.0/go.mod h1:pO5AFd7FA68rFak7rOAGVuygIISepHftHnr8dr6+sUc=
golang.org/x/crypto v0.43.0/go.mod h1:BFbav4mRNlXJL4wNeejLpWxB7wMbc79PdRGhWKncxR0=
golang.org/x/exp v0.0.0-20250711185948-6ae5c78190dc h1:TS73t7x3KarrNd5qAipmspBDS1rkMcgVG/fS1aRb4Rc=
golang.org/x/exp v0.0.0-20251113190631-e25ba8c21ef6 h1:zfMcR1Cs4KNuomFFgGefv5N0czO2XZpUbxGUy8i8ug0=
golang.org/x/exp v0.0.0-20250711185948-6ae5c78190dc/go.mod h1:A+z0yzpGtvnG90cToK5n2tu8UJVP2XUATh+r+sfOOOc=
golang.org/x/exp v0.0.0-20251113190631-e25ba8c21ef6/go.mod h1:46edojNIoXTNOhySWIWdix628clX9ODXwPsQuG6hsK0=
golang.org/x/mod v0.26.0 h1:EGMPT//Ezu+ylkCijjPc+f4Aih7sZvaAr+O3EHBxvZg=
golang.org/x/mod v0.30.0 h1:fDEXFVZ/fmCKProc/yAXXUijritrDzahmwwefnjoPFk=
golang.org/x/mod v0.26.0/go.mod h1:/j6NAhSk8iQ723BGAUyoAcn7SlD7s15Dp9Nd/SfeaFQ=
golang.org/x/mod v0.30.0/go.mod h1:lAsf5O2EvJeSFMiBxXDki7sCgAxEUcZHXoXMKT4GJKc=
golang.org/x/net v0.43.0 h1:lat02VYK2j4aLzMzecihNvTlJNQUq316m2Mr9rnM6YE=
golang.org/x/net v0.45.0 h1:RLBg5JKixCy82FtLJpeNlVM0nrSqpCRYzVU1n8kj0tM=
golang.org/x/net v0.43.0/go.mod h1:vhO1fvI4dGsIjh73sWfUVjj3N7CA9WkKJNQm2svM6Jg=
golang.org/x/net v0.45.0/go.mod h1:ECOoLqd5U3Lhyeyo/QDCEVQ4sNgYsqvCZ722XogGieY=
golang.org/x/sync v0.16.0 h1:ycBJEhp9p4vXvUZNszeOq0kGTPghopOL8q0fq3vstxw=
golang.org/x/sync v0.18.0 h1:kr88TuHDroi+UVf+0hZnirlk8o8T+4MrK6mr60WkH/I=
golang.org/x/sync v0.16.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
golang.org/x/sync v0.18.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201204225414-ed752295db88/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210616094352-59db8d763f22/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.35.0 h1:vz1N37gP5bs89s7He8XuIYXpyY0+QlsKmzipCbUtyxI=
golang.org/x/sys v0.35.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.11.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.38.0 h1:3yZWxaJjBmCWXqhN1qh02AkOnCQ1poK6oF+a7xWL6Gc=
golang.org/x/sys v0.38.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/text v0.28.0 h1:rhazDwis8INMIwQ4tpjLDzUhx6RlXqZNPEM0huQojng=
golang.org/x/text v0.28.0/go.mod h1:U8nCwOR8jO/marOQ0QbDiOngZVEBB7MAiitBuMjXiNU=
golang.org/x/term v0.36.0 h1:zMPR+aF8gfksFprF/Nc/rd1wRS1EI6nDBGyWAvDzx2Q=
golang.org/x/term v0.36.0/go.mod h1:Qu394IJq6V6dCBRgwqshf3mPF85AqzYEzofzRdZkWss=
golang.org/x/text v0.30.0 h1:yznKA/E9zq54KzlzBEAWn1NXSQ8DIp/NYMy88xJjl4k=
golang.org/x/text v0.30.0/go.mod h1:yDdHFIX9t+tORqspjENWgzaCVXgk0yYnYuSZ8UzzBVM=
golang.org/x/time v0.14.0 h1:MRx4UaLrDotUKUdCIqzPC48t1Y9hANFKIRpNx+Te8PI=
golang.org/x/time v0.14.0/go.mod h1:eL/Oa2bBBK0TkX57Fyni+NgnyQQN4LitPmob2Hjnqw4=
golang.org/x/tools v0.35.0 h1:mBffYraMEf7aa0sB+NuKnuCy8qI/9Bughn8dC2Gu5r0=
golang.org/x/tools v0.39.0 h1:ik4ho21kwuQln40uelmciQPp9SipgNDdrafrYA4TmQQ=
golang.org/x/tools v0.35.0/go.mod h1:NKdj5HkL/73byiZSJjqJgKn3ep7KjFkBOkR/Hps3VPw=
golang.org/x/tools v0.39.0/go.mod h1:JnefbkDPyD8UU2kI5fuf8ZX4/yUeh9W877ZeBONxUqQ=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
gonum.org/v1/gonum v0.16.0 h1:5+ul4Swaf3ESvrOnidPp4GZbzf0mxVQpDCYUQE7OJfk=
gonum.org/v1/gonum v0.16.0/go.mod h1:fef3am4MQ93R2HHpKnLk4/Tbh/s0+wqD5nfa6Pnwy4E=
google.golang.org/genproto/googleapis/api v0.0.0-20250825161204-c5933d9347a5 h1:BIRfGDEjiHRrk0QKZe3Xv2ieMhtgRGeLcZQ0mIVn4EY=
@@ -212,18 +306,22 @@ gorm.io/driver/postgres v1.6.0 h1:2dxzU8xJ+ivvqTRph34QX+WrRaJlmfyPqXmoGVjMBa4=
gorm.io/driver/postgres v1.6.0/go.mod h1:vUw0mrGgrTK+uPHEhAdV4sfFELrByKVGnaVRkXDhtWo=
gorm.io/gorm v1.25.12 h1:I0u8i2hWQItBq1WfE0o2+WuL9+8L21K9e2HHSTE/0f8=
gorm.io/gorm v1.25.12/go.mod h1:xh7N7RHfYlNc5EmcI/El95gXusucDrQnHXe0+CgWcLQ=
gotest.tools/v3 v3.5.2 h1:7koQfIKdy+I8UTetycgUqXWSDwpgv193Ka+qRsmBY8Q=
gotest.tools/v3 v3.5.2/go.mod h1:LtdLGcnqToBH83WByAAi/wiwSFCArdFIUV/xxN4pcjA=
modernc.org/cc/v4 v4.26.2 h1:991HMkLjJzYBIfha6ECZdjrIYz2/1ayr+FL8GN+CNzM=
modernc.org/cc/v4 v4.26.2/go.mod h1:uVtb5OGqUKpoLWhqwNQo/8LwvoiEBLvZXIQ/SmO6mL0=
modernc.org/cc/v4 v4.27.1 h1:9W30zRlYrefrDV2JE2O8VDtJ1yPGownxciz5rrbQZis=
modernc.org/cc/v4 v4.27.1/go.mod h1:uVtb5OGqUKpoLWhqwNQo/8LwvoiEBLvZXIQ/SmO6mL0=
modernc.org/ccgo/v4 v4.28.0 h1:rjznn6WWehKq7dG4JtLRKxb52Ecv8OUGah8+Z/SfpNU=
modernc.org/ccgo/v4 v4.28.0/go.mod h1:JygV3+9AV6SmPhDasu4JgquwU81XAKLd3OKTUDNOiKE=
modernc.org/ccgo/v4 v4.30.1 h1:4r4U1J6Fhj98NKfSjnPUN7Ze2c6MnAdL0hWw6+LrJpc=
modernc.org/ccgo/v4 v4.30.1/go.mod h1:bIOeI1JL54Utlxn+LwrFyjCx2n2RDiYEaJVSrgdrRfM=
modernc.org/fileutil v1.3.8 h1:qtzNm7ED75pd1C7WgAGcK4edm4fvhtBsEiI/0NQ54YM=
modernc.org/fileutil v1.3.8/go.mod h1:HxmghZSZVAz/LXcMNwZPA/DRrQZEVP9VX0V4LQGQFOc=
modernc.org/fileutil v1.3.40 h1:ZGMswMNc9JOCrcrakF1HrvmergNLAmxOPjizirpfqBA=
modernc.org/fileutil v1.3.40/go.mod h1:HxmghZSZVAz/LXcMNwZPA/DRrQZEVP9VX0V4LQGQFOc=
modernc.org/gc/v2 v2.6.5 h1:nyqdV8q46KvTpZlsw66kWqwXRHdjIlJOhG6kxiV/9xI=
modernc.org/gc/v2 v2.6.5/go.mod h1:YgIahr1ypgfe7chRuJi2gD7DBQiKSLMPgBQe9oIiito=
modernc.org/gc/v3 v3.1.1 h1:k8T3gkXWY9sEiytKhcgyiZ2L0DTyCQ/nvX+LoCljoRE=
modernc.org/gc/v3 v3.1.1/go.mod h1:HFK/6AGESC7Ex+EZJhJ2Gni6cTaYpSMmU/cT9RmlfYY=
modernc.org/goabi0 v0.2.0 h1:HvEowk7LxcPd0eq6mVOAEMai46V+i7Jrj13t4AzuNks=
modernc.org/goabi0 v0.2.0/go.mod h1:CEFRnnJhKvWT1c1JTI3Avm+tgOWbkOu5oPA8eH8LnMI=
modernc.org/libc v1.66.3 h1:cfCbjTUcdsKyyZZfEUKfoHcP3S0Wkvz3jgSzByEWVCQ=
modernc.org/libc v1.67.0 h1:QzL4IrKab2OFmxA3/vRYl0tLXrIamwrhD6CKD4WBVjQ=
modernc.org/libc v1.66.3/go.mod h1:XD9zO8kt59cANKvHPXpx7yS2ELPheAey0vjIuZOhOU8=
modernc.org/libc v1.67.0/go.mod h1:QvvnnJ5P7aitu0ReNpVIEyesuhmDLQ8kaEoyMjIFZJA=
modernc.org/mathutil v1.7.1 h1:GCZVGXdaN8gTqB1Mf/usp1Y/hSqgI2vAGGP4jZMCxOU=
modernc.org/mathutil v1.7.1/go.mod h1:4p5IwJITfppl0G4sUEDtCr4DthTaT47/N3aT6MhfgJg=
modernc.org/memory v1.11.0 h1:o4QC8aMQzmcwCK3t3Ux/ZHmwFPzE6hf2Y5LbkRs+hbI=
@@ -232,8 +330,8 @@ modernc.org/opt v0.1.4 h1:2kNGMRiUjrp4LcaPuLY2PzUfqM/w9N23quVwhKt5Qm8=
modernc.org/opt v0.1.4/go.mod h1:03fq9lsNfvkYSfxrfUhZCWPk1lm4cq4N+Bh//bEtgns=
modernc.org/sortutil v1.2.1 h1:+xyoGf15mM3NMlPDnFqrteY07klSFxLElE2PVuWIJ7w=
modernc.org/sortutil v1.2.1/go.mod h1:7ZI3a3REbai7gzCLcotuw9AC4VZVpYMjDzETGsSMqJE=
modernc.org/sqlite v1.38.0 h1:+4OrfPQ8pxHKuWG4md1JpR/EYAh3Md7TdejuuzE7EUI=
modernc.org/sqlite v1.40.1 h1:VfuXcxcUWWKRBuP8+BR9L7VnmusMgBNNnBYGEe9w/iY=
modernc.org/sqlite v1.38.0/go.mod h1:1Bj+yES4SVvBZ4cBOpVZ6QgesMCKpJZDq0nxYzOpmNE=
modernc.org/sqlite v1.40.1/go.mod h1:9fjQZ0mB1LLP0GYrp39oOJXx/I2sxEnZtzCmEQIKvGE=
modernc.org/strutil v1.2.1 h1:UneZBkQA+DX2Rp35KcM69cSsNES9ly8mQWD71HKlOA0=
modernc.org/strutil v1.2.1/go.mod h1:EHkiggD70koQxjVdSBM3JKM7k6L0FbGE5eymy9i3B9A=
modernc.org/token v1.1.0 h1:Xl7Ap9dKaEs5kLoOQeQmPWevfnk/DM5qcLcYlA8ys6Y= modernc.org/token v1.1.0 h1:Xl7Ap9dKaEs5kLoOQeQmPWevfnk/DM5qcLcYlA8ys6Y=

View File

@@ -57,11 +57,31 @@ func (c *Cache) SetBytes(ctx context.Context, key string, value []byte, ttl time
return c.provider.Set(ctx, key, value, ttl)
}
// SetWithTags serializes and stores a value in the cache with the specified TTL and tags.
func (c *Cache) SetWithTags(ctx context.Context, key string, value interface{}, ttl time.Duration, tags []string) error {
data, err := json.Marshal(value)
if err != nil {
return fmt.Errorf("failed to serialize: %w", err)
}
return c.provider.SetWithTags(ctx, key, data, ttl, tags)
}
// SetBytesWithTags stores raw bytes in the cache with the specified TTL and tags.
func (c *Cache) SetBytesWithTags(ctx context.Context, key string, value []byte, ttl time.Duration, tags []string) error {
return c.provider.SetWithTags(ctx, key, value, ttl, tags)
}
// Delete removes a key from the cache.
func (c *Cache) Delete(ctx context.Context, key string) error {
return c.provider.Delete(ctx, key)
}
// DeleteByTag removes all keys associated with the given tag.
func (c *Cache) DeleteByTag(ctx context.Context, tag string) error {
return c.provider.DeleteByTag(ctx, tag)
}
// DeleteByPattern removes all keys matching the pattern.
func (c *Cache) DeleteByPattern(ctx context.Context, pattern string) error {
return c.provider.DeleteByPattern(ctx, pattern)

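The new Cache methods above only wrap the provider, so callers tag entries at write time and later invalidate whole groups in one call. A short, hypothetical usage sketch — the import path, key names, tag names, and TTL are assumptions for illustration, not code from this changeset:

// Hypothetical caller-side sketch of the tag API added above.
package example

import (
	"context"
	"fmt"
	"time"

	"github.com/bitechdev/ResolveSpec/pkg/cache" // assumed import path
)

func cacheUserList(ctx context.Context, c *cache.Cache, tenantID string, rows []map[string]any) error {
	key := fmt.Sprintf("users:list:%s", tenantID)
	// Tag the entry so the whole group can be invalidated at once.
	return c.SetWithTags(ctx, key, rows, 5*time.Minute, []string{"table:users", "tenant:" + tenantID})
}

func invalidateUsers(ctx context.Context, c *cache.Cache) error {
	// Called after a create or delete against the users table.
	return c.DeleteByTag(ctx, "table:users")
}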
View File

@@ -15,9 +15,17 @@ type Provider interface {
// If ttl is 0, the item never expires.
Set(ctx context.Context, key string, value []byte, ttl time.Duration) error
// SetWithTags stores a value in the cache with the specified TTL and tags.
// Tags can be used to invalidate groups of related keys.
// If ttl is 0, the item never expires.
SetWithTags(ctx context.Context, key string, value []byte, ttl time.Duration, tags []string) error
// Delete removes a key from the cache.
Delete(ctx context.Context, key string) error
// DeleteByTag removes all keys associated with the given tag.
DeleteByTag(ctx context.Context, tag string) error
// DeleteByPattern removes all keys matching the pattern.
// Pattern syntax depends on the provider implementation.
DeleteByPattern(ctx context.Context, pattern string) error

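Because the Provider interface gained SetWithTags and DeleteByTag, every provider in the package must now implement them or the build fails. A small compile-time assertion of the kind often placed next to such an interface makes that explicit; the concrete type names are taken from this diff, but whether the repo already contains these assertions is an assumption:

// Compile-time checks that the providers satisfy the extended interface (sketch).
var (
	_ Provider = (*MemcacheProvider)(nil)
	_ Provider = (*MemoryProvider)(nil)
)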
View File

@@ -2,6 +2,7 @@ package cache
import (
"context"
"encoding/json"
"fmt"
"time"
@@ -97,8 +98,115 @@ func (m *MemcacheProvider) Set(ctx context.Context, key string, value []byte, tt
return m.client.Set(item)
}
// SetWithTags stores a value in the cache with the specified TTL and tags.
// Note: Tag support in Memcache is limited and less efficient than Redis.
func (m *MemcacheProvider) SetWithTags(ctx context.Context, key string, value []byte, ttl time.Duration, tags []string) error {
if ttl == 0 {
ttl = m.options.DefaultTTL
}
expiration := int32(ttl.Seconds())
// Set the main value
item := &memcache.Item{
Key: key,
Value: value,
Expiration: expiration,
}
if err := m.client.Set(item); err != nil {
return err
}
// Store tags for this key
if len(tags) > 0 {
tagsData, err := json.Marshal(tags)
if err != nil {
return fmt.Errorf("failed to marshal tags: %w", err)
}
tagsItem := &memcache.Item{
Key: fmt.Sprintf("cache:tags:%s", key),
Value: tagsData,
Expiration: expiration,
}
if err := m.client.Set(tagsItem); err != nil {
return err
}
// Add key to each tag's key list
for _, tag := range tags {
tagKey := fmt.Sprintf("cache:tag:%s", tag)
// Get existing keys for this tag
var keys []string
if item, err := m.client.Get(tagKey); err == nil {
_ = json.Unmarshal(item.Value, &keys)
}
// Add current key if not already present
found := false
for _, k := range keys {
if k == key {
found = true
break
}
}
if !found {
keys = append(keys, key)
}
// Store updated key list
keysData, err := json.Marshal(keys)
if err != nil {
continue
}
tagItem := &memcache.Item{
Key: tagKey,
Value: keysData,
Expiration: expiration + 3600, // Give tag lists longer TTL
}
_ = m.client.Set(tagItem)
}
}
return nil
}
// Delete removes a key from the cache.
func (m *MemcacheProvider) Delete(ctx context.Context, key string) error {
// Get tags for this key
tagsKey := fmt.Sprintf("cache:tags:%s", key)
if item, err := m.client.Get(tagsKey); err == nil {
var tags []string
if err := json.Unmarshal(item.Value, &tags); err == nil {
// Remove key from each tag's key list
for _, tag := range tags {
tagKey := fmt.Sprintf("cache:tag:%s", tag)
if tagItem, err := m.client.Get(tagKey); err == nil {
var keys []string
if err := json.Unmarshal(tagItem.Value, &keys); err == nil {
// Remove current key from the list
newKeys := make([]string, 0, len(keys))
for _, k := range keys {
if k != key {
newKeys = append(newKeys, k)
}
}
// Update the tag's key list
if keysData, err := json.Marshal(newKeys); err == nil {
tagItem.Value = keysData
_ = m.client.Set(tagItem)
}
}
}
}
}
// Delete the tags key
_ = m.client.Delete(tagsKey)
}
// Delete the actual key
err := m.client.Delete(key)
if err == memcache.ErrCacheMiss {
return nil
@@ -106,6 +214,38 @@ func (m *MemcacheProvider) Delete(ctx context.Context, key string) error {
return err
}
// DeleteByTag removes all keys associated with the given tag.
func (m *MemcacheProvider) DeleteByTag(ctx context.Context, tag string) error {
tagKey := fmt.Sprintf("cache:tag:%s", tag)
// Get all keys associated with this tag
item, err := m.client.Get(tagKey)
if err == memcache.ErrCacheMiss {
return nil
}
if err != nil {
return err
}
var keys []string
if err := json.Unmarshal(item.Value, &keys); err != nil {
return fmt.Errorf("failed to unmarshal tag keys: %w", err)
}
// Delete all keys
for _, key := range keys {
_ = m.client.Delete(key)
// Also delete the tags key for this cache key
tagsKey := fmt.Sprintf("cache:tags:%s", key)
_ = m.client.Delete(tagsKey)
}
// Delete the tag key itself
_ = m.client.Delete(tagKey)
return nil
}
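A quick way to see the bookkeeping entries this emulation creates, sketched with the gomemcache client directly. The address, cache key, and tag name below are made up for illustration:

```go
package main

import (
	"fmt"

	"github.com/bradfitz/gomemcache/memcache"
)

func main() {
	mc := memcache.New("127.0.0.1:11211")

	// After SetWithTags(ctx, "user:42", data, ttl, []string{"users"}), the
	// provider stores two auxiliary entries next to the value itself.
	if it, err := mc.Get("cache:tags:user:42"); err == nil {
		fmt.Printf("tags for user:42: %s\n", it.Value) // JSON array, e.g. ["users"]
	}
	if it, err := mc.Get("cache:tag:users"); err == nil {
		fmt.Printf("keys tagged 'users': %s\n", it.Value) // JSON array, e.g. ["user:42"]
	}
}
```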
// DeleteByPattern removes all keys matching the pattern.
// Note: Memcache does not support pattern-based deletion natively.
// This is a no-op for memcache and returns an error.


@@ -15,6 +15,7 @@ type memoryItem struct {
Expiration time.Time
LastAccess time.Time
HitCount int64
Tags []string
}
// isExpired checks if the item has expired.
@@ -27,11 +28,12 @@ func (m *memoryItem) isExpired() bool {
// MemoryProvider is an in-memory implementation of the Provider interface.
type MemoryProvider struct {
mu sync.RWMutex
items map[string]*memoryItem
tagToKeys map[string]map[string]struct{} // tag -> set of keys
options *Options
hits atomic.Int64
misses atomic.Int64
}
// NewMemoryProvider creates a new in-memory cache provider.
@@ -44,8 +46,9 @@ func NewMemoryProvider(opts *Options) *MemoryProvider {
}
return &MemoryProvider{
items: make(map[string]*memoryItem),
tagToKeys: make(map[string]map[string]struct{}),
options: opts,
}
}
@@ -114,15 +117,116 @@ func (m *MemoryProvider) Set(ctx context.Context, key string, value []byte, ttl
return nil
}
// SetWithTags stores a value in the cache with the specified TTL and tags.
func (m *MemoryProvider) SetWithTags(ctx context.Context, key string, value []byte, ttl time.Duration, tags []string) error {
m.mu.Lock()
defer m.mu.Unlock()
if ttl == 0 {
ttl = m.options.DefaultTTL
}
var expiration time.Time
if ttl > 0 {
expiration = time.Now().Add(ttl)
}
// Check max size and evict if necessary
if m.options.MaxSize > 0 && len(m.items) >= m.options.MaxSize {
if _, exists := m.items[key]; !exists {
m.evictOne()
}
}
// Remove old tag associations if key exists
if oldItem, exists := m.items[key]; exists {
for _, tag := range oldItem.Tags {
if keySet, ok := m.tagToKeys[tag]; ok {
delete(keySet, key)
if len(keySet) == 0 {
delete(m.tagToKeys, tag)
}
}
}
}
// Store the item
m.items[key] = &memoryItem{
Value: value,
Expiration: expiration,
LastAccess: time.Now(),
Tags: tags,
}
// Add new tag associations
for _, tag := range tags {
if m.tagToKeys[tag] == nil {
m.tagToKeys[tag] = make(map[string]struct{})
}
m.tagToKeys[tag][key] = struct{}{}
}
return nil
}
// Delete removes a key from the cache.
func (m *MemoryProvider) Delete(ctx context.Context, key string) error {
m.mu.Lock()
defer m.mu.Unlock()
// Remove tag associations
if item, exists := m.items[key]; exists {
for _, tag := range item.Tags {
if keySet, ok := m.tagToKeys[tag]; ok {
delete(keySet, key)
if len(keySet) == 0 {
delete(m.tagToKeys, tag)
}
}
}
}
delete(m.items, key)
return nil
}
// DeleteByTag removes all keys associated with the given tag.
func (m *MemoryProvider) DeleteByTag(ctx context.Context, tag string) error {
m.mu.Lock()
defer m.mu.Unlock()
// Get all keys associated with this tag
keySet, exists := m.tagToKeys[tag]
if !exists {
return nil // No keys with this tag
}
// Delete all items with this tag
for key := range keySet {
if item, ok := m.items[key]; ok {
// Remove this tag from the item's tag list
newTags := make([]string, 0, len(item.Tags))
for _, t := range item.Tags {
if t != tag {
newTags = append(newTags, t)
}
}
// If item has no more tags, delete it
// Otherwise update its tags
if len(newTags) == 0 {
delete(m.items, key)
} else {
item.Tags = newTags
}
}
}
// Remove the tag mapping
delete(m.tagToKeys, tag)
return nil
}
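Worth noting about the in-memory semantics above: an item is only removed when the deleted tag was its last remaining tag. A small sketch under those rules; the keys, tags, and values are illustrative:

```go
func exampleTagDelete(ctx context.Context) {
	p := NewMemoryProvider(&Options{DefaultTTL: time.Minute, MaxSize: 100})

	_ = p.SetWithTags(ctx, "a", []byte("1"), 0, []string{"users"})
	_ = p.SetWithTags(ctx, "b", []byte("2"), 0, []string{"users", "reports"})

	_ = p.DeleteByTag(ctx, "users")
	// "a" is removed because "users" was its only tag.
	// "b" survives and now carries only the "reports" tag.
}
```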
// DeleteByPattern removes all keys matching the pattern.
func (m *MemoryProvider) DeleteByPattern(ctx context.Context, pattern string) error {
m.mu.Lock()


@@ -103,9 +103,93 @@ func (r *RedisProvider) Set(ctx context.Context, key string, value []byte, ttl t
return r.client.Set(ctx, key, value, ttl).Err()
}
// SetWithTags stores a value in the cache with the specified TTL and tags.
func (r *RedisProvider) SetWithTags(ctx context.Context, key string, value []byte, ttl time.Duration, tags []string) error {
if ttl == 0 {
ttl = r.options.DefaultTTL
}
pipe := r.client.Pipeline()
// Set the value
pipe.Set(ctx, key, value, ttl)
// Add key to each tag's set
for _, tag := range tags {
tagKey := fmt.Sprintf("cache:tag:%s", tag)
pipe.SAdd(ctx, tagKey, key)
// Set expiration on tag set (longer than cache items to ensure cleanup)
if ttl > 0 {
pipe.Expire(ctx, tagKey, ttl+time.Hour)
}
}
// Store tags for this key for later cleanup
if len(tags) > 0 {
tagsKey := fmt.Sprintf("cache:tags:%s", key)
pipe.SAdd(ctx, tagsKey, tags)
if ttl > 0 {
pipe.Expire(ctx, tagsKey, ttl)
}
}
_, err := pipe.Exec(ctx)
return err
}
// Delete removes a key from the cache.
func (r *RedisProvider) Delete(ctx context.Context, key string) error {
pipe := r.client.Pipeline()
// Get tags for this key
tagsKey := fmt.Sprintf("cache:tags:%s", key)
tags, err := r.client.SMembers(ctx, tagsKey).Result()
if err == nil && len(tags) > 0 {
// Remove key from each tag set
for _, tag := range tags {
tagKey := fmt.Sprintf("cache:tag:%s", tag)
pipe.SRem(ctx, tagKey, key)
}
// Delete the tags key
pipe.Del(ctx, tagsKey)
}
// Delete the actual key
pipe.Del(ctx, key)
_, err = pipe.Exec(ctx)
return err
}
// DeleteByTag removes all keys associated with the given tag.
func (r *RedisProvider) DeleteByTag(ctx context.Context, tag string) error {
tagKey := fmt.Sprintf("cache:tag:%s", tag)
// Get all keys associated with this tag
keys, err := r.client.SMembers(ctx, tagKey).Result()
if err != nil {
return err
}
if len(keys) == 0 {
return nil
}
pipe := r.client.Pipeline()
// Delete all keys and their tag associations
for _, key := range keys {
pipe.Del(ctx, key)
// Also delete the tags key for this cache key
tagsKey := fmt.Sprintf("cache:tags:%s", key)
pipe.Del(ctx, tagsKey)
}
// Delete the tag set itself
pipe.Del(ctx, tagKey)
_, err = pipe.Exec(ctx)
return err
}
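To make the Redis layout above concrete, a small debugging sketch that lists a tag's members and each member's own tag set with go-redis. The v9 import path, the address, and the "orders" tag are assumptions:

```go
package main

import (
	"context"
	"fmt"

	"github.com/redis/go-redis/v9"
)

func dumpTag(ctx context.Context, rdb *redis.Client, tag string) error {
	// Every cached key tagged with `tag` is a member of cache:tag:<tag>.
	keys, err := rdb.SMembers(ctx, "cache:tag:"+tag).Result()
	if err != nil {
		return err
	}
	for _, key := range keys {
		// Each cached key records its own tags under cache:tags:<key>.
		tags, err := rdb.SMembers(ctx, "cache:tags:"+key).Result()
		if err != nil {
			return err
		}
		fmt.Printf("%s -> %v\n", key, tags)
	}
	return nil
}

func main() {
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
	_ = dumpTag(context.Background(), rdb, "orders")
}
```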
// DeleteByPattern removes all keys matching the pattern.


@@ -1,151 +0,0 @@
package cache
import (
"context"
"testing"
"time"
"github.com/bitechdev/ResolveSpec/pkg/common"
)
func TestBuildQueryCacheKey(t *testing.T) {
filters := []common.FilterOption{
{Column: "name", Operator: "eq", Value: "test"},
{Column: "age", Operator: "gt", Value: 25},
}
sorts := []common.SortOption{
{Column: "name", Direction: "asc"},
}
// Generate cache key
key1 := BuildQueryCacheKey("users", filters, sorts, "status = 'active'", "")
// Same parameters should generate same key
key2 := BuildQueryCacheKey("users", filters, sorts, "status = 'active'", "")
if key1 != key2 {
t.Errorf("Expected same cache keys for identical parameters, got %s and %s", key1, key2)
}
// Different parameters should generate different key
key3 := BuildQueryCacheKey("users", filters, sorts, "status = 'inactive'", "")
if key1 == key3 {
t.Errorf("Expected different cache keys for different parameters, got %s and %s", key1, key3)
}
}
func TestBuildExtendedQueryCacheKey(t *testing.T) {
filters := []common.FilterOption{
{Column: "name", Operator: "eq", Value: "test"},
}
sorts := []common.SortOption{
{Column: "name", Direction: "asc"},
}
expandOpts := []interface{}{
map[string]interface{}{
"relation": "posts",
"where": "status = 'published'",
},
}
// Generate cache key
key1 := BuildExtendedQueryCacheKey("users", filters, sorts, "", "", expandOpts, false, "", "")
// Same parameters should generate same key
key2 := BuildExtendedQueryCacheKey("users", filters, sorts, "", "", expandOpts, false, "", "")
if key1 != key2 {
t.Errorf("Expected same cache keys for identical parameters")
}
// Different distinct value should generate different key
key3 := BuildExtendedQueryCacheKey("users", filters, sorts, "", "", expandOpts, true, "", "")
if key1 == key3 {
t.Errorf("Expected different cache keys for different distinct values")
}
}
func TestGetQueryTotalCacheKey(t *testing.T) {
hash := "abc123"
key := GetQueryTotalCacheKey(hash)
expected := "query_total:abc123"
if key != expected {
t.Errorf("Expected %s, got %s", expected, key)
}
}
func TestCachedTotalIntegration(t *testing.T) {
// Initialize cache with memory provider for testing
UseMemory(&Options{
DefaultTTL: 1 * time.Minute,
MaxSize: 100,
})
ctx := context.Background()
// Create test data
filters := []common.FilterOption{
{Column: "status", Operator: "eq", Value: "active"},
}
sorts := []common.SortOption{
{Column: "created_at", Direction: "desc"},
}
// Build cache key
cacheKeyHash := BuildQueryCacheKey("test_table", filters, sorts, "", "")
cacheKey := GetQueryTotalCacheKey(cacheKeyHash)
// Store a total count in cache
totalToCache := CachedTotal{Total: 42}
err := GetDefaultCache().Set(ctx, cacheKey, totalToCache, time.Minute)
if err != nil {
t.Fatalf("Failed to set cache: %v", err)
}
// Retrieve from cache
var cachedTotal CachedTotal
err = GetDefaultCache().Get(ctx, cacheKey, &cachedTotal)
if err != nil {
t.Fatalf("Failed to get from cache: %v", err)
}
if cachedTotal.Total != 42 {
t.Errorf("Expected total 42, got %d", cachedTotal.Total)
}
// Test cache miss
nonExistentKey := GetQueryTotalCacheKey("nonexistent")
var missedTotal CachedTotal
err = GetDefaultCache().Get(ctx, nonExistentKey, &missedTotal)
if err == nil {
t.Errorf("Expected error for cache miss, got nil")
}
}
func TestHashString(t *testing.T) {
input1 := "test string"
input2 := "test string"
input3 := "different string"
hash1 := hashString(input1)
hash2 := hashString(input2)
hash3 := hashString(input3)
// Same input should produce same hash
if hash1 != hash2 {
t.Errorf("Expected same hash for identical inputs")
}
// Different input should produce different hash
if hash1 == hash3 {
t.Errorf("Expected different hash for different inputs")
}
// Hash should be hex encoded SHA256 (64 characters)
if len(hash1) != 64 {
t.Errorf("Expected hash length of 64, got %d", len(hash1))
}
}


@@ -691,6 +691,11 @@ func (b *BunSelectQuery) Order(order string) common.SelectQuery {
return b
}
func (b *BunSelectQuery) OrderExpr(order string, args ...interface{}) common.SelectQuery {
b.query = b.query.OrderExpr(order, args...)
return b
}
func (b *BunSelectQuery) Limit(n int) common.SelectQuery {
b.query = b.query.Limit(n)
return b

View File

@@ -386,6 +386,12 @@ func (g *GormSelectQuery) Order(order string) common.SelectQuery {
return g
}
func (g *GormSelectQuery) OrderExpr(order string, args ...interface{}) common.SelectQuery {
// GORM's Order can handle expressions directly
g.db = g.db.Order(gorm.Expr(order, args...))
return g
}
func (g *GormSelectQuery) Limit(n int) common.SelectQuery {
g.db = g.db.Limit(n)
return g


@@ -281,6 +281,13 @@ func (p *PgSQLSelectQuery) Order(order string) common.SelectQuery {
return p
}
func (p *PgSQLSelectQuery) OrderExpr(order string, args ...interface{}) common.SelectQuery {
// For PgSQL, expressions are passed directly without quoting
// If there are args, we would need to format them, but for now just append the expression
p.orderBy = append(p.orderBy, order)
return p
}
func (p *PgSQLSelectQuery) Limit(n int) common.SelectQuery {
p.limit = n
return p


@@ -46,6 +46,7 @@ type SelectQuery interface {
PreloadRelation(relation string, apply ...func(SelectQuery) SelectQuery) SelectQuery
JoinRelation(relation string, apply ...func(SelectQuery) SelectQuery) SelectQuery
Order(order string) SelectQuery
OrderExpr(order string, args ...interface{}) SelectQuery
Limit(n int) SelectQuery
Offset(n int) SelectQuery
Group(group string) SelectQuery
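A short usage sketch of the new OrderExpr method through the common.SelectQuery interface. The query variable and column names are illustrative; note that the PgSQL adapter currently appends the expression verbatim and ignores bind args, so literal expressions are the safest common ground:

```go
import "github.com/bitechdev/ResolveSpec/pkg/common"

// applyOrdering sorts by an expression first, then by a plain column.
func applyOrdering(q common.SelectQuery) common.SelectQuery {
	return q.
		OrderExpr("COALESCE(priority, 0) DESC"). // expression, not a bare column
		Order("created_at DESC").
		Limit(50)
}
```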


@@ -208,21 +208,9 @@ func SanitizeWhereClause(where string, tableName string, options ...*RequestOpti
}
}
}
} else if tableName != "" && !hasTablePrefix(condToCheck) {
// If tableName is provided and the condition DOESN'T have a table prefix,
// qualify unambiguous column references to prevent "ambiguous column" errors
// when there are multiple joins on the same table (e.g., recursive preloads)
columnName := extractUnqualifiedColumnName(condToCheck)
if columnName != "" && (validColumns == nil || isValidColumn(columnName, validColumns)) {
// Qualify the column with the table name
// Be careful to only replace the column name, not other occurrences of the string
oldRef := columnName
newRef := tableName + "." + columnName
// Use word boundary matching to avoid replacing partial matches
cond = qualifyColumnInCondition(cond, oldRef, newRef)
logger.Debug("Qualified unqualified column in condition: '%s' added table prefix '%s'", oldRef, tableName)
}
}
// Note: We no longer add prefixes to unqualified columns here.
// Use AddTablePrefixToColumns() separately if you need to add prefixes.
validConditions = append(validConditions, cond)
}
@@ -633,3 +621,145 @@ func isValidColumn(columnName string, validColumns map[string]bool) bool {
}
return validColumns[strings.ToLower(columnName)]
}
// AddTablePrefixToColumns adds table prefix to unqualified column references in a WHERE clause.
// This function only prefixes simple column references and skips:
// - Columns already having a table prefix (containing a dot)
// - Columns inside function calls or expressions (inside parentheses)
// - Columns inside subqueries
// - Columns that don't exist in the table (validation via model registry)
//
// Examples:
// - "status = 'active'" -> "users.status = 'active'" (if status exists in users table)
// - "COALESCE(status, 'default') = 'active'" -> unchanged (status inside function)
// - "users.status = 'active'" -> unchanged (already has prefix)
// - "(status = 'active')" -> "(users.status = 'active')" (grouping parens are OK)
// - "invalid_col = 'value'" -> unchanged (if invalid_col doesn't exist in table)
//
// Parameters:
// - where: The WHERE clause to process
// - tableName: The table name to use as prefix
//
// Returns:
// - The WHERE clause with table prefixes added to appropriate and valid columns
func AddTablePrefixToColumns(where string, tableName string) string {
if where == "" || tableName == "" {
return where
}
where = strings.TrimSpace(where)
// Get valid columns from the model registry for validation
validColumns := getValidColumnsForTable(tableName)
// Split by AND to handle multiple conditions (parenthesis-aware)
conditions := splitByAND(where)
prefixedConditions := make([]string, 0, len(conditions))
for _, cond := range conditions {
cond = strings.TrimSpace(cond)
if cond == "" {
continue
}
// Process this condition to add table prefix if appropriate
processedCond := addPrefixToSingleCondition(cond, tableName, validColumns)
prefixedConditions = append(prefixedConditions, processedCond)
}
if len(prefixedConditions) == 0 {
return ""
}
return strings.Join(prefixedConditions, " AND ")
}
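A usage sketch of the new helper, written as if inside the same package. It assumes status and role are registered columns of the users table in the model registry; the WHERE clause is illustrative:

```go
func exampleQualify() string {
	where := "status = 'active' AND users.role = 'admin' AND COALESCE(deleted_at, '') IS NOT NULL"
	// Only the unqualified, valid column gets a prefix; the already-qualified
	// reference and the column inside the function call are left untouched.
	return AddTablePrefixToColumns(where, "users")
	// Roughly: "users.status = 'active' AND users.role = 'admin' AND COALESCE(deleted_at, '') IS NOT NULL"
}
```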
// addPrefixToSingleCondition adds table prefix to a single condition if appropriate
// Returns the condition unchanged if:
// - The condition is a SQL literal/expression (true, false, null, 1=1, etc.)
// - The column reference is inside a function call
// - The column already has a table prefix
// - No valid column reference is found
// - The column doesn't exist in the table (when validColumns is provided)
func addPrefixToSingleCondition(cond string, tableName string, validColumns map[string]bool) string {
// Strip outer grouping parentheses to get to the actual condition
strippedCond := stripOuterParentheses(cond)
// Skip SQL literals and trivial conditions (true, false, null, 1=1, etc.)
if IsSQLExpression(strippedCond) || IsTrivialCondition(strippedCond) {
logger.Debug("Skipping SQL literal/trivial condition: '%s'", strippedCond)
return cond
}
// Extract the left side of the comparison (before the operator)
columnRef := extractLeftSideOfComparison(strippedCond)
if columnRef == "" {
return cond
}
// Skip if it already has a prefix (contains a dot)
if strings.Contains(columnRef, ".") {
logger.Debug("Skipping column '%s' - already has table prefix", columnRef)
return cond
}
// Skip if it's a function call or expression (contains parentheses)
if strings.Contains(columnRef, "(") {
logger.Debug("Skipping column reference '%s' - inside function or expression", columnRef)
return cond
}
// Validate that the column exists in the table (if we have column info)
if !isValidColumn(columnRef, validColumns) {
logger.Debug("Skipping column '%s' - not found in table '%s'", columnRef, tableName)
return cond
}
// It's a simple unqualified column reference that exists in the table - add the table prefix
newRef := tableName + "." + columnRef
result := qualifyColumnInCondition(cond, columnRef, newRef)
logger.Debug("Added table prefix to column: '%s' -> '%s'", columnRef, newRef)
return result
}
// extractLeftSideOfComparison extracts the left side of a comparison operator from a condition.
// This is used to identify the column reference that may need a table prefix.
//
// Examples:
// - "status = 'active'" returns "status"
// - "COALESCE(status, 'default') = 'active'" returns "COALESCE(status, 'default')"
// - "priority > 5" returns "priority"
//
// Returns empty string if no operator is found.
func extractLeftSideOfComparison(cond string) string {
operators := []string{" = ", " != ", " <> ", " > ", " >= ", " < ", " <= ", " LIKE ", " like ", " IN ", " in ", " IS ", " is ", " NOT ", " not "}
// Find the first operator outside of parentheses and quotes
minIdx := -1
for _, op := range operators {
idx := findOperatorOutsideParentheses(cond, op)
if idx > 0 && (minIdx == -1 || idx < minIdx) {
minIdx = idx
}
}
if minIdx > 0 {
leftSide := strings.TrimSpace(cond[:minIdx])
// Remove any surrounding quotes
leftSide = strings.Trim(leftSide, "`\"'")
return leftSide
}
// No operator found - might be a boolean column
parts := strings.Fields(cond)
if len(parts) > 0 {
columnRef := strings.Trim(parts[0], "`\"'")
// Make sure it's not a SQL keyword
if !IsSQLKeyword(strings.ToLower(columnRef)) {
return columnRef
}
}
return ""
}


@@ -237,6 +237,13 @@ func (v *ColumnValidator) FilterRequestOptions(options RequestOptions) RequestOp
for _, sort := range options.Sort {
if v.IsValidColumn(sort.Column) {
validSorts = append(validSorts, sort)
} else if strings.HasPrefix(sort.Column, "(") && strings.HasSuffix(sort.Column, ")") {
// Allow sort by expression/subquery, but validate for security
if IsSafeSortExpression(sort.Column) {
validSorts = append(validSorts, sort)
} else {
logger.Warn("Unsafe sort expression '%s' removed", sort.Column)
}
} else {
logger.Warn("Invalid column in sort '%s' removed", sort.Column)
}
@@ -262,6 +269,24 @@ func (v *ColumnValidator) FilterRequestOptions(options RequestOptions) RequestOp
}
filteredPreload.Filters = validPreloadFilters
// Filter preload sort columns
validPreloadSorts := make([]SortOption, 0, len(preload.Sort))
for _, sort := range preload.Sort {
if v.IsValidColumn(sort.Column) {
validPreloadSorts = append(validPreloadSorts, sort)
} else if strings.HasPrefix(sort.Column, "(") && strings.HasSuffix(sort.Column, ")") {
// Allow sort by expression/subquery, but validate for security
if IsSafeSortExpression(sort.Column) {
validPreloadSorts = append(validPreloadSorts, sort)
} else {
logger.Warn("Unsafe sort expression in preload '%s' removed: '%s'", preload.Relation, sort.Column)
}
} else {
logger.Warn("Invalid column in preload '%s' sort '%s' removed", preload.Relation, sort.Column)
}
}
filteredPreload.Sort = validPreloadSorts
validPreloads = append(validPreloads, filteredPreload)
}
filtered.Preload = validPreloads
@@ -269,6 +294,56 @@ func (v *ColumnValidator) FilterRequestOptions(options RequestOptions) RequestOp
return filtered
}
// IsSafeSortExpression validates that a sort expression (enclosed in brackets) is safe
// and doesn't contain SQL injection attempts or dangerous commands
func IsSafeSortExpression(expr string) bool {
if expr == "" {
return false
}
// Expression must be enclosed in brackets
expr = strings.TrimSpace(expr)
if !strings.HasPrefix(expr, "(") || !strings.HasSuffix(expr, ")") {
return false
}
// Remove outer brackets for content validation
expr = expr[1 : len(expr)-1]
expr = strings.TrimSpace(expr)
// Convert to lowercase for checking dangerous keywords
exprLower := strings.ToLower(expr)
// Check for dangerous SQL commands that should never be in a sort expression
dangerousKeywords := []string{
"drop ", "delete ", "insert ", "update ", "alter ", "create ",
"truncate ", "exec ", "execute ", "grant ", "revoke ",
"into ", "values ", "set ", "shutdown", "xp_",
}
for _, keyword := range dangerousKeywords {
if strings.Contains(exprLower, keyword) {
logger.Warn("Dangerous SQL keyword '%s' detected in sort expression: %s", keyword, expr)
return false
}
}
// Check for SQL comment attempts
if strings.Contains(expr, "--") || strings.Contains(expr, "/*") || strings.Contains(expr, "*/") {
logger.Warn("SQL comment detected in sort expression: %s", expr)
return false
}
// Check for semicolon (command separator)
if strings.Contains(expr, ";") {
logger.Warn("Command separator (;) detected in sort expression: %s", expr)
return false
}
// Expression appears safe
return true
}
// GetValidColumns returns a list of all valid column names for debugging purposes
func (v *ColumnValidator) GetValidColumns() []string {
columns := make([]string, 0, len(v.validColumns))


@@ -361,3 +361,83 @@ func TestFilterRequestOptions(t *testing.T) {
t.Errorf("Expected sort column 'id', got %s", filtered.Sort[0].Column) t.Errorf("Expected sort column 'id', got %s", filtered.Sort[0].Column)
} }
} }
func TestIsSafeSortExpression(t *testing.T) {
tests := []struct {
name string
expression string
shouldPass bool
}{
// Safe expressions
{"Valid subquery", "(SELECT MAX(price) FROM products)", true},
{"Valid CASE expression", "(CASE WHEN status = 'active' THEN 1 ELSE 0 END)", true},
{"Valid aggregate", "(COUNT(*) OVER (PARTITION BY category))", true},
{"Valid function", "(COALESCE(discount, 0))", true},
// Dangerous expressions - SQL injection attempts
{"DROP TABLE attempt", "(id); DROP TABLE users; --", false},
{"DELETE attempt", "(id WHERE 1=1); DELETE FROM users; --", false},
{"INSERT attempt", "(id); INSERT INTO admin VALUES ('hacker'); --", false},
{"UPDATE attempt", "(id); UPDATE users SET role='admin'; --", false},
{"EXEC attempt", "(id); EXEC sp_executesql 'DROP TABLE users'; --", false},
{"XP_ stored proc", "(id); xp_cmdshell 'dir'; --", false},
// Comment injection
{"SQL comment dash", "(id) -- malicious comment", false},
{"SQL comment block start", "(id) /* comment", false},
{"SQL comment block end", "(id) comment */", false},
// Semicolon attempts
{"Semicolon separator", "(id); SELECT * FROM passwords", false},
// Empty/invalid
{"Empty string", "", false},
{"Just brackets", "()", true}, // Empty but technically valid structure
{"No brackets", "id", false}, // Must have brackets for expressions
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result := IsSafeSortExpression(tt.expression)
if result != tt.shouldPass {
t.Errorf("IsSafeSortExpression(%q) = %v, want %v", tt.expression, result, tt.shouldPass)
}
})
}
}
func TestFilterRequestOptions_WithSortExpressions(t *testing.T) {
model := TestModel{}
validator := NewColumnValidator(model)
options := RequestOptions{
Sort: []SortOption{
{Column: "id", Direction: "ASC"}, // Valid column
{Column: "(SELECT MAX(age) FROM users)", Direction: "DESC"}, // Safe expression
{Column: "name", Direction: "ASC"}, // Valid column
{Column: "(id); DROP TABLE users; --", Direction: "DESC"}, // Dangerous expression
{Column: "invalid_col", Direction: "ASC"}, // Invalid column
{Column: "(CASE WHEN age > 18 THEN 1 ELSE 0 END)", Direction: "ASC"}, // Safe expression
},
}
filtered := validator.FilterRequestOptions(options)
// Should keep: id, safe expression, name, another safe expression
// Should remove: dangerous expression, invalid column
expectedCount := 4
if len(filtered.Sort) != expectedCount {
t.Errorf("Expected %d sort options, got %d", expectedCount, len(filtered.Sort))
}
// Verify the kept options
if filtered.Sort[0].Column != "id" {
t.Errorf("Expected first sort to be 'id', got '%s'", filtered.Sort[0].Column)
}
if filtered.Sort[1].Column != "(SELECT MAX(age) FROM users)" {
t.Errorf("Expected second sort to be safe expression, got '%s'", filtered.Sort[1].Column)
}
if filtered.Sort[2].Column != "name" {
t.Errorf("Expected third sort to be 'name', got '%s'", filtered.Sort[2].Column)
}
}


@@ -273,25 +273,151 @@ handler.SetOpenAPIGenerator(func() (string, error) {
})
```
## Using the Built-in UI Handler
The package includes a built-in UI handler that serves popular OpenAPI visualization tools. There is no need to download or manage static files; everything is served from a CDN.
### Quick Start
```go
import (
"github.com/bitechdev/ResolveSpec/pkg/openapi"
"github.com/gorilla/mux"
)
func main() {
router := mux.NewRouter()
// Setup your API routes and OpenAPI generator...
// (see examples above)
// Add the UI handler - defaults to Swagger UI
openapi.SetupUIRoute(router, "/docs", openapi.UIConfig{
UIType: openapi.SwaggerUI,
SpecURL: "/openapi",
Title: "My API Documentation",
})
// Now visit http://localhost:8080/docs
http.ListenAndServe(":8080", router)
}
```
### Supported UI Frameworks
The handler supports four popular OpenAPI UI frameworks:
#### 1. Swagger UI (Default)
The most widely used OpenAPI UI with excellent compatibility and features.
```go
openapi.SetupUIRoute(router, "/docs", openapi.UIConfig{
UIType: openapi.SwaggerUI,
Theme: "dark", // optional: "light" or "dark"
})
```
#### 2. RapiDoc
Modern, customizable, and feature-rich OpenAPI UI.
```go
openapi.SetupUIRoute(router, "/docs", openapi.UIConfig{
UIType: openapi.RapiDoc,
Theme: "dark",
})
```
#### 3. Redoc
Clean, responsive documentation with great UX.
```go
openapi.SetupUIRoute(router, "/docs", openapi.UIConfig{
UIType: openapi.Redoc,
})
```
#### 4. Scalar
Modern and sleek OpenAPI documentation.
```go
openapi.SetupUIRoute(router, "/docs", openapi.UIConfig{
UIType: openapi.Scalar,
Theme: "dark",
})
```
### Configuration Options
```go
type UIConfig struct {
UIType UIType // SwaggerUI, RapiDoc, Redoc, or Scalar
SpecURL string // URL to OpenAPI spec (default: "/openapi")
Title string // Page title (default: "API Documentation")
FaviconURL string // Custom favicon URL (optional)
CustomCSS string // Custom CSS to inject (optional)
Theme string // "light" or "dark" (support varies by UI)
}
```
### Custom Styling Example
```go
openapi.SetupUIRoute(router, "/docs", openapi.UIConfig{
UIType: openapi.SwaggerUI,
Title: "Acme Corp API",
CustomCSS: `
.swagger-ui .topbar {
background-color: #1976d2;
}
.swagger-ui .info .title {
color: #1976d2;
}
`,
})
```
### Using Multiple UIs
You can serve different UIs at different paths:
```go
// Swagger UI at /docs
openapi.SetupUIRoute(router, "/docs", openapi.UIConfig{
UIType: openapi.SwaggerUI,
})
// Redoc at /redoc
openapi.SetupUIRoute(router, "/redoc", openapi.UIConfig{
UIType: openapi.Redoc,
})
// RapiDoc at /api-docs
openapi.SetupUIRoute(router, "/api-docs", openapi.UIConfig{
UIType: openapi.RapiDoc,
})
```
### Manual Handler Usage
If you need more control, use the handler directly:
```go
handler := openapi.UIHandler(openapi.UIConfig{
UIType: openapi.SwaggerUI,
SpecURL: "/api/openapi.json",
})
router.Handle("/documentation", handler)
```
## Using with External Swagger UI
Alternatively, you can use an external Swagger UI instance:
1. Get the spec from `/openapi`
2. Load it in Swagger UI at `https://petstore.swagger.io/`
3. Or self-host Swagger UI and point it to your `/openapi` endpoint
Example with self-hosted Swagger UI:
```go
// Serve Swagger UI static files
router.PathPrefix("/swagger/").Handler(
http.StripPrefix("/swagger/", http.FileServer(http.Dir("./swagger-ui"))),
)
// Configure Swagger UI to use /openapi
```
## Testing
You can test the OpenAPI endpoint:


@@ -183,6 +183,69 @@ func ExampleWithFuncSpec() {
_ = generatorFunc
}
// ExampleWithUIHandler shows how to serve OpenAPI documentation with a web UI
func ExampleWithUIHandler(db *gorm.DB) {
// Create handler and configure OpenAPI generator
handler := restheadspec.NewHandlerWithGORM(db)
registry := modelregistry.NewModelRegistry()
handler.SetOpenAPIGenerator(func() (string, error) {
generator := NewGenerator(GeneratorConfig{
Title: "My API",
Description: "API documentation with interactive UI",
Version: "1.0.0",
BaseURL: "http://localhost:8080",
Registry: registry,
IncludeRestheadSpec: true,
})
return generator.GenerateJSON()
})
// Setup routes
router := mux.NewRouter()
restheadspec.SetupMuxRoutes(router, handler, nil)
// Add UI handlers for different frameworks
// Swagger UI at /docs (most popular)
SetupUIRoute(router, "/docs", UIConfig{
UIType: SwaggerUI,
SpecURL: "/openapi",
Title: "My API - Swagger UI",
Theme: "light",
})
// RapiDoc at /rapidoc (modern alternative)
SetupUIRoute(router, "/rapidoc", UIConfig{
UIType: RapiDoc,
SpecURL: "/openapi",
Title: "My API - RapiDoc",
})
// Redoc at /redoc (clean and responsive)
SetupUIRoute(router, "/redoc", UIConfig{
UIType: Redoc,
SpecURL: "/openapi",
Title: "My API - Redoc",
})
// Scalar at /scalar (modern and sleek)
SetupUIRoute(router, "/scalar", UIConfig{
UIType: Scalar,
SpecURL: "/openapi",
Title: "My API - Scalar",
Theme: "dark",
})
// Now you can access:
// http://localhost:8080/docs - Swagger UI
// http://localhost:8080/rapidoc - RapiDoc
// http://localhost:8080/redoc - Redoc
// http://localhost:8080/scalar - Scalar
// http://localhost:8080/openapi - Raw OpenAPI JSON
_ = router
}
// ExampleCustomization shows advanced customization options
func ExampleCustomization() {
// Create registry and register models with descriptions using struct tags

pkg/openapi/ui_handler.go (new file, 294 lines)

@@ -0,0 +1,294 @@
package openapi
import (
"fmt"
"html/template"
"net/http"
"strings"
"github.com/gorilla/mux"
)
// UIType represents the type of OpenAPI UI to serve
type UIType string
const (
// SwaggerUI is the most popular OpenAPI UI
SwaggerUI UIType = "swagger-ui"
// RapiDoc is a modern, customizable OpenAPI UI
RapiDoc UIType = "rapidoc"
// Redoc is a clean, responsive OpenAPI UI
Redoc UIType = "redoc"
// Scalar is a modern and sleek OpenAPI UI
Scalar UIType = "scalar"
)
// UIConfig holds configuration for the OpenAPI UI handler
type UIConfig struct {
// UIType specifies which UI framework to use (default: SwaggerUI)
UIType UIType
// SpecURL is the URL to the OpenAPI spec JSON (default: "/openapi")
SpecURL string
// Title is the page title (default: "API Documentation")
Title string
// FaviconURL is the URL to the favicon (optional)
FaviconURL string
// CustomCSS allows injecting custom CSS (optional)
CustomCSS string
// Theme for the UI (light/dark, depends on UI type)
Theme string
}
// UIHandler creates an HTTP handler that serves an OpenAPI UI
func UIHandler(config UIConfig) http.HandlerFunc {
// Set defaults
if config.UIType == "" {
config.UIType = SwaggerUI
}
if config.SpecURL == "" {
config.SpecURL = "/openapi"
}
if config.Title == "" {
config.Title = "API Documentation"
}
if config.Theme == "" {
config.Theme = "light"
}
return func(w http.ResponseWriter, r *http.Request) {
var htmlContent string
var err error
switch config.UIType {
case SwaggerUI:
htmlContent, err = generateSwaggerUI(config)
case RapiDoc:
htmlContent, err = generateRapiDoc(config)
case Redoc:
htmlContent, err = generateRedoc(config)
case Scalar:
htmlContent, err = generateScalar(config)
default:
http.Error(w, "Unsupported UI type", http.StatusBadRequest)
return
}
if err != nil {
http.Error(w, fmt.Sprintf("Failed to generate UI: %v", err), http.StatusInternalServerError)
return
}
w.Header().Set("Content-Type", "text/html; charset=utf-8")
w.WriteHeader(http.StatusOK)
_, err = w.Write([]byte(htmlContent))
if err != nil {
http.Error(w, fmt.Sprintf("Failed to write response: %v", err), http.StatusInternalServerError)
return
}
}
}
// templateData wraps UIConfig to properly handle CSS in templates
type templateData struct {
UIConfig
SafeCustomCSS template.CSS
}
// generateSwaggerUI generates the HTML for Swagger UI
func generateSwaggerUI(config UIConfig) (string, error) {
tmpl := `<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>{{.Title}}</title>
{{if .FaviconURL}}<link rel="icon" type="image/png" href="{{.FaviconURL}}">{{end}}
<link rel="stylesheet" type="text/css" href="https://cdn.jsdelivr.net/npm/swagger-ui-dist@5/swagger-ui.css">
{{if .SafeCustomCSS}}<style>{{.SafeCustomCSS}}</style>{{end}}
<style>
html { box-sizing: border-box; overflow: -moz-scrollbars-vertical; overflow-y: scroll; }
*, *:before, *:after { box-sizing: inherit; }
body { margin: 0; padding: 0; }
</style>
</head>
<body>
<div id="swagger-ui"></div>
<script src="https://cdn.jsdelivr.net/npm/swagger-ui-dist@5/swagger-ui-bundle.js"></script>
<script src="https://cdn.jsdelivr.net/npm/swagger-ui-dist@5/swagger-ui-standalone-preset.js"></script>
<script>
window.onload = function() {
const ui = SwaggerUIBundle({
url: "{{.SpecURL}}",
dom_id: '#swagger-ui',
deepLinking: true,
presets: [
SwaggerUIBundle.presets.apis,
SwaggerUIStandalonePreset
],
plugins: [
SwaggerUIBundle.plugins.DownloadUrl
],
layout: "StandaloneLayout",
{{if eq .Theme "dark"}}
syntaxHighlight: {
activate: true,
theme: "monokai"
}
{{end}}
});
window.ui = ui;
};
</script>
</body>
</html>`
t, err := template.New("swagger").Parse(tmpl)
if err != nil {
return "", err
}
data := templateData{
UIConfig: config,
SafeCustomCSS: template.CSS(config.CustomCSS),
}
var buf strings.Builder
if err := t.Execute(&buf, data); err != nil {
return "", err
}
return buf.String(), nil
}
// generateRapiDoc generates the HTML for RapiDoc
func generateRapiDoc(config UIConfig) (string, error) {
theme := "light"
if config.Theme == "dark" {
theme = "dark"
}
tmpl := `<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>{{.Title}}</title>
{{if .FaviconURL}}<link rel="icon" type="image/png" href="{{.FaviconURL}}">{{end}}
<script type="module" src="https://unpkg.com/rapidoc/dist/rapidoc-min.js"></script>
{{if .SafeCustomCSS}}<style>{{.SafeCustomCSS}}</style>{{end}}
</head>
<body>
<rapi-doc
spec-url="{{.SpecURL}}"
theme="` + theme + `"
render-style="read"
show-header="true"
show-info="true"
allow-try="true"
allow-server-selection="true"
allow-authentication="true"
api-key-name="Authorization"
api-key-location="header"
></rapi-doc>
</body>
</html>`
t, err := template.New("rapidoc").Parse(tmpl)
if err != nil {
return "", err
}
data := templateData{
UIConfig: config,
SafeCustomCSS: template.CSS(config.CustomCSS),
}
var buf strings.Builder
if err := t.Execute(&buf, data); err != nil {
return "", err
}
return buf.String(), nil
}
// generateRedoc generates the HTML for Redoc
func generateRedoc(config UIConfig) (string, error) {
tmpl := `<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>{{.Title}}</title>
{{if .FaviconURL}}<link rel="icon" type="image/png" href="{{.FaviconURL}}">{{end}}
{{if .SafeCustomCSS}}<style>{{.SafeCustomCSS}}</style>{{end}}
<style>
body { margin: 0; padding: 0; }
</style>
</head>
<body>
<redoc spec-url="{{.SpecURL}}" {{if eq .Theme "dark"}}theme='{"colors": {"primary": {"main": "#dd5522"}}}'{{end}}></redoc>
<script src="https://cdn.redoc.ly/redoc/latest/bundles/redoc.standalone.js"></script>
</body>
</html>`
t, err := template.New("redoc").Parse(tmpl)
if err != nil {
return "", err
}
data := templateData{
UIConfig: config,
SafeCustomCSS: template.CSS(config.CustomCSS),
}
var buf strings.Builder
if err := t.Execute(&buf, data); err != nil {
return "", err
}
return buf.String(), nil
}
// generateScalar generates the HTML for Scalar
func generateScalar(config UIConfig) (string, error) {
tmpl := `<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>{{.Title}}</title>
{{if .FaviconURL}}<link rel="icon" type="image/png" href="{{.FaviconURL}}">{{end}}
{{if .SafeCustomCSS}}<style>{{.SafeCustomCSS}}</style>{{end}}
<style>
body { margin: 0; padding: 0; }
</style>
</head>
<body>
<script id="api-reference" data-url="{{.SpecURL}}" {{if eq .Theme "dark"}}data-theme="dark"{{end}}></script>
<script src="https://cdn.jsdelivr.net/npm/@scalar/api-reference"></script>
</body>
</html>`
t, err := template.New("scalar").Parse(tmpl)
if err != nil {
return "", err
}
data := templateData{
UIConfig: config,
SafeCustomCSS: template.CSS(config.CustomCSS),
}
var buf strings.Builder
if err := t.Execute(&buf, data); err != nil {
return "", err
}
return buf.String(), nil
}
// SetupUIRoute adds the OpenAPI UI route to a mux router
// This is a convenience function for the most common use case
func SetupUIRoute(router *mux.Router, path string, config UIConfig) {
router.Handle(path, UIHandler(config))
}


@@ -0,0 +1,308 @@
package openapi
import (
"net/http"
"net/http/httptest"
"strings"
"testing"
"github.com/gorilla/mux"
)
func TestUIHandler_SwaggerUI(t *testing.T) {
config := UIConfig{
UIType: SwaggerUI,
SpecURL: "/openapi",
Title: "Test API Docs",
}
handler := UIHandler(config)
req := httptest.NewRequest("GET", "/docs", nil)
w := httptest.NewRecorder()
handler(w, req)
resp := w.Result()
if resp.StatusCode != http.StatusOK {
t.Errorf("Expected status 200, got %d", resp.StatusCode)
}
body := w.Body.String()
// Check for Swagger UI specific content
if !strings.Contains(body, "swagger-ui") {
t.Error("Expected Swagger UI content")
}
if !strings.Contains(body, "SwaggerUIBundle") {
t.Error("Expected SwaggerUIBundle script")
}
if !strings.Contains(body, config.Title) {
t.Errorf("Expected title '%s' in HTML", config.Title)
}
if !strings.Contains(body, config.SpecURL) {
t.Errorf("Expected spec URL '%s' in HTML", config.SpecURL)
}
if !strings.Contains(body, "swagger-ui-dist") {
t.Error("Expected Swagger UI CDN link")
}
}
func TestUIHandler_RapiDoc(t *testing.T) {
config := UIConfig{
UIType: RapiDoc,
SpecURL: "/api/spec",
Title: "RapiDoc Test",
}
handler := UIHandler(config)
req := httptest.NewRequest("GET", "/docs", nil)
w := httptest.NewRecorder()
handler(w, req)
resp := w.Result()
if resp.StatusCode != http.StatusOK {
t.Errorf("Expected status 200, got %d", resp.StatusCode)
}
body := w.Body.String()
// Check for RapiDoc specific content
if !strings.Contains(body, "rapi-doc") {
t.Error("Expected rapi-doc element")
}
if !strings.Contains(body, "rapidoc-min.js") {
t.Error("Expected RapiDoc script")
}
if !strings.Contains(body, config.Title) {
t.Errorf("Expected title '%s' in HTML", config.Title)
}
if !strings.Contains(body, config.SpecURL) {
t.Errorf("Expected spec URL '%s' in HTML", config.SpecURL)
}
}
func TestUIHandler_Redoc(t *testing.T) {
config := UIConfig{
UIType: Redoc,
SpecURL: "/spec.json",
Title: "Redoc Test",
}
handler := UIHandler(config)
req := httptest.NewRequest("GET", "/docs", nil)
w := httptest.NewRecorder()
handler(w, req)
resp := w.Result()
if resp.StatusCode != http.StatusOK {
t.Errorf("Expected status 200, got %d", resp.StatusCode)
}
body := w.Body.String()
// Check for Redoc specific content
if !strings.Contains(body, "<redoc") {
t.Error("Expected redoc element")
}
if !strings.Contains(body, "redoc.standalone.js") {
t.Error("Expected Redoc script")
}
if !strings.Contains(body, config.Title) {
t.Errorf("Expected title '%s' in HTML", config.Title)
}
if !strings.Contains(body, config.SpecURL) {
t.Errorf("Expected spec URL '%s' in HTML", config.SpecURL)
}
}
func TestUIHandler_Scalar(t *testing.T) {
config := UIConfig{
UIType: Scalar,
SpecURL: "/openapi.json",
Title: "Scalar Test",
}
handler := UIHandler(config)
req := httptest.NewRequest("GET", "/docs", nil)
w := httptest.NewRecorder()
handler(w, req)
resp := w.Result()
if resp.StatusCode != http.StatusOK {
t.Errorf("Expected status 200, got %d", resp.StatusCode)
}
body := w.Body.String()
// Check for Scalar specific content
if !strings.Contains(body, "api-reference") {
t.Error("Expected api-reference element")
}
if !strings.Contains(body, "@scalar/api-reference") {
t.Error("Expected Scalar script")
}
if !strings.Contains(body, config.Title) {
t.Errorf("Expected title '%s' in HTML", config.Title)
}
if !strings.Contains(body, config.SpecURL) {
t.Errorf("Expected spec URL '%s' in HTML", config.SpecURL)
}
}
func TestUIHandler_DefaultValues(t *testing.T) {
// Test with empty config to check defaults
config := UIConfig{}
handler := UIHandler(config)
req := httptest.NewRequest("GET", "/docs", nil)
w := httptest.NewRecorder()
handler(w, req)
resp := w.Result()
if resp.StatusCode != http.StatusOK {
t.Errorf("Expected status 200, got %d", resp.StatusCode)
}
body := w.Body.String()
// Should default to Swagger UI
if !strings.Contains(body, "swagger-ui") {
t.Error("Expected default to Swagger UI")
}
// Should default to /openapi spec URL
if !strings.Contains(body, "/openapi") {
t.Error("Expected default spec URL '/openapi'")
}
// Should default to "API Documentation" title
if !strings.Contains(body, "API Documentation") {
t.Error("Expected default title 'API Documentation'")
}
}
func TestUIHandler_CustomCSS(t *testing.T) {
customCSS := ".custom-class { color: red; }"
config := UIConfig{
UIType: SwaggerUI,
CustomCSS: customCSS,
}
handler := UIHandler(config)
req := httptest.NewRequest("GET", "/docs", nil)
w := httptest.NewRecorder()
handler(w, req)
body := w.Body.String()
if !strings.Contains(body, customCSS) {
t.Errorf("Expected custom CSS to be included. Body:\n%s", body)
}
}
func TestUIHandler_Favicon(t *testing.T) {
faviconURL := "https://example.com/favicon.ico"
config := UIConfig{
UIType: SwaggerUI,
FaviconURL: faviconURL,
}
handler := UIHandler(config)
req := httptest.NewRequest("GET", "/docs", nil)
w := httptest.NewRecorder()
handler(w, req)
body := w.Body.String()
if !strings.Contains(body, faviconURL) {
t.Error("Expected favicon URL to be included")
}
}
func TestUIHandler_DarkTheme(t *testing.T) {
config := UIConfig{
UIType: SwaggerUI,
Theme: "dark",
}
handler := UIHandler(config)
req := httptest.NewRequest("GET", "/docs", nil)
w := httptest.NewRecorder()
handler(w, req)
body := w.Body.String()
// SwaggerUI uses monokai theme for dark mode
if !strings.Contains(body, "monokai") {
t.Error("Expected dark theme configuration for Swagger UI")
}
}
func TestUIHandler_InvalidUIType(t *testing.T) {
config := UIConfig{
UIType: "invalid-ui-type",
}
handler := UIHandler(config)
req := httptest.NewRequest("GET", "/docs", nil)
w := httptest.NewRecorder()
handler(w, req)
resp := w.Result()
if resp.StatusCode != http.StatusBadRequest {
t.Errorf("Expected status 400 for invalid UI type, got %d", resp.StatusCode)
}
}
func TestUIHandler_ContentType(t *testing.T) {
config := UIConfig{
UIType: SwaggerUI,
}
handler := UIHandler(config)
req := httptest.NewRequest("GET", "/docs", nil)
w := httptest.NewRecorder()
handler(w, req)
contentType := w.Header().Get("Content-Type")
if !strings.Contains(contentType, "text/html") {
t.Errorf("Expected Content-Type to contain 'text/html', got '%s'", contentType)
}
if !strings.Contains(contentType, "charset=utf-8") {
t.Errorf("Expected Content-Type to contain 'charset=utf-8', got '%s'", contentType)
}
}
func TestSetupUIRoute(t *testing.T) {
router := mux.NewRouter()
config := UIConfig{
UIType: SwaggerUI,
}
SetupUIRoute(router, "/api-docs", config)
// Test that the route was added and works
req := httptest.NewRequest("GET", "/api-docs", nil)
w := httptest.NewRecorder()
router.ServeHTTP(w, req)
if w.Code != http.StatusOK {
t.Errorf("Expected status 200, got %d", w.Code)
}
// Verify it returns HTML
body := w.Body.String()
if !strings.Contains(body, "swagger-ui") {
t.Error("Expected Swagger UI content")
}
}


@@ -1,10 +1,12 @@
package reflection
import (
"encoding/json"
"fmt" "fmt"
"reflect" "reflect"
"strconv" "strconv"
"strings" "strings"
"time"
"github.com/bitechdev/ResolveSpec/pkg/modelregistry" "github.com/bitechdev/ResolveSpec/pkg/modelregistry"
) )
@@ -897,6 +899,368 @@ func GetRelationModel(model interface{}, fieldName string) interface{} {
return currentModel
}
// MapToStruct populates a struct from a map while preserving custom types
// It uses reflection to set struct fields based on map keys, matching by:
// 1. Bun tag column name
// 2. Gorm tag column name
// 3. JSON tag name
// 4. Field name (case-insensitive)
// This preserves custom types that implement driver.Valuer like SqlJSONB
func MapToStruct(dataMap map[string]interface{}, target interface{}) error {
if dataMap == nil || target == nil {
return fmt.Errorf("dataMap and target cannot be nil")
}
targetValue := reflect.ValueOf(target)
if targetValue.Kind() != reflect.Ptr {
return fmt.Errorf("target must be a pointer to a struct")
}
targetValue = targetValue.Elem()
if targetValue.Kind() != reflect.Struct {
return fmt.Errorf("target must be a pointer to a struct")
}
targetType := targetValue.Type()
// Create a map of column names to field indices for faster lookup
columnToField := make(map[string]int)
for i := 0; i < targetType.NumField(); i++ {
field := targetType.Field(i)
// Skip unexported fields
if !field.IsExported() {
continue
}
// Build list of possible column names for this field
var columnNames []string
// 1. Bun tag
if bunTag := field.Tag.Get("bun"); bunTag != "" && bunTag != "-" {
if colName := ExtractColumnFromBunTag(bunTag); colName != "" {
columnNames = append(columnNames, colName)
}
}
// 2. Gorm tag
if gormTag := field.Tag.Get("gorm"); gormTag != "" && gormTag != "-" {
if colName := ExtractColumnFromGormTag(gormTag); colName != "" {
columnNames = append(columnNames, colName)
}
}
// 3. JSON tag
if jsonTag := field.Tag.Get("json"); jsonTag != "" && jsonTag != "-" {
parts := strings.Split(jsonTag, ",")
if len(parts) > 0 && parts[0] != "" {
columnNames = append(columnNames, parts[0])
}
}
// 4. Field name variations
columnNames = append(columnNames, field.Name)
columnNames = append(columnNames, strings.ToLower(field.Name))
columnNames = append(columnNames, ToSnakeCase(field.Name))
// Map all column name variations to this field index
for _, colName := range columnNames {
columnToField[strings.ToLower(colName)] = i
}
}
// Iterate through the map and set struct fields
for key, value := range dataMap {
// Find the field index for this key
fieldIndex, found := columnToField[strings.ToLower(key)]
if !found {
// Skip keys that don't map to any field
continue
}
field := targetValue.Field(fieldIndex)
if !field.CanSet() {
continue
}
// Set the value, preserving custom types
if err := setFieldValue(field, value); err != nil {
return fmt.Errorf("failed to set field %s: %w", targetType.Field(fieldIndex).Name, err)
}
}
return nil
}
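A minimal usage sketch for MapToStruct from a caller's perspective. The User model, its columns, and the map literal are illustrative; the import path matches the one used by the package's tests:

```go
package main

import (
	"fmt"

	"github.com/bitechdev/ResolveSpec/pkg/reflection"
)

type User struct {
	ID   int64  `bun:"id,pk" json:"id"`
	Name string `bun:"name" json:"name"`
}

func main() {
	row := map[string]interface{}{"id": int64(7), "name": "Ada"}

	var u User
	// Keys are matched against bun/gorm/json tags and field names (case-insensitive).
	if err := reflection.MapToStruct(row, &u); err != nil {
		panic(err)
	}
	fmt.Println(u.ID, u.Name) // 7 Ada
}
```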
// setFieldValue sets a reflect.Value from an interface{} value, handling type conversions
func setFieldValue(field reflect.Value, value interface{}) error {
if value == nil {
// Set zero value for nil
field.Set(reflect.Zero(field.Type()))
return nil
}
valueReflect := reflect.ValueOf(value)
// If types match exactly, just set it
if valueReflect.Type().AssignableTo(field.Type()) {
field.Set(valueReflect)
return nil
}
// Handle pointer fields
if field.Kind() == reflect.Ptr {
if valueReflect.Kind() != reflect.Ptr {
// Create a new pointer and set its value
newPtr := reflect.New(field.Type().Elem())
if err := setFieldValue(newPtr.Elem(), value); err != nil {
return err
}
field.Set(newPtr)
return nil
}
}
// Handle conversions for basic types
switch field.Kind() {
case reflect.String:
if str, ok := value.(string); ok {
field.SetString(str)
return nil
}
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
if num, ok := convertToInt64(value); ok {
if field.OverflowInt(num) {
return fmt.Errorf("integer overflow")
}
field.SetInt(num)
return nil
}
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
if num, ok := convertToUint64(value); ok {
if field.OverflowUint(num) {
return fmt.Errorf("unsigned integer overflow")
}
field.SetUint(num)
return nil
}
case reflect.Float32, reflect.Float64:
if num, ok := convertToFloat64(value); ok {
if field.OverflowFloat(num) {
return fmt.Errorf("float overflow")
}
field.SetFloat(num)
return nil
}
case reflect.Bool:
if b, ok := value.(bool); ok {
field.SetBool(b)
return nil
}
case reflect.Slice:
// Handle []byte specially (for types like SqlJSONB)
if field.Type().Elem().Kind() == reflect.Uint8 {
switch v := value.(type) {
case []byte:
field.SetBytes(v)
return nil
case string:
field.SetBytes([]byte(v))
return nil
case map[string]interface{}, []interface{}:
// Marshal complex types to JSON for SqlJSONB fields
jsonBytes, err := json.Marshal(v)
if err != nil {
return fmt.Errorf("failed to marshal value to JSON: %w", err)
}
field.SetBytes(jsonBytes)
return nil
}
}
}
// Handle struct types (like SqlTimeStamp, SqlDate, SqlTime which wrap SqlNull[time.Time])
if field.Kind() == reflect.Struct {
// Handle datatypes.SqlNull[T] and wrapped types (SqlTimeStamp, SqlDate, SqlTime)
// Check if the type has a Scan method (sql.Scanner interface)
if field.CanAddr() {
scanMethod := field.Addr().MethodByName("Scan")
if scanMethod.IsValid() {
// Call the Scan method with the value
results := scanMethod.Call([]reflect.Value{reflect.ValueOf(value)})
if len(results) > 0 {
// Check if there was an error
if err, ok := results[0].Interface().(error); ok && err != nil {
return err
}
return nil
}
}
}
// Handle time.Time with ISO string fallback
if field.Type() == reflect.TypeOf(time.Time{}) {
switch v := value.(type) {
case time.Time:
field.Set(reflect.ValueOf(v))
return nil
case string:
// Try parsing as ISO 8601 / RFC3339
if t, err := time.Parse(time.RFC3339, v); err == nil {
field.Set(reflect.ValueOf(t))
return nil
}
// Try other common formats
formats := []string{
"2006-01-02T15:04:05.000-0700",
"2006-01-02T15:04:05.000",
"2006-01-02T15:04:05",
"2006-01-02 15:04:05",
"2006-01-02",
}
for _, format := range formats {
if t, err := time.Parse(format, v); err == nil {
field.Set(reflect.ValueOf(t))
return nil
}
}
return fmt.Errorf("cannot parse time string: %s", v)
}
}
// Fallback: Try to find a "Val" field (for SqlNull types) and set it directly
valField := field.FieldByName("Val")
if valField.IsValid() && valField.CanSet() {
// Also set Valid field to true
validField := field.FieldByName("Valid")
if validField.IsValid() && validField.CanSet() && validField.Kind() == reflect.Bool {
// Set the Val field
if err := setFieldValue(valField, value); err != nil {
return err
}
// Set Valid to true
validField.SetBool(true)
return nil
}
}
}
// If we can convert the type, do it
if valueReflect.Type().ConvertibleTo(field.Type()) {
field.Set(valueReflect.Convert(field.Type()))
return nil
}
return fmt.Errorf("cannot convert %v to %v", valueReflect.Type(), field.Type())
}
// convertToInt64 attempts to convert various types to int64
func convertToInt64(value interface{}) (int64, bool) {
switch v := value.(type) {
case int:
return int64(v), true
case int8:
return int64(v), true
case int16:
return int64(v), true
case int32:
return int64(v), true
case int64:
return v, true
case uint:
return int64(v), true
case uint8:
return int64(v), true
case uint16:
return int64(v), true
case uint32:
return int64(v), true
case uint64:
return int64(v), true
case float32:
return int64(v), true
case float64:
return int64(v), true
case string:
if num, err := strconv.ParseInt(v, 10, 64); err == nil {
return num, true
}
}
return 0, false
}
// convertToUint64 attempts to convert various types to uint64
func convertToUint64(value interface{}) (uint64, bool) {
switch v := value.(type) {
case int:
return uint64(v), true
case int8:
return uint64(v), true
case int16:
return uint64(v), true
case int32:
return uint64(v), true
case int64:
return uint64(v), true
case uint:
return uint64(v), true
case uint8:
return uint64(v), true
case uint16:
return uint64(v), true
case uint32:
return uint64(v), true
case uint64:
return v, true
case float32:
return uint64(v), true
case float64:
return uint64(v), true
case string:
if num, err := strconv.ParseUint(v, 10, 64); err == nil {
return num, true
}
}
return 0, false
}
// convertToFloat64 attempts to convert various types to float64
func convertToFloat64(value interface{}) (float64, bool) {
switch v := value.(type) {
case int:
return float64(v), true
case int8:
return float64(v), true
case int16:
return float64(v), true
case int32:
return float64(v), true
case int64:
return float64(v), true
case uint:
return float64(v), true
case uint8:
return float64(v), true
case uint16:
return float64(v), true
case uint32:
return float64(v), true
case uint64:
return float64(v), true
case float32:
return float64(v), true
case float64:
return v, true
case string:
if num, err := strconv.ParseFloat(v, 64); err == nil {
return num, true
}
}
return 0, false
}
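// Illustrative behaviour of the conversion helpers above (example values hypothetical):
//   convertToInt64("42")    -> 42, true      (numeric strings are parsed)
//   convertToInt64(3.9)     -> 3, true       (floats are truncated toward zero)
//   convertToUint64(-1)     -> wraps around, true (no range check; callers must validate)
//   convertToFloat64("1.5") -> 1.5, true
//   convertToInt64("abc")   -> 0, false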
// getRelationModelSingleLevel gets the model type for a single level field (non-recursive)
// This is a helper function used by GetRelationModel to handle one level at a time
func getRelationModelSingleLevel(model interface{}, fieldName string) interface{} {


@@ -0,0 +1,266 @@
package reflection_test
import (
"testing"
"time"
"github.com/bitechdev/ResolveSpec/pkg/reflection"
"github.com/bitechdev/ResolveSpec/pkg/spectypes"
)
func TestMapToStruct_SqlJSONB_PreservesDriverValuer(t *testing.T) {
// Test that SqlJSONB type preserves driver.Valuer interface
type TestModel struct {
ID int64 `bun:"id,pk" json:"id"`
Meta spectypes.SqlJSONB `bun:"meta" json:"meta"`
}
dataMap := map[string]interface{}{
"id": int64(123),
"meta": map[string]interface{}{
"key": "value",
"num": 42,
},
}
var result TestModel
err := reflection.MapToStruct(dataMap, &result)
if err != nil {
t.Fatalf("MapToStruct() error = %v", err)
}
// Verify the field was set
if result.ID != 123 {
t.Errorf("ID = %v, want 123", result.ID)
}
// Verify SqlJSONB was populated
if len(result.Meta) == 0 {
t.Error("Meta is empty, want non-empty")
}
// Most importantly: verify driver.Valuer interface works
value, err := result.Meta.Value()
if err != nil {
t.Errorf("Meta.Value() error = %v, want nil", err)
}
// Value should return a string representation of the JSON
if value == nil {
t.Error("Meta.Value() returned nil, want non-nil")
}
// Check it's a valid JSON string
if str, ok := value.(string); ok {
if len(str) == 0 {
t.Error("Meta.Value() returned empty string, want valid JSON")
}
t.Logf("SqlJSONB.Value() returned: %s", str)
} else {
t.Errorf("Meta.Value() returned type %T, want string", value)
}
}
func TestMapToStruct_SqlJSONB_FromBytes(t *testing.T) {
// Test that SqlJSONB can be set from []byte directly
type TestModel struct {
ID int64 `bun:"id,pk" json:"id"`
Meta spectypes.SqlJSONB `bun:"meta" json:"meta"`
}
jsonBytes := []byte(`{"direct":"bytes"}`)
dataMap := map[string]interface{}{
"id": int64(456),
"meta": jsonBytes,
}
var result TestModel
err := reflection.MapToStruct(dataMap, &result)
if err != nil {
t.Fatalf("MapToStruct() error = %v", err)
}
if result.ID != 456 {
t.Errorf("ID = %v, want 456", result.ID)
}
if string(result.Meta) != string(jsonBytes) {
t.Errorf("Meta = %s, want %s", string(result.Meta), string(jsonBytes))
}
// Verify driver.Valuer works
value, err := result.Meta.Value()
if err != nil {
t.Errorf("Meta.Value() error = %v", err)
}
if value == nil {
t.Error("Meta.Value() returned nil")
}
}
func TestMapToStruct_AllSqlTypes(t *testing.T) {
// Test model with all SQL custom types
type TestModel struct {
ID int64 `bun:"id,pk" json:"id"`
Name string `bun:"name" json:"name"`
CreatedAt spectypes.SqlTimeStamp `bun:"created_at" json:"created_at"`
BirthDate spectypes.SqlDate `bun:"birth_date" json:"birth_date"`
LoginTime spectypes.SqlTime `bun:"login_time" json:"login_time"`
Meta spectypes.SqlJSONB `bun:"meta" json:"meta"`
Tags spectypes.SqlJSONB `bun:"tags" json:"tags"`
}
now := time.Now()
birthDate := time.Date(1990, 1, 15, 0, 0, 0, 0, time.UTC)
loginTime := time.Date(0, 1, 1, 14, 30, 0, 0, time.UTC)
dataMap := map[string]interface{}{
"id": int64(100),
"name": "Test User",
"created_at": now,
"birth_date": birthDate,
"login_time": loginTime,
"meta": map[string]interface{}{
"role": "admin",
"active": true,
},
"tags": []interface{}{"golang", "testing", "sql"},
}
var result TestModel
err := reflection.MapToStruct(dataMap, &result)
if err != nil {
t.Fatalf("MapToStruct() error = %v", err)
}
// Verify basic fields
if result.ID != 100 {
t.Errorf("ID = %v, want 100", result.ID)
}
if result.Name != "Test User" {
t.Errorf("Name = %v, want 'Test User'", result.Name)
}
// Verify SqlTimeStamp
if !result.CreatedAt.Valid {
t.Error("CreatedAt.Valid = false, want true")
}
if !result.CreatedAt.Val.Equal(now) {
t.Errorf("CreatedAt.Val = %v, want %v", result.CreatedAt.Val, now)
}
// Verify driver.Valuer for SqlTimeStamp
tsValue, err := result.CreatedAt.Value()
if err != nil {
t.Errorf("CreatedAt.Value() error = %v", err)
}
if tsValue == nil {
t.Error("CreatedAt.Value() returned nil")
}
// Verify SqlDate
if !result.BirthDate.Valid {
t.Error("BirthDate.Valid = false, want true")
}
if !result.BirthDate.Val.Equal(birthDate) {
t.Errorf("BirthDate.Val = %v, want %v", result.BirthDate.Val, birthDate)
}
// Verify driver.Valuer for SqlDate
dateValue, err := result.BirthDate.Value()
if err != nil {
t.Errorf("BirthDate.Value() error = %v", err)
}
if dateValue == nil {
t.Error("BirthDate.Value() returned nil")
}
// Verify SqlTime
if !result.LoginTime.Valid {
t.Error("LoginTime.Valid = false, want true")
}
// Verify driver.Valuer for SqlTime
timeValue, err := result.LoginTime.Value()
if err != nil {
t.Errorf("LoginTime.Value() error = %v", err)
}
if timeValue == nil {
t.Error("LoginTime.Value() returned nil")
}
// Verify SqlJSONB for Meta
if len(result.Meta) == 0 {
t.Error("Meta is empty")
}
metaValue, err := result.Meta.Value()
if err != nil {
t.Errorf("Meta.Value() error = %v", err)
}
if metaValue == nil {
t.Error("Meta.Value() returned nil")
}
// Verify SqlJSONB for Tags
if len(result.Tags) == 0 {
t.Error("Tags is empty")
}
tagsValue, err := result.Tags.Value()
if err != nil {
t.Errorf("Tags.Value() error = %v", err)
}
if tagsValue == nil {
t.Error("Tags.Value() returned nil")
}
t.Logf("All SQL types successfully preserved driver.Valuer interface:")
t.Logf(" - SqlTimeStamp: %v", tsValue)
t.Logf(" - SqlDate: %v", dateValue)
t.Logf(" - SqlTime: %v", timeValue)
t.Logf(" - SqlJSONB (Meta): %v", metaValue)
t.Logf(" - SqlJSONB (Tags): %v", tagsValue)
}
func TestMapToStruct_SqlNull_NilValues(t *testing.T) {
// Test that SqlNull types handle nil values correctly
type TestModel struct {
ID int64 `bun:"id,pk" json:"id"`
UpdatedAt spectypes.SqlTimeStamp `bun:"updated_at" json:"updated_at"`
DeletedAt spectypes.SqlTimeStamp `bun:"deleted_at" json:"deleted_at"`
}
now := time.Now()
dataMap := map[string]interface{}{
"id": int64(200),
"updated_at": now,
"deleted_at": nil, // Explicitly nil
}
var result TestModel
err := reflection.MapToStruct(dataMap, &result)
if err != nil {
t.Fatalf("MapToStruct() error = %v", err)
}
// UpdatedAt should be valid
if !result.UpdatedAt.Valid {
t.Error("UpdatedAt.Valid = false, want true")
}
if !result.UpdatedAt.Val.Equal(now) {
t.Errorf("UpdatedAt.Val = %v, want %v", result.UpdatedAt.Val, now)
}
// DeletedAt should be invalid (null)
if result.DeletedAt.Valid {
t.Error("DeletedAt.Valid = true, want false (null)")
}
// Verify driver.Valuer for null SqlTimeStamp
deletedValue, err := result.DeletedAt.Value()
if err != nil {
t.Errorf("DeletedAt.Value() error = %v", err)
}
if deletedValue != nil {
t.Errorf("DeletedAt.Value() = %v, want nil", deletedValue)
}
}
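// Illustrative only (hypothetical helper and model, not part of the test file above): typical
// caller-side hydration of a model from a decoded JSON body, keeping SqlJSONB's driver.Valuer intact.
func hydrateArticle(payload map[string]interface{}) (spectypes.SqlJSONB, error) {
	type Article struct {
		ID    int64              `bun:"id,pk" json:"id"`
		Title string             `bun:"title" json:"title"`
		Meta  spectypes.SqlJSONB `bun:"meta" json:"meta"`
	}
	var a Article
	if err := reflection.MapToStruct(payload, &a); err != nil {
		return nil, err
	}
	return a.Meta, nil // a.Meta still satisfies driver.Valuer, so it can be handed to the ORM
}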


@@ -1687,3 +1687,201 @@ func TestGetRelationModel_WithTags(t *testing.T) {
})
}
}
func TestMapToStruct(t *testing.T) {
// Test model with various field types
type TestModel struct {
ID int64 `bun:"id,pk" json:"id"`
Name string `bun:"name" json:"name"`
Age int `bun:"age" json:"age"`
Active bool `bun:"active" json:"active"`
Score float64 `bun:"score" json:"score"`
Data []byte `bun:"data" json:"data"`
MetaJSON []byte `bun:"meta_json" json:"meta_json"`
}
tests := []struct {
name string
dataMap map[string]interface{}
expected TestModel
wantErr bool
}{
{
name: "Basic types conversion",
dataMap: map[string]interface{}{
"id": int64(123),
"name": "Test User",
"age": 30,
"active": true,
"score": 95.5,
},
expected: TestModel{
ID: 123,
Name: "Test User",
Age: 30,
Active: true,
Score: 95.5,
},
wantErr: false,
},
{
name: "Byte slice (SqlJSONB-like) from []byte",
dataMap: map[string]interface{}{
"id": int64(456),
"name": "JSON Test",
"data": []byte(`{"key":"value"}`),
},
expected: TestModel{
ID: 456,
Name: "JSON Test",
Data: []byte(`{"key":"value"}`),
},
wantErr: false,
},
{
name: "Byte slice from string",
dataMap: map[string]interface{}{
"id": int64(789),
"data": "string data",
},
expected: TestModel{
ID: 789,
Data: []byte("string data"),
},
wantErr: false,
},
{
name: "Byte slice from map (JSON marshal)",
dataMap: map[string]interface{}{
"id": int64(999),
"meta_json": map[string]interface{}{
"field1": "value1",
"field2": 42,
},
},
expected: TestModel{
ID: 999,
MetaJSON: []byte(`{"field1":"value1","field2":42}`),
},
wantErr: false,
},
{
name: "Byte slice from slice (JSON marshal)",
dataMap: map[string]interface{}{
"id": int64(111),
"meta_json": []interface{}{"item1", "item2", 3},
},
expected: TestModel{
ID: 111,
MetaJSON: []byte(`["item1","item2",3]`),
},
wantErr: false,
},
{
name: "Field matching by bun tag",
dataMap: map[string]interface{}{
"id": int64(222),
"name": "Tagged Field",
},
expected: TestModel{
ID: 222,
Name: "Tagged Field",
},
wantErr: false,
},
{
name: "Nil values",
dataMap: map[string]interface{}{
"id": int64(333),
"data": nil,
},
expected: TestModel{
ID: 333,
Data: nil,
},
wantErr: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
var result TestModel
err := MapToStruct(tt.dataMap, &result)
if (err != nil) != tt.wantErr {
t.Errorf("MapToStruct() error = %v, wantErr %v", err, tt.wantErr)
return
}
// Compare fields individually for better error messages
if result.ID != tt.expected.ID {
t.Errorf("ID = %v, want %v", result.ID, tt.expected.ID)
}
if result.Name != tt.expected.Name {
t.Errorf("Name = %v, want %v", result.Name, tt.expected.Name)
}
if result.Age != tt.expected.Age {
t.Errorf("Age = %v, want %v", result.Age, tt.expected.Age)
}
if result.Active != tt.expected.Active {
t.Errorf("Active = %v, want %v", result.Active, tt.expected.Active)
}
if result.Score != tt.expected.Score {
t.Errorf("Score = %v, want %v", result.Score, tt.expected.Score)
}
// For byte slices, compare as strings for JSON data
if tt.expected.Data != nil {
if string(result.Data) != string(tt.expected.Data) {
t.Errorf("Data = %s, want %s", string(result.Data), string(tt.expected.Data))
}
}
if tt.expected.MetaJSON != nil {
if string(result.MetaJSON) != string(tt.expected.MetaJSON) {
t.Errorf("MetaJSON = %s, want %s", string(result.MetaJSON), string(tt.expected.MetaJSON))
}
}
})
}
}
func TestMapToStruct_Errors(t *testing.T) {
type TestModel struct {
ID int `bun:"id" json:"id"`
}
tests := []struct {
name string
dataMap map[string]interface{}
target interface{}
wantErr bool
}{
{
name: "Nil dataMap",
dataMap: nil,
target: &TestModel{},
wantErr: true,
},
{
name: "Nil target",
dataMap: map[string]interface{}{"id": 1},
target: nil,
wantErr: true,
},
{
name: "Non-pointer target",
dataMap: map[string]interface{}{"id": 1},
target: TestModel{},
wantErr: true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
err := MapToStruct(tt.dataMap, tt.target)
if (err != nil) != tt.wantErr {
t.Errorf("MapToStruct() error = %v, wantErr %v", err, tt.wantErr)
}
})
}
}

View File

@@ -0,0 +1,118 @@
package resolvespec
import (
"context"
"crypto/sha256"
"encoding/hex"
"encoding/json"
"fmt"
"strings"
"time"
"github.com/bitechdev/ResolveSpec/pkg/cache"
"github.com/bitechdev/ResolveSpec/pkg/common"
)
// queryCacheKey represents the components used to build a cache key for query total count
type queryCacheKey struct {
TableName string `json:"table_name"`
Filters []common.FilterOption `json:"filters"`
Sort []common.SortOption `json:"sort"`
CustomSQLWhere string `json:"custom_sql_where,omitempty"`
CustomSQLOr string `json:"custom_sql_or,omitempty"`
CursorForward string `json:"cursor_forward,omitempty"`
CursorBackward string `json:"cursor_backward,omitempty"`
}
// cachedTotal represents a cached total count
type cachedTotal struct {
Total int `json:"total"`
}
// buildQueryCacheKey builds a cache key from query parameters for total count caching
func buildQueryCacheKey(tableName string, filters []common.FilterOption, sort []common.SortOption, customWhere, customOr string) string {
key := queryCacheKey{
TableName: tableName,
Filters: filters,
Sort: sort,
CustomSQLWhere: customWhere,
CustomSQLOr: customOr,
}
// Serialize to JSON for consistent hashing
jsonData, err := json.Marshal(key)
if err != nil {
// Fallback to simple string concatenation if JSON fails
return hashString(fmt.Sprintf("%s_%v_%v_%s_%s", tableName, filters, sort, customWhere, customOr))
}
return hashString(string(jsonData))
}
// buildExtendedQueryCacheKey builds a cache key for extended query options with cursor pagination
func buildExtendedQueryCacheKey(tableName string, filters []common.FilterOption, sort []common.SortOption,
customWhere, customOr string, cursorFwd, cursorBwd string) string {
key := queryCacheKey{
TableName: tableName,
Filters: filters,
Sort: sort,
CustomSQLWhere: customWhere,
CustomSQLOr: customOr,
CursorForward: cursorFwd,
CursorBackward: cursorBwd,
}
// Serialize to JSON for consistent hashing
jsonData, err := json.Marshal(key)
if err != nil {
// Fallback to simple string concatenation if JSON fails
return hashString(fmt.Sprintf("%s_%v_%v_%s_%s_%s_%s",
tableName, filters, sort, customWhere, customOr, cursorFwd, cursorBwd))
}
return hashString(string(jsonData))
}
// hashString computes SHA256 hash of a string
func hashString(s string) string {
h := sha256.New()
h.Write([]byte(s))
return hex.EncodeToString(h.Sum(nil))
}
// getQueryTotalCacheKey returns a formatted cache key for storing/retrieving total count
func getQueryTotalCacheKey(hash string) string {
return fmt.Sprintf("query_total:%s", hash)
}
// buildCacheTags creates cache tags from schema and table name
func buildCacheTags(schema, tableName string) []string {
return []string{
fmt.Sprintf("schema:%s", strings.ToLower(schema)),
fmt.Sprintf("table:%s", strings.ToLower(tableName)),
}
}
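// Illustrative (hypothetical values): buildCacheTags("Core", "Users") returns
//   []string{"schema:core", "table:users"}
// so every cached total stored with these tags can be dropped when core.users changes.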
// setQueryTotalCache stores a query total in the cache with schema and table tags
func setQueryTotalCache(ctx context.Context, cacheKey string, total int, schema, tableName string, ttl time.Duration) error {
c := cache.GetDefaultCache()
cacheData := cachedTotal{Total: total}
tags := buildCacheTags(schema, tableName)
return c.SetWithTags(ctx, cacheKey, cacheData, ttl, tags)
}
// invalidateCacheForTags removes all cached items matching the specified tags
func invalidateCacheForTags(ctx context.Context, tags []string) error {
c := cache.GetDefaultCache()
// Invalidate for each tag
for _, tag := range tags {
if err := c.DeleteByTag(ctx, tag); err != nil {
return err
}
}
return nil
}
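// Illustrative only: how these helpers are intended to compose. The wrapper below is a
// hypothetical sketch, not part of this file; the handler wires the same steps inline.
func cachedTotalCount(ctx context.Context, schema, tableName string,
	filters []common.FilterOption, sort []common.SortOption,
	countFn func() (int, error)) (int, error) {
	key := getQueryTotalCacheKey(buildQueryCacheKey(tableName, filters, sort, "", ""))
	var cached cachedTotal
	if err := cache.GetDefaultCache().Get(ctx, key, &cached); err == nil {
		return cached.Total, nil // cache hit
	}
	total, err := countFn() // cache miss: run the real COUNT query
	if err != nil {
		return 0, err
	}
	// Tagged write: create/update/delete handlers call invalidateCacheForTags with the same tags.
	_ = setQueryTotalCache(ctx, key, total, schema, tableName, 2*time.Minute)
	return total, nil
}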


@@ -2,6 +2,7 @@ package resolvespec
import (
"context"
"database/sql"
"encoding/json"
"fmt"
"net/http"
@@ -330,19 +331,17 @@ func (h *Handler) handleRead(ctx context.Context, w common.ResponseWriter, id st
// Use extended cache key if cursors are present
var cacheKeyHash string
if len(options.CursorForward) > 0 || len(options.CursorBackward) > 0 {
-cacheKeyHash = cache.BuildExtendedQueryCacheKey(
+cacheKeyHash = buildExtendedQueryCacheKey(
tableName,
options.Filters,
options.Sort,
"", // No custom SQL WHERE in resolvespec
"", // No custom SQL OR in resolvespec
-nil, // No expand options in resolvespec
-false, // distinct not used here
options.CursorForward,
options.CursorBackward,
)
} else {
-cacheKeyHash = cache.BuildQueryCacheKey(
+cacheKeyHash = buildQueryCacheKey(
tableName,
options.Filters,
options.Sort,
@@ -350,10 +349,10 @@ func (h *Handler) handleRead(ctx context.Context, w common.ResponseWriter, id st
"", // No custom SQL OR in resolvespec "", // No custom SQL OR in resolvespec
) )
} }
cacheKey := cache.GetQueryTotalCacheKey(cacheKeyHash) cacheKey := getQueryTotalCacheKey(cacheKeyHash)
// Try to retrieve from cache // Try to retrieve from cache
var cachedTotal cache.CachedTotal var cachedTotal cachedTotal
err := cache.GetDefaultCache().Get(ctx, cacheKey, &cachedTotal) err := cache.GetDefaultCache().Get(ctx, cacheKey, &cachedTotal)
if err == nil { if err == nil {
total = cachedTotal.Total total = cachedTotal.Total
@@ -370,10 +369,9 @@ func (h *Handler) handleRead(ctx context.Context, w common.ResponseWriter, id st
total = count
logger.Debug("Total records (from query): %d", total)
-// Store in cache
+// Store in cache with schema and table tags
cacheTTL := time.Minute * 2 // Default 2 minutes TTL
-cacheData := cache.CachedTotal{Total: total}
-if err := cache.GetDefaultCache().Set(ctx, cacheKey, cacheData, cacheTTL); err != nil {
+if err := setQueryTotalCache(ctx, cacheKey, total, schema, tableName, cacheTTL); err != nil {
logger.Warn("Failed to cache query total: %v", err)
// Don't fail the request if caching fails
} else {
@@ -463,6 +461,11 @@ func (h *Handler) handleCreate(ctx context.Context, w common.ResponseWriter, dat
return return
} }
logger.Info("Successfully created record with nested data, ID: %v", result.ID) logger.Info("Successfully created record with nested data, ID: %v", result.ID)
// Invalidate cache for this table
cacheTags := buildCacheTags(schema, tableName)
if err := invalidateCacheForTags(ctx, cacheTags); err != nil {
logger.Warn("Failed to invalidate cache for table %s: %v", tableName, err)
}
h.sendResponse(w, result.Data, nil) h.sendResponse(w, result.Data, nil)
return return
} }
@@ -479,6 +482,11 @@ func (h *Handler) handleCreate(ctx context.Context, w common.ResponseWriter, dat
return return
} }
logger.Info("Successfully created record, rows affected: %d", result.RowsAffected()) logger.Info("Successfully created record, rows affected: %d", result.RowsAffected())
// Invalidate cache for this table
cacheTags := buildCacheTags(schema, tableName)
if err := invalidateCacheForTags(ctx, cacheTags); err != nil {
logger.Warn("Failed to invalidate cache for table %s: %v", tableName, err)
}
h.sendResponse(w, v, nil) h.sendResponse(w, v, nil)
case []map[string]interface{}: case []map[string]interface{}:
@@ -517,6 +525,11 @@ func (h *Handler) handleCreate(ctx context.Context, w common.ResponseWriter, dat
return return
} }
logger.Info("Successfully created %d records with nested data", len(results)) logger.Info("Successfully created %d records with nested data", len(results))
// Invalidate cache for this table
cacheTags := buildCacheTags(schema, tableName)
if err := invalidateCacheForTags(ctx, cacheTags); err != nil {
logger.Warn("Failed to invalidate cache for table %s: %v", tableName, err)
}
h.sendResponse(w, results, nil) h.sendResponse(w, results, nil)
return return
} }
@@ -540,6 +553,11 @@ func (h *Handler) handleCreate(ctx context.Context, w common.ResponseWriter, dat
return return
} }
logger.Info("Successfully created %d records", len(v)) logger.Info("Successfully created %d records", len(v))
// Invalidate cache for this table
cacheTags := buildCacheTags(schema, tableName)
if err := invalidateCacheForTags(ctx, cacheTags); err != nil {
logger.Warn("Failed to invalidate cache for table %s: %v", tableName, err)
}
h.sendResponse(w, v, nil) h.sendResponse(w, v, nil)
case []interface{}: case []interface{}:
@@ -583,6 +601,11 @@ func (h *Handler) handleCreate(ctx context.Context, w common.ResponseWriter, dat
return return
} }
logger.Info("Successfully created %d records with nested data", len(results)) logger.Info("Successfully created %d records with nested data", len(results))
// Invalidate cache for this table
cacheTags := buildCacheTags(schema, tableName)
if err := invalidateCacheForTags(ctx, cacheTags); err != nil {
logger.Warn("Failed to invalidate cache for table %s: %v", tableName, err)
}
h.sendResponse(w, results, nil) h.sendResponse(w, results, nil)
return return
} }
@@ -610,6 +633,11 @@ func (h *Handler) handleCreate(ctx context.Context, w common.ResponseWriter, dat
return return
} }
logger.Info("Successfully created %d records", len(v)) logger.Info("Successfully created %d records", len(v))
// Invalidate cache for this table
cacheTags := buildCacheTags(schema, tableName)
if err := invalidateCacheForTags(ctx, cacheTags); err != nil {
logger.Warn("Failed to invalidate cache for table %s: %v", tableName, err)
}
h.sendResponse(w, list, nil) h.sendResponse(w, list, nil)
default: default:
@@ -660,6 +688,11 @@ func (h *Handler) handleUpdate(ctx context.Context, w common.ResponseWriter, url
return return
} }
logger.Info("Successfully updated record with nested data, rows: %d", result.AffectedRows) logger.Info("Successfully updated record with nested data, rows: %d", result.AffectedRows)
// Invalidate cache for this table
cacheTags := buildCacheTags(schema, tableName)
if err := invalidateCacheForTags(ctx, cacheTags); err != nil {
logger.Warn("Failed to invalidate cache for table %s: %v", tableName, err)
}
h.sendResponse(w, result.Data, nil) h.sendResponse(w, result.Data, nil)
return return
} }
@@ -696,6 +729,11 @@ func (h *Handler) handleUpdate(ctx context.Context, w common.ResponseWriter, url
} }
logger.Info("Successfully updated %d records", result.RowsAffected()) logger.Info("Successfully updated %d records", result.RowsAffected())
// Invalidate cache for this table
cacheTags := buildCacheTags(schema, tableName)
if err := invalidateCacheForTags(ctx, cacheTags); err != nil {
logger.Warn("Failed to invalidate cache for table %s: %v", tableName, err)
}
h.sendResponse(w, data, nil) h.sendResponse(w, data, nil)
case []map[string]interface{}: case []map[string]interface{}:
@@ -734,6 +772,11 @@ func (h *Handler) handleUpdate(ctx context.Context, w common.ResponseWriter, url
return return
} }
logger.Info("Successfully updated %d records with nested data", len(results)) logger.Info("Successfully updated %d records with nested data", len(results))
// Invalidate cache for this table
cacheTags := buildCacheTags(schema, tableName)
if err := invalidateCacheForTags(ctx, cacheTags); err != nil {
logger.Warn("Failed to invalidate cache for table %s: %v", tableName, err)
}
h.sendResponse(w, results, nil) h.sendResponse(w, results, nil)
return return
} }
@@ -757,6 +800,11 @@ func (h *Handler) handleUpdate(ctx context.Context, w common.ResponseWriter, url
return return
} }
logger.Info("Successfully updated %d records", len(updates)) logger.Info("Successfully updated %d records", len(updates))
// Invalidate cache for this table
cacheTags := buildCacheTags(schema, tableName)
if err := invalidateCacheForTags(ctx, cacheTags); err != nil {
logger.Warn("Failed to invalidate cache for table %s: %v", tableName, err)
}
h.sendResponse(w, updates, nil) h.sendResponse(w, updates, nil)
case []interface{}: case []interface{}:
@@ -799,6 +847,11 @@ func (h *Handler) handleUpdate(ctx context.Context, w common.ResponseWriter, url
return return
} }
logger.Info("Successfully updated %d records with nested data", len(results)) logger.Info("Successfully updated %d records with nested data", len(results))
// Invalidate cache for this table
cacheTags := buildCacheTags(schema, tableName)
if err := invalidateCacheForTags(ctx, cacheTags); err != nil {
logger.Warn("Failed to invalidate cache for table %s: %v", tableName, err)
}
h.sendResponse(w, results, nil) h.sendResponse(w, results, nil)
return return
} }
@@ -826,6 +879,11 @@ func (h *Handler) handleUpdate(ctx context.Context, w common.ResponseWriter, url
return return
} }
logger.Info("Successfully updated %d records", len(list)) logger.Info("Successfully updated %d records", len(list))
// Invalidate cache for this table
cacheTags := buildCacheTags(schema, tableName)
if err := invalidateCacheForTags(ctx, cacheTags); err != nil {
logger.Warn("Failed to invalidate cache for table %s: %v", tableName, err)
}
h.sendResponse(w, list, nil) h.sendResponse(w, list, nil)
default: default:
@@ -872,6 +930,11 @@ func (h *Handler) handleDelete(ctx context.Context, w common.ResponseWriter, id
return return
} }
logger.Info("Successfully deleted %d records", len(v)) logger.Info("Successfully deleted %d records", len(v))
// Invalidate cache for this table
cacheTags := buildCacheTags(schema, tableName)
if err := invalidateCacheForTags(ctx, cacheTags); err != nil {
logger.Warn("Failed to invalidate cache for table %s: %v", tableName, err)
}
h.sendResponse(w, map[string]interface{}{"deleted": len(v)}, nil) h.sendResponse(w, map[string]interface{}{"deleted": len(v)}, nil)
return return
@@ -913,6 +976,11 @@ func (h *Handler) handleDelete(ctx context.Context, w common.ResponseWriter, id
return return
} }
logger.Info("Successfully deleted %d records", deletedCount) logger.Info("Successfully deleted %d records", deletedCount)
// Invalidate cache for this table
cacheTags := buildCacheTags(schema, tableName)
if err := invalidateCacheForTags(ctx, cacheTags); err != nil {
logger.Warn("Failed to invalidate cache for table %s: %v", tableName, err)
}
h.sendResponse(w, map[string]interface{}{"deleted": deletedCount}, nil) h.sendResponse(w, map[string]interface{}{"deleted": deletedCount}, nil)
return return
@@ -939,6 +1007,11 @@ func (h *Handler) handleDelete(ctx context.Context, w common.ResponseWriter, id
return return
} }
logger.Info("Successfully deleted %d records", deletedCount) logger.Info("Successfully deleted %d records", deletedCount)
// Invalidate cache for this table
cacheTags := buildCacheTags(schema, tableName)
if err := invalidateCacheForTags(ctx, cacheTags); err != nil {
logger.Warn("Failed to invalidate cache for table %s: %v", tableName, err)
}
h.sendResponse(w, map[string]interface{}{"deleted": deletedCount}, nil) h.sendResponse(w, map[string]interface{}{"deleted": deletedCount}, nil)
return return
@@ -957,7 +1030,29 @@ func (h *Handler) handleDelete(ctx context.Context, w common.ResponseWriter, id
return return
} }
-query := h.db.NewDelete().Table(tableName).Where(fmt.Sprintf("%s = ?", common.QuoteIdent(reflection.GetPrimaryKeyName(model))), id)
+// Get primary key name
pkName := reflection.GetPrimaryKeyName(model)
// First, fetch the record that will be deleted
modelType := reflect.TypeOf(model)
if modelType.Kind() == reflect.Ptr {
modelType = modelType.Elem()
}
recordToDelete := reflect.New(modelType).Interface()
selectQuery := h.db.NewSelect().Model(recordToDelete).Where(fmt.Sprintf("%s = ?", common.QuoteIdent(pkName)), id)
if err := selectQuery.ScanModel(ctx); err != nil {
if err == sql.ErrNoRows {
logger.Warn("Record not found for delete: %s = %s", pkName, id)
h.sendError(w, http.StatusNotFound, "not_found", "Record not found", err)
return
}
logger.Error("Error fetching record for delete: %v", err)
h.sendError(w, http.StatusInternalServerError, "fetch_error", "Error fetching record", err)
return
}
query := h.db.NewDelete().Table(tableName).Where(fmt.Sprintf("%s = ?", common.QuoteIdent(pkName)), id)
result, err := query.Exec(ctx)
if err != nil {
@@ -966,14 +1061,21 @@ func (h *Handler) handleDelete(ctx context.Context, w common.ResponseWriter, id
return
}
// Check if the record was actually deleted
if result.RowsAffected() == 0 {
-logger.Warn("No record found to delete with ID: %s", id)
-h.sendError(w, http.StatusNotFound, "not_found", "Record not found", nil)
+logger.Warn("No rows deleted for ID: %s", id)
+h.sendError(w, http.StatusNotFound, "not_found", "Record not found or already deleted", nil)
return
}
logger.Info("Successfully deleted record with ID: %s", id)
-h.sendResponse(w, nil, nil)
+// Return the deleted record data
// Invalidate cache for this table
cacheTags := buildCacheTags(schema, tableName)
if err := invalidateCacheForTags(ctx, cacheTags); err != nil {
logger.Warn("Failed to invalidate cache for table %s: %v", tableName, err)
}
h.sendResponse(w, recordToDelete, nil)
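// Net effect on the API (illustrative, endpoint path hypothetical):
//   DELETE /api/core/users/42
//   previously: the response carried no record data (nil payload)
//   now:        the row is fetched before deletion and returned in the response, and a
//               missing row yields 404 "Record not found"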
}
func (h *Handler) applyFilter(query common.SelectQuery, filter common.FilterOption) common.SelectQuery {


@@ -1,4 +1,4 @@
-package cache
+package restheadspec
import (
"context"
@@ -7,56 +7,42 @@ import (
"encoding/json" "encoding/json"
"fmt" "fmt"
"strings" "strings"
"time"
"github.com/bitechdev/ResolveSpec/pkg/cache"
"github.com/bitechdev/ResolveSpec/pkg/common" "github.com/bitechdev/ResolveSpec/pkg/common"
) )
-// QueryCacheKey represents the components used to build a cache key for query total count
-type QueryCacheKey struct {
+// expandOptionKey represents expand options for cache key
+type expandOptionKey struct {
Relation string `json:"relation"`
Where string `json:"where,omitempty"`
}
// queryCacheKey represents the components used to build a cache key for query total count
type queryCacheKey struct {
TableName string `json:"table_name"`
Filters []common.FilterOption `json:"filters"`
Sort []common.SortOption `json:"sort"`
CustomSQLWhere string `json:"custom_sql_where,omitempty"`
CustomSQLOr string `json:"custom_sql_or,omitempty"`
-Expand []ExpandOptionKey `json:"expand,omitempty"`
+Expand []expandOptionKey `json:"expand,omitempty"`
Distinct bool `json:"distinct,omitempty"`
CursorForward string `json:"cursor_forward,omitempty"`
CursorBackward string `json:"cursor_backward,omitempty"`
}
-// ExpandOptionKey represents expand options for cache key
-type ExpandOptionKey struct {
-Relation string `json:"relation"`
-Where string `json:"where,omitempty"`
+// cachedTotal represents a cached total count
+type cachedTotal struct {
+Total int `json:"total"`
}
// BuildQueryCacheKey builds a cache key from query parameters for total count caching // buildExtendedQueryCacheKey builds a cache key for extended query options (restheadspec)
// This is used to cache the total count of records matching a query
func BuildQueryCacheKey(tableName string, filters []common.FilterOption, sort []common.SortOption, customWhere, customOr string) string {
key := QueryCacheKey{
TableName: tableName,
Filters: filters,
Sort: sort,
CustomSQLWhere: customWhere,
CustomSQLOr: customOr,
}
// Serialize to JSON for consistent hashing
jsonData, err := json.Marshal(key)
if err != nil {
// Fallback to simple string concatenation if JSON fails
return hashString(fmt.Sprintf("%s_%v_%v_%s_%s", tableName, filters, sort, customWhere, customOr))
}
return hashString(string(jsonData))
}
// BuildExtendedQueryCacheKey builds a cache key for extended query options (restheadspec)
// Includes expand, distinct, and cursor pagination options // Includes expand, distinct, and cursor pagination options
func BuildExtendedQueryCacheKey(tableName string, filters []common.FilterOption, sort []common.SortOption, func buildExtendedQueryCacheKey(tableName string, filters []common.FilterOption, sort []common.SortOption,
customWhere, customOr string, expandOpts []interface{}, distinct bool, cursorFwd, cursorBwd string) string { customWhere, customOr string, expandOpts []interface{}, distinct bool, cursorFwd, cursorBwd string) string {
key := QueryCacheKey{ key := queryCacheKey{
TableName: tableName, TableName: tableName,
Filters: filters, Filters: filters,
Sort: sort, Sort: sort,
@@ -69,11 +55,11 @@ func BuildExtendedQueryCacheKey(tableName string, filters []common.FilterOption,
// Convert expand options to cache key format // Convert expand options to cache key format
if len(expandOpts) > 0 { if len(expandOpts) > 0 {
key.Expand = make([]ExpandOptionKey, 0, len(expandOpts)) key.Expand = make([]expandOptionKey, 0, len(expandOpts))
for _, exp := range expandOpts { for _, exp := range expandOpts {
// Type assert to get the expand option fields we care about for caching // Type assert to get the expand option fields we care about for caching
if expMap, ok := exp.(map[string]interface{}); ok { if expMap, ok := exp.(map[string]interface{}); ok {
expKey := ExpandOptionKey{} expKey := expandOptionKey{}
if rel, ok := expMap["relation"].(string); ok { if rel, ok := expMap["relation"].(string); ok {
expKey.Relation = rel expKey.Relation = rel
} }
@@ -83,7 +69,6 @@ func BuildExtendedQueryCacheKey(tableName string, filters []common.FilterOption,
key.Expand = append(key.Expand, expKey) key.Expand = append(key.Expand, expKey)
} }
} }
// Sort expand options for consistent hashing (already sorted by relation name above)
} }
// Serialize to JSON for consistent hashing // Serialize to JSON for consistent hashing
@@ -104,24 +89,38 @@ func hashString(s string) string {
return hex.EncodeToString(h.Sum(nil)) return hex.EncodeToString(h.Sum(nil))
} }
// GetQueryTotalCacheKey returns a formatted cache key for storing/retrieving total count // getQueryTotalCacheKey returns a formatted cache key for storing/retrieving total count
func GetQueryTotalCacheKey(hash string) string { func getQueryTotalCacheKey(hash string) string {
return fmt.Sprintf("query_total:%s", hash) return fmt.Sprintf("query_total:%s", hash)
} }
// CachedTotal represents a cached total count // buildCacheTags creates cache tags from schema and table name
type CachedTotal struct { func buildCacheTags(schema, tableName string) []string {
Total int `json:"total"` return []string{
fmt.Sprintf("schema:%s", strings.ToLower(schema)),
fmt.Sprintf("table:%s", strings.ToLower(tableName)),
}
} }
// InvalidateCacheForTable removes all cached totals for a specific table // setQueryTotalCache stores a query total in the cache with schema and table tags
// This should be called when data in the table changes (insert/update/delete) func setQueryTotalCache(ctx context.Context, cacheKey string, total int, schema, tableName string, ttl time.Duration) error {
func InvalidateCacheForTable(ctx context.Context, tableName string) error { c := cache.GetDefaultCache()
cache := GetDefaultCache() cacheData := cachedTotal{Total: total}
tags := buildCacheTags(schema, tableName)
// Build a pattern to match all query totals for this table return c.SetWithTags(ctx, cacheKey, cacheData, ttl, tags)
// Note: This requires pattern matching support in the provider }
pattern := fmt.Sprintf("query_total:*%s*", strings.ToLower(tableName))
// invalidateCacheForTags removes all cached items matching the specified tags
return cache.DeleteByPattern(ctx, pattern) func invalidateCacheForTags(ctx context.Context, tags []string) error {
c := cache.GetDefaultCache()
// Invalidate for each tag
for _, tag := range tags {
if err := c.DeleteByTag(ctx, tag); err != nil {
return err
}
}
return nil
} }


@@ -2,6 +2,7 @@ package restheadspec
import (
"context"
"database/sql"
"encoding/json"
"fmt"
"net/http"
@@ -481,8 +482,10 @@ func (h *Handler) handleRead(ctx context.Context, w common.ResponseWriter, id st
// Apply custom SQL WHERE clause (AND condition)
if options.CustomSQLWhere != "" {
logger.Debug("Applying custom SQL WHERE: %s", options.CustomSQLWhere)
-// Sanitize and allow preload table prefixes since custom SQL may reference multiple tables
-sanitizedWhere := common.SanitizeWhereClause(options.CustomSQLWhere, reflection.ExtractTableNameOnly(tableName), &options.RequestOptions)
+// First add table prefixes to unqualified columns (but skip columns inside function calls)
+prefixedWhere := common.AddTablePrefixToColumns(options.CustomSQLWhere, reflection.ExtractTableNameOnly(tableName))
// Then sanitize and allow preload table prefixes since custom SQL may reference multiple tables
sanitizedWhere := common.SanitizeWhereClause(prefixedWhere, reflection.ExtractTableNameOnly(tableName), &options.RequestOptions)
if sanitizedWhere != "" {
query = query.Where(sanitizedWhere)
}
@@ -491,8 +494,9 @@ func (h *Handler) handleRead(ctx context.Context, w common.ResponseWriter, id st
// Apply custom SQL WHERE clause (OR condition)
if options.CustomSQLOr != "" {
logger.Debug("Applying custom SQL OR: %s", options.CustomSQLOr)
customOr := common.AddTablePrefixToColumns(options.CustomSQLOr, reflection.ExtractTableNameOnly(tableName))
// Sanitize and allow preload table prefixes since custom SQL may reference multiple tables
-sanitizedOr := common.SanitizeWhereClause(options.CustomSQLOr, reflection.ExtractTableNameOnly(tableName), &options.RequestOptions)
+sanitizedOr := common.SanitizeWhereClause(customOr, reflection.ExtractTableNameOnly(tableName), &options.RequestOptions)
if sanitizedOr != "" {
query = query.WhereOr(sanitizedOr)
}
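// Illustrative two-step treatment of custom SQL (behaviour inferred from the calls above,
// example values hypothetical):
//   raw      := "status = 'open' AND lower(code) = 'abc'"
//   prefixed := common.AddTablePrefixToColumns(raw, "tickets")
//               // unqualified columns gain the table prefix; columns inside function calls are skipped
//   cleaned  := common.SanitizeWhereClause(prefixed, "tickets", &options.RequestOptions)
//               // returns "" when the clause is rejected, in which case it is not applied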
@@ -513,14 +517,22 @@ func (h *Handler) handleRead(ctx context.Context, w common.ResponseWriter, id st
direction = "DESC" direction = "DESC"
} }
logger.Debug("Applying sort: %s %s", sort.Column, direction) logger.Debug("Applying sort: %s %s", sort.Column, direction)
query = query.Order(fmt.Sprintf("%s %s", sort.Column, direction))
// Check if it's an expression (enclosed in brackets) - use directly without quoting
if strings.HasPrefix(sort.Column, "(") && strings.HasSuffix(sort.Column, ")") {
// For expressions, pass as raw SQL to prevent auto-quoting
query = query.OrderExpr(fmt.Sprintf("%s %s", sort.Column, direction))
} else {
// Regular column - let Bun handle quoting
query = query.Order(fmt.Sprintf("%s %s", sort.Column, direction))
}
}
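// Illustrative sort handling (assumed from the branch above, column names hypothetical):
//   sort.Column = "created_at"          -> query.Order("created_at DESC")              (ORM quotes the identifier)
//   sort.Column = "(priority * weight)" -> query.OrderExpr("(priority * weight) DESC") (raw expression, no quoting)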
// Get total count before pagination (unless skip count is requested)
var total int
if !options.SkipCount {
// Try to get from cache first (unless SkipCache is true)
-var cachedTotal *cache.CachedTotal
+var cachedTotalData *cachedTotal
var cacheKey string
if !options.SkipCache {
@@ -534,7 +546,7 @@ func (h *Handler) handleRead(ctx context.Context, w common.ResponseWriter, id st
} }
} }
cacheKeyHash := cache.BuildExtendedQueryCacheKey( cacheKeyHash := buildExtendedQueryCacheKey(
tableName, tableName,
options.Filters, options.Filters,
options.Sort, options.Sort,
@@ -545,22 +557,22 @@ func (h *Handler) handleRead(ctx context.Context, w common.ResponseWriter, id st
options.CursorForward, options.CursorForward,
options.CursorBackward, options.CursorBackward,
) )
cacheKey = cache.GetQueryTotalCacheKey(cacheKeyHash) cacheKey = getQueryTotalCacheKey(cacheKeyHash)
// Try to retrieve from cache // Try to retrieve from cache
cachedTotal = &cache.CachedTotal{} cachedTotalData = &cachedTotal{}
err := cache.GetDefaultCache().Get(ctx, cacheKey, cachedTotal) err := cache.GetDefaultCache().Get(ctx, cacheKey, cachedTotalData)
if err == nil { if err == nil {
total = cachedTotal.Total total = cachedTotalData.Total
logger.Debug("Total records (from cache): %d", total) logger.Debug("Total records (from cache): %d", total)
} else { } else {
logger.Debug("Cache miss for query total") logger.Debug("Cache miss for query total")
cachedTotal = nil cachedTotalData = nil
} }
} }
// If not in cache or cache skip, execute count query // If not in cache or cache skip, execute count query
if cachedTotal == nil { if cachedTotalData == nil {
count, err := query.Count(ctx) count, err := query.Count(ctx)
if err != nil { if err != nil {
logger.Error("Error counting records: %v", err) logger.Error("Error counting records: %v", err)
@@ -570,11 +582,10 @@ func (h *Handler) handleRead(ctx context.Context, w common.ResponseWriter, id st
total = count total = count
logger.Debug("Total records (from query): %d", total) logger.Debug("Total records (from query): %d", total)
// Store in cache (if caching is enabled) // Store in cache with schema and table tags (if caching is enabled)
if !options.SkipCache && cacheKey != "" { if !options.SkipCache && cacheKey != "" {
cacheTTL := time.Minute * 2 // Default 2 minutes TTL cacheTTL := time.Minute * 2 // Default 2 minutes TTL
cacheData := &cache.CachedTotal{Total: total} if err := setQueryTotalCache(ctx, cacheKey, total, schema, tableName, cacheTTL); err != nil {
if err := cache.GetDefaultCache().Set(ctx, cacheKey, cacheData, cacheTTL); err != nil {
logger.Warn("Failed to cache query total: %v", err) logger.Warn("Failed to cache query total: %v", err)
// Don't fail the request if caching fails // Don't fail the request if caching fails
} else { } else {
@@ -653,6 +664,14 @@ func (h *Handler) handleRead(ctx context.Context, w common.ResponseWriter, id st
return return
} }
// Check if a specific ID was requested but no record was found
resultCount := reflection.Len(modelPtr)
if id != "" && resultCount == 0 {
logger.Warn("Record not found for ID: %s", id)
h.sendError(w, http.StatusNotFound, "not_found", "Record not found", nil)
return
}
limit := 0 limit := 0
if options.Limit != nil { if options.Limit != nil {
limit = *options.Limit limit = *options.Limit
@@ -667,7 +686,7 @@ func (h *Handler) handleRead(ctx context.Context, w common.ResponseWriter, id st
metadata := &common.Metadata{ metadata := &common.Metadata{
Total: int64(total), Total: int64(total),
Count: int64(reflection.Len(modelPtr)), Count: int64(resultCount),
Filtered: int64(total), Filtered: int64(total),
Limit: limit, Limit: limit,
Offset: offset, Offset: offset,
@@ -748,8 +767,21 @@ func (h *Handler) applyPreloadWithRecursion(query common.SelectQuery, preload co
if len(preload.ComputedQL) > 0 { if len(preload.ComputedQL) > 0 {
// Get the base table name from the related model // Get the base table name from the related model
baseTableName := getTableNameFromModel(relatedModel) baseTableName := getTableNameFromModel(relatedModel)
// Convert the preload relation path to the Bun alias format
preloadAlias := relationPathToBunAlias(preload.Relation) // Convert the preload relation path to the appropriate alias format
// This is ORM-specific. Currently we only support Bun's format.
// TODO: Add support for other ORMs if needed
preloadAlias := ""
if h.db.GetUnderlyingDB() != nil {
// Check if we're using Bun by checking the type name
underlyingType := fmt.Sprintf("%T", h.db.GetUnderlyingDB())
if strings.Contains(underlyingType, "bun.DB") {
// Use Bun's alias format: lowercase with double underscores
preloadAlias = relationPathToBunAlias(preload.Relation)
}
// For GORM: GORM doesn't use the same alias format, and this fix
// may not be needed since GORM handles preloads differently
}
logger.Debug("Applying computed columns to preload %s (alias: %s, base table: %s)", logger.Debug("Applying computed columns to preload %s (alias: %s, base table: %s)",
preload.Relation, preloadAlias, baseTableName) preload.Relation, preloadAlias, baseTableName)
@@ -814,7 +846,14 @@ func (h *Handler) applyPreloadWithRecursion(query common.SelectQuery, preload co
// Apply sorting // Apply sorting
if len(preload.Sort) > 0 { if len(preload.Sort) > 0 {
for _, sort := range preload.Sort { for _, sort := range preload.Sort {
sq = sq.Order(fmt.Sprintf("%s %s", sort.Column, sort.Direction)) // Check if it's an expression (enclosed in brackets) - use directly without quoting
if strings.HasPrefix(sort.Column, "(") && strings.HasSuffix(sort.Column, ")") {
// For expressions, pass as raw SQL to prevent auto-quoting
sq = sq.OrderExpr(fmt.Sprintf("%s %s", sort.Column, sort.Direction))
} else {
// Regular column - let ORM handle quoting
sq = sq.Order(fmt.Sprintf("%s %s", sort.Column, sort.Direction))
}
} }
} }
@@ -1112,6 +1151,11 @@ func (h *Handler) handleCreate(ctx context.Context, w common.ResponseWriter, dat
} }
logger.Info("Successfully created %d record(s)", len(mergedResults)) logger.Info("Successfully created %d record(s)", len(mergedResults))
// Invalidate cache for this table
cacheTags := buildCacheTags(schema, tableName)
if err := invalidateCacheForTags(ctx, cacheTags); err != nil {
logger.Warn("Failed to invalidate cache for table %s: %v", tableName, err)
}
h.sendResponseWithOptions(w, responseData, nil, &options) h.sendResponseWithOptions(w, responseData, nil, &options)
} }
@@ -1207,8 +1251,19 @@ func (h *Handler) handleUpdate(ctx context.Context, w common.ResponseWriter, id
// Ensure ID is in the data map for the update
dataMap[pkName] = targetID
-// Create update query
-query := tx.NewUpdate().Table(tableName).SetMap(dataMap)
+// Populate model instance from dataMap to preserve custom types (like SqlJSONB)
+// Get the type of the model, handling both pointer and non-pointer types
modelType := reflect.TypeOf(model)
if modelType.Kind() == reflect.Ptr {
modelType = modelType.Elem()
}
modelInstance := reflect.New(modelType).Interface()
if err := reflection.MapToStruct(dataMap, modelInstance); err != nil {
return fmt.Errorf("failed to populate model from data: %w", err)
}
// Create update query using Model() to preserve custom types and driver.Valuer interfaces
query := tx.NewUpdate().Model(modelInstance)
query = query.Where(fmt.Sprintf("%s = ?", common.QuoteIdent(pkName)), targetID)
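// Why Model() instead of SetMap() here (illustrative): with SetMap the raw map value for a
// JSONB column reaches the driver as a plain Go map, bypassing driver.Valuer; hydrating the
// struct via reflection.MapToStruct first lets types like SqlJSONB serialize themselves.
//   old: tx.NewUpdate().Table(tableName).SetMap(dataMap)
//   new: tx.NewUpdate().Model(modelInstance).Where(fmt.Sprintf("%s = ?", common.QuoteIdent(pkName)), targetID)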
// Execute BeforeScan hooks - pass query chain so hooks can modify it // Execute BeforeScan hooks - pass query chain so hooks can modify it
@@ -1272,6 +1327,11 @@ func (h *Handler) handleUpdate(ctx context.Context, w common.ResponseWriter, id
} }
logger.Info("Successfully updated record with ID: %v", targetID) logger.Info("Successfully updated record with ID: %v", targetID)
// Invalidate cache for this table
cacheTags := buildCacheTags(schema, tableName)
if err := invalidateCacheForTags(ctx, cacheTags); err != nil {
logger.Warn("Failed to invalidate cache for table %s: %v", tableName, err)
}
h.sendResponseWithOptions(w, mergedData, nil, &options) h.sendResponseWithOptions(w, mergedData, nil, &options)
} }
@@ -1340,6 +1400,11 @@ func (h *Handler) handleDelete(ctx context.Context, w common.ResponseWriter, id
return return
} }
logger.Info("Successfully deleted %d records", deletedCount) logger.Info("Successfully deleted %d records", deletedCount)
// Invalidate cache for this table
cacheTags := buildCacheTags(schema, tableName)
if err := invalidateCacheForTags(ctx, cacheTags); err != nil {
logger.Warn("Failed to invalidate cache for table %s: %v", tableName, err)
}
h.sendResponse(w, map[string]interface{}{"deleted": deletedCount}, nil) h.sendResponse(w, map[string]interface{}{"deleted": deletedCount}, nil)
return return
@@ -1408,6 +1473,11 @@ func (h *Handler) handleDelete(ctx context.Context, w common.ResponseWriter, id
return return
} }
logger.Info("Successfully deleted %d records", deletedCount) logger.Info("Successfully deleted %d records", deletedCount)
// Invalidate cache for this table
cacheTags := buildCacheTags(schema, tableName)
if err := invalidateCacheForTags(ctx, cacheTags); err != nil {
logger.Warn("Failed to invalidate cache for table %s: %v", tableName, err)
}
h.sendResponse(w, map[string]interface{}{"deleted": deletedCount}, nil) h.sendResponse(w, map[string]interface{}{"deleted": deletedCount}, nil)
return return
@@ -1462,6 +1532,11 @@ func (h *Handler) handleDelete(ctx context.Context, w common.ResponseWriter, id
return return
} }
logger.Info("Successfully deleted %d records", deletedCount) logger.Info("Successfully deleted %d records", deletedCount)
// Invalidate cache for this table
cacheTags := buildCacheTags(schema, tableName)
if err := invalidateCacheForTags(ctx, cacheTags); err != nil {
logger.Warn("Failed to invalidate cache for table %s: %v", tableName, err)
}
h.sendResponse(w, map[string]interface{}{"deleted": deletedCount}, nil) h.sendResponse(w, map[string]interface{}{"deleted": deletedCount}, nil)
return return
@@ -1475,7 +1550,34 @@ func (h *Handler) handleDelete(ctx context.Context, w common.ResponseWriter, id
} }
// Single delete with URL ID // Single delete with URL ID
-// Execute BeforeDelete hooks
+if id == "" {
h.sendError(w, http.StatusBadRequest, "missing_id", "ID is required for delete", nil)
return
}
// Get primary key name
pkName := reflection.GetPrimaryKeyName(model)
// First, fetch the record that will be deleted
modelType := reflect.TypeOf(model)
if modelType.Kind() == reflect.Ptr {
modelType = modelType.Elem()
}
recordToDelete := reflect.New(modelType).Interface()
selectQuery := h.db.NewSelect().Model(recordToDelete).Where(fmt.Sprintf("%s = ?", common.QuoteIdent(pkName)), id)
if err := selectQuery.ScanModel(ctx); err != nil {
if err == sql.ErrNoRows {
logger.Warn("Record not found for delete: %s = %s", pkName, id)
h.sendError(w, http.StatusNotFound, "not_found", "Record not found", err)
return
}
logger.Error("Error fetching record for delete: %v", err)
h.sendError(w, http.StatusInternalServerError, "fetch_error", "Error fetching record", err)
return
}
// Execute BeforeDelete hooks with the record data
hookCtx := &HookContext{ hookCtx := &HookContext{
Context: ctx, Context: ctx,
Handler: h, Handler: h,
@@ -1486,6 +1588,7 @@ func (h *Handler) handleDelete(ctx context.Context, w common.ResponseWriter, id
ID: id, ID: id,
Writer: w, Writer: w,
Tx: h.db, Tx: h.db,
Data: recordToDelete,
} }
if err := h.hooks.Execute(BeforeDelete, hookCtx); err != nil { if err := h.hooks.Execute(BeforeDelete, hookCtx); err != nil {
@@ -1495,13 +1598,7 @@ func (h *Handler) handleDelete(ctx context.Context, w common.ResponseWriter, id
} }
query := h.db.NewDelete().Table(tableName) query := h.db.NewDelete().Table(tableName)
query = query.Where(fmt.Sprintf("%s = ?", common.QuoteIdent(pkName)), id)
if id == "" {
h.sendError(w, http.StatusBadRequest, "missing_id", "ID is required for delete", nil)
return
}
query = query.Where(fmt.Sprintf("%s = ?", common.QuoteIdent(reflection.GetPrimaryKeyName(model))), id)
// Execute BeforeScan hooks - pass query chain so hooks can modify it // Execute BeforeScan hooks - pass query chain so hooks can modify it
hookCtx.Query = query hookCtx.Query = query
@@ -1523,11 +1620,15 @@ func (h *Handler) handleDelete(ctx context.Context, w common.ResponseWriter, id
return return
} }
-// Execute AfterDelete hooks
-responseData := map[string]interface{}{
-"deleted": result.RowsAffected(),
+// Check if the record was actually deleted
+if result.RowsAffected() == 0 {
+logger.Warn("No rows deleted for ID: %s", id)
h.sendError(w, http.StatusNotFound, "not_found", "Record not found or already deleted", nil)
return
}
-hookCtx.Result = responseData
// Execute AfterDelete hooks with the deleted record data
hookCtx.Result = recordToDelete
hookCtx.Error = nil hookCtx.Error = nil
if err := h.hooks.Execute(AfterDelete, hookCtx); err != nil { if err := h.hooks.Execute(AfterDelete, hookCtx); err != nil {
@@ -1536,7 +1637,13 @@ func (h *Handler) handleDelete(ctx context.Context, w common.ResponseWriter, id
return return
} }
-h.sendResponse(w, responseData, nil)
+// Return the deleted record data
// Invalidate cache for this table
cacheTags := buildCacheTags(schema, tableName)
if err := invalidateCacheForTags(ctx, cacheTags); err != nil {
logger.Warn("Failed to invalidate cache for table %s: %v", tableName, err)
}
h.sendResponse(w, recordToDelete, nil)
} }
// mergeRecordWithRequest merges a database record with the original request data // mergeRecordWithRequest merges a database record with the original request data
@@ -2032,14 +2139,20 @@ func (h *Handler) sendResponse(w common.ResponseWriter, data interface{}, metada
// sendResponseWithOptions sends a response with optional formatting // sendResponseWithOptions sends a response with optional formatting
func (h *Handler) sendResponseWithOptions(w common.ResponseWriter, data interface{}, metadata *common.Metadata, options *ExtendedRequestOptions) { func (h *Handler) sendResponseWithOptions(w common.ResponseWriter, data interface{}, metadata *common.Metadata, options *ExtendedRequestOptions) {
w.SetHeader("Content-Type", "application/json")
if data == nil {
data = map[string]interface{}{}
w.WriteHeader(http.StatusPartialContent)
} else {
w.WriteHeader(http.StatusOK)
}
// Normalize single-record arrays to objects if requested // Normalize single-record arrays to objects if requested
if options != nil && options.SingleRecordAsObject { if options != nil && options.SingleRecordAsObject {
data = h.normalizeResultArray(data) data = h.normalizeResultArray(data)
} }
// Return data as-is without wrapping in common.Response // Return data as-is without wrapping in common.Response
w.SetHeader("Content-Type", "application/json")
w.WriteHeader(http.StatusOK)
if err := w.WriteJSON(data); err != nil { if err := w.WriteJSON(data); err != nil {
logger.Error("Failed to write JSON response: %v", err) logger.Error("Failed to write JSON response: %v", err)
} }
@@ -2049,7 +2162,7 @@ func (h *Handler) sendResponseWithOptions(w common.ResponseWriter, data interfac
// Returns the single element if data is a slice/array with exactly one element, otherwise returns data unchanged // Returns the single element if data is a slice/array with exactly one element, otherwise returns data unchanged
func (h *Handler) normalizeResultArray(data interface{}) interface{} { func (h *Handler) normalizeResultArray(data interface{}) interface{} {
if data == nil { if data == nil {
-return nil
+return map[string]interface{}{}
} }
// Use reflection to check if data is a slice or array // Use reflection to check if data is a slice or array
@@ -2058,18 +2171,41 @@ func (h *Handler) normalizeResultArray(data interface{}) interface{} {
dataValue = dataValue.Elem() dataValue = dataValue.Elem()
} }
-// Check if it's a slice or array with exactly one element
-if (dataValue.Kind() == reflect.Slice || dataValue.Kind() == reflect.Array) && dataValue.Len() == 1 {
-// Return the single element
-return dataValue.Index(0).Interface()
+// Check if it's a slice or array
+if dataValue.Kind() == reflect.Slice || dataValue.Kind() == reflect.Array {
+if dataValue.Len() == 1 {
+// Return the single element
return dataValue.Index(0).Interface()
} else if dataValue.Len() == 0 {
// Return empty object instead of empty array
return map[string]interface{}{}
}
} }
if dataValue.Kind() == reflect.String {
str := dataValue.String()
if str == "" || str == "null" {
return map[string]interface{}{}
}
}
return data
}
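// Illustrative normalization results when SingleRecordAsObject is set (User is a hypothetical model):
//   []User{{ID: 1}}            -> User{ID: 1}               (single element unwrapped)
//   []User{}                   -> map[string]interface{}{}  (empty slice becomes an object)
//   nil, "" or "null" string   -> map[string]interface{}{}
//   []User{{...}, {...}}       -> returned unchanged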
// sendFormattedResponse sends response with formatting options
func (h *Handler) sendFormattedResponse(w common.ResponseWriter, data interface{}, metadata *common.Metadata, options ExtendedRequestOptions) {
// Normalize single-record arrays to objects if requested
httpStatus := http.StatusOK
if data == nil {
data = map[string]interface{}{}
httpStatus = http.StatusPartialContent
} else {
dataLen := reflection.Len(data)
if dataLen == 0 {
httpStatus = http.StatusPartialContent
}
}
if options.SingleRecordAsObject {
data = h.normalizeResultArray(data)
}
@@ -2088,7 +2224,7 @@ func (h *Handler) sendFormattedResponse(w common.ResponseWriter, data interface{
switch options.ResponseFormat {
case "simple":
// Simple format: just return the data array
-w.WriteHeader(http.StatusOK)
w.WriteHeader(httpStatus)
if err := w.WriteJSON(data); err != nil {
logger.Error("Failed to write JSON response: %v", err)
}
@@ -2100,7 +2236,7 @@ func (h *Handler) sendFormattedResponse(w common.ResponseWriter, data interface{
if metadata != nil {
response["count"] = metadata.Total
}
-w.WriteHeader(http.StatusOK)
w.WriteHeader(httpStatus)
if err := w.WriteJSON(response); err != nil {
logger.Error("Failed to write JSON response: %v", err)
}
@@ -2111,7 +2247,7 @@ func (h *Handler) sendFormattedResponse(w common.ResponseWriter, data interface{
Data: data,
Metadata: metadata,
}
-w.WriteHeader(http.StatusOK)
w.WriteHeader(httpStatus)
if err := w.WriteJSON(response); err != nil {
logger.Error("Failed to write JSON response: %v", err)
}
@@ -2169,7 +2305,14 @@ func (h *Handler) FetchRowNumber(ctx context.Context, tableName string, pkName s
if strings.EqualFold(sort.Direction, "desc") {
direction = "DESC"
}
-sortParts = append(sortParts, fmt.Sprintf("%s.%s %s", tableName, sort.Column, direction))
// Check if it's an expression (enclosed in brackets) - use directly without table prefix
if strings.HasPrefix(sort.Column, "(") && strings.HasSuffix(sort.Column, ")") {
sortParts = append(sortParts, fmt.Sprintf("%s %s", sort.Column, direction))
} else {
// Regular column - add table prefix
sortParts = append(sortParts, fmt.Sprintf("%s.%s %s", tableName, sort.Column, direction))
}
}
sortSQL = strings.Join(sortParts, ", ")
} else {
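The sort handling above uses a bracketed column verbatim as an expression and prefixes plain columns with the table name when building the ORDER BY fragment. An illustrative sketch under that assumption; buildOrderBy is a hypothetical helper, not part of the handler:

```go
package main

import (
	"fmt"
	"strings"
)

// buildOrderBy is a hypothetical helper showing the rule above: a column
// wrapped in parentheses is treated as an expression and used verbatim,
// while a plain column is prefixed with the table name.
func buildOrderBy(tableName, column, direction string) string {
	if strings.HasPrefix(column, "(") && strings.HasSuffix(column, ")") {
		return fmt.Sprintf("%s %s", column, direction)
	}
	return fmt.Sprintf("%s.%s %s", tableName, column, direction)
}

func main() {
	fmt.Println(buildOrderBy("users", "name", "ASC"))
	// users.name ASC
	fmt.Println(buildOrderBy("users", "(COALESCE(nickname, name))", "DESC"))
	// (COALESCE(nickname, name)) DESC
}
```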
@@ -2378,6 +2521,55 @@ func (h *Handler) filterExtendedOptions(validator *common.ColumnValidator, optio
expandValidator := common.NewColumnValidator(relInfo.relatedModel)
// Filter columns using the related model's validator
filteredExpand.Columns = expandValidator.FilterValidColumns(expand.Columns)
// Filter sort columns in the expand Sort string
if expand.Sort != "" {
sortFields := strings.Split(expand.Sort, ",")
validSortFields := make([]string, 0, len(sortFields))
for _, sortField := range sortFields {
sortField = strings.TrimSpace(sortField)
if sortField == "" {
continue
}
// Extract column name (remove direction prefixes/suffixes)
colName := sortField
direction := ""
if strings.HasPrefix(sortField, "-") {
direction = "-"
colName = strings.TrimPrefix(sortField, "-")
} else if strings.HasPrefix(sortField, "+") {
direction = "+"
colName = strings.TrimPrefix(sortField, "+")
}
if strings.HasSuffix(strings.ToLower(colName), " desc") {
direction = " desc"
colName = strings.TrimSuffix(strings.ToLower(colName), " desc")
} else if strings.HasSuffix(strings.ToLower(colName), " asc") {
direction = " asc"
colName = strings.TrimSuffix(strings.ToLower(colName), " asc")
}
colName = strings.TrimSpace(colName)
// Validate the column name
if expandValidator.IsValidColumn(colName) {
validSortFields = append(validSortFields, direction+colName)
} else if strings.HasPrefix(colName, "(") && strings.HasSuffix(colName, ")") {
// Allow sort by expression/subquery, but validate for security
if common.IsSafeSortExpression(colName) {
validSortFields = append(validSortFields, direction+colName)
} else {
logger.Warn("Unsafe sort expression in expand '%s' removed: '%s'", expand.Relation, colName)
}
} else {
logger.Warn("Invalid column in expand '%s' sort '%s' removed", expand.Relation, colName)
}
}
filteredExpand.Sort = strings.Join(validSortFields, ",")
}
} else {
// If we can't find the relationship, log a warning and skip column filtering
logger.Warn("Cannot validate columns for unknown relation: %s", expand.Relation)
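The expand sort filtering above first splits off the direction token (a leading "-" or "+", or a trailing " asc"/" desc") before validating the column. A minimal sketch of that token parsing; splitSortToken is hypothetical, and the column and safe-expression checks are omitted:

```go
package main

import (
	"fmt"
	"strings"
)

// splitSortToken is a hypothetical helper mirroring the direction parsing
// used when filtering expand sort fields above. Column validation and
// safe-expression checks are intentionally left out of this sketch.
func splitSortToken(field string) (direction, column string) {
	column = strings.TrimSpace(field)
	switch {
	case strings.HasPrefix(column, "-"):
		return "-", strings.TrimPrefix(column, "-")
	case strings.HasPrefix(column, "+"):
		return "+", strings.TrimPrefix(column, "+")
	}
	lower := strings.ToLower(column)
	if strings.HasSuffix(lower, " desc") {
		return " desc", strings.TrimSpace(strings.TrimSuffix(lower, " desc"))
	}
	if strings.HasSuffix(lower, " asc") {
		return " asc", strings.TrimSpace(strings.TrimSuffix(lower, " asc"))
	}
	return "", column
}

func main() {
	for _, f := range []string{"-created_at", "title DESC", "(length(title)) asc"} {
		dir, col := splitSortToken(f)
		fmt.Printf("direction=%q column=%q\n", dir, col)
	}
}
```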

View File

@@ -529,19 +529,47 @@ func (h *Handler) parseSorting(options *ExtendedRequestOptions, value string) {
}
// parseCommaSeparated parses comma-separated values and trims whitespace
// It respects bracket nesting and only splits on commas outside of parentheses
func (h *Handler) parseCommaSeparated(value string) []string {
if value == "" {
return nil
}
-parts := strings.Split(value, ",")
-result := make([]string, 0, len(parts))
-for _, part := range parts {
-part = strings.TrimSpace(part)
-if part != "" {
-result = append(result, part)
result := make([]string, 0)
var current strings.Builder
nestingLevel := 0
for _, char := range value {
switch char {
case '(':
nestingLevel++
current.WriteRune(char)
case ')':
nestingLevel--
current.WriteRune(char)
case ',':
if nestingLevel == 0 {
// We're outside all brackets, so split here
part := strings.TrimSpace(current.String())
if part != "" {
result = append(result, part)
}
current.Reset()
} else {
// Inside brackets, keep the comma
current.WriteRune(char)
}
default:
current.WriteRune(char)
}
}
// Add the last part
part := strings.TrimSpace(current.String())
if part != "" {
result = append(result, part)
}
return result
}
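The reworked parseCommaSeparated splits only on commas outside parentheses, so a bracketed expression stays in one piece. A minimal standalone sketch of the expected behaviour (splitOutsideBrackets is hypothetical):

```go
package main

import (
	"fmt"
	"strings"
)

// splitOutsideBrackets is a hypothetical standalone version of the reworked
// parseCommaSeparated: commas inside parentheses no longer split the value,
// so expressions such as coalesce(a, b) survive as a single item.
func splitOutsideBrackets(value string) []string {
	var result []string
	var current strings.Builder
	nesting := 0
	for _, ch := range value {
		switch ch {
		case '(':
			nesting++
			current.WriteRune(ch)
		case ')':
			nesting--
			current.WriteRune(ch)
		case ',':
			if nesting == 0 {
				if part := strings.TrimSpace(current.String()); part != "" {
					result = append(result, part)
				}
				current.Reset()
			} else {
				current.WriteRune(ch)
			}
		default:
			current.WriteRune(ch)
		}
	}
	if part := strings.TrimSpace(current.String()); part != "" {
		result = append(result, part)
	}
	return result
}

func main() {
	fmt.Println(splitOutsideBrackets("id,name,(coalesce(nickname, name)) desc"))
	// [id name (coalesce(nickname, name)) desc]
}
```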

View File

@@ -1,3 +1,4 @@
//go:build integration
// +build integration
package restheadspec
@@ -21,12 +22,12 @@ import (
// Test models
type TestUser struct {
ID uint `gorm:"primaryKey" json:"id"`
Name string `gorm:"not null" json:"name"`
Email string `gorm:"uniqueIndex;not null" json:"email"`
Age int `json:"age"`
Active bool `gorm:"default:true" json:"active"`
CreatedAt time.Time `json:"created_at"`
Posts []TestPost `gorm:"foreignKey:UserID" json:"posts,omitempty"`
}
@@ -35,13 +36,13 @@ func (TestUser) TableName() string {
}
type TestPost struct {
ID uint `gorm:"primaryKey" json:"id"`
UserID uint `gorm:"not null" json:"user_id"`
Title string `gorm:"not null" json:"title"`
Content string `json:"content"`
Published bool `gorm:"default:false" json:"published"`
CreatedAt time.Time `json:"created_at"`
User *TestUser `gorm:"foreignKey:UserID" json:"user,omitempty"`
Comments []TestComment `gorm:"foreignKey:PostID" json:"comments,omitempty"`
}
@@ -54,7 +55,7 @@ type TestComment struct {
PostID uint `gorm:"not null" json:"post_id"`
Content string `gorm:"not null" json:"content"`
CreatedAt time.Time `json:"created_at"`
Post *TestPost `gorm:"foreignKey:PostID" json:"post,omitempty"`
}
func (TestComment) TableName() string {
@@ -401,7 +402,7 @@ func TestIntegration_GetMetadata(t *testing.T) {
muxRouter.ServeHTTP(w, req)
-if w.Code != http.StatusOK {
if !(w.Code == http.StatusOK || w.Code == http.StatusPartialContent) {
t.Errorf("Expected status 200, got %d. Body: %s", w.Code, w.Body.String())
}
@@ -492,7 +493,7 @@ func TestIntegration_QueryParamsOverHeaders(t *testing.T) {
muxRouter.ServeHTTP(w, req)
-if w.Code != http.StatusOK {
if !(w.Code == http.StatusOK || w.Code == http.StatusPartialContent) {
t.Errorf("Expected status 200, got %d", w.Code)
}

View File

@@ -1,5 +1,5 @@
-// Package common provides nullable SQL types with automatic casting and conversion methods.
// Package spectypes provides nullable SQL types with automatic casting and conversion methods.
-package common
package spectypes
import (
"database/sql"

View File

@@ -1,4 +1,4 @@
-package common
package spectypes
import (
"database/sql/driver"

View File

@@ -465,7 +465,7 @@ func processRequest(ctx context.Context) {
1. **Check collector is running:**
```bash
-docker-compose ps
podman compose ps
```
2. **Verify endpoint:**
@@ -476,7 +476,7 @@ func processRequest(ctx context.Context) {
3. **Check logs:**
```bash
-docker-compose logs otel-collector
podman compose logs otel-collector
```
### Disable Tracing

View File

@@ -14,33 +14,33 @@ NC='\033[0m' # No Color
echo -e "${GREEN}=== ResolveSpec Integration Tests ===${NC}\n" echo -e "${GREEN}=== ResolveSpec Integration Tests ===${NC}\n"
# Check if docker-compose is available # Check if podman compose is available
if ! command -v docker-compose &> /dev/null; then if ! command -v podman &> /dev/null; then
echo -e "${RED}Error: docker-compose is not installed${NC}" echo -e "${RED}Error: podman is not installed${NC}"
echo "Please install docker-compose or run PostgreSQL manually" echo "Please install podman or run PostgreSQL manually"
echo "See INTEGRATION_TESTS.md for details" echo "See INTEGRATION_TESTS.md for details"
exit 1 exit 1
fi fi
# Clean up any existing containers and networks from previous runs # Clean up any existing containers and networks from previous runs
echo -e "${YELLOW}Cleaning up existing containers and networks...${NC}" echo -e "${YELLOW}Cleaning up existing containers and networks...${NC}"
docker-compose down -v 2>/dev/null || true podman compose down -v 2>/dev/null || true
# Start PostgreSQL # Start PostgreSQL
echo -e "${YELLOW}Starting PostgreSQL...${NC}" echo -e "${YELLOW}Starting PostgreSQL...${NC}"
docker-compose up -d postgres-test podman compose up -d postgres-test
# Wait for PostgreSQL to be ready # Wait for PostgreSQL to be ready
echo -e "${YELLOW}Waiting for PostgreSQL to be ready...${NC}" echo -e "${YELLOW}Waiting for PostgreSQL to be ready...${NC}"
max_attempts=30 max_attempts=30
attempt=0 attempt=0
while ! docker-compose exec -T postgres-test pg_isready -U postgres > /dev/null 2>&1; do while ! podman compose exec -T postgres-test pg_isready -U postgres > /dev/null 2>&1; do
attempt=$((attempt + 1)) attempt=$((attempt + 1))
if [ $attempt -ge $max_attempts ]; then if [ $attempt -ge $max_attempts ]; then
echo -e "${RED}Error: PostgreSQL failed to start after ${max_attempts} seconds${NC}" echo -e "${RED}Error: PostgreSQL failed to start after ${max_attempts} seconds${NC}"
docker-compose logs postgres-test podman compose logs postgres-test
docker-compose down podman compose down
exit 1 exit 1
fi fi
sleep 1 sleep 1
@@ -51,8 +51,8 @@ echo -e "\n${GREEN}PostgreSQL is ready!${NC}\n"
# Create test databases
echo -e "${YELLOW}Creating test databases...${NC}"
-docker-compose exec -T postgres-test psql -U postgres -c "CREATE DATABASE resolvespec_test;" 2>/dev/null || echo " resolvespec_test already exists"
podman compose exec -T postgres-test psql -U postgres -c "CREATE DATABASE resolvespec_test;" 2>/dev/null || echo " resolvespec_test already exists"
-docker-compose exec -T postgres-test psql -U postgres -c "CREATE DATABASE restheadspec_test;" 2>/dev/null || echo " restheadspec_test already exists"
podman compose exec -T postgres-test psql -U postgres -c "CREATE DATABASE restheadspec_test;" 2>/dev/null || echo " restheadspec_test already exists"
echo -e "${GREEN}Test databases ready!${NC}\n"
# Determine which tests to run
@@ -79,6 +79,6 @@ fi
# Cleanup
echo -e "\n${YELLOW}Stopping PostgreSQL...${NC}"
-docker-compose down
podman compose down
exit $EXIT_CODE

View File

@@ -19,14 +19,14 @@ Integration tests validate the full functionality of both `pkg/resolvespec` and
- Go 1.19 or later
- PostgreSQL 12 or later
-- Docker and Docker Compose (optional, for easy setup)
- Podman and Podman Compose (optional, for easy setup)
-## Quick Start with Docker
## Quick Start with Podman
-### 1. Start PostgreSQL with Docker Compose
### 1. Start PostgreSQL with Podman Compose
```bash
-docker-compose up -d postgres-test
podman compose up -d postgres-test
```
This starts a PostgreSQL container with the following default settings:
@@ -52,7 +52,7 @@ go test -tags=integration ./pkg/restheadspec -v
### 3. Stop PostgreSQL
```bash
-docker-compose down
podman compose down
```
## Manual PostgreSQL Setup
@@ -161,7 +161,7 @@ If you see "connection refused" errors:
1. Check that PostgreSQL is running:
```bash
-docker-compose ps
podman compose ps
```
2. Verify connection parameters:
@@ -194,10 +194,10 @@ Each test automatically cleans up its data using `TRUNCATE`. If you need a fresh
```bash
# Stop and remove containers (removes data)
-docker-compose down -v
podman compose down -v
# Restart
-docker-compose up -d postgres-test
podman compose up -d postgres-test
```
## CI/CD Integration

View File

@@ -119,13 +119,13 @@ Integration tests require a PostgreSQL database and use the `// +build integrati
- PostgreSQL 12+ installed and running
- Create test databases manually (see below)
-### Setup with Docker
### Setup with Podman
1. **Start PostgreSQL**:
```bash
make docker-up
# or
-docker-compose up -d postgres-test
podman compose up -d postgres-test
```
2. **Run Tests**:
@@ -141,10 +141,10 @@ Integration tests require a PostgreSQL database and use the `// +build integrati
```bash
make docker-down
# or
-docker-compose down
podman compose down
```
-### Setup without Docker
### Setup without Podman
1. **Create Databases**:
```sql
@@ -289,8 +289,8 @@ go test -tags=integration ./pkg/resolvespec -v
**Problem**: "connection refused" or "database does not exist" **Problem**: "connection refused" or "database does not exist"
**Solutions**: **Solutions**:
1. Check PostgreSQL is running: `docker-compose ps` 1. Check PostgreSQL is running: `podman compose ps`
2. Verify databases exist: `docker-compose exec postgres-test psql -U postgres -l` 2. Verify databases exist: `podman compose exec postgres-test psql -U postgres -l`
3. Check environment variable: `echo $TEST_DATABASE_URL` 3. Check environment variable: `echo $TEST_DATABASE_URL`
4. Recreate databases: `make clean && make docker-up` 4. Recreate databases: `make clean && make docker-up`