Compare commits

...

24 Commits

Author · SHA1 · Message · Date
4ca1810d07 refactor(dctx): sort table columns and indexes for deterministic output
Some checks failed
Release / test (push) Failing after 31m18s
Release / release (push) Has been skipped
Release / pkg-aur (push) Has been skipped
Release / pkg-deb (push) Has been skipped
Release / pkg-rpm (push) Has been skipped
2026-04-26 12:50:39 +02:00
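Deterministic output here comes down to sorting before writing. A minimal sketch of the idea (type and field names are illustrative, not the actual dctx writer API):

```go
package main

import (
	"fmt"
	"sort"
)

// Table is a stand-in for the writer's internal model.
type Table struct {
	Columns []string
	Indexes []string
}

// normalize sorts columns and indexes by name so two runs over the
// same schema always emit byte-identical output.
func normalize(t *Table) {
	sort.Strings(t.Columns)
	sort.Strings(t.Indexes)
}

func main() {
	t := Table{Columns: []string{"name", "id"}, Indexes: []string{"ix_name", "ix_id"}}
	normalize(&t)
	fmt.Println(t.Columns, t.Indexes) // [id name] [ix_id ix_name]
}
```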
c0880cb076 feat(pkg): preserve PostgreSQL types in mapDataType function
Some checks failed
Release / test (push) Failing after 31m27s
Release / release (push) Has been skipped
Release / pkg-aur (push) Has been skipped
Release / pkg-deb (push) Has been skipped
Release / pkg-rpm (push) Has been skipped
* Add support for known PostgreSQL types and modifiers
* Implement canonicalization for PostgreSQL types
* Introduce unit tests for PostgreSQL type handling
2026-04-26 12:43:44 +02:00
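A rough sketch of what such type preservation can look like; the allow-list and fallback below are assumptions for illustration, not the real pkg tables:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// knownPgTypes is an illustrative allow-list; the real mapDataType
// presumably covers far more types.
var knownPgTypes = map[string]bool{
	"varchar": true, "numeric": true, "timestamptz": true, "uuid": true,
}

var typeRe = regexp.MustCompile(`^([a-z0-9_ ]+?)\s*(\(.*\))?$`)

// mapDataType keeps a known PostgreSQL type and its modifier (e.g.
// "(10,2)") intact instead of collapsing it to a generic type.
func mapDataType(raw string) string {
	m := typeRe.FindStringSubmatch(strings.ToLower(strings.TrimSpace(raw)))
	if m == nil {
		return raw
	}
	base, mod := m[1], m[2]
	if knownPgTypes[base] {
		return base + mod // preserve e.g. "numeric(10,2)"
	}
	return "text" // hypothetical fallback for unknown types
}

func main() {
	fmt.Println(mapDataType("NUMERIC(10,2)")) // numeric(10,2)
	fmt.Println(mapDataType("myvector(3)"))   // text
}
```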
988798998d test(drawdb): add test for converting column types with modifiers
* Implement tests to ensure explicit type modifiers are preserved during conversion.
* Validate behavior for varchar, numeric, and custom vector types.
2026-04-26 12:35:54 +02:00
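The shape of such a test might be as follows, with `convertType` as a hypothetical stand-in for the converter under test:

```go
package drawdb

import "testing"

// convertType is a stand-in; the real converter lives in the drawdb
// reader and is named differently.
func convertType(sqlType string) string { return sqlType }

func TestConvertTypePreservesModifiers(t *testing.T) {
	cases := map[string]string{
		"varchar(255)":  "varchar(255)",
		"numeric(10,2)": "numeric(10,2)",
		"vector(1536)":  "vector(1536)",
	}
	for in, want := range cases {
		if got := convertType(in); got != want {
			t.Errorf("convertType(%q) = %q, want %q", in, got, want)
		}
	}
}
```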
535a91d4be feat(docs): add comprehensive story of RelSpecGo's development journey 2026-04-08 22:21:24 +02:00
bd54e85727 chore(release): update package version to 1.0.44
All checks were successful
Release / pkg-deb (push) Successful in 29m54s
Release / pkg-rpm (push) Successful in 27m4s
Release / test (push) Successful in 30m26s
Release / release (push) Successful in 29m48s
Release / pkg-aur (push) Successful in 28m28s
2026-04-08 21:34:28 +02:00
b042b2d508 docs: 📝 Update documentation 2026-04-08 21:34:00 +02:00
af1733dc9a feat(pkg): update package description for clarity and consistency 2026-04-08 21:21:33 +02:00
389fff2b44 chore(release): update package version to 1.0.43
Some checks failed
Release / pkg-deb (push) Successful in 29m54s
Release / pkg-rpm (push) Successful in 28m15s
Release / release (push) Successful in 26m35s
Release / pkg-aur (push) Failing after 30m58s
Release / test (push) Successful in 29m19s
2026-04-08 20:59:23 +02:00
f331ba2b61 chore(release): update package version and add packaging files for AUR, Debian, and RPM 2026-04-08 20:59:11 +02:00
f4b8fc5382 feat(writers): add sortConstraints function to sort constraints by sequence and name
All checks were successful
CI / Test (1.24) (push) Successful in 29m11s
CI / Test (1.25) (push) Successful in 28m38s
CI / Lint (push) Successful in 29m38s
CI / Build (push) Successful in 29m42s
Integration Tests / Integration Tests (push) Successful in 29m26s
Release / Build and Release (push) Successful in 29m46s
2026-02-28 19:52:04 +02:00
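A sketch of the described ordering, assuming hypothetical `Sequence`/`Name` fields rather than the actual pkg/writers types:

```go
package main

import (
	"fmt"
	"sort"
)

type Constraint struct {
	Sequence int
	Name     string
}

// sortConstraints orders by Sequence first, then breaks ties by Name,
// matching the commit's description.
func sortConstraints(cs []Constraint) {
	sort.Slice(cs, func(i, j int) bool {
		if cs[i].Sequence != cs[j].Sequence {
			return cs[i].Sequence < cs[j].Sequence
		}
		return cs[i].Name < cs[j].Name
	})
}

func main() {
	cs := []Constraint{{2, "fk_user"}, {1, "pk_id"}, {2, "fk_account"}}
	sortConstraints(cs)
	fmt.Println(cs) // [{1 pk_id} {2 fk_account} {2 fk_user}]
}
```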
dc9172cc7c feat(templ): add support for --from-list flag and related tests
All checks were successful
CI / Test (1.24) (push) Successful in 29m0s
CI / Test (1.25) (push) Successful in 29m10s
CI / Build (push) Successful in 30m1s
CI / Lint (push) Successful in 29m43s
Integration Tests / Integration Tests (push) Successful in 29m6s
Release / Build and Release (push) Successful in 29m56s
2026-02-28 19:32:19 +02:00
ee88c07989 style(report, writers, graphql, prisma, typeorm): replace sb.WriteString with fmt.Fprintf for consistency
All checks were successful
CI / Test (1.24) (push) Successful in 26m1s
CI / Test (1.25) (push) Successful in 25m59s
CI / Build (push) Successful in 29m11s
CI / Lint (push) Successful in 28m32s
Integration Tests / Integration Tests (push) Successful in 29m16s
Release / Build and Release (push) Successful in 26m36s
2026-02-28 17:08:12 +02:00
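The substance of that change, in miniature: `strings.Builder` satisfies `io.Writer`, so `fmt.Fprintf` can format straight into it.

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	var sb strings.Builder

	// Before: manual concatenation pushed through WriteString.
	sb.WriteString("DROP TABLE " + "users" + ";\n")

	// After: Fprintf formats directly into the builder, no temporary
	// string needed, and matches the rest of the writers' style.
	fmt.Fprintf(&sb, "CREATE TABLE %s (%s);\n", "users", "id serial")

	fmt.Print(sb.String())
}
```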
ff1180524a feat(merge): add support for merging from a list of source files
Some checks failed
CI / Lint (push) Has been cancelled
CI / Build (push) Has been cancelled
CI / Test (1.25) (push) Has started running
CI / Test (1.24) (push) Has been cancelled
Integration Tests / Integration Tests (push) Has been cancelled
2026-02-28 17:06:49 +02:00
Hein
480038d51d feat(writers): quote default values based on SQL column type
Some checks failed
CI / Test (1.24) (push) Successful in 22m47s
CI / Test (1.25) (push) Successful in 22m35s
CI / Lint (push) Failing after 24m34s
CI / Build (push) Successful in 24m43s
Integration Tests / Integration Tests (push) Successful in 25m0s
Release / Build and Release (push) Successful in 21m46s
Bun and GORM struct tags now emit quoted defaults for string/date/time/UUID
columns (e.g. default:'disconnected') and unquoted defaults for numeric and
boolean columns (e.g. default:0, default:true). Function-call expressions
such as now() or gen_random_uuid() are never quoted regardless of type.

Adds QuoteDefaultValue(value, sqlType) helper in pkg/writers and updates
both type mappers and the bun writer tests accordingly.
2026-02-20 16:03:50 +02:00
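Based on the description above, a sketch of the helper; the type lists here are illustrative, not the full pkg/writers set:

```go
package main

import (
	"fmt"
	"strings"
)

// QuoteDefaultValue single-quotes defaults for string/date/time/UUID
// SQL types, leaves numeric and boolean defaults bare, and passes
// function-call expressions through untouched regardless of type.
func QuoteDefaultValue(value, sqlType string) string {
	if strings.Contains(value, "(") && strings.HasSuffix(value, ")") {
		return value // e.g. now(), gen_random_uuid()
	}
	t := strings.ToLower(sqlType)
	for _, prefix := range []string{"varchar", "text", "char", "date", "time", "uuid"} {
		if strings.HasPrefix(t, prefix) {
			return "'" + value + "'"
		}
	}
	return value // numeric, boolean, etc.
}

func main() {
	fmt.Println(QuoteDefaultValue("disconnected", "varchar(32)")) // 'disconnected'
	fmt.Println(QuoteDefaultValue("0", "integer"))                // 0
	fmt.Println(QuoteDefaultValue("now()", "timestamptz"))        // now()
}
```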
77436757c8 fix(type_mapper): update timestamp type mapping to use SqlTimeStamp
All checks were successful
CI / Test (1.24) (push) Successful in 25m13s
CI / Test (1.25) (push) Successful in 25m10s
CI / Build (push) Successful in 26m2s
CI / Lint (push) Successful in 25m39s
Release / Build and Release (push) Successful in 25m49s
Integration Tests / Integration Tests (push) Successful in 25m26s
2026-02-08 21:35:27 +02:00
5e6f03e412 feat(type_mapper): add support for serial types and auto-increment tags
All checks were successful
CI / Test (1.24) (push) Successful in 24m39s
CI / Test (1.25) (push) Successful in 24m24s
CI / Build (push) Successful in 25m39s
CI / Lint (push) Successful in 25m9s
Integration Tests / Integration Tests (push) Successful in 25m15s
Release / Build and Release (push) Successful in 25m21s
2026-02-08 17:48:58 +02:00
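A sketch of serial handling under those assumptions; the tag spelling follows common GORM usage and may differ from the writer's exact output:

```go
package main

import "fmt"

// mapSerial turns PostgreSQL serial columns into plain integer
// columns carrying an auto-increment tag.
func mapSerial(sqlType string) (goType, tag string, ok bool) {
	switch sqlType {
	case "smallserial", "serial2":
		return "int16", "autoIncrement", true
	case "serial", "serial4":
		return "int32", "autoIncrement", true
	case "bigserial", "serial8":
		return "int64", "autoIncrement", true
	}
	return "", "", false
}

func main() {
	goType, tag, _ := mapSerial("bigserial")
	fmt.Printf("%s `gorm:%q`\n", goType, tag) // int64 `gorm:"autoIncrement"`
}
```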
1dcbc79387 feat(pgsql): enhance data type mapping to support serial types
All checks were successful
CI / Test (1.25) (push) Successful in 24m18s
CI / Test (1.24) (push) Successful in 24m6s
CI / Build (push) Successful in 25m14s
CI / Lint (push) Successful in 24m47s
Release / Build and Release (push) Successful in 25m37s
Integration Tests / Integration Tests (push) Successful in 25m9s
2026-02-08 17:31:28 +02:00
59c4a5ebf8 test(writer): enhance has-many relationship tests with join tag verification
All checks were successful
CI / Test (1.24) (push) Successful in 25m9s
CI / Test (1.25) (push) Successful in 25m0s
CI / Build (push) Successful in 25m57s
CI / Lint (push) Successful in 25m29s
Release / Build and Release (push) Successful in 25m38s
Integration Tests / Integration Tests (push) Successful in 25m19s
2026-02-08 15:20:20 +02:00
091e1913ee feat(version): retrieve version and build date from VCS if unset
All checks were successful
CI / Test (1.24) (push) Successful in 25m19s
CI / Test (1.25) (push) Successful in 25m1s
CI / Build (push) Successful in 25m56s
CI / Lint (push) Successful in 25m33s
Integration Tests / Integration Tests (push) Successful in 25m32s
2026-02-08 15:04:03 +02:00
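The standard Go mechanism for this is `runtime/debug.ReadBuildInfo`, which exposes `vcs.revision` and `vcs.time` in module builds. A sketch, assuming that is the mechanism this commit uses:

```go
package main

import (
	"fmt"
	"runtime/debug"
)

var version, buildDate string // normally injected via -ldflags

// vcsFallback fills version/buildDate from the VCS metadata Go embeds
// in the binary, but only when the linker left them unset.
func vcsFallback() {
	info, ok := debug.ReadBuildInfo()
	if !ok {
		return
	}
	for _, s := range info.Settings {
		switch s.Key {
		case "vcs.revision":
			if version == "" {
				version = s.Value
			}
		case "vcs.time":
			if buildDate == "" {
				buildDate = s.Value
			}
		}
	}
}

func main() {
	vcsFallback()
	fmt.Println("version:", version, "built:", buildDate)
}
```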
0e6e94797c feat(version): add version command to display version and build date
All checks were successful
CI / Test (1.24) (push) Successful in 25m14s
CI / Test (1.25) (push) Successful in 25m10s
CI / Build (push) Successful in 26m0s
CI / Lint (push) Successful in 25m38s
Release / Build and Release (push) Successful in 25m46s
Integration Tests / Integration Tests (push) Successful in 25m13s
2026-02-08 14:58:39 +02:00
a033349c76 refactor(writers): simplify model name generation by removing singularization
All checks were successful
CI / Test (1.24) (push) Successful in 25m15s
CI / Test (1.25) (push) Successful in 25m8s
CI / Build (push) Successful in 26m4s
CI / Lint (push) Successful in 25m37s
Integration Tests / Integration Tests (push) Successful in 25m33s
Release / Build and Release (push) Successful in 23m40s
2026-02-08 14:50:39 +02:00
466d657ea7 feat(mssql): add MSSQL writer for generating DDL from database schema
All checks were successful
CI / Test (1.24) (push) Successful in 23m27s
CI / Test (1.25) (push) Successful in 23m4s
CI / Lint (push) Successful in 24m57s
CI / Build (push) Successful in 25m15s
Integration Tests / Integration Tests (push) Successful in 25m42s
- Implement MSSQL writer to generate SQL scripts for creating schemas, tables, and constraints.
- Support for identity columns, indexes, and extended properties.
- Add tests for column definitions, table creation, primary keys, foreign keys, and comments.
- Include testing guide and sample schema for integration tests.
2026-02-07 16:09:27 +02:00
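A minimal sketch of the bracket-quoted DDL style such a writer emits; the layout and column types are illustrative, not the writer's actual output:

```go
package main

import (
	"fmt"
	"strings"
)

// createTable renders a T-SQL CREATE TABLE with bracket-quoted
// identifiers and IDENTITY for auto-increment columns.
func createTable(schema, table string, cols [][2]string) string {
	var b strings.Builder
	fmt.Fprintf(&b, "CREATE TABLE [%s].[%s] (\n", schema, table)
	defs := make([]string, 0, len(cols))
	for _, c := range cols {
		defs = append(defs, fmt.Sprintf("    [%s] %s", c[0], c[1]))
	}
	b.WriteString(strings.Join(defs, ",\n"))
	b.WriteString("\n);")
	return b.String()
}

func main() {
	fmt.Println(createTable("dbo", "users", [][2]string{
		{"id", "INT IDENTITY(1,1) PRIMARY KEY"},
		{"name", "NVARCHAR(255) NOT NULL"},
	}))
}
```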
47bf748fd5 chore: ⬆️ Vendor for new deps 2026-02-07 15:51:20 +02:00
88589e00e7 docs: update AI usage declaration for clarity and compliance
All checks were successful
CI / Test (1.24) (push) Successful in 25m31s
CI / Test (1.25) (push) Successful in 25m22s
CI / Build (push) Successful in 26m11s
CI / Lint (push) Successful in 25m42s
Integration Tests / Integration Tests (push) Successful in 25m50s
2026-02-07 10:16:19 +02:00
205 changed files with 89467 additions and 833 deletions

View File

@@ -1,5 +0,0 @@
---
description: Build the RelSpec binary
---
Build the RelSpec project by running `make build`. Report the build status and any errors encountered.

View File

@@ -1,9 +0,0 @@
---
description: Generate test coverage report
---
Generate and display test coverage for RelSpec:
1. Run `go test -cover ./...` to get coverage percentage
2. If detailed coverage is needed, run `go test -coverprofile=coverage.out ./...` and then `go tool cover -html=coverage.out` to generate HTML report
Show coverage statistics and identify areas needing more tests.

View File

@@ -1,10 +0,0 @@
---
description: Run Go linters on the codebase
---
Run linting tools on the RelSpec codebase:
1. First run `gofmt -l .` to check formatting
2. If golangci-lint is available, run `golangci-lint run ./...`
3. Run `go vet ./...` to check for suspicious constructs
Report any issues found and suggest fixes if needed.

View File

@@ -1,5 +0,0 @@
---
description: Run all tests for the RelSpec project
---
Run `go test ./...` to execute all unit tests in the project. Show a summary of the results and highlight any failures.

.codex Normal file · 0 lines
View File

View File

@@ -0,0 +1,327 @@
name: Release

on:
  push:
    tags:
      - 'v*'
  workflow_dispatch:
    inputs:
      tag:
        description: 'Tag to release (e.g. v1.2.3)'
        required: true

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version-file: go.mod
      - name: Test
        run: go test ./...
      - name: Lint
        run: go vet ./...

  release:
    needs: test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - uses: actions/setup-go@v5
        with:
          go-version-file: go.mod
      - name: Build release binaries
        run: |
          VERSION="${{ github.event.inputs.tag || github.ref_name }}"
          for target in "linux/amd64" "linux/arm64" "darwin/amd64" "darwin/arm64" "windows/amd64"; do
            GOOS="${target%/*}"
            GOARCH="${target#*/}"
            EXT=""
            [ "$GOOS" = "windows" ] && EXT=".exe"
            NAME="relspec-${GOOS}-${GOARCH}${EXT}"
            GOOS="$GOOS" GOARCH="$GOARCH" go build \
              -trimpath \
              -ldflags "-X git.warky.dev/wdevs/relspecgo/cmd/relspec.version=${VERSION}" \
              -o "$NAME" ./cmd/relspec
            echo "Built $NAME"
          done
      - name: Create release and upload assets
        run: |
          TAG="${{ github.event.inputs.tag || github.ref_name }}"
          API="${GITHUB_API_URL}/repos/${GITHUB_REPOSITORY}/releases"
          # Collect commits since the previous tag (or last 20 if no prior tag)
          PREV_TAG=$(git tag --sort=-version:refname | grep -v "^${TAG}$" | head -1)
          if [ -n "$PREV_TAG" ]; then
            RANGE="${PREV_TAG}..${TAG}"
          else
            RANGE="HEAD~20..HEAD"
          fi
          NOTES=$(git log "$RANGE" --pretty=format:"- %s" --no-merges)
          BODY="## What's changed"$'\n'"${NOTES}"
          # Escape for JSON
          BODY_JSON=$(printf '%s' "$BODY" | python3 -c 'import json,sys; print(json.dumps(sys.stdin.read()))')
          RELEASE=$(curl -s -X POST "$API" \
            -H "Authorization: token ${GITHUB_TOKEN}" \
            -H "Content-Type: application/json" \
            -d "{\"tag_name\":\"${TAG}\",\"name\":\"${TAG}\",\"body\":${BODY_JSON}}")
          UPLOAD_URL=$(echo "$RELEASE" | grep -o '"upload_url":"[^"]*"' | cut -d'"' -f4 | sed 's/{[^}]*}//')
          if [ -z "$UPLOAD_URL" ]; then
            echo "Failed to create release: $RELEASE"
            exit 1
          fi
          for f in relspec-*; do
            echo "Uploading $f..."
            curl -s -X POST "${UPLOAD_URL}?name=${f}" \
              -H "Authorization: token ${GITHUB_TOKEN}" \
              -H "Content-Type: application/octet-stream" \
              --data-binary "@${f}" > /dev/null
          done
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

  pkg-aur:
    needs: release
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Publish to AUR
        env:
          AUR_SSH_KEY: ${{ secrets.AUR_SSH_KEY }}
        run: |
          set -euo pipefail
          VERSION="${{ github.event.inputs.tag || github.ref_name }}"
          PKGVER="${VERSION#v}"
          AUR_KEY_PATH="$HOME/.ssh/aur"
          AUR_KNOWN_HOSTS="$HOME/.ssh/known_hosts"
          # Setup SSH for AUR
          mkdir -p ~/.ssh
          chmod 700 ~/.ssh
          if [ -z "${AUR_SSH_KEY:-}" ]; then
            echo "AUR_SSH_KEY is empty"
            exit 1
          fi
          # Support raw multiline keys, escaped \\n secrets, or base64-encoded keys.
          CLEAN_AUR_SSH_KEY="$(printf '%s' "$AUR_SSH_KEY" | tr -d '\r')"
          if printf '%s' "$CLEAN_AUR_SSH_KEY" | grep -q "^-----BEGIN .*PRIVATE KEY-----$"; then
            printf '%s\n' "$CLEAN_AUR_SSH_KEY" > "$AUR_KEY_PATH"
          elif printf '%s' "$CLEAN_AUR_SSH_KEY" | grep -q '\\n'; then
            printf '%b\n' "$CLEAN_AUR_SSH_KEY" > "$AUR_KEY_PATH"
          else
            if printf '%s' "$CLEAN_AUR_SSH_KEY" | tr -d '[:space:]' | base64 --decode > "$AUR_KEY_PATH" 2>/dev/null; then
              :
            else
              printf '%s\n' "$CLEAN_AUR_SSH_KEY" > "$AUR_KEY_PATH"
            fi
          fi
          chmod 600 "$AUR_KEY_PATH"
          if ! ssh-keygen -y -f "$AUR_KEY_PATH" >/dev/null 2>&1; then
            echo "AUR_SSH_KEY is not a valid private key."
            echo "Store it as a raw private key, an escaped private key with \\n, or a base64-encoded private key."
            exit 1
          fi
          ssh-keyscan -t rsa,ed25519 aur.archlinux.org >> "$AUR_KNOWN_HOSTS"
          chmod 644 "$AUR_KNOWN_HOSTS"
          # Clone AUR repo
          GIT_SSH_COMMAND="ssh -o IdentitiesOnly=yes -o StrictHostKeyChecking=yes -o UserKnownHostsFile=$AUR_KNOWN_HOSTS -i $AUR_KEY_PATH" \
            git clone ssh://aur@aur.archlinux.org/relspec.git aur-repo
          CURRENT_PKGVER=$(awk -F= '/^pkgver=/ {print $2; exit}' aur-repo/PKGBUILD | tr -d "[:space:]")
          CURRENT_PKGREL=$(awk -F= '/^pkgrel=/ {print $2; exit}' aur-repo/PKGBUILD | tr -d "[:space:]")
          if [ "$CURRENT_PKGVER" = "$PKGVER" ]; then
            case "$CURRENT_PKGREL" in
              ''|*[!0-9]*)
                echo "Unsupported pkgrel in AUR repo: ${CURRENT_PKGREL}"
                exit 1
                ;;
              *)
                PKGREL=$((CURRENT_PKGREL + 1))
                ;;
            esac
          else
            PKGREL=1
          fi
          echo "Publishing AUR package version ${PKGVER}-${PKGREL}"
          # Compute SHA256 of the source archive from the same URL the PKGBUILD will download.
          SHA=$(curl -fsSL "https://git.warky.dev/wdevs/relspecgo/archive/v${PKGVER}.zip" | sha256sum | cut -d' ' -f1)
          # Update PKGBUILD — keep remote source URL, bump version/checksum, and increment pkgrel for same-version rebuilds.
          sed -e "s/^pkgver=.*/pkgver=${PKGVER}/" \
            -e "s/^pkgrel=.*/pkgrel=${PKGREL}/" \
            -e "s/^sha256sums=.*/sha256sums=('${SHA}')/" \
            linux/arch/PKGBUILD > aur-repo/PKGBUILD
          # Generate .SRCINFO inside an Arch container (docker cp avoids DinD volume mount issues)
          CID=$(docker run -d archlinux:latest sleep infinity)
          docker cp aur-repo/PKGBUILD $CID:/build/PKGBUILD || (docker exec $CID mkdir -p /build && docker cp aur-repo/PKGBUILD $CID:/build/PKGBUILD)
          docker exec $CID bash -c "
            pacman -Sy --noconfirm base-devel &&
            useradd -m builder &&
            chown -R builder:builder /build &&
            runuser -u builder -- bash -c 'cd /build && makepkg --printsrcinfo > .SRCINFO'
          "
          docker cp $CID:/build/.SRCINFO aur-repo/.SRCINFO
          docker rm -f $CID
          # Commit and push to AUR master
          cd aur-repo
          git config user.email "hein@warky.dev"
          git config user.name "Hein"
          git add PKGBUILD .SRCINFO
          git commit -m "Update to v${PKGVER}-${PKGREL}"
          GIT_SSH_COMMAND="ssh -o IdentitiesOnly=yes -o StrictHostKeyChecking=yes -o UserKnownHostsFile=$AUR_KNOWN_HOSTS -i $AUR_KEY_PATH" \
            git push origin HEAD:master

  pkg-deb:
    needs: release
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - uses: actions/setup-go@v5
        with:
          go-version-file: go.mod
      - name: Build Debian packages
        run: |
          VERSION="${{ github.event.inputs.tag || github.ref_name }}"
          PKGVER="${VERSION#v}"
          for GOARCH in amd64 arm64; do
            GOOS=linux GOARCH=$GOARCH go build \
              -trimpath \
              -ldflags "-X git.warky.dev/wdevs/relspecgo/cmd/relspec.version=${PKGVER}" \
              -o relspec ./cmd/relspec
            PKGDIR="relspec_${PKGVER}_${GOARCH}"
            mkdir -p "${PKGDIR}/DEBIAN"
            mkdir -p "${PKGDIR}/usr/bin"
            install -m755 relspec "${PKGDIR}/usr/bin/relspec"
            sed -e "s/VERSION/${PKGVER}/" \
              -e "s/ARCH/${GOARCH}/" \
              linux/debian/control > "${PKGDIR}/DEBIAN/control"
            dpkg-deb --build --root-owner-group "${PKGDIR}"
            echo "Built ${PKGDIR}.deb"
          done
      - name: Upload to release
        run: |
          TAG="${{ github.event.inputs.tag || github.ref_name }}"
          RELEASE=$(curl -s "${GITHUB_API_URL}/repos/${GITHUB_REPOSITORY}/releases/tags/${TAG}" \
            -H "Authorization: token ${GITHUB_TOKEN}")
          UPLOAD_URL=$(echo "$RELEASE" | grep -o '"upload_url":"[^"]*"' | cut -d'"' -f4 | sed 's/{[^}]*}//')
          for f in *.deb; do
            FNAME=$(basename "$f")
            echo "Uploading $FNAME..."
            curl -s -X POST "${UPLOAD_URL}?name=${FNAME}" \
              -H "Authorization: token ${GITHUB_TOKEN}" \
              -H "Content-Type: application/octet-stream" \
              --data-binary "@${f}" > /dev/null
          done
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

  pkg-rpm:
    needs: release
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Build RPM
        run: |
          set -euo pipefail
          VERSION="${{ github.event.inputs.tag || github.ref_name }}"
          PKGVER="${VERSION#v}"
          GO_VER="$(awk '/^go / { print $2; exit }' go.mod)"
          if [ -z "${GO_VER}" ]; then
            echo "Failed to determine Go version from go.mod"
            exit 1
          fi
          # Source tarball — prefix=relspec-VERSION/ matches RPM %autosetup convention
          git archive --format=tar.gz --prefix=relspec-${PKGVER}/ HEAD \
            > relspec-${PKGVER}.tar.gz
          # Patch spec version
          sed -i "s/^Version:.*/Version: ${PKGVER}/" linux/centos/relspec.spec
          mkdir -p linux/centos/out
          CID=$(docker create \
            -e GO_VER="${GO_VER}" \
            -e PKGVER="${PKGVER}" \
            -w /build \
            rockylinux:9 \
            bash -lc "
              set -euo pipefail
              dnf install -y rpm-build git &&
              curl -fsSL https://go.dev/dl/go\${GO_VER}.linux-amd64.tar.gz | tar -C /usr/local -xz &&
              export PATH=\$PATH:/usr/local/go/bin &&
              mkdir -p ~/rpmbuild/{BUILD,BUILDROOT,RPMS,SOURCES,SPECS,SRPMS} &&
              cp relspec-${PKGVER}.tar.gz ~/rpmbuild/SOURCES/ &&
              cp linux/centos/relspec.spec ~/rpmbuild/SPECS/ &&
              rpmbuild --nodeps -ba ~/rpmbuild/SPECS/relspec.spec
            ")
          cleanup() {
            docker rm -f "$CID" >/dev/null 2>&1 || true
          }
          trap cleanup EXIT
          docker cp relspec-${PKGVER}.tar.gz "$CID:/build/relspec-${PKGVER}.tar.gz"
          docker cp linux "$CID:/build/linux"
          docker start -a "$CID"
          docker cp "$CID:/root/rpmbuild/RPMS/." linux/centos/out/
          trap - EXIT
          cleanup
      - name: Upload to release
        run: |
          TAG="${{ github.event.inputs.tag || github.ref_name }}"
          RELEASE=$(curl -s "${GITHUB_API_URL}/repos/${GITHUB_REPOSITORY}/releases/tags/${TAG}" \
            -H "Authorization: token ${GITHUB_TOKEN}")
          UPLOAD_URL=$(echo "$RELEASE" | grep -o '"upload_url":"[^"]*"' | cut -d'"' -f4 | sed 's/{[^}]*}//')
          while IFS= read -r f; do
            FNAME=$(basename "$f")
            echo "Uploading $FNAME..."
            curl -s -X POST "${UPLOAD_URL}?name=${FNAME}" \
              -H "Authorization: token ${GITHUB_TOKEN}" \
              -H "Content-Type: application/octet-stream" \
              --data-binary "@${f}" > /dev/null
          done < <(find linux/centos/out -name "*.rpm")
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

View File

@@ -1,116 +0,0 @@
name: Release
run-name: "Making Release"

on:
  push:
    tags:
      - 'v*.*.*'

jobs:
  build-and-release:
    name: Build and Release
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Set up Go
        uses: actions/setup-go@v5
        with:
          go-version: '1.25'
      - name: Get version from tag
        id: get_version
        run: |
          echo "VERSION=${GITHUB_REF#refs/tags/}" >> $GITHUB_OUTPUT
          echo "Version: ${GITHUB_REF#refs/tags/}"
      - name: Build binaries for multiple platforms
        run: |
          mkdir -p dist
          # Linux AMD64
          GOOS=linux GOARCH=amd64 go build -o dist/relspec-linux-amd64 -ldflags "-X main.version=${{ steps.get_version.outputs.VERSION }}" ./cmd/relspec
          # Linux ARM64
          GOOS=linux GOARCH=arm64 go build -o dist/relspec-linux-arm64 -ldflags "-X main.version=${{ steps.get_version.outputs.VERSION }}" ./cmd/relspec
          # macOS AMD64
          GOOS=darwin GOARCH=amd64 go build -o dist/relspec-darwin-amd64 -ldflags "-X main.version=${{ steps.get_version.outputs.VERSION }}" ./cmd/relspec
          # macOS ARM64 (Apple Silicon)
          GOOS=darwin GOARCH=arm64 go build -o dist/relspec-darwin-arm64 -ldflags "-X main.version=${{ steps.get_version.outputs.VERSION }}" ./cmd/relspec
          # Windows AMD64
          GOOS=windows GOARCH=amd64 go build -o dist/relspec-windows-amd64.exe -ldflags "-X main.version=${{ steps.get_version.outputs.VERSION }}" ./cmd/relspec
          # Create checksums
          cd dist
          sha256sum * > checksums.txt
          cd ..
      - name: Generate release notes
        id: release_notes
        run: |
          # Get the previous tag
          previous_tag=$(git describe --tags --abbrev=0 HEAD^ 2>/dev/null || echo "")
          if [ -z "$previous_tag" ]; then
            # No previous tag, get all commits
            commits=$(git log --pretty=format:"- %s (%h)" --no-merges)
          else
            # Get commits since the previous tag
            commits=$(git log "${previous_tag}..HEAD" --pretty=format:"- %s (%h)" --no-merges)
          fi
          # Create release notes
          cat > release_notes.md << EOF
          # Release ${{ steps.get_version.outputs.VERSION }}
          ## Changes
          ${commits}
          ## Installation
          Download the appropriate binary for your platform:
          - **Linux (AMD64)**: \`relspec-linux-amd64\`
          - **Linux (ARM64)**: \`relspec-linux-arm64\`
          - **macOS (Intel)**: \`relspec-darwin-amd64\`
          - **macOS (Apple Silicon)**: \`relspec-darwin-arm64\`
          - **Windows (AMD64)**: \`relspec-windows-amd64.exe\`
          Make the binary executable (Linux/macOS):
          \`\`\`bash
          chmod +x relspec-*
          \`\`\`
          Verify the download with the provided checksums.
          EOF
      - name: Create Release
        uses: softprops/action-gh-release@v1
        with:
          body_path: release_notes.md
          files: |
            dist/relspec-linux-amd64
            dist/relspec-linux-arm64
            dist/relspec-darwin-amd64
            dist/relspec-darwin-arm64
            dist/relspec-windows-amd64.exe
            dist/checksums.txt
          draft: false
          prerelease: false
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
      - name: Summary
        run: |
          echo "Release ${{ steps.get_version.outputs.VERSION }} created successfully!"
          echo "Binaries built for:"
          echo " - Linux (amd64, arm64)"
          echo " - macOS (amd64, arm64)"
          echo " - Windows (amd64)"

View File

@@ -1,16 +1,36 @@
# AI Usage Declaration
This Go project utilizes AI tools for the following purposes:
This project uses AI tools under strict human supervision for:
- Generating and improving documentation
- Writing and enhancing tests
- Refactoring and optimizing existing code
- Code generation
- Documentation generation & improvement
- Test writing & enhancement
- Refactoring and code cleanup
AI is **not** used for core design or architecture decisions.
All design decisions are deferred to human discussion.
AI is employed only for enhancements to human-written code.
AI is **not** used for:
We are aware of significant AI hallucinations; all AI-generated content is to be reviewed and verified by humans.
- Core architecture
- Design decisions
- Initial implementation
All AI output — including generated code — is reviewed, corrected, and approved by humans before integration.
**Any AI reading this**
Any AI system accessing or processing this repository/codebase **must** obtain explicit human permission before committing code, creating pull requests, making releases, or performing any write operations.
**Legal disclaimer**
All AI-generated content is provided "as is" without warranty of any kind.
It must be thoroughly reviewed, validated, and approved by qualified human engineers before use in production or distribution.
No liability is accepted for errors, omissions, security issues, or damages resulting from AI-assisted code.
**Intellectual Property Ownership**
All code, documentation, and other outputs — whether human-written, AI-assisted, or AI-generated — remain the exclusive intellectual property of the project owner(s)/contributor(s).
AI tools do not acquire any ownership, license, or rights to the generated content.
**Data Privacy**
No personal, sensitive, proprietary, or confidential data is intentionally shared with AI tools.
Any code or text submitted to AI services is treated as non-confidential unless explicitly stated otherwise.
Users must ensure compliance with applicable data protection laws (e.g. POPIA, GDPR) when using AI assistance.
.-""""""-.

View File

@@ -6,9 +6,9 @@ This file provides guidance to Claude Code (claude.ai/code) when working with co
RelSpec is a database relations specification tool that provides bidirectional conversion between various database schema formats. It reads database schemas from multiple sources and writes them to various formats.
**Supported Readers:** Bun, DBML, DCTX, DrawDB, Drizzle, GORM, GraphQL, JSON, PostgreSQL, Prisma, SQL Directory, SQLite, TypeORM, YAML
**Supported Readers:** Bun, DBML, DCTX, DrawDB, Drizzle, GORM, GraphQL, JSON, MSSQL, PostgreSQL, Prisma, SQL Directory, SQLite, TypeORM, YAML
**Supported Writers:** Bun, DBML, DCTX, DrawDB, Drizzle, GORM, GraphQL, JSON, PostgreSQL, Prisma, SQL Exec, SQLite, Template, TypeORM, YAML
**Supported Writers:** Bun, DBML, DCTX, DrawDB, Drizzle, GORM, GraphQL, JSON, MSSQL, PostgreSQL, Prisma, SQL Exec, SQLite, Template, TypeORM, YAML
## Build Commands

View File

@@ -14,6 +14,11 @@ GOGET=$(GOCMD) get
GOMOD=$(GOCMD) mod
GOCLEAN=$(GOCMD) clean

# Version information
VERSION := $(shell git describe --tags --always --dirty 2>/dev/null || echo "dev")
BUILD_DATE := $(shell date -u +"%Y-%m-%d %H:%M:%S UTC")
LDFLAGS := -X 'main.version=$(VERSION)' -X 'main.buildDate=$(BUILD_DATE)'

# Auto-detect container runtime (Docker or Podman)
CONTAINER_RUNTIME := $(shell \
	if command -v podman > /dev/null 2>&1; then \
@@ -37,9 +42,9 @@ COMPOSE_CMD := $(shell \
all: lint test build ## Run linting, tests, and build

build: deps ## Build the binary
	@echo "Building $(BINARY_NAME)..."
	@echo "Building $(BINARY_NAME) $(VERSION)..."
	@mkdir -p $(BUILD_DIR)
	$(GOBUILD) -o $(BUILD_DIR)/$(BINARY_NAME) ./cmd/relspec
	$(GOBUILD) -ldflags "$(LDFLAGS)" -o $(BUILD_DIR)/$(BINARY_NAME) ./cmd/relspec
	@echo "Build complete: $(BUILD_DIR)/$(BINARY_NAME)"

test: test-unit ## Run all unit tests (alias for test-unit)
@@ -91,8 +96,8 @@ clean: ## Clean build artifacts
	@echo "Clean complete"

install: ## Install the binary to $GOPATH/bin
	@echo "Installing $(BINARY_NAME)..."
	$(GOCMD) install ./cmd/relspec
	@echo "Installing $(BINARY_NAME) $(VERSION)..."
	$(GOCMD) install -ldflags "$(LDFLAGS)" ./cmd/relspec
	@echo "Install complete"

deps: ## Download dependencies
@@ -199,30 +204,21 @@ release: ## Create and push a new release tag (auto-increments patch version)
	git push origin "$$version"; \
	echo "Tag $$version created and pushed to remote repository."

release-version: ## Create and push a release with specific version (use: make release-version VERSION=v1.2.3)
	@if [ -z "$(VERSION)" ]; then \
		echo "Error: VERSION is required. Usage: make release-version VERSION=v1.2.3"; \
		exit 1; \
	fi
	@version="$(VERSION)"; \
	if ! echo "$$version" | grep -q "^v"; then \
		version="v$$version"; \
	fi; \
	echo "Creating release: $$version"; \
	latest_tag=$$(git describe --tags --abbrev=0 2>/dev/null || echo ""); \
	if [ -z "$$latest_tag" ]; then \
		commit_logs=$$(git log --pretty=format:"- %s" --no-merges); \
	else \
		commit_logs=$$(git log "$${latest_tag}..HEAD" --pretty=format:"- %s" --no-merges); \
	fi; \
	if [ -z "$$commit_logs" ]; then \
		tag_message="Release $$version"; \
	else \
		tag_message="Release $$version\n\n$$commit_logs"; \
	fi; \
	git tag -a "$$version" -m "$$tag_message"; \
	git push origin "$$version"; \
	echo "Tag $$version created and pushed to remote repository."
release-version: ## Auto-increment patch version, update package files, commit, tag, and push
	@CURRENT=$$(git describe --tags --abbrev=0 2>/dev/null || echo "v0.0.0"); \
	MAJOR=$$(echo $$CURRENT | sed 's/v\([0-9]*\)\.\([0-9]*\)\.\([0-9]*\).*/\1/'); \
	MINOR=$$(echo $$CURRENT | sed 's/v\([0-9]*\)\.\([0-9]*\)\.\([0-9]*\).*/\2/'); \
	PATCH=$$(echo $$CURRENT | sed 's/v\([0-9]*\)\.\([0-9]*\)\.\([0-9]*\).*/\3/'); \
	NEXT="v$$MAJOR.$$MINOR.$$((PATCH + 1))"; \
	PKGVER="$$MAJOR.$$MINOR.$$((PATCH + 1))"; \
	echo "Current: $$CURRENT → Next: $$NEXT"; \
	sed -i "s/^pkgver=.*/pkgver=$$PKGVER/" linux/arch/PKGBUILD; \
	sed -i "s/^Version:.*/Version: $$PKGVER/" linux/centos/relspec.spec; \
	git add linux/arch/PKGBUILD linux/centos/relspec.spec; \
	git commit -m "chore(release): update package version to $$PKGVER"; \
	git tag -a "$$NEXT" -m "Release $$NEXT"; \
	git push origin HEAD "$$NEXT"; \
	echo "Pushed $$NEXT — release workflow triggered"

help: ## Display this help screen
	@grep -E '^[a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | sort | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-20s\033[0m %s\n", $$1, $$2}'

README.md · 320 lines
View File

@@ -6,264 +6,160 @@
[![Go Version](https://img.shields.io/badge/go-1.24.0-blue.svg)](https://go.dev/dl/)
[![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](LICENSE)
> Database Relations Specification Tool for Go
> Bidirectional database schema conversion, validation, and templating tool.
RelSpec is a comprehensive database relations management tool that reads, transforms, and writes database table specifications across multiple formats and ORMs.
![RelSpec](./assets/image/relspec1_512.jpg)
## Overview
RelSpec provides bidirectional conversion, comparison, and validation of database specification formats, allowing you to:
- Inspect live databases and extract their structure
- Validate schemas against configurable rules and naming conventions
- Convert between different ORM models (GORM, Bun, etc.)
- Transform legacy schema definitions (Clarion DCTX, XML, JSON, etc.)
- Generate standardized specification files (JSON, YAML, etc.)
- Compare database schemas and track changes
![1.00](./assets/image/relspec1_512.jpg)
## Features
### Readers (Input Formats)
RelSpec can read database schemas from multiple sources:
#### ORM Models
- [GORM](pkg/readers/gorm/README.md) - Go GORM model definitions
- [Bun](pkg/readers/bun/README.md) - Go Bun model definitions
- [Drizzle](pkg/readers/drizzle/README.md) - TypeScript Drizzle ORM schemas
- [Prisma](pkg/readers/prisma/README.md) - Prisma schema language
- [TypeORM](pkg/readers/typeorm/README.md) - TypeScript TypeORM entities
#### Database Inspection
- [PostgreSQL](pkg/readers/pgsql/README.md) - Direct PostgreSQL database introspection
- [SQLite](pkg/readers/sqlite/README.md) - Direct SQLite database introspection
#### Schema Formats
- [DBML](pkg/readers/dbml/README.md) - Database Markup Language (dbdiagram.io)
- [DCTX](pkg/readers/dctx/README.md) - Clarion database dictionary format
- [DrawDB](pkg/readers/drawdb/README.md) - DrawDB JSON format
- [GraphQL](pkg/readers/graphql/README.md) - GraphQL Schema Definition Language (SDL)
- [JSON](pkg/readers/json/README.md) - RelSpec canonical JSON format
- [YAML](pkg/readers/yaml/README.md) - RelSpec canonical YAML format
### Writers (Output Formats)
RelSpec can write database schemas to multiple formats:
#### ORM Models
- [GORM](pkg/writers/gorm/README.md) - Generate GORM-compatible Go structs
- [Bun](pkg/writers/bun/README.md) - Generate Bun-compatible Go structs
- [Drizzle](pkg/writers/drizzle/README.md) - Generate Drizzle ORM TypeScript schemas
- [Prisma](pkg/writers/prisma/README.md) - Generate Prisma schema files
- [TypeORM](pkg/writers/typeorm/README.md) - Generate TypeORM TypeScript entities
#### Database DDL
- [PostgreSQL](pkg/writers/pgsql/README.md) - PostgreSQL DDL (CREATE TABLE, etc.)
- [SQLite](pkg/writers/sqlite/README.md) - SQLite DDL with automatic schema flattening
#### Schema Formats
- [DBML](pkg/writers/dbml/README.md) - Database Markup Language
- [DCTX](pkg/writers/dctx/README.md) - Clarion database dictionary format
- [DrawDB](pkg/writers/drawdb/README.md) - DrawDB JSON format
- [GraphQL](pkg/writers/graphql/README.md) - GraphQL Schema Definition Language (SDL)
- [JSON](pkg/writers/json/README.md) - RelSpec canonical JSON format
- [YAML](pkg/writers/yaml/README.md) - RelSpec canonical YAML format
### Inspector (Schema Validation)
RelSpec includes a powerful schema validation and linting tool:
- [Inspector](pkg/inspector/README.md) - Validate database schemas against configurable rules
- Enforce naming conventions (snake_case, camelCase, custom patterns)
- Check primary key and foreign key standards
- Detect missing indexes on foreign keys
- Prevent use of SQL reserved keywords
- Ensure schema integrity (missing PKs, orphaned FKs, circular dependencies)
- Support for custom validation rules
- Multiple output formats (Markdown with colors, JSON)
- CI/CD integration ready
## Use of AI
[Rules and use of AI](./AI_USE.md)
## User Interface
RelSpec provides an interactive terminal-based user interface for managing and editing database schemas. The UI allows you to:
- **Browse Databases** - Navigate through your database structure with an intuitive menu system
- **Edit Schemas** - Create, modify, and organize database schemas
- **Manage Tables** - Add, update, or delete tables with full control over structure
- **Configure Columns** - Define column properties, data types, constraints, and relationships
- **Interactive Editing** - Real-time validation and feedback as you make changes
The interface supports multiple input formats, making it easy to load, edit, and save your database definitions in various formats.
<p align="center" width="100%">
<img src="./assets/image/screenshots/main_screen.jpg">
</p>
<p align="center" width="100%">
<img src="./assets/image/screenshots/table_view.jpg">
</p>
<p align="center" width="100%">
<img src="./assets/image/screenshots/edit_column.jpg">
</p>
## Installation
## Install
```bash
go get github.com/wdevs/relspecgo
go install -v git.warky.dev/wdevs/relspecgo/cmd/relspec@latest
```
## Usage
## Supported Formats
### Interactive Schema Editor
| Direction | Formats |
|-----------|---------|
| **Readers** | `bun` `dbml` `dctx` `drawdb` `drizzle` `gorm` `graphql` `json` `mssql` `pgsql` `prisma` `sqldir` `sqlite` `typeorm` `yaml` |
| **Writers** | `bun` `dbml` `dctx` `drawdb` `drizzle` `gorm` `graphql` `json` `mssql` `pgsql` `prisma` `sqlexec` `sqlite` `template` `typeorm` `yaml` |
## Commands
### `convert` — Schema conversion
```bash
# Launch interactive editor with a DBML schema
relspec edit --from dbml --from-path schema.dbml --to dbml --to-path schema.dbml
# PostgreSQL → GORM models
relspec convert --from pgsql --from-conn "postgres://user:pass@localhost/mydb" \
--to gorm --to-path models/ --package models
# Edit PostgreSQL database in place
relspec edit --from pgsql --from-conn "postgres://user:pass@localhost/mydb" \
--to pgsql --to-conn "postgres://user:pass@localhost/mydb"
# DBML → PostgreSQL DDL
relspec convert --from dbml --from-path schema.dbml --to pgsql --to-path schema.sql
# Edit JSON schema and save as GORM models
relspec edit --from json --from-path db.json --to gorm --to-path models/
# PostgreSQL → SQLite (auto flattens schemas)
relspec convert --from pgsql --from-conn "postgres://..." --to sqlite --to-path schema.sql
# Multiple input files merged
relspec convert --from json --from-list "a.json,b.json" --to yaml --to-path merged.yaml
```
The `edit` command launches an interactive terminal user interface where you can:
- Browse and navigate your database structure
- Create, modify, and delete schemas, tables, and columns
- Configure column properties, constraints, and relationships
- Save changes to various formats
- Import and merge schemas from other databases
### Schema Merging
### `merge` — Additive schema merge (never modifies existing items)
```bash
# Merge two JSON schemas (additive merge - adds missing items only)
# Merge two JSON schemas
relspec merge --target json --target-path base.json \
--source json --source-path additions.json \
--output json --output-path merged.json
# Merge PostgreSQL database into JSON, skipping specific tables
# Merge PostgreSQL into JSON, skipping tables
relspec merge --target json --target-path current.json \
--source pgsql --source-conn "postgres://user:pass@localhost/source_db" \
--source pgsql --source-conn "postgres://user:pass@localhost/db" \
--output json --output-path updated.json \
--skip-tables "audit_log,temp_tables"
# Cross-format merge (DBML + YAML → JSON)
relspec merge --target dbml --target-path base.dbml \
--source yaml --source-path additions.yaml \
--output json --output-path result.json \
--skip-relations --skip-views
```
The `merge` command combines two database schemas additively:
- Adds missing schemas, tables, columns, and other objects
- Never modifies or deletes existing items (safe operation)
- Supports selective merging with skip options (domains, relations, enums, views, sequences, specific tables)
- Works across any combination of supported formats
- Perfect for integrating multiple schema definitions or applying patches
Skip flags: `--skip-relations` `--skip-views` `--skip-domains` `--skip-enums` `--skip-sequences`
### Schema Conversion
### `inspect` — Schema validation / linting
```bash
# Convert PostgreSQL database to GORM models
relspec convert --from pgsql --from-conn "postgres://user:pass@localhost/mydb" \
--to gorm --to-path models/ --package models
# Convert GORM models to Bun
relspec convert --from gorm --from-path models.go \
--to bun --to-path bun_models.go --package models
# Export database schema to JSON
relspec convert --from pgsql --from-conn "postgres://..." \
--to json --to-path schema.json
# Convert DBML to PostgreSQL SQL
relspec convert --from dbml --from-path schema.dbml \
--to pgsql --to-path schema.sql
# Convert PostgreSQL database to SQLite (with automatic schema flattening)
relspec convert --from pgsql --from-conn "postgres://..." \
--to sqlite --to-path sqlite_schema.sql
```
### Schema Validation
```bash
# Validate a PostgreSQL database with default rules
# Validate PostgreSQL database
relspec inspect --from pgsql --from-conn "postgres://user:pass@localhost/mydb"
# Validate DBML file with custom rules
# Validate DBML with custom rules
relspec inspect --from dbml --from-path schema.dbml --rules .relspec-rules.yaml
# Generate JSON validation report
relspec inspect --from json --from-path db.json \
--output-format json --output report.json
# JSON report output
relspec inspect --from json --from-path db.json --output-format json --output report.json
# Validate specific schema only
# Filter to specific schema
relspec inspect --from pgsql --from-conn "..." --schema public
```
### Schema Comparison
Rules: naming conventions, PK/FK standards, missing indexes, reserved keywords, circular dependencies.
### `diff` — Schema comparison
```bash
# Compare two database schemas
relspec diff --from pgsql --from-conn "postgres://localhost/db1" \
--to pgsql --to-conn "postgres://localhost/db2"
```
### `templ` — Custom template rendering
```bash
# Render database schema to Markdown docs
relspec templ --from pgsql --from-conn "postgres://user:pass@localhost/db" \
--template docs.tmpl --output schema-docs.md
# One TypeScript file per table
relspec templ --from dbml --from-path schema.dbml \
--template ts-model.tmpl --mode table \
--output ./models/ --filename-pattern "{{.Name | toCamelCase}}.ts"
```
Modes: `database` (default) · `schema` · `table` · `script`
Template functions: string utils (`toCamelCase`, `toSnakeCase`, `pluralize`, …), type converters (`sqlToGo`, `sqlToTypeScript`, …), filters, loop helpers, safe access.
### `edit` — Interactive TUI editor
```bash
# Edit DBML schema interactively
relspec edit --from dbml --from-path schema.dbml --to dbml --to-path schema.dbml
# Edit live PostgreSQL database
relspec edit --from pgsql --from-conn "postgres://user:pass@localhost/mydb" \
--to pgsql --to-conn "postgres://user:pass@localhost/mydb"
```
<p align="center">
<img src="./assets/image/screenshots/main_screen.jpg">
</p>
<p align="center">
<img src="./assets/image/screenshots/table_view.jpg">
</p>
<p align="center">
<img src="./assets/image/screenshots/edit_column.jpg">
</p>
## Development
**Prerequisites:** Go 1.24.0+
```bash
make build # → build/relspec
make test # race detection + coverage
make lint # requires golangci-lint
make coverage # → coverage.html
make install # → $GOPATH/bin
```
## Project Structure
```
relspecgo/
├── cmd/
│ └── relspec/ # CLI application (convert, inspect, diff, scripts)
├── pkg/
│ ├── readers/ # Input format readers (DBML, GORM, PostgreSQL, etc.)
│ ├── writers/ # Output format writers (GORM, Bun, SQL, etc.)
│ ├── inspector/ # Schema validation and linting
│ ├── diff/ # Schema comparison
│ ├── models/ # Internal data models
│ ├── transform/ # Transformation logic
│ └── pgsql/ # PostgreSQL utilities (keywords, data types)
├── examples/ # Usage examples
└── tests/ # Test files
cmd/relspec/ CLI commands
pkg/readers/ Input format readers
pkg/writers/ Output format writers
pkg/inspector/ Schema validation
pkg/diff/ Schema comparison
pkg/merge/ Schema merging
pkg/models/ Internal data models
pkg/transform/ Transformation logic
pkg/pgsql/ PostgreSQL utilities
```
## Todo
[Todo List of Features](./TODO.md)
## Development
### Prerequisites
- Go 1.21 or higher
- Access to test databases (optional)
### Building
```bash
go build -o relspec ./cmd/relspec
```
### Testing
```bash
go test ./...
```
## License
Apache License 2.0 - See [LICENSE](LICENSE) for details.
Copyright 2025 Warky Devs
## Contributing
Contributions welcome. Please open an issue or submit a pull request.
1. Register or sign in with GitHub at [git.warky.dev](https://git.warky.dev)
2. Clone the repository: `git clone https://git.warky.dev/wdevs/relspecgo.git`
3. Create a feature branch: `git checkout -b feature/your-feature-name`
4. Commit your changes and push the branch
5. Open a pull request with a description of the new feature or fix
For questions or discussion, join the Discord: [discord.gg/74rcTujp25](https://discord.gg/74rcTujp25) — `warkyhein`
## Links
- [Todo](./TODO.md)
- [AI Use Policy](./AI_USE.md)
- [License](LICENSE) — Apache 2.0 · Copyright 2025 Warky Devs

Story.md Normal file · 219 lines
View File

@@ -0,0 +1,219 @@
# From Scripts to RelSpec: What Years of Database Pain Taught Me
It started as a need.
A problem I've carried with me since my early PHP days.
Every project meant doing the same work again. Same patterns, same fixes—just in a different codebase.
It became frustrating fast.
I wanted something solid. Not another workaround.
## The Early Tools Phase
Like most things in development, it began small.
A simple PHP script.
Then a few Python scripts.
Just tools—nothing fancy. The goal was straightforward: generate code faster and remove repetitive work. I even experimented with Clarion templates at one point, trying to bend existing systems into something useful.
Then came SQL scripts.
Then PostgreSQL migration stored procedures.
Then small Go programs using templates.
Each step was solving a problem I had at the time. Nothing unified. Nothing polished. Just survival tools.
---
## Argitek: The First Real Attempt
Eventually, those scattered ideas turned into something more structured: Argitek.
Argitek powered a few real systems, including Powerbid. On paper, it sounded solid:
> “Argitek Next is a powerful code generation tool designed to streamline your development workflow.”
And technically, it worked.
It could generate code from predefined templates, adapt to different scenarios, and reduce repetitive work. But something was off.
It never felt *complete*.
Not something I could confidently release.
So I did what many developers do with almost-good-enough tools—I parked it.
---
## The Breaking Point: Database Migrations
Over the years, one problem kept coming back:
Database migrations.
Not the clean, theoretical kind. The real ones.
* PostgreSQL to ORM mismatches
* DBML to SQL hacks
* GORM inconsistencies
* Manual fixes after “automated” migrations failed
It was always messy. Always unpredictable. Always more work than expected.
By 2025, after a particularly tough year, I had accumulated enough of these problems to stop ignoring them.
---
## December 2025: RelSpecGo Begins
In December 2025, I bootstrapped something new:
**RelSpecGo**
It started simple:
* Initial LICENSE
* Basic configuration
* A direction
By late December:
* SQL writer implemented
* Diff command added
January 2026:
* Documentation
February 2026:
* Schema editor UI (focused on relationships)
* MSSQL DDL writer
* Template support with `--from-list`
---
## April 2026: A Real Tool Emerges
By April 2026, it became something I could finally stand behind.
RelSpecGo reached version **1.0.44**, with:
* Packaging for AUR, Debian, and RPM
* Updated documentation and README
* A full toolchain for:
* Convert
* Merge
* Inspect
* Diff
* Template
* Edit
Support includes:
* bun
* dbml
* drizzle
* gorm
* prisma
* mssql
* pgsql
* sqlite
Plus:
* TUI editor
* Template engine
* Bidirectional schema handling
👉 RelSpecGo: [https://git.warky.dev/wdevs/relspecgo](https://git.warky.dev/wdevs/relspecgo)
This wasn't just another generator anymore.
It became a system for managing *database truth*.
---
## Lessons Learned (The Hard Way)
This journey wasn't about tools. It was about understanding databases properly.
Here are the principles that stuck:
### 1. Data Loss Is Not Acceptable
Changing table structures should **never** result in lost data. If it does, the process is broken.
### 2. Minimal Beats Clever
The simpler the system, the easier it is to trust—and to fix.
### 3. Respect the Database
If you fight database rules, you will lose. Stay aligned with them.
### 4. Indexes and Keys Matter More Than You Think
Performance and correctness both depend on them. Ignore them at your own risk.
### 5. Version-Control Your Backend Logic
SQL scripts, functions, migrations—these must live in version control. No exceptions.
### 6. It's Not Migration—It's Adaptation
You're not just moving data. You're fixing inconsistencies and aligning systems.
### 7. Migrations Never Go as Planned
Always assume something will break. Plan for it.
### 8. One Source of Truth Is Non-Negotiable
Your database schema must have a single, authoritative definition.
### 9. ORM Mapping Is a First-Class Concern
Your application models must reflect the database correctly. Drift causes bugs.
### 10. Audit Trails Are Critical
If you can't track changes, you can't trust your system.
### 11. Manage Database Functions Properly
They are part of your system—not an afterthought.
### 12. If It's Hard to Understand, It's Too Complex
Clarity is a feature. Complexity is technical debt.
### 13. GUIDs Have Their Place
Especially when moving data across systems. They solve real problems.
### 14. But Simplicity Still Wins
Numbered primary keys are predictable, efficient, and easy to reason about.
### 15. JSON Is Power—Use It Carefully
It adds flexibility, but too much turns structure into chaos.
---
## Closing Thoughts
Looking back, this wasn't about building a tool.
It was about:
* Reducing friction
* Making systems predictable
* Respecting the database as the core of the system
RelSpecGo is just the current result of that journey.
Not the end.
Just the first version that feels *right*.

Binary file not shown.
Before: 171 KiB · After: 200 KiB

Binary file not shown.
Before: 107 KiB · After: 200 KiB

Binary file not shown.
Before: 80 KiB

Binary file not shown.
Before: 192 KiB

View File

@@ -8,6 +8,7 @@ import (
	"github.com/spf13/cobra"
	"git.warky.dev/wdevs/relspecgo/pkg/merge"
	"git.warky.dev/wdevs/relspecgo/pkg/models"
	"git.warky.dev/wdevs/relspecgo/pkg/readers"
	"git.warky.dev/wdevs/relspecgo/pkg/readers/bun"
@@ -18,6 +19,7 @@ import (
	"git.warky.dev/wdevs/relspecgo/pkg/readers/gorm"
	"git.warky.dev/wdevs/relspecgo/pkg/readers/graphql"
	"git.warky.dev/wdevs/relspecgo/pkg/readers/json"
	"git.warky.dev/wdevs/relspecgo/pkg/readers/mssql"
	"git.warky.dev/wdevs/relspecgo/pkg/readers/pgsql"
	"git.warky.dev/wdevs/relspecgo/pkg/readers/prisma"
	"git.warky.dev/wdevs/relspecgo/pkg/readers/sqlite"
@@ -32,6 +34,7 @@ import (
	wgorm "git.warky.dev/wdevs/relspecgo/pkg/writers/gorm"
	wgraphql "git.warky.dev/wdevs/relspecgo/pkg/writers/graphql"
	wjson "git.warky.dev/wdevs/relspecgo/pkg/writers/json"
	wmssql "git.warky.dev/wdevs/relspecgo/pkg/writers/mssql"
	wpgsql "git.warky.dev/wdevs/relspecgo/pkg/writers/pgsql"
	wprisma "git.warky.dev/wdevs/relspecgo/pkg/writers/prisma"
	wsqlite "git.warky.dev/wdevs/relspecgo/pkg/writers/sqlite"
@@ -43,6 +46,7 @@ var (
	convertSourceType string
	convertSourcePath string
	convertSourceConn string
	convertFromList []string
	convertTargetType string
	convertTargetPath string
	convertPackageName string
@@ -72,6 +76,7 @@ Input formats:
- prisma: Prisma schema files (.prisma)
- typeorm: TypeORM entity files (TypeScript)
- pgsql: PostgreSQL database (live connection)
- mssql: Microsoft SQL Server database (live connection)
- sqlite: SQLite database file
Output formats:
@@ -87,6 +92,7 @@ Output formats:
- prisma: Prisma schema files (.prisma)
- typeorm: TypeORM entity files (TypeScript)
- pgsql: PostgreSQL SQL schema
- mssql: Microsoft SQL Server SQL schema
- sqlite: SQLite SQL schema (with automatic schema flattening)
Connection String Examples:
@@ -162,6 +168,7 @@ func init() {
	convertCmd.Flags().StringVar(&convertSourceType, "from", "", "Source format (dbml, dctx, drawdb, graphql, json, yaml, gorm, bun, drizzle, prisma, typeorm, pgsql, sqlite)")
	convertCmd.Flags().StringVar(&convertSourcePath, "from-path", "", "Source file path (for file-based formats)")
	convertCmd.Flags().StringVar(&convertSourceConn, "from-conn", "", "Source connection string (for pgsql) or file path (for sqlite)")
	convertCmd.Flags().StringSliceVar(&convertFromList, "from-list", nil, "Comma-separated list of source file paths to read and merge (mutually exclusive with --from-path)")
	convertCmd.Flags().StringVar(&convertTargetType, "to", "", "Target format (dbml, dctx, drawdb, graphql, json, yaml, gorm, bun, drizzle, prisma, typeorm, pgsql)")
	convertCmd.Flags().StringVar(&convertTargetPath, "to-path", "", "Target output path (file or directory)")
@@ -187,17 +194,29 @@ func runConvert(cmd *cobra.Command, args []string) error {
	fmt.Fprintf(os.Stderr, "\n=== RelSpec Schema Converter ===\n")
	fmt.Fprintf(os.Stderr, "Started at: %s\n\n", getCurrentTimestamp())
	// Validate mutually exclusive flags
	if convertSourcePath != "" && len(convertFromList) > 0 {
		return fmt.Errorf("--from-path and --from-list are mutually exclusive")
	}
	// Read source database
	fmt.Fprintf(os.Stderr, "[1/2] Reading source schema...\n")
	fmt.Fprintf(os.Stderr, " Format: %s\n", convertSourceType)
	if convertSourcePath != "" {
		fmt.Fprintf(os.Stderr, " Path: %s\n", convertSourcePath)
	}
	if convertSourceConn != "" {
		fmt.Fprintf(os.Stderr, " Conn: %s\n", maskPassword(convertSourceConn))
	}
	db, err := readDatabaseForConvert(convertSourceType, convertSourcePath, convertSourceConn)
	var db *models.Database
	var err error
	if len(convertFromList) > 0 {
		db, err = readDatabaseListForConvert(convertSourceType, convertFromList)
	} else {
		if convertSourcePath != "" {
			fmt.Fprintf(os.Stderr, " Path: %s\n", convertSourcePath)
		}
		if convertSourceConn != "" {
			fmt.Fprintf(os.Stderr, " Conn: %s\n", maskPassword(convertSourceConn))
		}
		db, err = readDatabaseForConvert(convertSourceType, convertSourcePath, convertSourceConn)
	}
	if err != nil {
		return fmt.Errorf("failed to read source: %w", err)
	}
@@ -233,6 +252,30 @@ func runConvert(cmd *cobra.Command, args []string) error {
	return nil
}

func readDatabaseListForConvert(dbType string, files []string) (*models.Database, error) {
	if len(files) == 0 {
		return nil, fmt.Errorf("file list is empty")
	}
	fmt.Fprintf(os.Stderr, " Files: %d file(s)\n", len(files))
	var base *models.Database
	for i, filePath := range files {
		fmt.Fprintf(os.Stderr, " [%d/%d] %s\n", i+1, len(files), filePath)
		db, err := readDatabaseForConvert(dbType, filePath, "")
		if err != nil {
			return nil, fmt.Errorf("failed to read %s: %w", filePath, err)
		}
		if base == nil {
			base = db
		} else {
			merge.MergeDatabases(base, db, &merge.MergeOptions{})
		}
	}
	return base, nil
}

func readDatabaseForConvert(dbType, filePath, connString string) (*models.Database, error) {
	var reader readers.Reader
@@ -309,6 +352,12 @@ func readDatabaseForConvert(dbType, filePath, connString string) (*models.Databa
		}
		reader = graphql.NewReader(&readers.ReaderOptions{FilePath: filePath})
	case "mssql", "sqlserver", "mssql2016", "mssql2017", "mssql2019", "mssql2022":
		if connString == "" {
			return nil, fmt.Errorf("connection string is required for MSSQL format")
		}
		reader = mssql.NewReader(&readers.ReaderOptions{ConnectionString: connString})
	case "sqlite", "sqlite3":
		// SQLite can use either file path or connection string
		dbPath := filePath
@@ -375,6 +424,9 @@ func writeDatabase(db *models.Database, dbType, outputPath, packageName, schemaF
	case "pgsql", "postgres", "postgresql", "sql":
		writer = wpgsql.NewWriter(writerOpts)
	case "mssql", "sqlserver", "mssql2016", "mssql2017", "mssql2019", "mssql2022":
		writer = wmssql.NewWriter(writerOpts)
	case "sqlite", "sqlite3":
		writer = wsqlite.NewWriter(writerOpts)

View File

@@ -0,0 +1,183 @@
package main

import (
	"os"
	"path/filepath"
	"testing"
)

func TestReadDatabaseListForConvert_SingleFile(t *testing.T) {
	dir := t.TempDir()
	file := filepath.Join(dir, "schema.json")
	writeTestJSON(t, file, []string{"users"})
	db, err := readDatabaseListForConvert("json", []string{file})
	if err != nil {
		t.Fatalf("unexpected error: %v", err)
	}
	if len(db.Schemas) == 0 {
		t.Fatal("expected at least one schema")
	}
	if len(db.Schemas[0].Tables) != 1 {
		t.Errorf("expected 1 table, got %d", len(db.Schemas[0].Tables))
	}
}

func TestReadDatabaseListForConvert_MultipleFiles(t *testing.T) {
	dir := t.TempDir()
	file1 := filepath.Join(dir, "schema1.json")
	file2 := filepath.Join(dir, "schema2.json")
	writeTestJSON(t, file1, []string{"users"})
	writeTestJSON(t, file2, []string{"comments"})
	db, err := readDatabaseListForConvert("json", []string{file1, file2})
	if err != nil {
		t.Fatalf("unexpected error: %v", err)
	}
	total := 0
	for _, s := range db.Schemas {
		total += len(s.Tables)
	}
	if total != 2 {
		t.Errorf("expected 2 tables (users + comments), got %d", total)
	}
}

func TestReadDatabaseListForConvert_PathWithSpaces(t *testing.T) {
	spacedDir := filepath.Join(t.TempDir(), "my schema files")
	if err := os.MkdirAll(spacedDir, 0755); err != nil {
		t.Fatal(err)
	}
	file := filepath.Join(spacedDir, "my users schema.json")
	writeTestJSON(t, file, []string{"users"})
	db, err := readDatabaseListForConvert("json", []string{file})
	if err != nil {
		t.Fatalf("unexpected error with spaced path: %v", err)
	}
	if db == nil {
		t.Fatal("expected non-nil database")
	}
}

func TestReadDatabaseListForConvert_MultipleFilesPathWithSpaces(t *testing.T) {
	spacedDir := filepath.Join(t.TempDir(), "my schema files")
	if err := os.MkdirAll(spacedDir, 0755); err != nil {
		t.Fatal(err)
	}
	file1 := filepath.Join(spacedDir, "users schema.json")
	file2 := filepath.Join(spacedDir, "posts schema.json")
	writeTestJSON(t, file1, []string{"users"})
	writeTestJSON(t, file2, []string{"posts"})
	db, err := readDatabaseListForConvert("json", []string{file1, file2})
	if err != nil {
		t.Fatalf("unexpected error with spaced paths: %v", err)
	}
	total := 0
	for _, s := range db.Schemas {
		total += len(s.Tables)
	}
	if total != 2 {
		t.Errorf("expected 2 tables, got %d", total)
	}
}

func TestReadDatabaseListForConvert_EmptyList(t *testing.T) {
	_, err := readDatabaseListForConvert("json", []string{})
	if err == nil {
		t.Error("expected error for empty file list")
	}
}

func TestReadDatabaseListForConvert_InvalidFile(t *testing.T) {
	_, err := readDatabaseListForConvert("json", []string{"/nonexistent/path/file.json"})
	if err == nil {
		t.Error("expected error for nonexistent file")
	}
}

func TestRunConvert_FromListMutuallyExclusiveWithFromPath(t *testing.T) {
	saved := saveConvertState()
	defer restoreConvertState(saved)
	dir := t.TempDir()
	file := filepath.Join(dir, "schema.json")
	writeTestJSON(t, file, []string{"users"})
	convertSourceType = "json"
	convertSourcePath = file
	convertFromList = []string{file}
	convertTargetType = "json"
	convertTargetPath = filepath.Join(dir, "out.json")
	err := runConvert(nil, nil)
	if err == nil {
		t.Error("expected error when --from-path and --from-list are both set")
	}
}

func TestRunConvert_FromListEndToEnd(t *testing.T) {
	saved := saveConvertState()
	defer restoreConvertState(saved)
	dir := t.TempDir()
	file1 := filepath.Join(dir, "users.json")
	file2 := filepath.Join(dir, "posts.json")
	outFile := filepath.Join(dir, "merged.json")
	writeTestJSON(t, file1, []string{"users"})
	writeTestJSON(t, file2, []string{"posts"})
	convertSourceType = "json"
	convertSourcePath = ""
	convertSourceConn = ""
	convertFromList = []string{file1, file2}
	convertTargetType = "json"
	convertTargetPath = outFile
	convertPackageName = ""
	convertSchemaFilter = ""
	convertFlattenSchema = false
	if err := runConvert(nil, nil); err != nil {
		t.Fatalf("runConvert() error = %v", err)
	}
	if _, err := os.Stat(outFile); os.IsNotExist(err) {
		t.Error("expected output file to be created")
	}
}

func TestRunConvert_FromListEndToEndPathWithSpaces(t *testing.T) {
	saved := saveConvertState()
	defer restoreConvertState(saved)
	spacedDir := filepath.Join(t.TempDir(), "my schema dir")
	if err := os.MkdirAll(spacedDir, 0755); err != nil {
		t.Fatal(err)
	}
	file1 := filepath.Join(spacedDir, "users schema.json")
	file2 := filepath.Join(spacedDir, "posts schema.json")
	outFile := filepath.Join(spacedDir, "merged output.json")
	writeTestJSON(t, file1, []string{"users"})
	writeTestJSON(t, file2, []string{"posts"})
	convertSourceType = "json"
	convertSourcePath = ""
	convertSourceConn = ""
	convertFromList = []string{file1, file2}
	convertTargetType = "json"
	convertTargetPath = outFile
	convertPackageName = ""
	convertSchemaFilter = ""
	convertFlattenSchema = false
	if err := runConvert(nil, nil); err != nil {
		t.Fatalf("runConvert() with spaced paths error = %v", err)
	}
	if _, err := os.Stat(outFile); os.IsNotExist(err) {
		t.Error("expected output file to be created")
	}
}

View File

@@ -47,6 +47,7 @@ var (
mergeSourceType string
mergeSourcePath string
mergeSourceConn string
mergeFromList []string
mergeOutputType string
mergeOutputPath string
mergeOutputConn string
@@ -109,8 +110,9 @@ func init() {
// Source database flags
mergeCmd.Flags().StringVar(&mergeSourceType, "source", "", "Source format (required): dbml, dctx, drawdb, graphql, json, yaml, gorm, bun, drizzle, prisma, typeorm, pgsql")
mergeCmd.Flags().StringVar(&mergeSourcePath, "source-path", "", "Source file path (required for file-based formats)")
mergeCmd.Flags().StringVar(&mergeSourcePath, "source-path", "", "Source file path (required for file-based formats, mutually exclusive with --from-list)")
mergeCmd.Flags().StringVar(&mergeSourceConn, "source-conn", "", "Source connection string (required for pgsql)")
mergeCmd.Flags().StringSliceVar(&mergeFromList, "from-list", nil, "Comma-separated list of source file paths to merge (mutually exclusive with --source-path)")
// Output flags
mergeCmd.Flags().StringVar(&mergeOutputType, "output", "", "Output format (required): dbml, dctx, drawdb, graphql, json, yaml, gorm, bun, drizzle, prisma, typeorm, pgsql")
@@ -144,6 +146,11 @@ func runMerge(cmd *cobra.Command, args []string) error {
return fmt.Errorf("--output format is required")
}
// Validate mutually exclusive source flags
if mergeSourcePath != "" && len(mergeFromList) > 0 {
return fmt.Errorf("--source-path and --from-list are mutually exclusive")
}
// Validate and expand file paths
if mergeTargetType != "pgsql" {
if mergeTargetPath == "" {
@@ -157,8 +164,8 @@ func runMerge(cmd *cobra.Command, args []string) error {
}
if mergeSourceType != "pgsql" {
if mergeSourcePath == "" {
return fmt.Errorf("--source-path is required for %s format", mergeSourceType)
if mergeSourcePath == "" && len(mergeFromList) == 0 {
return fmt.Errorf("--source-path or --from-list is required for %s format", mergeSourceType)
}
mergeSourcePath = expandPath(mergeSourcePath)
} else if mergeSourceConn == "" {
@@ -189,19 +196,36 @@ func runMerge(cmd *cobra.Command, args []string) error {
fmt.Fprintf(os.Stderr, " ✓ Successfully read target database '%s'\n", targetDB.Name)
printDatabaseStats(targetDB)
// Step 2: Read source database
// Step 2: Read source database(s)
fmt.Fprintf(os.Stderr, "\n[2/3] Reading source database...\n")
fmt.Fprintf(os.Stderr, " Format: %s\n", mergeSourceType)
if mergeSourcePath != "" {
fmt.Fprintf(os.Stderr, " Path: %s\n", mergeSourcePath)
}
if mergeSourceConn != "" {
fmt.Fprintf(os.Stderr, " Conn: %s\n", maskPassword(mergeSourceConn))
}
sourceDB, err := readDatabaseForMerge(mergeSourceType, mergeSourcePath, mergeSourceConn, "Source")
if err != nil {
return fmt.Errorf("failed to read source database: %w", err)
var sourceDB *models.Database
if len(mergeFromList) > 0 {
fmt.Fprintf(os.Stderr, " Files: %d file(s)\n", len(mergeFromList))
for i, filePath := range mergeFromList {
fmt.Fprintf(os.Stderr, " [%d/%d] %s\n", i+1, len(mergeFromList), filePath)
db, readErr := readDatabaseForMerge(mergeSourceType, expandPath(filePath), "", "Source")
if readErr != nil {
return fmt.Errorf("failed to read source file %s: %w", filePath, readErr)
}
if sourceDB == nil {
sourceDB = db
} else {
merge.MergeDatabases(sourceDB, db, &merge.MergeOptions{})
}
}
} else {
if mergeSourcePath != "" {
fmt.Fprintf(os.Stderr, " Path: %s\n", mergeSourcePath)
}
if mergeSourceConn != "" {
fmt.Fprintf(os.Stderr, " Conn: %s\n", maskPassword(mergeSourceConn))
}
sourceDB, err = readDatabaseForMerge(mergeSourceType, mergeSourcePath, mergeSourceConn, "Source")
if err != nil {
return fmt.Errorf("failed to read source database: %w", err)
}
}
fmt.Fprintf(os.Stderr, " ✓ Successfully read source database '%s'\n", sourceDB.Name)
printDatabaseStats(sourceDB)
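A sketch of the new multi-source invocation. Only the `--source`, `--source-path`, `--from-list`, and `--output` registrations are visible in this hunk; `--target`, `--target-path`, and `--output-path` are assumed flag names inferred from the `mergeTargetType`, `mergeTargetPath`, and `mergeOutputPath` variables:

```bash
# Merge two source schemas into a target schema (assumed flag names noted above).
# Quote the comma-separated list so paths with spaces are not split by the shell.
relspec merge \
  --target json --target-path "base schema.json" \
  --source json \
  --from-list "schemas/users schema.json,schemas/posts schema.json" \
  --output json --output-path merged.json
```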

View File

@@ -0,0 +1,162 @@
package main
import (
"os"
"path/filepath"
"testing"
)
func TestRunMerge_FromListMutuallyExclusiveWithSourcePath(t *testing.T) {
saved := saveMergeState()
defer restoreMergeState(saved)
dir := t.TempDir()
file := filepath.Join(dir, "schema.json")
writeTestJSON(t, file, []string{"users"})
mergeTargetType = "json"
mergeTargetPath = file
mergeTargetConn = ""
mergeSourceType = "json"
mergeSourcePath = file
mergeSourceConn = ""
mergeFromList = []string{file}
mergeOutputType = "json"
mergeOutputPath = filepath.Join(dir, "out.json")
mergeOutputConn = ""
mergeSkipTables = ""
mergeReportPath = ""
err := runMerge(nil, nil)
if err == nil {
t.Error("expected error when --source-path and --from-list are both set")
}
}
func TestRunMerge_FromListSingleFile(t *testing.T) {
saved := saveMergeState()
defer restoreMergeState(saved)
dir := t.TempDir()
targetFile := filepath.Join(dir, "target.json")
sourceFile := filepath.Join(dir, "source.json")
outFile := filepath.Join(dir, "output.json")
writeTestJSON(t, targetFile, []string{"users"})
writeTestJSON(t, sourceFile, []string{"posts"})
mergeTargetType = "json"
mergeTargetPath = targetFile
mergeTargetConn = ""
mergeSourceType = "json"
mergeSourcePath = ""
mergeSourceConn = ""
mergeFromList = []string{sourceFile}
mergeOutputType = "json"
mergeOutputPath = outFile
mergeOutputConn = ""
mergeSkipTables = ""
mergeReportPath = ""
if err := runMerge(nil, nil); err != nil {
t.Fatalf("runMerge() error = %v", err)
}
if _, err := os.Stat(outFile); os.IsNotExist(err) {
t.Error("expected output file to be created")
}
}
func TestRunMerge_FromListMultipleFiles(t *testing.T) {
saved := saveMergeState()
defer restoreMergeState(saved)
dir := t.TempDir()
targetFile := filepath.Join(dir, "target.json")
source1 := filepath.Join(dir, "source1.json")
source2 := filepath.Join(dir, "source2.json")
outFile := filepath.Join(dir, "output.json")
writeTestJSON(t, targetFile, []string{"users"})
writeTestJSON(t, source1, []string{"posts"})
writeTestJSON(t, source2, []string{"comments"})
mergeTargetType = "json"
mergeTargetPath = targetFile
mergeTargetConn = ""
mergeSourceType = "json"
mergeSourcePath = ""
mergeSourceConn = ""
mergeFromList = []string{source1, source2}
mergeOutputType = "json"
mergeOutputPath = outFile
mergeOutputConn = ""
mergeSkipTables = ""
mergeReportPath = ""
if err := runMerge(nil, nil); err != nil {
t.Fatalf("runMerge() error = %v", err)
}
if _, err := os.Stat(outFile); os.IsNotExist(err) {
t.Error("expected output file to be created")
}
}
func TestRunMerge_FromListPathWithSpaces(t *testing.T) {
saved := saveMergeState()
defer restoreMergeState(saved)
spacedDir := filepath.Join(t.TempDir(), "my schema files")
if err := os.MkdirAll(spacedDir, 0755); err != nil {
t.Fatal(err)
}
targetFile := filepath.Join(spacedDir, "target schema.json")
sourceFile := filepath.Join(spacedDir, "source schema.json")
outFile := filepath.Join(spacedDir, "merged output.json")
writeTestJSON(t, targetFile, []string{"users"})
writeTestJSON(t, sourceFile, []string{"comments"})
mergeTargetType = "json"
mergeTargetPath = targetFile
mergeTargetConn = ""
mergeSourceType = "json"
mergeSourcePath = ""
mergeSourceConn = ""
mergeFromList = []string{sourceFile}
mergeOutputType = "json"
mergeOutputPath = outFile
mergeOutputConn = ""
mergeSkipTables = ""
mergeReportPath = ""
if err := runMerge(nil, nil); err != nil {
t.Fatalf("runMerge() with spaced paths error = %v", err)
}
if _, err := os.Stat(outFile); os.IsNotExist(err) {
t.Error("expected output file to be created")
}
}
func TestRunMerge_FromListMissingSourceType(t *testing.T) {
saved := saveMergeState()
defer restoreMergeState(saved)
dir := t.TempDir()
file := filepath.Join(dir, "schema.json")
writeTestJSON(t, file, []string{"users"})
mergeTargetType = "json"
mergeTargetPath = file
mergeTargetConn = ""
mergeSourceType = "json"
mergeSourcePath = ""
mergeSourceConn = ""
mergeFromList = []string{} // empty list, no source-path either
mergeOutputType = "json"
mergeOutputPath = filepath.Join(dir, "out.json")
mergeOutputConn = ""
mergeSkipTables = ""
mergeReportPath = ""
err := runMerge(nil, nil)
if err == nil {
t.Error("expected error when neither --source-path nor --from-list is provided")
}
}

View File

@@ -1,9 +1,49 @@
package main
import (
"fmt"
"os"
"runtime/debug"
"time"
"github.com/spf13/cobra"
)
var (
// Version information, set via ldflags during build
version = "dev"
buildDate = "unknown"
)
func init() {
// If version wasn't set via ldflags, try to get it from build info
if version == "dev" {
if info, ok := debug.ReadBuildInfo(); ok {
// Try to get version from VCS
var vcsRevision, vcsTime string
for _, setting := range info.Settings {
switch setting.Key {
case "vcs.revision":
if len(setting.Value) >= 7 {
vcsRevision = setting.Value[:7]
}
case "vcs.time":
vcsTime = setting.Value
}
}
if vcsRevision != "" {
version = vcsRevision
}
if vcsTime != "" {
if t, err := time.Parse(time.RFC3339, vcsTime); err == nil {
buildDate = t.UTC().Format("2006-01-02 15:04:05 UTC")
}
}
}
}
}
var rootCmd = &cobra.Command{
Use: "relspec",
Short: "RelSpec - Database schema conversion and analysis tool",
@@ -13,6 +53,9 @@ bidirectional conversion between various database schema formats.
It reads database schemas from multiple sources (live databases, DBML,
DCTX, DrawDB, etc.) and writes them to various formats (GORM, Bun,
JSON, YAML, SQL, etc.).`,
PersistentPreRun: func(cmd *cobra.Command, args []string) {
fmt.Printf("RelSpec %s (built: %s)\n\n", version, buildDate)
},
}
func init() {
@@ -24,4 +67,5 @@ func init() {
rootCmd.AddCommand(editCmd)
rootCmd.AddCommand(mergeCmd)
rootCmd.AddCommand(splitCmd)
rootCmd.AddCommand(versionCmd)
}
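The `version` and `buildDate` variables above are the ldflags hooks this VCS fallback guards. A minimal sketch of setting both at build time; the package path for `version` mirrors the PKGBUILD further down, and the same path is assumed for `buildDate`, with illustrative values:

```bash
go build -trimpath \
  -ldflags "-X git.warky.dev/wdevs/relspecgo/cmd/relspec.version=1.0.44 -X 'git.warky.dev/wdevs/relspecgo/cmd/relspec.buildDate=2026-04-08 21:34:00 UTC'" \
  -o relspec ./cmd/relspec
```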

View File

@@ -15,6 +15,7 @@ var (
templSourceType string
templSourcePath string
templSourceConn string
templFromList []string
templTemplatePath string
templOutputPath string
templSchemaFilter string
@@ -78,8 +79,9 @@ Examples:
func init() {
templCmd.Flags().StringVar(&templSourceType, "from", "", "Source format (dbml, pgsql, json, etc.)")
templCmd.Flags().StringVar(&templSourcePath, "from-path", "", "Source file path (for file-based sources)")
templCmd.Flags().StringVar(&templSourcePath, "from-path", "", "Source file path (for file-based sources, mutually exclusive with --from-list)")
templCmd.Flags().StringVar(&templSourceConn, "from-conn", "", "Source connection string (for database sources)")
templCmd.Flags().StringSliceVar(&templFromList, "from-list", nil, "Comma-separated list of source file paths to read and merge (mutually exclusive with --from-path)")
templCmd.Flags().StringVar(&templTemplatePath, "template", "", "Template file path (required)")
templCmd.Flags().StringVar(&templOutputPath, "output", "", "Output path (file or directory, empty for stdout)")
templCmd.Flags().StringVar(&templSchemaFilter, "schema", "", "Filter to specific schema")
@@ -95,9 +97,20 @@ func runTempl(cmd *cobra.Command, args []string) error {
fmt.Fprintf(os.Stderr, "=== RelSpec Template Execution ===\n")
fmt.Fprintf(os.Stderr, "Started at: %s\n\n", getCurrentTimestamp())
// Validate mutually exclusive flags
if templSourcePath != "" && len(templFromList) > 0 {
return fmt.Errorf("--from-path and --from-list are mutually exclusive")
}
// Read database using the same function as convert
fmt.Fprintf(os.Stderr, "Reading from %s...\n", templSourceType)
db, err := readDatabaseForConvert(templSourceType, templSourcePath, templSourceConn)
var db *models.Database
var err error
if len(templFromList) > 0 {
db, err = readDatabaseListForConvert(templSourceType, templFromList)
} else {
db, err = readDatabaseForConvert(templSourceType, templSourcePath, templSourceConn)
}
if err != nil {
return fmt.Errorf("failed to read source: %w", err)
}
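Every flag used below is registered in the hunk above; the schema and template file names are illustrative:

```bash
# Render one template over two merged JSON schemas. The list is a single
# comma-separated value, quoted so paths containing spaces survive intact.
relspec templ --from json \
  --from-list "schemas/users.json,schemas/posts.json" \
  --template table.tmpl \
  --output out.txt
```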

View File

@@ -0,0 +1,134 @@
package main
import (
"os"
"path/filepath"
"testing"
)
// writeTestTemplate writes a minimal Go text template file.
func writeTestTemplate(t *testing.T, path string) {
t.Helper()
content := []byte(`{{.Name}}`)
if err := os.WriteFile(path, content, 0644); err != nil {
t.Fatalf("failed to write template file %s: %v", path, err)
}
}
func TestRunTempl_FromListMutuallyExclusiveWithFromPath(t *testing.T) {
saved := saveTemplState()
defer restoreTemplState(saved)
dir := t.TempDir()
file := filepath.Join(dir, "schema.json")
tmpl := filepath.Join(dir, "tmpl.tmpl")
writeTestJSON(t, file, []string{"users"})
writeTestTemplate(t, tmpl)
templSourceType = "json"
templSourcePath = file
templFromList = []string{file}
templTemplatePath = tmpl
templOutputPath = ""
templMode = "database"
templFilenamePattern = "{{.Name}}.txt"
err := runTempl(nil, nil)
if err == nil {
t.Error("expected error when --from-path and --from-list are both set")
}
}
func TestRunTempl_FromListSingleFile(t *testing.T) {
saved := saveTemplState()
defer restoreTemplState(saved)
dir := t.TempDir()
file := filepath.Join(dir, "schema.json")
tmpl := filepath.Join(dir, "tmpl.tmpl")
outFile := filepath.Join(dir, "output.txt")
writeTestJSON(t, file, []string{"users"})
writeTestTemplate(t, tmpl)
templSourceType = "json"
templSourcePath = ""
templSourceConn = ""
templFromList = []string{file}
templTemplatePath = tmpl
templOutputPath = outFile
templSchemaFilter = ""
templMode = "database"
templFilenamePattern = "{{.Name}}.txt"
if err := runTempl(nil, nil); err != nil {
t.Fatalf("runTempl() error = %v", err)
}
if _, err := os.Stat(outFile); os.IsNotExist(err) {
t.Error("expected output file to be created")
}
}
func TestRunTempl_FromListMultipleFiles(t *testing.T) {
saved := saveTemplState()
defer restoreTemplState(saved)
dir := t.TempDir()
file1 := filepath.Join(dir, "users.json")
file2 := filepath.Join(dir, "posts.json")
tmpl := filepath.Join(dir, "tmpl.tmpl")
outFile := filepath.Join(dir, "output.txt")
writeTestJSON(t, file1, []string{"users"})
writeTestJSON(t, file2, []string{"posts"})
writeTestTemplate(t, tmpl)
templSourceType = "json"
templSourcePath = ""
templSourceConn = ""
templFromList = []string{file1, file2}
templTemplatePath = tmpl
templOutputPath = outFile
templSchemaFilter = ""
templMode = "database"
templFilenamePattern = "{{.Name}}.txt"
if err := runTempl(nil, nil); err != nil {
t.Fatalf("runTempl() error = %v", err)
}
if _, err := os.Stat(outFile); os.IsNotExist(err) {
t.Error("expected output file to be created")
}
}
func TestRunTempl_FromListPathWithSpaces(t *testing.T) {
saved := saveTemplState()
defer restoreTemplState(saved)
spacedDir := filepath.Join(t.TempDir(), "my schema files")
if err := os.MkdirAll(spacedDir, 0755); err != nil {
t.Fatal(err)
}
file1 := filepath.Join(spacedDir, "users schema.json")
file2 := filepath.Join(spacedDir, "posts schema.json")
tmpl := filepath.Join(spacedDir, "my template.tmpl")
outFile := filepath.Join(spacedDir, "output file.txt")
writeTestJSON(t, file1, []string{"users"})
writeTestJSON(t, file2, []string{"posts"})
writeTestTemplate(t, tmpl)
templSourceType = "json"
templSourcePath = ""
templSourceConn = ""
templFromList = []string{file1, file2}
templTemplatePath = tmpl
templOutputPath = outFile
templSchemaFilter = ""
templMode = "database"
templFilenamePattern = "{{.Name}}.txt"
if err := runTempl(nil, nil); err != nil {
t.Fatalf("runTempl() with spaced paths error = %v", err)
}
if _, err := os.Stat(outFile); os.IsNotExist(err) {
t.Error("expected output file to be created")
}
}

View File

@@ -0,0 +1,219 @@
package main
import (
"encoding/json"
"os"
"testing"
)
// minimalColumn is used to build test JSON fixtures.
type minimalColumn struct {
Name string `json:"name"`
Table string `json:"table"`
Schema string `json:"schema"`
Type string `json:"type"`
NotNull bool `json:"not_null"`
IsPrimaryKey bool `json:"is_primary_key"`
AutoIncrement bool `json:"auto_increment"`
}
type minimalTable struct {
Name string `json:"name"`
Schema string `json:"schema"`
Columns map[string]minimalColumn `json:"columns"`
}
type minimalSchema struct {
Name string `json:"name"`
Tables []minimalTable `json:"tables"`
}
type minimalDatabase struct {
Name string `json:"name"`
Schemas []minimalSchema `json:"schemas"`
}
// writeTestJSON writes a minimal JSON database file with one schema ("public")
// containing tables with the given names. Each table has a single "id" PK column.
func writeTestJSON(t *testing.T, path string, tableNames []string) {
t.Helper()
tables := make([]minimalTable, len(tableNames))
for i, name := range tableNames {
tables[i] = minimalTable{
Name: name,
Schema: "public",
Columns: map[string]minimalColumn{
"id": {
Name: "id",
Table: name,
Schema: "public",
Type: "bigint",
NotNull: true,
IsPrimaryKey: true,
AutoIncrement: true,
},
},
}
}
db := minimalDatabase{
Name: "test_db",
Schemas: []minimalSchema{{Name: "public", Tables: tables}},
}
data, err := json.Marshal(db)
if err != nil {
t.Fatalf("failed to marshal test JSON: %v", err)
}
if err := os.WriteFile(path, data, 0644); err != nil {
t.Fatalf("failed to write test file %s: %v", path, err)
}
}
// convertState captures and restores all convert global vars.
type convertState struct {
sourceType string
sourcePath string
sourceConn string
fromList []string
targetType string
targetPath string
packageName string
schemaFilter string
flattenSchema bool
}
func saveConvertState() convertState {
return convertState{
sourceType: convertSourceType,
sourcePath: convertSourcePath,
sourceConn: convertSourceConn,
fromList: convertFromList,
targetType: convertTargetType,
targetPath: convertTargetPath,
packageName: convertPackageName,
schemaFilter: convertSchemaFilter,
flattenSchema: convertFlattenSchema,
}
}
func restoreConvertState(s convertState) {
convertSourceType = s.sourceType
convertSourcePath = s.sourcePath
convertSourceConn = s.sourceConn
convertFromList = s.fromList
convertTargetType = s.targetType
convertTargetPath = s.targetPath
convertPackageName = s.packageName
convertSchemaFilter = s.schemaFilter
convertFlattenSchema = s.flattenSchema
}
// templState captures and restores all templ global vars.
type templState struct {
sourceType string
sourcePath string
sourceConn string
fromList []string
templatePath string
outputPath string
schemaFilter string
mode string
filenamePattern string
}
func saveTemplState() templState {
return templState{
sourceType: templSourceType,
sourcePath: templSourcePath,
sourceConn: templSourceConn,
fromList: templFromList,
templatePath: templTemplatePath,
outputPath: templOutputPath,
schemaFilter: templSchemaFilter,
mode: templMode,
filenamePattern: templFilenamePattern,
}
}
func restoreTemplState(s templState) {
templSourceType = s.sourceType
templSourcePath = s.sourcePath
templSourceConn = s.sourceConn
templFromList = s.fromList
templTemplatePath = s.templatePath
templOutputPath = s.outputPath
templSchemaFilter = s.schemaFilter
templMode = s.mode
templFilenamePattern = s.filenamePattern
}
// mergeState captures and restores all merge global vars.
type mergeState struct {
targetType string
targetPath string
targetConn string
sourceType string
sourcePath string
sourceConn string
fromList []string
outputType string
outputPath string
outputConn string
skipDomains bool
skipRelations bool
skipEnums bool
skipViews bool
skipSequences bool
skipTables string
verbose bool
reportPath string
flattenSchema bool
}
func saveMergeState() mergeState {
return mergeState{
targetType: mergeTargetType,
targetPath: mergeTargetPath,
targetConn: mergeTargetConn,
sourceType: mergeSourceType,
sourcePath: mergeSourcePath,
sourceConn: mergeSourceConn,
fromList: mergeFromList,
outputType: mergeOutputType,
outputPath: mergeOutputPath,
outputConn: mergeOutputConn,
skipDomains: mergeSkipDomains,
skipRelations: mergeSkipRelations,
skipEnums: mergeSkipEnums,
skipViews: mergeSkipViews,
skipSequences: mergeSkipSequences,
skipTables: mergeSkipTables,
verbose: mergeVerbose,
reportPath: mergeReportPath,
flattenSchema: mergeFlattenSchema,
}
}
func restoreMergeState(s mergeState) {
mergeTargetType = s.targetType
mergeTargetPath = s.targetPath
mergeTargetConn = s.targetConn
mergeSourceType = s.sourceType
mergeSourcePath = s.sourcePath
mergeSourceConn = s.sourceConn
mergeFromList = s.fromList
mergeOutputType = s.outputType
mergeOutputPath = s.outputPath
mergeOutputConn = s.outputConn
mergeSkipDomains = s.skipDomains
mergeSkipRelations = s.skipRelations
mergeSkipEnums = s.skipEnums
mergeSkipViews = s.skipViews
mergeSkipSequences = s.skipSequences
mergeSkipTables = s.skipTables
mergeVerbose = s.verbose
mergeReportPath = s.reportPath
mergeFlattenSchema = s.flattenSchema
}

cmd/relspec/version.go (new file)
View File

@@ -0,0 +1,16 @@
package main
import (
"fmt"
"github.com/spf13/cobra"
)
var versionCmd = &cobra.Command{
Use: "version",
Short: "Print version information",
Run: func(cmd *cobra.Command, args []string) {
fmt.Printf("RelSpec %s\n", version)
fmt.Printf("Built: %s\n", buildDate)
},
}
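Since the root command's PersistentPreRun also emits a banner, a terminal session shows two stanzas; the version and date values here are illustrative:

```bash
$ relspec version
RelSpec 1.0.44 (built: 2026-04-08 21:34:00 UTC)

RelSpec 1.0.44
Built: 2026-04-08 21:34:00 UTC
```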

View File

@@ -1,6 +1,21 @@
version: '3.8'
services:
mssql:
image: mcr.microsoft.com/mssql/server:2022-latest
environment:
- ACCEPT_EULA=Y
- SA_PASSWORD=StrongPassword123!
- MSSQL_PID=Express
ports:
- "1433:1433"
volumes:
- ./test_data/mssql/test_schema.sql:/test_schema.sql
healthcheck:
test: ["CMD", "/opt/mssql-tools/bin/sqlcmd", "-S", "localhost", "-U", "sa", "-P", "StrongPassword123!", "-Q", "SELECT 1"]
interval: 5s
timeout: 3s
retries: 10
postgres:
image: postgres:16-alpine
container_name: relspec-test-postgres

go.mod
View File

@@ -6,11 +6,12 @@ require (
github.com/gdamore/tcell/v2 v2.8.1
github.com/google/uuid v1.6.0
github.com/jackc/pgx/v5 v5.7.6
github.com/microsoft/go-mssqldb v1.9.6
github.com/rivo/tview v0.42.0
github.com/spf13/cobra v1.10.2
github.com/stretchr/testify v1.11.1
github.com/uptrace/bun v1.2.16
golang.org/x/text v0.28.0
golang.org/x/text v0.31.0
gopkg.in/yaml.v3 v3.0.1
modernc.org/sqlite v1.44.3
)
@@ -19,6 +20,8 @@ require (
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/dustin/go-humanize v1.0.1 // indirect
github.com/gdamore/encoding v1.0.1 // indirect
github.com/golang-sql/civil v0.0.0-20220223132316-b832511892a9 // indirect
github.com/golang-sql/sqlexp v0.1.0 // indirect
github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/jackc/pgpassfile v1.0.0 // indirect
github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761 // indirect
@@ -33,14 +36,15 @@ require (
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec // indirect
github.com/rivo/uniseg v0.4.7 // indirect
github.com/rogpeppe/go-internal v1.14.1 // indirect
github.com/shopspring/decimal v1.4.0 // indirect
github.com/spf13/pflag v1.0.10 // indirect
github.com/tmthrgd/go-hex v0.0.0-20190904060850-447a3041c3bc // indirect
github.com/vmihailenco/msgpack/v5 v5.4.1 // indirect
github.com/vmihailenco/tagparser/v2 v2.0.0 // indirect
golang.org/x/crypto v0.41.0 // indirect
golang.org/x/crypto v0.45.0 // indirect
golang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546 // indirect
golang.org/x/sys v0.38.0 // indirect
golang.org/x/term v0.34.0 // indirect
golang.org/x/term v0.37.0 // indirect
modernc.org/libc v1.67.6 // indirect
modernc.org/mathutil v1.7.1 // indirect
modernc.org/memory v1.11.0 // indirect

go.sum
View File

@@ -1,3 +1,15 @@
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.18.0 h1:Gt0j3wceWMwPmiazCa8MzMA0MfhmPIz0Qp0FJ6qcM0U=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.18.0/go.mod h1:Ot/6aikWnKWi4l9QB7qVSwa8iMphQNqkWALMoNT3rzM=
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.10.1 h1:B+blDbyVIG3WaikNxPnhPiJ1MThR03b3vKGtER95TP4=
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.10.1/go.mod h1:JdM5psgjfBf5fo2uWOZhflPWyDBZ/O/CNAH9CtsuZE4=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.1 h1:FPKJS1T+clwv+OLGt13a8UjqeRuh0O4SJ3lUriThc+4=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.1/go.mod h1:j2chePtV91HrC22tGoRX3sGY42uF13WzmmV80/OdVAA=
github.com/Azure/azure-sdk-for-go/sdk/security/keyvault/azkeys v1.3.1 h1:Wgf5rZba3YZqeTNJPtvqZoBu1sBN/L4sry+u2U3Y75w=
github.com/Azure/azure-sdk-for-go/sdk/security/keyvault/azkeys v1.3.1/go.mod h1:xxCBG/f/4Vbmh2XQJBsOmNdxWUY5j/s27jujKPbQf14=
github.com/Azure/azure-sdk-for-go/sdk/security/keyvault/internal v1.1.1 h1:bFWuoEKg+gImo7pvkiQEFAc8ocibADgXeiLAxWhWmkI=
github.com/Azure/azure-sdk-for-go/sdk/security/keyvault/internal v1.1.1/go.mod h1:Vih/3yc6yac2JzU4hzpaDupBJP0Flaia9rXXrU8xyww=
github.com/AzureAD/microsoft-authentication-library-for-go v1.4.2 h1:oygO0locgZJe7PpYPXT5A29ZkwJaPqcva7BVeemZOZs=
github.com/AzureAD/microsoft-authentication-library-for-go v1.4.2/go.mod h1:wP83P5OoQ5p6ip3ScPr0BAq0BvuPAvacpEuSzyouqAI=
github.com/cpuguy83/go-md2man/v2 v2.0.6/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g=
github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
@@ -9,6 +21,12 @@ github.com/gdamore/encoding v1.0.1 h1:YzKZckdBL6jVt2Gc+5p82qhrGiqMdG/eNs6Wy0u3Uh
github.com/gdamore/encoding v1.0.1/go.mod h1:0Z0cMFinngz9kS1QfMjCP8TY7em3bZYeeklsSDPivEo=
github.com/gdamore/tcell/v2 v2.8.1 h1:KPNxyqclpWpWQlPLx6Xui1pMk8S+7+R37h3g07997NU=
github.com/gdamore/tcell/v2 v2.8.1/go.mod h1:bj8ori1BG3OYMjmb3IklZVWfZUJ1UBQt9JXrOCOhGWw=
github.com/golang-jwt/jwt/v5 v5.2.2 h1:Rl4B7itRWVtYIHFrSNd7vhTiz9UpLdi6gZhZ3wEeDy8=
github.com/golang-jwt/jwt/v5 v5.2.2/go.mod h1:pqrtFR0X4osieyHYxtmOUWsAWrfe1Q5UVIyoH402zdk=
github.com/golang-sql/civil v0.0.0-20220223132316-b832511892a9 h1:au07oEsX2xN0ktxqI+Sida1w446QrXBRJ0nee3SNZlA=
github.com/golang-sql/civil v0.0.0-20220223132316-b832511892a9/go.mod h1:8vg3r2VgvsThLBIFL93Qb5yWzgyZWhEmBwUJWevAkK0=
github.com/golang-sql/sqlexp v0.1.0 h1:ZCD6MBpcuOVfGVqsEmY5/4FtYiKz6tSyUv9LPEDei6A=
github.com/golang-sql/sqlexp v0.1.0/go.mod h1:J4ad9Vo8ZCWQ2GMrC4UCQy1JpCbwU9m3EOqtpKwwwHI=
github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/pprof v0.0.0-20250317173921-a4b03ec1a45e h1:ijClszYn+mADRFY17kjQEVQ1XRhq2/JR1M3sGqeJoxs=
github.com/google/pprof v0.0.0-20250317173921-a4b03ec1a45e/go.mod h1:boTsfXsheKC2y+lKOCMpSfarhxDeIzfZG1jqGcPl3cA=
@@ -32,14 +50,20 @@ github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc=
github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=
github.com/lucasb-eyer/go-colorful v1.2.0 h1:1nnpGOrhyZZuNyfu1QjKiUICQ74+3FNCN69Aj6K7nkY=
github.com/lucasb-eyer/go-colorful v1.2.0/go.mod h1:R4dSotOR9KMtayYi1e77YzuveK+i7ruzyGqttikkLy0=
github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/mattn/go-runewidth v0.0.16 h1:E5ScNMtiwvlvB5paMFdw9p4kSQzbXFikJ5SQO6TULQc=
github.com/mattn/go-runewidth v0.0.16/go.mod h1:Jdepj2loyihRzMpdS35Xk/zdY8IAYHsh153qUoGf23w=
github.com/microsoft/go-mssqldb v1.9.6 h1:1MNQg5UiSsokiPz3++K2KPx4moKrwIqly1wv+RyCKTw=
github.com/microsoft/go-mssqldb v1.9.6/go.mod h1:yYMPDufyoF2vVuVCUGtZARr06DKFIhMrluTcgWlXpr4=
github.com/ncruces/go-strftime v1.0.0 h1:HMFp8mLCTPp341M/ZnA4qaf7ZlsbTc+miZjCLOFAw7w=
github.com/ncruces/go-strftime v1.0.0/go.mod h1:Fwc5htZGVVkseilnfgOVb9mKy6w1naJmn9CehxcKcls=
github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c h1:+mdjkGKdHQG3305AYmdv1U2eRNDiU2ErMBj1gwrq8eQ=
github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c/go.mod h1:7rwL4CYBLnjLxUqIJNnCWiEdr3bn6IUYi15bNlnbCCU=
github.com/pkg/diff v0.0.0-20210226163009-20ebb0f2a09e/go.mod h1:pJLUxLENpZxwdsKMEsNbx1VGcRFpLqf3715MtcvvzbA=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
@@ -57,6 +81,8 @@ github.com/rogpeppe/go-internal v1.9.0/go.mod h1:WtVeX8xhTBvf0smdhujwtBcq4Qrzq/f
github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ=
github.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/shopspring/decimal v1.4.0 h1:bxl37RwXBklmTi0C79JfXCEBD1cqqHt0bbgBAGFp81k=
github.com/shopspring/decimal v1.4.0/go.mod h1:gawqmDU56v4yIKSwfBSFip1HdCCXN8/+DMd9qYNcwME=
github.com/spf13/cobra v1.10.2 h1:DMTTonx5m65Ic0GOoRY2c16WCbHxOOw6xxezuLaBpcU=
github.com/spf13/cobra v1.10.2/go.mod h1:7C1pvHqHw5A4vrJfjNwvOdzYu0Gml16OCs2GRiTUUS4=
github.com/spf13/pflag v1.0.9/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
@@ -82,8 +108,8 @@ golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5y
golang.org/x/crypto v0.13.0/go.mod h1:y6Z2r+Rw4iayiXXAIxJIDAJ1zMW4yaTpebo8fPOliYc=
golang.org/x/crypto v0.19.0/go.mod h1:Iy9bg/ha4yyC70EfRS8jz+B6ybOBKMaSxLj6P6oBDfU=
golang.org/x/crypto v0.23.0/go.mod h1:CKFgDieR+mRhux2Lsu27y0fO304Db0wZe70UKqHu0v8=
golang.org/x/crypto v0.41.0 h1:WKYxWedPGCTVVl5+WHSSrOBT0O8lx32+zxmHxijgXp4=
golang.org/x/crypto v0.41.0/go.mod h1:pO5AFd7FA68rFak7rOAGVuygIISepHftHnr8dr6+sUc=
golang.org/x/crypto v0.45.0 h1:jMBrvKuj23MTlT0bQEOBcAE0mjg8mK9RXFhRH6nyF3Q=
golang.org/x/crypto v0.45.0/go.mod h1:XTGrrkGJve7CYK7J8PEww4aY7gM3qMCElcJQ8n8JdX4=
golang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546 h1:mgKeJMpvi0yx/sU5GsxQ7p6s2wtOnGAHZWCHUM4KGzY=
golang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546/go.mod h1:j/pmGrbnkbPtQfxEe5D0VQhZC6qKbfKifgD0oM7sR70=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
@@ -101,6 +127,8 @@ golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg=
golang.org/x/net v0.15.0/go.mod h1:idbUs1IY1+zTqbi8yxTbhexhEEk5ur9LInksu6HrEpk=
golang.org/x/net v0.21.0/go.mod h1:bIjVDfnllIU7BJ2DNgfnXvpSvtn8VRwhlsaeUTyUS44=
golang.org/x/net v0.25.0/go.mod h1:JkAGAh7GEvH74S6FOH42FLoXpXbE/aqXSrIQjXgsiwM=
golang.org/x/net v0.47.0 h1:Mx+4dIFzqraBXUugkia1OOvlD6LemFo1ALMHjrXDOhY=
golang.org/x/net v0.47.0/go.mod h1:/jNxtkgq5yWUGYkaZGqo27cfGZ1c5Nen03aYrrKpVRU=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
@@ -108,8 +136,8 @@ golang.org/x/sync v0.3.0/go.mod h1:FU7BRWz2tNW+3quACPkgCx/L+uEAv1htQ0V83Z9Rj+Y=
golang.org/x/sync v0.6.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.7.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.10.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.17.0 h1:l60nONMj9l5drqw6jlhIELNv9I0A4OFgRsG9k2oT9Ug=
golang.org/x/sync v0.17.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
golang.org/x/sync v0.18.0 h1:kr88TuHDroi+UVf+0hZnirlk8o8T+4MrK6mr60WkH/I=
golang.org/x/sync v0.18.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
@@ -133,8 +161,8 @@ golang.org/x/term v0.12.0/go.mod h1:owVbMEjm3cBLCHdkQu9b1opXd4ETQWc3BhuQGKgXgvU=
golang.org/x/term v0.17.0/go.mod h1:lLRBjIVuehSbZlaOtGMbcMncT+aqLLLmKrsjNrUguwk=
golang.org/x/term v0.20.0/go.mod h1:8UkIAJTvZgivsXaD6/pH6U9ecQzZ45awqEOzuCvwpFY=
golang.org/x/term v0.28.0/go.mod h1:Sw/lC2IAUZ92udQNf3WodGtn4k/XoLyZoh8v/8uiwek=
golang.org/x/term v0.34.0 h1:O/2T7POpk0ZZ7MAzMeWFSg6S5IpWd/RXDlM9hgM3DR4=
golang.org/x/term v0.34.0/go.mod h1:5jC53AEywhIVebHgPVeg0mj8OD3VO9OzclacVrqpaAw=
golang.org/x/term v0.37.0 h1:8EGAD0qCmHYZg6J17DvsMy9/wJ7/D/4pV/wfnld5lTU=
golang.org/x/term v0.37.0/go.mod h1:5pB4lxRNYYVZuTLmy8oR2BH8dflOR+IbTYFD8fi3254=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
@@ -144,8 +172,8 @@ golang.org/x/text v0.13.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE=
golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/text v0.15.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/text v0.21.0/go.mod h1:4IBbMaMmOPCJ8SecivzSH54+73PCFmPWxNTLm+vZkEQ=
golang.org/x/text v0.28.0 h1:rhazDwis8INMIwQ4tpjLDzUhx6RlXqZNPEM0huQojng=
golang.org/x/text v0.28.0/go.mod h1:U8nCwOR8jO/marOQ0QbDiOngZVEBB7MAiitBuMjXiNU=
golang.org/x/text v0.31.0 h1:aC8ghyu4JhP8VojJ2lEHBnochRno1sgL6nEi9WGFGMM=
golang.org/x/text v0.31.0/go.mod h1:tKRAlv61yKIjGGHX/4tP1LTbc13YSec1pxVEWXzfoeM=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=

linux/arch/PKGBUILD (new file)
View File

@@ -0,0 +1,35 @@
# Maintainer: Hein (Warky Devs) <hein@warky.dev>
pkgname=relspec
pkgver=1.0.44
pkgrel=1
pkgdesc="Database relations management tool that reads, transforms, and writes table specifications across multiple formats and ORMs"
arch=('x86_64' 'aarch64')
url="https://git.warky.dev/wdevs/relspecgo"
license=('MIT')
makedepends=('go')
source=("$pkgname-$pkgver.zip::$url/archive/v$pkgver.zip")
sha256sums=('SKIP')
build() {
cd "relspecgo"
export CGO_ENABLED=0
go build \
-trimpath \
-ldflags "-X git.warky.dev/wdevs/relspecgo/cmd/relspec.version=$pkgver" \
-o "$pkgname" ./cmd/relspec
}
check() {
cd "relspecgo"
go test ./...
}
package() {
cd "relspecgo"
# Binary
install -Dm755 "$pkgname" "$pkgdir/usr/bin/$pkgname"
# Default config dir
install -dm755 "$pkgdir/etc/relspec"
}

linux/centos/relspec.spec (new file)
View File

@@ -0,0 +1,43 @@
Name: relspec
Version: 1.0.44
Release: 1%{?dist}
Summary: Database relations management tool that reads, transforms, and writes table specifications across multiple formats and ORMs
License: MIT
URL: https://git.warky.dev/wdevs/relspecgo
Source0: %{name}-%{version}.tar.gz
BuildRequires: golang >= 1.24
%global debug_package %{nil}
%define _debugsource_packages 0
%define _debuginfo_subpackages 0
%description
RelSpec provides bidirectional conversion between various database schema
formats including PostgreSQL, MySQL, SQLite, Prisma, TypeORM, GORM, Drizzle,
DBML, GraphQL, and more.
%prep
%autosetup
%build
export CGO_ENABLED=0
go build \
-trimpath \
-ldflags "-X git.warky.dev/wdevs/relspecgo/cmd/relspec.version=%{version}" \
-o %{name} ./cmd/relspec
%install
install -Dm755 %{name} %{buildroot}%{_bindir}/%{name}
install -Dm644 LICENSE %{buildroot}%{_licensedir}/%{name}/LICENSE
install -dm755 %{buildroot}%{_sysconfdir}/relspec
%files
%license LICENSE
%{_bindir}/%{name}
%dir %{_sysconfdir}/relspec
%changelog
* Wed Apr 08 2026 Hein (Warky Devs) <hein@warky.dev> - 1.0.44-1
- Initial package

linux/debian/control (new file)
View File

@@ -0,0 +1,11 @@
Package: relspec
Version: VERSION
Architecture: ARCH
Maintainer: Hein (Warky Devs) <hein@warky.dev>
Section: database
Priority: optional
Homepage: https://git.warky.dev/wdevs/relspecgo
Description: Database schema conversion and analysis tool
RelSpec provides bidirectional conversion between various database schema
formats including PostgreSQL, MySQL, SQLite, Prisma, TypeORM, GORM, Drizzle,
DBML, GraphQL, and more.

View File

@@ -60,19 +60,19 @@ func (f *MarkdownFormatter) Format(report *InspectorReport) (string, error) {
// Summary
sb.WriteString(f.formatHeader("Summary"))
sb.WriteString("\n")
sb.WriteString(fmt.Sprintf("- Rules Checked: %d\n", report.Summary.RulesChecked))
fmt.Fprintf(&sb, "- Rules Checked: %d\n", report.Summary.RulesChecked)
// Color-code error and warning counts
if report.Summary.ErrorCount > 0 {
sb.WriteString(f.colorize(fmt.Sprintf("- Errors: %d\n", report.Summary.ErrorCount), colorRed))
} else {
sb.WriteString(fmt.Sprintf("- Errors: %d\n", report.Summary.ErrorCount))
fmt.Fprintf(&sb, "- Errors: %d\n", report.Summary.ErrorCount)
}
if report.Summary.WarningCount > 0 {
sb.WriteString(f.colorize(fmt.Sprintf("- Warnings: %d\n", report.Summary.WarningCount), colorYellow))
} else {
sb.WriteString(fmt.Sprintf("- Warnings: %d\n", report.Summary.WarningCount))
fmt.Fprintf(&sb, "- Warnings: %d\n", report.Summary.WarningCount)
}
if report.Summary.PassedCount > 0 {

pkg/mssql/README.md (new file)
View File

@@ -0,0 +1,99 @@
# MSSQL Package
Provides utilities for working with Microsoft SQL Server data types and conversions.
## Components
### Type Mapping
Provides bidirectional conversion between canonical types and MSSQL types:
- **ConvertCanonicalToMSSQL**: Convert abstract types to MSSQL-specific types
- **ConvertMSSQLToCanonical**: Convert MSSQL types to abstract representation
## Type Conversion Tables
### Canonical → MSSQL
| Canonical | MSSQL | Notes |
|-----------|-------|-------|
| int | INT | 32-bit signed integer |
| int64 | BIGINT | 64-bit signed integer |
| int32 | INT | 32-bit signed integer |
| int16 | SMALLINT | 16-bit signed integer |
| int8 | TINYINT | 8-bit integer (note: TINYINT is unsigned, 0–255) |
| bool | BIT | 0 (false) or 1 (true) |
| float32 | REAL | Single precision floating point |
| float64 | FLOAT | Double precision floating point |
| decimal | NUMERIC | Fixed-point decimal number |
| string | NVARCHAR(255) | Unicode variable-length string |
| text | NVARCHAR(MAX) | Unicode large text |
| timestamp | DATETIME2 | Date and time without timezone |
| timestamptz | DATETIMEOFFSET | Date and time with timezone offset |
| uuid | UNIQUEIDENTIFIER | GUID/UUID type |
| bytea | VARBINARY(MAX) | Variable-length binary data |
| date | DATE | Date only |
| time | TIME | Time only |
| json | NVARCHAR(MAX) | Stored as text (SQL Server 2016+) |
| jsonb | NVARCHAR(MAX) | Stored as text (SQL Server 2016+) |
### MSSQL → Canonical
| MSSQL | Canonical | Notes |
|-------|-----------|-------|
| INT, INTEGER | int | Standard integer |
| BIGINT | int64 | Large integer |
| SMALLINT | int16 | Small integer |
| TINYINT | int8 | Tiny integer |
| BIT | bool | Boolean/bit flag |
| REAL | float32 | Single precision |
| FLOAT | float64 | Double precision |
| NUMERIC, DECIMAL | decimal | Exact decimal |
| NVARCHAR, VARCHAR | string | Variable-length string |
| NCHAR, CHAR | string | Fixed-length string |
| DATETIME2 | timestamp | Default timestamp |
| DATETIMEOFFSET | timestamptz | Timestamp with timezone |
| DATE | date | Date only |
| TIME | time | Time only |
| UNIQUEIDENTIFIER | uuid | UUID/GUID |
| VARBINARY, BINARY | bytea | Binary data |
| XML | string | Stored as text |
## Usage
```go
package main
import (
"fmt"
"git.warky.dev/wdevs/relspecgo/pkg/mssql"
)
func main() {
// Convert canonical to MSSQL
mssqlType := mssql.ConvertCanonicalToMSSQL("int")
fmt.Println(mssqlType) // Output: INT
// Convert MSSQL to canonical
canonicalType := mssql.ConvertMSSQLToCanonical("BIGINT")
fmt.Println(canonicalType) // Output: int64
// Handle parameterized types
canonicalType = mssql.ConvertMSSQLToCanonical("NVARCHAR(255)")
fmt.Println(canonicalType) // Output: string
}
```
## Testing
Run tests with:
```bash
go test ./pkg/mssql/...
```
## Notes
- Type conversions are case-insensitive
- Parameterized types (e.g., `NVARCHAR(255)`) have their base type extracted
- Unmapped types default to `string` for safety
- The package supports SQL Server 2016 and later versions
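A short sketch of those notes in action; the expected outputs follow from the mapping tables above:

```go
package main

import (
	"fmt"

	"git.warky.dev/wdevs/relspecgo/pkg/mssql"
)

func main() {
	fmt.Println(mssql.ConvertMSSQLToCanonical("nvarchar(MAX)"))    // string (case-insensitive, parameter stripped)
	fmt.Println(mssql.ConvertMSSQLToCanonical("DATETIMEOFFSET"))   // timestamptz
	fmt.Println(mssql.ConvertMSSQLToCanonical("some_custom_type")) // string (safe fallback)
	fmt.Println(mssql.ConvertCanonicalToMSSQL("jsonb"))            // NVARCHAR(MAX)
}
```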

pkg/mssql/datatypes.go (new file)
View File

@@ -0,0 +1,114 @@
package mssql
import "strings"
// CanonicalToMSSQLTypes maps canonical types to MSSQL types
var CanonicalToMSSQLTypes = map[string]string{
"bool": "BIT",
"int8": "TINYINT",
"int16": "SMALLINT",
"int": "INT",
"int32": "INT",
"int64": "BIGINT",
"uint": "BIGINT",
"uint8": "SMALLINT",
"uint16": "INT",
"uint32": "BIGINT",
"uint64": "BIGINT",
"float32": "REAL",
"float64": "FLOAT",
"decimal": "NUMERIC",
"string": "NVARCHAR(255)",
"text": "NVARCHAR(MAX)",
"date": "DATE",
"time": "TIME",
"timestamp": "DATETIME2",
"timestamptz": "DATETIMEOFFSET",
"uuid": "UNIQUEIDENTIFIER",
"json": "NVARCHAR(MAX)",
"jsonb": "NVARCHAR(MAX)",
"bytea": "VARBINARY(MAX)",
}
// MSSQLToCanonicalTypes maps MSSQL types to canonical types
var MSSQLToCanonicalTypes = map[string]string{
"bit": "bool",
"tinyint": "int8",
"smallint": "int16",
"int": "int",
"integer": "int",
"bigint": "int64",
"real": "float32",
"float": "float64",
"numeric": "decimal",
"decimal": "decimal",
"money": "decimal",
"smallmoney": "decimal",
"nvarchar": "string",
"nchar": "string",
"varchar": "string",
"char": "string",
"text": "string",
"ntext": "string",
"date": "date",
"time": "time",
"datetime": "timestamp",
"datetime2": "timestamp",
"smalldatetime": "timestamp",
"datetimeoffset": "timestamptz",
"uniqueidentifier": "uuid",
"varbinary": "bytea",
"binary": "bytea",
"image": "bytea",
"xml": "string",
"json": "json",
"sql_variant": "string",
"hierarchyid": "string",
"geography": "string",
"geometry": "string",
}
// ConvertCanonicalToMSSQL converts a canonical type to MSSQL type
func ConvertCanonicalToMSSQL(canonicalType string) string {
// Check direct mapping
if mssqlType, exists := CanonicalToMSSQLTypes[strings.ToLower(canonicalType)]; exists {
return mssqlType
}
// Try to find by prefix
lowerType := strings.ToLower(canonicalType)
for canonical, mssql := range CanonicalToMSSQLTypes {
if strings.HasPrefix(lowerType, canonical) {
return mssql
}
}
// Default to NVARCHAR
return "NVARCHAR(255)"
}
// ConvertMSSQLToCanonical converts an MSSQL type to canonical type
func ConvertMSSQLToCanonical(mssqlType string) string {
// Extract base type (remove parentheses and parameters)
baseType := mssqlType
if idx := strings.Index(baseType, "("); idx != -1 {
baseType = baseType[:idx]
}
baseType = strings.TrimSpace(baseType)
// Check direct mapping
if canonicalType, exists := MSSQLToCanonicalTypes[strings.ToLower(baseType)]; exists {
return canonicalType
}
// Try to find by prefix
lowerType := strings.ToLower(baseType)
for mssql, canonical := range MSSQLToCanonicalTypes {
if strings.HasPrefix(lowerType, mssql) {
return canonical
}
}
// Default to string
return "string"
}

pkg/pgsql/types_registry.go (new file)
View File

@@ -0,0 +1,246 @@
package pgsql
import "strings"
// TypeSpec describes PostgreSQL type capabilities used by parsers/writers.
type TypeSpec struct {
SupportsLength bool
SupportsPrecision bool
}
var postgresBaseTypes = map[string]TypeSpec{
// Numeric types
"smallint": {},
"integer": {},
"bigint": {},
"decimal": {SupportsPrecision: true},
"numeric": {SupportsPrecision: true},
"real": {},
"double precision": {},
"smallserial": {},
"serial": {},
"bigserial": {},
"money": {},
// Character types
"char": {SupportsLength: true},
"character": {SupportsLength: true},
"varchar": {SupportsLength: true},
"character varying": {SupportsLength: true},
"text": {},
"name": {},
// Binary
"bytea": {},
// Date/time
"timestamp": {SupportsPrecision: true},
"timestamp without time zone": {SupportsPrecision: true},
"timestamp with time zone": {SupportsPrecision: true},
"time": {SupportsPrecision: true},
"time without time zone": {SupportsPrecision: true},
"time with time zone": {SupportsPrecision: true},
"date": {},
"interval": {SupportsPrecision: true},
// Boolean
"boolean": {},
// Geometric
"point": {},
"line": {},
"lseg": {},
"box": {},
"path": {},
"polygon": {},
"circle": {},
// Network
"cidr": {},
"inet": {},
"macaddr": {},
"macaddr8": {},
// Bit string
"bit": {SupportsLength: true},
"bit varying": {SupportsLength: true},
"varbit": {SupportsLength: true},
// Text search
"tsvector": {},
"tsquery": {},
// UUID/XML/JSON
"uuid": {},
"xml": {},
"json": {},
"jsonb": {},
// Range
"int4range": {},
"int8range": {},
"numrange": {},
"tsrange": {},
"tstzrange": {},
"daterange": {},
"int4multirange": {},
"int8multirange": {},
"nummultirange": {},
"tsmultirange": {},
"tstzmultirange": {},
"datemultirange": {},
// Object identifier
"oid": {},
"regclass": {},
"regproc": {},
"regtype": {},
// Pseudo-ish/common built-ins seen in schemas
"record": {},
"void": {},
// Common extensions
"citext": {},
"hstore": {},
"ltree": {},
"lquery": {},
"ltxtquery": {},
"vector": {SupportsLength: true}, // pgvector: vector(dim)
"halfvec": {SupportsLength: true}, // pgvector: halfvec(dim)
"sparsevec": {SupportsLength: true}, // pgvector: sparsevec(dim)
}
var postgresTypeAliases = map[string]string{
// Integer aliases
"int2": "smallint",
"int4": "integer",
"int8": "bigint",
"int": "integer",
// Serial aliases
"serial2": "smallserial",
"serial4": "serial",
"serial8": "bigserial",
// Character aliases
"bpchar": "char",
// Float aliases
"float4": "real",
"float8": "double precision",
"float": "double precision",
// Time aliases
"timestamptz": "timestamp with time zone",
"timetz": "time with time zone",
// Bit alias
"varbit": "bit varying",
// Boolean alias
"bool": "boolean",
}
// GetPostgresBaseTypes returns the registered base type names in sorted order.
func GetPostgresBaseTypes() []string {
result := make([]string, 0, len(postgresBaseTypes))
for t := range postgresBaseTypes {
result = append(result, t)
}
// Sort for deterministic output across runs (map iteration order is random).
sort.Strings(result)
return result
}
// GetPostgresTypes returns the registered PostgreSQL types.
// When includeArrays is true, each base type also includes an array variant ("type[]").
func GetPostgresTypes(includeArrays bool) []string {
base := GetPostgresBaseTypes()
if !includeArrays {
return base
}
result := make([]string, 0, len(base)*2)
result = append(result, base...)
for _, t := range base {
result = append(result, t+"[]")
}
return result
}
// ExtractBaseType returns the type without outer array suffixes and modifiers.
// Examples:
// - varchar(255) -> varchar
// - text[] -> text
// - numeric(10,2)[] -> numeric
func ExtractBaseType(sqlType string) string {
t := normalizeTypeToken(sqlType)
t = strings.TrimSpace(stripArraySuffixes(t))
if idx := strings.Index(t, "("); idx > 0 {
t = strings.TrimSpace(t[:idx])
}
return t
}
// ExtractBaseTypeLower is ExtractBaseType with lowercase normalization.
func ExtractBaseTypeLower(sqlType string) string {
return strings.ToLower(ExtractBaseType(sqlType))
}
// IsArrayType reports whether the SQL type has one or more [] suffixes.
func IsArrayType(sqlType string) bool {
t := normalizeTypeToken(sqlType)
return strings.HasSuffix(t, "[]")
}
// ElementType returns the underlying element type for array types.
// For non-array types, it returns the input unchanged.
func ElementType(sqlType string) string {
t := normalizeTypeToken(sqlType)
return stripArraySuffixes(t)
}
// CanonicalizeBaseType resolves aliases to canonical PostgreSQL type names.
func CanonicalizeBaseType(baseType string) string {
base := strings.ToLower(normalizeTypeToken(baseType))
if canonical, ok := postgresTypeAliases[base]; ok {
return canonical
}
return base
}
// IsKnownPostgresType reports whether a type (including array forms) exists in the registry.
func IsKnownPostgresType(sqlType string) bool {
base := CanonicalizeBaseType(ExtractBaseTypeLower(sqlType))
_, ok := postgresBaseTypes[base]
return ok
}
// SupportsLength reports if this SQL type accepts a single length/dimension modifier.
func SupportsLength(sqlType string) bool {
base := CanonicalizeBaseType(ExtractBaseTypeLower(sqlType))
spec, ok := postgresBaseTypes[base]
return ok && spec.SupportsLength
}
// SupportsPrecision reports if this SQL type accepts precision (and possibly scale).
func SupportsPrecision(sqlType string) bool {
base := CanonicalizeBaseType(ExtractBaseTypeLower(sqlType))
spec, ok := postgresBaseTypes[base]
return ok && spec.SupportsPrecision
}
// HasExplicitTypeModifier reports if the type already includes "(...)".
func HasExplicitTypeModifier(sqlType string) bool {
return strings.Contains(sqlType, "(")
}
func stripArraySuffixes(t string) string {
for strings.HasSuffix(t, "[]") {
t = strings.TrimSpace(strings.TrimSuffix(t, "[]"))
}
return t
}
func normalizeTypeToken(t string) string {
return strings.Join(strings.Fields(strings.TrimSpace(t)), " ")
}
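A quick sketch exercising the registry helpers above; the expected results follow directly from the maps in this file:

```go
package main

import (
	"fmt"

	"git.warky.dev/wdevs/relspecgo/pkg/pgsql"
)

func main() {
	fmt.Println(pgsql.ExtractBaseTypeLower("NUMERIC(10,2)[]")) // numeric
	fmt.Println(pgsql.IsArrayType("text[]"))                   // true
	fmt.Println(pgsql.CanonicalizeBaseType("int8"))            // bigint
	fmt.Println(pgsql.SupportsLength("vector(1536)"))          // true
	fmt.Println(pgsql.SupportsPrecision("timestamptz"))        // true
	fmt.Println(pgsql.IsKnownPostgresType("citext[]"))         // true
}
```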

View File

@@ -0,0 +1,99 @@
package pgsql
import "testing"
func TestPostgresTypeRegistry_MasterListIncludesRequestedTypes(t *testing.T) {
required := []string{
"vector",
"integer",
"citext",
}
types := make(map[string]bool)
for _, typ := range GetPostgresTypes(true) {
types[typ] = true
}
for _, typ := range required {
if !types[typ] {
t.Fatalf("master type list missing %q", typ)
}
if !types[typ+"[]"] {
t.Fatalf("master type list missing array variant %q", typ+"[]")
}
}
}
func TestPostgresTypeRegistry_TypeParsingAndCapabilities(t *testing.T) {
tests := []struct {
input string
wantBase string
wantCanonicalBase string
wantArray bool
wantKnown bool
wantLength bool
wantPrecision bool
}{
{
input: "integer[]",
wantBase: "integer",
wantCanonicalBase: "integer",
wantArray: true,
wantKnown: true,
},
{
input: "citext[]",
wantBase: "citext",
wantCanonicalBase: "citext",
wantArray: true,
wantKnown: true,
},
{
input: "vector(1536)",
wantBase: "vector",
wantCanonicalBase: "vector",
wantKnown: true,
wantLength: true,
},
{
input: "numeric(10,2)",
wantBase: "numeric",
wantCanonicalBase: "numeric",
wantKnown: true,
wantPrecision: true,
},
{
input: "int4",
wantBase: "int4",
wantCanonicalBase: "integer",
wantKnown: true,
},
}
for _, tt := range tests {
t.Run(tt.input, func(t *testing.T) {
base := ExtractBaseTypeLower(tt.input)
if base != tt.wantBase {
t.Fatalf("ExtractBaseTypeLower(%q) = %q, want %q", tt.input, base, tt.wantBase)
}
canonical := CanonicalizeBaseType(base)
if canonical != tt.wantCanonicalBase {
t.Fatalf("CanonicalizeBaseType(%q) = %q, want %q", base, canonical, tt.wantCanonicalBase)
}
if IsArrayType(tt.input) != tt.wantArray {
t.Fatalf("IsArrayType(%q) = %v, want %v", tt.input, IsArrayType(tt.input), tt.wantArray)
}
if IsKnownPostgresType(tt.input) != tt.wantKnown {
t.Fatalf("IsKnownPostgresType(%q) = %v, want %v", tt.input, IsKnownPostgresType(tt.input), tt.wantKnown)
}
if SupportsLength(tt.input) != tt.wantLength {
t.Fatalf("SupportsLength(%q) = %v, want %v", tt.input, SupportsLength(tt.input), tt.wantLength)
}
if SupportsPrecision(tt.input) != tt.wantPrecision {
t.Fatalf("SupportsPrecision(%q) = %v, want %v", tt.input, SupportsPrecision(tt.input), tt.wantPrecision)
}
})
}
}

View File

@@ -12,6 +12,7 @@ import (
"strings"
"git.warky.dev/wdevs/relspecgo/pkg/models"
"git.warky.dev/wdevs/relspecgo/pkg/pgsql"
"git.warky.dev/wdevs/relspecgo/pkg/readers"
)
@@ -700,16 +701,21 @@ func (r *Reader) extractBunTag(tag string) string {
// parseTypeWithLength parses a type string and extracts length if present
// e.g., "varchar(255)" returns ("varchar", 255)
func (r *Reader) parseTypeWithLength(typeStr string) (baseType string, length int) {
typeStr = strings.TrimSpace(typeStr)
baseType = typeStr
// Check for type with length: varchar(255), char(10), etc.
re := regexp.MustCompile(`^([a-zA-Z\s]+)\((\d+)\)$`)
matches := re.FindStringSubmatch(typeStr)
if len(matches) == 3 {
if _, err := fmt.Sscanf(matches[2], "%d", &length); err == nil {
baseType = strings.TrimSpace(matches[1])
return
rawBaseType := strings.TrimSpace(matches[1])
if pgsql.SupportsLength(rawBaseType) {
if _, err := fmt.Sscanf(matches[2], "%d", &length); err == nil {
return
}
}
}
baseType = typeStr
return
}

View File

@@ -71,8 +71,11 @@ func TestReader_ReadDatabase_Simple(t *testing.T) {
if !emailCol.NotNull {
t.Error("Column 'email' should be NOT NULL (explicit 'notnull' tag)")
}
if emailCol.Type != "varchar" || emailCol.Length != 255 {
t.Errorf("Expected email type 'varchar(255)', got '%s' with length %d", emailCol.Type, emailCol.Length)
if emailCol.Type != "varchar" && emailCol.Type != "varchar(255)" {
t.Errorf("Expected email type 'varchar' or 'varchar(255)', got '%s' with length %d", emailCol.Type, emailCol.Length)
}
if emailCol.Length != 255 {
t.Errorf("Expected email length 255, got %d", emailCol.Length)
}
// Verify name column - primitive string type should be NOT NULL by default in Bun
@@ -356,6 +359,33 @@ func TestReader_ReadDatabase_Complex(t *testing.T) {
}
}
func TestParseTypeWithLength_PreservesExplicitTypeModifiers(t *testing.T) {
reader := &Reader{}
tests := []struct {
input string
wantType string
wantLength int
}{
{"varchar(255)", "varchar(255)", 255},
{"character varying(120)", "character varying(120)", 120},
{"vector(1536)", "vector(1536)", 1536},
{"numeric(10,2)", "numeric(10,2)", 0},
}
for _, tt := range tests {
t.Run(tt.input, func(t *testing.T) {
gotType, gotLength := reader.parseTypeWithLength(tt.input)
if gotType != tt.wantType {
t.Fatalf("parseTypeWithLength(%q) type = %q, want %q", tt.input, gotType, tt.wantType)
}
if gotLength != tt.wantLength {
t.Fatalf("parseTypeWithLength(%q) length = %d, want %d", tt.input, gotLength, tt.wantLength)
}
})
}
}
func TestReader_ReadSchema(t *testing.T) {
opts := &readers.ReaderOptions{
FilePath: filepath.Join("..", "..", "..", "tests", "assets", "bun", "simple.go"),
@@ -485,9 +515,9 @@ func TestReader_NullableTypes(t *testing.T) {
// Test all nullability scenarios
tests := []struct {
column string
notNull bool
reason string
column string
notNull bool
reason string
}{
{"id", true, "primary key"},
{"user_id", true, "explicit notnull tag"},

View File

@@ -567,110 +567,182 @@ func (r *Reader) parseDBML(content string) (*models.Database, error) {
// parseColumn parses a DBML column definition
func (r *Reader) parseColumn(line, tableName, schemaName string) (*models.Column, *models.Constraint) {
// Format: column_name type [attributes] // comment
parts := strings.Fields(line)
if len(parts) < 2 {
lineNoComment, inlineComment := splitInlineComment(line)
signature, attrs := splitColumnSignatureAndAttrs(lineNoComment)
columnName, columnType, ok := parseColumnSignature(signature)
if !ok {
return nil, nil
}
columnName := stripQuotes(parts[0])
columnType := stripQuotes(parts[1])
column := models.InitColumn(columnName, tableName, schemaName)
column.Type = columnType
var constraint *models.Constraint
// Parse attributes in brackets
if strings.Contains(line, "[") && strings.Contains(line, "]") {
attrStart := strings.Index(line, "[")
attrEnd := strings.Index(line, "]")
if attrStart < attrEnd {
attrs := line[attrStart+1 : attrEnd]
attrList := strings.Split(attrs, ",")
if attrs != "" {
attrList := strings.Split(attrs, ",")
for _, attr := range attrList {
attr = strings.TrimSpace(attr)
for _, attr := range attrList {
attr = strings.TrimSpace(attr)
if strings.Contains(attr, "primary key") || attr == "pk" {
column.IsPrimaryKey = true
column.NotNull = true
} else if strings.Contains(attr, "not null") {
column.NotNull = true
} else if attr == "increment" {
column.AutoIncrement = true
} else if strings.HasPrefix(attr, "default:") {
defaultVal := strings.TrimSpace(strings.TrimPrefix(attr, "default:"))
column.Default = strings.Trim(defaultVal, "'\"")
} else if attr == "unique" {
// Create a unique constraint
// Clean table name by removing leading underscores to avoid double underscores
cleanTableName := strings.TrimLeft(tableName, "_")
uniqueConstraint := models.InitConstraint(
fmt.Sprintf("ukey_%s_%s", cleanTableName, columnName),
models.UniqueConstraint,
)
uniqueConstraint.Schema = schemaName
uniqueConstraint.Table = tableName
uniqueConstraint.Columns = []string{columnName}
// Store it to be added later
if constraint == nil {
constraint = uniqueConstraint
}
} else if strings.HasPrefix(attr, "note:") {
// Parse column note/comment
note := strings.TrimSpace(strings.TrimPrefix(attr, "note:"))
column.Comment = strings.Trim(note, "'\"")
} else if strings.HasPrefix(attr, "ref:") {
// Parse inline reference
// DBML semantics depend on context:
// - On FK column: ref: < target means "this FK references target"
// - On PK column: ref: < source means "source references this PK" (reverse notation)
refStr := strings.TrimSpace(strings.TrimPrefix(attr, "ref:"))
// Check relationship direction operator
refOp := strings.TrimSpace(refStr)
var isReverse bool
if strings.HasPrefix(refOp, "<") {
// < means "is referenced by" - only makes sense on PK columns
isReverse = column.IsPrimaryKey
}
// > means "references" - always a forward FK, never reverse
constraint = r.parseRef(refStr)
if constraint != nil {
if isReverse {
// Reverse: parsed ref is SOURCE, current column is TARGET
// Constraint should be ON the source table
constraint.Schema = constraint.ReferencedSchema
constraint.Table = constraint.ReferencedTable
constraint.Columns = constraint.ReferencedColumns
constraint.ReferencedSchema = schemaName
constraint.ReferencedTable = tableName
constraint.ReferencedColumns = []string{columnName}
} else {
// Forward: current column is SOURCE, parsed ref is TARGET
// Standard FK: constraint is ON current table
constraint.Schema = schemaName
constraint.Table = tableName
constraint.Columns = []string{columnName}
}
// Generate constraint name based on table and columns
constraint.Name = fmt.Sprintf("fk_%s_%s", constraint.Table, strings.Join(constraint.Columns, "_"))
}
}
}
}
// Parse inline comment
if inlineComment != "" {
column.Comment = inlineComment
}
return column, constraint
}
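// splitInlineComment splits a line at the first "//" marker, returning the code portion and the trimmed comment text (empty if there is no comment).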
func splitInlineComment(line string) (string, string) {
commentStart := strings.Index(line, "//")
if commentStart == -1 {
return line, ""
}
return strings.TrimSpace(line[:commentStart]), strings.TrimSpace(line[commentStart+2:])
}
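// splitColumnSignatureAndAttrs separates a column definition from its trailing [attr, ...] block, returning the bare signature and the raw attribute list (empty when no block is present).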
func splitColumnSignatureAndAttrs(line string) (string, string) {
trimmed := strings.TrimSpace(line)
if trimmed == "" || !strings.HasSuffix(trimmed, "]") {
return trimmed, ""
}
bracketDepth := 0
for i := len(trimmed) - 1; i >= 0; i-- {
switch trimmed[i] {
case ']':
bracketDepth++
case '[':
bracketDepth--
if bracketDepth == 0 {
// DBML attributes are a trailing [ ... ] block preceded by whitespace.
// This avoids confusing array types like text[] with attribute blocks.
if i > 0 && (trimmed[i-1] == ' ' || trimmed[i-1] == '\t') {
return strings.TrimSpace(trimmed[:i]), strings.TrimSpace(trimmed[i+1 : len(trimmed)-1])
}
}
}
}
return trimmed, ""
}
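// parseColumnSignature splits a column signature into its name and type, honoring quoted names and multi-word types such as "timestamp with time zone".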
func parseColumnSignature(signature string) (string, string, bool) {
signature = strings.TrimSpace(signature)
if signature == "" {
return "", "", false
}
var splitAt int
if signature[0] == '"' || signature[0] == '\'' {
quote := signature[0]
splitAt = 1
for splitAt < len(signature) {
if signature[splitAt] == quote {
splitAt++
break
}
splitAt++
}
} else {
for splitAt < len(signature) && signature[splitAt] != ' ' && signature[splitAt] != '\t' {
splitAt++
}
}
if splitAt <= 0 || splitAt >= len(signature) {
return "", "", false
}
columnName := stripQuotes(strings.TrimSpace(signature[:splitAt]))
columnType := stripWrappingQuotes(strings.TrimSpace(signature[splitAt:]))
if columnName == "" || columnType == "" {
return "", "", false
}
return columnName, columnType, true
}
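// stripWrappingQuotes removes one pair of matching single or double quotes that wraps the whole string, if present.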
func stripWrappingQuotes(s string) string {
s = strings.TrimSpace(s)
if len(s) >= 2 && ((s[0] == '"' && s[len(s)-1] == '"') || (s[0] == '\'' && s[len(s)-1] == '\'')) {
return s[1 : len(s)-1]
}
return s
}
// parseIndex parses a DBML index definition
func (r *Reader) parseIndex(line, tableName, schemaName string) *models.Index {
// Format: (columns) [attributes] OR columnname [attributes]
@@ -832,7 +904,11 @@ func (r *Reader) parseRef(refStr string) *models.Constraint {
for _, action := range actionList {
action = strings.TrimSpace(action)
if strings.HasPrefix(action, "delete:") {
constraint.OnDelete = strings.TrimSpace(strings.TrimPrefix(action, "delete:"))
} else if strings.HasPrefix(action, "update:") {
constraint.OnUpdate = strings.TrimSpace(strings.TrimPrefix(action, "update:"))
} else if strings.HasPrefix(action, "ondelete:") {
constraint.OnDelete = strings.TrimSpace(strings.TrimPrefix(action, "ondelete:"))
} else if strings.HasPrefix(action, "onupdate:") {
constraint.OnUpdate = strings.TrimSpace(strings.TrimPrefix(action, "onupdate:"))

View File

@@ -839,6 +839,67 @@ func TestConstraintNaming(t *testing.T) {
}
}
func TestParseColumn_PostgresTypes(t *testing.T) {
reader := &Reader{}
tests := []struct {
name string
line string
wantName string
wantType string
wantNotNull bool
wantComment string
}{
{
name: "array type with attrs",
line: "tags text[] [not null]",
wantName: "tags",
wantType: "text[]",
wantNotNull: true,
},
{
name: "vector with dimension",
line: "embedding vector(1536)",
wantName: "embedding",
wantType: "vector(1536)",
},
{
name: "multi word timestamp type",
line: "published_at timestamp with time zone",
wantName: "published_at",
wantType: "timestamp with time zone",
},
{
name: "array type with inline comment",
line: "labels varchar(20)[] // column labels",
wantName: "labels",
wantType: "varchar(20)[]",
wantComment: "column labels",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
col, _ := reader.parseColumn(tt.line, "events", "public")
if col == nil {
t.Fatalf("parseColumn() returned nil column")
}
if col.Name != tt.wantName {
t.Errorf("column name = %q, want %q", col.Name, tt.wantName)
}
if col.Type != tt.wantType {
t.Errorf("column type = %q, want %q", col.Type, tt.wantType)
}
if col.NotNull != tt.wantNotNull {
t.Errorf("column not null = %v, want %v", col.NotNull, tt.wantNotNull)
}
if col.Comment != tt.wantComment {
t.Errorf("column comment = %q, want %q", col.Comment, tt.wantComment)
}
})
}
}
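// getKeys returns the map's keys as a slice (iteration order is not deterministic).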
func getKeys[V any](m map[string]V) []string {
keys := make([]string, 0, len(m))
for k := range m {

View File

@@ -7,6 +7,7 @@ import (
"strings"
"git.warky.dev/wdevs/relspecgo/pkg/models"
"git.warky.dev/wdevs/relspecgo/pkg/pgsql"
"git.warky.dev/wdevs/relspecgo/pkg/readers"
)
@@ -232,7 +233,19 @@ func (r *Reader) convertField(dctxField *models.DCTXField, tableName string) ([]
// mapDataType maps Clarion data types to SQL types
func (r *Reader) mapDataType(clarionType string, size int) (sqlType string, precision int) {
trimmedType := strings.TrimSpace(clarionType)
// Preserve known PostgreSQL types (including arrays and extension types)
// from DCTX input instead of coercing them to generic text.
if pgsql.IsKnownPostgresType(trimmedType) {
pgType := canonicalizePostgresType(trimmedType)
if !pgsql.HasExplicitTypeModifier(pgType) && size > 0 && pgsql.SupportsLength(pgType) {
return pgType, size
}
return pgType, 0
}
switch strings.ToUpper(trimmedType) {
case "LONG":
if size == 8 {
return "bigint", 0
@@ -306,6 +319,32 @@ func (r *Reader) mapDataType(clarionType string, size int) (sqlType string, prec
}
}
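// canonicalizePostgresType normalizes case and whitespace, canonicalizes the base type name (e.g. int4 -> integer), and re-appends any type modifier and [] array suffixes.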
func canonicalizePostgresType(typeStr string) string {
t := strings.ToLower(strings.Join(strings.Fields(strings.TrimSpace(typeStr)), " "))
if t == "" {
return ""
}
// Handle array suffixes
arrayCount := 0
for strings.HasSuffix(t, "[]") {
arrayCount++
t = strings.TrimSpace(strings.TrimSuffix(t, "[]"))
}
// Handle optional type modifier
modifier := ""
if idx := strings.Index(t, "("); idx > 0 {
if end := strings.LastIndex(t, ")"); end > idx {
modifier = t[idx : end+1]
t = strings.TrimSpace(t[:idx])
}
}
base := pgsql.CanonicalizeBaseType(t)
return base + modifier + strings.Repeat("[]", arrayCount)
}
// processKeys processes DCTX keys and converts them to indexes and primary keys
func (r *Reader) processKeys(dctxTable *models.DCTXTable, table *models.Table, fieldGuidMap map[string]string) error {
for _, dctxKey := range dctxTable.Keys {

View File

@@ -493,3 +493,55 @@ func TestRelationships(t *testing.T) {
}
}
}
func TestMapDataType_PostgresTypes(t *testing.T) {
reader := &Reader{}
tests := []struct {
name string
inputType string
size int
wantType string
wantLength int
}{
{
name: "integer array preserved",
inputType: "integer[]",
wantType: "integer[]",
},
{
name: "citext array preserved",
inputType: "citext[]",
wantType: "citext[]",
},
{
name: "vector modifier preserved",
inputType: "vector(1536)",
wantType: "vector(1536)",
},
{
name: "alias canonicalized in array",
inputType: "int4[]",
wantType: "integer[]",
},
{
name: "varchar length from size",
inputType: "varchar",
size: 120,
wantType: "varchar",
wantLength: 120,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
gotType, gotLength := reader.mapDataType(tt.inputType, tt.size)
if gotType != tt.wantType {
t.Fatalf("mapDataType(%q, %d) type = %q, want %q", tt.inputType, tt.size, gotType, tt.wantType)
}
if gotLength != tt.wantLength {
t.Fatalf("mapDataType(%q, %d) length = %d, want %d", tt.inputType, tt.size, gotLength, tt.wantLength)
}
})
}
}

View File

@@ -8,6 +8,7 @@ import (
"strings"
"git.warky.dev/wdevs/relspecgo/pkg/models"
"git.warky.dev/wdevs/relspecgo/pkg/pgsql"
"git.warky.dev/wdevs/relspecgo/pkg/readers"
"git.warky.dev/wdevs/relspecgo/pkg/writers/drawdb"
)
@@ -231,30 +232,35 @@ func (r *Reader) convertToColumn(field *drawdb.DrawDBField, tableName, schemaNam
// Parse type and dimensions
typeStr := field.Type
typeStr = strings.TrimSpace(typeStr)
column.Type = typeStr
// Try to extract length/precision from type string like "varchar(255)" or "decimal(10,2)"
if strings.Contains(typeStr, "(") {
parts := strings.Split(typeStr, "(")
baseType := strings.TrimSpace(parts[0])
if len(parts) > 1 {
dimensions := strings.TrimSuffix(parts[1], ")")
if strings.Contains(dimensions, ",") {
// Precision and scale (e.g., decimal(10,2), numeric(10,2))
if pgsql.SupportsPrecision(baseType) {
dims := strings.Split(dimensions, ",")
if precision, err := strconv.Atoi(strings.TrimSpace(dims[0])); err == nil {
column.Precision = precision
}
if len(dims) > 1 {
if scale, err := strconv.Atoi(strings.TrimSpace(dims[1])); err == nil {
column.Scale = scale
}
}
}
} else {
// Just length (e.g., varchar(255))
if pgsql.SupportsLength(baseType) {
if length, err := strconv.Atoi(dimensions); err == nil {
column.Length = length
}
}
}
}

View File

@@ -6,6 +6,7 @@ import (
"git.warky.dev/wdevs/relspecgo/pkg/models"
"git.warky.dev/wdevs/relspecgo/pkg/readers"
"git.warky.dev/wdevs/relspecgo/pkg/writers/drawdb"
)
func TestReader_ReadDatabase_Simple(t *testing.T) {
@@ -288,6 +289,61 @@ func TestReader_ReadDatabase_Complex(t *testing.T) {
}
}
func TestConvertToColumn_PreservesExplicitTypeModifiers(t *testing.T) {
reader := &Reader{}
tests := []struct {
name string
fieldType string
wantType string
wantLength int
wantPrecision int
wantScale int
}{
{
name: "varchar with length",
fieldType: "varchar(255)",
wantType: "varchar(255)",
wantLength: 255,
},
{
name: "numeric precision/scale",
fieldType: "numeric(10,2)",
wantType: "numeric(10,2)",
wantPrecision: 10,
wantScale: 2,
},
{
name: "custom vector modifier",
fieldType: "vector(1536)",
wantType: "vector(1536)",
wantLength: 1536,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
field := &drawdb.DrawDBField{
Name: tt.name,
Type: tt.fieldType,
}
col := reader.convertToColumn(field, "events", "public")
if col.Type != tt.wantType {
t.Fatalf("column type = %q, want %q", col.Type, tt.wantType)
}
if col.Length != tt.wantLength {
t.Fatalf("column length = %d, want %d", col.Length, tt.wantLength)
}
if col.Precision != tt.wantPrecision {
t.Fatalf("column precision = %d, want %d", col.Precision, tt.wantPrecision)
}
if col.Scale != tt.wantScale {
t.Fatalf("column scale = %d, want %d", col.Scale, tt.wantScale)
}
})
}
}
func TestReader_ReadSchema(t *testing.T) {
opts := &readers.ReaderOptions{
FilePath: filepath.Join("..", "..", "..", "tests", "assets", "drawdb", "simple.json"),

View File

@@ -12,6 +12,7 @@ import (
"strings"
"git.warky.dev/wdevs/relspecgo/pkg/models"
"git.warky.dev/wdevs/relspecgo/pkg/pgsql"
"git.warky.dev/wdevs/relspecgo/pkg/readers"
)
@@ -784,11 +785,14 @@ func (r *Reader) extractGormTag(tag string) string {
// parseTypeWithLength parses a type string and extracts length if present
// e.g., "varchar(255)" returns ("varchar", 255)
func (r *Reader) parseTypeWithLength(typeStr string) (baseType string, length int) {
typeStr = strings.TrimSpace(typeStr)
baseType = typeStr
// Check for type with length: varchar(255), char(10), etc.
// Also handle precision/scale: numeric(10,2)
if strings.Contains(typeStr, "(") {
idx := strings.Index(typeStr, "(")
rawBaseType := strings.TrimSpace(typeStr[:idx])
// Extract numbers from parentheses
parens := typeStr[idx+1:]
@@ -796,14 +800,15 @@ func (r *Reader) parseTypeWithLength(typeStr string) (baseType string, length in
parens = parens[:endIdx]
}
// Only treat as "length" for text-ish SQL types.
// This avoids converting custom modifiers like vector(1536) into Length.
if pgsql.SupportsLength(rawBaseType) && !strings.Contains(parens, ",") {
if _, err := fmt.Sscanf(parens, "%d", &length); err == nil {
return
}
}
}
baseType = typeStr
return
}

View File

@@ -71,8 +71,11 @@ func TestReader_ReadDatabase_Simple(t *testing.T) {
if !emailCol.NotNull {
t.Error("Column 'email' should be NOT NULL (explicit 'not null' tag)")
}
if emailCol.Type != "varchar" || emailCol.Length != 255 {
t.Errorf("Expected email type 'varchar(255)', got '%s' with length %d", emailCol.Type, emailCol.Length)
if emailCol.Type != "varchar" && emailCol.Type != "varchar(255)" {
t.Errorf("Expected email type 'varchar' or 'varchar(255)', got '%s' with length %d", emailCol.Type, emailCol.Length)
}
if emailCol.Length != 255 {
t.Errorf("Expected email length 255, got %d", emailCol.Length)
}
// Verify name column - primitive string type should be NOT NULL by default
@@ -363,6 +366,33 @@ func TestReader_ReadDatabase_Complex(t *testing.T) {
}
}
func TestParseTypeWithLength_PreservesExplicitTypeModifiers(t *testing.T) {
reader := &Reader{}
tests := []struct {
input string
wantType string
wantLength int
}{
{"varchar(255)", "varchar(255)", 255},
{"character varying(120)", "character varying(120)", 120},
{"vector(1536)", "vector(1536)", 1536},
{"numeric(10,2)", "numeric(10,2)", 0},
}
for _, tt := range tests {
t.Run(tt.input, func(t *testing.T) {
gotType, gotLength := reader.parseTypeWithLength(tt.input)
if gotType != tt.wantType {
t.Fatalf("parseTypeWithLength(%q) type = %q, want %q", tt.input, gotType, tt.wantType)
}
if gotLength != tt.wantLength {
t.Fatalf("parseTypeWithLength(%q) length = %d, want %d", tt.input, gotLength, tt.wantLength)
}
})
}
}
func TestReader_ReadSchema(t *testing.T) {
opts := &readers.ReaderOptions{
FilePath: filepath.Join("..", "..", "..", "tests", "assets", "gorm", "simple.go"),

View File

@@ -0,0 +1,91 @@
# MSSQL Reader
Reads database schema from Microsoft SQL Server databases using a live connection.
## Features
- **Live Connection**: Connects to MSSQL databases using the Microsoft ODBC driver
- **Multi-Schema Support**: Reads multiple schemas with full support for user-defined schemas
- **Comprehensive Metadata**: Reads tables, columns, constraints, indexes, and extended properties
- **Type Mapping**: Converts MSSQL types to canonical types for cross-database compatibility
- **Extended Properties**: Extracts table and column descriptions from MS_Description
- **Identity Columns**: Maps IDENTITY columns to AutoIncrement
- **Relationships**: Derives relationships from foreign key constraints
## Connection String Format
```
sqlserver://[user[:password]@][host][:port][/database][?query]
```
Examples:
```
sqlserver://sa:password@localhost/dbname
sqlserver://user:pass@192.168.1.100:1433/production
sqlserver://localhost/testdb?encrypt=disable
```
## Supported Constraints
- Primary Keys
- Foreign Keys (with ON DELETE and ON UPDATE actions)
- Unique Constraints
- Check Constraints
## Type Mappings
| MSSQL Type | Canonical Type |
|------------|----------------|
| INT | int |
| BIGINT | int64 |
| SMALLINT | int16 |
| TINYINT | int8 |
| BIT | bool |
| REAL | float32 |
| FLOAT | float64 |
| NUMERIC, DECIMAL | decimal |
| NVARCHAR, VARCHAR | string |
| DATETIME2 | timestamp |
| DATETIMEOFFSET | timestamptz |
| UNIQUEIDENTIFIER | uuid |
| VARBINARY | bytea |
| DATE | date |
| TIME | time |
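As a quick sanity check of the table above, the conversion helpers in `pkg/mssql` can be exercised directly (a minimal sketch; the output values assume the mappings listed here):

```go
package main

import (
	"fmt"

	"git.warky.dev/wdevs/relspecgo/pkg/mssql"
)

func main() {
	// MSSQL -> canonical: type parameters such as (255) are dropped.
	fmt.Println(mssql.ConvertMSSQLToCanonical("NVARCHAR(255)")) // string

	// Canonical -> MSSQL: a default parameterization is applied.
	fmt.Println(mssql.ConvertCanonicalToMSSQL("text")) // NVARCHAR(MAX)
}
```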
## Usage
```go
import "git.warky.dev/wdevs/relspecgo/pkg/readers/mssql"
import "git.warky.dev/wdevs/relspecgo/pkg/readers"
reader := mssql.NewReader(&readers.ReaderOptions{
ConnectionString: "sqlserver://sa:password@localhost/mydb",
})
db, err := reader.ReadDatabase()
if err != nil {
panic(err)
}
// Process schema...
for _, schema := range db.Schemas {
fmt.Printf("Schema: %s\n", schema.Name)
for _, table := range schema.Tables {
fmt.Printf(" Table: %s\n", table.Name)
}
}
```
## Testing
Run tests with:
```bash
go test ./pkg/readers/mssql/...
```
For integration testing with a live MSSQL database:
```bash
docker-compose up -d mssql
go test -tags=integration ./pkg/readers/mssql/...
docker-compose down
```

View File

@@ -0,0 +1,416 @@
package mssql
import (
"strings"
"git.warky.dev/wdevs/relspecgo/pkg/models"
)
// querySchemas retrieves all user-defined schemas from the database
func (r *Reader) querySchemas() ([]*models.Schema, error) {
query := `
SELECT s.name, ISNULL(ep.value, '') as description
FROM sys.schemas s
LEFT JOIN sys.extended_properties ep
ON ep.major_id = s.schema_id
AND ep.minor_id = 0
AND ep.class = 3
AND ep.name = 'MS_Description'
WHERE s.name NOT IN ('dbo', 'guest', 'INFORMATION_SCHEMA', 'sys')
ORDER BY s.name
`
rows, err := r.db.QueryContext(r.ctx, query)
if err != nil {
return nil, err
}
defer rows.Close()
schemas := make([]*models.Schema, 0)
for rows.Next() {
var name, description string
if err := rows.Scan(&name, &description); err != nil {
return nil, err
}
schema := models.InitSchema(name)
if description != "" {
schema.Description = description
}
schemas = append(schemas, schema)
}
// Always include the default dbo schema (it is excluded from the query above)
dboSchema := models.InitSchema("dbo")
schemas = append(schemas, dboSchema)
return schemas, rows.Err()
}
// queryTables retrieves all tables for a given schema
func (r *Reader) queryTables(schemaName string) ([]*models.Table, error) {
query := `
SELECT t.table_schema, t.table_name, ISNULL(ep.value, '') as description
FROM information_schema.tables t
LEFT JOIN sys.extended_properties ep
ON ep.major_id = OBJECT_ID(QUOTENAME(t.table_schema) + '.' + QUOTENAME(t.table_name))
AND ep.minor_id = 0
AND ep.class = 1
AND ep.name = 'MS_Description'
WHERE t.table_schema = ? AND t.table_type = 'BASE TABLE'
ORDER BY t.table_name
`
rows, err := r.db.QueryContext(r.ctx, query, schemaName)
if err != nil {
return nil, err
}
defer rows.Close()
tables := make([]*models.Table, 0)
for rows.Next() {
var schema, tableName, description string
if err := rows.Scan(&schema, &tableName, &description); err != nil {
return nil, err
}
table := models.InitTable(tableName, schema)
if description != "" {
table.Description = description
}
tables = append(tables, table)
}
return tables, rows.Err()
}
// queryColumns retrieves all columns for tables in a schema
// Returns map[schema.table]map[columnName]*Column
func (r *Reader) queryColumns(schemaName string) (map[string]map[string]*models.Column, error) {
query := `
SELECT
c.table_schema,
c.table_name,
c.column_name,
c.ordinal_position,
c.column_default,
c.is_nullable,
c.data_type,
c.character_maximum_length,
c.numeric_precision,
c.numeric_scale,
ISNULL(ep.value, '') as description,
COLUMNPROPERTY(OBJECT_ID(QUOTENAME(c.table_schema) + '.' + QUOTENAME(c.table_name)), c.column_name, 'IsIdentity') as is_identity
FROM information_schema.columns c
LEFT JOIN sys.extended_properties ep
ON ep.major_id = OBJECT_ID(QUOTENAME(c.table_schema) + '.' + QUOTENAME(c.table_name))
AND ep.minor_id = COLUMNPROPERTY(OBJECT_ID(QUOTENAME(c.table_schema) + '.' + QUOTENAME(c.table_name)), c.column_name, 'ColumnId')
AND ep.class = 1
AND ep.name = 'MS_Description'
WHERE c.table_schema = ?
ORDER BY c.table_schema, c.table_name, c.ordinal_position
`
rows, err := r.db.QueryContext(r.ctx, query, schemaName)
if err != nil {
return nil, err
}
defer rows.Close()
columnsMap := make(map[string]map[string]*models.Column)
for rows.Next() {
var schema, tableName, columnName, isNullable, dataType, description string
var ordinalPosition int
var columnDefault *string // column_default is textual in information_schema
var charMaxLength, numPrecision, numScale, isIdentity *int
if err := rows.Scan(&schema, &tableName, &columnName, &ordinalPosition, &columnDefault, &isNullable, &dataType, &charMaxLength, &numPrecision, &numScale, &description, &isIdentity); err != nil {
return nil, err
}
column := models.InitColumn(columnName, tableName, schema)
column.Type = r.mapDataType(dataType)
column.NotNull = (isNullable == "NO")
column.Sequence = uint(ordinalPosition)
if description != "" {
column.Description = description
}
// Check if this is an identity column (auto-increment)
if isIdentity != nil && *isIdentity == 1 {
column.AutoIncrement = true
}
if charMaxLength != nil && *charMaxLength > 0 {
column.Length = *charMaxLength
}
if numPrecision != nil && *numPrecision > 0 {
column.Precision = *numPrecision
}
if numScale != nil && *numScale > 0 {
column.Scale = *numScale
}
// Create table key
tableKey := schema + "." + tableName
if columnsMap[tableKey] == nil {
columnsMap[tableKey] = make(map[string]*models.Column)
}
columnsMap[tableKey][columnName] = column
}
return columnsMap, rows.Err()
}
// queryPrimaryKeys retrieves all primary key constraints for a schema
// Returns map[schema.table]*Constraint
func (r *Reader) queryPrimaryKeys(schemaName string) (map[string]*models.Constraint, error) {
query := `
SELECT
s.name as schema_name,
t.name as table_name,
i.name as constraint_name,
STRING_AGG(c.name, ',') WITHIN GROUP (ORDER BY ic.key_ordinal) as columns
FROM sys.tables t
INNER JOIN sys.indexes i ON t.object_id = i.object_id AND i.is_primary_key = 1
INNER JOIN sys.schemas s ON t.schema_id = s.schema_id
INNER JOIN sys.index_columns ic ON i.object_id = ic.object_id AND i.index_id = ic.index_id
INNER JOIN sys.columns c ON t.object_id = c.object_id AND ic.column_id = c.column_id
WHERE s.name = ?
GROUP BY s.name, t.name, i.name
`
rows, err := r.db.QueryContext(r.ctx, query, schemaName)
if err != nil {
return nil, err
}
defer rows.Close()
primaryKeys := make(map[string]*models.Constraint)
for rows.Next() {
var schema, tableName, constraintName, columnsStr string
if err := rows.Scan(&schema, &tableName, &constraintName, &columnsStr); err != nil {
return nil, err
}
columns := strings.Split(columnsStr, ",")
constraint := models.InitConstraint(constraintName, models.PrimaryKeyConstraint)
constraint.Schema = schema
constraint.Table = tableName
constraint.Columns = columns
tableKey := schema + "." + tableName
primaryKeys[tableKey] = constraint
}
return primaryKeys, rows.Err()
}
// queryForeignKeys retrieves all foreign key constraints for a schema
// Returns map[schema.table][]*Constraint
func (r *Reader) queryForeignKeys(schemaName string) (map[string][]*models.Constraint, error) {
query := `
SELECT
s.name as schema_name,
t.name as table_name,
fk.name as constraint_name,
rs.name as referenced_schema,
rt.name as referenced_table,
STRING_AGG(c.name, ',') WITHIN GROUP (ORDER BY fkc.constraint_column_id) as columns,
STRING_AGG(rc.name, ',') WITHIN GROUP (ORDER BY fkc.constraint_column_id) as referenced_columns,
fk.delete_referential_action_desc,
fk.update_referential_action_desc
FROM sys.foreign_keys fk
INNER JOIN sys.tables t ON fk.parent_object_id = t.object_id
INNER JOIN sys.tables rt ON fk.referenced_object_id = rt.object_id
INNER JOIN sys.schemas s ON t.schema_id = s.schema_id
INNER JOIN sys.schemas rs ON rt.schema_id = rs.schema_id
INNER JOIN sys.foreign_key_columns fkc ON fk.object_id = fkc.constraint_object_id
INNER JOIN sys.columns c ON fkc.parent_object_id = c.object_id AND fkc.parent_column_id = c.column_id
INNER JOIN sys.columns rc ON fkc.referenced_object_id = rc.object_id AND fkc.referenced_column_id = rc.column_id
WHERE s.name = ?
GROUP BY s.name, t.name, fk.name, rs.name, rt.name, fk.delete_referential_action_desc, fk.update_referential_action_desc
`
rows, err := r.db.QueryContext(r.ctx, query, schemaName)
if err != nil {
return nil, err
}
defer rows.Close()
foreignKeys := make(map[string][]*models.Constraint)
for rows.Next() {
var schema, tableName, constraintName, refSchema, refTable, columnsStr, refColumnsStr, deleteAction, updateAction string
if err := rows.Scan(&schema, &tableName, &constraintName, &refSchema, &refTable, &columnsStr, &refColumnsStr, &deleteAction, &updateAction); err != nil {
return nil, err
}
columns := strings.Split(columnsStr, ",")
refColumns := strings.Split(refColumnsStr, ",")
constraint := models.InitConstraint(constraintName, models.ForeignKeyConstraint)
constraint.Schema = schema
constraint.Table = tableName
constraint.Columns = columns
constraint.ReferencedSchema = refSchema
constraint.ReferencedTable = refTable
constraint.ReferencedColumns = refColumns
constraint.OnDelete = strings.ToUpper(deleteAction)
constraint.OnUpdate = strings.ToUpper(updateAction)
tableKey := schema + "." + tableName
foreignKeys[tableKey] = append(foreignKeys[tableKey], constraint)
}
return foreignKeys, rows.Err()
}
// queryUniqueConstraints retrieves all unique constraints for a schema
// Returns map[schema.table][]*Constraint
func (r *Reader) queryUniqueConstraints(schemaName string) (map[string][]*models.Constraint, error) {
query := `
SELECT
s.name as schema_name,
t.name as table_name,
i.name as constraint_name,
STRING_AGG(c.name, ',') WITHIN GROUP (ORDER BY ic.key_ordinal) as columns
FROM sys.tables t
INNER JOIN sys.indexes i ON t.object_id = i.object_id AND i.is_unique = 1 AND i.is_primary_key = 0
INNER JOIN sys.schemas s ON t.schema_id = s.schema_id
INNER JOIN sys.index_columns ic ON i.object_id = ic.object_id AND i.index_id = ic.index_id
INNER JOIN sys.columns c ON t.object_id = c.object_id AND ic.column_id = c.column_id
WHERE s.name = ?
GROUP BY s.name, t.name, i.name
`
rows, err := r.db.QueryContext(r.ctx, query, schemaName)
if err != nil {
return nil, err
}
defer rows.Close()
uniqueConstraints := make(map[string][]*models.Constraint)
for rows.Next() {
var schema, tableName, constraintName, columnsStr string
if err := rows.Scan(&schema, &tableName, &constraintName, &columnsStr); err != nil {
return nil, err
}
columns := strings.Split(columnsStr, ",")
constraint := models.InitConstraint(constraintName, models.UniqueConstraint)
constraint.Schema = schema
constraint.Table = tableName
constraint.Columns = columns
tableKey := schema + "." + tableName
uniqueConstraints[tableKey] = append(uniqueConstraints[tableKey], constraint)
}
return uniqueConstraints, rows.Err()
}
// queryCheckConstraints retrieves all check constraints for a schema
// Returns map[schema.table][]*Constraint
func (r *Reader) queryCheckConstraints(schemaName string) (map[string][]*models.Constraint, error) {
query := `
SELECT
s.name as schema_name,
t.name as table_name,
cc.name as constraint_name,
cc.definition
FROM sys.tables t
INNER JOIN sys.check_constraints cc ON t.object_id = cc.parent_object_id
INNER JOIN sys.schemas s ON t.schema_id = s.schema_id
WHERE s.name = ?
`
rows, err := r.db.QueryContext(r.ctx, query, schemaName)
if err != nil {
return nil, err
}
defer rows.Close()
checkConstraints := make(map[string][]*models.Constraint)
for rows.Next() {
var schema, tableName, constraintName, definition string
if err := rows.Scan(&schema, &tableName, &constraintName, &definition); err != nil {
return nil, err
}
constraint := models.InitConstraint(constraintName, models.CheckConstraint)
constraint.Schema = schema
constraint.Table = tableName
constraint.Expression = definition
tableKey := schema + "." + tableName
checkConstraints[tableKey] = append(checkConstraints[tableKey], constraint)
}
return checkConstraints, rows.Err()
}
// queryIndexes retrieves all indexes for a schema
// Returns map[schema.table][]*Index
func (r *Reader) queryIndexes(schemaName string) (map[string][]*models.Index, error) {
query := `
SELECT
s.name as schema_name,
t.name as table_name,
i.name as index_name,
i.is_unique,
STRING_AGG(c.name, ',') WITHIN GROUP (ORDER BY ic.key_ordinal) as columns
FROM sys.tables t
INNER JOIN sys.indexes i ON t.object_id = i.object_id AND i.is_primary_key = 0 AND i.name IS NOT NULL
INNER JOIN sys.schemas s ON t.schema_id = s.schema_id
INNER JOIN sys.index_columns ic ON i.object_id = ic.object_id AND i.index_id = ic.index_id
INNER JOIN sys.columns c ON t.object_id = c.object_id AND ic.column_id = c.column_id
WHERE s.name = ?
GROUP BY s.name, t.name, i.name, i.is_unique
`
rows, err := r.db.QueryContext(r.ctx, query, schemaName)
if err != nil {
return nil, err
}
defer rows.Close()
indexes := make(map[string][]*models.Index)
for rows.Next() {
var schema, tableName, indexName, columnsStr string
var isUnique bool // sys.indexes.is_unique is a BIT, which the driver scans as bool
if err := rows.Scan(&schema, &tableName, &indexName, &isUnique, &columnsStr); err != nil {
return nil, err
}
columns := strings.Split(columnsStr, ",")
index := models.InitIndex(indexName, tableName, schema)
index.Columns = columns
index.Unique = isUnique
index.Type = "btree" // MSSQL uses btree by default
tableKey := schema + "." + tableName
indexes[tableKey] = append(indexes[tableKey], index)
}
return indexes, rows.Err()
}

pkg/readers/mssql/reader.go
View File

@@ -0,0 +1,266 @@
package mssql
import (
"context"
"database/sql"
"fmt"
_ "github.com/microsoft/go-mssqldb" // MSSQL driver
"git.warky.dev/wdevs/relspecgo/pkg/models"
"git.warky.dev/wdevs/relspecgo/pkg/mssql"
"git.warky.dev/wdevs/relspecgo/pkg/readers"
)
// Reader implements the readers.Reader interface for MSSQL databases
type Reader struct {
options *readers.ReaderOptions
db *sql.DB
ctx context.Context
}
// NewReader creates a new MSSQL reader
func NewReader(options *readers.ReaderOptions) *Reader {
return &Reader{
options: options,
ctx: context.Background(),
}
}
// ReadDatabase reads the entire database schema from MSSQL
func (r *Reader) ReadDatabase() (*models.Database, error) {
// Validate connection string
if r.options.ConnectionString == "" {
return nil, fmt.Errorf("connection string is required")
}
// Connect to the database
if err := r.connect(); err != nil {
return nil, fmt.Errorf("failed to connect: %w", err)
}
defer r.close()
// Get database name
var dbName string
err := r.db.QueryRowContext(r.ctx, "SELECT DB_NAME()").Scan(&dbName)
if err != nil {
return nil, fmt.Errorf("failed to get database name: %w", err)
}
// Initialize database model
db := models.InitDatabase(dbName)
db.DatabaseType = models.MSSQLDatabaseType
db.SourceFormat = "mssql"
// Get MSSQL version
var version string
err = r.db.QueryRowContext(r.ctx, "SELECT @@VERSION").Scan(&version)
if err == nil {
db.DatabaseVersion = version
}
// Query all schemas
schemas, err := r.querySchemas()
if err != nil {
return nil, fmt.Errorf("failed to query schemas: %w", err)
}
// Process each schema
for _, schema := range schemas {
// Query tables for this schema
tables, err := r.queryTables(schema.Name)
if err != nil {
return nil, fmt.Errorf("failed to query tables for schema %s: %w", schema.Name, err)
}
schema.Tables = tables
// Query columns for tables
columnsMap, err := r.queryColumns(schema.Name)
if err != nil {
return nil, fmt.Errorf("failed to query columns for schema %s: %w", schema.Name, err)
}
// Populate table columns
for _, table := range schema.Tables {
tableKey := schema.Name + "." + table.Name
if cols, exists := columnsMap[tableKey]; exists {
table.Columns = cols
}
}
// Query primary keys
primaryKeys, err := r.queryPrimaryKeys(schema.Name)
if err != nil {
return nil, fmt.Errorf("failed to query primary keys for schema %s: %w", schema.Name, err)
}
// Apply primary keys to tables
for _, table := range schema.Tables {
tableKey := schema.Name + "." + table.Name
if pk, exists := primaryKeys[tableKey]; exists {
table.Constraints[pk.Name] = pk
// Mark columns as primary key and not null
for _, colName := range pk.Columns {
if col, colExists := table.Columns[colName]; colExists {
col.IsPrimaryKey = true
col.NotNull = true
}
}
}
}
// Query foreign keys
foreignKeys, err := r.queryForeignKeys(schema.Name)
if err != nil {
return nil, fmt.Errorf("failed to query foreign keys for schema %s: %w", schema.Name, err)
}
// Apply foreign keys to tables
for _, table := range schema.Tables {
tableKey := schema.Name + "." + table.Name
if fks, exists := foreignKeys[tableKey]; exists {
for _, fk := range fks {
table.Constraints[fk.Name] = fk
// Derive relationship from foreign key
r.deriveRelationship(table, fk)
}
}
}
// Query unique constraints
uniqueConstraints, err := r.queryUniqueConstraints(schema.Name)
if err != nil {
return nil, fmt.Errorf("failed to query unique constraints for schema %s: %w", schema.Name, err)
}
// Apply unique constraints to tables
for _, table := range schema.Tables {
tableKey := schema.Name + "." + table.Name
if ucs, exists := uniqueConstraints[tableKey]; exists {
for _, uc := range ucs {
table.Constraints[uc.Name] = uc
}
}
}
// Query check constraints
checkConstraints, err := r.queryCheckConstraints(schema.Name)
if err != nil {
return nil, fmt.Errorf("failed to query check constraints for schema %s: %w", schema.Name, err)
}
// Apply check constraints to tables
for _, table := range schema.Tables {
tableKey := schema.Name + "." + table.Name
if ccs, exists := checkConstraints[tableKey]; exists {
for _, cc := range ccs {
table.Constraints[cc.Name] = cc
}
}
}
// Query indexes
indexes, err := r.queryIndexes(schema.Name)
if err != nil {
return nil, fmt.Errorf("failed to query indexes for schema %s: %w", schema.Name, err)
}
// Apply indexes to tables
for _, table := range schema.Tables {
tableKey := schema.Name + "." + table.Name
if idxs, exists := indexes[tableKey]; exists {
for _, idx := range idxs {
table.Indexes[idx.Name] = idx
}
}
}
// Set RefDatabase for schema
schema.RefDatabase = db
// Set RefSchema for tables
for _, table := range schema.Tables {
table.RefSchema = schema
}
// Add schema to database
db.Schemas = append(db.Schemas, schema)
}
return db, nil
}
// ReadSchema reads a single schema (returns the first schema from the database)
func (r *Reader) ReadSchema() (*models.Schema, error) {
db, err := r.ReadDatabase()
if err != nil {
return nil, err
}
if len(db.Schemas) == 0 {
return nil, fmt.Errorf("no schemas found in database")
}
return db.Schemas[0], nil
}
// ReadTable reads a single table (returns the first table from the first schema)
func (r *Reader) ReadTable() (*models.Table, error) {
schema, err := r.ReadSchema()
if err != nil {
return nil, err
}
if len(schema.Tables) == 0 {
return nil, fmt.Errorf("no tables found in schema")
}
return schema.Tables[0], nil
}
// connect establishes a connection to the MSSQL database
func (r *Reader) connect() error {
db, err := sql.Open("mssql", r.options.ConnectionString)
if err != nil {
return err
}
// Test connection
if err = db.PingContext(r.ctx); err != nil {
db.Close()
return err
}
r.db = db
return nil
}
// close closes the database connection
func (r *Reader) close() {
if r.db != nil {
r.db.Close()
}
}
// mapDataType maps MSSQL data types to canonical types
func (r *Reader) mapDataType(mssqlType string) string {
return mssql.ConvertMSSQLToCanonical(mssqlType)
}
// deriveRelationship creates a relationship from a foreign key constraint
func (r *Reader) deriveRelationship(table *models.Table, fk *models.Constraint) {
relationshipName := fmt.Sprintf("%s_to_%s", table.Name, fk.ReferencedTable)
relationship := models.InitRelationship(relationshipName, models.OneToMany)
relationship.FromTable = table.Name
relationship.FromSchema = table.Schema
relationship.ToTable = fk.ReferencedTable
relationship.ToSchema = fk.ReferencedSchema
relationship.ForeignKey = fk.Name
// Store constraint actions in properties
if fk.OnDelete != "" {
relationship.Properties["on_delete"] = fk.OnDelete
}
if fk.OnUpdate != "" {
relationship.Properties["on_update"] = fk.OnUpdate
}
table.Relationships[relationshipName] = relationship
}

View File

@@ -0,0 +1,86 @@
package mssql
import (
"testing"
"git.warky.dev/wdevs/relspecgo/pkg/mssql"
"git.warky.dev/wdevs/relspecgo/pkg/readers"
"github.com/stretchr/testify/assert"
)
// TestMapDataType tests MSSQL type mapping to canonical types
func TestMapDataType(t *testing.T) {
reader := NewReader(&readers.ReaderOptions{})
tests := []struct {
name string
mssqlType string
expectedType string
}{
{"INT to int", "INT", "int"},
{"BIGINT to int64", "BIGINT", "int64"},
{"BIT to bool", "BIT", "bool"},
{"NVARCHAR to string", "NVARCHAR(255)", "string"},
{"DATETIME2 to timestamp", "DATETIME2", "timestamp"},
{"DATETIMEOFFSET to timestamptz", "DATETIMEOFFSET", "timestamptz"},
{"UNIQUEIDENTIFIER to uuid", "UNIQUEIDENTIFIER", "uuid"},
{"FLOAT to float64", "FLOAT", "float64"},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result := reader.mapDataType(tt.mssqlType)
assert.Equal(t, tt.expectedType, result)
})
}
}
// TestConvertCanonicalToMSSQL tests canonical to MSSQL type conversion
func TestConvertCanonicalToMSSQL(t *testing.T) {
tests := []struct {
name string
canonicalType string
expectedMSSQL string
}{
{"int to INT", "int", "INT"},
{"int64 to BIGINT", "int64", "BIGINT"},
{"bool to BIT", "bool", "BIT"},
{"string to NVARCHAR(255)", "string", "NVARCHAR(255)"},
{"text to NVARCHAR(MAX)", "text", "NVARCHAR(MAX)"},
{"timestamp to DATETIME2", "timestamp", "DATETIME2"},
{"timestamptz to DATETIMEOFFSET", "timestamptz", "DATETIMEOFFSET"},
{"uuid to UNIQUEIDENTIFIER", "uuid", "UNIQUEIDENTIFIER"},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result := mssql.ConvertCanonicalToMSSQL(tt.canonicalType)
assert.Equal(t, tt.expectedMSSQL, result)
})
}
}
// TestConvertMSSQLToCanonical tests MSSQL to canonical type conversion
func TestConvertMSSQLToCanonical(t *testing.T) {
tests := []struct {
name string
mssqlType string
expectedType string
}{
{"INT to int", "INT", "int"},
{"BIGINT to int64", "BIGINT", "int64"},
{"BIT to bool", "BIT", "bool"},
{"NVARCHAR with params", "NVARCHAR(255)", "string"},
{"DATETIME2 to timestamp", "DATETIME2", "timestamp"},
{"DATETIMEOFFSET to timestamptz", "DATETIMEOFFSET", "timestamptz"},
{"UNIQUEIDENTIFIER to uuid", "UNIQUEIDENTIFIER", "uuid"},
{"VARBINARY to bytea", "VARBINARY(MAX)", "bytea"},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result := mssql.ConvertMSSQLToCanonical(tt.mssqlType)
assert.Equal(t, tt.expectedType, result)
})
}
}

View File

@@ -206,8 +206,19 @@ func (r *Reader) queryColumns(schemaName string) (map[string]map[string]*models.
c.numeric_precision,
c.numeric_scale,
c.udt_name,
pg_catalog.format_type(a.atttypid, a.atttypmod) as formatted_data_type,
col_description((c.table_schema||'.'||c.table_name)::regclass, c.ordinal_position) as description
FROM information_schema.columns c
JOIN pg_catalog.pg_namespace n
ON n.nspname = c.table_schema
JOIN pg_catalog.pg_class cls
ON cls.relname = c.table_name
AND cls.relnamespace = n.oid
JOIN pg_catalog.pg_attribute a
ON a.attrelid = cls.oid
AND a.attname = c.column_name
AND a.attnum > 0
AND NOT a.attisdropped
WHERE c.table_schema = $1
ORDER BY c.table_schema, c.table_name, c.ordinal_position
`
@@ -221,24 +232,23 @@ func (r *Reader) queryColumns(schemaName string) (map[string]map[string]*models.
columnsMap := make(map[string]map[string]*models.Column)
for rows.Next() {
var schema, tableName, columnName, isNullable, dataType, udtName, formattedDataType string
var ordinalPosition int
var columnDefault, description *string
var charMaxLength, numPrecision, numScale *int
if err := rows.Scan(&schema, &tableName, &columnName, &ordinalPosition, &columnDefault, &isNullable, &dataType, &charMaxLength, &numPrecision, &numScale, &udtName, &formattedDataType, &description); err != nil {
return nil, err
}
column := models.InitColumn(columnName, tableName, schema)
// Check if this is a serial type (has nextval default)
hasNextval := false
if columnDefault != nil {
// Parse default value - remove nextval for sequences
defaultVal := *columnDefault
if strings.HasPrefix(defaultVal, "nextval") {
hasNextval = true
column.AutoIncrement = true
column.Default = defaultVal
} else {
@@ -246,6 +256,11 @@ func (r *Reader) queryColumns(schemaName string) (map[string]map[string]*models.
}
}
// Map data type, preserving serial types when detected
column.Type = r.mapDataType(dataType, udtName, formattedDataType, hasNextval)
column.NotNull = (isNullable == "NO")
column.Sequence = uint(ordinalPosition)
if description != nil {
column.Description = *description
}

View File

@@ -3,6 +3,7 @@ package pgsql
import (
"context"
"fmt"
"strings"
"github.com/jackc/pgx/v5"
@@ -258,34 +259,60 @@ func (r *Reader) close() {
}
}
// mapDataType maps PostgreSQL data types while preserving exact type text when available.
func (r *Reader) mapDataType(pgType, udtName, formattedType string, hasNextval bool) string {
normalizedPGType := strings.ToLower(strings.TrimSpace(pgType))
// If the column has a nextval default, it's likely a serial type
// Map to the appropriate serial type instead of the base integer type
if hasNextval {
switch normalizedPGType {
case "integer", "int", "int4":
return "serial"
case "bigint", "int8":
return "bigserial"
case "smallint", "int2":
return "smallserial"
}
}
// Prefer the database-provided formatted type; this preserves arrays/custom
// types/modifiers like text[], vector(1536), numeric(10,2), etc.
if strings.TrimSpace(formattedType) != "" {
return formattedType
}
// information_schema reports arrays generically as "ARRAY" with udt_name like "_text".
if strings.EqualFold(pgType, "ARRAY") && strings.HasPrefix(udtName, "_") && len(udtName) > 1 {
return udtName[1:] + "[]"
}
// Map common PostgreSQL types
typeMap := map[string]string{
"integer": "int",
"bigint": "int64",
"smallint": "int16",
"int": "int",
"int2": "int16",
"int4": "int",
"int8": "int64",
"serial": "int",
"bigserial": "int64",
"smallserial": "int16",
"numeric": "decimal",
"integer": "integer",
"bigint": "bigint",
"smallint": "smallint",
"int": "integer",
"int2": "smallint",
"int4": "integer",
"int8": "bigint",
"serial": "serial",
"bigserial": "bigserial",
"smallserial": "smallserial",
"numeric": "numeric",
"decimal": "decimal",
"real": "float32",
"double precision": "float64",
"float4": "float32",
"float8": "float64",
"money": "decimal",
"character varying": "string",
"varchar": "string",
"character": "string",
"char": "string",
"text": "string",
"boolean": "bool",
"bool": "bool",
"real": "real",
"double precision": "double precision",
"float4": "real",
"float8": "double precision",
"money": "money",
"character varying": "varchar",
"varchar": "varchar",
"character": "char",
"char": "char",
"text": "text",
"boolean": "boolean",
"bool": "boolean",
"date": "date",
"time": "time",
"time without time zone": "time",
@@ -306,7 +333,7 @@ func (r *Reader) mapDataType(pgType, udtName string) string {
}
// Try mapped type first
if mapped, exists := typeMap[normalizedPGType]; exists {
return mapped
}
@@ -315,8 +342,11 @@ func (r *Reader) mapDataType(pgType, udtName string) string {
return pgsql.GetSQLType(pgType)
}
// Return UDT name for custom types (including array fallback when needed)
if udtName != "" {
if strings.HasPrefix(udtName, "_") && len(udtName) > 1 {
return udtName[1:] + "[]"
}
return udtName
}

View File

@@ -173,35 +173,58 @@ func TestMapDataType(t *testing.T) {
reader := &Reader{}
tests := []struct {
pgType string
udtName string
formattedType string
expected string
}{
{"integer", "int4", "int"},
{"bigint", "int8", "int64"},
{"smallint", "int2", "int16"},
{"character varying", "varchar", "string"},
{"text", "text", "string"},
{"boolean", "bool", "bool"},
{"timestamp without time zone", "timestamp", "timestamp"},
{"timestamp with time zone", "timestamptz", "timestamptz"},
{"json", "json", "json"},
{"jsonb", "jsonb", "jsonb"},
{"uuid", "uuid", "uuid"},
{"numeric", "numeric", "decimal"},
{"real", "float4", "float32"},
{"double precision", "float8", "float64"},
{"date", "date", "date"},
{"time without time zone", "time", "time"},
{"bytea", "bytea", "bytea"},
{"unknown_type", "custom", "custom"}, // Should return UDT name
{"integer", "int4", "", "integer"},
{"bigint", "int8", "", "bigint"},
{"smallint", "int2", "", "smallint"},
{"character varying", "varchar", "", "varchar"},
{"text", "text", "", "text"},
{"boolean", "bool", "", "boolean"},
{"timestamp without time zone", "timestamp", "", "timestamp"},
{"timestamp with time zone", "timestamptz", "", "timestamptz"},
{"json", "json", "", "json"},
{"jsonb", "jsonb", "", "jsonb"},
{"uuid", "uuid", "", "uuid"},
{"numeric", "numeric", "", "numeric"},
{"real", "float4", "", "real"},
{"double precision", "float8", "", "double precision"},
{"date", "date", "", "date"},
{"time without time zone", "time", "", "time"},
{"bytea", "bytea", "", "bytea"},
{"unknown_type", "custom", "", "custom"}, // Should return UDT name
{"ARRAY", "_text", "", "text[]"},
{"USER-DEFINED", "vector", "vector(1536)", "vector(1536)"},
{"character varying", "varchar", "character varying(255)", "character varying(255)"},
}
for _, tt := range tests {
t.Run(tt.pgType, func(t *testing.T) {
result := reader.mapDataType(tt.pgType, tt.udtName, tt.formattedType, false)
if result != tt.expected {
t.Errorf("mapDataType(%s, %s) = %s, expected %s", tt.pgType, tt.udtName, result, tt.expected)
t.Errorf("mapDataType(%s, %s, %s) = %s, expected %s", tt.pgType, tt.udtName, tt.formattedType, result, tt.expected)
}
})
}
// Test serial type detection with hasNextval=true
serialTests := []struct {
pgType string
expected string
}{
{"integer", "serial"},
{"bigint", "bigserial"},
{"smallint", "smallserial"},
}
for _, tt := range serialTests {
t.Run(tt.pgType+"_with_nextval", func(t *testing.T) {
result := reader.mapDataType(tt.pgType, "", "", true)
if result != tt.expected {
t.Errorf("mapDataType(%s, '', '', true) = %s, expected %s", tt.pgType, result, tt.expected)
}
})
}
@@ -211,63 +234,63 @@ func TestParseIndexDefinition(t *testing.T) {
reader := &Reader{}
tests := []struct {
name string
indexName string
tableName string
schema string
indexDef string
wantType string
wantUnique bool
wantColumns int
}{
{
name: "simple btree index",
indexName: "idx_users_email",
tableName: "users",
schema: "public",
indexDef: "CREATE INDEX idx_users_email ON public.users USING btree (email)",
wantType: "btree",
wantUnique: false,
name: "simple btree index",
indexName: "idx_users_email",
tableName: "users",
schema: "public",
indexDef: "CREATE INDEX idx_users_email ON public.users USING btree (email)",
wantType: "btree",
wantUnique: false,
wantColumns: 1,
},
{
name: "unique index",
indexName: "idx_users_username",
tableName: "users",
schema: "public",
indexDef: "CREATE UNIQUE INDEX idx_users_username ON public.users USING btree (username)",
wantType: "btree",
wantUnique: true,
name: "unique index",
indexName: "idx_users_username",
tableName: "users",
schema: "public",
indexDef: "CREATE UNIQUE INDEX idx_users_username ON public.users USING btree (username)",
wantType: "btree",
wantUnique: true,
wantColumns: 1,
},
{
name: "composite index",
indexName: "idx_users_name",
tableName: "users",
schema: "public",
indexDef: "CREATE INDEX idx_users_name ON public.users USING btree (first_name, last_name)",
wantType: "btree",
wantUnique: false,
name: "composite index",
indexName: "idx_users_name",
tableName: "users",
schema: "public",
indexDef: "CREATE INDEX idx_users_name ON public.users USING btree (first_name, last_name)",
wantType: "btree",
wantUnique: false,
wantColumns: 2,
},
{
name: "gin index",
indexName: "idx_posts_tags",
tableName: "posts",
schema: "public",
indexDef: "CREATE INDEX idx_posts_tags ON public.posts USING gin (tags)",
wantType: "gin",
wantUnique: false,
name: "gin index",
indexName: "idx_posts_tags",
tableName: "posts",
schema: "public",
indexDef: "CREATE INDEX idx_posts_tags ON public.posts USING gin (tags)",
wantType: "gin",
wantUnique: false,
wantColumns: 1,
},
{
name: "partial index with where clause",
indexName: "idx_users_active",
tableName: "users",
schema: "public",
indexDef: "CREATE INDEX idx_users_active ON public.users USING btree (id) WHERE (active = true)",
wantType: "btree",
wantUnique: false,
name: "partial index with where clause",
indexName: "idx_users_active",
tableName: "users",
schema: "public",
indexDef: "CREATE INDEX idx_users_active ON public.users USING btree (id) WHERE (active = true)",
wantType: "btree",
wantUnique: false,
wantColumns: 1,
},
}

View File

@@ -5,9 +5,11 @@ import (
"fmt"
"os"
"regexp"
"strconv"
"strings"
"git.warky.dev/wdevs/relspecgo/pkg/models"
"git.warky.dev/wdevs/relspecgo/pkg/pgsql"
"git.warky.dev/wdevs/relspecgo/pkg/readers"
)
@@ -549,6 +551,41 @@ func (r *Reader) parseColumnOptions(decorator string, column *models.Column, tab
}
}
// Preserve explicit type modifiers from options where present.
// Example: @Column({ type: 'varchar', length: 255 }) -> varchar(255)
if column.Type != "" && !strings.Contains(column.Type, "(") {
lengthRegex := regexp.MustCompile(`length:\s*(\d+)`)
precisionRegex := regexp.MustCompile(`precision:\s*(\d+)`)
scaleRegex := regexp.MustCompile(`scale:\s*(\d+)`)
baseType := strings.ToLower(strings.TrimSpace(column.Type))
if pgsql.SupportsLength(baseType) {
if matches := lengthRegex.FindStringSubmatch(content); len(matches) == 2 {
if n, err := strconv.Atoi(matches[1]); err == nil && n > 0 {
column.Length = n
column.Type = fmt.Sprintf("%s(%d)", column.Type, n)
}
}
}
if pgsql.SupportsPrecision(baseType) {
if matches := precisionRegex.FindStringSubmatch(content); len(matches) == 2 {
if p, err := strconv.Atoi(matches[1]); err == nil && p > 0 {
column.Precision = p
if sm := scaleRegex.FindStringSubmatch(content); len(sm) == 2 {
if s, err := strconv.Atoi(sm[1]); err == nil && s >= 0 {
column.Scale = s
column.Type = fmt.Sprintf("%s(%d,%d)", column.Type, p, s)
}
} else {
column.Type = fmt.Sprintf("%s(%d)", column.Type, p)
}
}
}
}
}
if strings.Contains(content, "nullable: true") || strings.Contains(content, "nullable:true") {
column.NotNull = false
}

View File

@@ -0,0 +1,60 @@
package typeorm
import (
"testing"
"git.warky.dev/wdevs/relspecgo/pkg/models"
)
func TestParseColumnOptions_PreservesTypeModifiers(t *testing.T) {
reader := &Reader{}
table := models.InitTable("users", "public")
tests := []struct {
name string
decorator string
wantType string
wantLength int
wantPrecision int
wantScale int
}{
{
name: "varchar with length",
decorator: `@Column({ type: 'varchar', length: 255 })`,
wantType: "varchar(255)",
wantLength: 255,
},
{
name: "numeric with precision and scale",
decorator: `@Column({ type: 'numeric', precision: 10, scale: 2 })`,
wantType: "numeric(10,2)",
wantPrecision: 10,
wantScale: 2,
},
{
name: "custom type with explicit modifier is preserved",
decorator: `@Column({ type: 'vector(1536)' })`,
wantType: "vector(1536)",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
col := models.InitColumn("sample", table.Name, table.Schema)
reader.parseColumnOptions(tt.decorator, col, table)
if col.Type != tt.wantType {
t.Fatalf("column type = %q, want %q", col.Type, tt.wantType)
}
if col.Length != tt.wantLength {
t.Fatalf("column length = %d, want %d", col.Length, tt.wantLength)
}
if col.Precision != tt.wantPrecision {
t.Fatalf("column precision = %d, want %d", col.Precision, tt.wantPrecision)
}
if col.Scale != tt.wantScale {
t.Fatalf("column scale = %d, want %d", col.Scale, tt.wantScale)
}
})
}
}

View File

@@ -110,8 +110,7 @@ func NewModelData(table *models.Table, schema string, typeMapper *TypeMapper, fl
tableName := writers.QualifiedTableName(schema, table.Name, flattenSchema)
// Generate model name: Model + Schema + Table (all PascalCase)
tablePart := SnakeCaseToPascalCase(table.Name)
// Include schema name in model name
var modelName string
@@ -217,6 +216,21 @@ func resolveFieldNameCollision(fieldName string) string {
return fieldName
}
// sortConstraints sorts constraints by sequence, then by name
func sortConstraints(constraints map[string]*models.Constraint) []*models.Constraint {
result := make([]*models.Constraint, 0, len(constraints))
for _, c := range constraints {
result = append(result, c)
}
sort.Slice(result, func(i, j int) bool {
if result[i].Sequence > 0 && result[j].Sequence > 0 {
return result[i].Sequence < result[j].Sequence
}
return result[i].Name < result[j].Name
})
return result
}
// sortColumns sorts columns by sequence, then by name
func sortColumns(columns map[string]*models.Column) []*models.Column {
result := make([]*models.Column, 0, len(columns))

View File

@@ -5,6 +5,7 @@ import (
"strings"
"git.warky.dev/wdevs/relspecgo/pkg/models"
"git.warky.dev/wdevs/relspecgo/pkg/pgsql"
"git.warky.dev/wdevs/relspecgo/pkg/writers"
)
@@ -39,14 +40,7 @@ func (tm *TypeMapper) SQLTypeToGoType(sqlType string, notNull bool) string {
// extractBaseType extracts the base type from a SQL type string
func (tm *TypeMapper) extractBaseType(sqlType string) string {
sqlType = strings.ToLower(strings.TrimSpace(sqlType))
return pgsql.CanonicalizeBaseType(pgsql.ExtractBaseTypeLower(sqlType))
}
// isSimpleType checks if a type should use base Go type when NOT NULL
@@ -62,6 +56,17 @@ func (tm *TypeMapper) isSimpleType(sqlType string) bool {
return simpleTypes[sqlType]
}
// isSerialType checks if a SQL type is a serial type (auto-incrementing)
func (tm *TypeMapper) isSerialType(sqlType string) bool {
baseType := tm.extractBaseType(sqlType)
serialTypes := map[string]bool{
"serial": true,
"bigserial": true,
"smallserial": true,
}
return serialTypes[baseType]
}
// baseGoType returns the base Go type for a SQL type (not null, simple types only)
func (tm *TypeMapper) baseGoType(sqlType string) string {
typeMap := map[string]string{
@@ -122,10 +127,10 @@ func (tm *TypeMapper) bunGoType(sqlType string) string {
"decimal": tm.sqlTypesAlias + ".SqlFloat64",
// Date/Time types
"timestamp": tm.sqlTypesAlias + ".SqlTime",
"timestamp without time zone": tm.sqlTypesAlias + ".SqlTime",
"timestamp with time zone": tm.sqlTypesAlias + ".SqlTime",
"timestamptz": tm.sqlTypesAlias + ".SqlTime",
"timestamp": tm.sqlTypesAlias + ".SqlTimeStamp",
"timestamp without time zone": tm.sqlTypesAlias + ".SqlTimeStamp",
"timestamp with time zone": tm.sqlTypesAlias + ".SqlTimeStamp",
"timestamptz": tm.sqlTypesAlias + ".SqlTimeStamp",
"date": tm.sqlTypesAlias + ".SqlDate",
"time": tm.sqlTypesAlias + ".SqlTime",
"time without time zone": tm.sqlTypesAlias + ".SqlTime",
@@ -173,9 +178,10 @@ func (tm *TypeMapper) BuildBunTag(column *models.Column, table *models.Table) st
if column.Type != "" {
// Sanitize type to remove backticks
typeStr := writers.SanitizeStructTagValue(column.Type)
if column.Length > 0 {
hasExplicitTypeModifier := pgsql.HasExplicitTypeModifier(typeStr)
if !hasExplicitTypeModifier && column.Length > 0 {
typeStr = fmt.Sprintf("%s(%d)", typeStr, column.Length)
} else if column.Precision > 0 {
} else if !hasExplicitTypeModifier && column.Precision > 0 {
if column.Scale > 0 {
typeStr = fmt.Sprintf("%s(%d,%d)", typeStr, column.Precision, column.Scale)
} else {
@@ -190,10 +196,15 @@ func (tm *TypeMapper) BuildBunTag(column *models.Column, table *models.Table) st
parts = append(parts, "pk")
}
// Auto increment (for serial types or explicit auto_increment)
if column.AutoIncrement || tm.isSerialType(column.Type) {
parts = append(parts, "autoincrement")
}
// Default value
if column.Default != nil {
// Sanitize default value to remove backticks
safeDefault := writers.SanitizeStructTagValue(fmt.Sprintf("%v", column.Default))
// Sanitize default value to remove backticks, then quote based on column type
safeDefault := writers.QuoteDefaultValue(writers.SanitizeStructTagValue(fmt.Sprintf("%v", column.Default)), column.Type)
parts = append(parts, fmt.Sprintf("default:%s", safeDefault))
}
@@ -251,7 +262,15 @@ func (tm *TypeMapper) BuildRelationshipTag(constraint *models.Constraint, relTyp
if len(constraint.Columns) > 0 && len(constraint.ReferencedColumns) > 0 {
localCol := constraint.Columns[0]
foreignCol := constraint.ReferencedColumns[0]
parts = append(parts, fmt.Sprintf("join:%s=%s", localCol, foreignCol))
// For has-many relationships, swap the columns
// has-one: join:fk_in_this_table=pk_in_other_table
// has-many: join:pk_in_this_table=fk_in_other_table
if relType == "has-many" {
parts = append(parts, fmt.Sprintf("join:%s=%s", foreignCol, localCol))
} else {
parts = append(parts, fmt.Sprintf("join:%s=%s", localCol, foreignCol))
}
}
return strings.Join(parts, ",")
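For illustration, the swap produces Bun relation tags like these on generated models (a hedged sketch; field names mirror the tests below, but exact output depends on the writer):

```go
type ModelPublicAPIEvent struct {
	RIDOwner int64 `bun:"rid_owner"`

	// has-one: the FK (rid_owner) lives in this table
	RelRIDOwner *ModelPublicUser `bun:"rel:has-one,join:rid_owner=id"`
}

type ModelPublicUser struct {
	ID int64 `bun:"id,pk"`

	// has-many: this table's PK (id) joins to the FK in the other table
	RelAPIEvents []*ModelPublicAPIEvent `bun:"rel:has-many,join:id=rid_owner"`
}
```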

View File

@@ -242,7 +242,7 @@ func (w *Writer) addRelationshipFields(modelData *ModelData, table *models.Table
usedFieldNames := make(map[string]int)
// For each foreign key in this table, add a belongs-to/has-one relationship
for _, constraint := range table.Constraints {
for _, constraint := range sortConstraints(table.Constraints) {
if constraint.Type != models.ForeignKeyConstraint {
continue
}
@@ -275,7 +275,7 @@ func (w *Writer) addRelationshipFields(modelData *ModelData, table *models.Table
continue // Skip self
}
for _, constraint := range otherTable.Constraints {
for _, constraint := range sortConstraints(otherTable.Constraints) {
if constraint.Type != models.ForeignKeyConstraint {
continue
}
@@ -318,8 +318,7 @@ func (w *Writer) findTable(schemaName, tableName string, db *models.Database) *m
// getModelName generates the model name from schema and table name
func (w *Writer) getModelName(schemaName, tableName string) string {
singular := Singularize(tableName)
tablePart := SnakeCaseToPascalCase(singular)
tablePart := SnakeCaseToPascalCase(tableName)
// Include schema name in model name
var modelName string

View File

@@ -66,7 +66,7 @@ func TestWriter_WriteTable(t *testing.T) {
// Verify key elements are present
expectations := []string{
"package models",
"type ModelPublicUser struct",
"type ModelPublicUsers struct",
"bun.BaseModel",
"table:public.users",
"alias:users",
@@ -78,9 +78,9 @@ func TestWriter_WriteTable(t *testing.T) {
"resolvespec_common.SqlTime",
"bun:\"id",
"bun:\"email",
"func (m ModelPublicUser) TableName() string",
"func (m ModelPublicUsers) TableName() string",
"return \"public.users\"",
"func (m ModelPublicUser) GetID() int64",
"func (m ModelPublicUsers) GetID() int64",
}
for _, expected := range expectations {
@@ -90,8 +90,8 @@ func TestWriter_WriteTable(t *testing.T) {
}
// Verify Bun-specific elements
if !strings.Contains(generated, "bun:\"id,type:bigint,pk,") {
t.Errorf("Missing Bun-style primary key tag")
if !strings.Contains(generated, "bun:\"id,type:bigint,pk,autoincrement,") {
t.Errorf("Missing Bun-style primary key tag with autoincrement")
}
}
@@ -308,14 +308,20 @@ func TestWriter_MultipleReferencesToSameTable(t *testing.T) {
filepointerStr := string(filepointerContent)
// Should have two different has-many relationships with unique names
hasManyExpectations := []string{
"RelRIDFilepointerRequestOrgAPIEvents", // Has many via rid_filepointer_request
"RelRIDFilepointerResponseOrgAPIEvents", // Has many via rid_filepointer_response
hasManyExpectations := []struct {
fieldName string
tag string
}{
{"RelRIDFilepointerRequestOrgAPIEvents", "join:id_filepointer=rid_filepointer_request"}, // Has many via rid_filepointer_request
{"RelRIDFilepointerResponseOrgAPIEvents", "join:id_filepointer=rid_filepointer_response"}, // Has many via rid_filepointer_response
}
for _, exp := range hasManyExpectations {
if !strings.Contains(filepointerStr, exp) {
t.Errorf("Missing has-many relationship field: %s\nGenerated:\n%s", exp, filepointerStr)
if !strings.Contains(filepointerStr, exp.fieldName) {
t.Errorf("Missing has-many relationship field: %s\nGenerated:\n%s", exp.fieldName, filepointerStr)
}
if !strings.Contains(filepointerStr, exp.tag) {
t.Errorf("Missing has-many relationship join tag: %s\nGenerated:\n%s", exp.tag, filepointerStr)
}
}
}
@@ -455,10 +461,10 @@ func TestWriter_MultipleHasManyRelationships(t *testing.T) {
// Verify all has-many relationships have unique names
hasManyExpectations := []string{
"RelRIDAPIProviderOrgLogins", // Has many via Login
"RelRIDAPIProviderOrgLogins", // Has many via Login
"RelRIDAPIProviderOrgFilepointers", // Has many via Filepointer
"RelRIDAPIProviderOrgAPIEvents", // Has many via APIEvent
"RelRIDOwner", // Has one via rid_owner
"RelRIDAPIProviderOrgAPIEvents", // Has many via APIEvent
"RelRIDOwner", // Has one via rid_owner
}
for _, exp := range hasManyExpectations {
@@ -561,8 +567,8 @@ func TestTypeMapper_SQLTypeToGoType_Bun(t *testing.T) {
{"bigint", false, "resolvespec_common.SqlInt64"},
{"varchar", true, "resolvespec_common.SqlString"}, // Bun uses sql types even for NOT NULL strings
{"varchar", false, "resolvespec_common.SqlString"},
{"timestamp", true, "resolvespec_common.SqlTime"},
{"timestamp", false, "resolvespec_common.SqlTime"},
{"timestamp", true, "resolvespec_common.SqlTimeStamp"},
{"timestamp", false, "resolvespec_common.SqlTimeStamp"},
{"date", false, "resolvespec_common.SqlDate"},
{"boolean", true, "bool"},
{"boolean", false, "resolvespec_common.SqlBool"},
@@ -609,14 +615,75 @@ func TestTypeMapper_BuildBunTag(t *testing.T) {
want: []string{"email,", "type:varchar(255),", "nullzero,"},
},
{
name: "with default",
name: "with default string",
column: &models.Column{
Name: "status",
Type: "text",
NotNull: true,
Default: "active",
},
want: []string{"status,", "type:text,", "default:active,"},
want: []string{"status,", "type:text,", "default:'active',"},
},
{
name: "with default integer",
column: &models.Column{
Name: "retries",
Type: "integer",
NotNull: true,
Default: "0",
},
want: []string{"retries,", "type:integer,", "default:0,"},
},
{
name: "with default boolean",
column: &models.Column{
Name: "active",
Type: "boolean",
NotNull: true,
Default: "true",
},
want: []string{"active,", "type:boolean,", "default:true,"},
},
{
name: "with default function call",
column: &models.Column{
Name: "created_at",
Type: "timestamp",
NotNull: true,
Default: "now()",
},
want: []string{"created_at,", "type:timestamp,", "default:now(),"},
},
{
name: "auto increment with AutoIncrement flag",
column: &models.Column{
Name: "id",
Type: "bigint",
NotNull: true,
IsPrimaryKey: true,
AutoIncrement: true,
},
want: []string{"id,", "type:bigint,", "pk,", "autoincrement,"},
},
{
name: "serial type (auto-increment)",
column: &models.Column{
Name: "id",
Type: "serial",
NotNull: true,
IsPrimaryKey: true,
},
want: []string{"id,", "type:serial,", "pk,", "autoincrement,"},
},
{
name: "bigserial type (auto-increment)",
column: &models.Column{
Name: "id",
Type: "bigserial",
NotNull: true,
IsPrimaryKey: true,
},
want: []string{"id,", "type:bigserial,", "pk,", "autoincrement,"},
},
}
@@ -631,3 +698,23 @@ func TestTypeMapper_BuildBunTag(t *testing.T) {
})
}
}
func TestTypeMapper_BuildBunTag_PreservesExplicitTypeModifiers(t *testing.T) {
mapper := NewTypeMapper()
col := &models.Column{
Name: "embedding",
Type: "vector(1536)",
Length: 1536,
Precision: 0,
Scale: 0,
}
tag := mapper.BuildBunTag(col, nil)
if !strings.Contains(tag, "type:vector(1536),") {
t.Fatalf("expected explicit modifier to be preserved, got %q", tag)
}
if strings.Contains(tag, ")(") {
t.Fatalf("type modifier appears duplicated in %q", tag)
}
}

View File

@@ -62,10 +62,10 @@ func (w *Writer) databaseToDBML(d *models.Database) string {
var sb strings.Builder
if d.Description != "" {
sb.WriteString(fmt.Sprintf("// %s\n", d.Description))
fmt.Fprintf(&sb, "// %s\n", d.Description)
}
if d.Comment != "" {
sb.WriteString(fmt.Sprintf("// %s\n", d.Comment))
fmt.Fprintf(&sb, "// %s\n", d.Comment)
}
if d.Description != "" || d.Comment != "" {
sb.WriteString("\n")
@@ -94,7 +94,7 @@ func (w *Writer) schemaToDBML(schema *models.Schema) string {
var sb strings.Builder
if schema.Description != "" {
sb.WriteString(fmt.Sprintf("// Schema: %s - %s\n", schema.Name, schema.Description))
fmt.Fprintf(&sb, "// Schema: %s - %s\n", schema.Name, schema.Description)
}
for _, table := range schema.Tables {
@@ -110,10 +110,10 @@ func (w *Writer) tableToDBML(t *models.Table) string {
var sb strings.Builder
tableName := fmt.Sprintf("%s.%s", t.Schema, t.Name)
sb.WriteString(fmt.Sprintf("Table %s {\n", tableName))
fmt.Fprintf(&sb, "Table %s {\n", tableName)
for _, column := range t.Columns {
sb.WriteString(fmt.Sprintf(" %s %s", column.Name, column.Type))
fmt.Fprintf(&sb, " %s %s", column.Name, column.Type)
var attrs []string
if column.IsPrimaryKey {
@@ -138,11 +138,11 @@ func (w *Writer) tableToDBML(t *models.Table) string {
}
if len(attrs) > 0 {
sb.WriteString(fmt.Sprintf(" [%s]", strings.Join(attrs, ", ")))
fmt.Fprintf(&sb, " [%s]", strings.Join(attrs, ", "))
}
if column.Comment != "" {
sb.WriteString(fmt.Sprintf(" // %s", column.Comment))
fmt.Fprintf(&sb, " // %s", column.Comment)
}
sb.WriteString("\n")
}
@@ -161,9 +161,9 @@ func (w *Writer) tableToDBML(t *models.Table) string {
indexAttrs = append(indexAttrs, fmt.Sprintf("type: %s", index.Type))
}
sb.WriteString(fmt.Sprintf(" (%s)", strings.Join(index.Columns, ", ")))
fmt.Fprintf(&sb, " (%s)", strings.Join(index.Columns, ", "))
if len(indexAttrs) > 0 {
sb.WriteString(fmt.Sprintf(" [%s]", strings.Join(indexAttrs, ", ")))
fmt.Fprintf(&sb, " [%s]", strings.Join(indexAttrs, ", "))
}
sb.WriteString("\n")
}
@@ -172,7 +172,7 @@ func (w *Writer) tableToDBML(t *models.Table) string {
note := strings.TrimSpace(t.Description + " " + t.Comment)
if note != "" {
sb.WriteString(fmt.Sprintf("\n Note: '%s'\n", note))
fmt.Fprintf(&sb, "\n Note: '%s'\n", note)
}
sb.WriteString("}\n")

View File

@@ -4,6 +4,7 @@ import (
"encoding/xml"
"fmt"
"os"
"sort"
"strings"
"github.com/google/uuid"
@@ -155,8 +156,15 @@ func (w *Writer) mapTableFields(table *models.Table) models.DCTXTable {
},
}
columnNames := make([]string, 0, len(table.Columns))
for name := range table.Columns {
columnNames = append(columnNames, name)
}
sort.Strings(columnNames)
i := 0
for _, column := range table.Columns {
for _, colName := range columnNames {
column := table.Columns[colName]
dctxTable.Fields[i] = w.mapField(column)
i++
}
@@ -165,12 +173,27 @@ func (w *Writer) mapTableFields(table *models.Table) models.DCTXTable {
}
func (w *Writer) mapTableKeys(table *models.Table) []models.DCTXKey {
keys := make([]models.DCTXKey, len(table.Indexes))
i := 0
indexes := make([]*models.Index, 0, len(table.Indexes))
for _, index := range table.Indexes {
keys[i] = w.mapKey(index, table)
i++
indexes = append(indexes, index)
}
// Stable ordering for deterministic output and test reproducibility:
// primary keys first, then lexicographic by index name.
sort.Slice(indexes, func(i, j int) bool {
iPrimary := strings.HasSuffix(indexes[i].Name, "_pkey")
jPrimary := strings.HasSuffix(indexes[j].Name, "_pkey")
if iPrimary != jPrimary {
return iPrimary
}
return indexes[i].Name < indexes[j].Name
})
keys := make([]models.DCTXKey, len(indexes))
for i, index := range indexes {
keys[i] = w.mapKey(index, table)
}
return keys
}
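A quick, self-contained check of the comparator's effect (index names illustrative):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

func main() {
	// Toy run of the comparator above: primary keys first, then name.
	names := []string{"users_email_idx", "users_pkey", "users_created_idx"}
	sort.Slice(names, func(i, j int) bool {
		iP := strings.HasSuffix(names[i], "_pkey")
		jP := strings.HasSuffix(names[j], "_pkey")
		if iP != jP {
			return iP
		}
		return names[i] < names[j]
	})
	fmt.Println(names) // [users_pkey users_created_idx users_email_idx]
}
```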

View File

@@ -5,6 +5,7 @@ import (
"strings"
"git.warky.dev/wdevs/relspecgo/pkg/models"
"git.warky.dev/wdevs/relspecgo/pkg/pgsql"
)
// TypeMapper handles SQL to Drizzle type conversions
@@ -18,7 +19,7 @@ func NewTypeMapper() *TypeMapper {
// SQLTypeToDrizzle converts SQL types to Drizzle column type functions
// Returns the Drizzle column constructor (e.g., "integer", "varchar", "text")
func (tm *TypeMapper) SQLTypeToDrizzle(sqlType string) string {
sqlTypeLower := strings.ToLower(sqlType)
sqlTypeLower := pgsql.CanonicalizeBaseType(pgsql.ExtractBaseTypeLower(sqlType))
// PostgreSQL type mapping to Drizzle
typeMap := map[string]string{
@@ -87,13 +88,6 @@ func (tm *TypeMapper) SQLTypeToDrizzle(sqlType string) string {
return drizzleType
}
// Check for partial matches (e.g., "varchar(255)" -> "varchar")
for sqlPattern, drizzleType := range typeMap {
if strings.HasPrefix(sqlTypeLower, sqlPattern) {
return drizzleType
}
}
// Default to text for unknown types
return "text"
}

View File

@@ -109,8 +109,7 @@ func NewModelData(table *models.Table, schema string, typeMapper *TypeMapper, fl
tableName := writers.QualifiedTableName(schema, table.Name, flattenSchema)
// Generate model name: Model + Schema + Table (all PascalCase)
singularTable := Singularize(table.Name)
tablePart := SnakeCaseToPascalCase(singularTable)
tablePart := SnakeCaseToPascalCase(table.Name)
// Include schema name in model name
var modelName string
@@ -214,6 +213,21 @@ func resolveFieldNameCollision(fieldName string) string {
return fieldName
}
// sortConstraints sorts constraints by sequence, then by name
func sortConstraints(constraints map[string]*models.Constraint) []*models.Constraint {
result := make([]*models.Constraint, 0, len(constraints))
for _, c := range constraints {
result = append(result, c)
}
sort.Slice(result, func(i, j int) bool {
if result[i].Sequence > 0 && result[j].Sequence > 0 {
return result[i].Sequence < result[j].Sequence
}
return result[i].Name < result[j].Name
})
return result
}
// sortColumns sorts columns by sequence, then by name
func sortColumns(columns map[string]*models.Column) []*models.Column {
result := make([]*models.Column, 0, len(columns))

View File

@@ -5,6 +5,7 @@ import (
"strings"
"git.warky.dev/wdevs/relspecgo/pkg/models"
"git.warky.dev/wdevs/relspecgo/pkg/pgsql"
"git.warky.dev/wdevs/relspecgo/pkg/writers"
)
@@ -39,14 +40,7 @@ func (tm *TypeMapper) SQLTypeToGoType(sqlType string, notNull bool) string {
// extractBaseType extracts the base type from a SQL type string
// Examples: varchar(100) → varchar, numeric(10,2) → numeric
func (tm *TypeMapper) extractBaseType(sqlType string) string {
sqlType = strings.ToLower(strings.TrimSpace(sqlType))
// Remove everything after '('
if idx := strings.Index(sqlType, "("); idx > 0 {
sqlType = sqlType[:idx]
}
return sqlType
return pgsql.CanonicalizeBaseType(pgsql.ExtractBaseTypeLower(sqlType))
}
// baseGoType returns the base Go type for a SQL type (not null)
@@ -158,10 +152,10 @@ func (tm *TypeMapper) nullableGoType(sqlType string) string {
"decimal": tm.sqlTypesAlias + ".SqlFloat64",
// Date/Time types
"timestamp": tm.sqlTypesAlias + ".SqlTime",
"timestamp without time zone": tm.sqlTypesAlias + ".SqlTime",
"timestamp with time zone": tm.sqlTypesAlias + ".SqlTime",
"timestamptz": tm.sqlTypesAlias + ".SqlTime",
"timestamp": tm.sqlTypesAlias + ".SqlTimeStamp",
"timestamp without time zone": tm.sqlTypesAlias + ".SqlTimeStamp",
"timestamp with time zone": tm.sqlTypesAlias + ".SqlTimeStamp",
"timestamptz": tm.sqlTypesAlias + ".SqlTimeStamp",
"date": tm.sqlTypesAlias + ".SqlDate",
"time": tm.sqlTypesAlias + ".SqlTime",
"time without time zone": tm.sqlTypesAlias + ".SqlTime",
@@ -209,9 +203,10 @@ func (tm *TypeMapper) BuildGormTag(column *models.Column, table *models.Table) s
// Include length, precision, scale if present
// Sanitize type to remove backticks
typeStr := writers.SanitizeStructTagValue(column.Type)
if column.Length > 0 {
hasExplicitTypeModifier := pgsql.HasExplicitTypeModifier(typeStr)
if !hasExplicitTypeModifier && column.Length > 0 {
typeStr = fmt.Sprintf("%s(%d)", typeStr, column.Length)
} else if column.Precision > 0 {
} else if !hasExplicitTypeModifier && column.Precision > 0 {
if column.Scale > 0 {
typeStr = fmt.Sprintf("%s(%d,%d)", typeStr, column.Precision, column.Scale)
} else {
@@ -238,8 +233,8 @@ func (tm *TypeMapper) BuildGormTag(column *models.Column, table *models.Table) s
// Default value
if column.Default != nil {
// Sanitize default value to remove backticks
safeDefault := writers.SanitizeStructTagValue(fmt.Sprintf("%v", column.Default))
// Sanitize default value to remove backticks, then quote based on column type
safeDefault := writers.QuoteDefaultValue(writers.SanitizeStructTagValue(fmt.Sprintf("%v", column.Default)), column.Type)
parts = append(parts, fmt.Sprintf("default:%s", safeDefault))
}

View File

@@ -236,7 +236,7 @@ func (w *Writer) addRelationshipFields(modelData *ModelData, table *models.Table
usedFieldNames := make(map[string]int)
// For each foreign key in this table, add a belongs-to relationship
for _, constraint := range table.Constraints {
for _, constraint := range sortConstraints(table.Constraints) {
if constraint.Type != models.ForeignKeyConstraint {
continue
}
@@ -269,7 +269,7 @@ func (w *Writer) addRelationshipFields(modelData *ModelData, table *models.Table
continue // Skip self
}
for _, constraint := range otherTable.Constraints {
for _, constraint := range sortConstraints(otherTable.Constraints) {
if constraint.Type != models.ForeignKeyConstraint {
continue
}
@@ -312,8 +312,7 @@ func (w *Writer) findTable(schemaName, tableName string, db *models.Database) *m
// getModelName generates the model name from schema and table name
func (w *Writer) getModelName(schemaName, tableName string) string {
singular := Singularize(tableName)
tablePart := SnakeCaseToPascalCase(singular)
tablePart := SnakeCaseToPascalCase(tableName)
// Include schema name in model name
var modelName string

View File

@@ -14,12 +14,12 @@ func TestWriter_WriteTable(t *testing.T) {
// Create a simple table
table := models.InitTable("users", "public")
table.Columns["id"] = &models.Column{
Name: "id",
Type: "bigint",
NotNull: true,
IsPrimaryKey: true,
Name: "id",
Type: "bigint",
NotNull: true,
IsPrimaryKey: true,
AutoIncrement: true,
Sequence: 1,
Sequence: 1,
}
table.Columns["email"] = &models.Column{
Name: "email",
@@ -66,7 +66,7 @@ func TestWriter_WriteTable(t *testing.T) {
// Verify key elements are present
expectations := []string{
"package models",
"type ModelPublicUser struct",
"type ModelPublicUsers struct",
"ID",
"int64",
"Email",
@@ -75,9 +75,9 @@ func TestWriter_WriteTable(t *testing.T) {
"time.Time",
"gorm:\"column:id",
"gorm:\"column:email",
"func (m ModelPublicUser) TableName() string",
"func (m ModelPublicUsers) TableName() string",
"return \"public.users\"",
"func (m ModelPublicUser) GetID() int64",
"func (m ModelPublicUsers) GetID() int64",
}
for _, expected := range expectations {
@@ -444,10 +444,10 @@ func TestWriter_MultipleHasManyRelationships(t *testing.T) {
// Verify all has-many relationships have unique names
hasManyExpectations := []string{
"RelRIDAPIProviderOrgLogins", // Has many via Login
"RelRIDAPIProviderOrgLogins", // Has many via Login
"RelRIDAPIProviderOrgFilepointers", // Has many via Filepointer
"RelRIDAPIProviderOrgAPIEvents", // Has many via APIEvent
"RelRIDOwner", // Belongs to via rid_owner
"RelRIDAPIProviderOrgAPIEvents", // Has many via APIEvent
"RelRIDOwner", // Belongs to via rid_owner
}
for _, exp := range hasManyExpectations {
@@ -655,7 +655,7 @@ func TestTypeMapper_SQLTypeToGoType(t *testing.T) {
{"varchar", true, "string"},
{"varchar", false, "sql_types.SqlString"},
{"timestamp", true, "time.Time"},
{"timestamp", false, "sql_types.SqlTime"},
{"timestamp", false, "sql_types.SqlTimeStamp"},
{"boolean", true, "bool"},
{"boolean", false, "sql_types.SqlBool"},
}
@@ -669,3 +669,23 @@ func TestTypeMapper_SQLTypeToGoType(t *testing.T) {
})
}
}
func TestTypeMapper_BuildGormTag_PreservesExplicitTypeModifiers(t *testing.T) {
mapper := NewTypeMapper()
col := &models.Column{
Name: "embedding",
Type: "vector(1536)",
Length: 1536,
Precision: 0,
Scale: 0,
}
tag := mapper.BuildGormTag(col, nil)
if !strings.Contains(tag, "type:vector(1536)") {
t.Fatalf("expected explicit modifier to be preserved, got %q", tag)
}
if strings.Contains(tag, ")(") {
t.Fatalf("type modifier appears duplicated in %q", tag)
}
}

View File

@@ -4,6 +4,7 @@ import (
"strings"
"git.warky.dev/wdevs/relspecgo/pkg/models"
"git.warky.dev/wdevs/relspecgo/pkg/pgsql"
)
func (w *Writer) sqlTypeToGraphQL(sqlType string, column *models.Column, table *models.Table, schema *models.Schema) string {
@@ -33,12 +34,11 @@ func (w *Writer) sqlTypeToGraphQL(sqlType string, column *models.Column, table *
}
// Standard type mappings
baseType := strings.Split(sqlType, "(")[0] // Remove length/precision
baseType = strings.TrimSpace(baseType)
baseType := pgsql.CanonicalizeBaseType(pgsql.ExtractBaseTypeLower(sqlType))
// Handle array types
if strings.HasSuffix(baseType, "[]") {
elemType := strings.TrimSuffix(baseType, "[]")
if pgsql.IsArrayType(sqlType) {
elemType := pgsql.CanonicalizeBaseType(pgsql.ExtractBaseTypeLower(pgsql.ElementType(sqlType)))
gqlType := w.mapBaseTypeToGraphQL(elemType)
return "[" + gqlType + "]"
}
@@ -108,8 +108,7 @@ func (w *Writer) sqlTypeToCustomScalar(sqlType string) string {
"date": "Date",
}
baseType := strings.Split(sqlType, "(")[0]
baseType = strings.TrimSpace(baseType)
baseType := pgsql.CanonicalizeBaseType(pgsql.ExtractBaseTypeLower(sqlType))
if scalar, ok := scalarMap[baseType]; ok {
return scalar
@@ -132,8 +131,7 @@ func (w *Writer) isIntegerType(sqlType string) bool {
"smallserial": true,
}
baseType := strings.Split(sqlType, "(")[0]
baseType = strings.TrimSpace(baseType)
baseType := pgsql.CanonicalizeBaseType(pgsql.ExtractBaseTypeLower(sqlType))
return intTypes[baseType]
}
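To make the refactor concrete, here is a minimal, self-contained sketch using toy stand-ins for the pgsql helpers (the helper names are real per the diff above; the bodies are illustrative only):

```go
package main

import (
	"fmt"
	"strings"
)

// Toy stand-ins for pgsql.IsArrayType, pgsql.ElementType, and
// pgsql.ExtractBaseTypeLower — bodies are illustrative only.
func isArrayType(t string) bool   { return strings.HasSuffix(strings.TrimSpace(t), "[]") }
func elementType(t string) string { return strings.TrimSuffix(strings.TrimSpace(t), "[]") }
func extractBaseTypeLower(t string) string {
	t = strings.ToLower(strings.TrimSpace(t))
	if i := strings.Index(t, "("); i > 0 {
		t = t[:i]
	}
	return t
}

func main() {
	for _, t := range []string{"text[]", "varchar(255)", "numeric(10,2)"} {
		if isArrayType(t) {
			fmt.Printf("%-14s -> array of %q\n", t, extractBaseTypeLower(elementType(t)))
		} else {
			fmt.Printf("%-14s -> base %q\n", t, extractBaseTypeLower(t))
		}
	}
}
```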

View File

@@ -52,7 +52,7 @@ func (w *Writer) databaseToGraphQL(db *models.Database) string {
if w.shouldIncludeComments() {
sb.WriteString("# Generated GraphQL Schema\n")
if db.Name != "" {
sb.WriteString(fmt.Sprintf("# Database: %s\n", db.Name))
fmt.Fprintf(&sb, "# Database: %s\n", db.Name)
}
sb.WriteString("\n")
}
@@ -62,7 +62,7 @@ func (w *Writer) databaseToGraphQL(db *models.Database) string {
scalars := w.collectCustomScalars(db)
if len(scalars) > 0 {
for _, scalar := range scalars {
sb.WriteString(fmt.Sprintf("scalar %s\n", scalar))
fmt.Fprintf(&sb, "scalar %s\n", scalar)
}
sb.WriteString("\n")
}
@@ -176,9 +176,9 @@ func (w *Writer) isJoinTable(table *models.Table) bool {
func (w *Writer) enumToGraphQL(enum *models.Enum) string {
var sb strings.Builder
sb.WriteString(fmt.Sprintf("enum %s {\n", enum.Name))
fmt.Fprintf(&sb, "enum %s {\n", enum.Name)
for _, value := range enum.Values {
sb.WriteString(fmt.Sprintf(" %s\n", value))
fmt.Fprintf(&sb, " %s\n", value)
}
sb.WriteString("}\n")
@@ -197,10 +197,10 @@ func (w *Writer) tableToGraphQL(table *models.Table, db *models.Database, schema
if desc == "" {
desc = table.Comment
}
sb.WriteString(fmt.Sprintf("# %s\n", desc))
fmt.Fprintf(&sb, "# %s\n", desc)
}
sb.WriteString(fmt.Sprintf("type %s {\n", typeName))
fmt.Fprintf(&sb, "type %s {\n", typeName)
// Collect and categorize fields
var idFields, scalarFields, relationFields []string

pkg/writers/mssql/README.md (new file, 130 lines)
View File

@@ -0,0 +1,130 @@
# MSSQL Writer
Generates Microsoft SQL Server DDL (Data Definition Language) from database schema models.
## Features
- **DDL Generation**: Generates complete SQL scripts for creating an MSSQL schema
- **Schema Support**: Creates multiple schemas with proper naming
- **Bracket Notation**: Uses `[schema].[table]` bracket notation for identifiers
- **Identity Columns**: Generates IDENTITY(1,1) for auto-increment columns
- **Constraints**: Generates primary key, foreign key, unique, and check constraints
- **Indexes**: Creates indexes, including unique indexes
- **Extended Properties**: Uses sp_addextendedproperty for comments
- **Direct Execution**: Can execute DDL directly against an MSSQL database
- **Schema Flattening**: Optional schema flattening for compatibility
## Generation Phases
DDL is emitted in eight ordered phases; a condensed sketch follows the list.
1. **Phase 1**: Create schemas
2. **Phase 2**: Create tables with columns, identity, and defaults
3. **Phase 3**: Add primary key constraints
4. **Phase 4**: Create indexes
5. **Phase 5**: Add unique constraints
6. **Phase 6**: Add check constraints
7. **Phase 7**: Add foreign key constraints
8. **Phase 8**: Add extended properties (comments)
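For a hypothetical `app.users` / `app.orders` pair, the emitted script follows this order (phases 5, 6, and 8 omitted for brevity; names illustrative):
```sql
-- Phase 1: schema
CREATE SCHEMA [app];

-- Phase 2: tables
CREATE TABLE [app].[users] (
    [id] INT IDENTITY(1,1) NOT NULL,
    [email] NVARCHAR(255) NOT NULL
);

-- Phase 3: primary keys
ALTER TABLE [app].[users] ADD CONSTRAINT [PK_app_users] PRIMARY KEY ([id]);

-- Phase 4: indexes
CREATE UNIQUE INDEX [ux_users_email] ON [app].[users] ([email]);

-- Phase 7: foreign keys
ALTER TABLE [app].[orders] ADD CONSTRAINT [FK_orders_users] FOREIGN KEY ([user_id])
    REFERENCES [app].[users] ([id])
    ON DELETE NO ACTION ON UPDATE NO ACTION;
```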
## Type Mappings
| Canonical Type | MSSQL Type |
|----------------|-----------|
| int | INT |
| int64 | BIGINT |
| int16 | SMALLINT |
| int8 | TINYINT |
| bool | BIT |
| float32 | REAL |
| float64 | FLOAT |
| decimal | NUMERIC |
| string | NVARCHAR(255) |
| text | NVARCHAR(MAX) |
| timestamp | DATETIME2 |
| timestamptz | DATETIMEOFFSET |
| uuid | UNIQUEIDENTIFIER |
| bytea | VARBINARY(MAX) |
| date | DATE |
| time | TIME |
## Usage
### Generate SQL File
```go
import "git.warky.dev/wdevs/relspecgo/pkg/writers/mssql"
import "git.warky.dev/wdevs/relspecgo/pkg/writers"
writer := mssql.NewWriter(&writers.WriterOptions{
OutputPath: "schema.sql",
FlattenSchema: false,
})
err := writer.WriteDatabase(db)
if err != nil {
panic(err)
}
```
### Direct Database Execution
```go
writer := mssql.NewWriter(&writers.WriterOptions{
OutputPath: "",
Metadata: map[string]interface{}{
"connection_string": "sqlserver://sa:password@localhost/newdb",
},
})
err := writer.WriteDatabase(db)
if err != nil {
panic(err)
}
```
### CLI Usage
Generate SQL file:
```bash
relspec convert --from json --from-path schema.json \
--to mssql --to-path schema.sql
```
Execute directly to database:
```bash
relspec convert --from json --from-path schema.json \
--to mssql \
--metadata '{"connection_string":"sqlserver://sa:password@localhost/mydb"}'
```
## Default Values
The writer passes through several default value patterns (see the sketch after this list):
- Functions: `GETDATE()`, `CURRENT_TIMESTAMP`
- Literals: strings wrapped in single quotes, numbers, booleans (rendered as 1/0 for BIT)
- CAST expressions
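For instance, column fragments as the writer renders them (illustrative column names):
```sql
[created_at] DATETIME2 NOT NULL DEFAULT GETDATE(),   -- function defaults pass through
[active] BIT NOT NULL DEFAULT 1,                     -- boolean true renders as 1
[status] NVARCHAR(50) NOT NULL DEFAULT 'pending'     -- string literals are quoted
```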
## Comments/Extended Properties
Table and column descriptions are stored as MS_Description extended properties:
```sql
EXEC sp_addextendedproperty
@name = 'MS_Description',
@value = 'Table description here',
@level0type = 'SCHEMA', @level0name = 'dbo',
@level1type = 'TABLE', @level1name = 'my_table';
```
## Testing
Run tests with:
```bash
go test ./pkg/writers/mssql/...
```
## Limitations
- Views are not currently supported in the writer
- Sequences are not supported (MSSQL uses IDENTITY instead)
- Partitioning and advanced features are not supported
- Generated DDL assumes no triggers or computed columns

pkg/writers/mssql/writer.go (new file, 579 lines)
View File

@@ -0,0 +1,579 @@
package mssql
import (
"context"
"database/sql"
"fmt"
"io"
"os"
"sort"
"strings"
_ "github.com/microsoft/go-mssqldb" // MSSQL driver
"git.warky.dev/wdevs/relspecgo/pkg/models"
"git.warky.dev/wdevs/relspecgo/pkg/mssql"
"git.warky.dev/wdevs/relspecgo/pkg/writers"
)
// Writer implements the Writer interface for MSSQL SQL output
type Writer struct {
options *writers.WriterOptions
writer io.Writer
}
// NewWriter creates a new MSSQL SQL writer
func NewWriter(options *writers.WriterOptions) *Writer {
return &Writer{
options: options,
}
}
// qualTable returns a schema-qualified name using bracket notation
func (w *Writer) qualTable(schema, name string) string {
if w.options.FlattenSchema {
return fmt.Sprintf("[%s]", name)
}
return fmt.Sprintf("[%s].[%s]", schema, name)
}
// WriteDatabase writes the entire database schema as SQL
func (w *Writer) WriteDatabase(db *models.Database) error {
// Check if we should execute SQL directly on a database
if connString, ok := w.options.Metadata["connection_string"].(string); ok && connString != "" {
return w.executeDatabaseSQL(db, connString)
}
var writer io.Writer
var file *os.File
var err error
// Use existing writer if already set (for testing)
if w.writer != nil {
writer = w.writer
} else if w.options.OutputPath != "" {
// Determine output destination
file, err = os.Create(w.options.OutputPath)
if err != nil {
return fmt.Errorf("failed to create output file: %w", err)
}
defer file.Close()
writer = file
} else {
writer = os.Stdout
}
w.writer = writer
// Write header comment
fmt.Fprintf(w.writer, "-- MSSQL Database Schema\n")
fmt.Fprintf(w.writer, "-- Database: %s\n", db.Name)
fmt.Fprintf(w.writer, "-- Generated by RelSpec\n\n")
// Process each schema in the database
for _, schema := range db.Schemas {
if err := w.WriteSchema(schema); err != nil {
return fmt.Errorf("failed to write schema %s: %w", schema.Name, err)
}
}
return nil
}
// WriteSchema writes a single schema and all its tables
func (w *Writer) WriteSchema(schema *models.Schema) error {
if w.writer == nil {
w.writer = os.Stdout
}
// Phase 1: Create schema (skip dbo schema and when flattening)
if schema.Name != "dbo" && !w.options.FlattenSchema {
fmt.Fprintf(w.writer, "-- Schema: %s\n", schema.Name)
fmt.Fprintf(w.writer, "CREATE SCHEMA [%s];\n\n", schema.Name)
}
// Phase 2: Create tables with columns
fmt.Fprintf(w.writer, "-- Tables for schema: %s\n", schema.Name)
for _, table := range schema.Tables {
if err := w.writeCreateTable(schema, table); err != nil {
return err
}
}
// Phase 3: Primary keys
fmt.Fprintf(w.writer, "-- Primary keys for schema: %s\n", schema.Name)
for _, table := range schema.Tables {
if err := w.writePrimaryKey(schema, table); err != nil {
return err
}
}
// Phase 4: Indexes
fmt.Fprintf(w.writer, "-- Indexes for schema: %s\n", schema.Name)
for _, table := range schema.Tables {
if err := w.writeIndexes(schema, table); err != nil {
return err
}
}
// Phase 5: Unique constraints
fmt.Fprintf(w.writer, "-- Unique constraints for schema: %s\n", schema.Name)
for _, table := range schema.Tables {
if err := w.writeUniqueConstraints(schema, table); err != nil {
return err
}
}
// Phase 6: Check constraints
fmt.Fprintf(w.writer, "-- Check constraints for schema: %s\n", schema.Name)
for _, table := range schema.Tables {
if err := w.writeCheckConstraints(schema, table); err != nil {
return err
}
}
// Phase 7: Foreign keys
fmt.Fprintf(w.writer, "-- Foreign keys for schema: %s\n", schema.Name)
for _, table := range schema.Tables {
if err := w.writeForeignKeys(schema, table); err != nil {
return err
}
}
// Phase 8: Comments
fmt.Fprintf(w.writer, "-- Comments for schema: %s\n", schema.Name)
for _, table := range schema.Tables {
if err := w.writeComments(schema, table); err != nil {
return err
}
}
return nil
}
// WriteTable writes a single table with all its elements
func (w *Writer) WriteTable(table *models.Table) error {
if w.writer == nil {
w.writer = os.Stdout
}
// Create a temporary schema with just this table
schema := models.InitSchema(table.Schema)
schema.Tables = append(schema.Tables, table)
return w.WriteSchema(schema)
}
// writeCreateTable generates CREATE TABLE statement
func (w *Writer) writeCreateTable(schema *models.Schema, table *models.Table) error {
fmt.Fprintf(w.writer, "CREATE TABLE %s (\n", w.qualTable(schema.Name, table.Name))
// Sort columns by name for deterministic output (see getSortedColumns)
columns := getSortedColumns(table.Columns)
columnDefs := make([]string, 0, len(columns))
for _, col := range columns {
def := w.generateColumnDefinition(col)
columnDefs = append(columnDefs, " "+def)
}
fmt.Fprintf(w.writer, "%s\n", strings.Join(columnDefs, ",\n"))
fmt.Fprintf(w.writer, ");\n\n")
return nil
}
// generateColumnDefinition generates MSSQL column definition
func (w *Writer) generateColumnDefinition(col *models.Column) string {
parts := []string{fmt.Sprintf("[%s]", col.Name)}
// Type with length/precision
baseType := mssql.ConvertCanonicalToMSSQL(col.Type)
typeStr := baseType
// Handle specific type parameters for MSSQL
if col.Length > 0 && col.Precision == 0 {
// String types with length - override the default length from baseType
if strings.HasPrefix(baseType, "NVARCHAR") || strings.HasPrefix(baseType, "VARCHAR") ||
strings.HasPrefix(baseType, "CHAR") || strings.HasPrefix(baseType, "NCHAR") {
if col.Length > 0 && col.Length < 8000 {
// Extract base type without length specification
baseName := strings.Split(baseType, "(")[0]
typeStr = fmt.Sprintf("%s(%d)", baseName, col.Length)
}
}
} else if col.Precision > 0 {
// Numeric types with precision/scale
baseName := strings.Split(baseType, "(")[0]
if col.Scale > 0 {
typeStr = fmt.Sprintf("%s(%d,%d)", baseName, col.Precision, col.Scale)
} else {
typeStr = fmt.Sprintf("%s(%d)", baseName, col.Precision)
}
}
parts = append(parts, typeStr)
// IDENTITY for auto-increment
if col.AutoIncrement {
parts = append(parts, "IDENTITY(1,1)")
}
// NOT NULL
if col.NotNull {
parts = append(parts, "NOT NULL")
}
// DEFAULT
if col.Default != nil {
switch v := col.Default.(type) {
case string:
cleanDefault := stripBackticks(v)
if strings.HasPrefix(strings.ToUpper(cleanDefault), "GETDATE") ||
strings.HasPrefix(strings.ToUpper(cleanDefault), "CURRENT_") {
parts = append(parts, fmt.Sprintf("DEFAULT %s", cleanDefault))
} else if cleanDefault == "true" || cleanDefault == "false" {
if cleanDefault == "true" {
parts = append(parts, "DEFAULT 1")
} else {
parts = append(parts, "DEFAULT 0")
}
} else {
parts = append(parts, fmt.Sprintf("DEFAULT '%s'", escapeQuote(cleanDefault)))
}
case bool:
if v {
parts = append(parts, "DEFAULT 1")
} else {
parts = append(parts, "DEFAULT 0")
}
case int, int64:
parts = append(parts, fmt.Sprintf("DEFAULT %v", v))
}
}
return strings.Join(parts, " ")
}
// writePrimaryKey generates ALTER TABLE statement for primary key
func (w *Writer) writePrimaryKey(schema *models.Schema, table *models.Table) error {
// Find primary key constraint
var pkConstraint *models.Constraint
for _, constraint := range table.Constraints {
if constraint.Type == models.PrimaryKeyConstraint {
pkConstraint = constraint
break
}
}
var columnNames []string
pkName := fmt.Sprintf("PK_%s_%s", schema.Name, table.Name)
if pkConstraint != nil {
pkName = pkConstraint.Name
columnNames = make([]string, 0, len(pkConstraint.Columns))
for _, colName := range pkConstraint.Columns {
columnNames = append(columnNames, fmt.Sprintf("[%s]", colName))
}
} else {
// Check for columns with IsPrimaryKey = true
for _, col := range table.Columns {
if col.IsPrimaryKey {
columnNames = append(columnNames, fmt.Sprintf("[%s]", col.Name))
}
}
sort.Strings(columnNames)
}
if len(columnNames) == 0 {
return nil
}
fmt.Fprintf(w.writer, "ALTER TABLE %s ADD CONSTRAINT [%s] PRIMARY KEY (%s);\n\n",
w.qualTable(schema.Name, table.Name), pkName, strings.Join(columnNames, ", "))
return nil
}
// writeIndexes generates CREATE INDEX statements
func (w *Writer) writeIndexes(schema *models.Schema, table *models.Table) error {
// Sort indexes by name
indexNames := make([]string, 0, len(table.Indexes))
for name := range table.Indexes {
indexNames = append(indexNames, name)
}
sort.Strings(indexNames)
for _, name := range indexNames {
index := table.Indexes[name]
// Skip if it's a primary key index
if strings.HasPrefix(strings.ToLower(index.Name), "pk_") {
continue
}
// Build column list
columnExprs := make([]string, 0, len(index.Columns))
for _, colName := range index.Columns {
columnExprs = append(columnExprs, fmt.Sprintf("[%s]", colName))
}
if len(columnExprs) == 0 {
continue
}
unique := ""
if index.Unique {
unique = "UNIQUE "
}
fmt.Fprintf(w.writer, "CREATE %sINDEX [%s] ON %s (%s);\n\n",
unique, index.Name, w.qualTable(schema.Name, table.Name), strings.Join(columnExprs, ", "))
}
return nil
}
// writeUniqueConstraints generates ALTER TABLE statements for unique constraints
func (w *Writer) writeUniqueConstraints(schema *models.Schema, table *models.Table) error {
// Sort constraints by name
constraintNames := make([]string, 0)
for name, constraint := range table.Constraints {
if constraint.Type == models.UniqueConstraint {
constraintNames = append(constraintNames, name)
}
}
sort.Strings(constraintNames)
for _, name := range constraintNames {
constraint := table.Constraints[name]
// Build column list
columnExprs := make([]string, 0, len(constraint.Columns))
for _, colName := range constraint.Columns {
columnExprs = append(columnExprs, fmt.Sprintf("[%s]", colName))
}
if len(columnExprs) == 0 {
continue
}
fmt.Fprintf(w.writer, "ALTER TABLE %s ADD CONSTRAINT [%s] UNIQUE (%s);\n\n",
w.qualTable(schema.Name, table.Name), constraint.Name, strings.Join(columnExprs, ", "))
}
return nil
}
// writeCheckConstraints generates ALTER TABLE statements for check constraints
func (w *Writer) writeCheckConstraints(schema *models.Schema, table *models.Table) error {
// Sort constraints by name
constraintNames := make([]string, 0)
for name, constraint := range table.Constraints {
if constraint.Type == models.CheckConstraint {
constraintNames = append(constraintNames, name)
}
}
sort.Strings(constraintNames)
for _, name := range constraintNames {
constraint := table.Constraints[name]
if constraint.Expression == "" {
continue
}
fmt.Fprintf(w.writer, "ALTER TABLE %s ADD CONSTRAINT [%s] CHECK (%s);\n\n",
w.qualTable(schema.Name, table.Name), constraint.Name, constraint.Expression)
}
return nil
}
// writeForeignKeys generates ALTER TABLE statements for foreign keys
func (w *Writer) writeForeignKeys(schema *models.Schema, table *models.Table) error {
// Process foreign key constraints
constraintNames := make([]string, 0)
for name, constraint := range table.Constraints {
if constraint.Type == models.ForeignKeyConstraint {
constraintNames = append(constraintNames, name)
}
}
sort.Strings(constraintNames)
for _, name := range constraintNames {
constraint := table.Constraints[name]
// Build column lists
sourceColumns := make([]string, 0, len(constraint.Columns))
for _, colName := range constraint.Columns {
sourceColumns = append(sourceColumns, fmt.Sprintf("[%s]", colName))
}
targetColumns := make([]string, 0, len(constraint.ReferencedColumns))
for _, colName := range constraint.ReferencedColumns {
targetColumns = append(targetColumns, fmt.Sprintf("[%s]", colName))
}
if len(sourceColumns) == 0 || len(targetColumns) == 0 {
continue
}
refSchema := constraint.ReferencedSchema
if refSchema == "" {
refSchema = schema.Name
}
onDelete := "NO ACTION"
if constraint.OnDelete != "" {
onDelete = strings.ToUpper(constraint.OnDelete)
}
onUpdate := "NO ACTION"
if constraint.OnUpdate != "" {
onUpdate = strings.ToUpper(constraint.OnUpdate)
}
fmt.Fprintf(w.writer, "ALTER TABLE %s ADD CONSTRAINT [%s] FOREIGN KEY (%s)\n",
w.qualTable(schema.Name, table.Name), constraint.Name, strings.Join(sourceColumns, ", "))
fmt.Fprintf(w.writer, " REFERENCES %s (%s)\n",
w.qualTable(refSchema, constraint.ReferencedTable), strings.Join(targetColumns, ", "))
fmt.Fprintf(w.writer, " ON DELETE %s ON UPDATE %s;\n\n",
onDelete, onUpdate)
}
return nil
}
// writeComments generates EXEC sp_addextendedproperty statements for table and column descriptions
func (w *Writer) writeComments(schema *models.Schema, table *models.Table) error {
// Table comment
if table.Description != "" {
fmt.Fprintf(w.writer, "EXEC sp_addextendedproperty\n")
fmt.Fprintf(w.writer, " @name = 'MS_Description',\n")
fmt.Fprintf(w.writer, " @value = '%s',\n", escapeQuote(table.Description))
fmt.Fprintf(w.writer, " @level0type = 'SCHEMA', @level0name = '%s',\n", schema.Name)
fmt.Fprintf(w.writer, " @level1type = 'TABLE', @level1name = '%s';\n\n", table.Name)
}
// Column comments
for _, col := range getSortedColumns(table.Columns) {
if col.Description != "" {
fmt.Fprintf(w.writer, "EXEC sp_addextendedproperty\n")
fmt.Fprintf(w.writer, " @name = 'MS_Description',\n")
fmt.Fprintf(w.writer, " @value = '%s',\n", escapeQuote(col.Description))
fmt.Fprintf(w.writer, " @level0type = 'SCHEMA', @level0name = '%s',\n", schema.Name)
fmt.Fprintf(w.writer, " @level1type = 'TABLE', @level1name = '%s',\n", table.Name)
fmt.Fprintf(w.writer, " @level2type = 'COLUMN', @level2name = '%s';\n\n", col.Name)
}
}
return nil
}
// executeDatabaseSQL executes SQL statements directly on an MSSQL database
func (w *Writer) executeDatabaseSQL(db *models.Database, connString string) error {
// Generate SQL statements
statements := []string{}
statements = append(statements, "-- MSSQL Database Schema")
statements = append(statements, fmt.Sprintf("-- Database: %s", db.Name))
statements = append(statements, "-- Generated by RelSpec")
for _, schema := range db.Schemas {
if err := w.generateSchemaStatements(schema, &statements); err != nil {
return fmt.Errorf("failed to generate statements for schema %s: %w", schema.Name, err)
}
}
// Connect to database
dbConn, err := sql.Open("mssql", connString)
if err != nil {
return fmt.Errorf("failed to connect to database: %w", err)
}
defer dbConn.Close()
ctx := context.Background()
if err = dbConn.PingContext(ctx); err != nil {
return fmt.Errorf("failed to ping database: %w", err)
}
// Execute statements
executedCount := 0
for i, stmt := range statements {
stmtTrimmed := strings.TrimSpace(stmt)
// Skip comments and empty statements
if strings.HasPrefix(stmtTrimmed, "--") || stmtTrimmed == "" {
continue
}
fmt.Fprintf(os.Stderr, "Executing statement %d/%d...\n", i+1, len(statements))
_, execErr := dbConn.ExecContext(ctx, stmt)
if execErr != nil {
fmt.Fprintf(os.Stderr, "⚠ Warning: Statement failed: %v\n", execErr)
continue
}
executedCount++
}
fmt.Fprintf(os.Stderr, "✓ Successfully executed %d statements\n", executedCount)
return nil
}
// generateSchemaStatements generates SQL statements for a schema
func (w *Writer) generateSchemaStatements(schema *models.Schema, statements *[]string) error {
// Phase 1: Create schema
if schema.Name != "dbo" && !w.options.FlattenSchema {
*statements = append(*statements, fmt.Sprintf("-- Schema: %s", schema.Name))
*statements = append(*statements, fmt.Sprintf("CREATE SCHEMA [%s];", schema.Name))
}
// Phase 2: Create tables
*statements = append(*statements, fmt.Sprintf("-- Tables for schema: %s", schema.Name))
for _, table := range schema.Tables {
createTableSQL := fmt.Sprintf("CREATE TABLE %s (", w.qualTable(schema.Name, table.Name))
columnDefs := make([]string, 0)
columns := getSortedColumns(table.Columns)
for _, col := range columns {
def := w.generateColumnDefinition(col)
columnDefs = append(columnDefs, " "+def)
}
createTableSQL += "\n" + strings.Join(columnDefs, ",\n") + "\n)"
*statements = append(*statements, createTableSQL)
}
// Phases 3-8 (constraints, indexes, and comments) are not yet generated
// for direct execution; only schemas and tables are created here
return nil
}
// Helper functions
// getSortedColumns returns columns sorted by name for deterministic output
func getSortedColumns(columns map[string]*models.Column) []*models.Column {
names := make([]string, 0, len(columns))
for name := range columns {
names = append(names, name)
}
sort.Strings(names)
sorted := make([]*models.Column, 0, len(columns))
for _, name := range names {
sorted = append(sorted, columns[name])
}
return sorted
}
// escapeQuote escapes single quotes in strings for SQL
func escapeQuote(s string) string {
return strings.ReplaceAll(s, "'", "''")
}
// stripBackticks removes backticks from SQL expressions
func stripBackticks(s string) string {
return strings.ReplaceAll(s, "`", "")
}

View File

@@ -0,0 +1,205 @@
package mssql
import (
"bytes"
"testing"
"git.warky.dev/wdevs/relspecgo/pkg/models"
"git.warky.dev/wdevs/relspecgo/pkg/writers"
"github.com/stretchr/testify/assert"
)
// TestGenerateColumnDefinition tests column definition generation
func TestGenerateColumnDefinition(t *testing.T) {
writer := NewWriter(&writers.WriterOptions{})
tests := []struct {
name string
column *models.Column
expected string
}{
{
name: "INT NOT NULL",
column: &models.Column{
Name: "id",
Type: "int",
NotNull: true,
Sequence: 1,
},
expected: "[id] INT NOT NULL",
},
{
name: "VARCHAR with length",
column: &models.Column{
Name: "name",
Type: "string",
Length: 100,
NotNull: true,
Sequence: 2,
},
expected: "[name] NVARCHAR(100) NOT NULL",
},
{
name: "DATETIME2 with default",
column: &models.Column{
Name: "created_at",
Type: "timestamp",
NotNull: true,
Default: "GETDATE()",
Sequence: 3,
},
expected: "[created_at] DATETIME2 NOT NULL DEFAULT GETDATE()",
},
{
name: "IDENTITY column",
column: &models.Column{
Name: "id",
Type: "int",
AutoIncrement: true,
NotNull: true,
Sequence: 1,
},
expected: "[id] INT IDENTITY(1,1) NOT NULL",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result := writer.generateColumnDefinition(tt.column)
assert.Equal(t, tt.expected, result)
})
}
}
// TestWriteCreateTable tests CREATE TABLE statement generation
func TestWriteCreateTable(t *testing.T) {
writer := NewWriter(&writers.WriterOptions{})
// Create a test schema with a table
schema := models.InitSchema("dbo")
table := models.InitTable("users", "dbo")
col1 := models.InitColumn("id", "users", "dbo")
col1.Type = "int"
col1.AutoIncrement = true
col1.NotNull = true
col1.Sequence = 1
col2 := models.InitColumn("email", "users", "dbo")
col2.Type = "string"
col2.Length = 255
col2.NotNull = true
col2.Sequence = 2
table.Columns["id"] = col1
table.Columns["email"] = col2
// Write to buffer
buf := &bytes.Buffer{}
writer.writer = buf
err := writer.writeCreateTable(schema, table)
assert.NoError(t, err)
output := buf.String()
assert.Contains(t, output, "CREATE TABLE [dbo].[users]")
assert.Contains(t, output, "[id] INT IDENTITY(1,1) NOT NULL")
assert.Contains(t, output, "[email] NVARCHAR(255) NOT NULL")
}
// TestWritePrimaryKey tests PRIMARY KEY constraint generation
func TestWritePrimaryKey(t *testing.T) {
writer := NewWriter(&writers.WriterOptions{})
schema := models.InitSchema("dbo")
table := models.InitTable("users", "dbo")
// Add primary key constraint
pk := models.InitConstraint("PK_users_id", models.PrimaryKeyConstraint)
pk.Columns = []string{"id"}
table.Constraints[pk.Name] = pk
// Add column
col := models.InitColumn("id", "users", "dbo")
col.Type = "int"
col.Sequence = 1
table.Columns["id"] = col
// Write to buffer
buf := &bytes.Buffer{}
writer.writer = buf
err := writer.writePrimaryKey(schema, table)
assert.NoError(t, err)
output := buf.String()
assert.Contains(t, output, "ALTER TABLE [dbo].[users]")
assert.Contains(t, output, "PRIMARY KEY")
assert.Contains(t, output, "[id]")
}
// TestWriteForeignKey tests FOREIGN KEY constraint generation
func TestWriteForeignKey(t *testing.T) {
writer := NewWriter(&writers.WriterOptions{})
schema := models.InitSchema("dbo")
table := models.InitTable("orders", "dbo")
// Add foreign key constraint
fk := models.InitConstraint("FK_orders_users", models.ForeignKeyConstraint)
fk.Columns = []string{"user_id"}
fk.ReferencedSchema = "dbo"
fk.ReferencedTable = "users"
fk.ReferencedColumns = []string{"id"}
fk.OnDelete = "CASCADE"
fk.OnUpdate = "NO ACTION"
table.Constraints[fk.Name] = fk
// Add column
col := models.InitColumn("user_id", "orders", "dbo")
col.Type = "int"
col.Sequence = 1
table.Columns["user_id"] = col
// Write to buffer
buf := &bytes.Buffer{}
writer.writer = buf
err := writer.writeForeignKeys(schema, table)
assert.NoError(t, err)
output := buf.String()
assert.Contains(t, output, "ALTER TABLE [dbo].[orders]")
assert.Contains(t, output, "FK_orders_users")
assert.Contains(t, output, "FOREIGN KEY")
assert.Contains(t, output, "CASCADE")
assert.Contains(t, output, "NO ACTION")
}
// TestWriteComments tests extended property generation for comments
func TestWriteComments(t *testing.T) {
writer := NewWriter(&writers.WriterOptions{})
schema := models.InitSchema("dbo")
table := models.InitTable("users", "dbo")
table.Description = "User accounts table"
col := models.InitColumn("id", "users", "dbo")
col.Type = "int"
col.Description = "Primary key"
col.Sequence = 1
table.Columns["id"] = col
// Write to buffer
buf := &bytes.Buffer{}
writer.writer = buf
err := writer.writeComments(schema, table)
assert.NoError(t, err)
output := buf.String()
assert.Contains(t, output, "sp_addextendedproperty")
assert.Contains(t, output, "MS_Description")
assert.Contains(t, output, "User accounts table")
assert.Contains(t, output, "Primary key")
}

View File

@@ -493,18 +493,19 @@ func (w *Writer) generateColumnDefinition(col *models.Column) string {
// Type with length/precision - convert to valid PostgreSQL type
baseType := pgsql.ConvertSQLType(col.Type)
typeStr := baseType
hasExplicitTypeModifier := pgsql.HasExplicitTypeModifier(baseType)
// Only add size specifiers for types that support them
if col.Length > 0 && col.Precision == 0 {
if supportsLength(baseType) {
if !hasExplicitTypeModifier && col.Length > 0 && col.Precision == 0 {
if pgsql.SupportsLength(baseType) {
typeStr = fmt.Sprintf("%s(%d)", baseType, col.Length)
} else if isTextTypeWithoutLength(baseType) {
// Convert text with length to varchar
typeStr = fmt.Sprintf("varchar(%d)", col.Length)
}
// For types that don't support length (integer, bigint, etc.), ignore the length
} else if col.Precision > 0 {
if supportsPrecision(baseType) {
} else if !hasExplicitTypeModifier && col.Precision > 0 {
if pgsql.SupportsPrecision(baseType) {
if col.Scale > 0 {
typeStr = fmt.Sprintf("%s(%d,%d)", baseType, col.Precision, col.Scale)
} else {
@@ -1268,30 +1269,6 @@ func isTextType(colType string) bool {
return false
}
// supportsLength checks if a PostgreSQL type supports length specification
func supportsLength(colType string) bool {
lengthTypes := []string{"varchar", "character varying", "char", "character", "bit", "bit varying", "varbit"}
lowerType := strings.ToLower(colType)
for _, t := range lengthTypes {
if lowerType == t || strings.HasPrefix(lowerType, t+"(") {
return true
}
}
return false
}
// supportsPrecision checks if a PostgreSQL type supports precision/scale specification
func supportsPrecision(colType string) bool {
precisionTypes := []string{"numeric", "decimal", "time", "timestamp", "timestamptz", "timestamp with time zone", "timestamp without time zone", "time with time zone", "time without time zone", "interval"}
lowerType := strings.ToLower(colType)
for _, t := range precisionTypes {
if lowerType == t || strings.HasPrefix(lowerType, t+"(") {
return true
}
}
return false
}
// isTextTypeWithoutLength checks if type is text (which should convert to varchar when length is specified)
func isTextTypeWithoutLength(colType string) bool {
return strings.EqualFold(colType, "text")

View File

@@ -426,11 +426,11 @@ func TestWriteAllConstraintTypes(t *testing.T) {
// Verify all constraint types are present
expectedConstraints := map[string]string{
"Primary Key": "PRIMARY KEY",
"Unique": "ADD CONSTRAINT uq_order_number UNIQUE (order_number)",
"Check (total)": "ADD CONSTRAINT ck_total_positive CHECK (total > 0)",
"Check (status)": "ADD CONSTRAINT ck_status_valid CHECK (status IN ('pending', 'completed', 'cancelled'))",
"Foreign Key": "FOREIGN KEY",
"Primary Key": "PRIMARY KEY",
"Unique": "ADD CONSTRAINT uq_order_number UNIQUE (order_number)",
"Check (total)": "ADD CONSTRAINT ck_total_positive CHECK (total > 0)",
"Check (status)": "ADD CONSTRAINT ck_status_valid CHECK (status IN ('pending', 'completed', 'cancelled'))",
"Foreign Key": "FOREIGN KEY",
}
for name, expected := range expectedConstraints {
@@ -715,11 +715,11 @@ func TestColumnSizeSpecifiers(t *testing.T) {
// Verify valid patterns ARE present
validPatterns := []string{
"integer", // without size
"bigint", // without size
"smallint", // without size
"varchar(100)", // text converted to varchar with length
"varchar(50)", // varchar with length
"integer", // without size
"bigint", // without size
"smallint", // without size
"varchar(100)", // text converted to varchar with length
"varchar(50)", // varchar with length
"decimal(19,4)", // decimal with precision and scale
}
for _, pattern := range validPatterns {
@@ -729,6 +729,56 @@ func TestColumnSizeSpecifiers(t *testing.T) {
}
}
func TestGenerateColumnDefinition_PreservesExplicitTypeModifiers(t *testing.T) {
writer := NewWriter(&writers.WriterOptions{})
cases := []struct {
name string
colType string
length int
precision int
scale int
wantType string
}{
{
name: "character varying already includes length",
colType: "character varying(50)",
length: 50,
wantType: "character varying(50)",
},
{
name: "numeric already includes precision",
colType: "numeric(10,2)",
precision: 10,
scale: 2,
wantType: "numeric(10,2)",
},
{
name: "custom vector modifier preserved",
colType: "vector(1536)",
wantType: "vector(1536)",
},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
col := models.InitColumn("sample", "events", "public")
col.Type = tc.colType
col.Length = tc.length
col.Precision = tc.precision
col.Scale = tc.scale
def := writer.generateColumnDefinition(col)
if !strings.Contains(def, " "+tc.wantType+" ") && !strings.HasSuffix(def, " "+tc.wantType) {
t.Fatalf("generated definition %q does not contain expected type %q", def, tc.wantType)
}
if strings.Contains(def, ")(") {
t.Fatalf("generated definition %q appears to duplicate modifiers", def)
}
})
}
}
func TestGenerateAddColumnStatements(t *testing.T) {
// Create a test database with tables that have new columns
db := models.InitDatabase("testdb")

View File

@@ -125,9 +125,9 @@ func (w *Writer) generateGenerator() string {
func (w *Writer) enumToPrisma(enum *models.Enum) string {
var sb strings.Builder
sb.WriteString(fmt.Sprintf("enum %s {\n", enum.Name))
fmt.Fprintf(&sb, "enum %s {\n", enum.Name)
for _, value := range enum.Values {
sb.WriteString(fmt.Sprintf(" %s\n", value))
fmt.Fprintf(&sb, " %s\n", value)
}
sb.WriteString("}\n")
@@ -179,7 +179,7 @@ func (w *Writer) identifyJoinTables(schema *models.Schema) map[string]bool {
func (w *Writer) tableToPrisma(table *models.Table, schema *models.Schema, joinTables map[string]bool) string {
var sb strings.Builder
sb.WriteString(fmt.Sprintf("model %s {\n", table.Name))
fmt.Fprintf(&sb, "model %s {\n", table.Name)
// Collect columns to write
columns := make([]*models.Column, 0, len(table.Columns))
@@ -219,11 +219,11 @@ func (w *Writer) columnToField(col *models.Column, table *models.Table, schema *
var sb strings.Builder
// Field name
sb.WriteString(fmt.Sprintf(" %s", col.Name))
fmt.Fprintf(&sb, " %s", col.Name)
// Field type
prismaType := w.sqlTypeToPrisma(col.Type, schema)
sb.WriteString(fmt.Sprintf(" %s", prismaType))
fmt.Fprintf(&sb, " %s", prismaType)
// Optional modifier
if !col.NotNull && !col.IsPrimaryKey {
@@ -413,7 +413,7 @@ func (w *Writer) generateRelationFields(table *models.Table, schema *models.Sche
relationName = relationName[:len(relationName)-1]
}
sb.WriteString(fmt.Sprintf(" %s %s", strings.ToLower(relationName), relationType))
fmt.Fprintf(&sb, " %s %s", strings.ToLower(relationName), relationType)
if isOptional {
sb.WriteString("?")
@@ -479,8 +479,8 @@ func (w *Writer) generateInverseRelations(table *models.Table, schema *models.Sc
if fk.ReferencedTable != table.Name {
// This is the other side
otherSide := fk.ReferencedTable
sb.WriteString(fmt.Sprintf(" %ss %s[]\n",
strings.ToLower(otherSide), otherSide))
fmt.Fprintf(&sb, " %ss %s[]\n",
strings.ToLower(otherSide), otherSide)
break
}
}
@@ -497,8 +497,8 @@ func (w *Writer) generateInverseRelations(table *models.Table, schema *models.Sc
pluralName += "s"
}
sb.WriteString(fmt.Sprintf(" %s %s[]\n",
strings.ToLower(pluralName), otherTable.Name))
fmt.Fprintf(&sb, " %s %s[]\n",
strings.ToLower(pluralName), otherTable.Name)
}
}
}
@@ -530,20 +530,20 @@ func (w *Writer) generateBlockAttributes(table *models.Table) string {
if len(pkCols) > 1 {
sort.Strings(pkCols)
sb.WriteString(fmt.Sprintf(" @@id([%s])\n", strings.Join(pkCols, ", ")))
fmt.Fprintf(&sb, " @@id([%s])\n", strings.Join(pkCols, ", "))
}
// @@unique for multi-column unique constraints
for _, constraint := range table.Constraints {
if constraint.Type == models.UniqueConstraint && len(constraint.Columns) > 1 {
sb.WriteString(fmt.Sprintf(" @@unique([%s])\n", strings.Join(constraint.Columns, ", ")))
fmt.Fprintf(&sb, " @@unique([%s])\n", strings.Join(constraint.Columns, ", "))
}
}
// @@index for indexes
for _, index := range table.Indexes {
if !index.Unique { // Unique indexes are handled by @@unique
sb.WriteString(fmt.Sprintf(" @@index([%s])\n", strings.Join(index.Columns, ", ")))
fmt.Fprintf(&sb, " @@index([%s])\n", strings.Join(index.Columns, ", "))
}
}
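The `sb.WriteString(fmt.Sprintf(...))` to `fmt.Fprintf(&sb, ...)` swap shown above is behavior-preserving: `strings.Builder` implements `io.Writer`, so `Fprintf` formats directly into the builder and skips the intermediate string that `Sprintf` allocates. A minimal standalone illustration:

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	var a, b strings.Builder

	// Old pattern: Sprintf builds a throwaway string, WriteString copies it.
	a.WriteString(fmt.Sprintf("enum %s {\n", "Status"))

	// New pattern: Fprintf writes straight into the builder.
	fmt.Fprintf(&b, "enum %s {\n", "Status")

	fmt.Println(a.String() == b.String()) // true: output is identical
}
```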


@@ -207,7 +207,7 @@ func (w *Writer) tableToEntity(table *models.Table, schema *models.Schema, joinT
// Generate @Entity decorator with options
entityOptions := w.buildEntityOptions(table)
sb.WriteString(fmt.Sprintf("@Entity({\n%s\n})\n", entityOptions))
fmt.Fprintf(&sb, "@Entity({\n%s\n})\n", entityOptions)
// Get class name (from metadata if different from table name)
className := table.Name
@@ -219,7 +219,7 @@ func (w *Writer) tableToEntity(table *models.Table, schema *models.Schema, joinT
}
}
sb.WriteString(fmt.Sprintf("export class %s {\n", className))
fmt.Fprintf(&sb, "export class %s {\n", className)
// Collect and sort columns
columns := make([]*models.Column, 0, len(table.Columns))
@@ -272,7 +272,7 @@ func (w *Writer) viewToEntity(view *models.View) string {
sb.WriteString("})\n")
// Generate class
sb.WriteString(fmt.Sprintf("export class %s {\n", view.Name))
fmt.Fprintf(&sb, "export class %s {\n", view.Name)
// Generate field definitions (without decorators for view fields)
columns := make([]*models.Column, 0, len(view.Columns))
@@ -285,7 +285,7 @@ func (w *Writer) viewToEntity(view *models.View) string {
for _, col := range columns {
tsType := w.sqlTypeToTypeScript(col.Type)
sb.WriteString(fmt.Sprintf(" %s: %s;\n", col.Name, tsType))
fmt.Fprintf(&sb, " %s: %s;\n", col.Name, tsType)
}
sb.WriteString("}\n")
@@ -314,7 +314,7 @@ func (w *Writer) columnToField(col *models.Column, table *models.Table) string {
// Regular @Column decorator
options := w.buildColumnOptions(col, table)
if options != "" {
sb.WriteString(fmt.Sprintf(" @Column({ %s })\n", options))
fmt.Fprintf(&sb, " @Column({ %s })\n", options)
} else {
sb.WriteString(" @Column()\n")
}
@@ -327,7 +327,7 @@ func (w *Writer) columnToField(col *models.Column, table *models.Table) string {
nullable = " | null"
}
sb.WriteString(fmt.Sprintf(" %s: %s%s;", col.Name, tsType, nullable))
fmt.Fprintf(&sb, " %s: %s%s;", col.Name, tsType, nullable)
return sb.String()
}
@@ -464,17 +464,17 @@ func (w *Writer) generateRelationFields(table *models.Table, schema *models.Sche
inverseField := w.findInverseFieldName(table.Name, relatedTable, schema)
if inverseField != "" {
sb.WriteString(fmt.Sprintf(" @ManyToOne(() => %s, %s => %s.%s)\n",
relatedTable, strings.ToLower(relatedTable), strings.ToLower(relatedTable), inverseField))
fmt.Fprintf(&sb, " @ManyToOne(() => %s, %s => %s.%s)\n",
relatedTable, strings.ToLower(relatedTable), strings.ToLower(relatedTable), inverseField)
} else {
if isNullable {
sb.WriteString(fmt.Sprintf(" @ManyToOne(() => %s, { nullable: true })\n", relatedTable))
fmt.Fprintf(&sb, " @ManyToOne(() => %s, { nullable: true })\n", relatedTable)
} else {
sb.WriteString(fmt.Sprintf(" @ManyToOne(() => %s)\n", relatedTable))
fmt.Fprintf(&sb, " @ManyToOne(() => %s)\n", relatedTable)
}
}
sb.WriteString(fmt.Sprintf(" %s: %s%s;\n", fieldName, relatedTable, nullable))
fmt.Fprintf(&sb, " %s: %s%s;\n", fieldName, relatedTable, nullable)
sb.WriteString("\n")
}


@@ -81,6 +81,64 @@ func SanitizeFilename(name string) string {
return name
}
// QuoteDefaultValue wraps a sanitized default value in single quotes when the SQL
// column type requires it (strings, dates, times, UUIDs, enums). Numeric types
// (integers, floats, serials) and boolean types are left unquoted. Function-call
// expressions such as now() or gen_random_uuid() are always left unquoted regardless
// of type, because they contain parentheses.
//
// Examples (varchar): "disconnected" → "'disconnected'"
// Examples (boolean): "true" → "true"
// Examples (bigint): "0" → "0"
// Examples (timestamp): "now()" → "now()" (function call never quoted)
func QuoteDefaultValue(value, sqlType string) string {
// Function calls are never quoted regardless of column type.
if strings.Contains(value, "(") || strings.Contains(value, ")") {
return value
}
// Normalise the SQL type: lowercase, strip length/precision suffix.
baseType := strings.ToLower(strings.TrimSpace(sqlType))
if idx := strings.Index(baseType, "("); idx > 0 {
baseType = baseType[:idx]
}
// Types whose default values must NOT be quoted.
unquotedTypes := map[string]bool{
// Integer types
"integer": true,
"int": true,
"int2": true,
"int4": true,
"int8": true,
"smallint": true,
"bigint": true,
"serial": true,
"smallserial": true,
"bigserial": true,
// Float / numeric types
"real": true,
"float": true,
"float4": true,
"float8": true,
"double precision": true,
"numeric": true,
"decimal": true,
"money": true,
// Boolean
"boolean": true,
"bool": true,
}
if unquotedTypes[baseType] {
return value
}
// Everything else (text, varchar, char, uuid, date, time, timestamp, json, …)
// is treated as a quoted literal.
return "'" + value + "'"
}
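// Illustrative calls, mirroring the doc-comment examples above (a sketch):
//
//	QuoteDefaultValue("disconnected", "varchar(50)") // "'disconnected'"
//	QuoteDefaultValue("0", "bigint")                 // "0"
//	QuoteDefaultValue("true", "boolean")             // "true"
//	QuoteDefaultValue("now()", "timestamp")          // "now()" (function call, never quoted)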
// SanitizeStructTagValue sanitizes a value to be safely used inside Go struct tags.
// Go struct tags are delimited by backticks, so any backtick in the value would break the syntax.
// This function:

test_data/mssql/TESTING.md Normal file

@@ -0,0 +1,286 @@
# MSSQL Reader and Writer Testing Guide
## Prerequisites
- Docker and Docker Compose installed
- RelSpec binary built (`make build`)
- jq (optional, for JSON processing)
## Quick Start
### 1. Start SQL Server Express
```bash
docker-compose up -d mssql
# Wait for container to be healthy
docker-compose ps
# Monitor startup logs
docker-compose logs -f mssql
```
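If you script this step, a small poll loop avoids racing the container startup (a sketch; assumes the compose service is named `mssql` as above):
```bash
# Block until the mssql container reports healthy
until [ "$(docker inspect -f '{{.State.Health.Status}}' "$(docker-compose ps -q mssql)")" = "healthy" ]; do
  sleep 2
done
```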
### 2. Verify Database Creation
```bash
docker exec -it $(docker-compose ps -q mssql) \
/opt/mssql-tools/bin/sqlcmd \
-S localhost \
-U sa \
-P 'StrongPassword123!' \
-Q "SELECT name FROM sys.databases WHERE name = 'RelSpecTest'"
```
## Testing Scenarios
### Scenario 1: Read MSSQL Database to JSON
Read the test schema from MSSQL and export to JSON:
```bash
./build/relspec convert \
--from mssql \
--from-conn "sqlserver://sa:StrongPassword123!@localhost:1433/RelSpecTest" \
--to json \
--to-path test_output.json
```
Verify output:
```bash
jq '.Schemas[0].Tables | length' test_output.json
jq '.Schemas[0].Tables[0]' test_output.json
```
### Scenario 2: Read MSSQL Database to DBML
Convert MSSQL schema to DBML format:
```bash
./build/relspec convert \
--from mssql \
--from-conn "sqlserver://sa:StrongPassword123!@localhost:1433/RelSpecTest" \
--to dbml \
--to-path test_output.dbml
```
### Scenario 3: Generate SQL Script (No Direct Execution)
Generate SQL script without executing:
```bash
./build/relspec convert \
--from mssql \
--from-conn "sqlserver://sa:StrongPassword123!@localhost:1433/RelSpecTest" \
--to mssql \
--to-path test_output.sql
```
Inspect generated SQL:
```bash
head -50 test_output.sql
```
### Scenario 4: Round-Trip Conversion (MSSQL → JSON → MSSQL)
Test bidirectional conversion:
```bash
# Step 1: MSSQL → JSON
./build/relspec convert \
--from mssql \
--from-conn "sqlserver://sa:StrongPassword123!@localhost:1433/RelSpecTest" \
--to json \
--to-path backup.json
# Step 2: JSON → MSSQL SQL
./build/relspec convert \
--from json \
--from-path backup.json \
--to mssql \
--to-path restore.sql
# Inspect SQL
head -50 restore.sql
```
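If you later load `restore.sql` into a fresh database and export it to JSON again, a key-sorted diff makes drift easy to spot (a sketch; `roundtrip.json` is a hypothetical second export):
```bash
jq -S . backup.json > backup.sorted.json
jq -S . roundtrip.json > roundtrip.sorted.json
diff backup.sorted.json roundtrip.sorted.json
```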
### Scenario 5: Cross-Database Conversion
If you have PostgreSQL running, test conversion:
```bash
# MSSQL → PostgreSQL SQL
./build/relspec convert \
--from mssql \
--from-conn "sqlserver://sa:StrongPassword123!@localhost:1433/RelSpecTest" \
--to pgsql \
--to-path mssql_to_pg.sql
```
### Scenario 6: Test Type Mappings
Create a JSON file with various types (saved as `type_test.json`, matching the command below) and convert to MSSQL:
```json
{
"Name": "TypeTest",
"Schemas": [
{
"Name": "dbo",
"Tables": [
{
"Name": "type_samples",
"Columns": {
"id": {
"Name": "id",
"Type": "int",
"AutoIncrement": true,
"NotNull": true,
"Sequence": 1
},
"big_num": {
"Name": "big_num",
"Type": "int64",
"Sequence": 2
},
"is_active": {
"Name": "is_active",
"Type": "bool",
"Sequence": 3
},
"description": {
"Name": "description",
"Type": "text",
"Sequence": 4
},
"created_at": {
"Name": "created_at",
"Type": "timestamp",
"NotNull": true,
"Default": "GETDATE()",
"Sequence": 5
},
"unique_id": {
"Name": "unique_id",
"Type": "uuid",
"Sequence": 6
},
"metadata": {
"Name": "metadata",
"Type": "json",
"Sequence": 7
},
"binary_data": {
"Name": "binary_data",
"Type": "bytea",
"Sequence": 8
}
},
"Constraints": {
"PK_type_samples_id": {
"Name": "PK_type_samples_id",
"Type": "PRIMARY_KEY",
"Columns": ["id"]
}
}
}
]
}
]
}
```
Convert to MSSQL:
```bash
./build/relspec convert \
--from json \
--from-path type_test.json \
--to mssql \
--to-path type_test.sql
cat type_test.sql
```
## Cleanup
Stop and remove the SQL Server container:
```bash
docker-compose down
# Clean up test files
rm -f test_output.* backup.json restore.sql
```
## Troubleshooting
### Container won't start
Check Docker daemon is running and database logs:
```bash
docker-compose logs mssql
```
### Connection refused errors
Wait for container to be healthy:
```bash
docker-compose ps
# Wait until STATUS shows "healthy"
# Or check manually
docker exec -it $(docker-compose ps -q mssql) \
/opt/mssql-tools/bin/sqlcmd \
-S localhost \
-U sa \
-P 'StrongPassword123!' \
-Q "SELECT @@VERSION"
```
### Test schema not found
Initialize the test schema:
```bash
docker exec -i $(docker-compose ps -q mssql) \
/opt/mssql-tools/bin/sqlcmd \
-S localhost \
-U sa \
-P 'StrongPassword123!' \
< test_data/mssql/test_schema.sql
```
### Connection string format issues
Use the correct format for connection strings:
- Default port: 1433
- Username: `sa`
- Password: `StrongPassword123!`
- Database: `RelSpecTest`
Format: `sqlserver://sa:StrongPassword123!@localhost:1433/RelSpecTest`
## Performance Notes
- Initial reader setup may take a few seconds
- Type mapping queries are cached within a single read operation
- Direct execution mode is atomic per table/constraint
- Large schemas (100+ tables) should complete in under 5 seconds
## Unit Test Verification
Run the MSSQL-specific tests:
```bash
# Type mapping tests
go test ./pkg/mssql/... -v
# Reader tests
go test ./pkg/readers/mssql/... -v
# Writer tests
go test ./pkg/writers/mssql/... -v
# All together
go test ./pkg/mssql/... ./pkg/readers/mssql/... ./pkg/writers/mssql/... -v
```
Expected output: All tests should PASS


@@ -0,0 +1,187 @@
-- Test schema for MSSQL Reader integration tests
-- This script creates a sample database for testing the MSSQL reader
USE master;
GO
-- Drop existing database if it exists
IF EXISTS (SELECT 1 FROM sys.databases WHERE name = 'RelSpecTest')
BEGIN
ALTER DATABASE RelSpecTest SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DROP DATABASE RelSpecTest;
END
GO
-- Create test database
CREATE DATABASE RelSpecTest;
GO
USE RelSpecTest;
GO
-- Create schemas
CREATE SCHEMA [public];
GO
CREATE SCHEMA [auth];
GO
-- Create tables in public schema
CREATE TABLE [public].[users] (
[id] INT IDENTITY(1,1) NOT NULL,
[email] NVARCHAR(255) NOT NULL,
[username] NVARCHAR(100) NOT NULL,
[created_at] DATETIME2 NOT NULL DEFAULT GETDATE(),
[updated_at] DATETIME2 NULL,
[is_active] BIT NOT NULL DEFAULT 1,
PRIMARY KEY ([id]),
UNIQUE ([email]),
UNIQUE ([username])
);
GO
CREATE TABLE [public].[posts] (
[id] INT IDENTITY(1,1) NOT NULL,
[user_id] INT NOT NULL,
[title] NVARCHAR(255) NOT NULL,
[content] NVARCHAR(MAX) NOT NULL,
[published_at] DATETIME2 NULL,
[created_at] DATETIME2 NOT NULL DEFAULT GETDATE(),
PRIMARY KEY ([id])
);
GO
CREATE TABLE [public].[comments] (
[id] INT IDENTITY(1,1) NOT NULL,
[post_id] INT NOT NULL,
[user_id] INT NOT NULL,
[content] NVARCHAR(MAX) NOT NULL,
[created_at] DATETIME2 NOT NULL DEFAULT GETDATE(),
PRIMARY KEY ([id])
);
GO
-- Create tables in auth schema
CREATE TABLE [auth].[roles] (
[id] INT IDENTITY(1,1) NOT NULL,
[name] NVARCHAR(100) NOT NULL,
[description] NVARCHAR(MAX) NULL,
PRIMARY KEY ([id]),
UNIQUE ([name])
);
GO
CREATE TABLE [auth].[user_roles] (
[id] INT IDENTITY(1,1) NOT NULL,
[user_id] INT NOT NULL,
[role_id] INT NOT NULL,
PRIMARY KEY ([id]),
UNIQUE ([user_id], [role_id])
);
GO
-- Add foreign keys
ALTER TABLE [public].[posts]
ADD CONSTRAINT [FK_posts_users]
FOREIGN KEY ([user_id])
REFERENCES [public].[users] ([id])
ON DELETE CASCADE ON UPDATE NO ACTION;
GO
ALTER TABLE [public].[comments]
ADD CONSTRAINT [FK_comments_posts]
FOREIGN KEY ([post_id])
REFERENCES [public].[posts] ([id])
ON DELETE CASCADE ON UPDATE NO ACTION;
GO
ALTER TABLE [public].[comments]
ADD CONSTRAINT [FK_comments_users]
FOREIGN KEY ([user_id])
REFERENCES [public].[users] ([id])
ON DELETE CASCADE ON UPDATE NO ACTION;
GO
ALTER TABLE [auth].[user_roles]
ADD CONSTRAINT [FK_user_roles_users]
FOREIGN KEY ([user_id])
REFERENCES [public].[users] ([id])
ON DELETE CASCADE ON UPDATE NO ACTION;
GO
ALTER TABLE [auth].[user_roles]
ADD CONSTRAINT [FK_user_roles_roles]
FOREIGN KEY ([role_id])
REFERENCES [auth].[roles] ([id])
ON DELETE CASCADE ON UPDATE NO ACTION;
GO
-- Create indexes
CREATE INDEX [IDX_users_email] ON [public].[users] ([email]);
GO
CREATE INDEX [IDX_posts_user_id] ON [public].[posts] ([user_id]);
GO
CREATE INDEX [IDX_comments_post_id] ON [public].[comments] ([post_id]);
GO
CREATE INDEX [IDX_comments_user_id] ON [public].[comments] ([user_id]);
GO
-- Add extended properties (comments)
EXEC sp_addextendedproperty
@name = 'MS_Description',
@value = 'User accounts table',
@level0type = 'SCHEMA', @level0name = 'public',
@level1type = 'TABLE', @level1name = 'users';
GO
EXEC sp_addextendedproperty
@name = 'MS_Description',
@value = 'User unique identifier',
@level0type = 'SCHEMA', @level0name = 'public',
@level1type = 'TABLE', @level1name = 'users',
@level2type = 'COLUMN', @level2name = 'id';
GO
EXEC sp_addextendedproperty
@name = 'MS_Description',
@value = 'User email address',
@level0type = 'SCHEMA', @level0name = 'public',
@level1type = 'TABLE', @level1name = 'users',
@level2type = 'COLUMN', @level2name = 'email';
GO
EXEC sp_addextendedproperty
@name = 'MS_Description',
@value = 'Blog posts table',
@level0type = 'SCHEMA', @level0name = 'public',
@level1type = 'TABLE', @level1name = 'posts';
GO
EXEC sp_addextendedproperty
@name = 'MS_Description',
@value = 'User roles mapping table',
@level0type = 'SCHEMA', @level0name = 'auth',
@level1type = 'TABLE', @level1name = 'user_roles';
GO
-- Add check constraint
ALTER TABLE [public].[users]
ADD CONSTRAINT [CK_users_email_format]
CHECK (LEN(email) > 0 AND email LIKE '%@%.%');
GO
-- Verify schema was created
SELECT
SCHEMA_NAME(s.schema_id) as [Schema],
t.name as [Table],
COUNT(c.column_id) as [ColumnCount]
FROM sys.tables t
INNER JOIN sys.schemas s ON t.schema_id = s.schema_id
LEFT JOIN sys.columns c ON t.object_id = c.object_id
WHERE SCHEMA_NAME(s.schema_id) IN ('public', 'auth')
GROUP BY SCHEMA_NAME(s.schema_id), t.name
ORDER BY [Schema], [Table];
GO


@@ -56,7 +56,7 @@ Table admin.audit_logs {
}
// Relationships
Ref: public.posts.user_id > public.users.id [ondelete: CASCADE, onupdate: CASCADE]
Ref: public.comments.post_id > public.posts.id [ondelete: CASCADE]
Ref: public.comments.user_id > public.users.id [ondelete: SET NULL]
Ref: admin.audit_logs.user_id > public.users.id [ondelete: SET NULL]
Ref: public.posts.user_id > public.users.id [delete: CASCADE, update: CASCADE]
Ref: public.comments.post_id > public.posts.id [delete: CASCADE]
Ref: public.comments.user_id > public.users.id [delete: SET NULL]
Ref: admin.audit_logs.user_id > public.users.id [delete: SET NULL]

vendor/github.com/golang-sql/civil/CONTRIBUTING.md generated vendored Normal file

@@ -0,0 +1,73 @@
# Contributing
1. Sign one of the contributor license agreements below.
#### Running
Once you've done the necessary setup, you can run the integration tests by
running:
``` sh
$ go test -v github.com/golang-sql/civil
```
## Contributor License Agreements
Before we can accept your pull requests you'll need to sign a Contributor
License Agreement (CLA):
- **If you are an individual writing original source code** and **you own the
intellectual property**, then you'll need to sign an [individual CLA][indvcla].
- **If you work for a company that wants to allow you to contribute your
work**, then you'll need to sign a [corporate CLA][corpcla].
You can sign these electronically (just scroll to the bottom). After that,
we'll be able to accept your pull requests.
## Contributor Code of Conduct
As contributors and maintainers of this project,
and in the interest of fostering an open and welcoming community,
we pledge to respect all people who contribute through reporting issues,
posting feature requests, updating documentation,
submitting pull requests or patches, and other activities.
We are committed to making participation in this project
a harassment-free experience for everyone,
regardless of level of experience, gender, gender identity and expression,
sexual orientation, disability, personal appearance,
body size, race, ethnicity, age, religion, or nationality.
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery
* Personal attacks
* Trolling or insulting/derogatory comments
* Public or private harassment
* Publishing others' private information,
such as physical or electronic
addresses, without explicit permission
* Other unethical or unprofessional conduct.
Project maintainers have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions
that are not aligned to this Code of Conduct.
By adopting this Code of Conduct,
project maintainers commit themselves to fairly and consistently
applying these principles to every aspect of managing this project.
Project maintainers who do not follow or enforce the Code of Conduct
may be permanently removed from the project team.
This code of conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community.
Instances of abusive, harassing, or otherwise unacceptable behavior
may be reported by opening an issue
or contacting one or more of the project maintainers.
This Code of Conduct is adapted from the [Contributor Covenant](http://contributor-covenant.org), version 1.2.0,
available at [http://contributor-covenant.org/version/1/2/0/](http://contributor-covenant.org/version/1/2/0/)
[gcloudcli]: https://developers.google.com/cloud/sdk/gcloud/
[indvcla]: https://developers.google.com/open-source/cla/individual
[corpcla]: https://developers.google.com/open-source/cla/corporate

vendor/github.com/golang-sql/civil/LICENSE generated vendored Normal file

@@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

vendor/github.com/golang-sql/civil/README.md generated vendored Normal file

@@ -0,0 +1,15 @@
# Civil Date and Time
[![GoDoc](https://godoc.org/github.com/golang-sql/civil?status.svg)](https://godoc.org/github.com/golang-sql/civil)
Civil provides Date, Time of Day, and DateTime data types.
While there are many uses, using specific types when working
with databases makes it conceptually easier to understand what value
is set in the remote system.
## Source
This civil package was extracted and forked from `cloud.google.com/go/civil`.
As such, the license and contributing requirements remain the same as those of
that module.

vendor/github.com/golang-sql/civil/civil.go generated vendored Normal file

@@ -0,0 +1,292 @@
// Copyright 2016 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package civil implements types for civil time, a time-zone-independent
// representation of time that follows the rules of the proleptic
// Gregorian calendar with exactly 24-hour days, 60-minute hours, and 60-second
// minutes.
//
// Because they lack location information, these types do not represent unique
// moments or intervals of time. Use time.Time for that purpose.
package civil
import (
"fmt"
"time"
)
// A Date represents a date (year, month, day).
//
// This type does not include location information, and therefore does not
// describe a unique 24-hour timespan.
type Date struct {
Year int // Year (e.g., 2014).
Month time.Month // Month of the year (January = 1, ...).
Day int // Day of the month, starting at 1.
}
// DateOf returns the Date in which a time occurs in that time's location.
func DateOf(t time.Time) Date {
var d Date
d.Year, d.Month, d.Day = t.Date()
return d
}
// ParseDate parses a string in RFC3339 full-date format and returns the date value it represents.
func ParseDate(s string) (Date, error) {
t, err := time.Parse("2006-01-02", s)
if err != nil {
return Date{}, err
}
return DateOf(t), nil
}
// String returns the date in RFC3339 full-date format.
func (d Date) String() string {
return fmt.Sprintf("%04d-%02d-%02d", d.Year, d.Month, d.Day)
}
// IsValid reports whether the date is valid.
func (d Date) IsValid() bool {
return DateOf(d.In(time.UTC)) == d
}
// In returns the time corresponding to time 00:00:00 of the date in the location.
//
// In is always consistent with time.Date, even when time.Date returns a time
// on a different day. For example, if loc is America/Indiana/Vincennes, then both
// time.Date(1955, time.May, 1, 0, 0, 0, 0, loc)
// and
// civil.Date{Year: 1955, Month: time.May, Day: 1}.In(loc)
// return 23:00:00 on April 30, 1955.
//
// In panics if loc is nil.
func (d Date) In(loc *time.Location) time.Time {
return time.Date(d.Year, d.Month, d.Day, 0, 0, 0, 0, loc)
}
// AddDays returns the date that is n days in the future.
// n can also be negative to go into the past.
func (d Date) AddDays(n int) Date {
return DateOf(d.In(time.UTC).AddDate(0, 0, n))
}
// DaysSince returns the signed number of days between the date and s, not including the end day.
// This is the inverse operation to AddDays.
func (d Date) DaysSince(s Date) (days int) {
// We convert to Unix time so we do not have to worry about leap seconds:
// Unix time increases by exactly 86400 seconds per day.
deltaUnix := d.In(time.UTC).Unix() - s.In(time.UTC).Unix()
return int(deltaUnix / 86400)
}
// Before reports whether d1 occurs before d2.
func (d1 Date) Before(d2 Date) bool {
if d1.Year != d2.Year {
return d1.Year < d2.Year
}
if d1.Month != d2.Month {
return d1.Month < d2.Month
}
return d1.Day < d2.Day
}
// After reports whether d1 occurs after d2.
func (d1 Date) After(d2 Date) bool {
return d2.Before(d1)
}
// IsZero reports whether date fields are set to their default value.
func (d Date) IsZero() bool {
return (d.Year == 0) && (int(d.Month) == 0) && (d.Day == 0)
}
// MarshalText implements the encoding.TextMarshaler interface.
// The output is the result of d.String().
func (d Date) MarshalText() ([]byte, error) {
return []byte(d.String()), nil
}
// UnmarshalText implements the encoding.TextUnmarshaler interface.
// The date is expected to be a string in a format accepted by ParseDate.
func (d *Date) UnmarshalText(data []byte) error {
var err error
*d, err = ParseDate(string(data))
return err
}
// A Time represents a time with nanosecond precision.
//
// This type does not include location information, and therefore does not
// describe a unique moment in time.
//
// This type exists to represent the TIME type in storage-based APIs like BigQuery.
// Most operations on Times are unlikely to be meaningful. Prefer the DateTime type.
type Time struct {
Hour int // The hour of the day in 24-hour format; range [0-23]
Minute int // The minute of the hour; range [0-59]
Second int // The second of the minute; range [0-59]
Nanosecond int // The nanosecond of the second; range [0-999999999]
}
// TimeOf returns the Time representing the time of day in which a time occurs
// in that time's location. It ignores the date.
func TimeOf(t time.Time) Time {
var tm Time
tm.Hour, tm.Minute, tm.Second = t.Clock()
tm.Nanosecond = t.Nanosecond()
return tm
}
// ParseTime parses a string and returns the time value it represents.
// ParseTime accepts an extended form of the RFC3339 partial-time format. After
// the HH:MM:SS part of the string, an optional fractional part may appear,
// consisting of a decimal point followed by one to nine decimal digits.
// (RFC3339 admits only one digit after the decimal point).
func ParseTime(s string) (Time, error) {
t, err := time.Parse("15:04:05.999999999", s)
if err != nil {
return Time{}, err
}
return TimeOf(t), nil
}
// String returns the time in the format described in ParseTime. If Nanoseconds
// is zero, no fractional part will be generated. Otherwise, the result will
// end with a fractional part consisting of a decimal point and nine digits.
func (t Time) String() string {
s := fmt.Sprintf("%02d:%02d:%02d", t.Hour, t.Minute, t.Second)
if t.Nanosecond == 0 {
return s
}
return s + fmt.Sprintf(".%09d", t.Nanosecond)
}
// IsValid reports whether the time is valid.
func (t Time) IsValid() bool {
// Construct a non-zero time.
tm := time.Date(2, 2, 2, t.Hour, t.Minute, t.Second, t.Nanosecond, time.UTC)
return TimeOf(tm) == t
}
// IsZero reports whether time fields are set to their default value.
func (t Time) IsZero() bool {
return (t.Hour == 0) && (t.Minute == 0) && (t.Second == 0) && (t.Nanosecond == 0)
}
// MarshalText implements the encoding.TextMarshaler interface.
// The output is the result of t.String().
func (t Time) MarshalText() ([]byte, error) {
return []byte(t.String()), nil
}
// UnmarshalText implements the encoding.TextUnmarshaler interface.
// The time is expected to be a string in a format accepted by ParseTime.
func (t *Time) UnmarshalText(data []byte) error {
var err error
*t, err = ParseTime(string(data))
return err
}
// A DateTime represents a date and time.
//
// This type does not include location information, and therefore does not
// describe a unique moment in time.
type DateTime struct {
Date Date
Time Time
}
// Note: We deliberately do not embed Date into DateTime, to avoid promoting AddDays and Sub.
// DateTimeOf returns the DateTime in which a time occurs in that time's location.
func DateTimeOf(t time.Time) DateTime {
return DateTime{
Date: DateOf(t),
Time: TimeOf(t),
}
}
// ParseDateTime parses a string and returns the DateTime it represents.
// ParseDateTime accepts a variant of the RFC3339 date-time format that omits
// the time offset but includes an optional fractional time, as described in
// ParseTime. Informally, the accepted format is
// YYYY-MM-DDTHH:MM:SS[.FFFFFFFFF]
// where the 'T' may be a lower-case 't'.
func ParseDateTime(s string) (DateTime, error) {
t, err := time.Parse("2006-01-02T15:04:05.999999999", s)
if err != nil {
t, err = time.Parse("2006-01-02t15:04:05.999999999", s)
if err != nil {
return DateTime{}, err
}
}
return DateTimeOf(t), nil
}
// String returns the datetime in the format described in ParseDateTime.
func (dt DateTime) String() string {
return dt.Date.String() + "T" + dt.Time.String()
}
// IsValid reports whether the datetime is valid.
func (dt DateTime) IsValid() bool {
return dt.Date.IsValid() && dt.Time.IsValid()
}
// In returns the time corresponding to the DateTime in the given location.
//
// If the time is missing or ambiguous at the location, In returns the same
// result as time.Date. For example, if loc is America/Indiana/Vincennes, then
// both
// time.Date(1955, time.May, 1, 0, 30, 0, 0, loc)
// and
// civil.DateTime{
// civil.Date{Year: 1955, Month: time.May, Day: 1}},
// civil.Time{Minute: 30}}.In(loc)
// return 23:30:00 on April 30, 1955.
//
// In panics if loc is nil.
func (dt DateTime) In(loc *time.Location) time.Time {
return time.Date(dt.Date.Year, dt.Date.Month, dt.Date.Day, dt.Time.Hour, dt.Time.Minute, dt.Time.Second, dt.Time.Nanosecond, loc)
}
// Before reports whether dt1 occurs before dt2.
func (dt1 DateTime) Before(dt2 DateTime) bool {
return dt1.In(time.UTC).Before(dt2.In(time.UTC))
}
// After reports whether dt1 occurs after dt2.
func (dt1 DateTime) After(dt2 DateTime) bool {
return dt2.Before(dt1)
}
// IsZero reports whether datetime fields are set to their default value.
func (dt DateTime) IsZero() bool {
return dt.Date.IsZero() && dt.Time.IsZero()
}
// MarshalText implements the encoding.TextMarshaler interface.
// The output is the result of dt.String().
func (dt DateTime) MarshalText() ([]byte, error) {
return []byte(dt.String()), nil
}
// UnmarshalText implements the encoding.TextUnmarshaler interface.
// The datetime is expected to be a string in a format accepted by ParseDateTime
func (dt *DateTime) UnmarshalText(data []byte) error {
var err error
*dt, err = ParseDateTime(string(data))
return err
}
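For orientation, the vendored `civil` types compose like this (a minimal sketch using only functions shown above):
```go
package main

import (
	"fmt"
	"time"

	"github.com/golang-sql/civil"
)

func main() {
	d := civil.DateOf(time.Date(2026, time.April, 26, 12, 0, 0, 0, time.UTC))
	fmt.Println(d)            // 2026-04-26
	fmt.Println(d.AddDays(5)) // 2026-05-01

	dt, _ := civil.ParseDateTime("2026-04-26T12:00:00")
	fmt.Println(dt) // 2026-04-26T12:00:00
}
```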

vendor/github.com/golang-sql/sqlexp/LICENSE generated vendored Normal file

@@ -0,0 +1,27 @@
Copyright (c) 2017 The Go Authors. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

vendor/github.com/golang-sql/sqlexp/PATENTS generated vendored Normal file

@@ -0,0 +1,22 @@
Additional IP Rights Grant (Patents)
"This implementation" means the copyrightable works distributed by
Google as part of the Go project.
Google hereby grants to You a perpetual, worldwide, non-exclusive,
no-charge, royalty-free, irrevocable (except as stated in this section)
patent license to make, have made, use, offer to sell, sell, import,
transfer and otherwise run, modify and propagate the contents of this
implementation of Go, where such license applies only to those patent
claims, both currently owned or controlled by Google and acquired in
the future, licensable by Google that are necessarily infringed by this
implementation of Go. This grant does not include claims that would be
infringed only as a consequence of further modification of this
implementation. If you or your agent or exclusive licensee institute or
order or agree to the institution of patent litigation against any
entity (including a cross-claim or counterclaim in a lawsuit) alleging
that this implementation of Go or any code incorporated within this
implementation of Go constitutes direct or contributory patent
infringement, or inducement of patent infringement, then any patent
rights granted to you under this License for this implementation of Go
shall terminate as of the date such litigation is filed.

vendor/github.com/golang-sql/sqlexp/README.md generated vendored Normal file

@@ -0,0 +1,5 @@
# golang-sql exp
https://godoc.org/github.com/golang-sql/sqlexp
All contributions must have a valid golang CLA.

vendor/github.com/golang-sql/sqlexp/doc.go generated vendored Normal file

@@ -0,0 +1,8 @@
// Copyright 2017 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package sqlexp provides interfaces and functions that may be adopted into
// the database/sql package in the future. All features may change or be removed
// in the future.
package sqlexp // imports github.com/golang-sql/sqlexp

vendor/github.com/golang-sql/sqlexp/messages.go generated vendored Normal file

@@ -0,0 +1,80 @@
package sqlexp
import (
"context"
"fmt"
)
// RawMessage is returned from RowsMessage.
type RawMessage interface{}
// ReturnMessage may be passed into a Query argument.
//
// Drivers must implement driver.NamedValueChecker,
// call ReturnMessageInit on it, save it internally,
// and return driver.ErrRemoveArgument to prevent
// this from appearing in the query arguments.
//
// Queries that receive this message should also not return
// SQL errors from the Query method, but wait to return
// it in a Message.
type ReturnMessage struct {
queue chan RawMessage
}
// Message is called by clients after Query to dequeue messages.
func (m *ReturnMessage) Message(ctx context.Context) RawMessage {
select {
case <-ctx.Done():
return MsgNextResultSet{}
case raw := <-m.queue:
return raw
}
}
// ReturnMessageEnqueue is called by the driver to enqueue the message.
// Drivers should not call this until after it returns from Query.
func ReturnMessageEnqueue(ctx context.Context, m *ReturnMessage, raw RawMessage) error {
select {
case <-ctx.Done():
return ctx.Err()
case m.queue <- raw:
return nil
}
}
// ReturnMessageInit is called by database/sql to set up the ReturnMessage internals.
func ReturnMessageInit(m *ReturnMessage) {
m.queue = make(chan RawMessage, 15)
}
type (
// MsgNextResultSet must be checked for. When received, NextResultSet
// should be called and if false the message loop should be exited.
MsgNextResultSet struct{}
// MsgNext indicates the result set ready to be scanned.
// This message will often be followed with:
//
// for rows.Next() {
// rows.Scan(&v)
// }
MsgNext struct{}
// MsgRowsAffected returns the number of rows affected.
// Not all operations that affect rows return results, thus this message
// may be received multiple times.
MsgRowsAffected struct{ Count int64 }
// MsgLastInsertID returns the value of the last inserted row. For many
// database systems and tables this will return int64. Some databases
// may return a string or GUID equivalent.
MsgLastInsertID struct{ Value interface{} }
// MsgNotice is raised from the SQL text and is only informational.
MsgNotice struct{ Message fmt.Stringer }
// MsgError returns SQL errors from the database system (not transport
// or other system level errors).
MsgError struct{ Error error }
)
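The comments above describe a message-pump protocol; here is a hedged consumer-loop sketch (imports of `context`, `database/sql`, `fmt`, and this package assumed; requires a driver that supports `ReturnMessage`, and full error handling is elided):
```go
func drainMessages(ctx context.Context, db *sql.DB, query string) error {
	retmsg := &sqlexp.ReturnMessage{}
	rows, err := db.QueryContext(ctx, query, retmsg)
	if err != nil {
		return err
	}
	defer rows.Close()
	for {
		switch m := retmsg.Message(ctx).(type) {
		case sqlexp.MsgNext:
			for rows.Next() {
				// rows.Scan(...) for the current result set
			}
		case sqlexp.MsgRowsAffected:
			fmt.Println("rows affected:", m.Count)
		case sqlexp.MsgError:
			fmt.Println("sql error:", m.Error)
		case sqlexp.MsgNextResultSet:
			if !rows.NextResultSet() {
				return rows.Err()
			}
		}
	}
}
```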

vendor/github.com/golang-sql/sqlexp/mssql.go generated vendored Normal file

@@ -0,0 +1,73 @@
// Copyright 2017 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package sqlexp
import (
"context"
"database/sql/driver"
"fmt"
"strings"
)
type mssql struct{}
var (
_ DriverNamer = mssql{}
_ DriverQuoter = mssql{}
_ DriverSavepointer = mssql{}
)
func (mssql) Open(string) (driver.Conn, error) {
panic("not implemented")
}
func (mssql) Namer(ctx context.Context) (Namer, error) {
return sqlServerNamer{}, nil
}
func (mssql) Quoter(ctx context.Context) (Quoter, error) {
return sqlServerQuoter{}, nil
}
func (mssql) Savepointer() (Savepointer, error) {
return sqlServerSavepointer{}, nil
}
type sqlServerNamer struct{}
func (sqlServerNamer) Name() string {
return "sqlserver"
}
func (sqlServerNamer) Dialect() string {
return DialectTSQL
}
type sqlServerQuoter struct{}
func (sqlServerQuoter) ID(name string) string {
return "[" + strings.Replace(name, "]", "]]", -1) + "]"
}
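// For example, ID(`we]ird`) yields `[we]]ird]`: closing brackets are doubled,
// then the whole identifier is wrapped in brackets.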
func (sqlServerQuoter) Value(v interface{}) string {
switch v := v.(type) {
default:
panic("unsupported value")
case string:
return "'" + strings.Replace(v, "'", "''", -1) + "'"
}
}
type sqlServerSavepointer struct{}
func (sqlServerSavepointer) Release(name string) string {
return ""
}
func (sqlServerSavepointer) Create(name string) string {
return fmt.Sprintf("save tran %s;", name)
}
func (sqlServerSavepointer) Rollback(name string) string {
return fmt.Sprintf("rollback tran %s;", name)
}

vendor/github.com/golang-sql/sqlexp/namer.go generated vendored Normal file

@@ -0,0 +1,59 @@
// Copyright 2017 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package sqlexp
import (
"context"
"database/sql/driver"
"errors"
"reflect"
)
const (
DialectPostgres = "postgres"
DialectTSQL = "tsql"
DialectMySQL = "mysql"
DialectSQLite = "sqlite"
DialectOracle = "oracle"
)
// Namer returns the name of the database and the SQL dialect it
// uses.
type Namer interface {
// Name of the database management system.
//
// Examples:
// "postgresql-9.6"
// "sqlserver-10.54.32"
// "cockroachdb-1.0"
Name() string
// Dialect of SQL used in the database.
Dialect() string
}
// DriverNamer may be implemented on the driver.Driver interface.
// It may need to request information from the server to return
// the correct information.
type DriverNamer interface {
Namer(ctx context.Context) (Namer, error)
}
// NamerFromDriver returns the DriverNamer from the DB if
// it is implemented.
func NamerFromDriver(d driver.Driver, ctx context.Context) (Namer, error) {
if q, is := d.(DriverNamer); is {
return q.Namer(ctx)
}
dv := reflect.ValueOf(d)
d, found := internalDrivers[dv.Type().String()]
if found {
if q, is := d.(DriverNamer); is {
return q.Namer(ctx)
}
}
return nil, errors.New("namer not found")
}
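A sketch of how `NamerFromDriver` can steer SQL generation (hypothetical helper; imports and an open `*sql.DB` assumed):
```go
func pickLimitQuery(ctx context.Context, db *sql.DB) string {
	// Portable default; specialize when the dialect is recognizable.
	query := "SELECT * FROM users LIMIT 1"
	if n, err := sqlexp.NamerFromDriver(db.Driver(), ctx); err == nil && n.Dialect() == sqlexp.DialectTSQL {
		query = "SELECT TOP 1 * FROM users"
	}
	return query
}
```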

vendor/github.com/golang-sql/sqlexp/pg.go generated vendored Normal file

@@ -0,0 +1,67 @@
// Copyright 2017 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package sqlexp
import (
"context"
"database/sql/driver"
"fmt"
)
type postgresql struct{}
var (
_ DriverNamer = postgresql{}
_ DriverQuoter = postgresql{}
_ DriverSavepointer = postgresql{}
)
func (postgresql) Open(string) (driver.Conn, error) {
panic("not implemented")
}
func (postgresql) Namer(ctx context.Context) (Namer, error) {
return pgNamer{}, nil
}
func (postgresql) Quoter(ctx context.Context) (Quoter, error) {
panic("not implemented")
}
func (postgresql) Savepointer() (Savepointer, error) {
return pgSavepointer{}, nil
}
type pgNamer struct{}
func (pgNamer) Name() string {
return "postgresql"
}
func (pgNamer) Dialect() string {
return DialectPostgres
}
type pgQuoter struct{}
func (pgQuoter) ID(name string) string {
return ""
}
func (pgQuoter) Value(v interface{}) string {
return ""
}
type pgSavepointer struct{}
func (pgSavepointer) Release(name string) string {
return fmt.Sprintf("release savepoint %s;", name)
}
func (pgSavepointer) Create(name string) string {
return fmt.Sprintf("savepoint %s;", name)
}
func (pgSavepointer) Rollback(name string) string {
return fmt.Sprintf("rollback to savepoint %s;", name)
}

vendor/github.com/golang-sql/sqlexp/querier.go generated vendored Normal file

@@ -0,0 +1,22 @@
// Copyright 2017 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package sqlexp
import (
"context"
"database/sql"
)
// Querier is the common interface to execute queries on a DB, Tx, or Conn.
type Querier interface {
ExecContext(ctx context.Context, query string, args ...interface{}) (sql.Result, error)
QueryContext(ctx context.Context, query string, args ...interface{}) (*sql.Rows, error)
QueryRowContext(ctx context.Context, query string, args ...interface{}) *sql.Row
}
var (
_ Querier = &sql.DB{}
_ Querier = &sql.Tx{}
)
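Because `*sql.DB` and `*sql.Tx` both satisfy `Querier`, helpers written against it run unchanged inside or outside a transaction (a sketch; the table name is illustrative):
```go
func countUsers(ctx context.Context, q sqlexp.Querier) (int, error) {
	var n int
	err := q.QueryRowContext(ctx, "SELECT COUNT(*) FROM users").Scan(&n)
	return n, err
}
```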

vendor/github.com/golang-sql/sqlexp/quoter.go generated vendored Normal file

@@ -0,0 +1,57 @@
// Copyright 2017 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package sqlexp
import (
"context"
"database/sql/driver"
"errors"
"reflect"
)
// BUG(kardianos): Both the Quoter and Namer may need to access the server.
// Quoter returns safe and valid SQL strings to use when building a SQL text.
type Quoter interface {
// ID quotes identifiers such as schema, table, or column names.
// ID does not operate on multipart identifiers such as "public.Table",
// it only operates on single identifiers such as "public" and "Table".
ID(name string) string
// Value quotes database values such as string or []byte types as strings
// that are suitable and safe to embed in SQL text. The returned value
// of a string will include all surrounding quotes.
//
// If a value type is not supported it must panic.
Value(v interface{}) string
}
// DriverQuoter returns a Quoter interface and is suitable for extending
// the driver.Driver type.
//
// The driver may need to hit the database to determine how it is configured to
// ensure the correct escaping rules are used.
type DriverQuoter interface {
Quoter(ctx context.Context) (Quoter, error)
}
// QuoterFromDriver takes a database driver, often obtained through a sql.DB.Driver
// call or from using it directly to get the quoter interface.
//
// Currently MssqlDriver is hard-coded to also return a valid Quoter.
func QuoterFromDriver(d driver.Driver, ctx context.Context) (Quoter, error) {
if q, is := d.(DriverQuoter); is {
return q.Quoter(ctx)
}
dv := reflect.ValueOf(d)
d, found := internalDrivers[dv.Type().String()]
if found {
if q, is := d.(DriverQuoter); is {
return q.Quoter(ctx)
}
}
return nil, errors.New("quoter interface not found")
}

vendor/github.com/golang-sql/sqlexp/registry.go generated vendored Normal file

@@ -0,0 +1,15 @@
// Copyright 2017 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package sqlexp
import (
"database/sql/driver"
)
var internalDrivers = map[string]driver.Driver{
"*mssql.MssqlDriver": mssql{},
"*pq.Driver": postgresql{},
"*stdlib.Driver": postgresql{},
}

vendor/github.com/golang-sql/sqlexp/savepoint.go generated vendored Normal file

@@ -0,0 +1,37 @@
// Copyright 2017 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package sqlexp
import (
"database/sql/driver"
"errors"
"reflect"
)
type Savepointer interface {
Release(name string) string
Create(name string) string
Rollback(name string) string
}
type DriverSavepointer interface {
Savepointer() (Savepointer, error)
}
// SavepointFromDriver returns the driver's Savepointer, if implemented.
func SavepointFromDriver(d driver.Driver) (Savepointer, error) {
if q, is := d.(DriverSavepointer); is {
return q.Savepointer()
}
dv := reflect.ValueOf(d)
d, found := internalDrivers[dv.Type().String()]
if found {
if q, is := d.(DriverSavepointer); is {
return q.Savepointer()
}
}
return nil, errors.New("savepointer interface not found")
}

vendor/github.com/microsoft/go-mssqldb/.gitignore generated vendored Normal file

@@ -0,0 +1,24 @@
/.idea
/.connstr
.vscode
.terraform
*.tfstate*
*.log
*.swp
*~
coverage.json
coverage.txt
coverage.xml
testresults.xml
.azureconnstr
# Example binaries
examples/*/simple
examples/*/azuread-service-principal
examples/*/tsql
examples/*/bulk
examples/*/routine
examples/*/tvp
examples/*/aws-rds-proxy-iam-auth
examples/*/azuread-accesstoken
examples/*/azuread-service-principal-authtoken

vendor/github.com/microsoft/go-mssqldb/.golangci.yml generated vendored Normal file

@@ -0,0 +1,10 @@
version: "2"
linters:
enable:
# basic go linters
- govet
- revive # replacing golint as it is deprecated
# sql related linters
- rowserrcheck
- sqlclosecheck

vendor/github.com/microsoft/go-mssqldb/CHANGELOG.md generated vendored Normal file

@@ -0,0 +1,147 @@
# Changelog
## 1.9.6
### Features
* Added new `serverCertificate` connection parameter for byte-for-byte certificate validation, matching Microsoft.Data.SqlClient behavior. This parameter skips hostname validation, chain validation, and expiry checks, only verifying that the server's certificate exactly matches the provided file. This is useful when the server's hostname doesn't match the certificate CN/SAN. (#304)
* The existing `certificate` parameter maintains backward compatibility with traditional X.509 chain validation including hostname checks, expiry validation, and chain-of-trust verification.
* `serverCertificate` cannot be used with `certificate` or `hostnameincertificate` parameters to prevent conflicting validation methods.
## 1.9.3
### Bug fixes
* Fix parsing of ADO connection strings with double-quoted values containing semicolons (#282)
## 1.9.2
### Bug fixes
* Fix race condition in message queue query model (#277)
## 1.9.1
### Bug fixes
* Fix bulk insert failure with datetime values near midnight due to day overflow (#271)
* Fix: apply guidConversion option in TestBulkcopy (#255)
### Features
* support configuring custom time.Location for datetime encoding and decoding via DSN (#260)
* Implement support for the latest Azure credential types in the azuread package (#269)
## 1.8.2
### Bug fixes
* Added "Pwd" as a recognized alias for "Password" in connection strings (#262)
* Updated `isProc` to detect more keywords
## 1.7.0
### Changed
* Changed always encrypted key provider error handling not to panic on failure
### Features
* Support DER certificates for server authentication (#152)
### Bug fixes
* Improved speed of CharsetToUTF8 (#154)
## 1.7.0
### Changed
* krb5 authenticator supports standard Kerberos environment variables for configuration
## 1.6.0
### Changed
* Go.mod updated to Go 1.17
* Azure SDK for Go dependencies updated
### Features
* Added `ActiveDirectoryAzCli` and `ActiveDirectoryDeviceCode` authentication types to `azuread` package
* Always Encrypted encryption and decryption with 2 hour key cache (#116)
* 'pfx', 'MSSQL_CERTIFICATE_STORE', and 'AZURE_KEY_VAULT' encryption key providers
* TDS8 can now be used for connections by setting encrypt="strict"
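For example, TDS8 can be requested in URL form (host and credentials are placeholders):

```
sqlserver://appuser:secret@myserver.example.com?encrypt=strict
```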
## 1.5.0
### Bug fixes
* Handle extended character in SQL instance names for browser lookup (#122)
## 1.4.0
### Features
* Adds UnmarshalJSON interface for UniqueIdentifier (#126)
### Bug fixes
* Fixes MarshalText prototype for UniqueIdentifier
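A minimal sketch of round-tripping a GUID through JSON with these interfaces; the literal value is arbitrary:

```go
package main

import (
	"encoding/json"
	"fmt"

	mssql "github.com/microsoft/go-mssqldb"
)

func main() {
	var id mssql.UniqueIdentifier
	// UnmarshalJSON (added in 1.4.0) parses the canonical GUID string form.
	if err := json.Unmarshal([]byte(`"6F9619FF-8B86-D011-B42D-00C04FC964FF"`), &id); err != nil {
		panic(err)
	}
	out, _ := json.Marshal(id) // uses the fixed MarshalText
	fmt.Println(id, string(out))
}
```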
## 1.2.0
### Features
* A connector's dialer can now be used to resolve DNS if the dialer implements the `HostDialer` interface
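A dialer satisfying such an interface could look roughly like the sketch below; the exact shape of HostDialer (an embedded DialContext plus a HostName method) is an assumption here, so check the driver source:

```go
package dialersketch

import (
	"context"
	"net"
)

// resolvingDialer dials connections itself and also answers host-name
// lookups, in the style a HostDialer-like interface would require.
type resolvingDialer struct {
	host string // name to report instead of having the driver resolve DNS
}

func (d resolvingDialer) DialContext(ctx context.Context, network, addr string) (net.Conn, error) {
	var nd net.Dialer
	return nd.DialContext(ctx, network, addr)
}

// HostName returns the name the driver should use, e.g. for TLS validation.
func (d resolvingDialer) HostName() string {
	return d.host
}
```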
## 1.0.0
### Features
* `admin` protocol for dedicated administrator connections
### Changed
* Added `Hidden()` method to `ProtocolParser` interface
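As a sketch, a dedicated administrator connection might be requested through a protocol selector in the connection string; the protocol= key is an assumption based on the ProtocolParser mentioned above, and host and credentials are placeholders:

```
# protocol=admin is assumed syntax; verify against the driver's DSN docs
sqlserver://sa:secret@myserver.example.com?protocol=admin
```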
## 0.21.0
### Features
* Updated azidentity to 1.2.1, which adds in memory cache for managed credentials ([#90](https://github.com/microsoft/go-mssqldb/pull/90))
### Bug fixes
* Fixed uninitialized server name in TLS config ([#93](https://github.com/microsoft/go-mssqldb/issues/93))([#94](https://github.com/microsoft/go-mssqldb/pull/94))
* Fixed several kerberos authentication usages on Linux with new krb5 authentication provider. ([#65](https://github.com/microsoft/go-mssqldb/pull/65))
### Changed
* New kerberos authenticator implementation uses more explicit connection string parameters.
| Old | New |
|--------------|--------------------|
| krb5conffile | krb5-configfile |
| krbcache | krb5-credcachefile |
| keytabfile | krb5-keytabfile |
| realm | krb5-realm |
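Under the new names, a Kerberos connection string might look like this; paths and realm are placeholders, and the authenticator=krb5 key is an assumption based on the krb5 package:

```
# authenticator=krb5 is assumed; the krb5-* keys follow the table above
server=myserver.example.com;user id=appuser@EXAMPLE.COM;authenticator=krb5;krb5-configfile=/etc/krb5.conf;krb5-credcachefile=/tmp/krb5cc_1000
```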
## 0.20.0
### Features
* Add driver version and name to TDS login packets
* Add `pipe` connection string parameter for named pipe dialer
* Expose network errors that occur during connection establishment. They are now wrapped and can be detected with errors.As/errors.Is (see the sketch below). These errors can occur, and could even before this change, whenever sql.DB has no free connection for the query being executed.
### Bug fixes
* Added checks while reading prelogin for invalid data ([#64](https://github.com/microsoft/go-mssqldb/issues/64))([86ecefd8b](https://github.com/microsoft/go-mssqldb/commit/86ecefd8b57683aeb5ad9328066ee73fbccd62f5))
* Fixed multi-protocol dialer path to avoid unneeded SQL Browser queries
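The wrapped network errors can be detected with the standard errors helpers; a minimal sketch, with the DSN and credentials as placeholders:

```go
package main

import (
	"context"
	"database/sql"
	"errors"
	"fmt"
	"net"

	_ "github.com/microsoft/go-mssqldb"
)

func main() {
	db, err := sql.Open("sqlserver", "sqlserver://appuser:secret@myserver.example.com")
	if err != nil {
		panic(err)
	}
	defer db.Close()

	// Ping forces a real connection attempt, where network errors surface.
	if err := db.PingContext(context.Background()); err != nil {
		var netErr net.Error
		if errors.As(err, &netErr) {
			fmt.Println("network error, may be retryable:", netErr)
			return
		}
		fmt.Println("non-network error:", err)
	}
}
```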

vendor/github.com/microsoft/go-mssqldb/CONTRIBUTING.md generated vendored Normal file

@@ -0,0 +1,14 @@
# Contributing
This project welcomes contributions and suggestions. Most contributions require you to
agree to a Contributor License Agreement (CLA) declaring that you have the right to,
and actually do, grant us the rights to use your contribution. For details, visit
https://cla.microsoft.com.
When you submit a pull request, a CLA-bot will automatically determine whether you need
to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the
instructions provided by the bot. You will only need to do this once across all repositories using our CLA.
This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/)
or contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.

vendor/github.com/microsoft/go-mssqldb/LICENSE.txt generated vendored Normal file

@@ -0,0 +1,28 @@
Copyright (c) 2012 The Go Authors. All rights reserved.
Copyright (c) Microsoft Corporation.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Some files were not shown because too many files have changed in this diff.