Compare commits

..

15 Commits

Author SHA1 Message Date
bd54e85727 chore(release): update package version to 1.0.44 2026-04-08 21:34:28 +02:00
b042b2d508 docs: 📝 Update documentation 2026-04-08 21:34:00 +02:00
af1733dc9a feat(pkg): update package description for clarity and consistency 2026-04-08 21:21:33 +02:00
389fff2b44 chore(release): update package version to 1.0.43 2026-04-08 20:59:23 +02:00 (CI: pkg-aur failed)
f331ba2b61 chore(release): update package version and add packaging files for AUR, Debian, and RPM 2026-04-08 20:59:11 +02:00
f4b8fc5382 feat(writers): add sortConstraints function to sort constraints by sequence and name 2026-02-28 19:52:04 +02:00
dc9172cc7c feat(templ): add support for --from-list flag and related tests 2026-02-28 19:32:19 +02:00
ee88c07989 style(report, writers, graphql, prisma, typeorm): replace sb.WriteString with fmt.Fprintf for consistency 2026-02-28 17:08:12 +02:00
ff1180524a feat(merge): add support for merging from a list of source files 2026-02-28 17:06:49 +02:00 (CI: cancelled)
Hein
480038d51d feat(writers): quote default values based on SQL column type (CI: Lint failed)
Bun and GORM struct tags now emit quoted defaults for string/date/time/UUID
columns (e.g. default:'disconnected') and unquoted defaults for numeric and
boolean columns (e.g. default:0, default:true). Function-call expressions
such as now() or gen_random_uuid() are never quoted regardless of type.

Adds QuoteDefaultValue(value, sqlType) helper in pkg/writers and updates
both type mappers and the bun writer tests accordingly.
2026-02-20 16:03:50 +02:00
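As an illustration of the rule described in this commit, here is a standalone sketch of the quoting decision. The real `QuoteDefaultValue(value, sqlType)` helper lives in `pkg/writers`; the exact type lists below are assumptions for illustration, not the shipped implementation.

```go
package main

import (
	"fmt"
	"strings"
)

// QuoteDefaultValue sketches the commit's rule: quote defaults for
// textual/temporal/UUID columns, leave numeric and boolean defaults bare,
// and never quote function-call expressions such as now().
func QuoteDefaultValue(value, sqlType string) string {
	// Function calls like now() or gen_random_uuid() pass through unquoted.
	v := strings.TrimSpace(value)
	if strings.Contains(v, "(") && strings.HasSuffix(v, ")") {
		return value
	}
	// Hypothetical type list; the real mapper covers more SQL types.
	switch strings.ToLower(sqlType) {
	case "text", "varchar", "char", "uuid", "date", "time", "timestamp", "timestamptz":
		return "'" + value + "'"
	default: // numeric and boolean types stay bare
		return value
	}
}

func main() {
	fmt.Println(QuoteDefaultValue("disconnected", "varchar")) // 'disconnected'
	fmt.Println(QuoteDefaultValue("0", "integer"))            // 0
	fmt.Println(QuoteDefaultValue("now()", "timestamptz"))    // now()
}
```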
77436757c8 fix(type_mapper): update timestamp type mapping to use SqlTimeStamp 2026-02-08 21:35:27 +02:00
5e6f03e412 feat(type_mapper): add support for serial types and auto-increment tags 2026-02-08 17:48:58 +02:00
1dcbc79387 feat(pgsql): enhance data type mapping to support serial types 2026-02-08 17:31:28 +02:00
59c4a5ebf8 test(writer): enhance has-many relationship tests with join tag verification 2026-02-08 15:20:20 +02:00
091e1913ee feat(version): retrieve version and build date from VCS if unset 2026-02-08 15:04:03 +02:00
42 changed files with 1705 additions and 522 deletions


@@ -1,5 +0,0 @@
---
description: Build the RelSpec binary
---
Build the RelSpec project by running `make build`. Report the build status and any errors encountered.


@@ -1,9 +0,0 @@
---
description: Generate test coverage report
---
Generate and display test coverage for RelSpec:
1. Run `go test -cover ./...` to get coverage percentage
2. If detailed coverage is needed, run `go test -coverprofile=coverage.out ./...` and then `go tool cover -html=coverage.out` to generate HTML report
Show coverage statistics and identify areas needing more tests.


@@ -1,10 +0,0 @@
---
description: Run Go linters on the codebase
---
Run linting tools on the RelSpec codebase:
1. First run `gofmt -l .` to check formatting
2. If golangci-lint is available, run `golangci-lint run ./...`
3. Run `go vet ./...` to check for suspicious constructs
Report any issues found and suggest fixes if needed.


@@ -1,5 +0,0 @@
---
description: Run all tests for the RelSpec project
---
Run `go test ./...` to execute all unit tests in the project. Show a summary of the results and highlight any failures.


@@ -0,0 +1,327 @@
name: Release
on:
  push:
    tags:
      - 'v*'
  workflow_dispatch:
    inputs:
      tag:
        description: 'Tag to release (e.g. v1.2.3)'
        required: true
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version-file: go.mod
      - name: Test
        run: go test ./...
      - name: Lint
        run: go vet ./...
  release:
    needs: test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - uses: actions/setup-go@v5
        with:
          go-version-file: go.mod
      - name: Build release binaries
        run: |
          VERSION="${{ github.event.inputs.tag || github.ref_name }}"
          for target in "linux/amd64" "linux/arm64" "darwin/amd64" "darwin/arm64" "windows/amd64"; do
            GOOS="${target%/*}"
            GOARCH="${target#*/}"
            EXT=""
            [ "$GOOS" = "windows" ] && EXT=".exe"
            NAME="relspec-${GOOS}-${GOARCH}${EXT}"
            GOOS="$GOOS" GOARCH="$GOARCH" go build \
              -trimpath \
              -ldflags "-X git.warky.dev/wdevs/relspecgo/cmd/relspec.version=${VERSION}" \
              -o "$NAME" ./cmd/relspec
            echo "Built $NAME"
          done
      - name: Create release and upload assets
        run: |
          TAG="${{ github.event.inputs.tag || github.ref_name }}"
          API="${GITHUB_API_URL}/repos/${GITHUB_REPOSITORY}/releases"
          # Collect commits since the previous tag (or last 20 if no prior tag)
          PREV_TAG=$(git tag --sort=-version:refname | grep -v "^${TAG}$" | head -1)
          if [ -n "$PREV_TAG" ]; then
            RANGE="${PREV_TAG}..${TAG}"
          else
            RANGE="HEAD~20..HEAD"
          fi
          NOTES=$(git log "$RANGE" --pretty=format:"- %s" --no-merges)
          BODY="## What's changed"$'\n'"${NOTES}"
          # Escape for JSON
          BODY_JSON=$(printf '%s' "$BODY" | python3 -c 'import json,sys; print(json.dumps(sys.stdin.read()))')
          RELEASE=$(curl -s -X POST "$API" \
            -H "Authorization: token ${GITHUB_TOKEN}" \
            -H "Content-Type: application/json" \
            -d "{\"tag_name\":\"${TAG}\",\"name\":\"${TAG}\",\"body\":${BODY_JSON}}")
          UPLOAD_URL=$(echo "$RELEASE" | grep -o '"upload_url":"[^"]*"' | cut -d'"' -f4 | sed 's/{[^}]*}//')
          if [ -z "$UPLOAD_URL" ]; then
            echo "Failed to create release: $RELEASE"
            exit 1
          fi
          for f in relspec-*; do
            echo "Uploading $f..."
            curl -s -X POST "${UPLOAD_URL}?name=${f}" \
              -H "Authorization: token ${GITHUB_TOKEN}" \
              -H "Content-Type: application/octet-stream" \
              --data-binary "@${f}" > /dev/null
          done
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  pkg-aur:
    needs: release
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Publish to AUR
        env:
          AUR_SSH_KEY: ${{ secrets.AUR_SSH_KEY }}
        run: |
          set -euo pipefail
          VERSION="${{ github.event.inputs.tag || github.ref_name }}"
          PKGVER="${VERSION#v}"
          AUR_KEY_PATH="$HOME/.ssh/aur"
          AUR_KNOWN_HOSTS="$HOME/.ssh/known_hosts"
          # Set up SSH for AUR
          mkdir -p ~/.ssh
          chmod 700 ~/.ssh
          if [ -z "${AUR_SSH_KEY:-}" ]; then
            echo "AUR_SSH_KEY is empty"
            exit 1
          fi
          # Support raw multiline keys, escaped \n secrets, or base64-encoded keys.
          CLEAN_AUR_SSH_KEY="$(printf '%s' "$AUR_SSH_KEY" | tr -d '\r')"
          if printf '%s' "$CLEAN_AUR_SSH_KEY" | grep -q "^-----BEGIN .*PRIVATE KEY-----$"; then
            printf '%s\n' "$CLEAN_AUR_SSH_KEY" > "$AUR_KEY_PATH"
          elif printf '%s' "$CLEAN_AUR_SSH_KEY" | grep -q '\\n'; then
            printf '%b\n' "$CLEAN_AUR_SSH_KEY" > "$AUR_KEY_PATH"
          else
            if printf '%s' "$CLEAN_AUR_SSH_KEY" | tr -d '[:space:]' | base64 --decode > "$AUR_KEY_PATH" 2>/dev/null; then
              :
            else
              printf '%s\n' "$CLEAN_AUR_SSH_KEY" > "$AUR_KEY_PATH"
            fi
          fi
          chmod 600 "$AUR_KEY_PATH"
          if ! ssh-keygen -y -f "$AUR_KEY_PATH" >/dev/null 2>&1; then
            echo "AUR_SSH_KEY is not a valid private key."
            echo "Store it as a raw private key, an escaped private key with \\n, or a base64-encoded private key."
            exit 1
          fi
          ssh-keyscan -t rsa,ed25519 aur.archlinux.org >> "$AUR_KNOWN_HOSTS"
          chmod 644 "$AUR_KNOWN_HOSTS"
          # Clone AUR repo
          GIT_SSH_COMMAND="ssh -o IdentitiesOnly=yes -o StrictHostKeyChecking=yes -o UserKnownHostsFile=$AUR_KNOWN_HOSTS -i $AUR_KEY_PATH" \
            git clone ssh://aur@aur.archlinux.org/relspec.git aur-repo
          CURRENT_PKGVER=$(awk -F= '/^pkgver=/ {print $2; exit}' aur-repo/PKGBUILD | tr -d "[:space:]")
          CURRENT_PKGREL=$(awk -F= '/^pkgrel=/ {print $2; exit}' aur-repo/PKGBUILD | tr -d "[:space:]")
          if [ "$CURRENT_PKGVER" = "$PKGVER" ]; then
            case "$CURRENT_PKGREL" in
              ''|*[!0-9]*)
                echo "Unsupported pkgrel in AUR repo: ${CURRENT_PKGREL}"
                exit 1
                ;;
              *)
                PKGREL=$((CURRENT_PKGREL + 1))
                ;;
            esac
          else
            PKGREL=1
          fi
          echo "Publishing AUR package version ${PKGVER}-${PKGREL}"
          # Compute SHA256 of the source archive from the same URL the PKGBUILD will download.
          SHA=$(curl -fsSL "https://git.warky.dev/wdevs/relspecgo/archive/v${PKGVER}.zip" | sha256sum | cut -d' ' -f1)
          # Update PKGBUILD: keep remote source URL, bump version/checksum, and increment pkgrel for same-version rebuilds.
          sed -e "s/^pkgver=.*/pkgver=${PKGVER}/" \
              -e "s/^pkgrel=.*/pkgrel=${PKGREL}/" \
              -e "s/^sha256sums=.*/sha256sums=('${SHA}')/" \
              linux/arch/PKGBUILD > aur-repo/PKGBUILD
          # Generate .SRCINFO inside an Arch container (docker cp avoids DinD volume mount issues)
          CID=$(docker run -d archlinux:latest sleep infinity)
          docker cp aur-repo/PKGBUILD $CID:/build/PKGBUILD || (docker exec $CID mkdir -p /build && docker cp aur-repo/PKGBUILD $CID:/build/PKGBUILD)
          docker exec $CID bash -c "
            pacman -Sy --noconfirm base-devel &&
            useradd -m builder &&
            chown -R builder:builder /build &&
            runuser -u builder -- bash -c 'cd /build && makepkg --printsrcinfo > .SRCINFO'
          "
          docker cp $CID:/build/.SRCINFO aur-repo/.SRCINFO
          docker rm -f $CID
          # Commit and push to AUR master
          cd aur-repo
          git config user.email "hein@warky.dev"
          git config user.name "Hein"
          git add PKGBUILD .SRCINFO
          git commit -m "Update to v${PKGVER}-${PKGREL}"
          GIT_SSH_COMMAND="ssh -o IdentitiesOnly=yes -o StrictHostKeyChecking=yes -o UserKnownHostsFile=$AUR_KNOWN_HOSTS -i $AUR_KEY_PATH" \
            git push origin HEAD:master
  pkg-deb:
    needs: release
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - uses: actions/setup-go@v5
        with:
          go-version-file: go.mod
      - name: Build Debian packages
        run: |
          VERSION="${{ github.event.inputs.tag || github.ref_name }}"
          PKGVER="${VERSION#v}"
          for GOARCH in amd64 arm64; do
            GOOS=linux GOARCH=$GOARCH go build \
              -trimpath \
              -ldflags "-X git.warky.dev/wdevs/relspecgo/cmd/relspec.version=${PKGVER}" \
              -o relspec ./cmd/relspec
            PKGDIR="relspec_${PKGVER}_${GOARCH}"
            mkdir -p "${PKGDIR}/DEBIAN"
            mkdir -p "${PKGDIR}/usr/bin"
            install -m755 relspec "${PKGDIR}/usr/bin/relspec"
            sed -e "s/VERSION/${PKGVER}/" \
                -e "s/ARCH/${GOARCH}/" \
                linux/debian/control > "${PKGDIR}/DEBIAN/control"
            dpkg-deb --build --root-owner-group "${PKGDIR}"
            echo "Built ${PKGDIR}.deb"
          done
      - name: Upload to release
        run: |
          TAG="${{ github.event.inputs.tag || github.ref_name }}"
          RELEASE=$(curl -s "${GITHUB_API_URL}/repos/${GITHUB_REPOSITORY}/releases/tags/${TAG}" \
            -H "Authorization: token ${GITHUB_TOKEN}")
          UPLOAD_URL=$(echo "$RELEASE" | grep -o '"upload_url":"[^"]*"' | cut -d'"' -f4 | sed 's/{[^}]*}//')
          for f in *.deb; do
            FNAME=$(basename "$f")
            echo "Uploading $FNAME..."
            curl -s -X POST "${UPLOAD_URL}?name=${FNAME}" \
              -H "Authorization: token ${GITHUB_TOKEN}" \
              -H "Content-Type: application/octet-stream" \
              --data-binary "@${f}" > /dev/null
          done
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  pkg-rpm:
    needs: release
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Build RPM
        run: |
          set -euo pipefail
          VERSION="${{ github.event.inputs.tag || github.ref_name }}"
          PKGVER="${VERSION#v}"
          GO_VER="$(awk '/^go / { print $2; exit }' go.mod)"
          if [ -z "${GO_VER}" ]; then
            echo "Failed to determine Go version from go.mod"
            exit 1
          fi
          # Source tarball: prefix=relspec-VERSION/ matches RPM %autosetup convention
          git archive --format=tar.gz --prefix=relspec-${PKGVER}/ HEAD \
            > relspec-${PKGVER}.tar.gz
          # Patch spec version
          sed -i "s/^Version:.*/Version: ${PKGVER}/" linux/centos/relspec.spec
          mkdir -p linux/centos/out
          CID=$(docker create \
            -e GO_VER="${GO_VER}" \
            -e PKGVER="${PKGVER}" \
            -w /build \
            rockylinux:9 \
            bash -lc "
              set -euo pipefail
              dnf install -y rpm-build git &&
              curl -fsSL https://go.dev/dl/go\${GO_VER}.linux-amd64.tar.gz | tar -C /usr/local -xz &&
              export PATH=\$PATH:/usr/local/go/bin &&
              mkdir -p ~/rpmbuild/{BUILD,BUILDROOT,RPMS,SOURCES,SPECS,SRPMS} &&
              cp relspec-${PKGVER}.tar.gz ~/rpmbuild/SOURCES/ &&
              cp linux/centos/relspec.spec ~/rpmbuild/SPECS/ &&
              rpmbuild --nodeps -ba ~/rpmbuild/SPECS/relspec.spec
            ")
          cleanup() {
            docker rm -f "$CID" >/dev/null 2>&1 || true
          }
          trap cleanup EXIT
          docker cp relspec-${PKGVER}.tar.gz "$CID:/build/relspec-${PKGVER}.tar.gz"
          docker cp linux "$CID:/build/linux"
          docker start -a "$CID"
          docker cp "$CID:/root/rpmbuild/RPMS/." linux/centos/out/
          trap - EXIT
          cleanup
      - name: Upload to release
        run: |
          TAG="${{ github.event.inputs.tag || github.ref_name }}"
          RELEASE=$(curl -s "${GITHUB_API_URL}/repos/${GITHUB_REPOSITORY}/releases/tags/${TAG}" \
            -H "Authorization: token ${GITHUB_TOKEN}")
          UPLOAD_URL=$(echo "$RELEASE" | grep -o '"upload_url":"[^"]*"' | cut -d'"' -f4 | sed 's/{[^}]*}//')
          while IFS= read -r f; do
            FNAME=$(basename "$f")
            echo "Uploading $FNAME..."
            curl -s -X POST "${UPLOAD_URL}?name=${FNAME}" \
              -H "Authorization: token ${GITHUB_TOKEN}" \
              -H "Content-Type: application/octet-stream" \
              --data-binary "@${f}" > /dev/null
          done < <(find linux/centos/out -name "*.rpm")
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}


@@ -1,117 +0,0 @@
name: Release
run-name: "Making Release"
on:
  push:
    tags:
      - 'v*.*.*'
jobs:
  build-and-release:
    name: Build and Release
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Set up Go
        uses: actions/setup-go@v5
        with:
          go-version: '1.25'
      - name: Get version from tag
        id: get_version
        run: |
          echo "VERSION=${GITHUB_REF#refs/tags/}" >> $GITHUB_OUTPUT
          echo "BUILD_DATE=$(date -u '+%Y-%m-%d %H:%M:%S UTC')" >> $GITHUB_OUTPUT
          echo "Version: ${GITHUB_REF#refs/tags/}"
      - name: Build binaries for multiple platforms
        run: |
          mkdir -p dist
          # Linux AMD64
          GOOS=linux GOARCH=amd64 go build -o dist/relspec-linux-amd64 -ldflags "-X 'main.version=${{ steps.get_version.outputs.VERSION }}' -X 'main.buildDate=${{ steps.get_version.outputs.BUILD_DATE }}'" ./cmd/relspec
          # Linux ARM64
          GOOS=linux GOARCH=arm64 go build -o dist/relspec-linux-arm64 -ldflags "-X 'main.version=${{ steps.get_version.outputs.VERSION }}' -X 'main.buildDate=${{ steps.get_version.outputs.BUILD_DATE }}'" ./cmd/relspec
          # macOS AMD64
          GOOS=darwin GOARCH=amd64 go build -o dist/relspec-darwin-amd64 -ldflags "-X 'main.version=${{ steps.get_version.outputs.VERSION }}' -X 'main.buildDate=${{ steps.get_version.outputs.BUILD_DATE }}'" ./cmd/relspec
          # macOS ARM64 (Apple Silicon)
          GOOS=darwin GOARCH=arm64 go build -o dist/relspec-darwin-arm64 -ldflags "-X 'main.version=${{ steps.get_version.outputs.VERSION }}' -X 'main.buildDate=${{ steps.get_version.outputs.BUILD_DATE }}'" ./cmd/relspec
          # Windows AMD64
          GOOS=windows GOARCH=amd64 go build -o dist/relspec-windows-amd64.exe -ldflags "-X 'main.version=${{ steps.get_version.outputs.VERSION }}' -X 'main.buildDate=${{ steps.get_version.outputs.BUILD_DATE }}'" ./cmd/relspec
          # Create checksums
          cd dist
          sha256sum * > checksums.txt
          cd ..
      - name: Generate release notes
        id: release_notes
        run: |
          # Get the previous tag
          previous_tag=$(git describe --tags --abbrev=0 HEAD^ 2>/dev/null || echo "")
          if [ -z "$previous_tag" ]; then
            # No previous tag, get all commits
            commits=$(git log --pretty=format:"- %s (%h)" --no-merges)
          else
            # Get commits since the previous tag
            commits=$(git log "${previous_tag}..HEAD" --pretty=format:"- %s (%h)" --no-merges)
          fi
          # Create release notes
          cat > release_notes.md << EOF
          # Release ${{ steps.get_version.outputs.VERSION }}
          ## Changes
          ${commits}
          ## Installation
          Download the appropriate binary for your platform:
          - **Linux (AMD64)**: \`relspec-linux-amd64\`
          - **Linux (ARM64)**: \`relspec-linux-arm64\`
          - **macOS (Intel)**: \`relspec-darwin-amd64\`
          - **macOS (Apple Silicon)**: \`relspec-darwin-arm64\`
          - **Windows (AMD64)**: \`relspec-windows-amd64.exe\`
          Make the binary executable (Linux/macOS):
          \`\`\`bash
          chmod +x relspec-*
          \`\`\`
          Verify the download with the provided checksums.
          EOF
      - name: Create Release
        uses: softprops/action-gh-release@v1
        with:
          body_path: release_notes.md
          files: |
            dist/relspec-linux-amd64
            dist/relspec-linux-arm64
            dist/relspec-darwin-amd64
            dist/relspec-darwin-arm64
            dist/relspec-windows-amd64.exe
            dist/checksums.txt
          draft: false
          prerelease: false
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
      - name: Summary
        run: |
          echo "Release ${{ steps.get_version.outputs.VERSION }} created successfully!"
          echo "Binaries built for:"
          echo " - Linux (amd64, arm64)"
          echo " - macOS (amd64, arm64)"
          echo " - Windows (amd64)"


@@ -204,30 +204,21 @@ release: ## Create and push a new release tag (auto-increments patch version)
	git push origin "$$version"; \
	echo "Tag $$version created and pushed to remote repository."
release-version: ## Create and push a release with specific version (use: make release-version VERSION=v1.2.3)
	@if [ -z "$(VERSION)" ]; then \
		echo "Error: VERSION is required. Usage: make release-version VERSION=v1.2.3"; \
		exit 1; \
	fi
	@version="$(VERSION)"; \
	if ! echo "$$version" | grep -q "^v"; then \
		version="v$$version"; \
	fi; \
	echo "Creating release: $$version"; \
	latest_tag=$$(git describe --tags --abbrev=0 2>/dev/null || echo ""); \
	if [ -z "$$latest_tag" ]; then \
		commit_logs=$$(git log --pretty=format:"- %s" --no-merges); \
	else \
		commit_logs=$$(git log "$${latest_tag}..HEAD" --pretty=format:"- %s" --no-merges); \
	fi; \
	if [ -z "$$commit_logs" ]; then \
		tag_message="Release $$version"; \
	else \
		tag_message="Release $$version\n\n$$commit_logs"; \
	fi; \
	git tag -a "$$version" -m "$$tag_message"; \
	git push origin "$$version"; \
	echo "Tag $$version created and pushed to remote repository."
release-version: ## Auto-increment patch version, update package files, commit, tag, and push
	@CURRENT=$$(git describe --tags --abbrev=0 2>/dev/null || echo "v0.0.0"); \
	MAJOR=$$(echo $$CURRENT | sed 's/v\([0-9]*\)\.\([0-9]*\)\.\([0-9]*\).*/\1/'); \
	MINOR=$$(echo $$CURRENT | sed 's/v\([0-9]*\)\.\([0-9]*\)\.\([0-9]*\).*/\2/'); \
	PATCH=$$(echo $$CURRENT | sed 's/v\([0-9]*\)\.\([0-9]*\)\.\([0-9]*\).*/\3/'); \
	NEXT="v$$MAJOR.$$MINOR.$$((PATCH + 1))"; \
	PKGVER="$$MAJOR.$$MINOR.$$((PATCH + 1))"; \
	echo "Current: $$CURRENT → Next: $$NEXT"; \
	sed -i "s/^pkgver=.*/pkgver=$$PKGVER/" linux/arch/PKGBUILD; \
	sed -i "s/^Version:.*/Version: $$PKGVER/" linux/centos/relspec.spec; \
	git add linux/arch/PKGBUILD linux/centos/relspec.spec; \
	git commit -m "chore(release): update package version to $$PKGVER"; \
	git tag -a "$$NEXT" -m "Release $$NEXT"; \
	git push origin HEAD "$$NEXT"; \
	echo "Pushed $$NEXT: release workflow triggered"
help: ## Display this help screen
	@grep -E '^[a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | sort | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-20s\033[0m %s\n", $$1, $$2}'

README.md

@@ -6,264 +6,160 @@
[![Go Version](https://img.shields.io/badge/go-1.24.0-blue.svg)](https://go.dev/dl/)
[![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](LICENSE)
> Database Relations Specification Tool for Go
> Bidirectional database schema conversion, validation, and templating tool.
RelSpec is a comprehensive database relations management tool that reads, transforms, and writes database table specifications across multiple formats and ORMs.
![RelSpec](./assets/image/relspec1_512.jpg)
## Overview
RelSpec provides bidirectional conversion, comparison, and validation of database specification formats, allowing you to:
- Inspect live databases and extract their structure
- Validate schemas against configurable rules and naming conventions
- Convert between different ORM models (GORM, Bun, etc.)
- Transform legacy schema definitions (Clarion DCTX, XML, JSON, etc.)
- Generate standardized specification files (JSON, YAML, etc.)
- Compare database schemas and track changes
![1.00](./assets/image/relspec1_512.jpg)
## Features
### Readers (Input Formats)
RelSpec can read database schemas from multiple sources:
#### ORM Models
- [GORM](pkg/readers/gorm/README.md) - Go GORM model definitions
- [Bun](pkg/readers/bun/README.md) - Go Bun model definitions
- [Drizzle](pkg/readers/drizzle/README.md) - TypeScript Drizzle ORM schemas
- [Prisma](pkg/readers/prisma/README.md) - Prisma schema language
- [TypeORM](pkg/readers/typeorm/README.md) - TypeScript TypeORM entities
#### Database Inspection
- [PostgreSQL](pkg/readers/pgsql/README.md) - Direct PostgreSQL database introspection
- [SQLite](pkg/readers/sqlite/README.md) - Direct SQLite database introspection
#### Schema Formats
- [DBML](pkg/readers/dbml/README.md) - Database Markup Language (dbdiagram.io)
- [DCTX](pkg/readers/dctx/README.md) - Clarion database dictionary format
- [DrawDB](pkg/readers/drawdb/README.md) - DrawDB JSON format
- [GraphQL](pkg/readers/graphql/README.md) - GraphQL Schema Definition Language (SDL)
- [JSON](pkg/readers/json/README.md) - RelSpec canonical JSON format
- [YAML](pkg/readers/yaml/README.md) - RelSpec canonical YAML format
### Writers (Output Formats)
RelSpec can write database schemas to multiple formats:
#### ORM Models
- [GORM](pkg/writers/gorm/README.md) - Generate GORM-compatible Go structs
- [Bun](pkg/writers/bun/README.md) - Generate Bun-compatible Go structs
- [Drizzle](pkg/writers/drizzle/README.md) - Generate Drizzle ORM TypeScript schemas
- [Prisma](pkg/writers/prisma/README.md) - Generate Prisma schema files
- [TypeORM](pkg/writers/typeorm/README.md) - Generate TypeORM TypeScript entities
#### Database DDL
- [PostgreSQL](pkg/writers/pgsql/README.md) - PostgreSQL DDL (CREATE TABLE, etc.)
- [SQLite](pkg/writers/sqlite/README.md) - SQLite DDL with automatic schema flattening
#### Schema Formats
- [DBML](pkg/writers/dbml/README.md) - Database Markup Language
- [DCTX](pkg/writers/dctx/README.md) - Clarion database dictionary format
- [DrawDB](pkg/writers/drawdb/README.md) - DrawDB JSON format
- [GraphQL](pkg/writers/graphql/README.md) - GraphQL Schema Definition Language (SDL)
- [JSON](pkg/writers/json/README.md) - RelSpec canonical JSON format
- [YAML](pkg/writers/yaml/README.md) - RelSpec canonical YAML format
### Inspector (Schema Validation)
RelSpec includes a powerful schema validation and linting tool:
- [Inspector](pkg/inspector/README.md) - Validate database schemas against configurable rules
- Enforce naming conventions (snake_case, camelCase, custom patterns)
- Check primary key and foreign key standards
- Detect missing indexes on foreign keys
- Prevent use of SQL reserved keywords
- Ensure schema integrity (missing PKs, orphaned FKs, circular dependencies)
- Support for custom validation rules
- Multiple output formats (Markdown with colors, JSON)
- CI/CD integration ready
## Use of AI
[Rules and use of AI](./AI_USE.md)
## User Interface
RelSpec provides an interactive terminal-based user interface for managing and editing database schemas. The UI allows you to:
- **Browse Databases** - Navigate through your database structure with an intuitive menu system
- **Edit Schemas** - Create, modify, and organize database schemas
- **Manage Tables** - Add, update, or delete tables with full control over structure
- **Configure Columns** - Define column properties, data types, constraints, and relationships
- **Interactive Editing** - Real-time validation and feedback as you make changes
The interface supports multiple input formats, making it easy to load, edit, and save your database definitions in various formats.
<p align="center" width="100%">
<img src="./assets/image/screenshots/main_screen.jpg">
</p>
<p align="center" width="100%">
<img src="./assets/image/screenshots/table_view.jpg">
</p>
<p align="center" width="100%">
<img src="./assets/image/screenshots/edit_column.jpg">
</p>
## Installation
## Install
```bash
go get github.com/wdevs/relspecgo
go install -v git.warky.dev/wdevs/relspecgo/cmd/relspec@latest
```
## Usage
## Supported Formats
### Interactive Schema Editor
| Direction | Formats |
|-----------|---------|
| **Readers** | `bun` `dbml` `dctx` `drawdb` `drizzle` `gorm` `graphql` `json` `mssql` `pgsql` `prisma` `sqldir` `sqlite` `typeorm` `yaml` |
| **Writers** | `bun` `dbml` `dctx` `drawdb` `drizzle` `gorm` `graphql` `json` `mssql` `pgsql` `prisma` `sqlexec` `sqlite` `template` `typeorm` `yaml` |
## Commands
### `convert` — Schema conversion
```bash
# Launch interactive editor with a DBML schema
relspec edit --from dbml --from-path schema.dbml --to dbml --to-path schema.dbml
# PostgreSQL → GORM models
relspec convert --from pgsql --from-conn "postgres://user:pass@localhost/mydb" \
--to gorm --to-path models/ --package models
# Edit PostgreSQL database in place
relspec edit --from pgsql --from-conn "postgres://user:pass@localhost/mydb" \
--to pgsql --to-conn "postgres://user:pass@localhost/mydb"
# DBML → PostgreSQL DDL
relspec convert --from dbml --from-path schema.dbml --to pgsql --to-path schema.sql
# Edit JSON schema and save as GORM models
relspec edit --from json --from-path db.json --to gorm --to-path models/
# PostgreSQL → SQLite (auto flattens schemas)
relspec convert --from pgsql --from-conn "postgres://..." --to sqlite --to-path schema.sql
# Multiple input files merged
relspec convert --from json --from-list "a.json,b.json" --to yaml --to-path merged.yaml
```
The `edit` command launches an interactive terminal user interface where you can:
- Browse and navigate your database structure
- Create, modify, and delete schemas, tables, and columns
- Configure column properties, constraints, and relationships
- Save changes to various formats
- Import and merge schemas from other databases
### Schema Merging
### `merge` — Additive schema merge (never modifies existing items)
```bash
# Merge two JSON schemas (additive merge - adds missing items only)
# Merge two JSON schemas
relspec merge --target json --target-path base.json \
--source json --source-path additions.json \
--output json --output-path merged.json
# Merge PostgreSQL database into JSON, skipping specific tables
# Merge PostgreSQL into JSON, skipping tables
relspec merge --target json --target-path current.json \
--source pgsql --source-conn "postgres://user:pass@localhost/source_db" \
--source pgsql --source-conn "postgres://user:pass@localhost/db" \
--output json --output-path updated.json \
--skip-tables "audit_log,temp_tables"
# Cross-format merge (DBML + YAML → JSON)
relspec merge --target dbml --target-path base.dbml \
--source yaml --source-path additions.yaml \
--output json --output-path result.json \
--skip-relations --skip-views
```
The `merge` command combines two database schemas additively:
- Adds missing schemas, tables, columns, and other objects
- Never modifies or deletes existing items (safe operation)
- Supports selective merging with skip options (domains, relations, enums, views, sequences, specific tables)
- Works across any combination of supported formats
- Perfect for integrating multiple schema definitions or applying patches
Skip flags: `--skip-relations` `--skip-views` `--skip-domains` `--skip-enums` `--skip-sequences`
### Schema Conversion
### `inspect` — Schema validation / linting
```bash
# Convert PostgreSQL database to GORM models
relspec convert --from pgsql --from-conn "postgres://user:pass@localhost/mydb" \
--to gorm --to-path models/ --package models
# Convert GORM models to Bun
relspec convert --from gorm --from-path models.go \
--to bun --to-path bun_models.go --package models
# Export database schema to JSON
relspec convert --from pgsql --from-conn "postgres://..." \
--to json --to-path schema.json
# Convert DBML to PostgreSQL SQL
relspec convert --from dbml --from-path schema.dbml \
--to pgsql --to-path schema.sql
# Convert PostgreSQL database to SQLite (with automatic schema flattening)
relspec convert --from pgsql --from-conn "postgres://..." \
--to sqlite --to-path sqlite_schema.sql
```
### Schema Validation
```bash
# Validate a PostgreSQL database with default rules
# Validate PostgreSQL database
relspec inspect --from pgsql --from-conn "postgres://user:pass@localhost/mydb"
# Validate DBML file with custom rules
# Validate DBML with custom rules
relspec inspect --from dbml --from-path schema.dbml --rules .relspec-rules.yaml
# Generate JSON validation report
relspec inspect --from json --from-path db.json \
--output-format json --output report.json
# JSON report output
relspec inspect --from json --from-path db.json --output-format json --output report.json
# Validate specific schema only
# Filter to specific schema
relspec inspect --from pgsql --from-conn "..." --schema public
```
### Schema Comparison
Rules: naming conventions, PK/FK standards, missing indexes, reserved keywords, circular dependencies.
### `diff` — Schema comparison
```bash
# Compare two database schemas
relspec diff --from pgsql --from-conn "postgres://localhost/db1" \
--to pgsql --to-conn "postgres://localhost/db2"
```
### `templ` — Custom template rendering
```bash
# Render database schema to Markdown docs
relspec templ --from pgsql --from-conn "postgres://user:pass@localhost/db" \
--template docs.tmpl --output schema-docs.md
# One TypeScript file per table
relspec templ --from dbml --from-path schema.dbml \
--template ts-model.tmpl --mode table \
--output ./models/ --filename-pattern "{{.Name | toCamelCase}}.ts"
```
Modes: `database` (default) · `schema` · `table` · `script`
Template functions: string utils (`toCamelCase`, `toSnakeCase`, `pluralize`, …), type converters (`sqlToGo`, `sqlToTypeScript`, …), filters, loop helpers, safe access.
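As a sketch of what a `table`-mode template can do, here is a minimal Go program in the spirit of `templ`. The `toCamelCase` and `sqlToTypeScript` helpers are simplified stand-ins for RelSpec's built-ins, and the `Table`/`Column` shapes are illustrative:

```go
package main

import (
	"os"
	"strings"
	"text/template"
)

// Simplified stand-in: the real helper also handles acronyms, etc.
func toCamelCase(s string) string {
	parts := strings.Split(s, "_")
	for i := 1; i < len(parts); i++ {
		if parts[i] != "" {
			parts[i] = strings.ToUpper(parts[i][:1]) + parts[i][1:]
		}
	}
	return strings.Join(parts, "")
}

// Simplified stand-in: the real converter covers the full SQL type matrix.
func sqlToTypeScript(sqlType string) string {
	switch strings.ToLower(sqlType) {
	case "bigint", "integer", "numeric":
		return "number"
	case "text", "varchar", "uuid":
		return "string"
	default:
		return "unknown"
	}
}

type Column struct{ Name, Type string }
type Table struct {
	Name    string
	Columns []Column
}

func main() {
	tmpl := template.Must(template.New("ts").Funcs(template.FuncMap{
		"toCamelCase":     toCamelCase,
		"sqlToTypeScript": sqlToTypeScript,
	}).Parse(`export interface {{.Name | toCamelCase}} {
{{- range .Columns}}
  {{.Name | toCamelCase}}: {{.Type | sqlToTypeScript}};
{{- end}}
}
`))
	t := Table{Name: "user_accounts", Columns: []Column{
		{Name: "id", Type: "bigint"},
		{Name: "display_name", Type: "text"},
	}}
	if err := tmpl.Execute(os.Stdout, t); err != nil {
		panic(err)
	}
}
```

Run against the table above, this renders an `export interface userAccounts { ... }` block with `id: number` and `displayName: string` fields.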
### `edit` — Interactive TUI editor
```bash
# Edit DBML schema interactively
relspec edit --from dbml --from-path schema.dbml --to dbml --to-path schema.dbml
# Edit live PostgreSQL database
relspec edit --from pgsql --from-conn "postgres://user:pass@localhost/mydb" \
--to pgsql --to-conn "postgres://user:pass@localhost/mydb"
```
<p align="center">
<img src="./assets/image/screenshots/main_screen.jpg" alt="Main screen">
</p>
<p align="center">
<img src="./assets/image/screenshots/table_view.jpg" alt="Table view">
</p>
<p align="center">
<img src="./assets/image/screenshots/edit_column.jpg" alt="Edit column dialog">
</p>
## Development
**Prerequisites:** Go 1.24.0+
```bash
make build # → build/relspec
make test # race detection + coverage
make lint # requires golangci-lint
make coverage # → coverage.html
make install # → $GOPATH/bin
```
## Project Structure
```
relspecgo/
├── cmd/
│   └── relspec/     # CLI application (convert, inspect, merge, diff, templ, edit)
├── pkg/
│   ├── readers/     # Input format readers (DBML, GORM, PostgreSQL, etc.)
│   ├── writers/     # Output format writers (GORM, Bun, SQL, etc.)
│   ├── inspector/   # Schema validation and linting
│   ├── diff/        # Schema comparison
│   ├── merge/       # Schema merging
│   ├── models/      # Internal data models
│   ├── transform/   # Transformation logic
│   └── pgsql/       # PostgreSQL utilities (keywords, data types)
├── examples/        # Usage examples
└── tests/           # Test files
```
## License
Apache License 2.0 - See [LICENSE](LICENSE) for details.
Copyright 2025 Warky Devs
## Contributing
Contributions welcome. Please open an issue or submit a pull request.
1. Register or sign in with GitHub at [git.warky.dev](https://git.warky.dev)
2. Clone the repository: `git clone https://git.warky.dev/wdevs/relspecgo.git`
3. Create a feature branch: `git checkout -b feature/your-feature-name`
4. Commit your changes and push the branch
5. Open a pull request with a description of the new feature or fix
For questions or discussion, join the Discord: [discord.gg/74rcTujp25](https://discord.gg/74rcTujp25) — `warkyhein`
## Links
- [Todo](./TODO.md)
- [AI Use Policy](./AI_USE.md)
- [License](LICENSE) — Apache 2.0 · Copyright 2025 Warky Devs


@@ -8,6 +8,7 @@ import (
"github.com/spf13/cobra"
"git.warky.dev/wdevs/relspecgo/pkg/merge"
"git.warky.dev/wdevs/relspecgo/pkg/models"
"git.warky.dev/wdevs/relspecgo/pkg/readers"
"git.warky.dev/wdevs/relspecgo/pkg/readers/bun"
@@ -45,6 +46,7 @@ var (
convertSourceType string
convertSourcePath string
convertSourceConn string
convertFromList []string
convertTargetType string
convertTargetPath string
convertPackageName string
@@ -166,6 +168,7 @@ func init() {
convertCmd.Flags().StringVar(&convertSourceType, "from", "", "Source format (dbml, dctx, drawdb, graphql, json, yaml, gorm, bun, drizzle, prisma, typeorm, pgsql, sqlite)")
convertCmd.Flags().StringVar(&convertSourcePath, "from-path", "", "Source file path (for file-based formats)")
convertCmd.Flags().StringVar(&convertSourceConn, "from-conn", "", "Source connection string (for pgsql) or file path (for sqlite)")
convertCmd.Flags().StringSliceVar(&convertFromList, "from-list", nil, "Comma-separated list of source file paths to read and merge (mutually exclusive with --from-path)")
convertCmd.Flags().StringVar(&convertTargetType, "to", "", "Target format (dbml, dctx, drawdb, graphql, json, yaml, gorm, bun, drizzle, prisma, typeorm, pgsql)")
convertCmd.Flags().StringVar(&convertTargetPath, "to-path", "", "Target output path (file or directory)")
@@ -191,17 +194,29 @@ func runConvert(cmd *cobra.Command, args []string) error {
fmt.Fprintf(os.Stderr, "\n=== RelSpec Schema Converter ===\n")
fmt.Fprintf(os.Stderr, "Started at: %s\n\n", getCurrentTimestamp())
// Validate mutually exclusive flags
if convertSourcePath != "" && len(convertFromList) > 0 {
return fmt.Errorf("--from-path and --from-list are mutually exclusive")
}
// Read source database
fmt.Fprintf(os.Stderr, "[1/2] Reading source schema...\n")
fmt.Fprintf(os.Stderr, " Format: %s\n", convertSourceType)
if convertSourcePath != "" {
fmt.Fprintf(os.Stderr, " Path: %s\n", convertSourcePath)
}
if convertSourceConn != "" {
fmt.Fprintf(os.Stderr, " Conn: %s\n", maskPassword(convertSourceConn))
}
db, err := readDatabaseForConvert(convertSourceType, convertSourcePath, convertSourceConn)
var db *models.Database
var err error
if len(convertFromList) > 0 {
db, err = readDatabaseListForConvert(convertSourceType, convertFromList)
} else {
if convertSourcePath != "" {
fmt.Fprintf(os.Stderr, " Path: %s\n", convertSourcePath)
}
if convertSourceConn != "" {
fmt.Fprintf(os.Stderr, " Conn: %s\n", maskPassword(convertSourceConn))
}
db, err = readDatabaseForConvert(convertSourceType, convertSourcePath, convertSourceConn)
}
if err != nil {
return fmt.Errorf("failed to read source: %w", err)
}
@@ -237,6 +252,30 @@ func runConvert(cmd *cobra.Command, args []string) error {
return nil
}
func readDatabaseListForConvert(dbType string, files []string) (*models.Database, error) {
if len(files) == 0 {
return nil, fmt.Errorf("file list is empty")
}
fmt.Fprintf(os.Stderr, " Files: %d file(s)\n", len(files))
var base *models.Database
for i, filePath := range files {
fmt.Fprintf(os.Stderr, " [%d/%d] %s\n", i+1, len(files), filePath)
db, err := readDatabaseForConvert(dbType, filePath, "")
if err != nil {
return nil, fmt.Errorf("failed to read %s: %w", filePath, err)
}
if base == nil {
base = db
} else {
merge.MergeDatabases(base, db, &merge.MergeOptions{})
}
}
return base, nil
}
func readDatabaseForConvert(dbType, filePath, connString string) (*models.Database, error) {
var reader readers.Reader
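The new `readDatabaseListForConvert` above is a left fold: the first file seeds the result and each later file is merged into it. A minimal sketch of the pattern, with a pared-down `Database` and a stub standing in for `merge.MergeDatabases`:

```go
package main

import "fmt"

// Database is a pared-down stand-in for models.Database.
type Database struct {
	Name   string
	Tables []string
}

// mergeInto is a stub for merge.MergeDatabases: it just appends the
// source's tables onto the base (the real merge reconciles duplicates).
func mergeInto(base, src *Database) {
	base.Tables = append(base.Tables, src.Tables...)
}

// foldDatabases mirrors readDatabaseListForConvert's accumulation loop.
func foldDatabases(dbs []*Database) (*Database, error) {
	if len(dbs) == 0 {
		return nil, fmt.Errorf("file list is empty")
	}
	var base *Database
	for _, db := range dbs {
		if base == nil {
			base = db // first file seeds the result
		} else {
			mergeInto(base, db)
		}
	}
	return base, nil
}

func main() {
	merged, _ := foldDatabases([]*Database{
		{Name: "a", Tables: []string{"users"}},
		{Name: "b", Tables: []string{"posts"}},
	})
	fmt.Println(merged.Name, merged.Tables) // a [users posts]
}
```

Note the result keeps the first database's name, which matches the CLI behavior of treating the first `--from-list` entry as the base.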


@@ -0,0 +1,183 @@
package main
import (
"os"
"path/filepath"
"testing"
)
func TestReadDatabaseListForConvert_SingleFile(t *testing.T) {
dir := t.TempDir()
file := filepath.Join(dir, "schema.json")
writeTestJSON(t, file, []string{"users"})
db, err := readDatabaseListForConvert("json", []string{file})
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
if len(db.Schemas) == 0 {
t.Fatal("expected at least one schema")
}
if len(db.Schemas[0].Tables) != 1 {
t.Errorf("expected 1 table, got %d", len(db.Schemas[0].Tables))
}
}
func TestReadDatabaseListForConvert_MultipleFiles(t *testing.T) {
dir := t.TempDir()
file1 := filepath.Join(dir, "schema1.json")
file2 := filepath.Join(dir, "schema2.json")
writeTestJSON(t, file1, []string{"users"})
writeTestJSON(t, file2, []string{"comments"})
db, err := readDatabaseListForConvert("json", []string{file1, file2})
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
total := 0
for _, s := range db.Schemas {
total += len(s.Tables)
}
if total != 2 {
t.Errorf("expected 2 tables (users + comments), got %d", total)
}
}
func TestReadDatabaseListForConvert_PathWithSpaces(t *testing.T) {
spacedDir := filepath.Join(t.TempDir(), "my schema files")
if err := os.MkdirAll(spacedDir, 0755); err != nil {
t.Fatal(err)
}
file := filepath.Join(spacedDir, "my users schema.json")
writeTestJSON(t, file, []string{"users"})
db, err := readDatabaseListForConvert("json", []string{file})
if err != nil {
t.Fatalf("unexpected error with spaced path: %v", err)
}
if db == nil {
t.Fatal("expected non-nil database")
}
}
func TestReadDatabaseListForConvert_MultipleFilesPathWithSpaces(t *testing.T) {
spacedDir := filepath.Join(t.TempDir(), "my schema files")
if err := os.MkdirAll(spacedDir, 0755); err != nil {
t.Fatal(err)
}
file1 := filepath.Join(spacedDir, "users schema.json")
file2 := filepath.Join(spacedDir, "posts schema.json")
writeTestJSON(t, file1, []string{"users"})
writeTestJSON(t, file2, []string{"posts"})
db, err := readDatabaseListForConvert("json", []string{file1, file2})
if err != nil {
t.Fatalf("unexpected error with spaced paths: %v", err)
}
total := 0
for _, s := range db.Schemas {
total += len(s.Tables)
}
if total != 2 {
t.Errorf("expected 2 tables, got %d", total)
}
}
func TestReadDatabaseListForConvert_EmptyList(t *testing.T) {
_, err := readDatabaseListForConvert("json", []string{})
if err == nil {
t.Error("expected error for empty file list")
}
}
func TestReadDatabaseListForConvert_InvalidFile(t *testing.T) {
_, err := readDatabaseListForConvert("json", []string{"/nonexistent/path/file.json"})
if err == nil {
t.Error("expected error for nonexistent file")
}
}
func TestRunConvert_FromListMutuallyExclusiveWithFromPath(t *testing.T) {
saved := saveConvertState()
defer restoreConvertState(saved)
dir := t.TempDir()
file := filepath.Join(dir, "schema.json")
writeTestJSON(t, file, []string{"users"})
convertSourceType = "json"
convertSourcePath = file
convertFromList = []string{file}
convertTargetType = "json"
convertTargetPath = filepath.Join(dir, "out.json")
err := runConvert(nil, nil)
if err == nil {
t.Error("expected error when --from-path and --from-list are both set")
}
}
func TestRunConvert_FromListEndToEnd(t *testing.T) {
saved := saveConvertState()
defer restoreConvertState(saved)
dir := t.TempDir()
file1 := filepath.Join(dir, "users.json")
file2 := filepath.Join(dir, "posts.json")
outFile := filepath.Join(dir, "merged.json")
writeTestJSON(t, file1, []string{"users"})
writeTestJSON(t, file2, []string{"posts"})
convertSourceType = "json"
convertSourcePath = ""
convertSourceConn = ""
convertFromList = []string{file1, file2}
convertTargetType = "json"
convertTargetPath = outFile
convertPackageName = ""
convertSchemaFilter = ""
convertFlattenSchema = false
if err := runConvert(nil, nil); err != nil {
t.Fatalf("runConvert() error = %v", err)
}
if _, err := os.Stat(outFile); os.IsNotExist(err) {
t.Error("expected output file to be created")
}
}
func TestRunConvert_FromListEndToEndPathWithSpaces(t *testing.T) {
saved := saveConvertState()
defer restoreConvertState(saved)
spacedDir := filepath.Join(t.TempDir(), "my schema dir")
if err := os.MkdirAll(spacedDir, 0755); err != nil {
t.Fatal(err)
}
file1 := filepath.Join(spacedDir, "users schema.json")
file2 := filepath.Join(spacedDir, "posts schema.json")
outFile := filepath.Join(spacedDir, "merged output.json")
writeTestJSON(t, file1, []string{"users"})
writeTestJSON(t, file2, []string{"posts"})
convertSourceType = "json"
convertSourcePath = ""
convertSourceConn = ""
convertFromList = []string{file1, file2}
convertTargetType = "json"
convertTargetPath = outFile
convertPackageName = ""
convertSchemaFilter = ""
convertFlattenSchema = false
if err := runConvert(nil, nil); err != nil {
t.Fatalf("runConvert() with spaced paths error = %v", err)
}
if _, err := os.Stat(outFile); os.IsNotExist(err) {
t.Error("expected output file to be created")
}
}


@@ -47,6 +47,7 @@ var (
mergeSourceType string
mergeSourcePath string
mergeSourceConn string
mergeFromList []string
mergeOutputType string
mergeOutputPath string
mergeOutputConn string
@@ -109,8 +110,9 @@ func init() {
// Source database flags
mergeCmd.Flags().StringVar(&mergeSourceType, "source", "", "Source format (required): dbml, dctx, drawdb, graphql, json, yaml, gorm, bun, drizzle, prisma, typeorm, pgsql")
mergeCmd.Flags().StringVar(&mergeSourcePath, "source-path", "", "Source file path (required for file-based formats)")
mergeCmd.Flags().StringVar(&mergeSourcePath, "source-path", "", "Source file path (required for file-based formats, mutually exclusive with --from-list)")
mergeCmd.Flags().StringVar(&mergeSourceConn, "source-conn", "", "Source connection string (required for pgsql)")
mergeCmd.Flags().StringSliceVar(&mergeFromList, "from-list", nil, "Comma-separated list of source file paths to merge (mutually exclusive with --source-path)")
// Output flags
mergeCmd.Flags().StringVar(&mergeOutputType, "output", "", "Output format (required): dbml, dctx, drawdb, graphql, json, yaml, gorm, bun, drizzle, prisma, typeorm, pgsql")
@@ -144,6 +146,11 @@ func runMerge(cmd *cobra.Command, args []string) error {
return fmt.Errorf("--output format is required")
}
// Validate mutually exclusive source flags
if mergeSourcePath != "" && len(mergeFromList) > 0 {
return fmt.Errorf("--source-path and --from-list are mutually exclusive")
}
// Validate and expand file paths
if mergeTargetType != "pgsql" {
if mergeTargetPath == "" {
@@ -157,8 +164,8 @@ func runMerge(cmd *cobra.Command, args []string) error {
}
if mergeSourceType != "pgsql" {
if mergeSourcePath == "" {
return fmt.Errorf("--source-path is required for %s format", mergeSourceType)
if mergeSourcePath == "" && len(mergeFromList) == 0 {
return fmt.Errorf("--source-path or --from-list is required for %s format", mergeSourceType)
}
mergeSourcePath = expandPath(mergeSourcePath)
} else if mergeSourceConn == "" {
@@ -189,19 +196,36 @@ func runMerge(cmd *cobra.Command, args []string) error {
fmt.Fprintf(os.Stderr, " ✓ Successfully read target database '%s'\n", targetDB.Name)
printDatabaseStats(targetDB)
// Step 2: Read source database
// Step 2: Read source database(s)
fmt.Fprintf(os.Stderr, "\n[2/3] Reading source database...\n")
fmt.Fprintf(os.Stderr, " Format: %s\n", mergeSourceType)
if mergeSourcePath != "" {
fmt.Fprintf(os.Stderr, " Path: %s\n", mergeSourcePath)
}
if mergeSourceConn != "" {
fmt.Fprintf(os.Stderr, " Conn: %s\n", maskPassword(mergeSourceConn))
}
sourceDB, err := readDatabaseForMerge(mergeSourceType, mergeSourcePath, mergeSourceConn, "Source")
if err != nil {
return fmt.Errorf("failed to read source database: %w", err)
var sourceDB *models.Database
if len(mergeFromList) > 0 {
fmt.Fprintf(os.Stderr, " Files: %d file(s)\n", len(mergeFromList))
for i, filePath := range mergeFromList {
fmt.Fprintf(os.Stderr, " [%d/%d] %s\n", i+1, len(mergeFromList), filePath)
db, readErr := readDatabaseForMerge(mergeSourceType, expandPath(filePath), "", "Source")
if readErr != nil {
return fmt.Errorf("failed to read source file %s: %w", filePath, readErr)
}
if sourceDB == nil {
sourceDB = db
} else {
merge.MergeDatabases(sourceDB, db, &merge.MergeOptions{})
}
}
} else {
if mergeSourcePath != "" {
fmt.Fprintf(os.Stderr, " Path: %s\n", mergeSourcePath)
}
if mergeSourceConn != "" {
fmt.Fprintf(os.Stderr, " Conn: %s\n", maskPassword(mergeSourceConn))
}
sourceDB, err = readDatabaseForMerge(mergeSourceType, mergeSourcePath, mergeSourceConn, "Source")
if err != nil {
return fmt.Errorf("failed to read source database: %w", err)
}
}
fmt.Fprintf(os.Stderr, " ✓ Successfully read source database '%s'\n", sourceDB.Name)
printDatabaseStats(sourceDB)


@@ -0,0 +1,162 @@
package main
import (
"os"
"path/filepath"
"testing"
)
func TestRunMerge_FromListMutuallyExclusiveWithSourcePath(t *testing.T) {
saved := saveMergeState()
defer restoreMergeState(saved)
dir := t.TempDir()
file := filepath.Join(dir, "schema.json")
writeTestJSON(t, file, []string{"users"})
mergeTargetType = "json"
mergeTargetPath = file
mergeTargetConn = ""
mergeSourceType = "json"
mergeSourcePath = file
mergeSourceConn = ""
mergeFromList = []string{file}
mergeOutputType = "json"
mergeOutputPath = filepath.Join(dir, "out.json")
mergeOutputConn = ""
mergeSkipTables = ""
mergeReportPath = ""
err := runMerge(nil, nil)
if err == nil {
t.Error("expected error when --source-path and --from-list are both set")
}
}
func TestRunMerge_FromListSingleFile(t *testing.T) {
saved := saveMergeState()
defer restoreMergeState(saved)
dir := t.TempDir()
targetFile := filepath.Join(dir, "target.json")
sourceFile := filepath.Join(dir, "source.json")
outFile := filepath.Join(dir, "output.json")
writeTestJSON(t, targetFile, []string{"users"})
writeTestJSON(t, sourceFile, []string{"posts"})
mergeTargetType = "json"
mergeTargetPath = targetFile
mergeTargetConn = ""
mergeSourceType = "json"
mergeSourcePath = ""
mergeSourceConn = ""
mergeFromList = []string{sourceFile}
mergeOutputType = "json"
mergeOutputPath = outFile
mergeOutputConn = ""
mergeSkipTables = ""
mergeReportPath = ""
if err := runMerge(nil, nil); err != nil {
t.Fatalf("runMerge() error = %v", err)
}
if _, err := os.Stat(outFile); os.IsNotExist(err) {
t.Error("expected output file to be created")
}
}
func TestRunMerge_FromListMultipleFiles(t *testing.T) {
saved := saveMergeState()
defer restoreMergeState(saved)
dir := t.TempDir()
targetFile := filepath.Join(dir, "target.json")
source1 := filepath.Join(dir, "source1.json")
source2 := filepath.Join(dir, "source2.json")
outFile := filepath.Join(dir, "output.json")
writeTestJSON(t, targetFile, []string{"users"})
writeTestJSON(t, source1, []string{"posts"})
writeTestJSON(t, source2, []string{"comments"})
mergeTargetType = "json"
mergeTargetPath = targetFile
mergeTargetConn = ""
mergeSourceType = "json"
mergeSourcePath = ""
mergeSourceConn = ""
mergeFromList = []string{source1, source2}
mergeOutputType = "json"
mergeOutputPath = outFile
mergeOutputConn = ""
mergeSkipTables = ""
mergeReportPath = ""
if err := runMerge(nil, nil); err != nil {
t.Fatalf("runMerge() error = %v", err)
}
if _, err := os.Stat(outFile); os.IsNotExist(err) {
t.Error("expected output file to be created")
}
}
func TestRunMerge_FromListPathWithSpaces(t *testing.T) {
saved := saveMergeState()
defer restoreMergeState(saved)
spacedDir := filepath.Join(t.TempDir(), "my schema files")
if err := os.MkdirAll(spacedDir, 0755); err != nil {
t.Fatal(err)
}
targetFile := filepath.Join(spacedDir, "target schema.json")
sourceFile := filepath.Join(spacedDir, "source schema.json")
outFile := filepath.Join(spacedDir, "merged output.json")
writeTestJSON(t, targetFile, []string{"users"})
writeTestJSON(t, sourceFile, []string{"comments"})
mergeTargetType = "json"
mergeTargetPath = targetFile
mergeTargetConn = ""
mergeSourceType = "json"
mergeSourcePath = ""
mergeSourceConn = ""
mergeFromList = []string{sourceFile}
mergeOutputType = "json"
mergeOutputPath = outFile
mergeOutputConn = ""
mergeSkipTables = ""
mergeReportPath = ""
if err := runMerge(nil, nil); err != nil {
t.Fatalf("runMerge() with spaced paths error = %v", err)
}
if _, err := os.Stat(outFile); os.IsNotExist(err) {
t.Error("expected output file to be created")
}
}
func TestRunMerge_FromListMissingSourceType(t *testing.T) {
saved := saveMergeState()
defer restoreMergeState(saved)
dir := t.TempDir()
file := filepath.Join(dir, "schema.json")
writeTestJSON(t, file, []string{"users"})
mergeTargetType = "json"
mergeTargetPath = file
mergeTargetConn = ""
mergeSourceType = "json"
mergeSourcePath = ""
mergeSourceConn = ""
mergeFromList = []string{} // empty list, no source-path either
mergeOutputType = "json"
mergeOutputPath = filepath.Join(dir, "out.json")
mergeOutputConn = ""
mergeSkipTables = ""
mergeReportPath = ""
err := runMerge(nil, nil)
if err == nil {
t.Error("expected error when neither --source-path nor --from-list is provided")
}
}


@@ -2,6 +2,8 @@ package main
import (
"fmt"
"runtime/debug"
"time"
"github.com/spf13/cobra"
)
@@ -12,6 +14,36 @@ var (
buildDate = "unknown"
)
func init() {
// If version wasn't set via ldflags, try to get it from build info
if version == "dev" {
if info, ok := debug.ReadBuildInfo(); ok {
// Try to get version from VCS
var vcsRevision, vcsTime string
for _, setting := range info.Settings {
switch setting.Key {
case "vcs.revision":
if len(setting.Value) >= 7 {
vcsRevision = setting.Value[:7]
}
case "vcs.time":
vcsTime = setting.Value
}
}
if vcsRevision != "" {
version = vcsRevision
}
if vcsTime != "" {
if t, err := time.Parse(time.RFC3339, vcsTime); err == nil {
buildDate = t.UTC().Format("2006-01-02 15:04:05 UTC")
}
}
}
}
}
var rootCmd = &cobra.Command{
Use: "relspec",
Short: "RelSpec - Database schema conversion and analysis tool",


@@ -15,6 +15,7 @@ var (
templSourceType string
templSourcePath string
templSourceConn string
templFromList []string
templTemplatePath string
templOutputPath string
templSchemaFilter string
@@ -78,8 +79,9 @@ Examples:
func init() {
templCmd.Flags().StringVar(&templSourceType, "from", "", "Source format (dbml, pgsql, json, etc.)")
templCmd.Flags().StringVar(&templSourcePath, "from-path", "", "Source file path (for file-based sources)")
templCmd.Flags().StringVar(&templSourcePath, "from-path", "", "Source file path (for file-based sources, mutually exclusive with --from-list)")
templCmd.Flags().StringVar(&templSourceConn, "from-conn", "", "Source connection string (for database sources)")
templCmd.Flags().StringSliceVar(&templFromList, "from-list", nil, "Comma-separated list of source file paths to read and merge (mutually exclusive with --from-path)")
templCmd.Flags().StringVar(&templTemplatePath, "template", "", "Template file path (required)")
templCmd.Flags().StringVar(&templOutputPath, "output", "", "Output path (file or directory, empty for stdout)")
templCmd.Flags().StringVar(&templSchemaFilter, "schema", "", "Filter to specific schema")
@@ -95,9 +97,20 @@ func runTempl(cmd *cobra.Command, args []string) error {
fmt.Fprintf(os.Stderr, "=== RelSpec Template Execution ===\n")
fmt.Fprintf(os.Stderr, "Started at: %s\n\n", getCurrentTimestamp())
// Validate mutually exclusive flags
if templSourcePath != "" && len(templFromList) > 0 {
return fmt.Errorf("--from-path and --from-list are mutually exclusive")
}
// Read database using the same function as convert
fmt.Fprintf(os.Stderr, "Reading from %s...\n", templSourceType)
db, err := readDatabaseForConvert(templSourceType, templSourcePath, templSourceConn)
var db *models.Database
var err error
if len(templFromList) > 0 {
db, err = readDatabaseListForConvert(templSourceType, templFromList)
} else {
db, err = readDatabaseForConvert(templSourceType, templSourcePath, templSourceConn)
}
if err != nil {
return fmt.Errorf("failed to read source: %w", err)
}


@@ -0,0 +1,134 @@
package main
import (
"os"
"path/filepath"
"testing"
)
// writeTestTemplate writes a minimal Go text template file.
func writeTestTemplate(t *testing.T, path string) {
t.Helper()
content := []byte(`{{.Name}}`)
if err := os.WriteFile(path, content, 0644); err != nil {
t.Fatalf("failed to write template file %s: %v", path, err)
}
}
func TestRunTempl_FromListMutuallyExclusiveWithFromPath(t *testing.T) {
saved := saveTemplState()
defer restoreTemplState(saved)
dir := t.TempDir()
file := filepath.Join(dir, "schema.json")
tmpl := filepath.Join(dir, "tmpl.tmpl")
writeTestJSON(t, file, []string{"users"})
writeTestTemplate(t, tmpl)
templSourceType = "json"
templSourcePath = file
templFromList = []string{file}
templTemplatePath = tmpl
templOutputPath = ""
templMode = "database"
templFilenamePattern = "{{.Name}}.txt"
err := runTempl(nil, nil)
if err == nil {
t.Error("expected error when --from-path and --from-list are both set")
}
}
func TestRunTempl_FromListSingleFile(t *testing.T) {
saved := saveTemplState()
defer restoreTemplState(saved)
dir := t.TempDir()
file := filepath.Join(dir, "schema.json")
tmpl := filepath.Join(dir, "tmpl.tmpl")
outFile := filepath.Join(dir, "output.txt")
writeTestJSON(t, file, []string{"users"})
writeTestTemplate(t, tmpl)
templSourceType = "json"
templSourcePath = ""
templSourceConn = ""
templFromList = []string{file}
templTemplatePath = tmpl
templOutputPath = outFile
templSchemaFilter = ""
templMode = "database"
templFilenamePattern = "{{.Name}}.txt"
if err := runTempl(nil, nil); err != nil {
t.Fatalf("runTempl() error = %v", err)
}
if _, err := os.Stat(outFile); os.IsNotExist(err) {
t.Error("expected output file to be created")
}
}
func TestRunTempl_FromListMultipleFiles(t *testing.T) {
saved := saveTemplState()
defer restoreTemplState(saved)
dir := t.TempDir()
file1 := filepath.Join(dir, "users.json")
file2 := filepath.Join(dir, "posts.json")
tmpl := filepath.Join(dir, "tmpl.tmpl")
outFile := filepath.Join(dir, "output.txt")
writeTestJSON(t, file1, []string{"users"})
writeTestJSON(t, file2, []string{"posts"})
writeTestTemplate(t, tmpl)
templSourceType = "json"
templSourcePath = ""
templSourceConn = ""
templFromList = []string{file1, file2}
templTemplatePath = tmpl
templOutputPath = outFile
templSchemaFilter = ""
templMode = "database"
templFilenamePattern = "{{.Name}}.txt"
if err := runTempl(nil, nil); err != nil {
t.Fatalf("runTempl() error = %v", err)
}
if _, err := os.Stat(outFile); os.IsNotExist(err) {
t.Error("expected output file to be created")
}
}
func TestRunTempl_FromListPathWithSpaces(t *testing.T) {
saved := saveTemplState()
defer restoreTemplState(saved)
spacedDir := filepath.Join(t.TempDir(), "my schema files")
if err := os.MkdirAll(spacedDir, 0755); err != nil {
t.Fatal(err)
}
file1 := filepath.Join(spacedDir, "users schema.json")
file2 := filepath.Join(spacedDir, "posts schema.json")
tmpl := filepath.Join(spacedDir, "my template.tmpl")
outFile := filepath.Join(spacedDir, "output file.txt")
writeTestJSON(t, file1, []string{"users"})
writeTestJSON(t, file2, []string{"posts"})
writeTestTemplate(t, tmpl)
templSourceType = "json"
templSourcePath = ""
templSourceConn = ""
templFromList = []string{file1, file2}
templTemplatePath = tmpl
templOutputPath = outFile
templSchemaFilter = ""
templMode = "database"
templFilenamePattern = "{{.Name}}.txt"
if err := runTempl(nil, nil); err != nil {
t.Fatalf("runTempl() with spaced paths error = %v", err)
}
if _, err := os.Stat(outFile); os.IsNotExist(err) {
t.Error("expected output file to be created")
}
}


@@ -0,0 +1,219 @@
package main
import (
"encoding/json"
"os"
"testing"
)
// minimalColumn is used to build test JSON fixtures.
type minimalColumn struct {
Name string `json:"name"`
Table string `json:"table"`
Schema string `json:"schema"`
Type string `json:"type"`
NotNull bool `json:"not_null"`
IsPrimaryKey bool `json:"is_primary_key"`
AutoIncrement bool `json:"auto_increment"`
}
type minimalTable struct {
Name string `json:"name"`
Schema string `json:"schema"`
Columns map[string]minimalColumn `json:"columns"`
}
type minimalSchema struct {
Name string `json:"name"`
Tables []minimalTable `json:"tables"`
}
type minimalDatabase struct {
Name string `json:"name"`
Schemas []minimalSchema `json:"schemas"`
}
// writeTestJSON writes a minimal JSON database file with one schema ("public")
// containing tables with the given names. Each table has a single "id" PK column.
func writeTestJSON(t *testing.T, path string, tableNames []string) {
t.Helper()
tables := make([]minimalTable, len(tableNames))
for i, name := range tableNames {
tables[i] = minimalTable{
Name: name,
Schema: "public",
Columns: map[string]minimalColumn{
"id": {
Name: "id",
Table: name,
Schema: "public",
Type: "bigint",
NotNull: true,
IsPrimaryKey: true,
AutoIncrement: true,
},
},
}
}
db := minimalDatabase{
Name: "test_db",
Schemas: []minimalSchema{{Name: "public", Tables: tables}},
}
data, err := json.Marshal(db)
if err != nil {
t.Fatalf("failed to marshal test JSON: %v", err)
}
if err := os.WriteFile(path, data, 0644); err != nil {
t.Fatalf("failed to write test file %s: %v", path, err)
}
}
// convertState captures and restores all convert global vars.
type convertState struct {
sourceType string
sourcePath string
sourceConn string
fromList []string
targetType string
targetPath string
packageName string
schemaFilter string
flattenSchema bool
}
func saveConvertState() convertState {
return convertState{
sourceType: convertSourceType,
sourcePath: convertSourcePath,
sourceConn: convertSourceConn,
fromList: convertFromList,
targetType: convertTargetType,
targetPath: convertTargetPath,
packageName: convertPackageName,
schemaFilter: convertSchemaFilter,
flattenSchema: convertFlattenSchema,
}
}
func restoreConvertState(s convertState) {
convertSourceType = s.sourceType
convertSourcePath = s.sourcePath
convertSourceConn = s.sourceConn
convertFromList = s.fromList
convertTargetType = s.targetType
convertTargetPath = s.targetPath
convertPackageName = s.packageName
convertSchemaFilter = s.schemaFilter
convertFlattenSchema = s.flattenSchema
}
// templState captures and restores all templ global vars.
type templState struct {
sourceType string
sourcePath string
sourceConn string
fromList []string
templatePath string
outputPath string
schemaFilter string
mode string
filenamePattern string
}
func saveTemplState() templState {
return templState{
sourceType: templSourceType,
sourcePath: templSourcePath,
sourceConn: templSourceConn,
fromList: templFromList,
templatePath: templTemplatePath,
outputPath: templOutputPath,
schemaFilter: templSchemaFilter,
mode: templMode,
filenamePattern: templFilenamePattern,
}
}
func restoreTemplState(s templState) {
templSourceType = s.sourceType
templSourcePath = s.sourcePath
templSourceConn = s.sourceConn
templFromList = s.fromList
templTemplatePath = s.templatePath
templOutputPath = s.outputPath
templSchemaFilter = s.schemaFilter
templMode = s.mode
templFilenamePattern = s.filenamePattern
}
// mergeState captures and restores all merge global vars.
type mergeState struct {
targetType string
targetPath string
targetConn string
sourceType string
sourcePath string
sourceConn string
fromList []string
outputType string
outputPath string
outputConn string
skipDomains bool
skipRelations bool
skipEnums bool
skipViews bool
skipSequences bool
skipTables string
verbose bool
reportPath string
flattenSchema bool
}
func saveMergeState() mergeState {
return mergeState{
targetType: mergeTargetType,
targetPath: mergeTargetPath,
targetConn: mergeTargetConn,
sourceType: mergeSourceType,
sourcePath: mergeSourcePath,
sourceConn: mergeSourceConn,
fromList: mergeFromList,
outputType: mergeOutputType,
outputPath: mergeOutputPath,
outputConn: mergeOutputConn,
skipDomains: mergeSkipDomains,
skipRelations: mergeSkipRelations,
skipEnums: mergeSkipEnums,
skipViews: mergeSkipViews,
skipSequences: mergeSkipSequences,
skipTables: mergeSkipTables,
verbose: mergeVerbose,
reportPath: mergeReportPath,
flattenSchema: mergeFlattenSchema,
}
}
func restoreMergeState(s mergeState) {
mergeTargetType = s.targetType
mergeTargetPath = s.targetPath
mergeTargetConn = s.targetConn
mergeSourceType = s.sourceType
mergeSourcePath = s.sourcePath
mergeSourceConn = s.sourceConn
mergeFromList = s.fromList
mergeOutputType = s.outputType
mergeOutputPath = s.outputPath
mergeOutputConn = s.outputConn
mergeSkipDomains = s.skipDomains
mergeSkipRelations = s.skipRelations
mergeSkipEnums = s.skipEnums
mergeSkipViews = s.skipViews
mergeSkipSequences = s.skipSequences
mergeSkipTables = s.skipTables
mergeVerbose = s.verbose
mergeReportPath = s.reportPath
mergeFlattenSchema = s.flattenSchema
}

linux/arch/PKGBUILD

@@ -0,0 +1,35 @@
# Maintainer: Hein (Warky Devs) <hein@warky.dev>
pkgname=relspec
pkgver=1.0.44
pkgrel=1
pkgdesc="RelSpec is a comprehensive database relations management tool that reads, transforms, and writes database table specifications across multiple formats and ORMs."
arch=('x86_64' 'aarch64')
url="https://git.warky.dev/wdevs/relspecgo"
license=('MIT')
makedepends=('go')
source=("$pkgname-$pkgver.zip::$url/archive/v$pkgver.zip")
sha256sums=('SKIP')
build() {
cd "relspecgo"
export CGO_ENABLED=0
go build \
-trimpath \
-ldflags "-X git.warky.dev/wdevs/relspecgo/cmd/relspec.version=$pkgver" \
-o "$pkgname" ./cmd/relspec
}
check() {
cd "relspecgo"
go test ./...
}
package() {
cd "relspecgo"
# Binary
install -Dm755 "$pkgname" "$pkgdir/usr/bin/$pkgname"
# Default config dir
install -dm755 "$pkgdir/etc/relspec"
}

linux/centos/relspec.spec

@@ -0,0 +1,43 @@
Name: relspec
Version: 1.0.44
Release: 1%{?dist}
Summary: RelSpec is a comprehensive database relations management tool that reads, transforms, and writes database table specifications across multiple formats and ORMs.
License: MIT
URL: https://git.warky.dev/wdevs/relspecgo
Source0: %{name}-%{version}.tar.gz
BuildRequires: golang >= 1.24
%global debug_package %{nil}
%define _debugsource_packages 0
%define _debuginfo_subpackages 0
%description
RelSpec provides bidirectional conversion between various database schema
formats including PostgreSQL, MySQL, SQLite, Prisma, TypeORM, GORM, Drizzle,
DBML, GraphQL, and more.
%prep
%autosetup
%build
export CGO_ENABLED=0
go build \
-trimpath \
-ldflags "-X git.warky.dev/wdevs/relspecgo/cmd/relspec.version=%{version}" \
-o %{name} ./cmd/relspec
%install
install -Dm755 %{name} %{buildroot}%{_bindir}/%{name}
install -Dm644 LICENSE %{buildroot}%{_licensedir}/%{name}/LICENSE
install -dm755 %{buildroot}%{_sysconfdir}/relspec
%files
%license LICENSE
%{_bindir}/%{name}
%dir %{_sysconfdir}/relspec
%changelog
* Wed Apr 08 2026 Hein (Warky Devs) <hein@warky.dev> - 1.0.42-1
- Initial package

linux/debian/control

@@ -0,0 +1,11 @@
Package: relspec
Version: VERSION
Architecture: ARCH
Maintainer: Hein (Warky Devs) <hein@warky.dev>
Section: database
Priority: optional
Homepage: https://git.warky.dev/wdevs/relspecgo
Description: Database schema conversion and analysis tool
RelSpec provides bidirectional conversion between various database schema
formats including PostgreSQL, MySQL, SQLite, Prisma, TypeORM, GORM, Drizzle,
DBML, GraphQL, and more.

@@ -60,19 +60,19 @@ func (f *MarkdownFormatter) Format(report *InspectorReport) (string, error) {
// Summary
sb.WriteString(f.formatHeader("Summary"))
sb.WriteString("\n")
sb.WriteString(fmt.Sprintf("- Rules Checked: %d\n", report.Summary.RulesChecked))
fmt.Fprintf(&sb, "- Rules Checked: %d\n", report.Summary.RulesChecked)
// Color-code error and warning counts
if report.Summary.ErrorCount > 0 {
sb.WriteString(f.colorize(fmt.Sprintf("- Errors: %d\n", report.Summary.ErrorCount), colorRed))
} else {
sb.WriteString(fmt.Sprintf("- Errors: %d\n", report.Summary.ErrorCount))
fmt.Fprintf(&sb, "- Errors: %d\n", report.Summary.ErrorCount)
}
if report.Summary.WarningCount > 0 {
sb.WriteString(f.colorize(fmt.Sprintf("- Warnings: %d\n", report.Summary.WarningCount), colorYellow))
} else {
sb.WriteString(fmt.Sprintf("- Warnings: %d\n", report.Summary.WarningCount))
fmt.Fprintf(&sb, "- Warnings: %d\n", report.Summary.WarningCount)
}
if report.Summary.PassedCount > 0 {

@@ -832,7 +832,11 @@ func (r *Reader) parseRef(refStr string) *models.Constraint {
for _, action := range actionList {
action = strings.TrimSpace(action)
if strings.HasPrefix(action, "ondelete:") {
if strings.HasPrefix(action, "delete:") {
constraint.OnDelete = strings.TrimSpace(strings.TrimPrefix(action, "delete:"))
} else if strings.HasPrefix(action, "update:") {
constraint.OnUpdate = strings.TrimSpace(strings.TrimPrefix(action, "update:"))
} else if strings.HasPrefix(action, "ondelete:") {
constraint.OnDelete = strings.TrimSpace(strings.TrimPrefix(action, "ondelete:"))
} else if strings.HasPrefix(action, "onupdate:") {
constraint.OnUpdate = strings.TrimSpace(strings.TrimPrefix(action, "onupdate:"))

@@ -231,14 +231,13 @@ func (r *Reader) queryColumns(schemaName string) (map[string]map[string]*models.
}
column := models.InitColumn(columnName, tableName, schema)
column.Type = r.mapDataType(dataType, udtName)
column.NotNull = (isNullable == "NO")
column.Sequence = uint(ordinalPosition)
// Check if this is a serial type (has nextval default)
hasNextval := false
if columnDefault != nil {
// Parse default value - remove nextval for sequences
defaultVal := *columnDefault
if strings.HasPrefix(defaultVal, "nextval") {
hasNextval = true
column.AutoIncrement = true
column.Default = defaultVal
} else {
@@ -246,6 +245,11 @@ func (r *Reader) queryColumns(schemaName string) (map[string]map[string]*models.
}
}
// Map data type, preserving serial types when detected
column.Type = r.mapDataType(dataType, udtName, hasNextval)
column.NotNull = (isNullable == "NO")
column.Sequence = uint(ordinalPosition)
if description != nil {
column.Description = *description
}

@@ -3,6 +3,7 @@ package pgsql
import (
"context"
"fmt"
"strings"
"github.com/jackc/pgx/v5"
@@ -259,33 +260,46 @@ func (r *Reader) close() {
}
// mapDataType maps PostgreSQL data types to canonical types
func (r *Reader) mapDataType(pgType, udtName string) string {
func (r *Reader) mapDataType(pgType, udtName string, hasNextval bool) string {
// If the column has a nextval default, it's likely a serial type
// Map to the appropriate serial type instead of the base integer type
if hasNextval {
switch strings.ToLower(pgType) {
case "integer", "int", "int4":
return "serial"
case "bigint", "int8":
return "bigserial"
case "smallint", "int2":
return "smallserial"
}
}
// Map common PostgreSQL types
typeMap := map[string]string{
"integer": "int",
"bigint": "int64",
"smallint": "int16",
"int": "int",
"int2": "int16",
"int4": "int",
"int8": "int64",
"serial": "int",
"bigserial": "int64",
"smallserial": "int16",
"numeric": "decimal",
"integer": "integer",
"bigint": "bigint",
"smallint": "smallint",
"int": "integer",
"int2": "smallint",
"int4": "integer",
"int8": "bigint",
"serial": "serial",
"bigserial": "bigserial",
"smallserial": "smallserial",
"numeric": "numeric",
"decimal": "decimal",
"real": "float32",
"double precision": "float64",
"float4": "float32",
"float8": "float64",
"money": "decimal",
"character varying": "string",
"varchar": "string",
"character": "string",
"char": "string",
"text": "string",
"boolean": "bool",
"bool": "bool",
"real": "real",
"double precision": "double precision",
"float4": "real",
"float8": "double precision",
"money": "money",
"character varying": "varchar",
"varchar": "varchar",
"character": "char",
"char": "char",
"text": "text",
"boolean": "boolean",
"bool": "boolean",
"date": "date",
"time": "time",
"time without time zone": "time",

@@ -177,20 +177,20 @@ func TestMapDataType(t *testing.T) {
udtName string
expected string
}{
{"integer", "int4", "int"},
{"bigint", "int8", "int64"},
{"smallint", "int2", "int16"},
{"character varying", "varchar", "string"},
{"text", "text", "string"},
{"boolean", "bool", "bool"},
{"integer", "int4", "integer"},
{"bigint", "int8", "bigint"},
{"smallint", "int2", "smallint"},
{"character varying", "varchar", "varchar"},
{"text", "text", "text"},
{"boolean", "bool", "boolean"},
{"timestamp without time zone", "timestamp", "timestamp"},
{"timestamp with time zone", "timestamptz", "timestamptz"},
{"json", "json", "json"},
{"jsonb", "jsonb", "jsonb"},
{"uuid", "uuid", "uuid"},
{"numeric", "numeric", "decimal"},
{"real", "float4", "float32"},
{"double precision", "float8", "float64"},
{"numeric", "numeric", "numeric"},
{"real", "float4", "real"},
{"double precision", "float8", "double precision"},
{"date", "date", "date"},
{"time without time zone", "time", "time"},
{"bytea", "bytea", "bytea"},
@@ -199,12 +199,31 @@ func TestMapDataType(t *testing.T) {
for _, tt := range tests {
t.Run(tt.pgType, func(t *testing.T) {
result := reader.mapDataType(tt.pgType, tt.udtName)
result := reader.mapDataType(tt.pgType, tt.udtName, false)
if result != tt.expected {
t.Errorf("mapDataType(%s, %s) = %s, expected %s", tt.pgType, tt.udtName, result, tt.expected)
}
})
}
// Test serial type detection with hasNextval=true
serialTests := []struct {
pgType string
expected string
}{
{"integer", "serial"},
{"bigint", "bigserial"},
{"smallint", "smallserial"},
}
for _, tt := range serialTests {
t.Run(tt.pgType+"_with_nextval", func(t *testing.T) {
result := reader.mapDataType(tt.pgType, "", true)
if result != tt.expected {
t.Errorf("mapDataType(%s, '', true) = %s, expected %s", tt.pgType, result, tt.expected)
}
})
}
}
func TestParseIndexDefinition(t *testing.T) {

@@ -216,6 +216,21 @@ func resolveFieldNameCollision(fieldName string) string {
return fieldName
}
// sortConstraints sorts constraints by sequence, then by name
func sortConstraints(constraints map[string]*models.Constraint) []*models.Constraint {
result := make([]*models.Constraint, 0, len(constraints))
for _, c := range constraints {
result = append(result, c)
}
sort.Slice(result, func(i, j int) bool {
if result[i].Sequence > 0 && result[j].Sequence > 0 {
return result[i].Sequence < result[j].Sequence
}
return result[i].Name < result[j].Name
})
return result
}
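The ordering rule above (sequence wins only when both constraints carry one, otherwise fall back to name) can be exercised in isolation. The `Constraint` type here is a minimal stand-in for `models.Constraint`, reduced to the two fields the comparator touches:

```go
package main

import (
	"fmt"
	"sort"
)

// Minimal stand-in for models.Constraint (assumption: only the
// fields the comparator reads).
type Constraint struct {
	Name     string
	Sequence uint
}

// Same comparator as sortConstraints: compare by Sequence only when
// both sides have a non-zero sequence; otherwise compare by Name.
func sortBySeqThenName(cs []*Constraint) {
	sort.Slice(cs, func(i, j int) bool {
		if cs[i].Sequence > 0 && cs[j].Sequence > 0 {
			return cs[i].Sequence < cs[j].Sequence
		}
		return cs[i].Name < cs[j].Name
	})
}

func main() {
	cs := []*Constraint{
		{Name: "fk_b", Sequence: 2},
		{Name: "fk_a"}, // no sequence: ordered by name
		{Name: "fk_c", Sequence: 1},
	}
	sortBySeqThenName(cs)
	for _, c := range cs {
		fmt.Println(c.Name) // fk_a, fk_c, fk_b
	}
}
```

An unsequenced constraint is compared by name against everything, including sequenced ones, which is why `fk_a` sorts first here even though `fk_c` has the lowest sequence.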
// sortColumns sorts columns by sequence, then by name
func sortColumns(columns map[string]*models.Column) []*models.Column {
result := make([]*models.Column, 0, len(columns))

@@ -62,6 +62,17 @@ func (tm *TypeMapper) isSimpleType(sqlType string) bool {
return simpleTypes[sqlType]
}
// isSerialType checks if a SQL type is a serial type (auto-incrementing)
func (tm *TypeMapper) isSerialType(sqlType string) bool {
baseType := tm.extractBaseType(sqlType)
serialTypes := map[string]bool{
"serial": true,
"bigserial": true,
"smallserial": true,
}
return serialTypes[baseType]
}
// baseGoType returns the base Go type for a SQL type (not null, simple types only)
func (tm *TypeMapper) baseGoType(sqlType string) string {
typeMap := map[string]string{
@@ -122,10 +133,10 @@ func (tm *TypeMapper) bunGoType(sqlType string) string {
"decimal": tm.sqlTypesAlias + ".SqlFloat64",
// Date/Time types
"timestamp": tm.sqlTypesAlias + ".SqlTime",
"timestamp without time zone": tm.sqlTypesAlias + ".SqlTime",
"timestamp with time zone": tm.sqlTypesAlias + ".SqlTime",
"timestamptz": tm.sqlTypesAlias + ".SqlTime",
"timestamp": tm.sqlTypesAlias + ".SqlTimeStamp",
"timestamp without time zone": tm.sqlTypesAlias + ".SqlTimeStamp",
"timestamp with time zone": tm.sqlTypesAlias + ".SqlTimeStamp",
"timestamptz": tm.sqlTypesAlias + ".SqlTimeStamp",
"date": tm.sqlTypesAlias + ".SqlDate",
"time": tm.sqlTypesAlias + ".SqlTime",
"time without time zone": tm.sqlTypesAlias + ".SqlTime",
@@ -190,10 +201,15 @@ func (tm *TypeMapper) BuildBunTag(column *models.Column, table *models.Table) st
parts = append(parts, "pk")
}
// Auto increment (for serial types or explicit auto_increment)
if column.AutoIncrement || tm.isSerialType(column.Type) {
parts = append(parts, "autoincrement")
}
// Default value
if column.Default != nil {
// Sanitize default value to remove backticks
safeDefault := writers.SanitizeStructTagValue(fmt.Sprintf("%v", column.Default))
// Sanitize default value to remove backticks, then quote based on column type
safeDefault := writers.QuoteDefaultValue(writers.SanitizeStructTagValue(fmt.Sprintf("%v", column.Default)), column.Type)
parts = append(parts, fmt.Sprintf("default:%s", safeDefault))
}
@@ -251,7 +267,15 @@ func (tm *TypeMapper) BuildRelationshipTag(constraint *models.Constraint, relTyp
if len(constraint.Columns) > 0 && len(constraint.ReferencedColumns) > 0 {
localCol := constraint.Columns[0]
foreignCol := constraint.ReferencedColumns[0]
parts = append(parts, fmt.Sprintf("join:%s=%s", localCol, foreignCol))
// For has-many relationships, swap the columns
// has-one: join:fk_in_this_table=pk_in_other_table
// has-many: join:pk_in_this_table=fk_in_other_table
if relType == "has-many" {
parts = append(parts, fmt.Sprintf("join:%s=%s", foreignCol, localCol))
} else {
parts = append(parts, fmt.Sprintf("join:%s=%s", localCol, foreignCol))
}
}
return strings.Join(parts, ",")
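The direction swap above can be sketched as a standalone helper (`joinTag` is a hypothetical name, not the actual `BuildRelationshipTag` signature): for has-one the local foreign key points at the other table's key, while for has-many the columns come from the other table's constraint and must be flipped so this table's key appears on the left.

```go
package main

import "fmt"

// Sketch of the join-tag direction rule:
//   has-one:  join:fk_in_this_table=pk_in_other_table
//   has-many: join:pk_in_this_table=fk_in_other_table
func joinTag(relType, localCol, foreignCol string) string {
	if relType == "has-many" {
		// Swap: the "local" column is the FK sitting in the other table.
		return fmt.Sprintf("join:%s=%s", foreignCol, localCol)
	}
	return fmt.Sprintf("join:%s=%s", localCol, foreignCol)
}

func main() {
	fmt.Println(joinTag("has-one", "rid_filepointer_request", "id_filepointer"))
	// join:rid_filepointer_request=id_filepointer
	fmt.Println(joinTag("has-many", "rid_filepointer_request", "id_filepointer"))
	// join:id_filepointer=rid_filepointer_request
}
```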

@@ -242,7 +242,7 @@ func (w *Writer) addRelationshipFields(modelData *ModelData, table *models.Table
usedFieldNames := make(map[string]int)
// For each foreign key in this table, add a belongs-to/has-one relationship
for _, constraint := range table.Constraints {
for _, constraint := range sortConstraints(table.Constraints) {
if constraint.Type != models.ForeignKeyConstraint {
continue
}
@@ -275,7 +275,7 @@ func (w *Writer) addRelationshipFields(modelData *ModelData, table *models.Table
continue // Skip self
}
for _, constraint := range otherTable.Constraints {
for _, constraint := range sortConstraints(otherTable.Constraints) {
if constraint.Type != models.ForeignKeyConstraint {
continue
}

@@ -90,8 +90,8 @@ func TestWriter_WriteTable(t *testing.T) {
}
// Verify Bun-specific elements
if !strings.Contains(generated, "bun:\"id,type:bigint,pk,") {
t.Errorf("Missing Bun-style primary key tag")
if !strings.Contains(generated, "bun:\"id,type:bigint,pk,autoincrement,") {
t.Errorf("Missing Bun-style primary key tag with autoincrement")
}
}
@@ -308,14 +308,20 @@ func TestWriter_MultipleReferencesToSameTable(t *testing.T) {
filepointerStr := string(filepointerContent)
// Should have two different has-many relationships with unique names
hasManyExpectations := []string{
"RelRIDFilepointerRequestOrgAPIEvents", // Has many via rid_filepointer_request
"RelRIDFilepointerResponseOrgAPIEvents", // Has many via rid_filepointer_response
hasManyExpectations := []struct {
fieldName string
tag string
}{
{"RelRIDFilepointerRequestOrgAPIEvents", "join:id_filepointer=rid_filepointer_request"}, // Has many via rid_filepointer_request
{"RelRIDFilepointerResponseOrgAPIEvents", "join:id_filepointer=rid_filepointer_response"}, // Has many via rid_filepointer_response
}
for _, exp := range hasManyExpectations {
if !strings.Contains(filepointerStr, exp) {
t.Errorf("Missing has-many relationship field: %s\nGenerated:\n%s", exp, filepointerStr)
if !strings.Contains(filepointerStr, exp.fieldName) {
t.Errorf("Missing has-many relationship field: %s\nGenerated:\n%s", exp.fieldName, filepointerStr)
}
if !strings.Contains(filepointerStr, exp.tag) {
t.Errorf("Missing has-many relationship join tag: %s\nGenerated:\n%s", exp.tag, filepointerStr)
}
}
}
@@ -455,10 +461,10 @@ func TestWriter_MultipleHasManyRelationships(t *testing.T) {
// Verify all has-many relationships have unique names
hasManyExpectations := []string{
"RelRIDAPIProviderOrgLogins", // Has many via Login
"RelRIDAPIProviderOrgLogins", // Has many via Login
"RelRIDAPIProviderOrgFilepointers", // Has many via Filepointer
"RelRIDAPIProviderOrgAPIEvents", // Has many via APIEvent
"RelRIDOwner", // Has one via rid_owner
"RelRIDAPIProviderOrgAPIEvents", // Has many via APIEvent
"RelRIDOwner", // Has one via rid_owner
}
for _, exp := range hasManyExpectations {
@@ -561,8 +567,8 @@ func TestTypeMapper_SQLTypeToGoType_Bun(t *testing.T) {
{"bigint", false, "resolvespec_common.SqlInt64"},
{"varchar", true, "resolvespec_common.SqlString"}, // Bun uses sql types even for NOT NULL strings
{"varchar", false, "resolvespec_common.SqlString"},
{"timestamp", true, "resolvespec_common.SqlTime"},
{"timestamp", false, "resolvespec_common.SqlTime"},
{"timestamp", true, "resolvespec_common.SqlTimeStamp"},
{"timestamp", false, "resolvespec_common.SqlTimeStamp"},
{"date", false, "resolvespec_common.SqlDate"},
{"boolean", true, "bool"},
{"boolean", false, "resolvespec_common.SqlBool"},
@@ -609,14 +615,75 @@ func TestTypeMapper_BuildBunTag(t *testing.T) {
want: []string{"email,", "type:varchar(255),", "nullzero,"},
},
{
name: "with default",
name: "with default string",
column: &models.Column{
Name: "status",
Type: "text",
NotNull: true,
Default: "active",
},
want: []string{"status,", "type:text,", "default:active,"},
want: []string{"status,", "type:text,", "default:'active',"},
},
{
name: "with default integer",
column: &models.Column{
Name: "retries",
Type: "integer",
NotNull: true,
Default: "0",
},
want: []string{"retries,", "type:integer,", "default:0,"},
},
{
name: "with default boolean",
column: &models.Column{
Name: "active",
Type: "boolean",
NotNull: true,
Default: "true",
},
want: []string{"active,", "type:boolean,", "default:true,"},
},
{
name: "with default function call",
column: &models.Column{
Name: "created_at",
Type: "timestamp",
NotNull: true,
Default: "now()",
},
want: []string{"created_at,", "type:timestamp,", "default:now(),"},
},
{
name: "auto increment with AutoIncrement flag",
column: &models.Column{
Name: "id",
Type: "bigint",
NotNull: true,
IsPrimaryKey: true,
AutoIncrement: true,
},
want: []string{"id,", "type:bigint,", "pk,", "autoincrement,"},
},
{
name: "serial type (auto-increment)",
column: &models.Column{
Name: "id",
Type: "serial",
NotNull: true,
IsPrimaryKey: true,
},
want: []string{"id,", "type:serial,", "pk,", "autoincrement,"},
},
{
name: "bigserial type (auto-increment)",
column: &models.Column{
Name: "id",
Type: "bigserial",
NotNull: true,
IsPrimaryKey: true,
},
want: []string{"id,", "type:bigserial,", "pk,", "autoincrement,"},
},
}

@@ -62,10 +62,10 @@ func (w *Writer) databaseToDBML(d *models.Database) string {
var sb strings.Builder
if d.Description != "" {
sb.WriteString(fmt.Sprintf("// %s\n", d.Description))
fmt.Fprintf(&sb, "// %s\n", d.Description)
}
if d.Comment != "" {
sb.WriteString(fmt.Sprintf("// %s\n", d.Comment))
fmt.Fprintf(&sb, "// %s\n", d.Comment)
}
if d.Description != "" || d.Comment != "" {
sb.WriteString("\n")
@@ -94,7 +94,7 @@ func (w *Writer) schemaToDBML(schema *models.Schema) string {
var sb strings.Builder
if schema.Description != "" {
sb.WriteString(fmt.Sprintf("// Schema: %s - %s\n", schema.Name, schema.Description))
fmt.Fprintf(&sb, "// Schema: %s - %s\n", schema.Name, schema.Description)
}
for _, table := range schema.Tables {
@@ -110,10 +110,10 @@ func (w *Writer) tableToDBML(t *models.Table) string {
var sb strings.Builder
tableName := fmt.Sprintf("%s.%s", t.Schema, t.Name)
sb.WriteString(fmt.Sprintf("Table %s {\n", tableName))
fmt.Fprintf(&sb, "Table %s {\n", tableName)
for _, column := range t.Columns {
sb.WriteString(fmt.Sprintf(" %s %s", column.Name, column.Type))
fmt.Fprintf(&sb, " %s %s", column.Name, column.Type)
var attrs []string
if column.IsPrimaryKey {
@@ -138,11 +138,11 @@ func (w *Writer) tableToDBML(t *models.Table) string {
}
if len(attrs) > 0 {
sb.WriteString(fmt.Sprintf(" [%s]", strings.Join(attrs, ", ")))
fmt.Fprintf(&sb, " [%s]", strings.Join(attrs, ", "))
}
if column.Comment != "" {
sb.WriteString(fmt.Sprintf(" // %s", column.Comment))
fmt.Fprintf(&sb, " // %s", column.Comment)
}
sb.WriteString("\n")
}
@@ -161,9 +161,9 @@ func (w *Writer) tableToDBML(t *models.Table) string {
indexAttrs = append(indexAttrs, fmt.Sprintf("type: %s", index.Type))
}
sb.WriteString(fmt.Sprintf(" (%s)", strings.Join(index.Columns, ", ")))
fmt.Fprintf(&sb, " (%s)", strings.Join(index.Columns, ", "))
if len(indexAttrs) > 0 {
sb.WriteString(fmt.Sprintf(" [%s]", strings.Join(indexAttrs, ", ")))
fmt.Fprintf(&sb, " [%s]", strings.Join(indexAttrs, ", "))
}
sb.WriteString("\n")
}
@@ -172,7 +172,7 @@ func (w *Writer) tableToDBML(t *models.Table) string {
note := strings.TrimSpace(t.Description + " " + t.Comment)
if note != "" {
sb.WriteString(fmt.Sprintf("\n Note: '%s'\n", note))
fmt.Fprintf(&sb, "\n Note: '%s'\n", note)
}
sb.WriteString("}\n")

@@ -213,6 +213,21 @@ func resolveFieldNameCollision(fieldName string) string {
return fieldName
}
// sortConstraints sorts constraints by sequence, then by name
func sortConstraints(constraints map[string]*models.Constraint) []*models.Constraint {
result := make([]*models.Constraint, 0, len(constraints))
for _, c := range constraints {
result = append(result, c)
}
sort.Slice(result, func(i, j int) bool {
if result[i].Sequence > 0 && result[j].Sequence > 0 {
return result[i].Sequence < result[j].Sequence
}
return result[i].Name < result[j].Name
})
return result
}
// sortColumns sorts columns by sequence, then by name
func sortColumns(columns map[string]*models.Column) []*models.Column {
result := make([]*models.Column, 0, len(columns))

@@ -158,10 +158,10 @@ func (tm *TypeMapper) nullableGoType(sqlType string) string {
"decimal": tm.sqlTypesAlias + ".SqlFloat64",
// Date/Time types
"timestamp": tm.sqlTypesAlias + ".SqlTime",
"timestamp without time zone": tm.sqlTypesAlias + ".SqlTime",
"timestamp with time zone": tm.sqlTypesAlias + ".SqlTime",
"timestamptz": tm.sqlTypesAlias + ".SqlTime",
"timestamp": tm.sqlTypesAlias + ".SqlTimeStamp",
"timestamp without time zone": tm.sqlTypesAlias + ".SqlTimeStamp",
"timestamp with time zone": tm.sqlTypesAlias + ".SqlTimeStamp",
"timestamptz": tm.sqlTypesAlias + ".SqlTimeStamp",
"date": tm.sqlTypesAlias + ".SqlDate",
"time": tm.sqlTypesAlias + ".SqlTime",
"time without time zone": tm.sqlTypesAlias + ".SqlTime",
@@ -238,8 +238,8 @@ func (tm *TypeMapper) BuildGormTag(column *models.Column, table *models.Table) s
// Default value
if column.Default != nil {
// Sanitize default value to remove backticks
safeDefault := writers.SanitizeStructTagValue(fmt.Sprintf("%v", column.Default))
// Sanitize default value to remove backticks, then quote based on column type
safeDefault := writers.QuoteDefaultValue(writers.SanitizeStructTagValue(fmt.Sprintf("%v", column.Default)), column.Type)
parts = append(parts, fmt.Sprintf("default:%s", safeDefault))
}

@@ -236,7 +236,7 @@ func (w *Writer) addRelationshipFields(modelData *ModelData, table *models.Table
usedFieldNames := make(map[string]int)
// For each foreign key in this table, add a belongs-to relationship
for _, constraint := range table.Constraints {
for _, constraint := range sortConstraints(table.Constraints) {
if constraint.Type != models.ForeignKeyConstraint {
continue
}
@@ -269,7 +269,7 @@ func (w *Writer) addRelationshipFields(modelData *ModelData, table *models.Table
continue // Skip self
}
for _, constraint := range otherTable.Constraints {
for _, constraint := range sortConstraints(otherTable.Constraints) {
if constraint.Type != models.ForeignKeyConstraint {
continue
}

@@ -655,7 +655,7 @@ func TestTypeMapper_SQLTypeToGoType(t *testing.T) {
{"varchar", true, "string"},
{"varchar", false, "sql_types.SqlString"},
{"timestamp", true, "time.Time"},
{"timestamp", false, "sql_types.SqlTime"},
{"timestamp", false, "sql_types.SqlTimeStamp"},
{"boolean", true, "bool"},
{"boolean", false, "sql_types.SqlBool"},
}

@@ -52,7 +52,7 @@ func (w *Writer) databaseToGraphQL(db *models.Database) string {
if w.shouldIncludeComments() {
sb.WriteString("# Generated GraphQL Schema\n")
if db.Name != "" {
sb.WriteString(fmt.Sprintf("# Database: %s\n", db.Name))
fmt.Fprintf(&sb, "# Database: %s\n", db.Name)
}
sb.WriteString("\n")
}
@@ -62,7 +62,7 @@ func (w *Writer) databaseToGraphQL(db *models.Database) string {
scalars := w.collectCustomScalars(db)
if len(scalars) > 0 {
for _, scalar := range scalars {
sb.WriteString(fmt.Sprintf("scalar %s\n", scalar))
fmt.Fprintf(&sb, "scalar %s\n", scalar)
}
sb.WriteString("\n")
}
@@ -176,9 +176,9 @@ func (w *Writer) isJoinTable(table *models.Table) bool {
func (w *Writer) enumToGraphQL(enum *models.Enum) string {
var sb strings.Builder
sb.WriteString(fmt.Sprintf("enum %s {\n", enum.Name))
fmt.Fprintf(&sb, "enum %s {\n", enum.Name)
for _, value := range enum.Values {
sb.WriteString(fmt.Sprintf(" %s\n", value))
fmt.Fprintf(&sb, " %s\n", value)
}
sb.WriteString("}\n")
@@ -197,10 +197,10 @@ func (w *Writer) tableToGraphQL(table *models.Table, db *models.Database, schema
if desc == "" {
desc = table.Comment
}
sb.WriteString(fmt.Sprintf("# %s\n", desc))
fmt.Fprintf(&sb, "# %s\n", desc)
}
sb.WriteString(fmt.Sprintf("type %s {\n", typeName))
fmt.Fprintf(&sb, "type %s {\n", typeName)
// Collect and categorize fields
var idFields, scalarFields, relationFields []string

@@ -125,9 +125,9 @@ func (w *Writer) generateGenerator() string {
func (w *Writer) enumToPrisma(enum *models.Enum) string {
var sb strings.Builder
sb.WriteString(fmt.Sprintf("enum %s {\n", enum.Name))
fmt.Fprintf(&sb, "enum %s {\n", enum.Name)
for _, value := range enum.Values {
sb.WriteString(fmt.Sprintf(" %s\n", value))
fmt.Fprintf(&sb, " %s\n", value)
}
sb.WriteString("}\n")
@@ -179,7 +179,7 @@ func (w *Writer) identifyJoinTables(schema *models.Schema) map[string]bool {
func (w *Writer) tableToPrisma(table *models.Table, schema *models.Schema, joinTables map[string]bool) string {
var sb strings.Builder
sb.WriteString(fmt.Sprintf("model %s {\n", table.Name))
fmt.Fprintf(&sb, "model %s {\n", table.Name)
// Collect columns to write
columns := make([]*models.Column, 0, len(table.Columns))
@@ -219,11 +219,11 @@ func (w *Writer) columnToField(col *models.Column, table *models.Table, schema *
var sb strings.Builder
// Field name
sb.WriteString(fmt.Sprintf(" %s", col.Name))
fmt.Fprintf(&sb, " %s", col.Name)
// Field type
prismaType := w.sqlTypeToPrisma(col.Type, schema)
sb.WriteString(fmt.Sprintf(" %s", prismaType))
fmt.Fprintf(&sb, " %s", prismaType)
// Optional modifier
if !col.NotNull && !col.IsPrimaryKey {
@@ -413,7 +413,7 @@ func (w *Writer) generateRelationFields(table *models.Table, schema *models.Sche
relationName = relationName[:len(relationName)-1]
}
sb.WriteString(fmt.Sprintf(" %s %s", strings.ToLower(relationName), relationType))
fmt.Fprintf(&sb, " %s %s", strings.ToLower(relationName), relationType)
if isOptional {
sb.WriteString("?")
@@ -479,8 +479,8 @@ func (w *Writer) generateInverseRelations(table *models.Table, schema *models.Sc
if fk.ReferencedTable != table.Name {
// This is the other side
otherSide := fk.ReferencedTable
sb.WriteString(fmt.Sprintf(" %ss %s[]\n",
strings.ToLower(otherSide), otherSide))
fmt.Fprintf(&sb, " %ss %s[]\n",
strings.ToLower(otherSide), otherSide)
break
}
}
@@ -497,8 +497,8 @@ func (w *Writer) generateInverseRelations(table *models.Table, schema *models.Sc
pluralName += "s"
}
sb.WriteString(fmt.Sprintf(" %s %s[]\n",
strings.ToLower(pluralName), otherTable.Name))
fmt.Fprintf(&sb, " %s %s[]\n",
strings.ToLower(pluralName), otherTable.Name)
}
}
}
@@ -530,20 +530,20 @@ func (w *Writer) generateBlockAttributes(table *models.Table) string {
if len(pkCols) > 1 {
sort.Strings(pkCols)
sb.WriteString(fmt.Sprintf(" @@id([%s])\n", strings.Join(pkCols, ", ")))
fmt.Fprintf(&sb, " @@id([%s])\n", strings.Join(pkCols, ", "))
}
// @@unique for multi-column unique constraints
for _, constraint := range table.Constraints {
if constraint.Type == models.UniqueConstraint && len(constraint.Columns) > 1 {
sb.WriteString(fmt.Sprintf(" @@unique([%s])\n", strings.Join(constraint.Columns, ", ")))
fmt.Fprintf(&sb, " @@unique([%s])\n", strings.Join(constraint.Columns, ", "))
}
}
// @@index for indexes
for _, index := range table.Indexes {
if !index.Unique { // Unique indexes are handled by @@unique
sb.WriteString(fmt.Sprintf(" @@index([%s])\n", strings.Join(index.Columns, ", ")))
fmt.Fprintf(&sb, " @@index([%s])\n", strings.Join(index.Columns, ", "))
}
}

@@ -207,7 +207,7 @@ func (w *Writer) tableToEntity(table *models.Table, schema *models.Schema, joinT
// Generate @Entity decorator with options
entityOptions := w.buildEntityOptions(table)
sb.WriteString(fmt.Sprintf("@Entity({\n%s\n})\n", entityOptions))
fmt.Fprintf(&sb, "@Entity({\n%s\n})\n", entityOptions)
// Get class name (from metadata if different from table name)
className := table.Name
@@ -219,7 +219,7 @@ func (w *Writer) tableToEntity(table *models.Table, schema *models.Schema, joinT
}
}
sb.WriteString(fmt.Sprintf("export class %s {\n", className))
fmt.Fprintf(&sb, "export class %s {\n", className)
// Collect and sort columns
columns := make([]*models.Column, 0, len(table.Columns))
@@ -272,7 +272,7 @@ func (w *Writer) viewToEntity(view *models.View) string {
sb.WriteString("})\n")
// Generate class
sb.WriteString(fmt.Sprintf("export class %s {\n", view.Name))
fmt.Fprintf(&sb, "export class %s {\n", view.Name)
// Generate field definitions (without decorators for view fields)
columns := make([]*models.Column, 0, len(view.Columns))
@@ -285,7 +285,7 @@ func (w *Writer) viewToEntity(view *models.View) string {
for _, col := range columns {
tsType := w.sqlTypeToTypeScript(col.Type)
sb.WriteString(fmt.Sprintf(" %s: %s;\n", col.Name, tsType))
fmt.Fprintf(&sb, " %s: %s;\n", col.Name, tsType)
}
sb.WriteString("}\n")
@@ -314,7 +314,7 @@ func (w *Writer) columnToField(col *models.Column, table *models.Table) string {
// Regular @Column decorator
options := w.buildColumnOptions(col, table)
if options != "" {
sb.WriteString(fmt.Sprintf(" @Column({ %s })\n", options))
fmt.Fprintf(&sb, " @Column({ %s })\n", options)
} else {
sb.WriteString(" @Column()\n")
}
@@ -327,7 +327,7 @@ func (w *Writer) columnToField(col *models.Column, table *models.Table) string {
nullable = " | null"
}
sb.WriteString(fmt.Sprintf(" %s: %s%s;", col.Name, tsType, nullable))
fmt.Fprintf(&sb, " %s: %s%s;", col.Name, tsType, nullable)
return sb.String()
}
@@ -464,17 +464,17 @@ func (w *Writer) generateRelationFields(table *models.Table, schema *models.Sche
inverseField := w.findInverseFieldName(table.Name, relatedTable, schema)
if inverseField != "" {
sb.WriteString(fmt.Sprintf(" @ManyToOne(() => %s, %s => %s.%s)\n",
relatedTable, strings.ToLower(relatedTable), strings.ToLower(relatedTable), inverseField))
fmt.Fprintf(&sb, " @ManyToOne(() => %s, %s => %s.%s)\n",
relatedTable, strings.ToLower(relatedTable), strings.ToLower(relatedTable), inverseField)
} else {
if isNullable {
sb.WriteString(fmt.Sprintf(" @ManyToOne(() => %s, { nullable: true })\n", relatedTable))
fmt.Fprintf(&sb, " @ManyToOne(() => %s, { nullable: true })\n", relatedTable)
} else {
sb.WriteString(fmt.Sprintf(" @ManyToOne(() => %s)\n", relatedTable))
fmt.Fprintf(&sb, " @ManyToOne(() => %s)\n", relatedTable)
}
}
sb.WriteString(fmt.Sprintf(" %s: %s%s;\n", fieldName, relatedTable, nullable))
fmt.Fprintf(&sb, " %s: %s%s;\n", fieldName, relatedTable, nullable)
sb.WriteString("\n")
}

@@ -81,6 +81,64 @@ func SanitizeFilename(name string) string {
return name
}
// QuoteDefaultValue wraps a sanitized default value in single quotes when the SQL
// column type requires it (strings, dates, times, UUIDs, enums). Numeric types
// (integers, floats, serials) and boolean types are left unquoted. Function-call
// expressions such as now() or gen_random_uuid() are always left unquoted regardless
// of type, because they contain parentheses.
//
// Examples (varchar): "disconnected" → "'disconnected'"
// Examples (boolean): "true" → "true"
// Examples (bigint): "0" → "0"
// Examples (timestamp): "now()" → "now()" (function call never quoted)
func QuoteDefaultValue(value, sqlType string) string {
// Function calls are never quoted regardless of column type.
if strings.Contains(value, "(") || strings.Contains(value, ")") {
return value
}
// Normalise the SQL type: lowercase, strip length/precision suffix.
baseType := strings.ToLower(strings.TrimSpace(sqlType))
if idx := strings.Index(baseType, "("); idx > 0 {
baseType = baseType[:idx]
}
// Types whose default values must NOT be quoted.
unquotedTypes := map[string]bool{
// Integer types
"integer": true,
"int": true,
"int2": true,
"int4": true,
"int8": true,
"smallint": true,
"bigint": true,
"serial": true,
"smallserial": true,
"bigserial": true,
// Float / numeric types
"real": true,
"float": true,
"float4": true,
"float8": true,
"double precision": true,
"numeric": true,
"decimal": true,
"money": true,
// Boolean
"boolean": true,
"bool": true,
}
if unquotedTypes[baseType] {
return value
}
// Everything else (text, varchar, char, uuid, date, time, timestamp, json, …)
// is treated as a quoted literal.
return "'" + value + "'"
}
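A condensed, runnable version of the quoting rules above (same logic as `QuoteDefaultValue`, with the type list trimmed for brevity): function calls stay bare regardless of type, numeric and boolean defaults stay bare, and everything else becomes a single-quoted literal.

```go
package main

import (
	"fmt"
	"strings"
)

// Condensed copy of QuoteDefaultValue's rules (abbreviated type set).
func quoteDefault(value, sqlType string) string {
	// Function-call expressions are never quoted.
	if strings.ContainsAny(value, "()") {
		return value
	}
	// Normalize type: lowercase, strip length/precision suffix.
	baseType := strings.ToLower(strings.TrimSpace(sqlType))
	if idx := strings.Index(baseType, "("); idx > 0 {
		baseType = baseType[:idx]
	}
	unquoted := map[string]bool{
		"integer": true, "int": true, "bigint": true, "smallint": true,
		"serial": true, "bigserial": true, "smallserial": true,
		"numeric": true, "decimal": true, "real": true, "double precision": true,
		"boolean": true, "bool": true,
	}
	if unquoted[baseType] {
		return value
	}
	return "'" + value + "'"
}

func main() {
	fmt.Println(quoteDefault("disconnected", "varchar(64)")) // 'disconnected'
	fmt.Println(quoteDefault("0", "bigint"))                 // 0
	fmt.Println(quoteDefault("now()", "timestamp"))          // now()
}
```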
// SanitizeStructTagValue sanitizes a value to be safely used inside Go struct tags.
// Go struct tags are delimited by backticks, so any backtick in the value would break the syntax.
// This function:

@@ -56,7 +56,7 @@ Table admin.audit_logs {
}
// Relationships
Ref: public.posts.user_id > public.users.id [ondelete: CASCADE, onupdate: CASCADE]
Ref: public.comments.post_id > public.posts.id [ondelete: CASCADE]
Ref: public.comments.user_id > public.users.id [ondelete: SET NULL]
Ref: admin.audit_logs.user_id > public.users.id [ondelete: SET NULL]
Ref: public.posts.user_id > public.users.id [delete: CASCADE, update: CASCADE]
Ref: public.comments.post_id > public.posts.id [delete: CASCADE]
Ref: public.comments.user_id > public.users.id [delete: SET NULL]
Ref: admin.audit_logs.user_id > public.users.id [delete: SET NULL]
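After the `parseRef` change above, both the short form (`delete:`/`update:`) used in these Refs and the older long form (`ondelete:`/`onupdate:`) are accepted. A standalone sketch of that prefix handling (`parseActions` is a hypothetical helper extracted for illustration):

```go
package main

import (
	"fmt"
	"strings"
)

// parseActions mirrors parseRef's action handling: each comma-separated
// action is matched by prefix, short form first, then long form.
func parseActions(actions string) (onDelete, onUpdate string) {
	for _, action := range strings.Split(actions, ",") {
		action = strings.TrimSpace(action)
		switch {
		case strings.HasPrefix(action, "delete:"):
			onDelete = strings.TrimSpace(strings.TrimPrefix(action, "delete:"))
		case strings.HasPrefix(action, "update:"):
			onUpdate = strings.TrimSpace(strings.TrimPrefix(action, "update:"))
		case strings.HasPrefix(action, "ondelete:"):
			onDelete = strings.TrimSpace(strings.TrimPrefix(action, "ondelete:"))
		case strings.HasPrefix(action, "onupdate:"):
			onUpdate = strings.TrimSpace(strings.TrimPrefix(action, "onupdate:"))
		}
	}
	return
}

func main() {
	d, u := parseActions("delete: CASCADE, update: CASCADE")
	fmt.Println(d, u) // CASCADE CASCADE
	d, _ = parseActions("ondelete: SET NULL")
	fmt.Println(d) // SET NULL
}
```

Note that `ondelete:` does not share the `delete:` prefix (it starts with `on`), so the two forms never shadow each other.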