Added more examples and pgsql reader

2025-12-17 10:08:50 +02:00
parent 5d60bc3b2c
commit db6cd21511
22 changed files with 6741 additions and 1 deletion

tests/postgres/README.md Normal file

@@ -0,0 +1,131 @@
# PostgreSQL Integration Tests
This directory contains Docker-based integration tests for the PostgreSQL reader.
## Quick Start
Run integration tests with Docker:
```bash
# From project root
make docker-test
```
Or use the script directly:
```bash
./tests/postgres/run_tests.sh
```
## Manual Testing
Start the PostgreSQL test database:
```bash
make docker-up
```
Run tests with the running database:
```bash
export RELSPEC_TEST_PG_CONN="postgres://relspec:relspec_test_password@localhost:5433/relspec_test"
go test -v ./pkg/readers/pgsql/
```
Stop the database:
```bash
make docker-down
```
## Test Database Contents
The test database (`relspec_test`) contains:
### Schemas (3)
- **public**: Main schema with user and blog data
- **analytics**: Event tracking and analytics data
- **inventory**: Product inventory management
### Tables (11 total)
- **public.users**: User accounts with various data types
- **public.posts**: Blog posts with foreign keys
- **public.comments**: Nested comments (self-referencing FK)
- **public.categories**: Post categories
- **public.post_categories**: Many-to-many junction table
- **analytics.events**: Event tracking
- **analytics.page_views**: Page view analytics
- **analytics.conversions**: Conversion tracking
- **inventory.products**: Product catalog
- **inventory.warehouses**: Warehouse locations
- **inventory.stock**: Product stock by warehouse
### Views (7 total)
- **public.active_users**: Active users only
- **public.published_posts**: Published posts with author info
- **public.post_stats**: Post statistics
- **analytics.daily_stats**: Daily aggregated statistics
- **analytics.top_pages**: Most viewed pages
- **inventory.available_products**: Available products with stock
- **inventory.low_stock_products**: Products with low inventory
### Sequences (9 total)
- **public.users_id_seq**: Users ID sequence
- **public.posts_id_seq**: Posts ID sequence (with cycle)
- **analytics.events_id_seq**: Events ID sequence
- Plus six auto-generated sequences backing the `serial`/`bigserial` columns
### Constraint Types
- **Primary Keys**: All tables have PKs
- **Foreign Keys**: Including cross-schema references
- **Unique Constraints**: Username, email, SKU, etc.
- **Check Constraints**: Age limits, status validation, balance checks
### Index Types
- **btree**: Standard B-tree indexes
- **gin**: GIN indexes for JSONB and array columns
- **hash**: Hash indexes for session lookups
- **Unique**: Unique indexes
- **Partial**: Conditional indexes (WHERE clauses)
- **Composite**: Multi-column indexes
## Connection Details
- **Host**: localhost
- **Port**: 5433 (to avoid conflicts with local PostgreSQL)
- **Database**: relspec_test
- **User**: relspec
- **Password**: relspec_test_password
Full connection string:
```
postgres://relspec:relspec_test_password@localhost:5433/relspec_test
```
## Requirements
- Docker with the Compose plugin (or standalone `docker-compose`)
- Go 1.23+
## Makefile Targets
- `make docker-up` - Start PostgreSQL container
- `make docker-down` - Stop PostgreSQL container
- `make docker-test` - Run full test suite with automatic cleanup
- `make docker-test-integration` - Run integration tests only
## Test Coverage
The tests verify:
- Schema introspection (excluding system schemas)
- Table metadata extraction
- View definitions and columns
- Sequence properties and ownership
- Column data types and constraints
- Primary key detection
- Foreign key relationships with referential actions
- Unique and check constraints
- Index types and definitions
- Cross-schema relationships
- Data type mapping
- Error handling

tests/postgres/init.sql Normal file

@@ -0,0 +1,510 @@
-- RelSpec Test Database Initialization
-- This script creates a comprehensive test database with multiple schemas,
-- tables, views, sequences, constraints, indexes, and relationships
-- Enable extensions
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
-- ============================================================================
-- SCHEMA: public (default schema)
-- ============================================================================
-- Sequences
CREATE SEQUENCE public.users_id_seq
START WITH 1
INCREMENT BY 1
NO MINVALUE
NO MAXVALUE
CACHE 1;
CREATE SEQUENCE public.posts_id_seq
START WITH 100
INCREMENT BY 1
MINVALUE 100
MAXVALUE 9999999
CACHE 10
CYCLE;
-- Tables
CREATE TABLE public.users (
id bigint NOT NULL DEFAULT nextval('users_id_seq'::regclass),
username character varying(50) NOT NULL,
email character varying(100) NOT NULL,
first_name character varying(50),
last_name character varying(50),
age integer,
is_active boolean DEFAULT true,
created_at timestamp without time zone DEFAULT CURRENT_TIMESTAMP,
updated_at timestamp without time zone,
metadata jsonb,
profile_data json,
birth_date date,
balance numeric(10,2) DEFAULT 0.00,
rating real,
score double precision,
user_uuid uuid DEFAULT uuid_generate_v4(),
bio text,
avatar_url text
);
COMMENT ON TABLE public.users IS 'User accounts table';
COMMENT ON COLUMN public.users.id IS 'Primary key';
COMMENT ON COLUMN public.users.username IS 'Unique username';
COMMENT ON COLUMN public.users.email IS 'User email address';
CREATE TABLE public.posts (
id bigint NOT NULL DEFAULT nextval('posts_id_seq'::regclass),
user_id bigint NOT NULL,
title character varying(200) NOT NULL,
content text,
status character varying(20) DEFAULT 'draft'::character varying,
published_at timestamp with time zone,
view_count integer DEFAULT 0,
tags text[],
created_at timestamp without time zone DEFAULT CURRENT_TIMESTAMP
);
COMMENT ON TABLE public.posts IS 'Blog posts table';
CREATE TABLE public.comments (
id bigserial NOT NULL,
post_id bigint NOT NULL,
user_id bigint NOT NULL,
parent_comment_id bigint,
content text NOT NULL,
created_at timestamp without time zone DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE public.categories (
id serial NOT NULL,
name character varying(100) NOT NULL,
slug character varying(100) NOT NULL,
description text
);
CREATE TABLE public.post_categories (
post_id bigint NOT NULL,
category_id integer NOT NULL,
assigned_at timestamp without time zone DEFAULT CURRENT_TIMESTAMP
);
-- Primary Keys
ALTER TABLE ONLY public.users
ADD CONSTRAINT users_pkey PRIMARY KEY (id);
ALTER TABLE ONLY public.posts
ADD CONSTRAINT posts_pkey PRIMARY KEY (id);
ALTER TABLE ONLY public.comments
ADD CONSTRAINT comments_pkey PRIMARY KEY (id);
ALTER TABLE ONLY public.categories
ADD CONSTRAINT categories_pkey PRIMARY KEY (id);
ALTER TABLE ONLY public.post_categories
ADD CONSTRAINT post_categories_pkey PRIMARY KEY (post_id, category_id);
-- Unique Constraints
ALTER TABLE ONLY public.users
ADD CONSTRAINT users_username_key UNIQUE (username);
ALTER TABLE ONLY public.users
ADD CONSTRAINT users_email_key UNIQUE (email);
ALTER TABLE ONLY public.categories
ADD CONSTRAINT categories_slug_key UNIQUE (slug);
-- Check Constraints
ALTER TABLE public.users
ADD CONSTRAINT users_age_check CHECK ((age >= 0 AND age <= 150));
ALTER TABLE public.posts
ADD CONSTRAINT posts_status_check CHECK (status IN ('draft', 'published', 'archived'));
ALTER TABLE public.users
ADD CONSTRAINT users_balance_check CHECK (balance >= 0);
-- Foreign Keys
ALTER TABLE ONLY public.posts
ADD CONSTRAINT fk_posts_user_id
FOREIGN KEY (user_id)
REFERENCES public.users(id)
ON DELETE CASCADE
ON UPDATE RESTRICT;
ALTER TABLE ONLY public.comments
ADD CONSTRAINT fk_comments_post_id
FOREIGN KEY (post_id)
REFERENCES public.posts(id)
ON DELETE CASCADE;
ALTER TABLE ONLY public.comments
ADD CONSTRAINT fk_comments_user_id
FOREIGN KEY (user_id)
REFERENCES public.users(id)
ON DELETE CASCADE;
ALTER TABLE ONLY public.comments
ADD CONSTRAINT fk_comments_parent
FOREIGN KEY (parent_comment_id)
REFERENCES public.comments(id)
ON DELETE CASCADE;
ALTER TABLE ONLY public.post_categories
ADD CONSTRAINT fk_post_categories_post_id
FOREIGN KEY (post_id)
REFERENCES public.posts(id)
ON DELETE CASCADE;
ALTER TABLE ONLY public.post_categories
ADD CONSTRAINT fk_post_categories_category_id
FOREIGN KEY (category_id)
REFERENCES public.categories(id)
ON DELETE CASCADE;
-- Indexes
CREATE INDEX idx_users_email ON public.users USING btree (email);
CREATE INDEX idx_users_created_at ON public.users USING btree (created_at);
CREATE INDEX idx_users_last_name_first_name ON public.users USING btree (last_name, first_name);
CREATE INDEX idx_users_active ON public.users USING btree (id) WHERE (is_active = true);
CREATE INDEX idx_posts_user_id ON public.posts USING btree (user_id);
CREATE INDEX idx_posts_status ON public.posts USING btree (status);
CREATE INDEX idx_posts_published_at ON public.posts USING btree (published_at) WHERE (published_at IS NOT NULL);
CREATE INDEX idx_posts_tags ON public.posts USING gin (tags);
CREATE INDEX idx_comments_post_id ON public.comments USING btree (post_id);
CREATE INDEX idx_comments_user_id ON public.comments USING btree (user_id);
CREATE INDEX idx_comments_parent ON public.comments USING btree (parent_comment_id);
CREATE UNIQUE INDEX idx_categories_slug_unique ON public.categories USING btree (slug);
-- Views
CREATE VIEW public.active_users AS
SELECT id, username, email, first_name, last_name, created_at
FROM public.users
WHERE is_active = true;
COMMENT ON VIEW public.active_users IS 'View of active users only';
CREATE VIEW public.published_posts AS
SELECT
p.id,
p.title,
p.content,
p.published_at,
p.view_count,
u.username as author_username,
u.email as author_email
FROM public.posts p
JOIN public.users u ON p.user_id = u.id
WHERE p.status = 'published';
COMMENT ON VIEW public.published_posts IS 'Published posts with author information';
CREATE VIEW public.post_stats AS
SELECT
p.id as post_id,
p.title,
COUNT(DISTINCT c.id) as comment_count,
COUNT(DISTINCT pc.category_id) as category_count
FROM public.posts p
LEFT JOIN public.comments c ON c.post_id = p.id
LEFT JOIN public.post_categories pc ON pc.post_id = p.id
GROUP BY p.id, p.title;
-- Set sequence ownership
ALTER SEQUENCE public.users_id_seq OWNED BY public.users.id;
ALTER SEQUENCE public.posts_id_seq OWNED BY public.posts.id;
-- ============================================================================
-- SCHEMA: analytics (secondary schema for testing multi-schema support)
-- ============================================================================
CREATE SCHEMA analytics;
COMMENT ON SCHEMA analytics IS 'Analytics data schema';
-- Sequences
CREATE SEQUENCE analytics.events_id_seq
START WITH 1000
INCREMENT BY 1;
-- Tables
CREATE TABLE analytics.events (
id bigint NOT NULL DEFAULT nextval('analytics.events_id_seq'::regclass),
event_type character varying(50) NOT NULL,
user_id bigint,
session_id uuid,
event_data jsonb,
created_at timestamp with time zone DEFAULT CURRENT_TIMESTAMP
);
COMMENT ON TABLE analytics.events IS 'User event tracking';
CREATE TABLE analytics.page_views (
id bigserial NOT NULL,
event_id bigint NOT NULL,
page_url text NOT NULL,
referrer text,
user_agent text,
ip_address inet,
duration_seconds integer
);
CREATE TABLE analytics.conversions (
id serial NOT NULL,
user_id bigint NOT NULL,
conversion_type character varying(50) NOT NULL,
revenue numeric(12,2),
converted_at timestamp with time zone DEFAULT CURRENT_TIMESTAMP
);
-- Primary Keys
ALTER TABLE ONLY analytics.events
ADD CONSTRAINT events_pkey PRIMARY KEY (id);
ALTER TABLE ONLY analytics.page_views
ADD CONSTRAINT page_views_pkey PRIMARY KEY (id);
ALTER TABLE ONLY analytics.conversions
ADD CONSTRAINT conversions_pkey PRIMARY KEY (id);
-- Foreign Keys (cross-schema references)
ALTER TABLE ONLY analytics.events
ADD CONSTRAINT fk_events_user_id
FOREIGN KEY (user_id)
REFERENCES public.users(id)
ON DELETE SET NULL;
ALTER TABLE ONLY analytics.page_views
ADD CONSTRAINT fk_page_views_event_id
FOREIGN KEY (event_id)
REFERENCES analytics.events(id)
ON DELETE CASCADE;
ALTER TABLE ONLY analytics.conversions
ADD CONSTRAINT fk_conversions_user_id
FOREIGN KEY (user_id)
REFERENCES public.users(id)
ON DELETE CASCADE;
-- Check Constraints
ALTER TABLE analytics.conversions
ADD CONSTRAINT conversions_revenue_check CHECK (revenue >= 0);
-- Indexes
CREATE INDEX idx_events_user_id ON analytics.events USING btree (user_id);
CREATE INDEX idx_events_created_at ON analytics.events USING btree (created_at);
CREATE INDEX idx_events_event_type ON analytics.events USING btree (event_type);
CREATE INDEX idx_events_data ON analytics.events USING gin (event_data);
CREATE INDEX idx_events_session ON analytics.events USING hash (session_id);
CREATE INDEX idx_page_views_event_id ON analytics.page_views USING btree (event_id);
CREATE INDEX idx_conversions_user_id ON analytics.conversions USING btree (user_id);
-- Views
CREATE VIEW analytics.daily_stats AS
SELECT
DATE(created_at) as date,
COUNT(*) as total_events,
COUNT(DISTINCT user_id) as unique_users,
COUNT(DISTINCT session_id) as unique_sessions
FROM analytics.events
GROUP BY DATE(created_at);
CREATE VIEW analytics.top_pages AS
SELECT
pv.page_url,
COUNT(*) as view_count,
AVG(pv.duration_seconds) as avg_duration
FROM analytics.page_views pv
GROUP BY pv.page_url
ORDER BY view_count DESC
LIMIT 100;
-- Set sequence ownership
ALTER SEQUENCE analytics.events_id_seq OWNED BY analytics.events.id;
-- ============================================================================
-- SCHEMA: inventory (third schema for comprehensive testing)
-- ============================================================================
CREATE SCHEMA inventory;
CREATE TABLE inventory.products (
id serial NOT NULL,
sku character varying(50) NOT NULL,
name character varying(200) NOT NULL,
description text,
price numeric(10,2) NOT NULL,
stock_quantity integer DEFAULT 0,
is_available boolean DEFAULT true
);
CREATE TABLE inventory.warehouses (
id serial NOT NULL,
name character varying(100) NOT NULL,
location character varying(200),
capacity integer
);
CREATE TABLE inventory.stock (
product_id integer NOT NULL,
warehouse_id integer NOT NULL,
quantity integer DEFAULT 0,
last_updated timestamp without time zone DEFAULT CURRENT_TIMESTAMP
);
-- Primary Keys
ALTER TABLE ONLY inventory.products
ADD CONSTRAINT products_pkey PRIMARY KEY (id);
ALTER TABLE ONLY inventory.warehouses
ADD CONSTRAINT warehouses_pkey PRIMARY KEY (id);
ALTER TABLE ONLY inventory.stock
ADD CONSTRAINT stock_pkey PRIMARY KEY (product_id, warehouse_id);
-- Unique Constraints
ALTER TABLE ONLY inventory.products
ADD CONSTRAINT products_sku_key UNIQUE (sku);
-- Check Constraints
ALTER TABLE inventory.products
ADD CONSTRAINT products_price_check CHECK (price >= 0);
ALTER TABLE inventory.products
ADD CONSTRAINT products_stock_check CHECK (stock_quantity >= 0);
ALTER TABLE inventory.stock
ADD CONSTRAINT stock_quantity_check CHECK (quantity >= 0);
-- Foreign Keys
ALTER TABLE ONLY inventory.stock
ADD CONSTRAINT fk_stock_product_id
FOREIGN KEY (product_id)
REFERENCES inventory.products(id)
ON DELETE CASCADE;
ALTER TABLE ONLY inventory.stock
ADD CONSTRAINT fk_stock_warehouse_id
FOREIGN KEY (warehouse_id)
REFERENCES inventory.warehouses(id)
ON DELETE CASCADE;
-- Indexes
CREATE UNIQUE INDEX idx_products_sku ON inventory.products USING btree (sku);
CREATE INDEX idx_products_available ON inventory.products USING btree (id) WHERE (is_available = true);
CREATE INDEX idx_stock_product_id ON inventory.stock USING btree (product_id);
CREATE INDEX idx_stock_warehouse_id ON inventory.stock USING btree (warehouse_id);
-- Views
CREATE VIEW inventory.available_products AS
SELECT id, sku, name, price, stock_quantity
FROM inventory.products
WHERE is_available = true AND stock_quantity > 0;
CREATE VIEW inventory.low_stock_products AS
SELECT p.id, p.sku, p.name, p.stock_quantity
FROM inventory.products p
WHERE p.stock_quantity < 10 AND p.is_available = true;
-- ============================================================================
-- Insert Sample Data
-- ============================================================================
-- Users
INSERT INTO public.users (username, email, first_name, last_name, age, is_active, balance, bio) VALUES
('john_doe', 'john@example.com', 'John', 'Doe', 30, true, 100.50, 'Software developer'),
('jane_smith', 'jane@example.com', 'Jane', 'Smith', 25, true, 250.00, 'Designer'),
('bob_jones', 'bob@example.com', 'Bob', 'Jones', 35, false, 50.00, 'Manager'),
('alice_wilson', 'alice@example.com', 'Alice', 'Wilson', 28, true, 500.75, 'Data scientist');
-- Categories
INSERT INTO public.categories (name, slug, description) VALUES
('Technology', 'technology', 'Tech related posts'),
('Design', 'design', 'Design and UX posts'),
('Business', 'business', 'Business and strategy');
-- Posts (with explicit IDs)
INSERT INTO public.posts (id, user_id, title, content, status, view_count, tags) VALUES
(1, 1, 'Getting Started with PostgreSQL', 'A comprehensive guide...', 'published', 150, ARRAY['database', 'postgresql', 'tutorial']),
(2, 1, 'Advanced SQL Queries', 'Deep dive into complex queries...', 'published', 200, ARRAY['sql', 'advanced']),
(3, 2, 'UI Design Principles', 'Best practices for UI design...', 'published', 300, ARRAY['design', 'ui', 'ux']),
(4, 4, 'Data Analysis with Python', 'Introduction to pandas...', 'draft', 0, ARRAY['python', 'data']);
-- Reset sequence to avoid conflicts
SELECT setval('public.posts_id_seq', 100);
-- Post Categories
INSERT INTO public.post_categories (post_id, category_id) VALUES
(1, 1), (2, 1), (3, 2), (4, 1);
-- Comments
INSERT INTO public.comments (post_id, user_id, content) VALUES
(1, 2, 'Great article!'),
(1, 4, 'Very helpful, thanks!'),
(2, 3, 'Can you elaborate on joins?');
-- Analytics Events
INSERT INTO analytics.events (event_type, user_id, event_data) VALUES
('page_view', 1, '{"page": "/posts/1"}'),
('page_view', 2, '{"page": "/posts/1"}'),
('click', 1, '{"element": "share_button"}'),
('conversion', 4, '{"type": "signup"}');
-- Products
INSERT INTO inventory.products (sku, name, description, price, stock_quantity) VALUES
('PROD-001', 'Laptop', 'High-performance laptop', 999.99, 50),
('PROD-002', 'Mouse', 'Wireless mouse', 29.99, 200),
('PROD-003', 'Keyboard', 'Mechanical keyboard', 79.99, 5);
-- Warehouses
INSERT INTO inventory.warehouses (name, location, capacity) VALUES
('Main Warehouse', 'New York', 10000),
('West Coast Warehouse', 'San Francisco', 5000);
-- Stock
INSERT INTO inventory.stock (product_id, warehouse_id, quantity) VALUES
(1, 1, 30), (1, 2, 20),
(2, 1, 150), (2, 2, 50),
(3, 1, 5);
-- Grant permissions (for testing purposes)
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO relspec;
GRANT ALL PRIVILEGES ON ALL SEQUENCES IN SCHEMA public TO relspec;
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA analytics TO relspec;
GRANT ALL PRIVILEGES ON ALL SEQUENCES IN SCHEMA analytics TO relspec;
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA inventory TO relspec;
GRANT ALL PRIVILEGES ON ALL SEQUENCES IN SCHEMA inventory TO relspec;
-- Analyze tables for query optimization
ANALYZE;
-- Summary
DO $$
DECLARE
schema_count integer;
table_count integer;
view_count integer;
sequence_count integer;
BEGIN
SELECT COUNT(*) INTO schema_count FROM pg_namespace
WHERE nspname NOT IN ('pg_catalog', 'information_schema', 'pg_toast')
AND nspname NOT LIKE 'pg_%';
SELECT COUNT(*) INTO table_count FROM pg_tables
WHERE schemaname NOT IN ('pg_catalog', 'information_schema');
SELECT COUNT(*) INTO view_count FROM pg_views
WHERE schemaname NOT IN ('pg_catalog', 'information_schema');
SELECT COUNT(*) INTO sequence_count FROM pg_sequences
WHERE schemaname NOT IN ('pg_catalog', 'information_schema');
RAISE NOTICE 'Database initialization complete!';
RAISE NOTICE 'Schemas: %', schema_count;
RAISE NOTICE 'Tables: %', table_count;
RAISE NOTICE 'Views: %', view_count;
RAISE NOTICE 'Sequences: %', sequence_count;
END $$;

tests/postgres/run_tests.sh Executable file

@@ -0,0 +1,95 @@
#!/bin/bash
set -e
# RelSpec PostgreSQL Integration Tests Runner
# This script starts a PostgreSQL Docker container and runs integration tests
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
echo "=== RelSpec PostgreSQL Integration Tests ==="
echo ""
# Colors for output
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
RED='\033[0;31m'
NC='\033[0m' # No Color
# Check if docker-compose is available
if ! command -v docker-compose &> /dev/null && ! command -v docker &> /dev/null; then
echo -e "${RED}Error: docker-compose or docker is not installed${NC}"
exit 1
fi
# Determine docker compose command
if command -v docker-compose &> /dev/null; then
DOCKER_COMPOSE="docker-compose"
else
DOCKER_COMPOSE="docker compose"
fi
# Change to project root
cd "$PROJECT_ROOT"
# Function to cleanup
cleanup() {
echo -e "\n${YELLOW}Cleaning up...${NC}"
$DOCKER_COMPOSE down
}
# Trap exit to cleanup
trap cleanup EXIT
# Start PostgreSQL container
echo -e "${YELLOW}Starting PostgreSQL container...${NC}"
$DOCKER_COMPOSE up -d postgres
# Wait for PostgreSQL to be ready
echo -e "${YELLOW}Waiting for PostgreSQL to be ready...${NC}"
max_attempts=30
attempt=0
while [ $attempt -lt $max_attempts ]; do
if $DOCKER_COMPOSE exec -T postgres pg_isready -U relspec -d relspec_test &> /dev/null; then
echo -e "${GREEN}PostgreSQL is ready!${NC}"
break
fi
attempt=$((attempt + 1))
echo -n "."
sleep 1
done
if [ $attempt -eq $max_attempts ]; then
echo -e "\n${RED}Error: PostgreSQL failed to start${NC}"
exit 1
fi
# Give it one more second to fully initialize
sleep 2
# Show database stats
echo -e "\n${YELLOW}Database Information:${NC}"
$DOCKER_COMPOSE exec -T postgres psql -U relspec -d relspec_test -c "
SELECT
(SELECT COUNT(*) FROM pg_namespace WHERE nspname NOT IN ('pg_catalog', 'information_schema', 'pg_toast') AND nspname NOT LIKE 'pg_%') as schemas,
(SELECT COUNT(*) FROM pg_tables WHERE schemaname NOT IN ('pg_catalog', 'information_schema')) as tables,
(SELECT COUNT(*) FROM pg_views WHERE schemaname NOT IN ('pg_catalog', 'information_schema')) as views,
(SELECT COUNT(*) FROM pg_sequences WHERE schemaname NOT IN ('pg_catalog', 'information_schema')) as sequences;
"
# Set environment variable for tests
export RELSPEC_TEST_PG_CONN="postgres://relspec:relspec_test_password@localhost:5433/relspec_test"
echo -e "\n${YELLOW}Running PostgreSQL reader tests...${NC}"
echo "Connection string: $RELSPEC_TEST_PG_CONN"
echo ""
# Run the tests
if go test -v ./pkg/readers/pgsql/ -count=1; then
echo -e "\n${GREEN}✓ All tests passed!${NC}"
exit 0
else
echo -e "\n${RED}✗ Tests failed${NC}"
exit 1
fi


@@ -0,0 +1,110 @@
#!/bin/bash
set -e
# RelSpec PostgreSQL Integration Tests Runner (Podman version)
# This script starts a PostgreSQL Podman container and runs integration tests
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
echo "=== RelSpec PostgreSQL Integration Tests (Podman) ==="
echo ""
# Colors for output
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
RED='\033[0;31m'
NC='\033[0m' # No Color
# Container configuration
CONTAINER_NAME="relspec-test-postgres"
POSTGRES_USER="relspec"
POSTGRES_PASSWORD="relspec_test_password"
POSTGRES_DB="relspec_test"
POSTGRES_PORT="5433"
# Check if podman is available
if ! command -v podman &> /dev/null; then
echo -e "${RED}Error: podman is not installed${NC}"
exit 1
fi
# Change to project root
cd "$PROJECT_ROOT"
# Function to cleanup
cleanup() {
echo -e "\n${YELLOW}Cleaning up...${NC}"
podman stop "$CONTAINER_NAME" 2>/dev/null || true
podman rm "$CONTAINER_NAME" 2>/dev/null || true
}
# Trap exit to cleanup
trap cleanup EXIT
# Stop and remove existing container if it exists
echo -e "${YELLOW}Cleaning up any existing containers...${NC}"
podman stop "$CONTAINER_NAME" 2>/dev/null || true
podman rm "$CONTAINER_NAME" 2>/dev/null || true
# Start PostgreSQL container
echo -e "${YELLOW}Starting PostgreSQL container...${NC}"
podman run -d \
--name "$CONTAINER_NAME" \
-e POSTGRES_USER="$POSTGRES_USER" \
-e POSTGRES_PASSWORD="$POSTGRES_PASSWORD" \
-e POSTGRES_DB="$POSTGRES_DB" \
-p "$POSTGRES_PORT:5432" \
-v "$SCRIPT_DIR/init.sql:/docker-entrypoint-initdb.d/init.sql:Z" \
docker.io/library/postgres:16-alpine
# Wait for PostgreSQL to be ready
echo -e "${YELLOW}Waiting for PostgreSQL to be ready...${NC}"
max_attempts=30
attempt=0
while [ $attempt -lt $max_attempts ]; do
if podman exec "$CONTAINER_NAME" pg_isready -U "$POSTGRES_USER" -d "$POSTGRES_DB" &> /dev/null; then
echo -e "${GREEN}PostgreSQL is ready!${NC}"
break
fi
attempt=$((attempt + 1))
echo -n "."
sleep 1
done
if [ $attempt -eq $max_attempts ]; then
echo -e "\n${RED}Error: PostgreSQL failed to start${NC}"
podman logs "$CONTAINER_NAME"
exit 1
fi
# Give it one more second to fully initialize
sleep 2
# Show database stats
echo -e "\n${YELLOW}Database Information:${NC}"
podman exec "$CONTAINER_NAME" psql -U "$POSTGRES_USER" -d "$POSTGRES_DB" -c "
SELECT
(SELECT COUNT(*) FROM pg_namespace WHERE nspname NOT IN ('pg_catalog', 'information_schema', 'pg_toast') AND nspname NOT LIKE 'pg_%') as schemas,
(SELECT COUNT(*) FROM pg_tables WHERE schemaname NOT IN ('pg_catalog', 'information_schema')) as tables,
(SELECT COUNT(*) FROM pg_views WHERE schemaname NOT IN ('pg_catalog', 'information_schema')) as views,
(SELECT COUNT(*) FROM pg_sequences WHERE schemaname NOT IN ('pg_catalog', 'information_schema')) as sequences;
"
# Set environment variable for tests
export RELSPEC_TEST_PG_CONN="postgres://$POSTGRES_USER:$POSTGRES_PASSWORD@localhost:$POSTGRES_PORT/$POSTGRES_DB"
echo -e "\n${YELLOW}Running PostgreSQL reader tests...${NC}"
echo "Connection string: $RELSPEC_TEST_PG_CONN"
echo ""
# Run the tests
cd "$PROJECT_ROOT"
if go test -v ./pkg/readers/pgsql/ -count=1; then
echo -e "\n${GREEN}✓ All tests passed!${NC}"
exit 0
else
echo -e "\n${RED}✗ Tests failed${NC}"
exit 1
fi