Building Real-Time Dashboards with Grafana and PostgreSQL

Create stunning real-time dashboards with Grafana and PostgreSQL. Step-by-step tutorial covering data modeling, queries, panels, alerts, and optimization.

Yash Pritwani
12 min read

Why Grafana + PostgreSQL?

You do not need a specialized time-series database for most dashboard use cases. PostgreSQL handles time-series queries remarkably well, and you probably already have it running. Grafana connects to PostgreSQL natively with a powerful query editor.


Real-time monitoring dashboard showing CPU, memory, request rate, and response time trends.

At TechSaaS, our monitoring stack runs Grafana backed by PostgreSQL and Loki. It tracks everything from Docker container health to business metrics — all on a single self-hosted server.

Setting Up

If you are using Docker Compose:

services:
  grafana:
    image: grafana/grafana:11.0.0
    container_name: grafana
    restart: unless-stopped
    ports:
      - "3000:3000"
    environment:
      GF_SECURITY_ADMIN_USER: admin
      GF_SECURITY_ADMIN_PASSWORD: ${GRAFANA_PASSWORD}
      GF_INSTALL_PLUGINS: grafana-clock-panel,grafana-piechart-panel
    volumes:
      - grafana_data:/var/lib/grafana
    networks:
      - padc-net

volumes:
  grafana_data:

networks:
  padc-net:

Data Modeling for Dashboards

Structure your PostgreSQL tables with dashboards in mind:

-- Time-series events table
CREATE TABLE app_events (
    id BIGSERIAL PRIMARY KEY,
    event_type VARCHAR(50) NOT NULL,
    user_id UUID,
    metadata JSONB DEFAULT '{}',
    created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

-- Create a time-based index for fast range queries
CREATE INDEX idx_events_created_at ON app_events
    USING BRIN (created_at);

-- Partitioned table for high-volume data
CREATE TABLE metrics (
    time TIMESTAMPTZ NOT NULL,
    name VARCHAR(100) NOT NULL,
    value DOUBLE PRECISION NOT NULL,
    labels JSONB DEFAULT '{}'
) PARTITION BY RANGE (time);

-- Create monthly partitions
CREATE TABLE metrics_2025_11 PARTITION OF metrics
    FOR VALUES FROM ('2025-11-01') TO ('2025-12-01');
CREATE TABLE metrics_2025_12 PARTITION OF metrics
    FOR VALUES FROM ('2025-12-01') TO ('2026-01-01');
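A partition must exist before rows arrive for its range, or inserts fail. One way to stay ahead of month boundaries is a small helper run from a scheduled job; a sketch (the function name is illustrative):

```sql
-- Illustrative helper: create next month's partition of "metrics" if missing
CREATE OR REPLACE FUNCTION ensure_next_metrics_partition() RETURNS void AS $$
DECLARE
    start_date date := date_trunc('month', now() + interval '1 month')::date;
    end_date   date := date_trunc('month', now() + interval '2 month')::date;
    part_name  text := 'metrics_' || to_char(start_date, 'YYYY_MM');
BEGIN
    EXECUTE format(
        'CREATE TABLE IF NOT EXISTS %I PARTITION OF metrics
         FOR VALUES FROM (%L) TO (%L)',
        part_name, start_date, end_date
    );
END;
$$ LANGUAGE plpgsql;
```

Calling this once a day from cron (or pg_cron) keeps the table writable across month boundaries without manual DDL.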

Connecting Grafana to PostgreSQL

  1. Navigate to Connections > Data Sources > Add data source
  2. Select PostgreSQL
  3. Enter connection details:
    • Host: postgres:5432 (or your service name)
    • Database: analytics
    • User: grafana_reader (always use a read-only user)
    • TLS/SSL Mode: disable (internal Docker network)


  4. Create a read-only database user (Grafana needs it to exist before the connection test succeeds):

    CREATE USER grafana_reader WITH PASSWORD 'secure_password';
    GRANT CONNECT ON DATABASE analytics TO grafana_reader;
    GRANT USAGE ON SCHEMA public TO grafana_reader;
    GRANT SELECT ON ALL TABLES IN SCHEMA public TO grafana_reader;
    ALTER DEFAULT PRIVILEGES IN SCHEMA public
        GRANT SELECT ON TABLES TO grafana_reader;

  5. Click Save & Test
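Before wiring Grafana up, you can sanity-check the role from psql: has_table_privilege should report that the reader role can SELECT but not write.

```sql
-- Expected: true — the reader role may SELECT
SELECT has_table_privilege('grafana_reader', 'app_events', 'SELECT');

-- Expected: false — the reader role may not write
SELECT has_table_privilege('grafana_reader', 'app_events', 'INSERT');
```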
    

Building Dashboard Panels

Panel 1: Requests Per Minute (Time Series)

SELECT
  date_trunc('minute', created_at) AS time,
  event_type,
  COUNT(*) AS requests
FROM app_events
WHERE
  created_at >= $__timeFrom()
  AND created_at <= $__timeTo()
GROUP BY 1, 2
ORDER BY 1

Grafana's $__timeFrom() and $__timeTo() macros automatically map to the dashboard time picker.
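The same range condition can be written more compactly with Grafana's $__timeFilter() macro, which expands to a BETWEEN clause on the given column:

```sql
SELECT
  date_trunc('minute', created_at) AS time,
  COUNT(*) AS requests
FROM app_events
WHERE $__timeFilter(created_at)
GROUP BY 1
ORDER BY 1
```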

Panel 2: Active Users (Stat Panel)

SELECT COUNT(DISTINCT user_id) AS "Active Users"
FROM app_events
WHERE created_at >= NOW() - INTERVAL '24 hours'

Panel 3: Revenue by Day (Bar Chart)

SELECT
  date_trunc('day', created_at) AS time,
  SUM((metadata->>'amount')::numeric) AS revenue
FROM app_events
WHERE
  event_type = 'purchase'
  AND created_at >= $__timeFrom()
  AND created_at <= $__timeTo()
GROUP BY 1
ORDER BY 1
Database replication: the primary handles writes while replicas serve read queries via WAL streaming.

Panel 4: Top Endpoints (Table)

SELECT
  metadata->>'endpoint' AS endpoint,
  COUNT(*) AS hits,
  ROUND(AVG((metadata->>'latency_ms')::numeric), 1) AS avg_latency_ms,
  PERCENTILE_CONT(0.95) WITHIN GROUP (
    ORDER BY (metadata->>'latency_ms')::numeric
  ) AS p95_latency_ms
FROM app_events
WHERE
  event_type = 'api_request'
  AND created_at >= $__timeFrom()
  AND created_at <= $__timeTo()
GROUP BY 1
ORDER BY hits DESC
LIMIT 20

Panel 5: Error Rate (Gauge)

SELECT
  ROUND(
    100.0 * COUNT(*) FILTER (WHERE (metadata->>'status_code')::int >= 500)
    / NULLIF(COUNT(*), 0),
    2
  ) AS "Error Rate %"
FROM app_events
WHERE
  event_type = 'api_request'
  AND created_at >= NOW() - INTERVAL '1 hour'
    

Setting Up Alerts

Grafana can alert you when metrics cross thresholds:

1. Edit a panel and go to the Alert tab
2. Set conditions:

WHEN avg() OF query(A) IS ABOVE 5.0
FOR 5 minutes

Configure notification channels in Alerting > Contact points:

• Slack/Discord: Webhook URL
• Email: SMTP settings in grafana.ini
• Ntfy: Simple HTTP webhook (we use this at TechSaaS)
• PagerDuty: For critical production alerts
Performance Optimization


Materialized Views for Heavy Queries

Pre-compute expensive aggregations:

CREATE MATERIALIZED VIEW hourly_stats AS
SELECT
  date_trunc('hour', created_at) AS hour,
  event_type,
  COUNT(*) AS event_count,
  COUNT(DISTINCT user_id) AS unique_users
FROM app_events
GROUP BY 1, 2;

CREATE UNIQUE INDEX idx_hourly_stats ON hourly_stats (hour, event_type);

-- Refresh every hour via cron or pg_cron
REFRESH MATERIALIZED VIEW CONCURRENTLY hourly_stats;
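If the pg_cron extension is available (it must be preloaded via shared_preload_libraries), the refresh can be scheduled inside the database itself; the job name and schedule below are illustrative:

```sql
-- Requires pg_cron to be installed and preloaded on the server
CREATE EXTENSION IF NOT EXISTS pg_cron;

-- Refresh at five minutes past every hour
SELECT cron.schedule(
  'refresh-hourly-stats',
  '5 * * * *',
  'REFRESH MATERIALIZED VIEW CONCURRENTLY hourly_stats'
);
```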
    

Query Caching

In Grafana panel settings, set the Min interval to reduce query frequency:

• Real-time panels: 10s-30s
• Historical panels: 1m-5m
• Daily/weekly reports: 1h

Connection Pooling

Use PgBouncer between Grafana and PostgreSQL to prevent connection exhaustion:

# pgbouncer.ini
[databases]
analytics = host=postgres port=5432 dbname=analytics

[pgbouncer]
pool_mode = transaction
max_client_conn = 200
default_pool_size = 20
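To confirm pooling is actually capping backend usage, you can watch how many PostgreSQL connections the Grafana role holds (this assumes the grafana_reader role created earlier):

```sql
-- Count PostgreSQL backends held by the Grafana role, grouped by state
SELECT state, COUNT(*) AS connections
FROM pg_stat_activity
WHERE usename = 'grafana_reader'
GROUP BY state;
```

With transaction pooling, this number should stay at or below default_pool_size even when many dashboard panels refresh at once.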
    

Dashboard Design Tips

1. Top row: KPI stat panels (big numbers) — active users, revenue, error rate
2. Second row: Time-series graphs — requests/minute, latency, throughput
3. Third row: Tables and heatmaps — top endpoints, error breakdown
4. Use variables: Template variables for filtering by environment, service, region
5. Color thresholds: Green < 1% error rate, yellow < 5%, red > 5%
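Template variables (tip 4) can be backed by a PostgreSQL query. A sketch for a "service" dropdown, assuming your event metadata carries a service label:

```sql
-- Grafana variable query: distinct service names seen in events
SELECT DISTINCT metadata->>'service'
FROM app_events
WHERE metadata ? 'service'
ORDER BY 1;
```

Panel queries can then filter with a clause like metadata->>'service' = '$service', and the dashboard re-queries when the dropdown changes.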

Grafana with PostgreSQL is a powerful combination that covers 90% of dashboard needs without additional infrastructure. At TechSaaS, it is the backbone of our monitoring for every client deployment.

#grafana #postgresql #dashboards #monitoring #analytics #tutorial
