TGM Expert uses InfluxDB as a time-series database for high-performance storage and querying of sensor data, audit logs, and metrics. InfluxDB is optimized for time-stamped data and provides efficient compression, retention policies, and fast aggregation queries.

Overview

Key Features

  • Time-Series Optimized - Purpose-built for time-stamped data
  • High Write Throughput - Handle thousands of sensor readings per second
  • Efficient Compression - Automatic data compression for storage efficiency
  • Retention Policies - Automatic data expiration based on age
  • Multi-Tenant Isolation - Data tagged with client_id for tenant separation
  • Fast Aggregations - Sub-second queries for dashboards and analytics

Use Cases in TGM Expert

Use Case          Bucket        Measurement
Sensor readings   tgm-metrics   sensor_readings
Audit logs        logs-bucket   audit_logs
Security events   logs-bucket   security_events
Error logs        logs-bucket   error_logs

Architecture

┌─────────────────────────────────────────────────────────────────┐
│                    Data Flow Architecture                        │
├─────────────────────────────────────────────────────────────────┤
│                                                                  │
│   Sensors/IoT          API Requests           User Actions       │
│        │                    │                      │             │
│        ▼                    ▼                      ▼             │
│   ┌─────────────────────────────────────────────────────┐       │
│   │              Spring Boot Application                 │       │
│   │  ┌─────────────┐  ┌────────────┐  ┌──────────────┐  │       │
│   │  │IoTService   │  │AuditService│  │SensorAlert   │  │       │
│   │  │             │  │            │  │Service       │  │       │
│   │  └──────┬──────┘  └─────┬──────┘  └──────┬───────┘  │       │
│   └─────────┼───────────────┼────────────────┼──────────┘       │
│             │               │                │                   │
│             ▼               ▼                ▼                   │
│   ┌─────────────────────────────────────────────────────┐       │
│   │                   InfluxDB Client                    │       │
│   │        (Batched writes, async processing)            │       │
│   └─────────────────────────┬───────────────────────────┘       │
│                             │                                    │
│                             ▼                                    │
│   ┌─────────────────────────────────────────────────────┐       │
│   │                      InfluxDB                        │       │
│   │  ┌─────────────┐  ┌─────────────┐                   │       │
│   │  │ tgm-metrics │  │ logs-bucket │                   │       │
│   │  │  (sensors)  │  │   (audit)   │                   │       │
│   │  └─────────────┘  └─────────────┘                   │       │
│   └─────────────────────────────────────────────────────┘       │
└─────────────────────────────────────────────────────────────────┘

Configuration

Environment Variables

# Enable InfluxDB integration
ENABLE_INFLUXDB=true

# InfluxDB connection
INFLUXDB_FULL_HOST=http://localhost:8086
INFLUXDB_TOKEN=your-influxdb-token
INFLUXDB_ORG=ensolutions
INFLUXDB_BUCKET=tgm-metrics
INFLUXDB_LOGS_BUCKET=logs-bucket

application.yml

app:
  influxdb:
    enabled: ${ENABLE_INFLUXDB:false}
    url: ${INFLUXDB_FULL_HOST:http://localhost:8086}
    token: ${INFLUXDB_TOKEN:}
    org: ${INFLUXDB_ORG:ensolutions}
    bucket: ${INFLUXDB_BUCKET:tgm-metrics}
    logs-bucket: ${INFLUXDB_LOGS_BUCKET:logs-bucket}

Conditional Beans

InfluxDB services are only loaded when enabled:

@Service
@ConditionalOnProperty(name = "app.influxdb.enabled", havingValue = "true")
public class InfluxDBSensorService {
    // Only instantiated when ENABLE_INFLUXDB=true
}

Data Storage

Sensor Data Schema

Measurement: sensor_readings
Tags (indexed):
  - client_id: String     # Multi-tenant isolation
  - sandbox: String       # Sandbox environment
  - sensor_id: String     # Sensor identifier
  - unit_id: String       # Equipment unit
  - component_id: String  # Component
  - sensor_type: String   # temperature, vibration, etc.
  - unit_of_measure: String

Fields (data):
  - value: Float          # Sensor reading value
  - quality: Integer      # Data quality score (0-100)

Timestamp: nanosecond precision supported (the write examples in this document use millisecond precision, WritePrecision.MS)
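
For illustration, a single point under this schema serializes to the following InfluxDB line protocol (all tag and field values here are invented):

```
sensor_readings,client_id=client123,sandbox=public,sensor_id=42,unit_id=7,component_id=3,sensor_type=temperature,unit_of_measure=C value=75.5,quality=100i 1738751400000000000
```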

Audit Log Schema

Measurement: audit_logs
Tags (indexed):
  - client_id: String     # Multi-tenant isolation
  - action: String        # CREATE, UPDATE, DELETE, etc.
  - entity_type: String   # User, Unit, Inspection, etc.
  - level: String         # INFO, WARNING, ERROR
  - sandbox: String       # Sandbox environment

Fields (data):
  - entity_id: Long
  - user_id: Long
  - username: String
  - ip_address: String
  - message: String
  - meta_*: String        # Dynamic metadata fields

Security Events Schema

Measurement: security_events
Tags (indexed):
  - client_id: String
  - action: String        # LOGIN, LOGOUT, FAILED_LOGIN, etc.
  - username: String
  - ip_address: String

Fields (data):
  - user_id: Long
  - message: String
  - user_agent: String
  - meta_*: String

Multi-Tenancy

Data Isolation

All data written to InfluxDB includes the client_id tag for multi-tenant isolation:

// Writing data with tenant context
String clientId = ClientContext.getClient();
String sandbox = TenantContext.getTenant();

Point point = Point.measurement("sensor_readings")
    .time(Instant.now(), WritePrecision.MS)
    .addTag("client_id", clientId != null ? clientId : "master")
    .addTag("sandbox", sandbox != null ? sandbox : "public")
    .addTag("sensor_id", sensorId)
    .addField("value", value);

writeApi.writePoint(bucket, org, point);
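
The "batched writes" noted in the architecture diagram can be sketched as a plain buffer that flushes once a threshold is reached. This is illustrative only: `BatchBuffer` is a hypothetical class, not part of the TGM codebase, and in practice the influxdb-client-java `WriteApi` already batches internally.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Illustrative write buffer: collects points and hands them to a sink
// (e.g. a bulk write call) once flushSize is reached.
public class BatchBuffer<T> {
    private final int flushSize;
    private final Consumer<List<T>> sink;
    private final List<T> buffer = new ArrayList<>();

    public BatchBuffer(int flushSize, Consumer<List<T>> sink) {
        this.flushSize = flushSize;
        this.sink = sink;
    }

    public void add(T point) {
        buffer.add(point);
        if (buffer.size() >= flushSize) {
            flush();
        }
    }

    // Send whatever is buffered, then clear; also called on shutdown
    // so trailing points are not lost.
    public void flush() {
        if (buffer.isEmpty()) {
            return;
        }
        sink.accept(new ArrayList<>(buffer));
        buffer.clear();
    }
}
```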

Query Filtering

All queries automatically filter by the current tenant:

public List<Map<String, Object>> querySensorData(Long sensorId,
        LocalDateTime start, LocalDateTime end) {
    String clientId = ClientContext.getClient();
    String sandbox = TenantContext.getTenant();

    String flux = String.format("""
        from(bucket: "%s")
          |> range(start: %s, stop: %s)
          |> filter(fn: (r) => r._measurement == "sensor_readings")
          |> filter(fn: (r) => r.client_id == "%s")
          |> filter(fn: (r) => r.sandbox == "%s")
          |> filter(fn: (r) => r.sensor_id == "%d")
        """,
        bucket,
        start.toInstant(ZoneOffset.UTC),
        end.toInstant(ZoneOffset.UTC),
        clientId != null ? clientId : "master",
        sandbox != null ? sandbox : "public",
        sensorId);

    return executeQuery(flux);
}
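
One caveat with building Flux via `String.format`: if a tag value such as a client ID could ever contain a double quote or backslash, it must be escaped before interpolation or the filter expression breaks. A minimal escaper might look like the following (`FluxEscaper` is a hypothetical helper, not an existing class):

```java
// Escape a value for safe interpolation inside a double-quoted Flux
// string literal: backslashes first, then double quotes.
public class FluxEscaper {
    public static String escape(String value) {
        return value.replace("\\", "\\\\").replace("\"", "\\\"");
    }
}
```

Applied as `FluxEscaper.escape(clientId)` before the value reaches `String.format`.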

API Endpoints

Sensor Data

# Write sensor reading
POST /iot/sensors/{sensorId}/readings
Content-Type: application/json
Authorization: Bearer $TOKEN

{
  "value": 75.5,
  "timestamp": "2026-02-05T10:30:00Z",
  "quality": 100
}

# Query sensor readings
GET /iot/sensors/{sensorId}/readings?start=2026-02-01&end=2026-02-05
Authorization: Bearer $TOKEN

# Response
{
  "success": true,
  "data": [
    {
      "_time": "2026-02-05T10:30:00Z",
      "_value": 75.5,
      "sensor_id": "123",
      "sensor_type": "temperature"
    }
  ]
}

# Get sensor statistics
GET /iot/sensors/{sensorId}/statistics?start=2026-02-01&end=2026-02-05
Authorization: Bearer $TOKEN

# Response
{
  "success": true,
  "data": {
    "min": 65.0,
    "max": 85.5,
    "mean": 74.2,
    "count": 1440
  }
}

Audit Logs

# Query audit logs
GET /admin/audit-logs?entityType=User&startTime=2026-02-01&endTime=2026-02-05&limit=100
Authorization: Bearer $TOKEN

# Response
{
  "success": true,
  "data": [
    {
      "_time": "2026-02-05T10:30:00Z",
      "action": "UPDATE",
      "entity_type": "User",
      "entity_id": 123,
      "username": "admin",
      "message": "Updated user profile"
    }
  ]
}

# Get action statistics
GET /admin/audit-logs/statistics/actions?startTime=2026-02-01&endTime=2026-02-05
Authorization: Bearer $TOKEN

Querying Data

Flux Query Language

InfluxDB uses Flux for querying. Common patterns:

// Basic query with time range
from(bucket: "tgm-metrics")
  |> range(start: -24h)
  |> filter(fn: (r) => r._measurement == "sensor_readings")
  |> filter(fn: (r) => r.client_id == "client123")

// Aggregation (hourly average)
from(bucket: "tgm-metrics")
  |> range(start: -7d)
  |> filter(fn: (r) => r._measurement == "sensor_readings")
  |> filter(fn: (r) => r.sensor_id == "sensor456")
  |> aggregateWindow(every: 1h, fn: mean)

// Multiple sensors comparison
from(bucket: "tgm-metrics")
  |> range(start: -24h)
  |> filter(fn: (r) => r._measurement == "sensor_readings")
  |> filter(fn: (r) => r.sensor_id =~ /temp_.*/)
  |> pivot(rowKey: ["_time"], columnKey: ["sensor_id"], valueColumn: "_value")

// Downsampling for charts
from(bucket: "tgm-metrics")
  |> range(start: -30d)
  |> filter(fn: (r) => r._measurement == "sensor_readings")
  |> aggregateWindow(every: 1d, fn: mean)
  |> yield(name: "daily_average")
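
Another common pattern, not shown above, is fetching the latest reading per sensor, e.g. for a live status panel (bucket and measurement names follow the conventions used earlier):

```flux
// Latest reading per sensor in the last hour
from(bucket: "tgm-metrics")
  |> range(start: -1h)
  |> filter(fn: (r) => r._measurement == "sensor_readings")
  |> group(columns: ["sensor_id"])
  |> last()
```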

Java Query Execution

@Service
public class InfluxDBSensorService {

    @Autowired
    private InfluxDBClient influxDBClient;

    // Organization used when executing queries
    @Value("${app.influxdb.org}")
    private String influxOrg;

    public List<Map<String, Object>> executeQuery(String flux) {
        List<FluxTable> tables = influxDBClient.getQueryApi()
            .query(flux, influxOrg);

        List<Map<String, Object>> results = new ArrayList<>();
        for (FluxTable table : tables) {
            for (FluxRecord record : table.getRecords()) {
                results.add(record.getValues());
            }
        }
        return results;
    }
}
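
The `range(start: …, stop: …)` boundaries in the queries above expect RFC3339 instants. Converting the `LocalDateTime` parameters used by the service layer (interpreted as UTC) can be isolated in a tiny helper; `FluxTime` is an illustrative name, not an existing class:

```java
import java.time.LocalDateTime;
import java.time.ZoneOffset;

// Convert a LocalDateTime (treated as UTC) to the RFC3339 instant
// string that Flux's range() accepts, e.g. "2026-02-01T00:00:00Z".
public class FluxTime {
    public static String toRfc3339(LocalDateTime t) {
        return t.toInstant(ZoneOffset.UTC).toString();
    }
}
```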

Retention Policies

Bucket Configuration

# Create bucket with 30-day retention
influx bucket create \
  --name tgm-metrics \
  --retention 30d \
  --org ensolutions

# Create logs bucket with 90-day retention
influx bucket create \
  --name logs-bucket \
  --retention 90d \
  --org ensolutions

Bucket                    Retention   Use Case
tgm-metrics               30 days     Raw sensor readings
tgm-metrics-downsampled   1 year      Hourly aggregates
logs-bucket               90 days     Audit and security logs

Downsampling Task

// Create a task to downsample data hourly
option task = {name: "downsample_sensors", every: 1h}

from(bucket: "tgm-metrics")
  |> range(start: -1h)
  |> filter(fn: (r) => r._measurement == "sensor_readings")
  |> aggregateWindow(every: 1h, fn: mean)
  |> to(bucket: "tgm-metrics-downsampled", org: "ensolutions")
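
Assuming the task definition above is saved to a file (the filename here is a placeholder, and the token matches the Docker setup below), it can be registered with the influx CLI:

```shell
# Register the downsampling task from a Flux file
influx task create \
  --file downsample.flux \
  --org ensolutions \
  --token my-super-secret-token
```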

Monitoring & Health

Health Check

# Check InfluxDB health
curl http://localhost:8086/health

# Check from application
GET /actuator/health

# Response includes InfluxDB status when enabled
{
  "status": "UP",
  "components": {
    "influxdb": {
      "status": "UP",
      "details": {
        "url": "http://localhost:8086",
        "org": "ensolutions"
      }
    }
  }
}

Metrics

# InfluxDB internal metrics
curl http://localhost:8086/metrics

# Key metrics to monitor:
# - influxdb_write_requests_total
# - influxdb_query_requests_total
# - influxdb_storage_bucket_size_bytes

Docker Setup

docker-compose.yml

services:
  influxdb:
    image: influxdb:2.7
    container_name: tgm-influxdb
    ports:
      - "8086:8086"
    environment:
      - DOCKER_INFLUXDB_INIT_MODE=setup
      - DOCKER_INFLUXDB_INIT_USERNAME=admin
      - DOCKER_INFLUXDB_INIT_PASSWORD=adminpassword
      - DOCKER_INFLUXDB_INIT_ORG=ensolutions
      - DOCKER_INFLUXDB_INIT_BUCKET=tgm-metrics
      - DOCKER_INFLUXDB_INIT_ADMIN_TOKEN=my-super-secret-token
      - DOCKER_INFLUXDB_INIT_RETENTION=30d
    volumes:
      - influxdb-data:/var/lib/influxdb2
      - influxdb-config:/etc/influxdb2
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8086/health"]
      interval: 30s
      timeout: 10s
      retries: 3

volumes:
  influxdb-data:
  influxdb-config:

Initial Setup

# Start InfluxDB
docker-compose up -d influxdb

# Create additional buckets
docker exec tgm-influxdb influx bucket create \
  --name logs-bucket \
  --retention 90d \
  --org ensolutions \
  --token my-super-secret-token

# Verify setup
docker exec tgm-influxdb influx bucket list \
  --org ensolutions \
  --token my-super-secret-token

Angular Integration

Sensor Data Service

import { Injectable } from '@angular/core';
import { HttpClient, HttpParams } from '@angular/common/http';
import { Observable } from 'rxjs';
import { map } from 'rxjs/operators';

export interface ApiResponse<T> {
  success: boolean;
  data: T;
}

export interface SensorReading {
  time: string;
  value: number;
  sensorId: string;
  sensorType: string;
}

export interface SensorStatistics {
  min: number;
  max: number;
  mean: number;
  count: number;
}

@Injectable({ providedIn: 'root' })
export class SensorDataService {
  private baseUrl = '/iot/sensors';

  constructor(private http: HttpClient) {}

  getReadings(
    sensorId: number,
    start: Date,
    end: Date
  ): Observable<SensorReading[]> {
    const params = new HttpParams()
      .set('start', start.toISOString())
      .set('end', end.toISOString());

    return this.http
      .get<ApiResponse<SensorReading[]>>(
        `${this.baseUrl}/${sensorId}/readings`,
        { params }
      )
      .pipe(map(response => response.data));
  }

  getStatistics(
    sensorId: number,
    start: Date,
    end: Date
  ): Observable<SensorStatistics> {
    const params = new HttpParams()
      .set('start', start.toISOString())
      .set('end', end.toISOString());

    return this.http
      .get<ApiResponse<SensorStatistics>>(
        `${this.baseUrl}/${sensorId}/statistics`,
        { params }
      )
      .pipe(map(response => response.data));
  }

  writeReading(
    sensorId: number,
    value: number,
    timestamp?: Date,
    quality?: number
  ): Observable<void> {
    return this.http.post<void>(`${this.baseUrl}/${sensorId}/readings`, {
      value,
      timestamp: timestamp?.toISOString() || new Date().toISOString(),
      quality
    });
  }
}

Sensor Chart Component

import { Component, Input, OnInit, OnDestroy } from '@angular/core';
import { Subject, interval } from 'rxjs';
import { takeUntil, switchMap } from 'rxjs/operators';
import { SensorDataService, SensorReading } from './sensor-data.service';

@Component({
  selector: 'app-sensor-chart',
  template: `
    <div class="sensor-chart">
      <h3>{{ sensorName }}</h3>
      <div *ngIf="loading" class="loading">Loading...</div>
      <canvas #chart></canvas>
      <div class="stats" *ngIf="statistics">
        <span>Min: {{ statistics.min | number:'1.1-1' }}</span>
        <span>Max: {{ statistics.max | number:'1.1-1' }}</span>
        <span>Avg: {{ statistics.mean | number:'1.1-1' }}</span>
      </div>
    </div>
  `
})
export class SensorChartComponent implements OnInit, OnDestroy {
  @Input() sensorId!: number;
  @Input() sensorName!: string;
  @Input() refreshInterval = 30000; // 30 seconds

  readings: SensorReading[] = [];
  statistics: any;
  loading = true;

  private destroy$ = new Subject<void>();

  constructor(private sensorService: SensorDataService) {}

  ngOnInit(): void {
    this.loadData();

    // Auto-refresh
    interval(this.refreshInterval)
      .pipe(takeUntil(this.destroy$))
      .subscribe(() => this.loadData());
  }

  ngOnDestroy(): void {
    this.destroy$.next();
    this.destroy$.complete();
  }

  private loadData(): void {
    const end = new Date();
    const start = new Date(end.getTime() - 24 * 60 * 60 * 1000); // 24 hours

    this.sensorService.getReadings(this.sensorId, start, end)
      .subscribe(readings => {
        this.readings = readings;
        this.loading = false;
        this.updateChart();
      });

    this.sensorService.getStatistics(this.sensorId, start, end)
      .subscribe(stats => {
        this.statistics = stats;
      });
  }

  private updateChart(): void {
    // Update chart with this.readings data
    // Use Chart.js, ngx-charts, or similar
  }
}

Troubleshooting

Common Issues

Connection Refused

# Check if InfluxDB is running
docker ps | grep influxdb

# Check logs
docker logs tgm-influxdb

# Verify network connectivity
curl http://localhost:8086/health

Authentication Failed

# Verify token
docker exec tgm-influxdb influx auth list \
  --org ensolutions

# Generate new token if needed
docker exec tgm-influxdb influx auth create \
  --org ensolutions \
  --all-access

No Data Returned

// Check if data is being written
// Add debug logging
log.debug("Writing to InfluxDB - bucket: {}, measurement: {}, tags: {}",
    bucket, measurement, tags);

# Verify with direct query
docker exec tgm-influxdb influx query '
  from(bucket: "tgm-metrics")
    |> range(start: -1h)
    |> limit(n: 10)
' --org ensolutions

High Memory Usage

# Limit memory in Docker
services:
  influxdb:
    deploy:
      resources:
        limits:
          memory: 2G

Performance Optimization

  1. Use appropriate tags - Index fields you filter on frequently
  2. Batch writes - Use batching for high-throughput scenarios
  3. Downsample old data - Create aggregation tasks
  4. Set retention policies - Don't keep data longer than needed
  5. Use field types correctly - Use integers for IDs, floats for measurements
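
Point 5 is worth spelling out: InfluxDB fixes a field's type on first write (per shard), so a value written once as a float cannot later be written as an integer to the same field without write errors. In line protocol the distinction is the `i` suffix:

```
# value is stored as a float (no suffix), quality as an integer ("i")
sensor_readings,sensor_id=42 value=75.5,quality=100i
```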