Golatar Data Bridge

Golatar Data Bridge is a database-agnostic, JPA/Hibernate-based logical data export and import library. It lets you export and import application data at the level of JPA entities, independent of the underlying database vendor.

Instead of creating physical database dumps, Golatar Data Bridge operates on your domain model and produces portable JSON snapshots that can be restored into different databases and environments.

This makes it ideal for:

  • Database migrations (e.g. PostgreSQL → MySQL → H2)
  • Environment synchronization (prod → staging → local)
  • Test data provisioning
  • Long-term data portability
  • Logical backups based on the JPA domain model

Key Concepts

Logical, Domain-Level Backups

Golatar Data Bridge does not back up tables or vendor-specific database structures. It exports and imports data based on your JPA entity definitions, making the JPA model the single source of truth.

This means:

  • The backup format is independent of the database vendor
  • Schema differences can be handled through JPA mappings
  • Refactorings and migrations can be managed at the domain level
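As an illustration, a minimal JPA entity is all the structural definition the export needs; the `User` class below is a hypothetical example, not part of the library. Exported rows follow this mapping, not the vendor-specific table underneath it:

```java
import jakarta.persistence.Entity;
import jakarta.persistence.Id;

// Hypothetical domain entity: Golatar Data Bridge reads and writes data
// according to this JPA mapping, so the mapping is the source of truth.
@Entity
public class User {

    @Id
    private Long id;

    private String name;

    protected User() { } // no-args constructor required by JPA
}
```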

Database-Agnostic JSON Snapshots

Data is exported as JSON snapshots containing:

  • Entity type information
  • Creation timestamp
  • Entry count
  • Serialized entity data
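Purely as an illustration of the listed contents, a snapshot might look like the following. The field names (`entityType`, `createdAt`, `entryCount`, `entries`) and the values are assumptions, not the library's actual wire format:

```json
{
  "entityType": "com.example.User",
  "createdAt": "2026-04-17T09:05:40+02:00",
  "entryCount": 2,
  "entries": [
    { "id": 1, "name": "alice" },
    { "id": 2, "name": "bob" }
  ]
}
```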

These snapshots can be:

  • Inspected manually
  • Transported between systems
  • Restored into different databases

Isolated Persistence Layer

Golatar Data Bridge uses its own Hibernate bootstrap and does not rely on application services or caches. This ensures:

  • No interference with application-level caches
  • No business logic side effects
  • No lifecycle listeners or interceptors
  • Clean and deterministic data access

Architecture Overview

The library is structured around three main components:

  • DataBridgeExporter — Exports entity data to JSON
  • DataBridgeImporter — Imports entity data from JSON
  • DataBridgeCore — Shared Hibernate bootstrap infrastructure

Serialization is handled by:

  • CollectionSerializer
  • SerializedCollection

Import behavior is controlled by:

  • ImportDeleteMode

Exporting Data

Example

Properties props = ... // Hibernate + JDBC properties

DataBridgeExporter exporter = DataBridgeExporter.builder()
    .targetFolder(new File("/backups"))
    .properties(props)
    .build();

exporter.exportBackup(
    User.class,
    Comment.class,
    Topic.class
);

What happens during export

  • Hibernate is bootstrapped using the provided properties
  • Entities are registered explicitly
  • All rows for each entity are loaded
  • Data is written as JSON snapshots per entity
  • No application caches or services are used
  • Each entity is written to a separate file, e.g.:
    • User.json
    • Comment.json
    • Topic.json

Importing Data

Example

Properties props = ... // Hibernate + JDBC properties

DataBridgeImporter importer = DataBridgeImporter.builder()
    .importDeleteMode(ImportDeleteMode.AUTO)
    .deletionBatchSize(100)
    .properties(props)
    .build();

importer.importBackup(
    new File("/backups/User.json"),
    new File("/backups/Comment.json"),
    new File("/backups/Topic.json")
);

Import delete modes

Golatar Data Bridge supports several strategies for clearing existing data before an import:

  Mode            Behavior
  NONE            Do not delete existing data
  DELETE          Use a single bulk delete (CriteriaDelete)
  DELETE_BATCHED  Delete entities in batches
  AUTO            Try a bulk delete, fall back to batched deletes

This allows safe handling of:

  • Foreign key constraints
  • Large tables
  • Databases with limited bulk delete capabilities
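The AUTO fallback described above can be sketched independently of Hibernate. This is a schematic of the control flow only, under the assumption stated in the table (bulk first, batched on failure); the `Store` interface and its method names are illustrative, not the library's API:

```java
import java.util.ArrayList;
import java.util.List;

// Schematic of the AUTO delete mode: try one bulk delete first; if the
// database rejects it, fall back to deleting in fixed-size batches until
// a batch comes back short. The Store interface is illustrative only.
class AutoDeleteSketch {

    interface Store {
        void bulkDelete() throws Exception; // e.g. a CriteriaDelete statement
        int deleteBatch(int batchSize);     // returns number of rows deleted
    }

    static List<String> run(Store store, int batchSize) {
        List<String> log = new ArrayList<>();
        try {
            store.bulkDelete();
            log.add("bulk");
        } catch (Exception bulkDeleteFailed) {
            int deleted;
            do {
                deleted = store.deleteBatch(batchSize);
                log.add("batch:" + deleted);
            } while (deleted == batchSize); // a short batch means the table is empty
        }
        return log;
    }
}
```

A batched fallback like this is what makes foreign-key-constrained or very large tables tractable when a single bulk delete is not.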

Hibernate Configuration

Golatar Data Bridge is independent of Spring and uses a standalone Hibernate bootstrap.

You must provide Hibernate and JDBC properties, for example:

Properties props = new Properties();

props.setProperty("jakarta.persistence.jdbc.url", "jdbc:postgresql://localhost:5432/app");
props.setProperty("jakarta.persistence.jdbc.user", "app");
props.setProperty("jakarta.persistence.jdbc.password", "secret");
props.setProperty("jakarta.persistence.jdbc.driver", "org.postgresql.Driver");

props.setProperty("hibernate.dialect", "org.hibernate.dialect.PostgreSQLDialect");
props.setProperty(
    "hibernate.connection.provider_class",
    "org.hibernate.hikaricp.internal.HikariCPConnectionProvider"
);

props.setProperty("hibernate.hikari.maximumPoolSize", "10");
props.setProperty("hibernate.hbm2ddl.auto", "none");

Any Hibernate-supported database can be used.
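For example, switching the same bootstrap to an in-memory H2 database should only require swapping the JDBC and dialect properties; the URL and pool settings below are illustrative values:

```java
import java.util.Properties;

// Same property keys as the PostgreSQL example above, pointed at H2 instead.
Properties props = new Properties();

props.setProperty("jakarta.persistence.jdbc.url", "jdbc:h2:mem:app");
props.setProperty("jakarta.persistence.jdbc.user", "sa");
props.setProperty("jakarta.persistence.jdbc.password", "");
props.setProperty("jakarta.persistence.jdbc.driver", "org.h2.Driver");

props.setProperty("hibernate.dialect", "org.hibernate.dialect.H2Dialect");
props.setProperty("hibernate.hbm2ddl.auto", "none");
```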

Portability and Migrations

Because Golatar Data Bridge operates on the JPA model, it enables:

  • Migrating data between different database vendors
  • Restoring data into environments with different schemas
  • Using JPA mappings to handle structural differences

The JPA entity model is treated as the authoritative definition of the data structure.

Design Goals

Golatar Data Bridge is designed to be:

  • Database-agnostic
  • Domain-model driven
  • Cache-independent
  • Deterministic and side-effect free
  • Suitable for migrations and long-term data portability
  • Easy to integrate into existing systems

Limitations and Considerations

  • Foreign key relationships and insert order must be considered when importing
  • Large datasets may require batching and memory tuning
  • Refactoring entity class names may affect existing snapshots
  • JSON snapshots are logical backups, not physical database backups

Golatar Data Bridge is not intended to replace vendor-specific tools like pg_dump for full physical backups.