https://github.com/y-ok/flexdblink

Manage DB test data as text (CSV/JSON/YAML/XML ↔ DB), with LOB files and JUnit 5 + Spring integration for Oracle/PostgreSQL/MySQL/SQL Server.

# FlexDBLink

[![GitHub release](https://img.shields.io/github/v/release/y-ok/FlexDBLink)](https://github.com/y-ok/FlexDBLink/releases)
[![CI](https://github.com/y-ok/FlexDBLink/actions/workflows/ci.yml/badge.svg?branch=main)](https://github.com/y-ok/FlexDBLink/actions/workflows/ci.yml)
[![Coverage](https://codecov.io/gh/y-ok/FlexDBLink/branch/main/graph/badge.svg)](https://codecov.io/gh/y-ok/FlexDBLink)

**FlexDBLink** is a data management tool that automates DB data preparation and verification in development and testing workflows.

It performs **bulk load/dump** between databases and text files such as CSV / JSON / YAML / XML.
LOB data (BLOB/CLOB) can be managed intuitively via external file references.
The JUnit 5 extension automates per-test data setup and rollback, enabling **Git-based configuration management without writing any SQL scripts**.

---

## Why FlexDBLink?

### Do any of these sound familiar?

- Writing `INSERT` / `TRUNCATE` SQL by hand for every test
- Missing DB cleanup steps before/after tests
- LOB data (images, PDFs, XML) is hard to manage with plain SQL
- DB state differs across development environments with no reproducibility
- Want to version-control data in Git, but SQL diffs are hard to read

### How FlexDBLink solves them

| Problem | Solution |
| --------- | ---------- |
| Manual SQL for data management | Manage with text files: CSV/JSON/YAML/XML |
| Difficulty handling LOB data | Reference external files with `file:filename` |
| Manual DB cleanup before/after tests | Automatic load and rollback with `@LoadData` |
| Non-reproducible DB state across environments | Version-control datasets in Git |
| Load order management under FK constraints | Auto-generate and update `table-ordering.txt` |

---

## Key Features

- **Load & Dump** — Bidirectional data transfer between CSV / JSON / YAML / XML and databases
- **LOB External File References** — Manage BLOB/CLOB by simply writing `file:xxx` in a CSV cell
- **Two-phase Load** — Initial data (`pre`) + scenario-specific incremental data
- **Automatic Table Ordering** — Analyzes FK dependencies and auto-generates `table-ordering.txt`
- **JUnit 5 Extension** — Per-test data injection and automatic rollback via `@LoadData`
- **Multi-DB Support** — Oracle / PostgreSQL / MySQL / SQL Server

---

## Requirements

| Requirement | Details |
| ------------- | --------- |
| Java | 11 or higher (CI verified: core on **11 / 17 / 21 / 25**, plugin on **11**) |
| OS | Windows / macOS / Linux |

> When using Oracle, you need to provide `oracle.jdbc.OracleDriver` (`ojdbc8`) separately.

---

## Quick Start

### 1. Download

Download the latest `FlexDBLink-distribution.zip` from [GitHub Releases](https://github.com/y-ok/FlexDBLink/releases) and extract it.

```bash
FlexDBLink/
├── flexdblink.jar
└── conf/
    └── application.yml   ← configuration file
```

### 2. Edit the Configuration File

Edit `conf/application.yml` to set connection details and the data path.

#### **Single DB**

```yaml
data-path: /path/to/your/data

connections:
  - id: DB1
    url: jdbc:oracle:thin:@localhost:1521/OPEDB
    user: oracle
    password: password
    driver-class: oracle.jdbc.OracleDriver
```

#### **Multiple DBs** (just add more entries under `connections`)

```yaml
data-path: /path/to/your/data

connections:
  - id: DB1
    url: jdbc:oracle:thin:@localhost:1521/OPEDB
    user: oracle
    password: password
    driver-class: oracle.jdbc.OracleDriver
  - id: DB2
    url: jdbc:postgresql://localhost:5432/mydb
    user: postgres
    password: password
    driver-class: org.postgresql.Driver
```

> Use `--target DB1,DB2` to restrict processing to specific DB IDs. When omitted, all connections are targeted.
>
> Dialect is resolved per connection entry. The tool checks `connections[].driver-class` first and
> falls back to the JDBC URL. Supported dialects: `ORACLE`, `POSTGRESQL`, `MYSQL`, `SQLSERVER`.
> If one connection cannot be resolved to a supported dialect, the tool fails with an error that
> includes the target connection ID.
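
As a rough illustration of this resolution order, the sketch below checks the driver class first and falls back to the JDBC URL. The helper is hypothetical, not FlexDBLink's actual code; only the four supported dialect names come from the documentation above.

```java
// Illustrative sketch of dialect resolution: driver-class first, JDBC URL as
// fallback, error if neither matches. Not FlexDBLink's actual implementation.
public class DialectSketch {
    static String resolve(String driverClass, String url) {
        String d = driverClass == null ? "" : driverClass.toLowerCase();
        if (d.contains("oracle")) return "ORACLE";
        if (d.contains("postgresql")) return "POSTGRESQL";
        if (d.contains("mysql")) return "MYSQL";
        if (d.contains("sqlserver")) return "SQLSERVER";
        String u = url == null ? "" : url.toLowerCase();
        if (u.startsWith("jdbc:oracle:")) return "ORACLE";
        if (u.startsWith("jdbc:postgresql:")) return "POSTGRESQL";
        if (u.startsWith("jdbc:mysql:")) return "MYSQL";
        if (u.startsWith("jdbc:sqlserver:")) return "SQLSERVER";
        throw new IllegalStateException("Unsupported dialect for connection URL: " + url);
    }

    public static void main(String[] args) {
        System.out.println(resolve("oracle.jdbc.OracleDriver", null));         // ORACLE
        System.out.println(resolve(null, "jdbc:postgresql://localhost/mydb")); // POSTGRESQL
    }
}
```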

### 3. Place Your Dataset

Place CSV files under `data-path`. The filename corresponds to the table name.

```bash
/path/to/your/data/
└── load/
    └── pre/
        └── DB1/
            ├── USERS.csv
            └── ORDERS.csv
```

CSV files are UTF-8 with a header row:

```csv
USER_ID,USER_NAME,STATUS
1,Alice,ACTIVE
2,Bob,INACTIVE
```

### 4. Run

#### Setup (first time only)

Scans the DB schema to detect LOB columns and auto-writes `file-patterns` templates into `application.yml`.

```bash
java -jar flexdblink.jar --setup
# Target a specific DB only
java -jar flexdblink.jar --setup --target DB1
```

> The `file-patterns` generated by `--setup` are **filename format templates for dump output** (e.g., `{ID}.bin`).
> After generation, open `application.yml` and **edit and maintain them** according to your project's naming conventions.

#### Load

```bash
# Load initial data (inserts from load/pre/<DB_ID>/)
java -jar flexdblink.jar --load

# Load with a scenario (inserts load/pre/<DB_ID>/ then load/SCENARIO/<DB_ID>/)
java -jar flexdblink.jar --load SCENARIO

# Target a specific DB
java -jar flexdblink.jar --load SCENARIO --target DB1
java -jar flexdblink.jar --load SCENARIO --target DB1,DB2

# Running without arguments is equivalent to --load (loads pre)
java -jar flexdblink.jar
```

> When `confirm-before-load: true` is set, a confirmation prompt (`y/N`) appears before execution.

#### Dump

```bash
# Dump all DBs under the name SCENARIO (output to dump/SCENARIO/<DB_ID>/)
java -jar flexdblink.jar --dump SCENARIO

# Dump a specific DB only
java -jar flexdblink.jar --dump SCENARIO --target DB1
```

#### Short Options

```bash
java -jar flexdblink.jar -l SCENARIO -t DB1 # --load SCENARIO --target DB1
java -jar flexdblink.jar -d SCENARIO -t DB1 # --dump SCENARIO --target DB1
java -jar flexdblink.jar -s # --setup
```

---

## Building from Source (for developers)

**Requirements**: Java 11+, Maven 3.9+, Docker (for integration tests with Testcontainers)

### Project Structure

```text
FlexDBLink/                  ← Aggregator POM (flexdblink-parent)
├── flexdblink/              ← Core module (CLI + JUnit 5 extension)
└── flexdblink-maven-plugin/ ← Maven plugin module (load/dump goals)
```

### Build All Modules

```bash
mvn clean install -DskipTests
```

This builds all modules and installs the following artifacts to the local Maven repository (`~/.m2/repository/io/github/yok/`):

| Module | Artifact | Location |
| --- | --- | --- |
| `flexdblink-parent` | POM | `flexdblink-parent/<version>/` |
| `flexdblink` | JAR | `flexdblink/<version>/` |
| `flexdblink-maven-plugin` | Maven plugin JAR | `flexdblink-maven-plugin/<version>/` |

The core module also produces `flexdblink.jar` and `FlexDBLink-distribution.zip` under `flexdblink/target/`.

Installed and deployed child-module artifacts publish flattened consumer POMs. Downstream builds can
resolve `flexdblink` and `flexdblink-maven-plugin` without pre-installing or pre-deploying
`flexdblink-parent`.

### Core Module (`flexdblink`)

```bash
# Run unit tests + integration tests (requires Docker for Testcontainers)
mvn clean test -pl flexdblink

# Build distribution archive
mvn clean package -pl flexdblink -DskipTests
```

### Maven Plugin Module (`flexdblink-maven-plugin`)

```bash
# Run all tests: unit (surefire) + integration (failsafe, requires Docker)
# -am automatically builds the dependent core module first
mvn clean verify -pl flexdblink-maven-plugin -am
```

### CI Matrix

The CI pipeline (`ci.yml`) runs:

| Job | Java Versions | Description |
| --- | ------------- | ----------- |
| `test-and-coverage` | 11, 17, 21, 25 | Core module tests with JaCoCo coverage |
| `maven-plugin-test` | 11 | Plugin unit + integration tests |

---

## CLI Reference

```bash
java -jar flexdblink.jar [OPTIONS]
```

| Option | Short | Description |
| -------- | ------- | ------------- |
| `--load [scenario]` | `-l` | Load mode. Uses `pre` (or `pre-dir-name` from `application.yml`) when no scenario is specified |
| `--dump <scenario>` | `-d` | Dump mode. Scenario name is required |
| `--setup` | `-s` | Setup mode. Scans the DB schema for LOB columns and auto-generates `file-patterns` in `application.yml` |
| `--target <db-ids>` | `-t` | Comma-separated DB IDs to target. When omitted, all connections are processed |

> Overriding Spring properties from the command line is disabled. All configuration must be specified in `application.yml`.

### Example Output

#### **Load execution log (--load COMMON)**

```bash
$ java -jar flexdblink.jar --load COMMON

  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v2.7.18)

INFO : Mode: load, Scenario: COMMON, Target DBs: [DB1]
INFO : [DB1] Table[BINARY_TEST_TABLE] Initial | inserted=2
INFO : [DB1] Table[CHAR_CLOB_TEST_TABLE] Initial | inserted=2
INFO : [DB1] Table[DATE_TIME_TEST_TABLE] Initial | inserted=4
INFO : [DB1] Table[NUMERIC_TEST_TABLE] Initial | inserted=3
INFO : [DB1] Table[SAMPLE_BLOB_TABLE] Initial | inserted=2
INFO : [DB1] Table[BINARY_TEST_TABLE] Scenario (INSERT only) | inserted=2
INFO : [DB1] Table[CHAR_CLOB_TEST_TABLE] Scenario (INSERT only) | inserted=1
...
INFO : == Data loading to all DBs has completed ==
```

#### **Dump execution log (--dump COMMON)**

```bash
$ java -jar flexdblink.jar --dump COMMON

INFO : Mode: dump, Scenario: COMMON, Target DBs: [DB1]
INFO : [DB1] Table[BINARY_TEST_TABLE] CSV dump completed (UTF-8)
INFO : [DB1] Table[BINARY_TEST_TABLE] dumped-records=2, BLOB/CLOB file-outputs=2
INFO : [DB1] Table[CHAR_CLOB_TEST_TABLE] dumped-records=2, BLOB/CLOB file-outputs=4
...
INFO : === All DB dumps completed: Output [dump/COMMON] ===
```

---

## Directory Structure (under `data-path`)

```bash
<data-path>/
├── load/
│   ├── pre/
│   │   └── <DB_ID>/
│   │       ├── TABLE_A.csv
│   │       ├── TABLE_B.csv
│   │       └── table-ordering.txt  # Load order file for FK dependency resolution (auto-generated)
│   └── <scenario>/
│       └── <DB_ID>/
│           ├── TABLE_A.csv         # Incremental data (scenario-specific additions)
│           └── files/
│               └── Foo_001.bin     # LOB content (referenced from CSV via file:xxx)
└── dump/
    └── <scenario>/
        └── <DB_ID>/
            ├── *.csv               # Dump output
            └── files/              # Dumped LOB files
```

---

## Working with LOB (BLOB/CLOB)

**On load**: Write `file:filename` in a CSV cell; the loader reads `files/filename` from the dataset directory and inserts its contents.

```csv
ID,FILE_DATA
001,file:LeapSecond_001.dat
002,file:LeapSecond_002.dat
```

**On dump**: LOB columns are written as files under `files/`, and the CSV cell is populated with `file:xxx`.
The filename template is controlled by `file-patterns`.
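
The `file:` convention on load can be pictured with a small sketch. The helper below is hypothetical, not the actual loader code; it only demonstrates the documented rule that a cell starting with `file:` is replaced by the contents of the named file under `files/`.

```java
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch of the file: cell convention: "file:NAME" -> bytes of files/NAME,
// anything else passes through unchanged. Illustrative helper only.
public class LobCellSketch {
    static Object resolveCell(String cell, Path filesDir) throws Exception {
        if (cell != null && cell.startsWith("file:")) {
            return Files.readAllBytes(filesDir.resolve(cell.substring("file:".length())));
        }
        return cell; // ordinary values are inserted as-is
    }

    public static void main(String[] args) throws Exception {
        Path dir = Files.createTempDirectory("files");
        Files.write(dir.resolve("LeapSecond_001.dat"), new byte[] {1, 2, 3});
        byte[] blob = (byte[]) resolveCell("file:LeapSecond_001.dat", dir);
        System.out.println(blob.length);                 // 3
        System.out.println(resolveCell("plain", dir));   // plain
    }
}
```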

---

## application.yml Configuration Reference

```yaml
# Base path for CSV and LOB files (required)
data-path: /absolute/path/to/data

dbunit:
  pre-dir-name: pre                                 # Initial load directory name (default: pre)
  csv:
    format:
      date: "yyyy-MM-dd"                            # Dump output format / preferred parse format on load
      time: "HH:mm:ss"                              # Same as above
      dateTime: "yyyy-MM-dd HH:mm:ss"               # Same as above
      dateTimeWithMillis: "yyyy-MM-dd HH:mm:ss.SSS" # Same as above
  config:
    allow-empty-fields: true                        # Allow empty fields
    batched-statements: true                        # Use batch execution
    batch-size: 100                                 # Batch size

# DB connection settings (multiple entries supported)
connections:
  - id: DB1
    url: jdbc:oracle:thin:@localhost:1521/OPEDB
    user: oracle
    password: password
    driver-class: oracle.jdbc.OracleDriver
  - id: DB2
    url: jdbc:postgresql://localhost:5432/mydb
    user: postgres
    password: password
    driver-class: org.postgresql.Driver

# LOB filename templates for dump output
# Column name placeholders like {COL_NAME} are replaced with the row's values
file-patterns:
  SAMPLE_BLOB_TABLE:
    FILE_DATA: "LeapSecond_{ID}.dat"
  CHAR_CLOB_TEST_TABLE:
    CLOB_COL: "char_clob_{ID}.clob"
    NCLOB_COL: "char_clob_{ID}.nclob"

# Tables to exclude from dump
dump:
  exclude-tables:
    - flyway_schema_history
```

| Key | Required | Description |
| ----- | ---------- | ------------- |
| `data-path` | ✅ | Base path for CSV and LOB files |
| `dbunit.pre-dir-name` | | Initial load directory name (default: `pre`) |
| `dbunit.confirm-before-load` | | When `true`, shows a confirmation prompt before `--load` executes (default: `false`) |
| `dbunit.csv.format.date` | | **Dump output format** and **preferred parse format on load** for DATE (default: `yyyy-MM-dd`) |
| `dbunit.csv.format.time` | | **Dump output format** and **preferred parse format on load** for TIME (default: `HH:mm:ss`) |
| `dbunit.csv.format.dateTime` | | **Dump output format** and **preferred parse format on load** for TIMESTAMP (default: `yyyy-MM-dd HH:mm:ss`) |
| `dbunit.csv.format.dateTimeWithMillis` | | **Dump output format** and **preferred parse format on load** for TIMESTAMP with milliseconds (default: `yyyy-MM-dd HH:mm:ss.SSS`) |
| `connections[].id` | ✅ | DB identifier, matched against the `--target` option |
| `connections[].url` | ✅ | JDBC URL. Fallback dialect detection source when `driver-class` is omitted |
| `connections[].user` | ✅ | Database user name |
| `connections[].password` | | Database password |
| `connections[].driver-class` | | Preferred dialect detection source for each connection entry |
| `file-patterns` | | LOB filename templates for dump. Generate a template with `--setup`, then edit and maintain manually |
| `dump.exclude-tables` | | Tables to exclude from dump |
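
As a rough sketch of how a `file-patterns` template such as `LeapSecond_{ID}.dat` could be expanded on dump, the hypothetical helper below substitutes each `{COLUMN}` placeholder with that column's value from the current row (it is not FlexDBLink's implementation):

```java
import java.util.Map;

// Sketch of file-patterns expansion: every {COLUMN} placeholder in the
// template is replaced with the row's value for that column. Illustrative only.
public class FilePatternSketch {
    static String expand(String pattern, Map<String, String> row) {
        String out = pattern;
        for (Map.Entry<String, String> e : row.entrySet()) {
            out = out.replace("{" + e.getKey() + "}", e.getValue());
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> row = Map.of("ID", "001", "USER_NAME", "Alice");
        // Only placeholders that match a column name are substituted.
        System.out.println(expand("LeapSecond_{ID}.dat", row)); // LeapSecond_001.dat
    }
}
```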

---

## Published Maven Artifacts

This repository publishes Maven artifacts via GitHub Packages for `y-ok/FlexDBLink`.
Configure GitHub Packages access for this repository in your build environment before resolving these artifacts.
Package overview: [GitHub Packages](https://github.com/y-ok?tab=packages&repo_name=FlexDBLink)

| Artifact | Purpose | Notes |
| ----- | ----- | ----- |
| `io.github.yok:flexdblink` | Core library artifact with the JUnit 5 extension (`@LoadData`) | Use this dependency when consuming FlexDBLink from tests or Spring-based test suites |
| `io.github.yok:flexdblink-maven-plugin` | Maven plugin artifact for `load` / `dump` | Use this artifact inside Maven plugin configuration |

---

## Maven Plugin (`flexdblink-maven-plugin`)

`flexdblink-maven-plugin` is a published GitHub Packages artifact for this repository. Use the plugin
coordinates shown in the `POM Plugin Example` section below, and make sure GitHub Packages access is
configured in your build environment before resolving the plugin.

FlexDBLink can also be used as a Maven plugin wrapper for `load` and `dump`, while reusing the
existing core logic (`DataLoader` / `DataDumper`).

### Configuration Split

- Store DB connection settings in `~/.m2/settings.xml` (`servers/server`)
- Store `dataPath`, `filePatterns`, and `dbunit` settings in your POM plugin configuration

### `settings.xml` Example

```xml
<!-- The <server> structure below is standard Maven settings.xml. The element
     names inside <configuration> are illustrative; check the plugin
     documentation for the exact schema. -->
<settings>
  <servers>
    <server>
      <id>flexdblink-db1</id>
      <username>app_user</username>
      <password>password</password>
      <configuration>
        <dbId>DB1</dbId>
        <url>jdbc:postgresql://localhost:5432/app</url>
        <driverClass>org.postgresql.Driver</driverClass>
      </configuration>
    </server>
  </servers>
</settings>
```

### POM Plugin Example

```xml
<!-- Configuration element names below are reconstructed from the values in
     this example and are illustrative; check the plugin documentation for
     the exact schema. -->
<plugin>
  <groupId>io.github.yok</groupId>
  <artifactId>flexdblink-maven-plugin</artifactId>
  <version>${flexdblink.version}</version>
  <configuration>
    <serverId>flexdblink-db1</serverId>
    <dataPath>${project.basedir}/src/test/resources/dbunit</dataPath>
    <dbunit>
      <preDirName>pre</preDirName>
      <csv>
        <format>
          <date>yyyy-MM-dd</date>
          <dateTime>yyyy-MM-dd HH:mm:ss</dateTime>
          <dateTimeWithMillis>yyyy-MM-dd HH:mm:ss.SSS</dateTimeWithMillis>
        </format>
      </csv>
      <config>
        <allowEmptyFields>true</allowEmptyFields>
        <batchedStatements>true</batchedStatements>
        <batchSize>100</batchSize>
      </config>
    </dbunit>
    <filePatterns>
      <employee>
        <photo>employee/${ID}_photo.bin</photo>
      </employee>
    </filePatterns>
  </configuration>
</plugin>
```

### Commands

- `mvn flexdblink:load` → loads `pre` only
- `mvn flexdblink:load -Dflexdblink.scenario=scenario1` → loads `pre` and `scenario1`
- `mvn flexdblink:dump` → dumps to `${dataPath}/dump/`
- `mvn flexdblink:dump -Dflexdblink.scenario=scenario1` → dumps to `${dataPath}/dump/scenario1`

### VS Code Classpath Note

If you see `is not on the classpath of project flexdblink, only syntax errors are reported`,
open the repository with the included workspace file `FlexDBLink.code-workspace`.
The workspace intentionally registers only the repository root and relies on Maven import for
subprojects (including `flexdblink-maven-plugin`) to avoid duplicate project registration.

If the warning still appears after opening the workspace, run `Java: Clean Java Language Server Workspace`
from the VS Code command palette and reopen the workspace.
If needed, also run `Maven: Reload Projects`.

---

## JUnit 5 Extension (`@LoadData`)

### Maven Dependency

`io.github.yok:flexdblink` is the published artifact that provides the JUnit 5 extension. Resolve
this dependency from GitHub Packages for this repository after configuring GitHub Packages access in
your build environment.

```xml

io.github.yok
flexdblink
${flexdblink.version}
test

```

Simply annotate your test with `@LoadData` to automatically inject the dataset before the test and roll it back on completion.
This integrates with Spring Test transaction management (`@Transactional`), ensuring the DB state is reliably restored after each test method.

A **Spring Test execution context** is required (`@SpringBootTest`, `@MybatisTest`, `@ExtendWith(SpringExtension.class)`, etc.).

### Usage

```java
@MybatisTest
@AutoConfigureTestDatabase(replace = AutoConfigureTestDatabase.Replace.NONE)
@Import(MyDataSourceConfig.class)
@LoadData(scenario = "NORMAL", dbNames = "BBB") // ← applied to the entire class
class UserMapperTest {

@Autowired
private UserMapper mapper;

@Test
@LoadData(scenario = "ADMIN", dbNames = "BBB") // ← applied to a single method (overrides class-level)
void findById_returnsExpectedRecord() {
User user = mapper.findById(1L);
assertNotNull(user);
assertEquals("Alice", user.getName());
}
}
```

### Test Resource Layout Convention

```bash
# Multi-DB tests
src/test/resources/<TestClass>/<scenario>/input/<dbName>/*.csv
src/test/resources/<TestClass>/<scenario>/input/<dbName>/files/*   # LOB content

# Single-DB tests
src/test/resources/<TestClass>/<scenario>/input/*.csv
src/test/resources/<TestClass>/<scenario>/input/files/*            # LOB content
```

> **Paths outside this convention will cause an error.**

### `@LoadData` Parameters

| Parameter | Description |
| ----------- | ------------- |
| `scenario` | Scenario name (directory name). E.g., `"NORMAL"`, `"ERROR_CASE"` |
| `dbNames` | Target DB name (subdirectory name). When omitted, uses single-DB mode under `input/` |

---

## Supported Databases and Type Coverage

### Oracle

| SQL Type | Format / Notes |
| ---------- | ---------------- |
| `DATE` | `yyyy-MM-dd` |
| `TIMESTAMP(6)` | `yyyy-MM-dd HH:mm:ss.SSS` |
| `TIMESTAMP WITH TIME ZONE` / `WITH LOCAL TIME ZONE` | `yyyy-MM-dd HH:mm:ss +HHMM` |
| `INTERVAL YEAR TO MONTH` | `Y-M` |
| `INTERVAL DAY TO SECOND` | `D H:M:S` |
| `CLOB` / `NCLOB` / `BLOB` | External file reference via `file:xxx` |
| `NUMBER`, `VARCHAR2`, `CHAR`, `NVARCHAR2`, `NCHAR`, `RAW`, `BINARY_FLOAT`, `BINARY_DOUBLE` | Standard support |

### PostgreSQL

| SQL Type | Format / Notes |
| ---------- | ---------------- |
| `DATE` | `yyyy-MM-dd` |
| `TIME` | `HH:mm:ss` |
| `TIMESTAMP` | `yyyy-MM-dd HH:mm:ss[.fraction]` |
| `TIMESTAMPTZ` | `yyyy-MM-dd HH:mm:ss[.fraction]+HHMM` |
| `BYTEA` | External file reference via `file:xxx` |
| `BIGINT`, `NUMERIC`, `REAL`, `DOUBLE PRECISION`, `VARCHAR`, `CHAR`, `TEXT`, `XML` | Standard support |

### MySQL

| SQL Type | Format / Notes |
| ---------- | ---------------- |
| `DATE` | `yyyy-MM-dd` |
| `TIME` | `HH:mm:ss` |
| `DATETIME` / `TIMESTAMP` | `yyyy-MM-dd HH:mm:ss[.fraction]` |
| `YEAR` | `yyyy` |
| `TINYBLOB` / `BLOB` / `MEDIUMBLOB` / `LONGBLOB` | External file reference via `file:xxx` |
| Integer types, `DECIMAL`, `FLOAT`, `DOUBLE`, `BIT`, `BOOLEAN`, `CHAR`, `VARCHAR`, `TEXT` family, `ENUM`, `SET`, `JSON`, `BINARY`, `VARBINARY` | Standard support |

### SQL Server

| SQL Type | Format / Notes |
| ---------- | ---------------- |
| `DATE` | `yyyy-MM-dd` |
| `TIME` | `HH:mm:ss` |
| `DATETIME2` | `yyyy-MM-dd HH:mm:ss[.fraction]` |
| `VARBINARY` | External file reference via `file:xxx` |
| `BIGINT`, `INT`, `SMALLINT`, `TINYINT`, `DECIMAL`, `NUMERIC`, `FLOAT`, `REAL`, `BIT`, `CHAR`, `VARCHAR`, `NCHAR`, `NVARCHAR`, `XML` | Standard support |

---

## Accepted Date/Time Formats on CSV Load

On `--load`, the format specified by `dbunit.csv.format.*` is tried first.
If no match is found, the following formats are attempted in order (applies to all databases).

### Date (DATE)

| Format | Example |
| -------- | --------- |
| `yyyy-MM-dd` (ISO) | `2026-02-25` |
| `yyyy/MM/dd` | `2026/02/25` |
| `yyyyMMdd` (basic ISO) | `20260225` |
| `yyyy.MM.dd` | `2026.02.25` |
| `yyyy年M月d日` (Japanese) | `2026年2月25日` |
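
The try-configured-format-first, then-fall-back behavior can be sketched in Java. The helper below is illustrative (not FlexDBLink's parser); the candidate list mirrors the table above.

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.util.List;

// Sketch of date-parsing fallback: the configured format is tried first,
// then the built-in candidates in table order. Illustrative helper only.
public class DateParseSketch {
    static final List<DateTimeFormatter> CANDIDATES = List.of(
            DateTimeFormatter.ofPattern("yyyy-MM-dd"),
            DateTimeFormatter.ofPattern("yyyy/MM/dd"),
            DateTimeFormatter.ofPattern("yyyyMMdd"),
            DateTimeFormatter.ofPattern("yyyy.MM.dd"),
            DateTimeFormatter.ofPattern("yyyy年M月d日"));

    static LocalDate parse(String value, DateTimeFormatter configured) {
        try {
            return LocalDate.parse(value, configured);
        } catch (Exception ignored) { /* fall through to built-in formats */ }
        for (DateTimeFormatter f : CANDIDATES) {
            try {
                return LocalDate.parse(value, f);
            } catch (Exception ignored) { /* try the next candidate */ }
        }
        throw new IllegalArgumentException("Unparseable date: " + value);
    }

    public static void main(String[] args) {
        DateTimeFormatter cfg = DateTimeFormatter.ofPattern("yyyy-MM-dd");
        System.out.println(parse("2026/02/25", cfg)); // 2026-02-25
        System.out.println(parse("20260225", cfg));   // 2026-02-25
    }
}
```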

### Time (TIME)

| Format | Example |
| -------- | --------- |
| `HH:mm:ss[.fraction]` (seconds and fractional seconds are optional) | `14:30:00.123456789` |
| `HH:mm` | `14:30` |
| `HHmmss[.fraction]` (no delimiter) | `143000.123` |
| `HHmm` | `1430` |

### Timestamp (TIMESTAMP)

All combinations of date and time formats are tried, with `dbunit.csv.format.dateTime` / `dateTimeWithMillis` applied first.

> DB-specific extended types (e.g., Oracle `TIMESTAMP WITH TIME ZONE`, SQL Server `DATETIMEOFFSET`) have additional formats in each DB's implementation. See the type coverage tables for details.

---

## Best Practices

**No need to maintain `table-ordering.txt` manually**
FK dependencies are automatically analyzed to determine the load order, and the file is always kept up to date. You can commit it, but manual editing is not necessary.
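
The idea behind FK-aware ordering can be sketched as a topological sort: parent tables (FK targets) load before their children. The helper below (Kahn's algorithm) is illustrative, not FlexDBLink's implementation.

```java
import java.util.*;

// Sketch of FK-aware load ordering via Kahn's algorithm. Illustrative only.
public class TableOrderSketch {
    /** deps maps a child table to the set of parent tables it references via FK. */
    static List<String> loadOrder(Map<String, Set<String>> deps) {
        Map<String, Integer> remaining = new HashMap<>();   // unresolved parents per table
        Map<String, List<String>> children = new HashMap<>();
        for (Map.Entry<String, Set<String>> e : deps.entrySet()) {
            remaining.putIfAbsent(e.getKey(), 0);
            for (String parent : e.getValue()) {
                remaining.putIfAbsent(parent, 0);
                remaining.merge(e.getKey(), 1, Integer::sum);
                children.computeIfAbsent(parent, k -> new ArrayList<>()).add(e.getKey());
            }
        }
        Deque<String> ready = new ArrayDeque<>();
        remaining.forEach((t, n) -> { if (n == 0) ready.add(t); });
        List<String> order = new ArrayList<>();
        while (!ready.isEmpty()) {
            String t = ready.remove();
            order.add(t);
            for (String c : children.getOrDefault(t, List.of())) {
                if (remaining.merge(c, -1, Integer::sum) == 0) ready.add(c);
            }
        }
        return order; // parents first; reverse this order for deletes
    }

    public static void main(String[] args) {
        Map<String, Set<String>> deps = new LinkedHashMap<>();
        deps.put("ORDERS", Set.of("USERS")); // ORDERS has an FK to USERS
        deps.put("USERS", Set.of());
        System.out.println(loadOrder(deps)); // [USERS, ORDERS]
    }
}
```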

**Configure tables to exclude**
Add tables like `flyway_schema_history` to `dump.exclude-tables` so they are not loaded or dumped.

**Standardize date/time formats across your team**
Explicitly configuring `dbunit.csv.format.*` aligns the dump output format with the preferred load parse format, preventing inconsistencies due to environment differences.
Even without configuration, multiple formats such as `yyyy-MM-dd` / `yyyy/MM/dd` are tried automatically, so existing CSV files can be loaded as-is.

**Grow `file-patterns` yourself**
`--setup` only generates a template for filename formats. Edit and maintain `file-patterns` in `application.yml` manually to match your project's naming conventions.

**Centralize LOBs in `files/`**
Store BLOB/CLOB binaries in the `files/` directory rather than in CSV, and consider applying Git LFS to that directory.

**Choose a data format that produces readable diffs**
CSV is the simplest, but JSON/YAML is better for nested structures. Choose based on your team's needs.

---

## Docker Sample Environment (Oracle 19c)

A sample using an Oracle 19c Docker environment is available in the `script/` directory.

- [Oracle 19c (Docker) Setup and FlexDBLink Sample Run](script/README_jp.md)

---

## Internal Implementation

- **DBUnit** is used internally for dataset management, but its API is fully abstracted by FlexDBLink.
- Separate implementations handle the type characteristics of Oracle, PostgreSQL, MySQL, and SQL Server.

---

## License

This repository is provided under the **Apache License 2.0**. See [LICENSE](LICENSE.txt) for details.

---

## Contributing

Bug reports and feature requests are welcome. Issues and PRs are appreciated.