https://github.com/squidmin/java11-spring-gradle-bigquery-reference
Java v11 ⋅ Spring v2 ⋅ Gradle ⋅ BigQuery
- Host: GitHub
- URL: https://github.com/squidmin/java11-spring-gradle-bigquery-reference
- Owner: squidmin
- Created: 2023-04-05T13:04:07.000Z (over 2 years ago)
- Default Branch: main
- Last Pushed: 2023-08-18T02:27:37.000Z (over 2 years ago)
- Topics: bigquery, gradle, gradle-java, java, java-gradle, java11, java11-spring-boot, spring, spring-boot-2, spring-mvc, spring-rest
- Language: Java
- Size: 209 KB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
# java11-spring-gradle-bigquery-reference
## About
Made with:
- **Adoptium Temurin OpenJDK 11.0.17**
- **Spring Boot v2.7.10**
- **Gradle 7.6.1**
- **IntelliJ IDEA 2023.1 (Ultimate Edition)**
---
## Install & build
### Download, install, and initialize the gcloud SDK on your local machine
Refer to the `gcloud` CLI documentation to complete this step.
Install the `gcloud` SDK to your home directory (e.g., `/Users/USERNAME/google-cloud-sdk`).
When installation finishes, add the `gcloud` executable to your system's `PATH` and run:
```shell
gcloud init
```
### gcloud CLI: Application Default Credentials (ADC) usage
```shell
gcloud auth login
gcloud auth application-default login
```
### gcloud CLI: Generate an Application Default Credentials (ADC) access token
If you're running the application locally, generate an access token using Application Default Credentials (ADC):
```shell
gcloud auth application-default print-access-token
```
Export the token so the application can read it from the environment:
```shell
export GCP_ADC_ACCESS_TOKEN="$(gcloud auth application-default print-access-token)"
```
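As a quick smoke test of the exported token, the BigQuery REST API can be called directly. A minimal sketch using the `jobs.query` endpoint; the project ID and SQL below are placeholders, the helper function is not part of the repo, and setting `DRY_RUN=1` prints the request instead of sending it:
```shell
# Sketch: run a query via the BigQuery REST API using the ADC token.
# `bq_rest_query` is a hypothetical helper; the project ID is a placeholder.
bq_rest_query() {
  project="$1"
  sql="$2"
  url="https://bigquery.googleapis.com/bigquery/v2/projects/${project}/queries"
  if [ "${DRY_RUN:-0}" = "1" ]; then
    # Print the request instead of sending it.
    echo "POST ${url}"
    echo "query: ${sql}"
    return 0
  fi
  curl -s -X POST \
    -H "Authorization: Bearer ${GCP_ADC_ACCESS_TOKEN}" \
    -H "Content-Type: application/json" \
    -d "{\"query\": \"${sql}\", \"useLegacySql\": false}" \
    "${url}"
}

DRY_RUN=1 bq_rest_query my-project 'SELECT 1'
# → POST https://bigquery.googleapis.com/bigquery/v2/projects/my-project/queries
# → query: SELECT 1
```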
### gcloud CLI: Generate an access token for service account impersonation
Run this command to generate an access token for a specific GCP service account:
```shell
export GCP_SA_ACCESS_TOKEN=$(gcloud auth print-access-token --impersonate-service-account='GCP_SA_EMAIL_ADDRESS')
```
**Replace the following**:
- `GCP_SA_EMAIL_ADDRESS`: the email address of the service account to impersonate.
Example:
```shell
export GCP_SA_ACCESS_TOKEN=$(gcloud auth print-access-token --impersonate-service-account='sa-developer@your-sa-name.iam.gserviceaccount.com')
```
### Create and store a service account key
This section covers using a GCP service account key (`.json`) file stored on your local file system.
To map a local `gcloud` installation to a volume on a container instance running the application, include the `-v` flag in the `docker run` command used to start the container, as described below.
### macOS
Assuming the service account key file is stored in the local `gcloud` configuration directory:
`/Users/USERNAME/.config/gcloud`
```shell
export LOCAL_GCLOUD_AUTH_DIRECTORY=$HOME/.config/gcloud
```
and the target volume on the container instance is:
`/root/.config/gcloud`
```shell
export CONTAINER_GCLOUD_AUTH_DIRECTORY=/root/.config/gcloud
```
the command to run the container instance would be:
```shell
docker run --rm -it \
-e GCP_SA_KEY_PATH=$GCP_SA_KEY_PATH \
-e GCP_ADC_ACCESS_TOKEN=$GCP_ADC_ACCESS_TOKEN \
-e GCP_SA_ACCESS_TOKEN=$GCP_SA_ACCESS_TOKEN \
-e GCP_DEFAULT_USER_PROJECT_ID=$GCP_DEFAULT_USER_PROJECT_ID \
-e GCP_DEFAULT_USER_DATASET=$GCP_DEFAULT_USER_DATASET \
-e GCP_DEFAULT_USER_TABLE=$GCP_DEFAULT_USER_TABLE \
-e GCP_SA_PROJECT_ID=$GCP_SA_PROJECT_ID \
-e GCP_SA_DATASET=$GCP_SA_DATASET \
-e GCP_SA_TABLE=$GCP_SA_TABLE \
-v ${LOCAL_GCLOUD_AUTH_DIRECTORY}:${CONTAINER_GCLOUD_AUTH_DIRECTORY} \
-v ${LOCAL_MAVEN_REPOSITORY}:${CONTAINER_MAVEN_REPOSITORY} \
java11-spring-gradle-bigquery-reference
```
**Replace the following** in the path to the `gcloud` directory:
- `USERNAME`: the current OS user's username
so that the path to the service account key file is correct, e.g.:
`/Users/squidmin/.config/gcloud/sa-private-key.json`
Refer to the GCP documentation for more information about creating service account keys.
Refer to the repository's CLI reference table for more information about run configuration arguments.
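Before starting the container, it can help to fail fast when the key file or a required variable is missing. A hypothetical preflight check (the function name and messages are illustrative, not part of the repo):
```shell
# Hypothetical preflight check: confirm required variables are set and
# the service account key file exists before running `docker run`.
preflight() {
  ok=1
  if [ -z "${GCP_SA_KEY_PATH}" ]; then
    echo "GCP_SA_KEY_PATH is not set"
    ok=0
  elif [ ! -f "${GCP_SA_KEY_PATH}" ]; then
    echo "Key file not found: ${GCP_SA_KEY_PATH}"
    ok=0
  fi
  if [ -z "${LOCAL_GCLOUD_AUTH_DIRECTORY}" ]; then
    echo "LOCAL_GCLOUD_AUTH_DIRECTORY is not set"
    ok=0
  fi
  [ "${ok}" = "1" ] && echo "Preflight OK"
}

preflight || echo "Fix the environment before running the container."
```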
### Activate GCP service account
```shell
gcloud auth activate-service-account --key-file=GCP_SA_KEY_FILE
```
**Replace the following**:
- `GCP_SA_KEY_FILE`: path to the user's service account key file.
Example:
```shell
gcloud auth activate-service-account --key-file='/Users/squidmin/.config/gcloud/sa-private-key.json'
```
### Set the active GCP project
```shell
gcloud config set project ${GCP_DEFAULT_USER_PROJECT_ID}
```
### List available gcloud SDK components
```shell
gcloud components list
```
### Update gcloud SDK components
```shell
gcloud components update
```
For the full list of run configuration arguments, refer to the repository's CLI reference table.
### Build JAR
Build the project and run the tests:
```shell
./gradlew clean build
```
Skip tests during the build:
```shell
./gradlew clean build -x test
```
Compile the test classes without running them:
```shell
./gradlew clean build testClasses -x test
```
### Add manifest file
```shell
jar -cmvf \
./build/tmp/jar/MANIFEST.MF \
./build/libs/java11-spring-gradle-bigquery-reference-0.0.1-SNAPSHOT.jar \
./build/classes/java/main/org/squidmin/java/spring/gradle/bigquery/JavaSpringGradleBigQueryReferenceApplication.class
```
### Build container image
```shell
docker build \
--build-arg GCP_SA_KEY_PATH=${GCP_SA_KEY_PATH} \
--build-arg GCP_DEFAULT_USER_PROJECT_ID=${GCP_DEFAULT_USER_PROJECT_ID} \
--build-arg GCP_DEFAULT_USER_DATASET=${GCP_DEFAULT_USER_DATASET} \
--build-arg GCP_DEFAULT_USER_TABLE=${GCP_DEFAULT_USER_TABLE} \
--build-arg GCP_SA_PROJECT_ID=${GCP_SA_PROJECT_ID} \
--build-arg GCP_SA_DATASET=${GCP_SA_DATASET} \
--build-arg GCP_SA_TABLE=${GCP_SA_TABLE} \
-t java11-spring-gradle-bigquery-reference .
```
---
## Commands
### Java application
#### Run container
```shell
docker run --rm -it \
-e GCP_SA_KEY_PATH=$GCP_SA_KEY_PATH \
-e GCP_ADC_ACCESS_TOKEN=$GCP_ADC_ACCESS_TOKEN \
-e GCP_SA_ACCESS_TOKEN=$GCP_SA_ACCESS_TOKEN \
-e GCP_DEFAULT_USER_PROJECT_ID=$GCP_DEFAULT_USER_PROJECT_ID \
-e GCP_DEFAULT_USER_DATASET=$GCP_DEFAULT_USER_DATASET \
-e GCP_DEFAULT_USER_TABLE=$GCP_DEFAULT_USER_TABLE \
-e GCP_SA_PROJECT_ID=$GCP_SA_PROJECT_ID \
-e GCP_SA_DATASET=$GCP_SA_DATASET \
-e GCP_SA_TABLE=$GCP_SA_TABLE \
-v ${LOCAL_GCLOUD_AUTH_DIRECTORY}:${CONTAINER_GCLOUD_AUTH_DIRECTORY} \
-v ${LOCAL_MAVEN_REPOSITORY}:${CONTAINER_MAVEN_REPOSITORY} \
java11-spring-gradle-bigquery-reference
```
#### Run jar
```shell
exec java -jar \
-Dspring.profiles.active=local \
./build/libs/java11-spring-gradle-bigquery-reference-0.0.1-SNAPSHOT.jar
```
### `bq` CLI
#### List datasets
```shell
bq ls --filter labels.key:value \
--max_results integer \
--format=prettyjson \
--project_id project_id
```
**Replace the following**:
- `key:value`: a label key and value, if applicable.
- `integer`: an integer representing the number of datasets to list.
- `project_id`: the name of the GCP project to target.
**Examples**:
```shell
bq ls --format=pretty
```
#### Create a dataset
Refer to the GCP documentation for creating datasets.
**Examples**:
```shell
bq --location=us mk \
--dataset \
--default_partition_expiration=3600 \
--default_table_expiration=3600 \
--description="An example." \
--label=test_label_1:test_value_1 \
--label=test_label_2:test_value_2 \
--max_time_travel_hours=168 \
--storage_billing_model=LOGICAL \
lofty-root-378503:test_dataset_integration
```
The Cloud Key Management Service (KMS) key parameter (`KMS_KEY_NAME`) can also be specified.
It passes the name of the default Cloud KMS key used to protect newly created tables in this dataset.
With this parameter set, you cannot create a Google-encrypted table in the dataset.
```shell
bq --location=us mk \
--dataset \
--default_kms_key=KMS_KEY_NAME \
...
lofty-root-378503:test_dataset_integration
```
#### Delete a dataset
Refer to the GCP documentation for deleting a dataset.
**Examples**:
Remove all tables in the dataset (`-r` flag):
```shell
bq rm -r -f -d lofty-root-378503:test_dataset_integration
```
#### Create a table with a configured schema
**Create an empty table with an inline schema definition**
```shell
bq mk --table project_id:dataset.table schema
```
**Replace the following**:
- `project_id`: the name of the GCP project to target.
- `dataset`: the name of the BigQuery dataset to target.
- `table`: the name of the BigQuery table to target.
- `schema`: an inline schema definition.
Example:
```shell
bq mk --table \
lofty-root-378503:test_dataset_integration.test_table_integration \
id:STRING,creation_timestamp:DATETIME,last_update_timestamp:DATETIME,column_a:STRING,column_b:BOOL
```
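The inline definition is a single comma-separated string of `column:type` pairs. For long schemas, a small shell helper (hypothetical, not part of the repo) can assemble the string from one pair per line:
```shell
# Hypothetical helper: join column:type pairs into the inline schema
# string expected by `bq mk --table`.
make_inline_schema() {
  local IFS=,
  echo "$*"
}

make_inline_schema \
  id:STRING \
  creation_timestamp:DATETIME \
  last_update_timestamp:DATETIME \
  column_a:STRING \
  column_b:BOOL
# → id:STRING,creation_timestamp:DATETIME,last_update_timestamp:DATETIME,column_a:STRING,column_b:BOOL
```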
#### Specify the schema in a JSON schema file
For an example JSON schema file, refer to: `/schema/example.json`.
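The repository's actual `/schema/example.json` is not reproduced here; as an illustration, a BigQuery JSON schema file matching the inline definition above might look like (field modes are an assumption):
```json
[
  { "name": "id", "type": "STRING", "mode": "REQUIRED" },
  { "name": "creation_timestamp", "type": "DATETIME", "mode": "NULLABLE" },
  { "name": "last_update_timestamp", "type": "DATETIME", "mode": "NULLABLE" },
  { "name": "column_a", "type": "STRING", "mode": "NULLABLE" },
  { "name": "column_b", "type": "BOOL", "mode": "NULLABLE" }
]
```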
**Create an empty table**
```shell
bq mk --table \
project_id:dataset.table \
path_to_schema_file
```
Example:
```shell
bq mk --table \
lofty-root-378503:test_dataset_integration.test_table_integration \
./schema/example.json
```
**Create a table with CSV data**
```shell
bq --location=location load \
--source_format=format \
project_id:dataset.table \
path_to_data_file \
path_to_schema_file
```
Example:
```shell
bq --location=us load \
--source_format=CSV \
lofty-root-378503:test_dataset_integration.test_table_integration \
./csv/example.csv \
./schema/example.json
```
Refer to the BigQuery documentation: Details of loading CSV data.
#### Delete a table
```shell
bq rm --table test_dataset_integration.test_table_integration
```
#### Show table schema
Example:
```shell
bq show \
--schema \
--format=prettyjson \
lofty-root-378503:test_dataset_integration.test_table_integration
```
The table schema can be written to a file:
```shell
bq show \
--schema \
--format=prettyjson \
lofty-root-378503:test_dataset_integration.test_table_integration > ./schema/example_show-write.json
```
#### Modify table schemas
```shell
bq update \
lofty-root-378503:test_dataset_integration.test_table_integration \
./schema/example_update.json
```
Refer to the GCP documentation on modifying table schemas.
#### Insert data into a table
**Examples**:
Insert for known values:
```shell
bq insert test_dataset_integration.test_table_integration ./json/example.json
```
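`bq insert` reads rows from a newline-delimited JSON file (one JSON object per line, no commas between rows). The repository's `./json/example.json` is not reproduced here; for the table schema used throughout, it might look like:
```json
{"id": "abc-123", "creation_timestamp": "2023-04-05T13:00:00", "last_update_timestamp": "2023-04-05T13:00:00", "column_a": "example", "column_b": true}
{"id": "def-456", "creation_timestamp": "2023-04-05T13:05:00", "last_update_timestamp": "2023-04-05T13:05:00", "column_a": "example", "column_b": false}
```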
Specify a template suffix (`--template_suffix` or `-x`):
```shell
bq insert --ignore_unknown_values \
--template_suffix=_insert \
test_dataset_integration.test_table_integration \
./json/example.json
```
Refer to the `bq insert` documentation.
#### Run an interactive query
```shell
bq query \
--use_legacy_sql=false \
'query_string'
```
Example:
```shell
bq query \
--use_legacy_sql=false \
'SELECT
id, fieldC
FROM
`lofty-root-378503.test_dataset_integration.test_table_integration`
LIMIT
3;'
```
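For longer statements, a quoted heredoc keeps the SQL free of shell-escaping issues (the backticked table name passes through untouched). A sketch; the final `bq query` invocation is shown as a comment:
```shell
# Capture the SQL in a variable via a quoted heredoc ('SQL' disables
# shell expansion, so backticks in the table name stay literal).
QUERY=$(cat <<'SQL'
SELECT id, fieldC
FROM `lofty-root-378503.test_dataset_integration.test_table_integration`
LIMIT 3;
SQL
)
echo "$QUERY"
# Then run it with:
#   bq query --use_legacy_sql=false "$QUERY"
```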