clickhouse_test
https://github.com/nityanandagohain/clickhouse_test
- Host: GitHub
- URL: https://github.com/nityanandagohain/clickhouse_test
- Owner: nityanandagohain
- Created: 2024-05-29T07:28:41.000Z (6 months ago)
- Default Branch: main
- Last Pushed: 2024-05-29T17:03:55.000Z (6 months ago)
- Last Synced: 2024-10-11T15:47:44.687Z (about 1 month ago)
- Language: Python
- Size: 21.5 KB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
Two ClickHouse clusters were created:
* clickhouse-1 is connected to the nitya-1 bucket, with prefer_not_to_merge=0
* clickhouse-2 is connected to the nitya-2 bucket, with prefer_not_to_merge=1

We are also setting support_batch_delete to false, because GCS does not support batch delete; we are not sure whether this changes the number of delete requests compared to S3. A sketch of the corresponding disk configuration follows below.
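As a hedged illustration (not taken from the repo), the storage configuration implied by these settings would look roughly like the following on clickhouse-2; the endpoint path, credential placeholders, and disk/volume names other than 's3' are assumptions:

```
<clickhouse>
    <storage_configuration>
        <disks>
            <gcs>
                <type>s3</type>
                <!-- GCS accessed through its S3-compatible API; bucket name from the README -->
                <endpoint>https://storage.googleapis.com/nitya-2/data/</endpoint>
                <access_key_id>HMAC_KEY</access_key_id>
                <secret_access_key>HMAC_SECRET</secret_access_key>
                <!-- GCS does not support S3 batch delete, so issue plain per-object deletes -->
                <support_batch_delete>false</support_batch_delete>
            </gcs>
        </disks>
        <policies>
            <tiered>
                <volumes>
                    <hot>
                        <disk>default</disk>
                    </hot>
                    <s3>
                        <disk>gcs</disk>
                        <!-- 1 on clickhouse-2: avoid merging parts once they land on this volume -->
                        <prefer_not_to_merge>1</prefer_not_to_merge>
                    </s3>
                </volumes>
            </tiered>
        </policies>
    </storage_configuration>
</clickhouse>
```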
```
CREATE TABLE default.test
(
    timestamp UInt64 CODEC(DoubleDelta, LZ4),
    data String
)
ENGINE = MergeTree
PARTITION BY toDate(timestamp / 1000000000)
ORDER BY (timestamp)
TTL toDateTime(timestamp / 1000000000) + toIntervalSecond(259200),
    toDateTime(timestamp / 1000000000) + toIntervalSecond(172800) TO VOLUME 's3'
SETTINGS index_granularity = 8192, ttl_only_drop_parts = 1, storage_policy = 'tiered';
```

With timestamps stored as nanoseconds, parts older than 2 days (172800 s) are moved to the 's3' volume, and parts older than 3 days (259200 s) are dropped entirely, whole parts at a time since ttl_only_drop_parts = 1.
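To exercise the tiering quickly (a sketch, not from the repo), one can insert a back-dated row and force TTL recalculation; background moves then relocate the old part to the 's3' volume:

```
-- 50 hours old: past the 48 h move threshold, before the 72 h drop threshold
INSERT INTO default.test VALUES (toUnixTimestamp(now() - INTERVAL 50 HOUR) * 1000000000, 'warm row');
INSERT INTO default.test VALUES (toUnixTimestamp(now()) * 1000000000, 'hot row');

-- Recalculate TTL info for existing parts; the move itself happens in the background
ALTER TABLE default.test MATERIALIZE TTL;
```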
The following query shows compressed and uncompressed size, row count, and part count per disk, to verify which parts have moved:

```
SELECT
disk_name,
database,
table,
formatReadableSize(sum(data_compressed_bytes) AS size) AS compressed,
formatReadableSize(sum(data_uncompressed_bytes) AS usize) AS uncompressed,
round(usize / size, 2) AS compr_rate,
sum(rows) AS rows,
count() AS part_count
FROM system.parts
WHERE (active = 1) AND (table LIKE '%test%')
GROUP BY
disk_name,
database,
table
ORDER BY
disk_name ASC,
    size DESC;
```
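Before relying on the query above, it can help to confirm that both the local and object-storage disks are registered (a minimal check, not part of the repo):

```
SELECT name, path FROM system.disks;
```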