https://github.com/dbajuliano/postgres-dba-scripts
Daily scripts used in multi-cluster environments with many hosts
- Host: GitHub
- URL: https://github.com/dbajuliano/postgres-dba-scripts
- Owner: dbajuliano
- License: gpl-3.0
- Created: 2020-05-09T11:46:17.000Z (almost 5 years ago)
- Default Branch: master
- Last Pushed: 2021-05-18T12:56:53.000Z (almost 4 years ago)
- Last Synced: 2024-08-13T07:15:56.971Z (8 months ago)
- Language: Shell
- Size: 109 KB
- Stars: 6
- Watchers: 2
- Forks: 4
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- jimsghstars - dbajuliano/postgres-dba-scripts - Daily scripts used on multi cluster environments with many hosts (Shell)
README
# :elephant: [postgres-dba-scripts](README.md)
Welcome to my public notes and scripts about Postgres
# :closed_lock_with_key: [.pgpass](.pgpass)
:heavy_exclamation_mark: Most of the scripts require credentials stored in the [.pgpass](https://www.postgresql.org/docs/11/libpq-pgpass.html) file to access the databases
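The file holds one `hostname:port:database:username:password` entry per line (`*` matches any value), and libpq ignores it unless it is readable only by its owner (`chmod 0600`). A minimal example, with placeholder hostnames and password:
```
primary.production.localhost:5432:*:replication:s3cret
single.test.localhost:5433:*:replication:s3cret
```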
# :floppy_disk: [pg_backup.sh](scripts/pg_backup.sh)
Cron syntax:
```
mm hh dd month weekday /path/to/script.sh hostname port username
```
The example below runs at midnight every day, connecting as user `replication` to two different hosts:
```
00 00 * * * /opt/bin/pg_backup.sh primary.production.localhost 5432 replication
00 00 * * * /opt/bin/pg_backup.sh single.test.localhost 5433 replication
```
Alternatively, you can use `/etc/crontab` as in the example below, pointing at the `.pgpass` credentials via the `PGPASSFILE` environment variable:
```
00 00 * * * postgres PGPASSFILE=/home/replication/.pgpass /opt/bin/pg_backup.sh localhost 5432 replication
```
Output:
```
[2020-05-10 - 13:12:02] : /opt/bin/pg_backup.sh : Performing base backup on new dir /backup/postgres
[2020-05-10 - 13:24:33] : /opt/bin/pg_backup.sh : Created compressed backup directory /backup/2020.05.10-13.12
[2020-05-10 - 13:24:33] : /opt/bin/pg_backup.sh : Compressing /backup/postgres into /backup/2020.05.10-13.12
[2020-05-10 - 13:33:49] : /opt/bin/pg_backup.sh : Cleaning compressed backup directories older than 4 days
[2020-05-10 - 13:33:50] : /opt/bin/pg_backup.sh : Deleting archives
[2020-05-10 - 13:33:50] : /opt/bin/pg_backup.sh : Finished
```
```
ls /backup/
2020.05.08-00.00
2020.05.09-00.00
2020.05.10-00.00
2020.05.11-00.00
backup_logs/
```
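The script itself lives in the repo; as a rough sketch of the flow the log above implies (base backup, compress, prune old archives), something like the following would work. The `pg_basebackup` options, paths, and 4-day retention below are assumptions, not the original code:
```
#!/usr/bin/env bash
# Hypothetical sketch of pg_backup.sh -- not the original script.
# Usage: pg_backup.sh hostname port username
set -euo pipefail
HOST=$1 PORT=$2 DBUSER=$3
BACKUP_ROOT=/backup                        # layout assumed from the log above
STAMP=$(date +%Y.%m.%d-%H.%M)
log() { echo "[$(date '+%Y-%m-%d - %H:%M:%S')] : $0 : $*"; }

log "Performing base backup on new dir $BACKUP_ROOT/postgres"
pg_basebackup -h "$HOST" -p "$PORT" -U "$DBUSER" -D "$BACKUP_ROOT/postgres" -X stream

log "Created compressed backup directory $BACKUP_ROOT/$STAMP"
mkdir -p "$BACKUP_ROOT/$STAMP"
log "Compressing $BACKUP_ROOT/postgres into $BACKUP_ROOT/$STAMP"
tar -czf "$BACKUP_ROOT/$STAMP/base.tar.gz" -C "$BACKUP_ROOT" postgres
rm -rf "$BACKUP_ROOT/postgres"

log "Cleaning compressed backup directories older than 4 days"
find "$BACKUP_ROOT" -maxdepth 1 -type d -name '20*' -mtime +4 -exec rm -rf {} +
log "Finished"
```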
# :mag_right: [find_user_multiple_hosts.sh](scripts/find_user_multiple_hosts.sh)
All you need to do is pass the username to search for as a parameter, e.g. `/opt/bin/find_user_multiple_hosts.sh juliano`.
Output:
```
Host: primary.production.local Port: 5432
user
juliano_readonly
juliano_admin
(2 rows)
------------------------------
Host: single.test.local Port: 5433
user
ro_juliano
su_juliano
(2 rows)
------------------------------
Host: demo.local Port: 5434
user
(0 rows)
------------------------------
```
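A rough sketch of how such a multi-host lookup might work, assuming a host list hard-coded in the script, `.pgpass` for authentication, and a `LIKE` match against `pg_user` (all assumptions, not the original code):
```
#!/usr/bin/env bash
# Hypothetical sketch of find_user_multiple_hosts.sh -- not the original script.
# Usage: find_user_multiple_hosts.sh username_pattern
set -euo pipefail
PATTERN=$1
HOSTS="primary.production.local:5432 single.test.local:5433 demo.local:5434"  # assumed list

for entry in $HOSTS; do
  HOST=${entry%%:*} PORT=${entry##*:}
  echo "Host: $HOST Port: $PORT"
  psql -h "$HOST" -p "$PORT" -U postgres -d postgres --no-psqlrc -q \
       -c "SELECT usename AS \"user\" FROM pg_user WHERE usename LIKE '%${PATTERN}%';"
  echo "------------------------------"
done
```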
# :clipboard: [refresh_materialized_view.sh](scripts/refresh_materialized_view.sh)
A cron script with a configurable port, for hosts running multiple Postgres instances:
```
00 2 * * * /opt/bin/refresh_materialized_view.sh localhost 5432
00 2 * * * /opt/bin/refresh_materialized_view.sh demo.local 5434
```
Output:
```
[2020-05-07 - 02:00:00] : /opt/bin/refresh_materialized_view.sh : Started
REFRESHING public.mview_queries
Timing is on.
REFRESH MATERIALIZED VIEW
Time: 315.307 ms
REFRESHING public.mview_trx
Timing is on.
REFRESH MATERIALIZED VIEW
Time: 574.408 ms
REFRESHING public.mview_tests
Timing is on.
REFRESH MATERIALIZED VIEW
Time: 142.555 ms
[2020-05-07 - 02:00:01] : /opt/bin/refresh_materialized_view.sh : Finished
-------------------------------------------------
```
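As a sketch of what the refresh loop might look like, assuming it discovers views via the `pg_matviews` catalog (the discovery query and connection details are assumptions, not the original code):
```
#!/usr/bin/env bash
# Hypothetical sketch of refresh_materialized_view.sh -- not the original script.
# Usage: refresh_materialized_view.sh hostname port
set -euo pipefail
HOST=$1 PORT=$2
log() { echo "[$(date '+%Y-%m-%d - %H:%M:%S')] : $0 : $*"; }

log "Started"
# Refresh every materialized view found in the target database, one at a time.
psql -h "$HOST" -p "$PORT" -U postgres -d postgres --no-psqlrc -q -t -A \
     -c "SELECT schemaname || '.' || matviewname FROM pg_matviews" |
while read -r MVIEW; do
  echo "REFRESHING $MVIEW"
  psql -h "$HOST" -p "$PORT" -U postgres -d postgres --no-psqlrc <<SQL
\timing on
REFRESH MATERIALIZED VIEW $MVIEW;
SQL
done
log "Finished"
```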
# :watch: [idle_in_transaction.sql](scripts/idle_in_transaction.sql)
Just put the psql command directly in the cron job to find queries that are "idle in transaction" (running every 2 minutes below):
```
*/2 * * * * psql -h localhost -U postgres -d postgres -p 5432 --no-psqlrc -q -t -A -c "SELECT TO_CHAR(NOW(), 'DD-MM-YYYY HH:MI:SS'),pid,STATE,usename,client_addr,application_name,query FROM pg_stat_activity WHERE pid <> pg_backend_pid() AND STATE IN ( 'idle in transaction' ,'idle in transaction (aborted)', 'disabled' ) AND state_change < current_timestamp - '2 minutes'::INTERVAL" >> /var/log/pg_idle_tx_conn.log
```
* You can change the redirection operator from `>>` to `>` if you want to overwrite the log file on each run instead of appending to it; otherwise, it is recommended to set up a log rotation plan (see the sketch after the output below)
* A good strategy is to configure your alerting tool to read the log file on every change and send a notification
* Don't forget to create the log file with the correct permissions

Output:
```
20-07-2020 09:52:01|14779|idle in transaction|juliano|192.168.0.1|psql|SELECT 'Juliano detection test';
```
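If you keep the appending `>>` redirection, a minimal logrotate policy for the file could look like this (the schedule, retention, and ownership below are assumptions):
```
# /etc/logrotate.d/pg_idle_tx_conn -- hypothetical rotation policy
/var/log/pg_idle_tx_conn.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
    create 0640 postgres postgres
}
```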
# ⏩ [port-forward_k8s.sh](scripts/port-forward_k8s.sh)
This script maps all database instances and ports using k8s `port-forward` from each remote environment to your local machine (`localhost` or `127.0.0.1`). It can also be used to access AWS RDS Aurora Postgres and MySQL instances. If you don't want to use this bash script, you can use [kubefwd](https://github.com/txn2/kubefwd) instead.
Before starting, list the service names and ports running on Kubernetes with the command: `kubectl --context=context-name-here -n dba get services`
1. Edit the script with your instance addresses (they must already be configured for `port-forward` on k8s as a prerequisite), then run the script: `./port-forward_k8s.sh`. Once it is running you can use your preferred CLI or GUI against localhost (a sketch of the forwarding loop follows this list)
2. To stop it, list the open forwards with `ps -ef | grep kubectl`, then kill just the connection you want with `kill -9 pid_here` or all of them with `pkill -9 kubectl`
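A minimal sketch of the looping logic such a script might use, assuming a hard-coded `context:namespace:service:remote_port:local_port` mapping (all values below are placeholders, not the original code):
```
#!/usr/bin/env bash
# Hypothetical sketch of port-forward_k8s.sh -- not the original script.
# Maps each remote database service to a local port in the background.
set -euo pipefail
# context:namespace:service:remote_port:local_port -- assumed mapping
FORWARDS="
staging:dba:pg-staging01:5432:15432
production:dba:pg-production01:5432:25432
"
for f in $FORWARDS; do
  IFS=: read -r CTX NS SVC RPORT LPORT <<<"$f"
  kubectl --context="$CTX" -n "$NS" port-forward "service/$SVC" "$LPORT:$RPORT" &
done
wait   # keep the forwards alive until the script is killed
```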
# ☸️ [k8s_sql_connect.sh](/scripts/k8s_sql_connect.sh)
Script to quickly automate a direct connection to an AWS RDS instance through K8S pods
Run `./k8s_sql_connect.sh -h` or `--help` to see how it works:
```
Usage:
./psql_k8s.sh -c [context] -i [instance]
E.g. ./psql_k8s.sh -c staging -i pg-staging01
-c, --context --> -i, --instance
1. staging --> pg-staging01 | pg-staging02 | mysql-staging01
2. production --> pg-production01 | pg-production02 | mysql-production01
-h, --help This message
```
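One way such a script could make the connection is to start a throwaway client pod in the chosen context and point `psql` at the RDS endpoint; the context, namespace, image, endpoint, and user below are placeholders, not the script's actual values:
```
# Hypothetical command the script could wrap -- all values are placeholders
kubectl --context=staging -n dba run psql-client --rm -it \
  --image=postgres:13 --restart=Never -- \
  psql -h pg-staging01.your-rds-endpoint.us-east-1.rds.amazonaws.com -p 5432 -U dba postgres
```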
# [License: GPL-3.0](https://www.gnu.org/licenses/gpl-3.0)
# [Donate via PayPal](https://www.paypal.me/julianotech)