{"id":14062888,"url":"https://github.com/FGRibreau/sql-convention","last_synced_at":"2025-07-29T14:31:55.768Z","repository":{"id":34045052,"uuid":"37819638","full_name":"FGRibreau/sql-convention","owner":"FGRibreau","description":":ok_hand: The only SQL conventions you will ever need","archived":false,"fork":false,"pushed_at":"2024-02-23T10:33:29.000Z","size":146,"stargazers_count":65,"open_issues_count":1,"forks_count":6,"subscribers_count":3,"default_branch":"master","last_synced_at":"2025-04-03T19:21:58.439Z","etag":null,"topics":["conventions","sql","sql-conventions"],"latest_commit_sha":null,"homepage":"http://twitter.com/FGRibreau","language":null,"has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/FGRibreau.png","metadata":{"files":{"readme":"README.adoc","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2015-06-21T18:02:49.000Z","updated_at":"2024-10-12T09:49:25.000Z","dependencies_parsed_at":"2024-02-23T11:45:06.355Z","dependency_job_id":null,"html_url":"https://github.com/FGRibreau/sql-convention","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/FGRibreau/sql-convention","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/FGRibreau%2Fsql-convention","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/FGRibreau%2Fsql-convention/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/FGRibreau%2Fsql-convention/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/FGRibreau%2Fsql-convention/manifests","owner_url":"ht
tps://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/FGRibreau","download_url":"https://codeload.github.com/FGRibreau/sql-convention/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/FGRibreau%2Fsql-convention/sbom","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":267703037,"owners_count":24130463,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","status":"online","status_checked_at":"2025-07-29T02:00:12.549Z","response_time":2574,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["conventions","sql","sql-conventions"],"created_at":"2024-08-13T07:02:48.249Z","updated_at":"2025-07-29T14:31:55.468Z","avatar_url":"https://github.com/FGRibreau.png","language":null,"readme":":toc:\n:toclevels: 4 \n:toc-placement!:\n\n= SQL Conventions \n\nimage::https://img.shields.io/badge/Slack-Join%20our%20tech%20community-17202A?logo=slack[link=https://join.slack.com/t/fgribreau/shared_invite/zt-edpjwt2t-Zh39mDUMNQ0QOr9qOj~jrg]\n\n==== The only SQL convention you will ever need.\n\nThis convention is used @cloud_iam_com @Netwo @OuestFrance @hook0 @iAdvize @Bringr @Redsmin @Oxmoto.\n\ntoc::[]\n\n=== Data layer\n\n* For SQL use https://www.postgresql.org[PostgreSQL], it’s the\nhttps://insights.stackoverflow.com/survey/2018/#technology-most-loved-dreaded-and-wanted-databases[most\nloved relational database (StackOverflow survey 2018)] and it’s a\nmulti-model database (K/V store, 
document store (use `jsonb`), foreign data wrappers, and much more). Any questions?

=== Application layer

* If your API mainly does data persistence, https://postgrest.com[PostgREST] is the way to go; implement only the missing parts in another process. You can then compose both APIs behind a reverse proxy.
* Otherwise, use a data-mapping library (e.g. https://github.com/tpolecat/doobie[doobie]), not an ORM.

==== Queries

* Don't use `BETWEEN` (https://wiki.postgresql.org/wiki/Don%27t_Do_This#Don.27t_use_BETWEEN_.28especially_with_timestamps.29[why])
* Prefer `=` to `LIKE`

____
LIKE compares characters, and can be paired with wildcard operators like %, whereas the = operator compares strings and numbers for exact matches. The = operator can take advantage of indexed columns. (https://www.metabase.com/learn/building-analytics/sql-templates/sql-best-practices[source])
____

* Prefer `EXISTS` to `IN`

____
If you just need to verify the existence of a value in a table, prefer EXISTS to IN, as the EXISTS process exits as soon as it finds the search value, whereas IN will scan the entire table. IN should be used for finding values in lists. Similarly, prefer NOT EXISTS to NOT IN. (https://www.metabase.com/learn/building-analytics/sql-templates/sql-best-practices[source])
____
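
A minimal sketch of the two forms above (the `account` and `account_order` tables are hypothetical, used only for illustration):

[source,sql]
----
-- EXISTS: stops scanning as soon as one matching row is found
select a.account__id
from account a
where exists (
  select 1 from account_order o where o.account__id = a.account__id
);

-- IN: fine for small literal lists of values
select * from account where country in ('FR', 'DE', 'ES');
----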

=== DDL - Data Definition Language

* `SET search_path=pg_catalog` to force schema names to be explicitly specified in every object declaration (besides triggers). This lowers the bug count and gives developers a better understanding, because https://getnobullshit.com/[explicit > implicit].

=== Tables/Views

==== Table/View names

* *singular* (e.g. `+team+`, not `+teams+`) (https://launchbylunch.com/posts/2014/Feb/16/sql-naming-conventions/#singular-relations[here is why])
* *snake_case* (e.g. `block_theme_version`)
* *double underscore* for `+n-n+` tables (e.g. `user__organization`)

==== Columns

* *snake_case* (e.g. `+created_at+`, not `+createdAt+` or `CreatedAt`), because in PostgreSQL https://www.postgresql.org/docs/current/sql-syntax-lexical.html#SQL-SYNTAX-IDENTIFIERS[keywords and unquoted identifiers are case-insensitive], which is a source of many mistakes
* *double underscore* for PK and FK columns (e.g. (PK) `+user__id+`, (FK) `+organization__id+`)
** why?
*** it lets you leverage `using(column__id)` in joins
*** the table-name part of a PK/FK (the part before `__`) is easier to spot in snake_case columns
*** columns are case-sensitive in PostgreSQL but SQL queries are case-insensitive
* *`NOT NULL` by default*; NULL is the exception (think of it as the https://github.com/chrissrogers/maybe#why[Maybe monad])
* *No abbreviations*, unless an abbreviation is both well-known and very long, like `i18n`
* *No reserved keywords* (https://www.postgresql.org/docs/8.1/sql-keywords-appendix.html[complete list])
* *Use UUIDs* as PK and FK (https://www.clever-cloud.com/blog/engineering/2015/05/20/why-auto-increment-is-a-terrible-idea/[here is why]; https://wiki.postgresql.org/wiki/Don%27t_Do_This#Don.27t_use_serial[do not use `serial`]) and rely on `gen_random_uuid()` (https://shusson.info/post/benchmark-v4-uuid-generation-in-postgres[benchmark])
** Note that when you use the native PostgreSQL UUID v4 type instead of `bigserial`, table size grows by 25% and the insert rate drops by 25%.
** If you choose `bigserial`, then distinguish internal and external ids (e.g. GitLab's internal schema design names columns that are publicly shared with the end user `iid`). Don't forget to add an index: `CREATE UNIQUE INDEX index_issues_on_project_id_and_iid ON public.issues USING btree (project_id, iid);`
* Use `text` or `citext` (variable unlimited length) with a check constraint instead of `varchar(n)` or `char(n)`.
** The `text` type with a CHECK constraint lets you evolve the schema easily compared to `character varying` or `varchar(n)` when you have length checks. (https://shekhargulati.com/2022/07/08/my-notes-on-gitlabs-postgres-schema-design/[source])
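
The column conventions above can be sketched in one table definition (schema and table names are hypothetical):

[source,sql]
----
-- Hypothetical example applying the column conventions above
create table my_schema.account(
    account__id uuid primary key default gen_random_uuid(),
    email       text not null check (length(email) <= 254),
    is_active   boolean not null default true,
    created_at  timestamptz not null default now()
);
comment on column my_schema.account.email is
    'Login email; RFC 5321 limits an address to 254 characters, hence the check.';
----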

==== Date/time management

* Use `timestamptz` everywhere you need to store a date (e.g. `+created_at TIMESTAMPTZ DEFAULT now()+`; https://wiki.postgresql.org/wiki/Don%27t_Do_This#Don.27t_use_timestamp_.28without_time_zone.29[here is why]) and leverage a https://www.postgresql.org/docs/current/brin-intro.html[BRIN] index on it
* `+updated_at TIMESTAMPTZ DEFAULT now()+`, unless you plan to leverage event-sourcing (https://www.morling.dev/blog/last-updated-columns-with-postgres/[learn more])
* `+deleted_at TIMESTAMPTZ DEFAULT NULL+`:
** unless you plan to leverage event-sourcing
** don't forget that http://stackoverflow.com/questions/8289100/create-unique-constraint-with-null-columns/8289253#8289253[`+deleted_at+` interacts with unique constraints on nullable columns]
* Comment each column; explain your rationale and your decisions, in plain English
* Boolean columns must start with either `+is+` or `+has+`
* https://wiki.postgresql.org/wiki/Don%27t_Do_This#Don.27t_use_char.28n.29[Don't use char(n)], https://wiki.postgresql.org/wiki/Don%27t_Do_This#Don.27t_use_char.28n.29_even_for_fixed-length_identifiers[even for fixed-length identifiers]
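
A common pattern for the `deleted_at` caveat above is a partial unique index, so uniqueness only applies to live rows (a sketch; table and column names are hypothetical):

[source,sql]
----
-- Hypothetical: emails stay unique among non-deleted rows only
create unique index account_email_key
    on my_schema.account (email)
    where deleted_at is null;
----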

=== Constraints

The general rule is `+{table_name}_{column_name(s)}_{suffix}+` (e.g. `+table_name_column_name_a_pk+`), where the suffix is one of the following:

* Primary key constraint: `+pk+`
* Foreign key: `+fk+`
* Unique constraint: `+key+`
* Check constraint: `+chk+`
* Exclusion constraint: `+exl+`
* Any other kind of index: `+idx+`

==== PK - Primary key

* `+{table_name}_{column_name}_pk+` in case of a single-column PK
* `+{table_name}_{column_name1}_{column_name2}_{column_name3}_pk+` in case of multiple columns as primary key (`+column_name1+`, `+column_name2+`, `+column_name3+`)

==== FK - Foreign key

* `+{from_table_name}_{from_column_name}_{to_table_name}_{to_column_name}__fk+`
* Always specify `ON DELETE` and `ON UPDATE` in order to force *you* to think about reference consequences

==== Unique

* `+{from_table_name}_{from_column_name}_key+` in case of a single-column unique constraint
* `+{from_table_name}_{from_column_name1}_{from_column_name2}_{from_column_name3}__key+` in case of multiple columns as unique (`+column_name1+`, `+column_name2+`, `+column_name3+`)
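
The naming rules above can be sketched with two hypothetical tables (the names are illustrative, not part of the convention itself):

[source,sql]
----
-- Hypothetical tables illustrating the constraint-naming rules above
create table organization(
    organization__id uuid
        constraint organization_organization__id_pk primary key
);

create table account(
    account__id uuid
        constraint account_account__id_pk primary key,
    organization__id uuid not null
        constraint account_organization__id_organization_organization__id__fk
            references organization (organization__id)
            on delete cascade on update cascade,
    email text not null
        constraint account_email_key unique
);
----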

=== Functions

==== Name

There are three types of functions: `+notify+` functions, `+private+` functions and `+public+` functions.

* *notify*, format `+notify_{schema_name}_{table_name}_{event}+` (e.g. `+notify_authentication_user_created(user_id)+`): should only format the notification message and call `pg_notify` underneath. Beware of the http://stackoverflow.com/a/41059797/745121[8000-character limit]: only send metadata (ids); the data itself should be fetched by workers through the API. If you really wish to send data, then https://github.com/xstevens/pg_kafka[pg_kafka] might be a better alternative.
* *private*, format `+_{function_name}+` (e.g. `+_reset_failed_login+`): must never be exposed through the public schema. Used mainly for consistency and business rules.
* *public*, format `+{function_name}+` (e.g. `+log_in(email, password)+`): must be exposed through the public schema.

==== Parameters

Every parameter name must end with `$`. This prevents any "Reference to XXX is ambiguous" issue.

===== Example

[source,sql]
----
create function lib_fsm.transition_create(
  from_state__id$ uuid,
  event$ varchar(30),
  to_state__id$ uuid,
  description$ text default null
)
----

=== Types

==== Enum types

Don't use enums: you will have issues over time because https://stackoverflow.com/a/25812436/745121[you cannot remove an element from an enum]. If your enums represent states, leverage https://en.wikipedia.org/wiki/Finite-state_machine[a state machine]; use a library like https://github.com/netwo-io/lib_fsm[lib_fsm].
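
When a full state machine is overkill, a dedicated reference table is the usual enum replacement: values can be added and removed with plain DML instead of DDL. A sketch with hypothetical names:

[source,sql]
----
-- Hypothetical enum replacement: a reference table plus a FK
create table order_status(
    order_status text primary key
);
insert into order_status values ('pending'), ('paid'), ('shipped');

create table customer_order(
    customer_order__id uuid primary key default gen_random_uuid(),
    order_status text not null references order_status (order_status)
);

-- Removing a value later is a plain DELETE, not an enum migration
delete from order_status where order_status = 'pending';
----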

==== Boolean

Always use `true` and `false`, without single quotes.

The PostgreSQL documentation says that `TRUE` and `FALSE` should be preferred because they are more SQL-compliant, but hey, LET'S STOP YELLING WHEN WE WRITE SQL, SHALL WE?

==== String

* Multi-line strings must be represented with dollar quoting: `$_$my string$_$`

==== JSONB

* Prefer `+jsonb+` to `json` and to SQL arrays: `jsonb` has improved query performance and efficient storage.
* A `metadata` jsonb column is a great way to let the end user store arbitrary key-value data on these objects (e.g. https://documentation.hook0.com/docs/metadata, https://stripe.com/docs/api/metadata).

Metadata key-value pairs must be https://www.getnobullshit.com/tech-lead/tout-limiter-dans-lespace-et-dans-le-temps[limited in space]; you can use a trigger for that:

[source,sql]
----
CREATE OR REPLACE FUNCTION validate_metadata()
RETURNS TRIGGER AS $$
DECLARE
    key TEXT;
    value TEXT;
    keys INT;
BEGIN
    keys := 0;

    FOR key, value IN (SELECT * FROM jsonb_each_text(NEW.metadata))
    LOOP
        keys := keys + 1;

        IF length(key::text) > 40 OR length(value::text) > 500 THEN
            RAISE 'Key and value must be at most 40 and 500 characters long respectively.';
        END IF;

        IF keys > 50 THEN
            RAISE 'A maximum of 50 keys are allowed in the metadata.';
        END IF;
    END LOOP;

    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER validate_metadata_trigger
BEFORE INSERT OR UPDATE ON your_table
FOR EACH ROW EXECUTE FUNCTION validate_metadata();
----

=== Triggers

==== Name

(translation in progress)

==== Columns

* use BCNF (above 3NF) (cf. normal forms)
* leverage `+using+`, so instead of:

[source,sql]
----
select <fields> from
  table_1
  inner join table_2
    on table_1.table_1_id =
       table_2.table_1_id
----

use:

[source,sql]
----
select <fields> from
  table_1
  inner join table_2
    using (table_1_id)
----

* don't use PostgreSQL enums; you will have issues when you need to https://stackoverflow.com/a/25812436/745121[remove some values over time]. Use a dedicated table instead.
* use the right PostgreSQL types:

....
inet (IP address)
timestamp with time zone
point (2D point)
tstzrange (time range)
interval (duration)
....

* constraints should live inside your database as much as possible:

[source,sql]
----
create table reservation(
    reservation_id uuid primary key,
    dates tstzrange not null,
    exclude using gist (dates with &&)
);
----

* use row-level security to ensure R/U/D access on each table's rows

(http://stackoverflow.com/questions/4107915/postgresql-default-constraint-names/4108266#4108266[source])
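
A minimal row-level-security sketch (the table, policy name and the `app.current_account_id` setting are hypothetical assumptions, not part of the convention):

[source,sql]
----
-- Hypothetical RLS setup: each row is only visible to its owner
alter table account enable row level security;

create policy account_owner_policy on account
  using (account__id = current_setting('app.current_account_id')::uuid);
----

The application then sets `app.current_account_id` per connection (e.g. with `SET LOCAL`) so the policy can identify the current user.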

=== Policies

==== Name

todo.

=== SQL Formatter

[source,bash]
----
docker run --rm --network=none guriandoro/sqlparse:0.3.1 "SELECT several, columns from a_table as a join another_table as b where a.id = 1;"
----

=== Configuration

==== `statement_timeout`

Since we do want to https://www.getnobullshit.com/[limit everything in space and time], configure `statement_timeout` per role to let your database abort any statement that takes more than the specified amount of time (in ms).

[source,sql]
----
-- Limit SQL queries in time => improve overall reliability
-- https://www.postgresql.org/docs/current/runtime-config-client.html
-- PostgreSQL WILL ABORT any statement that takes more than the specified amount of time (in milliseconds)
-- If you do have an issue with that, please first (from first to last):
--  - check that your query is relying on indices (did you use EXPLAIN (ANALYZE, BUFFERS)?)
--  - consider materialized views
--  - ensure PostgreSQL cache settings are OK
--  - ensure the disk is an SSD and fast enough
--  - ensure the server has enough CPU & RAM
--  - check whether it is for analytics purposes; if so, querying a PostgreSQL replica might be a better idea
-- When all the points above have been evaluated *then* we can all talk about increasing the value below :)
alter role APP_ROLE_THAT_DOES_THE_QUERY set statement_timeout to '250ms';
----

== Things to monitor

* https://www.percona.com/blog/2020/05/29/removing-postgresql-bottlenecks-caused-by-high-traffic/[Removing PostgreSQL Bottlenecks Caused by High Traffic]

____
Your cache hit ratio tells you how often your data is served from memory vs. having to go to disk. Serving from memory vs. going to disk will be orders of magnitude faster, thus the more you can keep in memory the better. Of course you could provision an instance with as much memory as you have data, but you don't necessarily have to. Instead, watching your cache hit ratio and ensuring it is at 99% is a good metric for proper performance. (https://www.citusdata.com/blog/2019/03/29/health-checks-for-your-postgres-database/[Source])
____

[source,sql]
----
SELECT
  sum(heap_blks_read) as heap_read,
  sum(heap_blks_hit)  as heap_hit,
  sum(heap_blks_hit) / (sum(heap_blks_hit) + sum(heap_blks_read)) as ratio
FROM
  pg_statio_user_tables;
----

____
Under the covers Postgres is essentially a giant append-only log. When you write data it appends to the log; when you update data it marks the old record as invalid and writes a new one; when you delete data it just marks it invalid. Later, Postgres comes through and vacuums those dead records (also known as tuples). All those unvacuumed dead tuples are what is known as bloat. Bloat can slow down other writes and create other issues. Paying attention to your bloat and when it is getting out of hand can be key for tuning vacuum on your database. (https://www.citusdata.com/blog/2019/03/29/health-checks-for-your-postgres-database/[Source])
____

[source,sql]
----
WITH constants AS (
  SELECT current_setting('block_size')::numeric AS bs, 23 AS hdr, 4 AS ma
), bloat_info AS (
  SELECT
    ma,bs,schemaname,tablename,
    (datawidth+(hdr+ma-(case when hdr%ma=0 THEN ma ELSE hdr%ma END)))::numeric AS datahdr,
    (maxfracsum*(nullhdr+ma-(case when nullhdr%ma=0 THEN ma ELSE nullhdr%ma END))) AS nullhdr2
  FROM (
    SELECT
      schemaname, tablename, hdr, ma, bs,
      SUM((1-null_frac)*avg_width) AS datawidth,
      MAX(null_frac) AS maxfracsum,
      hdr+(
        SELECT 1+count(*)/8
        FROM pg_stats s2
        WHERE null_frac<>0 AND s2.schemaname = s.schemaname AND s2.tablename = s.tablename
      ) AS nullhdr
    FROM pg_stats s, constants
    GROUP BY 1,2,3,4,5
  ) AS foo
), table_bloat AS (
  SELECT
    schemaname, tablename, cc.relpages, bs,
    CEIL((cc.reltuples*((datahdr+ma-
      (CASE WHEN datahdr%ma=0 THEN ma ELSE datahdr%ma END))+nullhdr2+4))/(bs-20::float)) AS otta
  FROM bloat_info
  JOIN pg_class cc ON cc.relname = bloat_info.tablename
  JOIN pg_namespace nn ON cc.relnamespace = nn.oid AND nn.nspname = bloat_info.schemaname AND nn.nspname <> 'information_schema'
), index_bloat AS (
  SELECT
    schemaname, tablename, bs,
    COALESCE(c2.relname,'?') AS iname, COALESCE(c2.reltuples,0) AS ituples, COALESCE(c2.relpages,0) AS ipages,
    COALESCE(CEIL((c2.reltuples*(datahdr-12))/(bs-20::float)),0) AS iotta -- very rough approximation, assumes all cols
  FROM bloat_info
  JOIN pg_class cc ON cc.relname = bloat_info.tablename
  JOIN pg_namespace nn ON cc.relnamespace = nn.oid AND nn.nspname = bloat_info.schemaname AND nn.nspname <> 'information_schema'
  JOIN pg_index i ON indrelid = cc.oid
  JOIN pg_class c2 ON c2.oid = i.indexrelid
)
SELECT
  type, schemaname, object_name, bloat, pg_size_pretty(raw_waste) as waste
FROM
(SELECT
  'table' as type,
  schemaname,
  tablename as object_name,
  ROUND(CASE WHEN otta=0 THEN 0.0 ELSE table_bloat.relpages/otta::numeric END,1) AS bloat,
  CASE WHEN relpages < otta THEN '0' ELSE (bs*(table_bloat.relpages-otta)::bigint)::bigint END AS raw_waste
FROM
  table_bloat
    UNION
SELECT
  'index' as type,
  schemaname,
  tablename || '::' || iname as object_name,
  ROUND(CASE WHEN iotta=0 OR ipages=0 THEN 0.0 ELSE ipages/iotta::numeric END,1) AS bloat,
  CASE WHEN ipages < iotta THEN '0' ELSE (bs*(ipages-iotta))::bigint END AS raw_waste
FROM
  index_bloat) bloat_summary
ORDER BY raw_waste DESC, bloat DESC
----

____
Postgres makes it simple to query for unused indexes, so you can easily give yourself back some performance by removing them. (https://www.citusdata.com/blog/2019/03/29/health-checks-for-your-postgres-database/[Source])
____

[source,sql]
----
SELECT
  schemaname || '.' || relname AS table,
  indexrelname AS index,
  pg_size_pretty(pg_relation_size(i.indexrelid)) AS index_size,
  idx_scan as index_scans
FROM pg_stat_user_indexes ui
JOIN pg_index i ON ui.indexrelid = i.indexrelid
WHERE NOT indisunique AND idx_scan < 50 AND pg_relation_size(relid) > 5 * 8192
ORDER BY pg_relation_size(i.indexrelid) / nullif(idx_scan, 0) DESC NULLS FIRST,
         pg_relation_size(i.indexrelid) DESC;
----

____
pg_stat_statements is useful for monitoring your database query performance. It records a lot of valuable stats about which queries are run, how fast they return, how many times they're run, etc. Checking in on this set of queries regularly can tell you where it is best to add indexes or optimize your application so your query calls are not so excessive. (https://www.citusdata.com/blog/2019/03/29/health-checks-for-your-postgres-database/[Source])
____

[source,sql]
----
SELECT query,
       calls,
       total_time,
       total_time / calls as time_per,
       stddev_time,
       rows,
       rows / calls as rows_per,
       100.0 * shared_blks_hit / nullif(shared_blks_hit + shared_blks_read, 0) AS hit_percent
FROM pg_stat_statements
WHERE query not similar to '%pg_%'
and calls > 500
--ORDER BY calls
--ORDER BY total_time
order by time_per
--ORDER BY rows_per
DESC LIMIT 20;
----

== Schema design

* https://github.com/FGRibreau/stripe-schema[Stripe's own schema]

== Tools

* https://www.postgresql.org/docs/9.4/pgstatstatements.html[pg_stat_statements]
* https://github.com/darold/pgbadger[A fast PostgreSQL Log Analyzer]
* https://pganalyze.com[PostgreSQL Performance Monitoring]

== Migrations

* https://pythonspeed.com/articles/schema-migrations-server-startup/[How to do zero-downtime migrations]
* https://medium.com/braintree-product-technology/postgresql-at-scale-database-schema-changes-without-downtime-20d3749ed680[Zero-downtime migrations best practices]

== Good practices

* https://hakibenita.com/sql-dos-and-donts[12 Common Mistakes and Missed Optimization Opportunities in SQL]
* https://pythonspeed.com/articles/schema-migrations-server-startup/[Don't apply migrations on application startup]

== Managed PostgreSQL Databases

* Google Cloud PostgreSQL
** Pros
*** /
** Cons
*** no support for plv8
*** any features that require `superuser` privileges are not supported
*** the `postgres` role is not a `superuser`:
**** it can create roles
**** it cannot select from tables that are restricted by default, like `pg_shadow`
**** thus it cannot edit `pg_catalog.pg_class` (in order to toggle row-level security, for example)
**** it can read from all necessary tables other than `pg_authid`
* Scaleway Managed PostgreSQL
** Pros
*** multi-schema support
*** configuration options are editable
*** user/role management is self-service
** Cons
*** /
* OVH Cloud SQL
** Pros
*** /
** Cons
*** no multi-schema support