{"id":21893059,"url":"https://github.com/peekjef72/sql_exporter","last_synced_at":"2025-03-22T03:40:37.414Z","repository":{"id":260240961,"uuid":"880347058","full_name":"peekjef72/sql_exporter","owner":"peekjef72","description":null,"archived":false,"fork":false,"pushed_at":"2025-03-01T11:45:08.000Z","size":34565,"stargazers_count":0,"open_issues_count":0,"forks_count":0,"subscribers_count":1,"default_branch":"master","last_synced_at":"2025-03-01T12:19:59.575Z","etag":null,"topics":["db2-database","hana-database","mssql-database","oracle-database","prometheus-exporter"],"latest_commit_sha":null,"homepage":"","language":"Go","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/peekjef72.png","metadata":{"files":{"readme":"README.md","changelog":"CHANGELOG.md","contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2024-10-29T15:03:37.000Z","updated_at":"2025-03-01T11:09:30.000Z","dependencies_parsed_at":"2024-12-19T13:10:35.737Z","dependency_job_id":null,"html_url":"https://github.com/peekjef72/sql_exporter","commit_stats":null,"previous_names":["peekjef72/sql_exporter"],"tags_count":3,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/peekjef72%2Fsql_exporter","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/peekjef72%2Fsql_exporter/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/peekjef72%2Fsql_exporter/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/peekjef72%2Fsql_exporter/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/peekjef72","download_url":"https://codeload.github.com/peekjef72/sql_exporter/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":244902929,"owners_count":20529114,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["db2-database","hana-database","mssql-database","oracle-database","prometheus-exporter"],"created_at":"2024-11-28T13:01:11.539Z","updated_at":"2025-03-22T03:40:37.406Z","avatar_url":"https://github.com/peekjef72.png","language":"Go","readme":"# Prometheus SQL Exporter\n\nExporter for [Prometheus](https://prometheus.io) that can collect multiple type of sql servers.\n\nAs examples 4 configurations for exporters are provided (see contribs):\n\n* [mssql](contribs/mssql_exporter/)\n* [db2](contribs/db2_exporter/)\n* [oracle](contribs/oracle_exporter/)\n* [hana](contribs/hanasql_exporter/)\n\nThis exporter was [free/sql_exporter](https://github.com/free/sql_exporter) before version 0.5.\nIn actual version the exporter is compiled via tag for specific sql server. 

## Overview

<figure>
    <img src="contribs/mssql_exporter/screenshots/mssql_dashboard_general.PNG" alt="overview MSSQL">
    <figcaption style="font-style: italic; text-align: center;">MSSQL dashboard overview</figcaption>
</figure>

SQL Exporter is a configuration-driven exporter that exposes metrics gathered from SQL servers, for use by the Prometheus monitoring system. Out of the box, it provides support for Microsoft SQL Server, IBM DB2, HANADB and Oracle, but any DBMS for which a Go driver is available may be monitored after rebuilding the binary with the DBMS driver included.

The exporter is multi-target: you can define several target server configurations, each identified by name, and Prometheus can then scrape each of them by adding the `target` parameter to the URL. It can also work with a default target configuration and authentication models.

The collected metrics and the queries that produce them are entirely configuration defined. **No SQL queries are hard-coded inside the exporter.** SQL queries are grouped into
collectors -- logical groups of queries, e.g. *query stats* or *I/O stats*, mapped to the metrics they populate.
This means you can quickly and easily set up custom collectors to measure data quality, whatever that might mean in your specific case.

Per the Prometheus philosophy, scrapes are synchronous (metrics are collected on every `/metrics` poll) but, in order to keep load at reasonable levels, minimum collection intervals may optionally be set per collector, producing cached
metrics when queried more frequently than the configured interval.

## Building

### mssql or hanasql

mssql_exporter and hanasql_exporter can be compiled statically.

```bash
make build-mssql build-hanasql
```

Prerequisites:

* gcc (installed via your preferred package manager)

### db2

db2_exporter can't be compiled statically.
The IBM clidriver must be installed first, both for compilation and for **usage**;
see [go_ibm_db/INSTALL.md](https://github.com/ibmdb/go_ibm_db/blob/master/INSTALL.md).

Here is a short summary for Linux:

* download the CLI driver:

  ```bash
  mkdir $HOME/db2
  cd $HOME/db2
  curl --output linuxx64_odbc_cli.tar.gz https://public.dhe.ibm.com/ibmdl/export/pub/software/data/db2/drivers/odbc_cli/linuxx64_odbc_cli.tar.gz
  tar xzf linuxx64_odbc_cli.tar.gz
  export IBM_DB_HOME=/home/<user>/db2/clidriver
  export CGO_CFLAGS=-I$IBM_DB_HOME/include
  export CGO_LDFLAGS=-L$IBM_DB_HOME/lib
  export LD_LIBRARY_PATH=$IBM_DB_HOME/lib:$LD_LIBRARY_PATH
  ```

  For RHEL 10, libcrypt.so.1 is required; you may need to install libxcrypt-compat:

  ```bash
  dnf install libxcrypt-compat
  ```

  If you have root access, you can set the path to the DB2 dynamic libraries via ld.so.conf:

  ```bash
  vi /etc/ld.so.conf.d/db2_odbc.conf
  ```

  ```text
  /home/jfpik/db2/clidriver/lib
  ```

  ```bash
  ldconfig
  ```

  If you plan to recompile db2_exporter several times, you can create an env file:

  ```bash
  vi .env_db2
  ```

  ```text
  export IBM_DB_HOME=/home/<user>/db2/clidriver
  export CGO_CFLAGS="-I $IBM_DB_HOME/include"
  export CGO_LDFLAGS="-L $IBM_DB_HOME/lib"
  export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$IBM_DB_HOME/lib

  GO111MODULE=on
  GOSUMDB=off
  GOFLAGS="-tags=db2"
  ```

  Then use this file:

  ```bash
  . .env_db2
  make build-db2
  ```
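
Once the build succeeds, a minimal sanity check that the dynamically linked binary resolves the clidriver libraries (assuming the build produced `./db2_exporter` and the environment above is still loaded):

```bash
# the db2 flavour links dynamically against the IBM clidriver
ldd ./db2_exporter | grep -i libdb2
```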

For other URLs, check [setup.go](https://github.com/ibmdb/go_ibm_db/blob/master/installer/setup.go).

### OracleDB

oracledb_exporter can't be compiled statically either. The Oracle Instant Client must be installed on the system first.
Download and install an Oracle Instant Client; on Linux the recommended packages are oracle-instantclient19.23-basic-19.23.0.0.0-1.x86_64.rpm and oracle-instantclient19.23-devel-19.23.0.0.0-1.x86_64.rpm:

```bash
curl --output ~/Downloads/oracle-instantclient19.23-basic-19.23.0.0.0-1.x86_64.rpm https://yum.oracle.com/repo/OracleLinux/OL8/oracle/instantclient/x86_64/getPackage/oracle-instantclient19.23-basic-19.23.0.0.0-1.x86_64.rpm
curl --output ~/Downloads/oracle-instantclient19.23-devel-19.23.0.0.0-1.x86_64.rpm https://yum.oracle.com/repo/OracleLinux/OL8/oracle/instantclient/x86_64/getPackage/oracle-instantclient19.23-devel-19.23.0.0.0-1.x86_64.rpm
```

Then install the downloaded packages and update the library path:

```bash
dnf install file:///home/jfpik/Downloads/oracle-instantclient19.23-basic-19.23.0.0.0-1.x86_64.rpm
dnf install file:///home/jfpik/Downloads/oracle-instantclient19.23-devel-19.23.0.0.0-1.x86_64.rpm

ldconfig
```

Check the oci8.pc and .promu-oracle.yml files to adapt the version or path to the installed RPM.

Then use the env file:

```bash
. .env_oracle
make build-oracledb
```

## Usage

Usage is the same for all sql_exporters; it is explained here only for mssql_exporter.

Get Prometheus MSSQL Exporter as a [packaged release](https://github.com/jfpik/sql_exporter/releases/latest) or
build it yourself (see above).

Then run it from the command line:

```shell
$ ./mssql_exporter
```
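
By default the exporter listens on port 9399 (see `--web.listen-address` below). As a quick check that it is answering, you can scrape a configured target with curl; the target name `MY_INSTANCE` below is only an example and must match a target defined in your configuration:

```bash
# scrape one configured target through the exporter (target name is illustrative)
curl 'http://localhost:9399/metrics?target=MY_INSTANCE'
```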

Use the `--help` flag to get help information.

```shell
$ ./mssql_exporter --help
usage: mssql_exporter [<flags>]

Flags:
  -h, --[no-]help                Show context-sensitive help (also try --help-long and --help-man).
      --config.data-source-name=CONFIG.DATA-SOURCE-NAME
                                 Data source name to override the value in the configuration file with.
      --web.telemetry-path="/metrics"
                                 Path under which to expose collector's internal metrics.
  -c, --config.file="config/config.yml"
                                 mssql_exporter Exporter configuration file.
  -d, --[no-]debug               debug connection checks.
  -n, --[no-]dry-run             check exporter configuration file and try to collect a target then exit.
  -t, --target=TARGET            In dry-run mode specify the target name, else ignored.
  -m, --model="default"          In dry-run mode specify the model name to build the dynamic target, else ignored.
  -a, --auth_key=AUTH_KEY        In dry-run mode specify the auth_key to use, else ignored.
  -o, --collector=COLLECTOR      Specify the collector name restriction to collect, replace the collector_names set for each
                                 target.
      --[no-]web.systemd-socket  Use systemd socket activation listeners instead of port listeners (Linux only).
      --web.listen-address=:9399 ...
                                 Addresses on which to expose metrics and web interface. Repeatable for multiple addresses.
                                 Examples: `:9100` or `[::1]:9100` for http, `vsock://:9100` for vsock
      --web.config.file=""       Path to configuration file that can enable TLS or authentication. See:
                                 https://github.com/prometheus/exporter-toolkit/blob/master/docs/web-configuration.md
      --log.level=info           Only log messages with the given severity or above. One of: [debug, info, warn, error]
      --log.format=logfmt        Output format of log messages. One of: [logfmt, json]
  -V, --[no-]version             Show application version.
```
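
For example, the dry-run flags above can be combined to validate a configuration file and collect a single target once before wiring the exporter into Prometheus; the target name is illustrative:

```bash
# check the configuration and collect the named target once, then exit
./mssql_exporter --config.file config/config.yml --dry-run --target MY_INSTANCE --log.level debug
```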

## Configuration

SQL Exporter is deployed alongside the DB server it collects metrics from. If both the exporter and the DB
server are on the same host, they will share the same failure domain: they will usually be either both up and running
or both down. When the database is unreachable, `/metrics` responds with HTTP code 500 Internal Server Error, causing
Prometheus to record `up=0` for that scrape. Only metrics defined by collectors are exported on the `/metrics` endpoint.
SQL Exporter process metrics are exported at `/sql_exporter_metrics`.

The configuration examples listed here only cover the core elements.
You will find ready-to-use "standard" DBMS-specific collector definitions in the
[`examples`](https://github.com/peekjef72/sql_exporter/tree/master/contribs) directory. You may contribute your own collector
definitions and metric additions if you think they could be more widely useful, even if they are merely different takes
on already covered DBMSs.

**`./mssql_exporter.yml`**

```yaml
# Global settings and defaults.
global:
  # Subtracted from Prometheus' scrape_timeout to give us some headroom and prevent Prometheus from
  # timing out first.
  scrape_timeout_offset: 500ms
  # Minimum interval between collector runs: by default (0s) collectors are executed on every scrape.
  min_interval: 0s
  # Maximum number of open connections to any one target. Metric queries will run concurrently on
  # multiple connections.
  max_connections: 3
  # Maximum number of idle connections to any one target.
  max_idle_connections: 3

# The targets to monitor and the collectors to execute on them.
targets:
  # list of targets to collect
  - target:
    name: MY_INSTANCE
    # Data source name always has a URI schema that matches the driver name. In some cases (e.g. MySQL)
    # the schema gets dropped or replaced to match the driver expected DSN format.
    data_source_name: 'sqlserver://prom_user:prom_password@dbserver1.example.com:1433'

    # Collectors (referenced by name) to execute on the target.
    collectors: [mssql_standard]

  # or specify each target in a configuration file with the same format as a target
  - targets_files: [ "targets/*.yml" ]

# Collector definition files.
collector_files:
  - "*.collector.yml"
```

### Collectors

Collectors may be defined inline, in the exporter configuration file, under `collectors`, or they may be defined in
separate files and referenced in the exporter configuration by name, making them easy to share and reuse.

The collector definition below generates gauge metrics of the form `pricing_update_time{market="US"}`.

**`./pricing_data_freshness.collector.yml`**

```yaml
# This collector will be referenced in the exporter configuration as `pricing_data_freshness`.
collector_name: pricing_data_freshness

# A Prometheus metric with (optional) additional labels, value and labels populated from one query.
metrics:
  - metric_name: pricing_update_time
    type: gauge
    help: 'Time when prices for a market were last updated.'
    key_labels:
      # Populated from the `market` column of each row.
      - Market
    static_labels:
      # Arbitrary key/value pair
      portfolio: income
    values: [LastUpdateTime]
    query: |
      SELECT Market, max(UpdateTime) AS LastUpdateTime
      FROM MarketPrices
      GROUP BY Market
```
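
On a scrape, that collector would surface one sample per market, roughly like the sketch below; the numeric value and the exact label casing are illustrative and depend on the query result and the `key_labels`/`static_labels` settings:

```text
# HELP pricing_update_time Time when prices for a market were last updated.
# TYPE pricing_update_time gauge
pricing_update_time{market="US",portfolio="income"} 1.7e+09
```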

### Target file

```yaml
name: "target_name"
data_source_name: "sqlserver://nowhere:1434/instance_2?user%20id=domain\\user&password={Xöe8;vhmbr4yYEL0~Ybfg}&database=myDatabase"

# Collectors (referenced by name) to execute on the target.
collectors:
  - mssql_standard
```

### Data Source Names

To keep things simple and yet allow fully configurable database connections to be set up, SQL Exporter uses DSNs (like
`sqlserver://prom_user:prom_password@dbserver1.example.com:1433`) to refer to database instances. However, because the
Go `sql` library does not allow for automatic driver selection based on the DSN (i.e. an explicit driver name must be
specified), SQL Exporter uses the schema part of the DSN (the part before the `://`) to determine which driver to use.

DB | SQL Exporter expected DSN | Driver sees
:---|:---|:---
DB2 | `db2:////<hostname>:<port>?user%20id=<login>&password=<password>&database=<database>&protocol=...`<br>or<br>`db2://DATABASE=<database>; HOSTNAME=<hostname>; PORT=<port>; PROTOCOL=<protocol>; UID=<login>; PWD=<password>;` | _
Hanasql | `hdb:////<hostname>:<port>?user%20id=<login>&password=<password>&database=<database>&protocol=...`<br>or<br>`hdb://DATABASE=<database>; HOSTNAME=<hostname>; PORT=<port>; PROTOCOL=<protocol>; UID=<login>; PWD=<password>;` | optional parameters: <ul><li>databaseName=&lt;dbname&gt;<li>defaultSchema=&lt;schema&gt;<li>timeout=&lt;timeout_seconds&gt;<li>pingInterval=&lt;interval_seconds&gt;<li>TLSRootCAFile=&lt;file&gt;<li>TLSServerName=&lt;file&gt;<li>TLSInsecureSkipVerify=&lt;file&gt;</ul>
Oracle | `oracle://<host>:<port>/<sid>?user_id=<login>&password=<password>&params=<VAL>`<br>or<br>`oci:///user:passw@host:port/dbname?params=<VAL>`<br>or<br>`oracle://DATABASE=<database>; HOSTNAME=<hostname>; PORT=<port>; PROTOCOL=<protocol>; UID=<login>; PWD=<password>; optional=<value>` | optional parameters: <ul><li>loc=&lt;time.location&gt; default time.UTC<li>isolation=&lt;READONLY&#124;SERIALIZABLE&#124;DEFAULT&gt;<li>questionph=&lt;enableQuestionPlaceHolders&gt; true&#124;false<li>prefetch_rows=&lt;u_int&gt; default 0<li>prefetch_memory=&lt;u_int&gt; default 4096<li>as=&lt;sysdba&#124;sysasm&#124;sysoper&gt; default empty<li>stmt_cache_size=&lt;u_int&gt; default 0</ul>
SQL Server | `sqlserver://<hostname>:<port>/<instance>?user%20id=<login>&password=<password>&database=<database>&protocol=...`<br>or<br>`sqlserver://DATABASE=<database>; HOSTNAME=<hostname>; PORT=<port>; PROTOCOL=<protocol>; UID=<login>; PWD=<password>;` | *unchanged*
<strike>PostgreSQL</strike> | <strike>`postgres://user:passw@host:port/dbname`</strike> | <strike>*unchanged*</strike>
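
As an illustration of the second DSN form in the table above, a DB2 target entry could look like the following sketch; the host, port, database, credentials and collector name are placeholders:

```yaml
# hypothetical DB2 target: the db2:// schema part selects the DB2 driver
name: "my_db2_instance"
data_source_name: "db2://DATABASE=SAMPLE; HOSTNAME=db2host.example.com; PORT=50000; PROTOCOL=TCPIP; UID=prom_user; PWD=prom_password;"
collectors:
  - <collector_name>
```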

### User authentication / password encryption

If you don't want to write users' passwords in clear text in the configuration file (or in the exporter's target files), you can encrypt them with a shared password.

How it works:

* choose a shared password (passphrase) of 16, 24 or 32 bytes length and store it in your favorite password keeper (keepass for me).
* use the passwd_encrypt tool:

    ```bash
    ./passwd_encrypt
    give the key: must be 16 24 or 32 bytes long
    enter key: 0123456789abcdef
    enter password: mypassword
    Encrypting...
    Encrypted message hex: CsG1r/o52tjX6zZH+uHHbQx97BaHTnayaGNP0tcTHLGpt5lMesw=
    $
    ```

* set the user password in the target file:

    ```yaml
    name: <target_name>
    # double quotes are mandatory because of ":" in the string
    data_source_name: "<driver>://<hostname>:<port>/<instance>?database=<database>&protocol=TCP&isolation=READONLY"
    auth_config:
      user: <user>
      # password: "/encrypted/base64_encrypted_password_by_passwd_crypt_cmd"
      password: /encrypted/qtj1GrR3HcqtJFoBAnEIXlQYQtcptu4COs1Q3A85A5z6vv5HXEC4n0aXWQI=
    collectors:
      - <collectors_name>
    ```

* set the shared passphrase in the Prometheus config (either in the job or in the node file):

  * Prometheus job with target files:

    ```yaml
    #--------- Start prometheus <driver> exporter  ---------#
    - job_name: "<driver>"
      metrics_path: /metrics
      file_sd_configs:
        - files: [ "/etc/prometheus/<driver>_nodes/*.yml" ]
      relabel_configs:
        - source_labels: [__address__]
          target_label: __param_target
        - source_labels: [__tmp_source_host]
          target_label: __address__

    #--------- End prometheus <driver> exporter ---------#
    ```

  * node file:

    ```yaml
    - targets: [ "<target_name>" ]
      labels:
        # if you have activated password encryption, the shared passphrase
        __param_auth_key: 0123456789abcdef
        host: "<target_name>_fullqualified.domain.name"
        # custom labels…
        environment: "DEV"
    ```

## Logging level

You can change the log.level online by sending a USR2 signal to the process. The level is increased and cycles through the levels each time a signal is received.

```shell
kill -USR2 pid
```

Useful if something is wrong and you want detailed logs only for a small interval.

You can also set the log level using the API endpoint /loglevel (see the curl examples below):

* GET /loglevel : to retrieve the current level
* POST /loglevel : to cycle and increase the current loglevel
* POST /loglevel/\<level\> : to set the level to \<level\>
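
For example, with the default listen address (`:9399`):

```bash
# show the current log level
curl http://localhost:9399/loglevel
# cycle to the next level
curl -X POST http://localhost:9399/loglevel
# set an explicit level
curl -X POST http://localhost:9399/loglevel/debug
```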

## Reload

You can tell the exporter to reload its configuration by sending a HUP signal to the process or by sending a POST request to the /reload endpoint.

## Exporter HTTP server

The exporter HTTP server has a default landing page that gives access to:

* "/health": a simple heartbeat page that returns "OK" if the exporter is UP
* "/config": exposes the defined configuration of the exporter
* "/targets": exposes all known targets (locally defined or dynamically defined). Passwords are masked.
* "/targets/&lt;target&gt;": obtains the configuration for target &lt;target&gt;, or 404 Not Found if it doesn't exist. Passwords are masked.
* "/status": exposes the exporter version and process start time
* "/debug": exposes exporter debug/profiling metrics
* "/sql_exporter_metrics": exporter internal Prometheus metrics
* "/metrics": exposes the target's metrics.
* "/loglevel": GET exposes the exporter's current log level. POST /loglevel increases the current level by one (cycling). POST /loglevel/[level] sets the new [level].
* "/reload": POST method only; tells the exporter to reload the configuration.

The response can be set to JSON by supplying the header "Accept: application/json" in the request.

### Prometheus scraping

Prometheus scrapes a target by getting the URL /metrics, or the path set with `--web.telemetry-path` if you redefine it on the command line.

The entrypoint "/metrics" accepts the following arguments (see the example after this list):

* target=&lt;name&gt; [mandatory]: defines the target to use:
  * a "name" defined locally in the exporter configuration.
  * a "definition" of a target, that represents a data_source_name URI. In this case the target definition is based on the model parameter value and, if authentication is not set in the data_source_name, it should use the auth_name defined in the configuration. If the password is encrypted, the shared key used to decipher it must be specified in auth_key.
* model=&lt;model&gt; (default="default")
* auth_name=&lt;auth_name&gt;: the authentication parameters to use to connect with the data_source_name
* auth_key=&lt;auth_key&gt;: the shared key used to decipher the encrypted password.
* health=&lt;true&gt;: alters the scraping behavior: only the target connection status metric is returned; use it to determine whether the connection to the target is OK or not (1|0).
* collector=&lt;collector_name&gt;[&amp;collector=&lt;coll_name2&gt;&amp;...]: alters the scraping behavior; collects the specified list of collectors instead of the default list defined for the target; useful to build a specific job with custom metrics and a different scraping interval, for example.
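
A couple of manual scrape examples against a local exporter (default port 9399; the target and collector names are illustrative and must exist in your configuration):

```bash
# full scrape of a locally defined target, restricted to a single collector
curl 'http://localhost:9399/metrics?target=MY_INSTANCE&collector=mssql_standard'

# connection health check only: returns the target connection status metric (1 or 0)
curl 'http://localhost:9399/metrics?target=MY_INSTANCE&health=true'
```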