{"id":18419642,"url":"https://github.com/teradata/nodejs-driver","last_synced_at":"2025-04-13T09:14:16.158Z","repository":{"id":34306158,"uuid":"173786071","full_name":"Teradata/nodejs-driver","owner":"Teradata","description":"Teradata SQL Driver for Node.js","archived":false,"fork":false,"pushed_at":"2025-04-06T01:37:49.000Z","size":1210,"stargazers_count":16,"open_issues_count":9,"forks_count":13,"subscribers_count":8,"default_branch":"develop","last_synced_at":"2025-04-13T09:14:08.876Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"","language":null,"has_issues":false,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"other","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/Teradata.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2019-03-04T17:01:06.000Z","updated_at":"2025-04-09T10:13:57.000Z","dependencies_parsed_at":"2024-01-17T18:37:49.472Z","dependency_job_id":"32e9a406-050a-4d11-9917-4e3207c1bfc5","html_url":"https://github.com/Teradata/nodejs-driver","commit_stats":null,"previous_names":[],"tags_count":13,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Teradata%2Fnodejs-driver","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Teradata%2Fnodejs-driver/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Teradata%2Fnodejs-driver/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Teradata%2Fnodejs-driver/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/Teradata","download_url":"https://codeload.github.c
om/Teradata/nodejs-driver/tar.gz/refs/heads/develop","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":248688551,"owners_count":21145766,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-11-06T04:17:43.057Z","updated_at":"2025-04-13T09:14:16.125Z","avatar_url":"https://github.com/Teradata.png","language":null,"readme":"## Teradata SQL Driver for Node.js\n\nThis package enables Node.js applications to connect to the Teradata Database.\n\nThis package requires 64-bit Node.js v18.20.7 or later and runs on the following operating systems and processor architectures. 32-bit Node.js is not supported.\n* Windows x64 on 64-bit Intel and AMD processors\n* macOS on 64-bit ARM processors\n* macOS on 64-bit Intel processors\n* Linux x64 on 64-bit Intel and AMD processors\n* Linux ARM64 on 64-bit ARM processors\n\nFor community support, please visit [Teradata Community](https://support.teradata.com/community).\n\nFor Teradata customer support, please visit [Teradata Customer Service](https://support.teradata.com/).\n\nPlease note, this driver may contain beta/preview features (\"Beta Features\"). As such, by downloading and/or using the driver, in addition to agreeing to the licensing terms below, you acknowledge that the Beta Features are experimental in nature and that the Beta Features are provided \"AS IS\" and may not be functional on any machine or in any environment.\n\nCopyright 2025 Teradata. 
All Rights Reserved.\n\n### Table of Contents\n\n* [Features](#Features)\n* [Limitations](#Limitations)\n* [Installation](#Installation)\n* [License](#License)\n* [Documentation](#Documentation)\n* [Sample Programs](#SamplePrograms)\n* [Using the Driver](#Using)\n* [Connection Parameters](#ConnectionParameters)\n* [COP Discovery](#COPDiscovery)\n* [Stored Password Protection](#StoredPasswordProtection)\n* [Logon Authentication Methods](#LogonMethods)\n* [Client Attributes](#ClientAttributes)\n* [User STARTUP SQL Request](#UserStartup)\n* [Transaction Mode](#TransactionMode)\n* [Auto-Commit](#AutoCommit)\n* [Data Types](#DataTypes)\n* [Null Values](#NullValues)\n* [Undefined Values](#UndefinedValues)\n* [Character Export Width](#CharacterExportWidth)\n* [Module Constructors](#ModuleConstructors)\n* [Module Exceptions](#ModuleExceptions)\n* [Connection Attributes](#ConnectionAttributes)\n* [Connection Methods](#ConnectionMethods)\n* [Cursor Attributes](#CursorAttributes)\n* [Cursor Methods](#CursorMethods)\n* [Type Objects](#TypeObjects)\n* [Escape Syntax](#EscapeSyntax)\n* [FastLoad](#FastLoad)\n* [FastExport](#FastExport)\n* [CSV Batch Inserts](#CSVBatchInserts)\n* [CSV Export Results](#CSVExportResults)\n* [Command Line Interface](#CommandLineInterface)\n* [Change Log](#ChangeLog)\n\n\u003ca id=\"Features\"\u003e\u003c/a\u003e\n\n### Features\n\nAt the present time, the driver offers the following features.\n\n* Supported for use with Teradata database 16.20 and later releases.\n* [COP Discovery](#COPDiscovery).\n* Laddered Concurrent Connect.\n* [HTTPS](https://en.wikipedia.org/wiki/HTTPS)/[TLS](https://en.wikipedia.org/wiki/Transport_Layer_Security) connections with Teradata database 16.20.53.30 and later.\n* Encrypted logon.\n* [GSS-API](https://en.wikipedia.org/wiki/Generic_Security_Services_Application_Program_Interface) logon authentication methods `KRB5` (Kerberos), `LDAP`, `TD2`, and `TDNEGO`.\n* [OpenID Connect 
(OIDC)](https://en.wikipedia.org/wiki/OpenID#OpenID_Connect_(OIDC)) logon authentication methods `BEARER`, `BROWSER`, `CODE`, `CRED`, `JWT`, `ROPC`, and `SECRET`.\n* Data encryption provided by TLS for HTTPS connections.\n* For non-HTTPS connections, data encryption governed by central administration or enabled via the `encryptdata` connection parameter.\n* Unicode character data transferred via the UTF8 session character set.\n* [Auto-commit](#AutoCommit) for ANSI and TERA transaction modes.\n* Result set row size up to 1 MB.\n* Multi-statement requests that return multiple result sets.\n* Most JDBC escape syntax.\n* Parameterized SQL requests with question-mark parameter markers.\n* Parameterized batch SQL requests with multiple rows of data bound to question-mark parameter markers.\n* Auto-Generated Key Retrieval (AGKR) for identity column values and more.\n* Large Object (LOB) support for the BLOB and CLOB data types.\n* Complex data types such as `XML`, `JSON`, `DATASET STORAGE FORMAT AVRO`, and `DATASET STORAGE FORMAT CSV`.\n* ElicitFile protocol support for DDL commands that create external UDFs or stored procedures and upload a file from client to database.\n* `CREATE PROCEDURE` and `REPLACE PROCEDURE` commands.\n* Stored Procedure Dynamic Result Sets.\n* FastLoad and FastExport.\n* Monitor partition.\n\n\u003ca id=\"Limitations\"\u003e\u003c/a\u003e\n\n### Limitations\n\n* The UTF8 session character set is always used. 
The `charset` connection parameter is not supported.\n* No support yet for Recoverable Network Protocol and Redrive.\n\n\u003ca id=\"Installation\"\u003e\u003c/a\u003e\n\n### Installation\n\nThe driver depends on the `ffi-napi`, `ref-napi`, and `ref-array-di` packages, which are available from [npmjs.com](http://www.npmjs.com).\n\nUse `npm install teradatasql` to download and install the driver and its dependencies automatically.\n\n\u003ca id=\"License\"\u003e\u003c/a\u003e\n\n### License\n\nUse of the driver is governed by the [License Agreement for the Teradata SQL Driver for Node.js](https://github.com/Teradata/nodejs-driver/blob/develop/LICENSE).\n\nWhen the driver is installed, the `LICENSE` and `THIRDPARTYLICENSE` files are placed in the `teradatasql` directory under your `node_modules` installation directory.\n\nThe driver may contain beta/preview features (\"Beta Features\"). As such, by downloading and/or using the driver, in addition to agreeing to the licensing terms, you acknowledge that the Beta Features are experimental in nature and that the Beta Features are provided \"AS IS\" and may not be functional on any machine or in any environment.\n\n\u003ca id=\"Documentation\"\u003e\u003c/a\u003e\n\n### Documentation\n\nWhen the driver is installed, the `README.md` file is placed in the `teradatasql` directory under your `node_modules` installation directory. This permits you to view the documentation offline, when you are not connected to the Internet.\n\nThe `README.md` file is a plain text file containing the documentation for the driver. While the file can be viewed with any text file viewer or editor, your viewing experience will be best with an editor that understands Markdown format.\n\n\u003ca id=\"SamplePrograms\"\u003e\u003c/a\u003e\n\n### Sample Programs\n\nSample programs are provided to demonstrate how to use the driver. 
When the driver is installed, the sample programs are placed in the `teradatasql/samples` directory under your `node_modules` installation directory.\n\nThe sample programs are coded with a fake database hostname `whomooz`, username `guest`, and password `please`. Substitute your actual database hostname and credentials before running a sample program.\n\nProgram                                                                                                              | Purpose\n-------------------------------------------------------------------------------------------------------------------- | ---\n[AGKRBatchInsert.ts](https://github.com/Teradata/nodejs-driver/blob/develop/samples/AGKRBatchInsert.ts)              | Demonstrates how to insert a batch of rows with Auto-Generated Key Retrieval (AGKR)\n[AGKRInsertSelect.ts](https://github.com/Teradata/nodejs-driver/blob/develop/samples/AGKRInsertSelect.ts)            | Demonstrates Insert/Select with Auto-Generated Key Retrieval (AGKR)\n[BatchInsert.ts](https://github.com/Teradata/nodejs-driver/blob/develop/samples/BatchInsert.ts)                      | Demonstrates how to insert a batch of rows\n[BatchInsertCSV.ts](https://github.com/Teradata/nodejs-driver/blob/develop/samples/BatchInsertCSV.ts)                | Demonstrates how to insert a batch of rows from a CSV file\n[BatchInsPerf.ts](https://github.com/Teradata/nodejs-driver/blob/develop/samples/BatchInsPerf.ts)                    | Measures time to insert one million rows\n[CancelSleep.ts](https://github.com/Teradata/nodejs-driver/blob/develop/samples/CancelSleep.ts)                      | Demonstrates how to use the cancel method to interrupt a query\n[CharPadding.ts](https://github.com/Teradata/nodejs-driver/blob/develop/samples/CharPadding.ts)                      | Demonstrates the database's *Character Export Width* behavior\n[CommitRollback.ts](https://github.com/Teradata/nodejs-driver/blob/develop/samples/CommitRollback.ts)                | 
Demonstrates commit and rollback methods with auto-commit off.\n[DecimalDigits.ts](https://github.com/Teradata/nodejs-driver/blob/develop/samples/DecimalDigits.ts)                  | Demonstrates how to format decimal.Decimal values.\n[DriverDatabaseVersion.ts](https://github.com/Teradata/nodejs-driver/blob/develop/samples/DriverDatabaseVersion.ts)  | Displays the driver version and database version\n[ElicitFile.ts](https://github.com/Teradata/nodejs-driver/blob/develop/samples/ElicitFile.ts)                        | Demonstrates C source file upload to create a User-Defined Function (UDF)\n[ExecuteRequest.ts](https://github.com/Teradata/nodejs-driver/blob/develop/samples/ExecuteRequest.ts)                | Demonstrates how to execute a SQL request and display results\n[ExportCSVResult.ts](https://github.com/Teradata/nodejs-driver/blob/develop/samples/ExportCSVResult.ts)              | Demonstrates how to export a query result set to a CSV file\n[ExportCSVResults.ts](https://github.com/Teradata/nodejs-driver/blob/develop/samples/ExportCSVResults.ts)            | Demonstrates how to export multiple query result sets to CSV files\n[FakeExportCSVResults.ts](https://github.com/Teradata/nodejs-driver/blob/develop/samples/FakeExportCSVResults.ts)    | Demonstrates how to export multiple query result sets with the metadata to CSV files\n[FakeResultSetCon.ts](https://github.com/Teradata/nodejs-driver/blob/develop/samples/FakeResultSetCon.ts)            | Demonstrates connection parameter for fake result sets\n[FakeResultSetEsc.ts](https://github.com/Teradata/nodejs-driver/blob/develop/samples/FakeResultSetEsc.ts)            | Demonstrates escape function for fake result sets\n[FastExportCSV.ts](https://github.com/Teradata/nodejs-driver/blob/develop/samples/FastExportCSV.ts)                  | Demonstrates how to FastExport rows from a table to a CSV file\n[FastExportTable.ts](https://github.com/Teradata/nodejs-driver/blob/develop/samples/FastExportTable.ts)              | 
Demonstrates how to FastExport rows from a table\n[FastLoadBatch.ts](https://github.com/Teradata/nodejs-driver/blob/develop/samples/FastLoadBatch.ts)                  | Demonstrates how to FastLoad batches of rows\n[FastLoadCSV.ts](https://github.com/Teradata/nodejs-driver/blob/develop/samples/FastLoadCSV.ts)                      | Demonstrates how to FastLoad batches of rows from a CSV file\n[HelpSession.ts](https://github.com/Teradata/nodejs-driver/blob/develop/samples/HelpSession.ts)                      | Displays session information\n[IgnoreErrors.ts](https://github.com/Teradata/nodejs-driver/blob/develop/samples/IgnoreErrors.ts)                    | Demonstrates how to ignore errors\n[InsertLob.ts](https://github.com/Teradata/nodejs-driver/blob/develop/samples/InsertLob.ts)                          | Demonstrates how to insert BLOB and CLOB values\n[InsertXML.ts](https://github.com/Teradata/nodejs-driver/blob/develop/samples/InsertXML.ts)                          | Demonstrates how to insert and retrieve XML values\n[LoadCSVFile.ts](https://github.com/Teradata/nodejs-driver/blob/develop/samples/LoadCSVFile.ts)                      | Demonstrates how to load data from a CSV file into a table\n[LobLocators.ts](https://github.com/Teradata/nodejs-driver/blob/develop/samples/LobLocators.ts)                      | Demonstrates how to use LOB locators\n[MetadataFromPrepare.ts](https://github.com/Teradata/nodejs-driver/blob/develop/samples/MetadataFromPrepare.ts)      | Demonstrates how to prepare a SQL request and obtain SQL statement metadata\n[ParamDataTypes.ts](https://github.com/Teradata/nodejs-driver/blob/develop/samples/ParamDataTypes.ts)                | Demonstrates how to specify data types for parameter marker bind values\n[ShowCommand.ts](https://github.com/Teradata/nodejs-driver/blob/develop/samples/ShowCommand.ts)                      | Displays the results from the `SHOW` 
command\n[StoredProc.ts](https://github.com/Teradata/nodejs-driver/blob/develop/samples/StoredProc.ts)                        | Demonstrates how to create and call a SQL stored procedure\n[TJEncryptPassword.ts](https://github.com/Teradata/nodejs-driver/blob/develop/samples/TJEncryptPassword.ts)          | Creates encrypted password files\n\n\u003ca id=\"Using\"\u003e\u003c/a\u003e\n\n### Using the Driver\n\nYour JavaScript program must import the `teradatasql` package in order to use the driver.\n\n    const teradatasql = require(\"teradatasql\");\n\nAfter importing the `teradatasql` package, your JavaScript program calls the `teradatasql.connect` function to open a connection to the database.\n\n    const con = teradatasql.connect({\n        host: \"whomooz\",\n        user: \"guest\",\n        password: \"please\"\n    });\n\nYou may specify connection parameters as a JavaScript object, as a JSON string, or using a combination of the two approaches. The `teradatasql.connect` function's first argument is a JavaScript object. The `teradatasql.connect` function's second argument is an optional JSON string.\n\nConnection parameters specified only as a JavaScript object:\n\n    con = teradatasql.connect({host:\"whomooz\",user:\"guest\",password:\"please\"});\n\nConnection parameters specified as a JSON string:\n\n    con = teradatasql.connect({}, '{\"host\":\"whomooz\", \"user\":\"guest\", \"password\":\"please\"}');\n\nConnection parameters specified using a combination:\n\n    con = teradatasql.connect({host:\"whomooz\"}, '{\"user\":\"guest\", \"password\":\"please\"}');\n\nWhen a combination of parameters is specified, connection parameters specified as a JSON string take precedence over same-named connection parameters specified in the JavaScript object.\n\n\u003ca id=\"ConnectionParameters\"\u003e\u003c/a\u003e\n\n### Connection Parameters\n\nThe following table lists the connection parameters currently offered by the driver. 
Connection parameter values are case-sensitive unless stated otherwise.\n\nOur goal is consistency for the connection parameters offered by this driver and the Teradata JDBC Driver, with respect to connection parameter names and functionality. For comparison, Teradata JDBC Driver connection parameters are [documented here](https://downloads.teradata.com/doc/connectivity/jdbc/reference/current/jdbcug_chapter_2.html#BGBHDDGB).\n\nParameter               | Default     | Type           | Description\n----------------------- | ----------- | -------------- | ---\n`account`               |             | string         | Specifies the database account. Equivalent to the Teradata JDBC Driver `ACCOUNT` connection parameter.\n`browser`               |             | string         | Specifies the command to open the browser for Browser Authentication when `logmech` is `BROWSER`. Browser Authentication is supported for Windows and macOS. Equivalent to the Teradata JDBC Driver `BROWSER` connection parameter.\u003cbr/\u003eThe specified command must include a placeholder token, literally specified as `PLACEHOLDER`, which the driver will replace with the Identity Provider authorization endpoint URL. The `PLACEHOLDER` token is case-sensitive and must be specified in uppercase.\u003cbr/\u003e\u0026bull; On Windows, the default command is `cmd /c start \"title\" \"PLACEHOLDER\"`. Windows command syntax requires the quoted title to precede the quoted URL.\u003cbr/\u003e\u0026bull; On macOS, the default command is `open PLACEHOLDER`. macOS command syntax does not allow the URL to be quoted.\n`browser_tab_timeout`   | `\"5\"`       | quoted integer | Specifies the number of seconds to wait before closing the browser tab after Browser Authentication is completed. The default is 5 seconds. The behavior is under the browser's control, and not all browsers support automatic closing of browser tabs. 
Typically, the tab used to log on will remain open indefinitely, but the second and subsequent tabs will be automatically closed. Specify `0` (zero) to close the tab immediately. Specify `-1` to turn off automatic closing of browser tabs. Browser Authentication is supported for Windows and macOS. Equivalent to the Teradata JDBC Driver `BROWSER_TAB_TIMEOUT` connection parameter.\n`browser_timeout`       | `\"180\"`     | quoted integer | Specifies the number of seconds that the driver will wait for Browser Authentication to complete. The default is 180 seconds (3 minutes). Browser Authentication is supported for Windows and macOS. Equivalent to the Teradata JDBC Driver `BROWSER_TIMEOUT` connection parameter.\n`code_append_file`      | `\"-out\"`    | string         | Specifies how to display the verification URL and code. Optional when `logmech` is `CODE` and ignored for other `logmech` values. The default `-out` prints the verification URL and code to stdout. Specify `-err` to print the verification URL and code to stderr. Specify a file name to append the verification URL and code to an existing file or create a new file if the file does not exist. Equivalent to the Teradata JDBC Driver `CODE_APPEND_FILE` connection parameter.\n`column_name`           | `\"false\"`   | quoted boolean | Controls the behavior of cursor `.description` sequence `name` items. Equivalent to the Teradata JDBC Driver `COLUMN_NAME` connection parameter. False specifies that a cursor `.description` sequence `name` item provides the AS-clause name if available, or the column name if available, or the column title. 
True specifies that a cursor `.description` sequence `name` item provides the column name if available, but has no effect when StatementInfo parcel support is unavailable.\n`concurrent_interval`   | `\"1000\"`    | quoted integer | Specifies the interval in milliseconds for Laddered Concurrent Connect (LCC) to wait before starting another concurrent connection attempt.\n`concurrent_limit`      | `\"3\"`       | quoted integer | Limits the number of concurrent connection attempts.\n`connect_failure_ttl`   | `\"0\"`       | quoted integer | Specifies the time-to-live in seconds to remember the most recent connection failure for each IP address/port combination. The driver subsequently skips connection attempts to that IP address/port for the duration of the time-to-live. The default value of zero disables this feature. The recommended value is half the database restart time. Equivalent to the Teradata JDBC Driver `CONNECT_FAILURE_TTL` connection parameter.\n`connect_function`      | `\"0\"`       | quoted integer | Specifies whether the database should allocate a Logon Sequence Number (LSN) for this session, or associate this session with an existing LSN. Specify `0` for a session with no LSN (the default). Specify `1` to allocate a new LSN for the session. Specify `2` to associate the session with the existing LSN identified by the `logon_sequence_number` connection parameter. The database only permits sessions for the same user to share an LSN. Equivalent to the Teradata JDBC Driver `CONNECT_FUNCTION` connection parameter.\n`connect_timeout`       | `\"10000\"`   | quoted integer | Specifies the timeout in milliseconds for establishing a TCP socket connection. Specify `0` for no timeout. The default is 10 seconds (10000 milliseconds).\n`cop`                   | `\"true\"`    | quoted boolean | Specifies whether COP Discovery is performed. 
Equivalent to the Teradata JDBC Driver `COP` connection parameter.\n`coplast`               | `\"false\"`   | quoted boolean | Specifies how COP Discovery determines the last COP hostname. Equivalent to the Teradata JDBC Driver `COPLAST` connection parameter. When `coplast` is `false` or omitted, or COP Discovery is turned off, then no DNS lookup occurs for the coplast hostname. When `coplast` is `true`, and COP Discovery is turned on, then a DNS lookup occurs for a coplast hostname.\n`database`              |             | string         | Specifies the initial database to use after logon, instead of the user's default database. Equivalent to the Teradata JDBC Driver `DATABASE` connection parameter.\n`dbs_port`              | `\"1025\"`    | quoted integer | Specifies the database port number. Equivalent to the Teradata JDBC Driver `DBS_PORT` connection parameter.\n`encryptdata`           | `\"false\"`   | quoted boolean | Controls encryption of data exchanged between the driver and the database. Equivalent to the Teradata JDBC Driver `ENCRYPTDATA` connection parameter.\n`error_query_count`     | `\"21\"`      | quoted integer | Specifies how many times the driver will attempt to query FastLoad Error Table 1 after a FastLoad operation. Equivalent to the Teradata JDBC Driver `ERROR_QUERY_COUNT` connection parameter.\n`error_query_interval`  | `\"500\"`     | quoted integer | Specifies how many milliseconds the driver will wait between attempts to query FastLoad Error Table 1. Equivalent to the Teradata JDBC Driver `ERROR_QUERY_INTERVAL` connection parameter.\n`error_table_1_suffix`  | `\"_ERR_1\"`  | string         | Specifies the suffix for the name of FastLoad Error Table 1. Equivalent to the Teradata JDBC Driver `ERROR_TABLE_1_SUFFIX` connection parameter.\n`error_table_2_suffix`  | `\"_ERR_2\"`  | string         | Specifies the suffix for the name of FastLoad Error Table 2. 
Equivalent to the Teradata JDBC Driver `ERROR_TABLE_2_SUFFIX` connection parameter.\n`error_table_database`  |             | string         | Specifies the database name for the FastLoad error tables. By default, FastLoad error tables reside in the same database as the destination table being loaded. Equivalent to the Teradata JDBC Driver `ERROR_TABLE_DATABASE` connection parameter.\n`fake_result_sets`      | `\"false\"`   | quoted boolean | Controls whether a fake result set containing statement metadata precedes each real result set.\n`field_quote`           | `\"\\\"\"`      | string         | Specifies a single character string used to quote fields in a CSV file.\n`field_sep`             | `\",\"`       | string         | Specifies a single character string used to separate fields in a CSV file. Equivalent to the Teradata JDBC Driver `FIELD_SEP` connection parameter.\n`govern`                | `\"true\"`    | quoted boolean | Controls FastLoad and FastExport throttling by Teradata workload management rules. When set to `true` (the default), workload management rules may delay a FastLoad or FastExport. When set to `false`, workload management rules will reject rather than delay a FastLoad or FastExport. Equivalent to the Teradata JDBC Driver `GOVERN` connection parameter.\n`host`                  |             | string         | Specifies the database hostname.\n`http_proxy`            |             | string         | Specifies the proxy server URL for HTTP connections to TLS certificate verification CRL and OCSP endpoints. The URL must begin with `http://` and must include a colon `:` and port number.\n`http_proxy_password`   |             | string         | Specifies the proxy server password for the proxy server identified by the `http_proxy` parameter. This parameter may only be specified in conjunction with the `http_proxy` parameter. 
When this parameter is omitted, no proxy server password is provided to the proxy server identified by the `http_proxy` parameter.\n`http_proxy_user`       |             | string         | Specifies the proxy server username for the proxy server identified by the `http_proxy` parameter. This parameter may only be specified in conjunction with the `http_proxy` parameter. When this parameter is omitted, no proxy server username is provided to the proxy server identified by the `http_proxy` parameter.\n`https_port`            | `\"443\"`     | quoted integer | Specifies the database port number for HTTPS/TLS connections. Equivalent to the Teradata JDBC Driver `HTTPS_PORT` connection parameter.\n`https_proxy`           |             | string         | Specifies the proxy server URL for HTTPS/TLS connections to the database and to Identity Provider endpoints. The URL must begin with `http://` and must include a colon `:` and port number. The driver connects to the proxy server using a non-TLS HTTP connection, then uses the HTTP CONNECT method to establish an HTTPS/TLS connection to the destination. Equivalent to the Teradata JDBC Driver `HTTPS_PROXY` connection parameter.\n`https_proxy_password`  |             | string         | Specifies the proxy server password for the proxy server identified by the `https_proxy` parameter. This parameter may only be specified in conjunction with the `https_proxy` parameter. When this parameter is omitted, no proxy server password is provided to the proxy server identified by the `https_proxy` parameter. Equivalent to the Teradata JDBC Driver `HTTPS_PROXY_PASSWORD` connection parameter.\n`https_proxy_user`      |             | string         | Specifies the proxy server username for the proxy server identified by the `https_proxy` parameter. This parameter may only be specified in conjunction with the `https_proxy` parameter. 
When this parameter is omitted, no proxy server username is provided to the proxy server identified by the `https_proxy` parameter. Equivalent to the Teradata JDBC Driver `HTTPS_PROXY_USER` connection parameter.\n`jws_algorithm`         | `\"RS256\"`   | string         | Specifies the JSON Web Signature (JWS) algorithm to sign the JWT Bearer Token for client authentication. Optional when `logmech` is `BEARER` and ignored for other `logmech` values. The default `RS256` is RSASSA-PKCS1-v1_5 using SHA-256. Specify `RS384` for RSASSA-PKCS1-v1_5 using SHA-384. Specify `RS512` for RSASSA-PKCS1-v1_5 using SHA-512. Equivalent to the Teradata JDBC Driver `JWS_ALGORITHM` connection parameter.\n`jws_cert`              |             | string         | Specifies the file name of the X.509 certificate PEM file that contains the public key corresponding to the private key from `jws_private_key`. Optional when `logmech` is `BEARER` and ignored for other `logmech` values. When this parameter is specified, the \"x5t\" header thumbprint is added to the JWT Bearer Token for the Identity Provider to select the public key for JWT signature verification. Some Identity Providers, such as Microsoft Entra ID, require this. When this parameter is omitted, the \"x5t\" header thumbprint is not added to the JWT Bearer Token. Some Identity Providers do not require the \"x5t\" header thumbprint. Equivalent to the Teradata JDBC Driver `JWS_CERT` connection parameter.\n`jws_private_key`       |             | string         | Specifies the file name of the PEM or JWK file containing the private key to sign the JWT Bearer Token for client authentication. Required when `logmech` is `BEARER` and ignored for other `logmech` values. PEM and JWK file formats are supported. The private key filename must end with the `.pem` or `.jwk` extension. A PEM file must contain the BEGIN/END PRIVATE KEY header and trailer. 
If a JWK file contains a \"kid\" (key identifier) parameter, the \"kid\" header is added to the JWT Bearer Token for the Identity Provider to select the public key for JWT signature verification. Equivalent to the Teradata JDBC Driver `JWS_PRIVATE_KEY` connection parameter.\n`lob_support`           | `\"true\"`    | quoted boolean | Controls LOB support. Equivalent to the Teradata JDBC Driver `LOB_SUPPORT` connection parameter.\n`log`                   | `\"0\"`       | quoted integer | Controls debug logging. Somewhat equivalent to the Teradata JDBC Driver `LOG` connection parameter. This parameter's behavior is subject to change in the future. This parameter's value is currently defined as an integer in which the 1-bit governs function and method tracing, the 2-bit governs debug logging, the 4-bit governs transmit and receive message hex dumps, and the 8-bit governs timing. Compose the value by adding together 1, 2, 4, and/or 8.\n`logdata`               |             | string         | Specifies extra data for the chosen logon authentication method. Equivalent to the Teradata JDBC Driver `LOGDATA` connection parameter.\n`logmech`               | `\"TD2\"`     | string         | Specifies the [logon authentication method](#LogonMethods). Equivalent to the Teradata JDBC Driver `LOGMECH` connection parameter. The database user must have the \"logon with null password\" permission for `KRB5` Single Sign On (SSO) or any of the [OpenID Connect (OIDC)](https://en.wikipedia.org/wiki/OpenID#OpenID_Connect_(OIDC)) methods `BEARER`, `BROWSER`, `CODE`, `CRED`, `JWT`, `ROPC`, or `SECRET`. [GSS-API](https://en.wikipedia.org/wiki/Generic_Security_Services_Application_Program_Interface) methods are `KRB5`, `LDAP`, `TD2`, and `TDNEGO`. 
Values are case-insensitive.\u003cbr/\u003e\u0026bull; `BEARER` uses OIDC Client Credentials Grant with JWT Bearer Token for client authentication.\u003cbr/\u003e\u0026bull; `BROWSER` uses Browser Authentication, supported for Windows and macOS.\u003cbr/\u003e\u0026bull; `CODE` uses OIDC Device Code Flow, also known as OIDC Device Authorization Grant.\u003cbr/\u003e\u0026bull; `CRED` uses OIDC Client Credentials Grant with client_secret_post for client authentication.\u003cbr/\u003e\u0026bull; `JWT` uses JSON Web Token.\u003cbr/\u003e\u0026bull; `KRB5` uses Kerberos V5.\u003cbr/\u003e\u0026bull; `LDAP` uses Lightweight Directory Access Protocol.\u003cbr/\u003e\u0026bull; `ROPC` uses OIDC Resource Owner Password Credentials (ROPC).\u003cbr/\u003e\u0026bull; `SECRET` uses OIDC Client Credentials Grant with client_secret_basic for client authentication.\u003cbr/\u003e\u0026bull; `TD2` uses Teradata Method 2.\u003cbr/\u003e\u0026bull; `TDNEGO` automatically selects an appropriate GSS-API logon authentication method. OIDC methods are not selected.\n`logon_sequence_number` |             | quoted integer | Associates this session with an existing Logon Sequence Number (LSN) when `connect_function` is `2`. The database only permits sessions for the same user to share an LSN. An LSN groups multiple sessions together for workload management. Using an LSN is a three-step process. First, establish a control session with `connect_function` as `1`, which allocates a new LSN. Second, obtain the LSN from the control session using the escape function `{fn teradata_logon_sequence_number}`. Third, establish an associated session with `connect_function` as `2` and the logon sequence number. Equivalent to the Teradata JDBC Driver `LOGON_SEQUENCE_NUMBER` connection parameter.\n`logon_timeout`         | `\"0\"`       | quoted integer | Specifies the logon timeout in seconds. 
Zero means no timeout.\n`manage_error_tables`   | `\"true\"`    | quoted boolean | Controls whether the driver manages the FastLoad error tables.\n`max_message_body`      | `\"2097000\"` | quoted integer | Specifies the maximum Response Message size in bytes. Equivalent to the Teradata JDBC Driver `MAX_MESSAGE_BODY` connection parameter.\n`oauth_level`           | `\"0\"`       | quoted integer | Controls Single Sign On (SSO) access to Open Table Format (OTF) catalog and storage instances. Equivalent to the Teradata JDBC Driver `OAUTH_LEVEL` connection parameter. If `redrive` is `1` or higher and the database supports Control Data, this specifies which tokens are transmitted to the database with each request, and the database may use the tokens for SSO access to OTF catalog and storage instances. If `redrive` is `0` or the database does not support Control Data, tokens are not transmitted to the database with each request, and tokens will not be available for SSO access to OTF. \u003cbr/\u003e\u0026bull; `0` (the default) disables sending tokens to the database. \u003cbr/\u003e\u0026bull; `1` sends the token from OIDC authentication to the database for each SQL request. \u003cbr/\u003e\u0026bull; `2` sends the OAuth tokens from `oauth_scopes` to the database for each SQL request. \u003cbr/\u003e\u0026bull; `3` sends the token from OIDC authentication and the OAuth tokens to the database for each SQL request.\n`oauth_scopes`          |             | string         | Specifies one or more OAuth scopes for SSO access to OTF catalog and storage instances. Multiple scopes are separated by vertical bar `\\|` characters. This parameter may only be used with OIDC logon mechanisms for individual users, not for service accounts. When this parameter is specified, after successful OIDC authentication, the driver obtains an additional access token from the Identity Provider for each specified scope. 
Each additional access token request uses the same OIDC parameters as the initial OIDC authentication; only the scope is varied. Equivalent to the Teradata JDBC Driver `OAUTH_SCOPES` connection parameter.\n`oidc_cache_size`       | `\"20\"`      | quoted integer | Specifies the maximum size of the OpenID Connect (OIDC) token cache for Browser Authentication and other OIDC methods. Equivalent to the Teradata JDBC Driver `OIDC_CACHE_SIZE` connection parameter.\n`oidc_claim`            | `\"email\"`   | string         | Specifies the OpenID Connect (OIDC) claim to use for Browser Authentication and other OIDC methods. Equivalent to the Teradata JDBC Driver `OIDC_CLAIM` connection parameter.\n`oidc_clientid`         |             | string         | Specifies the OpenID Connect (OIDC) Client ID to use for Browser Authentication and other OIDC methods. When omitted, the default Client ID comes from the database's TdgssUserConfigFile.xml file. Browser Authentication is supported for Windows and macOS. Equivalent to the Teradata JDBC Driver `OIDC_CLIENTID` connection parameter.\n`oidc_metadata`         |             | string         | Specifies the Identity Provider metadata URL for OpenID Connect (OIDC). When this connection parameter is omitted, the default metadata URL is provided by the database. This connection parameter is a troubleshooting tool only, and is not intended for normal production usage. Equivalent to the Teradata JDBC Driver `OIDC_METADATA` connection parameter.\n`oidc_prompt`           |             | string         | Specifies the OpenID Connect (OIDC) prompt value to use for Browser Authentication. Optional when `logmech` is `BROWSER` and ignored for other `logmech` values. Ignored unless `user` is specified as an OIDC login hint. Specify `login` for the Identity Provider to prompt the user for credentials. May not be supported by all Identity Providers. The browser tab may not close automatically after Browser Authentication is completed. 
Equivalent to the Teradata JDBC Driver `OIDC_PROMPT` connection parameter.\n`oidc_scope`            | `\"openid\"`  | string         | Specifies the OpenID Connect (OIDC) scope to use for Browser Authentication. Beginning with Teradata Database 17.20.03.11, the default scope can be specified in the database's `TdgssUserConfigFile.xml` file, using the `IdPConfig` element's `Scope` attribute. Browser Authentication is supported for Windows and macOS. Equivalent to the Teradata JDBC Driver `OIDC_SCOPE` connection parameter.\n`oidc_sslmode`          |             | string         | Specifies the mode for HTTPS connections to the Identity Provider. Equivalent to the Teradata JDBC Driver `OIDC_SSLMODE` connection parameter. Values are case-insensitive. When this parameter is omitted, the default is the value of the `sslmode` connection parameter.\u003cbr/\u003e\u0026bull; `ALLOW` does not perform certificate verification for HTTPS connections to the Identity Provider.\u003cbr/\u003e\u0026bull; `VERIFY-CA` verifies that the server certificate is valid and trusted.\u003cbr/\u003e\u0026bull; `VERIFY-FULL` verifies that the server certificate is valid and trusted, and verifies that the server certificate matches the Identity Provider hostname.\n`oidc_token`            | `\"access_token\"` | string    | Specifies the kind of OIDC token to use for Browser Authentication. Specify `id_token` to use the id_token instead of the access_token. Browser Authentication is supported for Windows and macOS. Equivalent to the Teradata JDBC Driver `OIDC_TOKEN` connection parameter.\n`partition`             | `\"DBC/SQL\"` | string         | Specifies the database partition. Equivalent to the Teradata JDBC Driver `PARTITION` connection parameter.\n`password`              |             | string         | Specifies the database password. 
Equivalent to the Teradata JDBC Driver `PASSWORD` connection parameter.\n`proxy_bypass_hosts`    |             | string         | Specifies a matching pattern for hostnames and addresses to bypass the proxy server identified by the `http_proxy` and/or `https_proxy` parameter. This parameter may only be specified in conjunction with the `http_proxy` and/or `https_proxy` parameter. Separate multiple hostnames and addresses with a vertical bar `\\|` character. Specify an asterisk `*` as a wildcard character. When this parameter is omitted, the default pattern `localhost\\|127.*\\|[::1]` bypasses the proxy server identified by the `http_proxy` and/or `https_proxy` parameter for common variations of the loopback address. Equivalent to the Teradata JDBC Driver `PROXY_BYPASS_HOSTS` connection parameter.\n`request_timeout`       | `\"0\"`       | quoted integer | Specifies the timeout for executing each SQL request. Zero means no timeout.\n`runstartup`            | `\"false\"`   | quoted boolean | Controls whether the user's `STARTUP` SQL request is executed after logon. For more information, refer to [User STARTUP SQL Request](#UserStartup). Equivalent to the Teradata JDBC Driver `RUNSTARTUP` connection parameter.\n`sessions`              |             | quoted integer | Specifies the number of data transfer connections for FastLoad or FastExport. The default (recommended) lets the database choose the appropriate number of connections. Equivalent to the Teradata JDBC Driver `SESSIONS` connection parameter.\n`sip_support`           | `\"true\"`    | quoted boolean | Controls whether StatementInfo parcel is used. Equivalent to the Teradata JDBC Driver `SIP_SUPPORT` connection parameter.\n`sp_spl`                | `\"true\"`    | quoted boolean | Controls whether stored procedure source code is saved in the database when a SQL stored procedure is created. 
Equivalent to the Teradata JDBC Driver `SP_SPL` connection parameter.\n`sslca`                 |             | string         | Specifies the file name of a PEM file that contains Certificate Authority (CA) certificates for use with `sslmode` or `oidc_sslmode` values `VERIFY-CA` or `VERIFY-FULL`. Equivalent to the Teradata JDBC Driver `SSLCA` connection parameter.\n`sslcapath`             |             | string         | Specifies a directory of PEM files that contain Certificate Authority (CA) certificates for use with `sslmode` or `oidc_sslmode` values `VERIFY-CA` or `VERIFY-FULL`. Only files with an extension of `.pem` are used. Other files in the specified directory are not used. Equivalent to the Teradata JDBC Driver `SSLCAPATH` connection parameter.\n`sslcipher`             |             | string         | Specifies the TLS cipher for HTTPS/TLS connections. Default lets database and driver choose the most appropriate TLS cipher. Equivalent to the Teradata JDBC Driver `SSLCIPHER` connection parameter.\n`sslcrc`                | `\"ALLOW\"`   | string         | Controls TLS certificate revocation checking (CRC) for HTTPS/TLS connections. Equivalent to the Teradata JDBC Driver `SSLCRC` connection parameter. Values are case-insensitive.\u003cbr/\u003e\u0026bull; `ALLOW` performs CRC for `sslmode` or `oidc_sslmode` `VERIFY-CA` and `VERIFY-FULL`, and provides soft fail CRC for `VERIFY-CA` and `VERIFY-FULL` to ignore CRC communication failures.\u003cbr/\u003e\u0026bull; `PREFER` performs CRC for all HTTPS connections, and provides soft fail CRC for `VERIFY-CA` and `VERIFY-FULL` to ignore CRC communication failures.\u003cbr/\u003e\u0026bull; `REQUIRE` performs CRC for all HTTPS connections, and requires CRC for `VERIFY-CA` and `VERIFY-FULL`.\n`sslcrl`                | `\"true\"`    | quoted boolean | Controls the use of Certificate Revocation List (CRL) for TLS certificate revocation checking for HTTPS/TLS connections. 
Online Certificate Status Protocol (OCSP) is preferred over CRL, so CRL is used when OCSP is unavailable. Equivalent to the Teradata JDBC Driver `SSLCRL` connection parameter.\n`sslmode`               | `\"PREFER\"`  | string         | Specifies the mode for connections to the database. Equivalent to the Teradata JDBC Driver `SSLMODE` connection parameter. Values are case-insensitive.\u003cbr/\u003e\u0026bull; `DISABLE` disables HTTPS/TLS connections and uses only non-TLS connections.\u003cbr/\u003e\u0026bull; `ALLOW` uses non-TLS connections unless the database requires HTTPS/TLS connections.\u003cbr/\u003e\u0026bull; `PREFER` uses HTTPS/TLS connections unless the database does not offer HTTPS/TLS connections.\u003cbr/\u003e\u0026bull; `REQUIRE` uses only HTTPS/TLS connections.\u003cbr/\u003e\u0026bull; `VERIFY-CA` uses only HTTPS/TLS connections and verifies that the server certificate is valid and trusted.\u003cbr/\u003e\u0026bull; `VERIFY-FULL` uses only HTTPS/TLS connections, verifies that the server certificate is valid and trusted, and verifies that the server certificate matches the database hostname.\n`sslnamedgroups`        |             | string         | Specifies the TLS key exchange named groups for HTTPS/TLS connections. Multiple named groups are separated by commas. Default lets database and driver choose the most appropriate named group. Equivalent to the Teradata JDBC Driver `SSLNAMEDGROUPS` connection parameter.\n`sslocsp`               | `\"true\"`    | quoted boolean | Controls the use of Online Certificate Status Protocol (OCSP) for TLS certificate revocation checking for HTTPS/TLS connections. Equivalent to the Teradata JDBC Driver `SSLOCSP` connection parameter.\n`sslprotocol`           | `\"TLSv1.2\"` | string         | Specifies the TLS protocol for HTTPS/TLS connections. 
Equivalent to the Teradata JDBC Driver `SSLPROTOCOL` connection parameter.\n`teradata_values`       | `\"true\"`    | quoted boolean | Controls whether `string` or a more specific JavaScript data type is used for certain result set column value types. Refer to the [Data Types](#DataTypes) table below for details.\n`tmode`                 | `\"DEFAULT\"` | string         | Specifies the [transaction mode](#TransactionMode). Equivalent to the Teradata JDBC Driver `TMODE` connection parameter. Possible values are `DEFAULT` (the default), `ANSI`, or `TERA`.\n`user`                  |             | string         | Specifies the database username. Equivalent to the Teradata JDBC Driver `USER` connection parameter.\n\n\u003ca id=\"COPDiscovery\"\u003e\u003c/a\u003e\n\n### COP Discovery\n\nThe driver provides Communications Processor (COP) discovery behavior when the `cop` connection parameter is `true` or omitted. COP Discovery is turned off when the `cop` connection parameter is `false`.\n\nA database system can be composed of multiple database nodes. One or more of the database nodes can be configured to run the database Gateway process. Each database node that runs the database Gateway process is termed a Communications Processor, or COP. COP Discovery refers to the procedure of identifying all the available COP hostnames and their IP addresses. COP hostnames can be defined in DNS, or can be defined in the client system's `hosts` file. Teradata strongly recommends that COP hostnames be defined in DNS, rather than the client system's `hosts` file. 
Defining COP hostnames in DNS provides centralized administration, and enables centralized changes to COP hostnames if and when the database is reconfigured.\n\nThe `coplast` connection parameter specifies how COP Discovery determines the last COP hostname.\n* When `coplast` is `false` or omitted, or COP Discovery is turned off, then the driver will not perform a DNS lookup for the coplast hostname.\n* When `coplast` is `true`, and COP Discovery is turned on, then the driver will first perform a DNS lookup for a coplast hostname to obtain the IP address of the last COP hostname before performing COP Discovery. Subsequently, during COP Discovery, the driver will stop searching for COP hostnames when either an unknown COP hostname is encountered, or a COP hostname is encountered whose IP address matches the IP address of the coplast hostname.\n\nSpecifying `coplast` as `true` can improve performance with DNS that is slow to respond for DNS lookup failures, and is necessary for DNS that never returns a DNS lookup failure.\n\nWhen performing COP Discovery, the driver starts with cop1, which is appended to the database hostname, and then proceeds with cop2, cop3, ..., copN. The driver supports domain-name qualification for COP Discovery and the coplast hostname. 
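The lookup sequence described above can be sketched in TypeScript. This is an illustrative sketch, not the driver's code; `copHostname` and `discoverCops` are hypothetical helper names, and it assumes that a failed `dns.promises.lookup` marks the first unknown COP hostname:

```typescript
import { promises as dns } from "node:dns";

// Build the Nth COP hostname: "whomooz" -> "whomoozcop2",
// "whomooz.domain.com" -> "whomoozcop2.domain.com".
export function copHostname(host: string, n: number): string {
  const dot = host.indexOf(".");
  return dot < 0
    ? `${host}cop${n}`
    : `${host.slice(0, dot)}cop${n}${host.slice(dot)}`;
}

// Probe cop1, cop2, cop3, ... until a DNS lookup fails,
// which indicates the first unknown COP hostname.
export async function discoverCops(host: string): Promise<string[]> {
  const addresses: string[] = [];
  for (let n = 1; ; n++) {
    try {
      const { address } = await dns.lookup(copHostname(host, n));
      addresses.push(address);
    } catch {
      return addresses; // first unknown COP hostname ends the search
    }
  }
}
```

For the hypothetical three-node "whomooz" system in the table below, `discoverCops("whomooz.domain.com")` would perform four DNS lookups and return three IP addresses.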
Domain-name qualification is recommended, because it can improve performance by avoiding unnecessary DNS lookups for DNS search suffixes.\n\nThe following table illustrates the DNS lookups performed for a hypothetical three-node database system named \"whomooz\".\n\n\u0026nbsp; | No domain name qualification | With domain name qualification\u003cbr/\u003e(Recommended)\n------ | ---------------------------- | ---\nApplication-specified\u003cbr/\u003edatabase hostname | `whomooz` | `whomooz.domain.com`\nDefault: COP Discovery turned on, and `coplast` is `false` or omitted,\u003cbr/\u003eperform DNS lookups until unknown COP hostname is encountered | `whomoozcop1`\u0026rarr;`10.0.0.1`\u003cbr/\u003e`whomoozcop2`\u0026rarr;`10.0.0.2`\u003cbr/\u003e`whomoozcop3`\u0026rarr;`10.0.0.3`\u003cbr/\u003e`whomoozcop4`\u0026rarr;undefined | `whomoozcop1.domain.com`\u0026rarr;`10.0.0.1`\u003cbr/\u003e`whomoozcop2.domain.com`\u0026rarr;`10.0.0.2`\u003cbr/\u003e`whomoozcop3.domain.com`\u0026rarr;`10.0.0.3`\u003cbr/\u003e`whomoozcop4.domain.com`\u0026rarr;undefined\nCOP Discovery turned on, and `coplast` is `true`,\u003cbr/\u003eperform DNS lookups until COP hostname is found whose IP address matches the coplast hostname, or unknown COP hostname is encountered | `whomoozcoplast`\u0026rarr;`10.0.0.3`\u003cbr/\u003e`whomoozcop1`\u0026rarr;`10.0.0.1`\u003cbr/\u003e`whomoozcop2`\u0026rarr;`10.0.0.2`\u003cbr/\u003e`whomoozcop3`\u0026rarr;`10.0.0.3` | `whomoozcoplast.domain.com`\u0026rarr;`10.0.0.3`\u003cbr/\u003e`whomoozcop1.domain.com`\u0026rarr;`10.0.0.1`\u003cbr/\u003e`whomoozcop2.domain.com`\u0026rarr;`10.0.0.2`\u003cbr/\u003e`whomoozcop3.domain.com`\u0026rarr;`10.0.0.3`\nCOP Discovery turned off and round-robin DNS,\u003cbr/\u003eperform one DNS lookup that returns multiple IP addresses | `whomooz`\u0026rarr;`10.0.0.1`, `10.0.0.2`, `10.0.0.3` | `whomooz.domain.com`\u0026rarr;`10.0.0.1`, `10.0.0.2`, `10.0.0.3`\n\nRound-robin DNS rotates the list of IP addresses automatically to 
provide load distribution. Round-robin is only possible with DNS, not with the client system `hosts` file.\n\nThe driver supports the definition of multiple IP addresses for COP hostnames and non-COP hostnames.\n\nFor the first connection to a particular database system, the driver generates a random number to index into the list of COPs. For each subsequent connection, the driver increments the saved index until it wraps around to the first position. This behavior provides load distribution across all discovered COPs.\n\nThe driver masks connection failures to down COPs, thereby hiding most connection failures from the client application. An exception is thrown to the application only when all the COPs are down for that database. If a COP is down, the next COP in the sequence (including a wrap-around to the first COP) receives extra connections that were originally destined for the down COP. When multiple IP addresses are defined in DNS for a COP, the driver will attempt to connect to each of the COP's IP addresses, and the COP is considered down only when connection attempts fail to all of the COP's IP addresses.\n\nIf COP Discovery is turned off, or no COP hostnames are defined in DNS, the driver connects directly to the hostname specified in the `host` connection parameter. This permits load distribution schemes other than the COP Discovery approach. For example, round-robin DNS or a TCP/IP load distribution product can be used. COP Discovery takes precedence over simple database hostname lookup. 
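The selection and failover behavior described above can be sketched as follows. These are illustrative helpers, not part of the driver's API; `tryConnect` stands in for an actual connection attempt:

```typescript
// Saved COP index per database system, shared across connections.
const copIndex = new Map<string, number>();

// First connection: random index. Subsequent connections: increment
// the saved index, wrapping around to the first position.
export function nextCopIndex(system: string, copCount: number): number {
  const saved = copIndex.get(system);
  const i = saved === undefined
    ? Math.floor(Math.random() * copCount)
    : (saved + 1) % copCount;
  copIndex.set(system, i);
  return i;
}

// Mask connection failures to down COPs: a down COP's connections go to
// the next COP in sequence. Throw only when every COP is down.
export function connectWithFailover(
  system: string,
  cops: string[],
  tryConnect: (cop: string) => boolean
): string {
  const start = nextCopIndex(system, cops.length);
  for (let k = 0; k < cops.length; k++) {
    const cop = cops[(start + k) % cops.length];
    if (tryConnect(cop)) return cop;
  }
  throw new Error("all COPs are down for this database system");
}
```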
To use an alternative load distribution scheme, either ensure that no COP hostnames are defined in DNS, or turn off COP Discovery with `cop` as `false`.\n\n\u003ca id=\"StoredPasswordProtection\"\u003e\u003c/a\u003e\n\n### Stored Password Protection\n\n#### Overview\n\nStored Password Protection enables an application to provide a connection password in encrypted form to the driver.\n\nAn encrypted password may be specified in the following contexts:\n* A login password specified as the `password` connection parameter.\n* A login password specified within the `logdata` connection parameter.\n\nIf the password, however specified, begins with the prefix `ENCRYPTED_PASSWORD(` then the specified password must follow this format:\n\n`ENCRYPTED_PASSWORD(file:`*PasswordEncryptionKeyFileName*`,file:`*EncryptedPasswordFileName*`)`\n\nEach filename must be preceded by the `file:` prefix. The *PasswordEncryptionKeyFileName* must be separated from the *EncryptedPasswordFileName* by a single comma.\n\nThe *PasswordEncryptionKeyFileName* specifies the name of a file that contains the password encryption key and associated information. The *EncryptedPasswordFileName* specifies the name of a file that contains the encrypted password and associated information. The two files are described below.\n\nStored Password Protection is offered by this driver, the Teradata JDBC Driver, and the Teradata SQL Driver for R. These drivers use the same file format.\n\n#### Program TJEncryptPassword.ts\n\n`TJEncryptPassword.ts` is a sample program to create encrypted password files for use with Stored Password Protection. When the driver is installed, the sample programs are placed in the `teradatasql/samples` directory under your `node_modules` installation directory.\n\nThis program works in conjunction with Stored Password Protection offered by the driver. 
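A password in the `ENCRYPTED_PASSWORD(` format described in the overview can be recognized with a small parser. `parseEncryptedPassword` is an illustrative name, not part of the driver's API:

```typescript
// Recognize ENCRYPTED_PASSWORD(file:KeyFileName,file:EncPassFileName).
// Returns the two file names, or null when the password is not in
// the Stored Password Protection format.
export function parseEncryptedPassword(
  password: string
): { keyFile: string; encPassFile: string } | null {
  const m = /^ENCRYPTED_PASSWORD\(file:([^,]+),file:(.+)\)$/.exec(password);
  return m ? { keyFile: m[1], encPassFile: m[2] } : null;
}
```

A plain password such as `please` does not match the format and yields `null`.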
This program creates the files containing the password encryption key and encrypted password, which can be subsequently specified via the `ENCRYPTED_PASSWORD(` syntax.\n\nYou are not required to use this program to create the files containing the password encryption key and encrypted password. You can develop your own software to create the necessary files. You may also use the [`TJEncryptPassword.java`](https://downloads.teradata.com/doc/connectivity/jdbc/reference/current/samp/TJEncryptPassword.java.txt) sample program that is available with the [Teradata JDBC Driver Reference](https://downloads.teradata.com/doc/connectivity/jdbc/reference/current/frameset.html). The only requirement is that the files must match the format expected by the driver, which is documented below.\n\nThis program encrypts the password and then immediately decrypts the password, in order to verify that the password can be successfully decrypted. This program mimics the password decryption of the driver, and is intended to openly illustrate its operation and enable scrutiny by the community.\n\nThe encrypted password is only as safe as the two files. You are responsible for restricting access to the files containing the password encryption key and encrypted password. If an attacker obtains both files, the password can be decrypted. The operating system file permissions for the two files should be as limited and restrictive as possible, to ensure that only the intended operating system userid has access to the files.\n\nThe two files can be kept on separate physical volumes, to reduce the risk that both files might be lost at the same time. 
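On POSIX systems, the restrictive file permissions recommended above can be applied with a helper like the following sketch; `restrictToOwner` is illustrative, not part of the driver:

```typescript
import * as fs from "node:fs";

// Limit each file to owner read/write only (mode 600), so that only the
// intended operating system userid can access it. POSIX permissions;
// chmod has limited effect on Windows.
export function restrictToOwner(paths: string[]): void {
  for (const p of paths) {
    fs.chmodSync(p, 0o600);
  }
}
```

For example, `restrictToOwner(["PassKey.properties", "EncPass.properties"])` after the two files are created.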
If either or both of the files are located on a network volume, then an encrypted wire protocol can be used to access the network volume, such as sshfs, encrypted NFSv4, or encrypted SMB 3.0.\n\nThis program accepts eight command-line arguments:\n\nArgument                      | Example              | Description\n----------------------------- | -------------------- | ---\nTransformation                | `AES/CBC/NoPadding`  | Specifies the transformation in the form *Algorithm*`/`*Mode*`/`*Padding*. Supported transformations are listed in a table below.\nKeySizeInBits                 | `256`                | Specifies the algorithm key size, which governs the encryption strength.\nMAC                           | `HmacSHA256`         | Specifies the message authentication code (MAC) algorithm `HmacSHA1` or `HmacSHA256`.\nPasswordEncryptionKeyFileName | `PassKey.properties` | Specifies a filename in the current directory, a relative pathname, or an absolute pathname. The file is created by this program. If the file already exists, it will be overwritten by the new file.\nEncryptedPasswordFileName     | `EncPass.properties` | Specifies a filename in the current directory, a relative pathname, or an absolute pathname. The filename or pathname must differ from the PasswordEncryptionKeyFileName. The file is created by this program. If the file already exists, it will be overwritten by the new file.\nHostname                      | `whomooz`            | Specifies the database hostname.\nUsername                      | `guest`              | Specifies the database username.\nPassword                      | `please`             | Specifies the database password to be encrypted. 
Unicode characters in the password can be specified with the `\\u`*XXXX* escape sequence.\n\n\n#### Example Commands\n\nThe TJEncryptPassword program uses the driver to log on to the specified database using the encrypted password, so the driver must have been installed with the `npm install teradatasql` command.\n\nThe following commands assume that the `TJEncryptPassword.ts` program file is located in the current directory. When the driver is installed, the sample programs are placed in the `teradatasql/samples` directory under your `node_modules` installation directory. Change your current directory to the `teradatasql/samples` directory under your `node_modules` installation directory.\n\nThe following example commands illustrate using a 256-bit AES key, and using the HmacSHA256 algorithm.\n\n `npx ts-node TJEncryptPassword.ts AES/CBC/NoPadding 256 HmacSHA256 PassKey.properties EncPass.properties whomooz guest please`\n\n#### Password Encryption Key File Format\n\nYou are not required to use the TJEncryptPassword.ts program to create the files containing the password encryption key and encrypted password. You can develop your own software to create the necessary files, but the files must match the format expected by the driver.\n\nThe password encryption key file is a text file in Java Properties file format, using the ISO 8859-1 character encoding.\n\nThe file must contain the following string properties:\n\nProperty                                          | Description\n------------------------------------------------- | ---\n`version=1`                                       | The version number must be `1`. This property is required.\n`transformation=`*Algorithm*`/`*Mode*`/`*Padding* | Specifies the transformation in the form *Algorithm*`/`*Mode*`/`*Padding*. Supported transformations are listed in a table below. 
This property is required.\n`algorithm=`*Algorithm*                           | This value must correspond to the *Algorithm* portion of the transformation. This property is required.\n`match=`*MatchValue*                              | The password encryption key and encrypted password files must contain the same match value. The match values are compared to ensure that the two specified files are related to each other, serving as a \"sanity check\" to help avoid configuration errors. This property is required.\n`key=`*HexDigits*                                 | This value is the password encryption key, encoded as hex digits. This property is required.\n`mac=`*MACAlgorithm*                              | Specifies the message authentication code (MAC) algorithm `HmacSHA1` or `HmacSHA256`. Stored Password Protection performs Encrypt-then-MAC for protection from a padding oracle attack. This property is required.\n`mackey=`*HexDigits*                              | This value is the MAC key, encoded as hex digits. This property is required.\n\nThe TJEncryptPassword.ts program uses a timestamp as a shared match value, but a timestamp is not required. Any shared string can serve as a match value. The timestamp is not related in any way to the encryption of the password, and the timestamp cannot be used to decrypt the password.\n\n#### Encrypted Password File Format\n\nThe encrypted password file is a text file in Java Properties file format, using the ISO 8859-1 character encoding.\n\nThe file must contain the following string properties:\n\nProperty                                          | Description\n------------------------------------------------- | ---\n`version=1`                                       | The version number must be `1`. This property is required.\n`match=`*MatchValue*                              | The password encryption key and encrypted password files must contain the same match value. 
The match values are compared to ensure that the two specified files are related to each other, serving as a \"sanity check\" to help avoid configuration errors. This property is required.\n`password=`*HexDigits*                            | This value is the encrypted password, encoded as hex digits. This property is required.\n`params=`*HexDigits*                              | This value contains the cipher algorithm parameters, if any, encoded as hex digits. Some ciphers need algorithm parameters that cannot be derived from the key, such as an initialization vector. This property is optional, depending on whether the cipher algorithm has associated parameters.\n`hash=`*HexDigits*                                | This value is the expected message authentication code (MAC), encoded as hex digits. After encryption, the expected MAC is calculated using the ciphertext, transformation name, and algorithm parameters if any. Before decryption, the driver calculates the MAC using the ciphertext, transformation name, and algorithm parameters if any, and verifies that the calculated MAC matches the expected MAC. If the calculated MAC differs from the expected MAC, then either or both of the files may have been tampered with. This property is required.\n\nWhile `params` is technically optional, an initialization vector is required by all three block cipher modes `CBC`, `CFB`, and `OFB` that are supported by the driver. ECB (Electronic Codebook) does not require `params`, but ECB is not supported by the driver.\n\n#### Transformation, Key Size, and MAC\n\nA transformation is a string that describes the set of operations to be performed on the given input, to produce transformed output. 
A transformation specifies the name of a cryptographic algorithm such as AES, followed by a feedback mode and padding scheme.\n\nThe driver supports the following transformations and key sizes.\n\nTransformation              | Key Size\n--------------------------- | ---\n`AES/CBC/NoPadding`         | 128\n`AES/CBC/NoPadding`         | 192\n`AES/CBC/NoPadding`         | 256\n`AES/CBC/PKCS5Padding`      | 128\n`AES/CBC/PKCS5Padding`      | 192\n`AES/CBC/PKCS5Padding`      | 256\n`AES/CFB/NoPadding`         | 128\n`AES/CFB/NoPadding`         | 192\n`AES/CFB/NoPadding`         | 256\n`AES/CFB/PKCS5Padding`      | 128\n`AES/CFB/PKCS5Padding`      | 192\n`AES/CFB/PKCS5Padding`      | 256\n`AES/OFB/NoPadding`         | 128\n`AES/OFB/NoPadding`         | 192\n`AES/OFB/NoPadding`         | 256\n`AES/OFB/PKCS5Padding`      | 128\n`AES/OFB/PKCS5Padding`      | 192\n`AES/OFB/PKCS5Padding`      | 256\n\nStored Password Protection uses a symmetric encryption algorithm such as AES, in which the same secret key is used for encryption and decryption of the password. Stored Password Protection does not use an asymmetric encryption algorithm such as RSA, with separate public and private keys.\n\nCBC (Cipher Block Chaining) is a block cipher encryption mode. With CBC, each ciphertext block is dependent on all plaintext blocks processed up to that point. CBC is suitable for encrypting data whose total byte count exceeds the algorithm's block size, and is therefore suitable for use with Stored Password Protection.\n\nStored Password Protection hides the password length in the encrypted password file by extending the length of the UTF8-encoded password with trailing null bytes. The length is extended to the next 512-byte boundary.\n\n* A block cipher with no padding, such as `AES/CBC/NoPadding`, may only be used to encrypt data whose byte count after extension is a multiple of the algorithm's block size. The 512-byte boundary is compatible with many block ciphers. 
AES, for example, has a block size of 128 bits (16 bytes), and is therefore compatible with the 512-byte boundary.\n* A block cipher with padding, such as `AES/CBC/PKCS5Padding`, can be used to encrypt data of any length. However, CBC with padding is vulnerable to a \"padding oracle attack\", so Stored Password Protection performs Encrypt-then-MAC for protection from a padding oracle attack. MAC algorithms `HmacSHA1` and `HmacSHA256` are supported.\n* The driver does not support block ciphers used as byte-oriented ciphers via modes such as `CFB8` or `OFB8`.\n\nThe strength of the encryption depends on your choice of cipher algorithm and key size.\n\n* AES uses a 128-bit (16 byte), 192-bit (24 byte), or 256-bit (32 byte) key.\n\n#### Sharing Files with the Teradata JDBC Driver\n\nThis driver and the Teradata JDBC Driver can share the files containing the password encryption key and encrypted password, if you use a transformation, key size, and MAC algorithm that is supported by both drivers.\n\n* Recommended choices for compatibility are `AES/CBC/NoPadding` and `HmacSHA256`.\n* Use a 256-bit key if your Java environment has the Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files from Oracle.\n* Use a 128-bit key if your Java environment does not have the Unlimited Strength Jurisdiction Policy Files.\n* Use `HmacSHA1` for compatibility with JDK 1.4.2.\n\n#### File Locations\n\nFor the `ENCRYPTED_PASSWORD(` syntax of the driver, each filename must be preceded by the `file:` prefix.\nThe *PasswordEncryptionKeyFileName* must be separated from the *EncryptedPasswordFileName* by a single comma. 
The files can be located in the current directory, specified with a relative path, or specified with an absolute path.\n\n\nExample for files in the current directory:\n\n    ENCRYPTED_PASSWORD(file:JohnDoeKey.properties,file:JohnDoePass.properties)\n\nExample with relative paths:\n\n    ENCRYPTED_PASSWORD(file:../dir1/JohnDoeKey.properties,file:../dir2/JohnDoePass.properties)\n\nExample with absolute paths on Windows:\n\n    ENCRYPTED_PASSWORD(file:c:/dir1/JohnDoeKey.properties,file:c:/dir2/JohnDoePass.properties)\n\nExample with absolute paths on Linux:\n\n    ENCRYPTED_PASSWORD(file:/dir1/JohnDoeKey.properties,file:/dir2/JohnDoePass.properties)\n\n#### Processing Sequence\n\nThe two filenames specified for an encrypted password must be accessible to the driver and must conform to the properties file formats described above. The driver raises an exception if the file is not accessible, or the file does not conform to the required file format.\n\nThe driver verifies that the match values in the two files are present, and match each other. The driver raises an exception if the match values differ from each other. The match values are compared to ensure that the two specified files are related to each other, serving as a \"sanity check\" to help avoid configuration errors. The TJEncryptPassword program uses a timestamp as a shared match value, but a timestamp is not required. Any shared string can serve as a match value. The timestamp is not related in any way to the encryption of the password, and the timestamp cannot be used to decrypt the password.\n\nBefore decryption, the driver calculates the MAC using the ciphertext, transformation name, and algorithm parameters if any, and verifies that the calculated MAC matches the expected MAC. 
The driver raises an exception if the calculated MAC differs from the expected MAC, to indicate that either or both of the files may have been tampered with.\n\nFinally, the driver uses the decrypted password to log on to the database.\n\n\u003ca id=\"LogonMethods\"\u003e\u003c/a\u003e\n\n### Logon Authentication Methods\n\nThe following table describes the logon authentication methods selected by the `logmech` connection parameter.\n\n`logmech` | Description | Usage and Requirements\n----------|-------------|---\n`BEARER`  | OIDC Client Credentials Grant with JWT Bearer Token for client authentication | This method is intended for automated logon by service accounts.\u003cbr/\u003e`user`, `password`, `logdata`, and `oauth_scopes` must all be omitted when using this method.\u003cbr/\u003e`jws_private_key` is required when using this method. `jws_cert` is also needed for Identity Providers that require an \"x5t\" header thumbprint.\u003cbr/\u003e`oidc_clientid` is commonly used to override the default Client ID when using this method.\u003cbr/\u003e`oidc_claim`, `oidc_scope`, `oidc_token`, and `jws_algorithm` are optional parameters when using this method.\u003cbr/\u003eThe database user must have the \"logon with null password\" permission.\u003cbr/\u003eThe database must be configured with Identity Provider information for Federated Authentication. These tasks are covered in the reference Teradata Vantage\u0026trade; Security Administration.\n`BROWSER` | Browser Authentication, also known as OIDC Authorization Code Flow with Proof Key for Code Exchange (PKCE) | This method is intended for interactive logon by individual users.\u003cbr/\u003e`password` and `logdata` must be omitted when using this method.\u003cbr/\u003e`user` is optional when using this method. 
When `user` is specified, it is used as the OIDC login hint and it is included in the OIDC token cache key for token retrieval.\u003cbr/\u003e`browser`, `browser_tab_timeout`, `browser_timeout`, `oauth_scopes`, `oidc_claim`, `oidc_clientid`, `oidc_prompt`, `oidc_scope`, and `oidc_token` are optional parameters when using this method.\u003cbr/\u003eBrowser Authentication is supported for Windows and macOS. Browser Authentication is not supported for other operating systems.\u003cbr/\u003eThe database user must have the \"logon with null password\" permission.\u003cbr/\u003eThe database must be configured with Identity Provider information for Federated Authentication. These tasks are covered in the reference Teradata Vantage\u0026trade; Security Administration.\n`CODE`    | OIDC Device Code Flow, also known as OIDC Device Authorization Grant | This method is intended for interactive logon by individual users.\u003cbr/\u003e`password` and `logdata` must be omitted when using this method.\u003cbr/\u003e`user` is optional when using this method. When `user` is specified, it is used as the OIDC login hint and it is included in the OIDC token cache key for token retrieval.\u003cbr/\u003e`code_append_file`, `oauth_scopes`, `oidc_claim`, `oidc_clientid`, `oidc_scope`, and `oidc_token` are optional parameters when using this method.\u003cbr/\u003eThe database user must have the \"logon with null password\" permission.\u003cbr/\u003eThe database must be configured with Identity Provider information for Federated Authentication. 
These tasks are covered in the reference Teradata Vantage\u0026trade; Security Administration.\n`CRED`    | OIDC Client Credentials Grant with client_secret_post for client authentication | This method is intended for automated logon by service accounts.\u003cbr/\u003e`user`, `password`, `oauth_scopes`, `oidc_clientid`, and `oidc_scope` must all be omitted when using this method.\u003cbr/\u003e`logdata` must contain the Client Credentials Grant request HTTP POST Form Data encoded as Content-Type application/x-www-form-urlencoded.\u003cbr/\u003e`oidc_claim` and `oidc_token` are optional parameters when using this method.\u003cbr/\u003eThe database user must have the \"logon with null password\" permission.\u003cbr/\u003eThe database must be configured with Identity Provider information for Federated Authentication. These tasks are covered in the reference Teradata Vantage\u0026trade; Security Administration.\n`JWT`     | JSON Web Token (JWT) | `logdata` must contain `token=` followed by the JSON Web Token.\u003cbr/\u003eThe database user must have the \"logon with null password\" permission.\u003cbr/\u003eYour application must obtain a valid JWT from an Identity Provider. The database must be configured to trust JWTs issued by your Identity Provider. These tasks are covered in the reference Teradata Vantage\u0026trade; Security Administration.\n`KRB5`    | GSS-API Kerberos V5 | Requires a significant number of administration tasks on the machine that is running the driver.\u003cbr/\u003eFor Kerberos Single Sign On (SSO), the database user must have the \"logon with null password\" permission.\n`LDAP`    | GSS-API Lightweight Directory Access Protocol (LDAP) | Requires a significant administration effort to set up the LDAP environment. 
These tasks are covered in the reference Teradata Vantage\u0026trade; Security Administration.\u003cbr/\u003eOnce they are complete, LDAP can be used without any additional work required on the machine that is running the driver.\n`ROPC`    | OIDC Resource Owner Password Credentials (ROPC) | This method is intended for interactive logon by individual users.\u003cbr/\u003e`logdata` must be omitted when using this method.\u003cbr/\u003e`user` and `password` are required when using this method.\u003cbr/\u003e`oauth_scopes`, `oidc_claim`, `oidc_clientid`, `oidc_scope`, and `oidc_token` are optional parameters when using this method.\u003cbr/\u003eThe database user must have the \"logon with null password\" permission.\u003cbr/\u003eThe database must be configured with Identity Provider information for Federated Authentication. These tasks are covered in the reference Teradata Vantage\u0026trade; Security Administration.\n`SECRET`  | OIDC Client Credentials Grant with client_secret_basic for client authentication | This method is intended for automated logon by service accounts.\u003cbr/\u003e`user`, `password`, and `oauth_scopes` must all be omitted when using this method.\u003cbr/\u003e`logdata` must contain the client secret.\u003cbr/\u003e`oidc_clientid` is commonly used to override the default Client ID when using this method.\u003cbr/\u003e`oidc_claim`, `oidc_scope`, and `oidc_token` are optional parameters when using this method.\u003cbr/\u003eThe database user must have the \"logon with null password\" permission.\u003cbr/\u003eThe database must be configured with Identity Provider information for Federated Authentication. These tasks are covered in the reference Teradata Vantage\u0026trade; Security Administration.\n`TD2`     | GSS-API Teradata Method 2 | Does not require any special setup, and can be used immediately.\n`TDNEGO`  | GSS-API Teradata Negotiating Mechanism | Automatically selects an appropriate GSS-API logon authentication method. 
OIDC methods are not selected.\n\n\u003ca id=\"ClientAttributes\"\u003e\u003c/a\u003e\n\n### Client Attributes\n\nClient Attributes record a variety of information about the client system and client software in the system tables `DBC.SessionTbl` and `DBC.EventLog`. Client Attributes are intended to be a replacement for the information recorded in the `LogonSource` column of the system tables `DBC.SessionTbl` and `DBC.EventLog`.\n\nThe Client Attributes are recorded at session logon time. Subsequently, the system views `DBC.SessionInfoV` and `DBC.LogOnOffV` can be queried to obtain information about the client system and client software on a per-session basis. Client Attribute values may be recorded in the database in either mixed-case or in uppercase, depending on the session character set and other factors. Analysis of recorded Client Attributes must flexibly accommodate either mixed-case or uppercase values.\n\nWarning: The information in this section is subject to change in future releases of the driver. Client Attributes can be \"mined\" for information about client system demographics; however, any applications that parse Client Attribute values must be changed if Client Attribute formats are changed in the future.\n\nClient Attributes are not intended to be used for workload management. Instead, query bands are intended for workload management. 
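Several Client Attribute columns described in the table below, such as `ClientSessionDesc` and `ClientAttributesEx`, hold name=value pairs, each terminated by a semicolon. In the database, individual values can be extracted with the `NVP` system function; a client-side sketch of the same parsing (the sample input is made up for illustration) might look like:

```javascript
// Illustrative client-side parser for semicolon-terminated name=value pairs,
// as found in columns such as ClientAttributesEx. Sample input is made up.
function parseNvp(s) {
  const result = {};
  for (const pair of s.split(";")) {
    if (!pair) continue;                 // skip the empty tail after the last ;
    const eq = pair.indexOf("=");
    result[pair.slice(0, eq)] = pair.slice(eq + 1);
  }
  return result;
}

console.log(parseNvp("SCS=UTF8;TM=A;SIP=Y;"));
// { SCS: 'UTF8', TM: 'A', SIP: 'Y' }
```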
Any use of Client Attributes for workload management may break if Client Attributes are changed, or augmented, in the future.\n\nClient Attribute            | Source   | Description\n--------------------------- | -------- | ---\n`MechanismName`             | database | The connection's logon mechanism; for example, TD2, LDAP, etc.\n`ClientIpAddress`           | database | The client IP address, as determined by the database\n`ClientTcpPortNumber`       | database | The connection's client TCP port number, as determined by the database\n`ClientIPAddrByClient`      | driver   | The client IP address, as determined by the driver\n`ClientPortByClient`        | driver   | The connection's client TCP port number, as determined by the driver\n`ClientInterfaceKind`       | driver   | The value `S` to indicate Node.js, available beginning with Teradata Database 17.20.03.19\n`ClientInterfaceVersion`    | driver   | The driver version, available beginning with Teradata Database 17.20.03.19\n`ClientProgramName`         | driver   | The client program name, followed by a streamlined call stack\n`ClientSystemUserId`        | driver   | The client user name\n`ClientOsName`              | driver   | The client operating system name\n`ClientProcThreadId`        | driver   | The client process ID\n`ClientVmName`              | driver   | Node.js runtime information\n`ClientSecProdGrp`          | driver   | Go crypto library version\n`ClientCoordName`           | driver   | The proxy server hostname and port number when a proxy server is used for a database connection\n`ClientTerminalId`          | driver   | The proxy server hostname and port number when a proxy server is used for an Identity Provider\n`ClientSessionDesc`         | driver   | TLS cipher information is available in this column as a list of name=value pairs, each terminated by a semicolon. 
Individual values can be accessed using the `NVP` system function.\n\u0026nbsp; | `C` | Y/N indicates whether the `sslcipher` connection parameter was specified\n\u0026nbsp; | `D` | the database TLS cipher\n\u0026nbsp; | `I` | the Identity Provider TLS cipher\n`ClientTdHostName`          | driver   | The database hostname as specified by the application, without any COP suffix\n`ClientCOPSuffixedHostName` | driver   | The COP-suffixed database hostname chosen by the driver\n`ServerIPAddrByClient`      | driver   | The database node's IP address, as determined by the driver\n`ServerPortByClient`        | driver   | The destination port number of the TCP connection to the database node, as determined by the driver\n`ClientConfType`            | driver   | The confidentiality type, as determined by the driver\n\u0026nbsp;                      | `V`      | TLS used for encryption, with full certificate verification\n\u0026nbsp;                      | `C`      | TLS used for encryption, with Certificate Authority (CA) verification\n\u0026nbsp;                      | `R`      | TLS used for encryption, with no certificate verification\n\u0026nbsp;                      | `E`      | TLS was not attempted, and TDGSS used for encryption\n\u0026nbsp;                      | `U`      | TLS was not attempted, and TDGSS encryption depends on central administration\n\u0026nbsp;                      | `F`      | TLS was attempted, but the TLS handshake failed, so this is a fallback to using TDGSS for encryption\n\u0026nbsp;                      | `H`      | SSLMODE was set to PREFER, but a non-TLS connection was made, and TDGSS encryption depends on central administration\n`ServerConfType`            | database | The confidentiality type, as determined by the database\n\u0026nbsp;                      | `T`      | TLS used for encryption\n\u0026nbsp;                      | `E`      | TDGSS used for encryption\n\u0026nbsp;                      | `U`      | Data transfer is 
unencrypted\n`ClientConfVersion`         | database | The TLS version as determined by the database, if this is an HTTPS/TLS connection\n`ClientConfCipherSuite`     | database | The TLS cipher as determined by the database, if this is an HTTPS/TLS connection\n`ClientEnvName`             | driver   | The OIDC metadata URL for a connection using an OIDC logon authentication mechanism\n`ClientJobId`               | driver   | The OIDC client ID for a connection using an OIDC logon authentication mechanism\n`ClientJobName`             | driver   | The OIDC scope for a connection using an OIDC logon authentication mechanism\n`ClientJobData`             | driver   | The OIDC login hint for a connection using an OIDC logon authentication mechanism\n`ClientUserOperId`          | driver   | The OIDC token kind, OIDC claim name, and claim value for a connection using an OIDC logon authentication mechanism\n`ClientWorkload`            | driver   | The scopes for acquired OAuth tokens, separated by vertical bar `\\|` characters\n`ClientAttributesEx`        | driver   | Additional Client Attributes are available in the `ClientAttributesEx` column as a list of name=value pairs, each terminated by a semicolon. 
Individual values can be accessed using the `NVP` system function.\n\u0026nbsp;                      | `AS`     | the application connection's endpoint session number\n\u0026nbsp;                      | `BA`     | Y/N indicator for Browser Authentication\n\u0026nbsp;                      | `CCS`    | the client character set\n\u0026nbsp;                      | `CERT`   | the database TLS certificate status (see [table below](#CertStatus))\n\u0026nbsp;                      | `CF`     | the `connect_function` connection parameter\n\u0026nbsp;                      | `CRC`    | the `sslcrc` connection parameter\n\u0026nbsp;                      | `CRL`    | Y/N indicator for `sslcrl` connection parameter\n\u0026nbsp;                      | `CS`     | the control session's endpoint session number\n\u0026nbsp;                      | `DL`     | this connection's database logon sequence number\n\u0026nbsp;                      | `DP`     | the `dbs_port` connection parameter\n\u0026nbsp;                      | `EL`     | this connection's endpoint logon sequence number\n\u0026nbsp;                      | `ENC`    | Y/N indicator for `encryptdata` connection parameter\n\u0026nbsp;                      | `ES`     | endpoint session number if connected to an endpoint such as Unity, Session Manager, or Business Continuity Manager; database session number otherwise\n\u0026nbsp;                      | `FIPS`   | Y/N indicator for FIPS mode\n\u0026nbsp;                      | `GO`     | the Go version\n\u0026nbsp;                      | `GOV`    | the `govern` connection parameter\n\u0026nbsp;                      | `HP`     | the `https_port` connection parameter\n\u0026nbsp;                      | `IDPC`   | the Identity Provider TLS certificate status (see [table below](#CertStatus))\n\u0026nbsp;                      | `JH`     | JWT header parameters to identify signature key\n\u0026nbsp;                      | `JWS`    | the JSON Web Signature (JWS) algorithm\n\u0026nbsp;    
                  | `LM`     | the logon authentication method\n\u0026nbsp;                      | `LOB`    | Y/N indicator for LOB support\n\u0026nbsp;                      | `NODEJS` | the Node.js version\n\u0026nbsp;                      | `OA`     | the `oauth_level` connection parameter\n\u0026nbsp;                      | `OAC`    | sequence of comma-separated OAuth token reuse counts\n\u0026nbsp;                      | `OAR`    | sequence of Y/N values to indicate OAuth refresh token availability\n\u0026nbsp;                      | `OC`     | OIDC token cache status O (off) M (miss) H (hit) X (expired)\n\u0026nbsp;                      | `OCSP`   | Y/N indicator for `sslocsp` connection parameter\n\u0026nbsp;                      | `OSL`    | Numeric level corresponding to `oidc_sslmode`\n\u0026nbsp;                      | `OSM`    | the `oidc_sslmode` connection parameter\n\u0026nbsp;                      | `PART`   | the `partition` connection parameter\n\u0026nbsp;                      | `RT`     | Y/N indicator for OIDC refresh token available\n\u0026nbsp;                      | `SCS`    | the session character set\n\u0026nbsp;                      | `SIP`    | Y/N indicator for StatementInfo parcel support\n\u0026nbsp;                      | `SSL`    | Numeric level corresponding to `sslmode`\n\u0026nbsp;                      | `SSLM`   | the `sslmode` connection parameter\n\u0026nbsp;                      | `SSLP`   | the `sslprotocol` connection parameter\n\u0026nbsp;                      | `TC`     | OIDC token reuse count\n\u0026nbsp;                      | `TM`     | the transaction mode indicator A (ANSI) or T (TERA)\n\u0026nbsp;                      | `TT`     | OIDC token time-to-live in seconds\n\u0026nbsp;                      | `TVD`    | the database TLS protocol version\n\u0026nbsp;                      | `TVI`    | the Identity Provider TLS protocol version\n\u0026nbsp;                      | `TZ`     | the current time zone\n\n\u003ca 
id=\"CertStatus\"\u003e\u003c/a\u003e\n\nThe `CERT` and `IDPC` attributes indicate the TLS certificate status of an HTTPS/TLS connection. When the attribute indicates the TLS certificate is valid (`V`) or invalid (`I`), then additional TLS certificate status details are provided as a series of comma-separated two-letter codes.\n\nCode | Description\n-----|---\n`U`  | the TLS certificate status is unavailable\n`V`  | the TLS certificate status is valid\n`I`  | the TLS certificate status is invalid\n`PU` | sslca PEM file is unavailable for server certificate verification\n`PA` | server certificate was verified using sslca PEM file\n`PR` | server certificate was rejected using sslca PEM file\n`DU` | sslcapath PEM directory is unavailable for server certificate verification\n`DA` | server certificate was verified using sslcapath PEM directory\n`DR` | server certificate was rejected using sslcapath PEM directory\n`TA` | server certificate was verified by the system\n`TR` | server certificate was rejected by the system\n`CY` | server certificate passed VERIFY-CA check\n`CN` | server certificate failed VERIFY-CA check\n`HU` | server hostname is unavailable for server certificate matching, because database IP address was specified\n`HY` | server hostname matches server certificate\n`HN` | server hostname does not match server certificate\n`RU` | resolved server hostname is unavailable for server certificate matching, because database IP address was specified\n`RY` | resolved server hostname matches server certificate\n`RN` | resolved server hostname does not match server certificate\n`IY` | IP address matches server certificate\n`IN` | IP address does not match server certificate\n`FY` | server certificate passed VERIFY-FULL check\n`FN` | server certificate failed VERIFY-FULL check\n`SU` | certificate revocation check status is unavailable\n`SG` | certificate revocation check status is good\n`SR` | certificate revocation check status is revoked\n\n\n#### LogonSource 
Column\n\nThe `LogonSource` column is obsolete and has been superseded by Client Attributes. The `LogonSource` column may be deprecated and subsequently removed in future releases of the database.\n\nWhen the driver establishes a connection to the database, the driver composes a string value that is stored in the `LogonSource` column of the system tables `DBC.SessionTbl` and `DBC.EventLog`. The `LogonSource` column is included in system views such as `DBC.SessionInfoV` and `DBC.LogOnOffV`. All `LogonSource` values are recorded in the database in uppercase.\n\nThe driver follows the format documented in the Teradata Data Dictionary, section \"System Views Columns Reference\", for network-attached `LogonSource` values. Network-attached `LogonSource` values have eight fields, separated by whitespace. The database composes fields 1 through 3, and the driver composes fields 4 through 8.\n\nField | Source   | Description\n----- | -------- | ---\n1     | database | The string `(TCP/IP)` to indicate the connection type\n2     | database | The connection's client TCP port number, in hexadecimal\n3     | database | The client IP address, as determined by the database\n4     | driver   | The database hostname as specified by the application, without any COP suffix\n5     | driver   | The client process ID\n6     | driver   | The client user name\n7     | driver   | The client program name\n8     | driver   | The string `01 LSS` to indicate the `LogonSource` string version `01`\n\n\u003ca id=\"UserStartup\"\u003e\u003c/a\u003e\n\n### User STARTUP SQL Request\n\n`CREATE USER` and `MODIFY USER` commands provide `STARTUP` clauses for specifying SQL commands to establish initial session settings. The following table lists several of the SQL commands that may be used to establish initial session settings.\n\nCategory                 | SQL command\n------------------------ | ---\nDiagnostic settings      | `DIAGNOSTIC` ... 
`FOR SESSION`\nSession query band       | `SET QUERY_BAND` ... `FOR SESSION`\nUnicode Pass Through     | `SET SESSION CHARACTER SET UNICODE PASS THROUGH ON`\nTransaction isolation    | `SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL`\nCollation sequence       | `SET SESSION COLLATION`\nTemporal qualifier       | `SET SESSION CURRENT VALIDTIME AND CURRENT TRANSACTIONTIME`\nDate format              | `SET SESSION DATEFORM`\nFunction tracing         | `SET SESSION FUNCTION TRACE`\nSession time zone        | `SET TIME ZONE`\n\nFor example, the following command sets a `STARTUP` SQL request for user `susan` to establish read-uncommitted transaction isolation after logon.\n\n    MODIFY USER susan AS STARTUP='SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL RU'\n\nThe driver's `runstartup` connection parameter must be `true` to execute the user's `STARTUP` SQL request after logon. The default for `runstartup` is `false`. If the `runstartup` connection parameter is omitted or `false`, then the user's `STARTUP` SQL request will not be executed.\n\n\u003ca id=\"TransactionMode\"\u003e\u003c/a\u003e\n\n### Transaction Mode\n\nThe `tmode` connection parameter enables an application to specify the transaction mode for the connection.\n* `\"tmode\":\"ANSI\"` provides American National Standards Institute (ANSI) transaction semantics. This mode is recommended.\n* `\"tmode\":\"TERA\"` provides legacy Teradata transaction semantics. This mode is only recommended for legacy applications that require Teradata transaction semantics.\n* `\"tmode\":\"DEFAULT\"` provides the default transaction mode configured for the database, which may be either ANSI or TERA mode. `\"tmode\":\"DEFAULT\"` is the default when the `tmode` connection parameter is omitted.\n\nWhile ANSI mode is generally recommended, please note that every application is different, and some applications may need to use TERA mode. 
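Selecting the mode is a single connection parameter. For example (a sketch only; the host and credentials below are placeholders, not working values):

```javascript
// Hedged sketch: the transaction mode is chosen with the tmode connection
// parameter. Host and credentials are placeholders.
const connParams = {
  host: "whomooz",
  user: "guest",
  password: "please",
  tmode: "ANSI",   // "TERA" or "DEFAULT" are the other choices
};
```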
The following differences between ANSI and TERA mode might affect a typical user or application:\n1. Silent truncation of inserted data occurs in TERA mode, but not ANSI mode. In ANSI mode, the database returns an error instead of truncating data.\n2. Tables created in ANSI mode are `MULTISET` by default. Tables created in TERA mode are `SET` tables by default.\n3. For tables created in ANSI mode, character columns are `CASESPECIFIC` by default. For tables created in TERA mode, character columns are `NOT CASESPECIFIC` by default.\n4. In ANSI mode, character literals are `CASESPECIFIC`. In TERA mode, character literals are `NOT CASESPECIFIC`.\n\nThe last two behavior differences, taken together, may cause character data comparisons (such as in `WHERE` clause conditions) to be case-insensitive in TERA mode, but case-sensitive in ANSI mode. This, in turn, can produce different query results in ANSI mode versus TERA mode. Comparing two `NOT CASESPECIFIC` expressions is case-insensitive regardless of mode, and comparing a `CASESPECIFIC` expression to another expression of any kind is case-sensitive regardless of mode. You may explicitly `CAST` an expression to be `CASESPECIFIC` or `NOT CASESPECIFIC` to obtain the character data comparison required by your application.\n\nThe Teradata Reference / *SQL Request and Transaction Processing* recommends that ANSI mode be used for all new applications. The primary benefit of using ANSI mode is that inadvertent data truncation is avoided. In contrast, when using TERA mode, silent data truncation can occur when data is inserted, because silent data truncation is a feature of TERA mode.\n\nA drawback of using ANSI mode is that you can only call stored procedures that were created using ANSI mode, and you cannot call stored procedures that were created using TERA mode. It may not be possible to switch over to ANSI mode exclusively, because you may have some legacy applications that require TERA mode to work properly. 
You can work around this drawback by creating your stored procedures twice, in two different users/databases, once using ANSI mode, and once using TERA mode.\n\nRefer to the Teradata Reference / *SQL Request and Transaction Processing* for complete information regarding the differences between ANSI and TERA transaction modes.\n\n\n\u003ca id=\"AutoCommit\"\u003e\u003c/a\u003e\n\n### Auto-Commit\n\nThe driver provides auto-commit on and off functionality for both ANSI and TERA mode.\n\nWhen a connection is first established, it begins with the default auto-commit setting, which is on. When auto-commit is on, the driver is solely responsible for managing transactions, and the driver commits each SQL request that is successfully executed. An application should not execute any transaction management SQL commands when auto-commit is on. An application should not call the `commit` method or the `rollback` method when auto-commit is on.\n\nAn application can manage transactions itself by setting the connection's `.autocommit` attribute to `false` to turn off auto-commit.\n\n    con.autocommit = false\n\nWhen auto-commit is off, the driver leaves the current transaction open after each SQL request is executed, and the application is responsible for committing or rolling back the transaction by calling the `commit` or the `rollback` method, respectively.\n\nAuto-commit remains turned off until the application turns it back on by setting the connection's `.autocommit` attribute to `true`.\n\n    con.autocommit = true\n\nBest practices recommend that an application avoid executing database-vendor-specific transaction management commands such as `BT`, `ET`, `ABORT`, `COMMIT`, or `ROLLBACK`, because such commands differ from one vendor to another. (They even differ between Teradata's two modes ANSI and TERA.) Instead, best practices recommend that an application only call the standard methods `commit` and `rollback` for transaction management.\n1. 
When auto-commit is on in ANSI mode, the driver automatically executes `COMMIT` after every successful SQL request.\n2. When auto-commit is off in ANSI mode, the driver does not automatically execute `COMMIT`. When the application calls the `commit` method, then the driver executes `COMMIT`.\n3. When auto-commit is on in TERA mode, the driver does not execute `BT` or `ET`, unless the application explicitly executes `BT` or `ET` commands itself, which is not recommended.\n4. When auto-commit is off in TERA mode, the driver executes `BT` before submitting the application's first SQL request of a new transaction. When the application calls the `commit` method, then the driver repeatedly executes `ET` commands until the transaction is complete.\n\nAs part of the wire protocol between the database and Teradata client interface software (such as this driver), each message transmitted from the database to the client has a bit designated to indicate whether the session has a transaction in progress. Thus, the client interface software is kept informed as to whether the session has a transaction in progress.\n\nIn TERA mode with auto-commit off, when the application uses the driver to execute a SQL request, if the session does not have a transaction in progress, then the driver automatically executes `BT` before executing the application's SQL request. Subsequently, in TERA mode with auto-commit off, when the application uses the driver to execute another SQL request, and the session already has a transaction in progress, then the driver has no need to execute `BT` before executing the application's SQL request.\n\nIn TERA mode, `BT` and `ET` pairs can be nested, and the database keeps track of the nesting level. The outermost `BT`/`ET` pair defines the transaction scope; inner `BT`/`ET` pairs have no effect on the transaction because the database does not provide actual transaction nesting. 
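The nesting behavior just described can be pictured as a counter driven by `BT` and `ET`. The following is an illustrative simulation only, not driver code; the `execute` function and its return value stand in for the database's "transaction in progress" bit:

```javascript
// Illustrative simulation of BT/ET nesting (not driver code): the database
// tracks a nesting level, and the transaction completes only when the
// outermost ET unwinds the level back to zero.
let nestingLevel = 0;
const execute = (sql) => {
  if (sql === "BT") nestingLevel += 1;
  if (sql === "ET") nestingLevel -= 1;
  return nestingLevel > 0;          // the "transaction in progress" bit
};

execute("BT");                      // outermost BT opens the transaction
execute("BT");                      // nested BT does not widen the scope
execute("ET");                      // unwinds one level; still in progress
const inProgress = execute("ET");   // unwinds to zero; transaction complete
// inProgress is now false
```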
To commit the transaction, `ET` commands must be repeatedly executed until the nesting is unwound. The Teradata wire protocol bit (mentioned earlier) indicates when the nesting is unwound and the transaction is complete. When the application calls the `commit` method in TERA mode, the driver repeatedly executes `ET` commands until the nesting is unwound and the transaction is complete.\n\nIn rare cases, an application may not follow best practices and may explicitly execute transaction management commands. Such an application must turn off auto-commit before executing transaction management commands such as `BT`, `ET`, `ABORT`, `COMMIT`, or `ROLLBACK`. The application is responsible for executing the appropriate commands for the transaction mode in effect. TERA mode commands are `BT`, `ET`, and `ABORT`. ANSI mode commands are `COMMIT` and `ROLLBACK`. An application must take special care when opening a transaction in TERA mode with auto-commit off. In TERA mode with auto-commit off, when the application executes a SQL request, if the session does not have a transaction in progress, then the driver automatically executes `BT` before executing the application's SQL request. 
Therefore, the application should not begin a transaction by executing `BT`.

    // TERA mode example showing undesirable BT/ET nesting
    con.autocommit = false
    cur.execute("BT") // BT automatically executed by the driver before this, and produces a nested BT
    cur.execute("insert into mytable1 values(1, 2)")
    cur.execute("insert into mytable2 values(3, 4)")
    cur.execute("ET") // unwind nesting
    cur.execute("ET") // complete transaction

    // TERA mode example showing how to avoid BT/ET nesting
    con.autocommit = false
    cur.execute("insert into mytable1 values(1, 2)") // BT automatically executed by the driver before this
    cur.execute("insert into mytable2 values(3, 4)")
    cur.execute("ET") // complete transaction

Please note that neither of the previous examples shows best practices. Best practices recommend that an application only call the standard methods `commit` and `rollback` for transaction management.

    // Example showing best practice
    con.autocommit = false
    cur.execute("insert into mytable1 values(1, 2)")
    cur.execute("insert into mytable2 values(3, 4)")
    con.commit()

<a id="DataTypes"></a>

### Data Types

The table below lists the Teradata Database data types supported by the Teradata SQL Driver for Node.js, and indicates the corresponding JavaScript data type returned in result set rows.

Teradata Database data type        | Result set JavaScript data type   | With `teradata_values` as `"false"`
---------------------------------- | --------------------------------- | ---
`BIGINT`                           | `BigInt`                          |
`BLOB`                             | `Uint8Array`                      |
`BYTE`                             | `Uint8Array`                      |
`BYTEINT`                          | `Number`                          |
`CHAR`                             | `String`                          |
`CLOB`                             | `String`                          |
`DATE`                             | `String`                          |
`DECIMAL`                          | `Number`                          | `String`
`FLOAT`                            | `Number`                          |
`INTEGER`                          | `Number`                          |
`INTERVAL YEAR`                    | `String`                          |
`INTERVAL YEAR TO MONTH`           | `String`                          |
`INTERVAL MONTH`                   | `String`                          |
`INTERVAL DAY`                     | `String`                          |
`INTERVAL DAY TO HOUR`             | `String`                          |
`INTERVAL DAY TO MINUTE`           | `String`                          |
`INTERVAL DAY TO SECOND`           | `String`                          |
`INTERVAL HOUR`                    | `String`                          |
`INTERVAL HOUR TO MINUTE`          | `String`                          |
`INTERVAL HOUR TO SECOND`          | `String`                          |
`INTERVAL MINUTE`                  | `String`                          |
`INTERVAL MINUTE TO SECOND`        | `String`                          |
`INTERVAL SECOND`                  | `String`                          |
`NUMBER`                           | `Number`                          | `String`
`PERIOD(DATE)`                     | `String`                          |
`PERIOD(TIME)`                     | `String`                          |
`PERIOD(TIME WITH TIME ZONE)`      | `String`                          |
`PERIOD(TIMESTAMP)`                | `String`                          |
`PERIOD(TIMESTAMP WITH TIME ZONE)` | `String`                          |
`SMALLINT`                         | `Number`                          |
`TIME`                             | `String`                          |
`TIME WITH TIME ZONE`              | `String`                          |
`TIMESTAMP`*                       | `Date`                            | `String`
`TIMESTAMP WITH TIME ZONE`         | `String`                          |
`VARBYTE`                          | `Uint8Array`                      |
`VARCHAR`                          | `String`                          |
`XML`                              | `String`                          |

\* The `TIMESTAMP` value retrieved from the server is regarded as universal time (i.e., UTC) by the Teradata SQL Driver for Node.js.

The table below lists the parameterized SQL bind-value JavaScript data types supported by the Teradata SQL Driver for Node.js, and indicates the corresponding Teradata Database data type transmitted to the server.

Bind-value JavaScript data type   | Teradata Database data type
--------------------------------- | ---
`Boolean`                         | Not supported
`BigInt`                          | `BIGINT`
`Date`                            | `TIMESTAMP(3)`
`Number`                          | `FLOAT`
`String`                          | `VARCHAR`
`Symbol`                          | Not supported
`Uint8Array`                      | `VARBYTE`

The Teradata SQL Driver for Node.js transmits the UTC time of a JavaScript `Date` object to the server as the `TIMESTAMP(3)` value.

Transforms are used for SQL `ARRAY` data values, which can be transferred to and from the database as `VARCHAR` values.

Transforms are used for structured UDT data values, which can be transferred to and from the database as `VARCHAR` values.

<a id="NullValues"></a>

### Null Values

SQL `NULL` values received from the Teradata Database are returned in result set rows as JavaScript `null` values.

A JavaScript `null` value bound to a question-mark parameter marker is transmitted to the Teradata Database as a `NULL` `VARCHAR` value.

The database does not provide automatic or implicit conversion of a `NULL` `VARCHAR`
value to a different destination data type.

* For `NULL` column values in a batch, the driver automatically converts the `NULL` values to match the data type of the non-`NULL` values in the same column.
* For solitary `NULL` values, your application may need to explicitly specify the data type with the `teradata_parameter` escape function, in order to avoid database error 3532 for a non-permitted data type conversion.

Given a table with a destination column of `BYTE(4)`, the database would reject the following SQL with database error 3532 "Conversion between BYTE data and other types is illegal."

    cur.execute("update mytable set bytecolumn = ?", [null]) // fails with database error 3532

To avoid database error 3532 in this situation, your application must use the `teradata_parameter` escape function to specify the data type for the question-mark parameter marker.

    cur.execute("{fn teradata_parameter(1, BYTE(4))}update mytable set bytecolumn = ?", [null])

<a id="UndefinedValues"></a>

### Undefined Values

A JavaScript `undefined` value bound to a question-mark parameter marker is transmitted to the Teradata Database as a `NULL` `VARCHAR` value.

<a id="CharacterExportWidth"></a>

### Character Export Width

The driver always uses the UTF8 session character set, and the `charset` connection parameter is not supported. Be aware of the database's *Character Export Width* behavior, which adds trailing space padding to fixed-width `CHAR` data type result set column values when the UTF8 session character set is used.

The database `CHAR(`_n_`)` data type is a fixed-width data type (holding _n_ characters), and the database reserves a fixed number of bytes for the `CHAR(`_n_`)` data type in response spools and in network message traffic.

UTF8 is a variable-width character encoding scheme that requires a varying number of bytes for each character.
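For a concrete sense of that variable width, the Node.js `Buffer.byteLength` function (standard Node.js API, unrelated to the driver, shown only for illustration) reports the UTF-8 byte count of a string:

```javascript
// UTF-8 uses one to four bytes per character, so strings with the same
// character count can occupy different numbers of bytes.
const samples = ["A", "\u00E9", "\u20AC", "\u{1F600}"]; // Latin letter, é, €, emoji
for (const ch of samples) {
  console.log(JSON.stringify(ch), Buffer.byteLength(ch, "utf8")); // prints 1, 2, 3, 4
}
```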
When the UTF8 session character set is used, the database reserves the maximum number of bytes that the `CHAR(`_n_`)` data type could occupy in response spools and in network message traffic. When the UTF8 session character set is used, the database appends padding characters to the tail end of `CHAR(`_n_`)` values smaller than the reserved maximum size, so that the `CHAR(`_n_`)` values all occupy the same fixed number of bytes in response spools and in network message traffic.

Work around this drawback by using `CAST` or `TRIM` in SQL `SELECT` statements, or in views, to convert fixed-width `CHAR` data types to `VARCHAR`.

Given a table with fixed-width `CHAR` columns:

`CREATE TABLE MyTable (c1 CHAR(10), c2 CHAR(10))`

Original query that produces trailing space padding:

`SELECT c1, c2 FROM MyTable`

Modified query with either `CAST` or `TRIM` to avoid trailing space padding:

`SELECT CAST(c1 AS VARCHAR(10)), TRIM(TRAILING FROM c2) FROM MyTable`

Or wrap the query in a view with `CAST` or `TRIM` to avoid trailing space padding:

`CREATE VIEW MyView (c1, c2) AS SELECT CAST(c1 AS VARCHAR(10)), TRIM(TRAILING FROM c2) FROM MyTable`

`SELECT c1, c2 FROM MyView`

This technique is also demonstrated in the sample program `CharPadding.ts`.

<a id="ModuleConstructors"></a>

### Module Constructors

`teradatasql.connect(` *ConnectionObject* `, ` *ConnectionJSONString* `)`

Creates a connection to the database and returns a Connection object.

The first argument is an optional JavaScript object that defaults to `{}`. The second argument is an optional JSON string that defaults to `"{}"`.
Specify connection parameters as a JavaScript object, a JSON string, or a combination of the two.

When a combination of parameters is specified, connection parameters specified in the JSON string take precedence over same-named connection parameters specified in the JavaScript object.

---

`teradatasql.date(` *Year* `,` *Month* `,` *Day* `)`

Creates and returns a `string` value in the 'YYYY-MM-DD' format.

---

`teradatasql.dateFromTicks(` *Seconds* `)`

Creates and returns a `string` value corresponding to the specified number of seconds after 1970-01-01 00:00:00 in the 'YYYY-MM-DD HH:MM:SS.FFFFFF' format.

---

`teradatasql.timestamp(` *Year* `,` *Month* `,` *Day* `,` *Hour* `,` *Minute* `,` *Second* `)`

Creates and returns a `Date` value.

---

`teradatasql.timestampFromTicks(` *Seconds* `)`

Creates and returns a `Date` value corresponding to the specified number of seconds after 1970-01-01 00:00:00.

<a id="ModuleGlobals"></a>

<a id="ModuleExceptions"></a>

### Module Exceptions

`teradatasql.Error` is the base class for other exceptions.
* `teradatasql.InterfaceError` is raised for errors related to the driver. Not supported yet.
* `teradatasql.DatabaseError` is raised for errors related to the database.
  * `teradatasql.DataError` is raised for data value errors such as division by zero. Not supported yet.
  * `teradatasql.IntegrityError` is raised for referential integrity violations. Not supported yet.
  * `teradatasql.OperationalError` is raised for errors related to the database's operation.
  * `teradatasql.ProgrammingError` is raised for SQL object existence errors and SQL syntax errors. Not supported yet.

<a id="ConnectionAttributes"></a>

### Connection Attributes

`.autocommit`

Read/write `boolean` attribute for the connection's auto-commit setting.
Defaults to `true`, meaning auto-commit is turned on.

<a id="ConnectionMethods"></a>

### Connection Methods

`.cancel()`

Attempts to cancel the currently executing SQL request. Does nothing if no SQL request is executing.

Only the SQL request executed by the `.executeAsync()` method or the `.executemanyAsync()` method can be cancelled.

---

`.close()`

Closes the Connection.

---

`.commit()`

Commits the current transaction.

---

`.cursor()`

Creates and returns a new Cursor object for the Connection.

---

`.nativeSQL(` *SQLRequest* `)`

Returns the specified SQL request text after conversion to native Teradata SQL. Equivalent to the JDBC API `Connection.nativeSQL` method.

The `{fn teradata_nativesql}` escape clause is automatically prepended to the SQL request before processing.

---

`.rollback()`

Rolls back the current transaction.

<a id="CursorAttributes"></a>

### Cursor Attributes

`.arraysize`

Read/write `number` attribute specifying the number of rows to fetch at a time with the `.fetchmany()` method and the `.fetchall()` method. Defaults to `1`, meaning fetch a single row at a time.

---

`.columntypename`

Read-only attribute consisting of a sequence of result set column type names, available after a SQL request is executed.

---

`.connection`

Read-only attribute indicating the Cursor's parent Connection object.

---

`.description`

Read-only attribute consisting of a sequence of seven-item sequences that each describe a result set column, available after a SQL request is executed.
* `.description[`*Column*`][0]` provides the column name.
* `.description[`*Column*`][1]` provides the column type code as an object comparable to one of the Type Objects listed below.
* `.description[`*Column*`][2]` provides the column display size in characters.
Not implemented yet.
* `.description[`*Column*`][3]` provides the column size in bytes.
* `.description[`*Column*`][4]` provides the column precision if applicable, or `null` otherwise.
* `.description[`*Column*`][5]` provides the column scale if applicable, or `null` otherwise.
* `.description[`*Column*`][6]` provides the column nullability as `true` or `false`.

---

`.rowcount`

Read-only `BigInt` attribute indicating the number of rows returned from, or affected by, the current SQL statement.

<a id="CursorMethods"></a>

### Cursor Methods

`.callproc(` *ProcedureName* `,` *OptionalSequenceOfParameterValues* `)`

Calls the stored procedure specified by *ProcedureName*.
Provide the second argument as a sequence of `IN` and `INOUT` parameter values to bind the values to question-mark parameter markers in the SQL request.
Specifying parameter values as a mapping is not supported.
Returns a result set consisting of the `INOUT` parameter output values, if any, followed by any dynamic result sets.

`OUT` parameters are not supported by this method. Use `.execute` to call a stored procedure with `OUT` parameters.

---

`.close()`

Closes the Cursor.

---

`.execute(` *SQLRequest* `,` *OptionalSequenceOfParameterValues* `, ignoreErrors=` *OptionalSequenceOfIgnoredErrorCodes* `)`

Executes the SQL request.
If a sequence of parameter values is provided as the second argument, the values will be bound to question-mark parameter markers in the SQL request. Specifying parameter values as a mapping is not supported.

---

`.executeAsync(` *SQLRequest* `,` *OptionalSequenceOfParameterValues* `, ignoreErrors=` *OptionalSequenceOfIgnoredErrorCodes* `)`

Asynchronously executes the SQL request. Returns a `Promise`.
If a sequence of parameter values is provided as the second argument, the values will be bound to question-mark parameter markers in the SQL request.
Specifying parameter values as a mapping is not supported.

---

`.executemany(` *SQLRequest* `,` *SequenceOfSequencesOfParameterValues* `, ignoreErrors=` *OptionalSequenceOfIgnoredErrorCodes* `)`

Executes the SQL request as an iterated SQL request for the batch of parameter values.
The batch of parameter values must be specified as a sequence of sequences. Specifying parameter values as a mapping is not supported.

The `ignoreErrors` parameter is optional. The ignored error codes must be specified as a sequence of integers.

---

`.executemanyAsync(` *SQLRequest* `,` *SequenceOfSequencesOfParameterValues* `, ignoreErrors=` *OptionalSequenceOfIgnoredErrorCodes* `)`

Asynchronously executes the SQL request as an iterated SQL request for the batch of parameter values. Returns a `Promise`.
The batch of parameter values must be specified as a sequence of sequences. Specifying parameter values as a mapping is not supported.

The `ignoreErrors` parameter is optional. The ignored error codes must be specified as a sequence of integers.

---

`.fetchall()`

Fetches all remaining rows of the current result set.
Returns a sequence of sequences of column values.

---

`.fetchmany(` *OptionalRowCount* `)`

Fetches the next series of rows of the current result set.
The argument specifies the number of rows to fetch.
If no argument is provided, then the Cursor's `.arraysize` attribute determines the number of rows to fetch.
Returns a sequence of sequences of column values, or an empty sequence to indicate that all rows have been fetched.

---

`.fetchone()`

Fetches the next row of the current result set.
Returns a sequence of column values, or `null` to indicate that all rows have been fetched.

---

`.nextset()`

Advances to the next result set.
Returns `true` if another result set is available, or `null` to indicate that all result sets have been fetched.

---

`.setinputsizes(` *SequenceOfTypesOrSizes* `)`

Has no effect.

---

`.setoutputsize(` *Size* `,` *OptionalColumnIndex* `)`

Has no effect.

<a id="TypeObjects"></a>

### Type Objects

`teradatasql.BINARY`

Identifies a SQL `BLOB`, `BYTE`, or `VARBYTE` column as a binary data type when compared with the Cursor's description attribute.

`.description[`*Column*`][1] == teradatasql.BINARY`

---

`teradatasql.DATE`

Identifies a SQL `TIMESTAMP (WITHOUT TIME ZONE)` column as a date data type when compared with the Cursor's description attribute.

`.description[`*Column*`][1] == teradatasql.DATE`

---

`teradatasql.NUMBER`

Identifies a SQL `BIGINT`, `BYTEINT`, `DECIMAL`, `FLOAT`, `INTEGER`, `NUMBER`, or `SMALLINT` column as a numeric data type when compared with the Cursor's description attribute.

`.description[`*Column*`][1] == teradatasql.NUMBER`

---

`teradatasql.STRING`

Identifies a SQL `CHAR`, `CLOB`, `INTERVAL`, `PERIOD`, `VARCHAR`, `DATE`, `TIME`, `TIME WITH TIME ZONE`, or `TIMESTAMP WITH TIME ZONE` column as a character data type when compared with the Cursor's description attribute.

`.description[`*Column*`][1] == teradatasql.STRING`

<a id="EscapeSyntax"></a>

### Escape Syntax

The driver accepts most of the JDBC escape clauses offered by the Teradata JDBC Driver.

#### Date and Time Literals

Date and time literal escape clauses are replaced by the corresponding SQL literal before the SQL request text is transmitted to the database.

Literal Type | Format
------------ | ------
Date         | `{d '`*yyyy-mm-dd*`'}`
Time         | `{t '`*hh:mm:ss*`'}`
Timestamp    | `{ts '`*yyyy-mm-dd hh:mm:ss*`'}`
Timestamp    | `{ts '`*yyyy-mm-dd hh:mm:ss.f*`'}`

For timestamp literal escape clauses, the decimal point and fractional digits may be omitted, or 1 to 6 fractional digits *f* may be specified after a decimal point.

#### Scalar Functions

Scalar function escape clauses are replaced by the corresponding SQL expression before the SQL request text is transmitted to the database.

Numeric Function                       | Returns
-------------------------------------- | ---
`{fn ABS(`*number*`)}`                 | Absolute value of *number*
`{fn ACOS(`*float*`)}`                 | Arccosine, in radians, of *float*
`{fn ASIN(`*float*`)}`                 | Arcsine, in radians, of *float*
`{fn ATAN(`*float*`)}`                 | Arctangent, in radians, of *float*
`{fn ATAN2(`*y*`,`*x*`)}`              | Arctangent, in radians, of *y* / *x*
`{fn CEILING(`*number*`)}`             | Smallest integer greater than or equal to *number*
`{fn COS(`*float*`)}`                  | Cosine of *float* radians
`{fn COT(`*float*`)}`                  | Cotangent of *float* radians
`{fn DEGREES(`*number*`)}`             | Degrees in *number* radians
`{fn EXP(`*float*`)}`                  | *e* raised to the power of *float*
`{fn FLOOR(`*number*`)}`               | Largest integer less than or equal to *number*
`{fn LOG(`*float*`)}`                  | Natural (base *e*) logarithm of *float*
`{fn LOG10(`*float*`)}`                | Base 10 logarithm of *float*
`{fn MOD(`*integer1*`,`*integer2*`)}`  | Remainder for *integer1* / *integer2*
`{fn PI()}`                            | The constant pi, approximately equal to 3.14159...
`{fn POWER(`*number*`,`*integer*`)}`   | *number* raised to *integer* power
`{fn RADIANS(`*number*`)}`             | Radians in *number* degrees
`{fn RAND(`*seed*`)}`                  | A random float value such that 0 &le; value < 1, and *seed* is ignored
`{fn ROUND(`*number*`,`*places*`)}`    | *number* rounded to *places*
`{fn SIGN(`*number*`)}`                | -1 if *number* is negative; 0 if *number* is 0; 1 if *number* is positive
`{fn SIN(`*float*`)}`                  | Sine of *float* radians
`{fn SQRT(`*float*`)}`                 | Square root of *float*
`{fn TAN(`*float*`)}`                  | Tangent of *float* radians
`{fn TRUNCATE(`*number*`,`*places*`)}` | *number* truncated to *places*

String Function                                                | Returns
-------------------------------------------------------------- | ---
`{fn ASCII(`*string*`)}`                                       | ASCII code of the first character in *string*
`{fn CHAR(`*code*`)}`                                          | Character with ASCII *code*
`{fn CHAR_LENGTH(`*string*`)}`                                 | Length in characters of *string*
`{fn CHARACTER_LENGTH(`*string*`)}`                            | Length in characters of *string*
`{fn CONCAT(`*string1*`,`*string2*`)}`                         | String formed by concatenating *string1* and *string2*
`{fn DIFFERENCE(`*string1*`,`*string2*`)}`                     | A number from 0 to 4 that indicates the phonetic similarity of *string1* and *string2* based on their Soundex codes, such that a larger return value indicates greater phonetic similarity; 0 indicates no similarity, 4 indicates strong similarity
`{fn INSERT(`*string1*`,`*position*`,`*length*`,`*string2*`)}` | String formed by replacing the *length*-character segment of *string1* at *position* with *string2*, available beginning with Teradata Database 15.0
`{fn LCASE(`*string*`)}`                                       | String formed by replacing all uppercase characters in *string* with their lowercase equivalents
`{fn LEFT(`*string*`,`*count*`)}`                              | Leftmost *count* characters of *string*
`{fn LENGTH(`*string*`)}`                                      | Length in characters of *string*
`{fn LOCATE(`*string1*`,`*string2*`)}`                         | Position in *string2* of the first occurrence of *string1*, or 0 if *string2* does not contain *string1*
`{fn LTRIM(`*string*`)}`                                       | String formed by removing leading spaces from *string*
`{fn OCTET_LENGTH(`*string*`)}`                                | Length in octets (bytes) of *string*
`{fn POSITION(`*string1*` IN `*string2*`)}`                    | Position in *string2* of the first occurrence of *string1*, or 0 if *string2* does not contain *string1*
`{fn REPEAT(`*string*`,`*count*`)}`                            | String formed by repeating *string* *count* times, available beginning with Teradata Database 15.0
`{fn REPLACE(`*string1*`,`*string2*`,`*string3*`)}`            | String formed by replacing all occurrences of *string2* in *string1* with *string3*
`{fn RIGHT(`*string*`,`*count*`)}`                             | Rightmost *count* characters of *string*, available beginning with Teradata Database 15.0
`{fn RTRIM(`*string*`)}`                                       | String formed by removing trailing spaces from *string*
`{fn SOUNDEX(`*string*`)}`                                     | Soundex code for *string*
`{fn SPACE(`*count*`)}`                                        | String consisting of *count* spaces
`{fn SUBSTRING(`*string*`,`*position*`,`*length*`)}`           | The *length*-character segment of *string* at *position*
`{fn UCASE(`*string*`)}`                                       | String formed by replacing all lowercase characters in *string* with their uppercase equivalents

System Function                         | Returns
--------------------------------------- | ---
`{fn DATABASE()}`                       | Current default database name
`{fn IFNULL(`*expression*`,`*value*`)}` | *expression* if *expression* is not NULL, or *value* if *expression* is NULL
`{fn USER()}`                           | Logon user name, which may differ from the current authorized user name after `SET QUERY_BAND` sets a proxy user

Time/Date Function                                                 | Returns
------------------------------------------------------------------ | ---
`{fn CURDATE()}`                                                   | Current date
`{fn CURRENT_DATE()}`                                              | Current date
`{fn CURRENT_TIME()}`                                              | Current time
`{fn CURRENT_TIMESTAMP()}`                                         | Current date and time
`{fn CURTIME()}`                                                   | Current time
`{fn DAYOFMONTH(`*date*`)}`                                        | Integer from 1 to 31 indicating the day of month in *date*
`{fn EXTRACT(YEAR FROM `*value*`)}`                                | The year component of the date and/or time *value*
`{fn EXTRACT(MONTH FROM `*value*`)}`                               | The month component of the date and/or time *value*
`{fn EXTRACT(DAY FROM `*value*`)}`                                 | The day component of the date and/or time *value*
`{fn EXTRACT(HOUR FROM `*value*`)}`                                | The hour component of the date and/or time *value*
`{fn EXTRACT(MINUTE FROM `*value*`)}`                              | The minute component of the date and/or time *value*
`{fn EXTRACT(SECOND FROM `*value*`)}`                              | The second component of the date and/or time *value*
`{fn HOUR(`*time*`)}`                                              | Integer from 0 to 23 indicating the hour of *time*
`{fn MINUTE(`*time*`)}`                                            | Integer from 0 to 59 indicating the minute of *time*
`{fn MONTH(`*date*`)}`                                             | Integer from 1 to 12 indicating the month of *date*
`{fn NOW()}`                                                       | Current date and time
`{fn SECOND(`*time*`)}`                                            | Integer from 0 to 59 indicating the second of *time*
`{fn TIMESTAMPADD(SQL_TSI_YEAR,`*count*`,`*timestamp*`)}`          | Timestamp formed by adding *count* years to *timestamp*
`{fn TIMESTAMPADD(SQL_TSI_MONTH,`*count*`,`*timestamp*`)}`         | Timestamp formed by adding *count* months to *timestamp*
`{fn TIMESTAMPADD(SQL_TSI_DAY,`*count*`,`*timestamp*`)}`           | Timestamp formed by adding *count* days to *timestamp*
`{fn TIMESTAMPADD(SQL_TSI_HOUR,`*count*`,`*timestamp*`)}`          | Timestamp formed by adding *count* hours to *timestamp*
`{fn TIMESTAMPADD(SQL_TSI_MINUTE,`*count*`,`*timestamp*`)}`        | Timestamp formed by adding *count* minutes to *timestamp*
`{fn TIMESTAMPADD(SQL_TSI_SECOND,`*count*`,`*timestamp*`)}`        | Timestamp formed by adding *count* seconds to *timestamp*
`{fn TIMESTAMPDIFF(SQL_TSI_YEAR,`*timestamp1*`,`*timestamp2*`)}`   | Number of years by which *timestamp2* exceeds *timestamp1*
`{fn TIMESTAMPDIFF(SQL_TSI_MONTH,`*timestamp1*`,`*timestamp2*`)}`  | Number of months by which *timestamp2* exceeds *timestamp1*
`{fn TIMESTAMPDIFF(SQL_TSI_DAY,`*timestamp1*`,`*timestamp2*`)}`    | Number of days by which *timestamp2* exceeds *timestamp1*
`{fn TIMESTAMPDIFF(SQL_TSI_HOUR,`*timestamp1*`,`*timestamp2*`)}`   | Number of hours by which *timestamp2* exceeds *timestamp1*
`{fn TIMESTAMPDIFF(SQL_TSI_MINUTE,`*timestamp1*`,`*timestamp2*`)}` | Number of minutes by which *timestamp2* exceeds *timestamp1*
`{fn TIMESTAMPDIFF(SQL_TSI_SECOND,`*timestamp1*`,`*timestamp2*`)}` | Number of seconds by which *timestamp2* exceeds *timestamp1*
`{fn YEAR(`*date*`)}`                                              | The year of *date*

#### Conversion Functions

Conversion function escape clauses are replaced by the corresponding SQL expression before the SQL request text is transmitted to the database.

Conversion Function                                             | Returns
--------------------------------------------------------------- | ---
`{fn CONVERT(`*value*`, SQL_BIGINT)}`                           | *value* converted to SQL `BIGINT`
`{fn CONVERT(`*value*`, SQL_BINARY(`*size*`))}`                 | *value* converted to SQL `BYTE(`*size*`)`
`{fn CONVERT(`*value*`, SQL_CHAR(`*size*`))}`                   | *value* converted to SQL `CHAR(`*size*`)`
`{fn CONVERT(`*value*`, SQL_DATE)}`                             | *value* converted to SQL `DATE`
`{fn CONVERT(`*value*`, SQL_DECIMAL(`*precision*`,`*scale*`))}` | *value* converted to SQL `DECIMAL(`*precision*`,`*scale*`)`
`{fn CONVERT(`*value*`, SQL_DOUBLE)}`                           | *value* converted to SQL `DOUBLE PRECISION`, a synonym for `FLOAT`
`{fn CONVERT(`*value*`, SQL_FLOAT)}`                            | *value* converted to SQL `FLOAT`
`{fn CONVERT(`*value*`, SQL_INTEGER)}`                          | *value* converted to SQL `INTEGER`
`{fn CONVERT(`*value*`, SQL_LONGVARBINARY)}`                    | *value* converted to SQL `VARBYTE(64000)`
`{fn CONVERT(`*value*`, SQL_LONGVARCHAR)}`                      | *value* converted to SQL `LONG VARCHAR`
`{fn CONVERT(`*value*`, SQL_NUMERIC)}`                          | *value* converted to SQL `NUMBER`
`{fn CONVERT(`*value*`, SQL_SMALLINT)}`                         | *value* converted to SQL `SMALLINT`
`{fn CONVERT(`*value*`, SQL_TIME(`*scale*`))}`                  | *value* converted to SQL `TIME(`*scale*`)`
`{fn CONVERT(`*value*`, SQL_TIMESTAMP(`*scale*`))}`             | *value* converted to SQL `TIMESTAMP(`*scale*`)`
`{fn CONVERT(`*value*`, SQL_TINYINT)}`                          | *value* converted to SQL `BYTEINT`
`{fn CONVERT(`*value*`, SQL_VARBINARY(`*size*`))}`              | *value* converted to SQL `VARBYTE(`*size*`)`
`{fn CONVERT(`*value*`, SQL_VARCHAR(`*size*`))}`                | *value* converted to SQL `VARCHAR(`*size*`)`

#### LIKE Predicate Escape Character

Within a `LIKE` predicate's *pattern* argument, the characters `%` (percent) and `_` (underscore) serve as wildcards.
To interpret a particular wildcard character literally in a `LIKE` predicate's *pattern* argument, the wildcard character must be preceded by an escape character, and the escape character must be indicated in the `LIKE` predicate's `ESCAPE` clause.

`LIKE` predicate escape character escape clauses are replaced by the corresponding SQL clause before the SQL request text is transmitted to the database.

`{escape '`*EscapeCharacter*`'}`

The escape clause must be specified immediately after the `LIKE` predicate that it applies to.

#### Outer Joins

Outer join escape clauses are replaced by the corresponding SQL clause before the SQL request text is transmitted to the database.

`{oj `*TableName* *OptionalCorrelationName* `LEFT OUTER JOIN `*TableName* *OptionalCorrelationName* `ON `*JoinCondition*`}`

`{oj `*TableName* *OptionalCorrelationName* `RIGHT OUTER JOIN `*TableName* *OptionalCorrelationName* `ON `*JoinCondition*`}`

`{oj `*TableName* *OptionalCorrelationName* `FULL OUTER JOIN `*TableName* *OptionalCorrelationName* `ON `*JoinCondition*`}`

#### Stored Procedure Calls

Stored procedure call escape clauses are replaced by the corresponding SQL clause before the SQL request text is transmitted to the database.

`{call `*ProcedureName*`}`

`{call `*ProcedureName*`(`*CommaSeparatedParameterValues...*`)}`

#### Native SQL

When a SQL request contains the native SQL escape clause, all escape clauses are replaced in the SQL request text, and the modified SQL request text is returned to the application as a result set containing a single row and a
single VARCHAR column. The SQL request text is not transmitted to the database, and the SQL request is not executed. The native SQL escape clause mimics the functionality of the JDBC API `Connection.nativeSQL` method.\n\n`{fn teradata_nativesql}`\n\nThis escape clause is automatically prepended to the SQL request when the connection `.nativeSQL` method is called.\n\n#### Connection Functions\n\nThe following table lists connection function escape clauses that are intended for use with the native SQL escape clause `{fn teradata_nativesql}`.\n\nThese functions provide information about the connection, or control the behavior of the connection.\nFunctions that provide information return locally-cached information and avoid a round-trip to the database.\nConnection function escape clauses are replaced by the returned information before the SQL request text is transmitted to the database.\n\nConnection Function                           | Returns\n--------------------------------------------- | ---\n`{fn teradata_amp_count}`                     | Number of AMPs of the database system\n`{fn teradata_connected}`                     | `true` or `false` indicating whether this connection has logged on\n`{fn teradata_database_version}`              | Version number of the database\n`{fn teradata_driver_version}`                | Version number of the driver\n`{fn teradata_get_errors}`                    | Errors from the most recent batch operation\n`{fn teradata_get_warnings}`                  | Warnings from an operation that completed with warnings\n`{fn teradata_getloglevel}`                   | Current log level\n`{fn teradata_go_runtime}`                    | Go runtime version for the Teradata GoSQL Driver\n`{fn teradata_logon_sequence_number}`         | Session's Logon Sequence Number, if available\n`{fn teradata_program_name}`                  | Executable program name\n`{fn teradata_provide(config_response)}`      | Config Response parcel contents in JSON 
format\n`{fn teradata_provide(connection_id)}`        | Connection's unique identifier within the process\n`{fn teradata_provide(default_connection)}`   | `false` indicating this is not a stored procedure default connection\n`{fn teradata_provide(dhke)}`                 | Number of round trips for non-TLS Diffie-Hellman key exchange (DHKE) or `0` for TLS with database DHKE bypass\n`{fn teradata_provide(gateway_config)}`       | Gateway Config parcel contents in JSON format\n`{fn teradata_provide(governed)}`             | `true` or `false` indicating the `govern` connection parameter setting\n`{fn teradata_provide(host_id)}`              | Session's host ID\n`{fn teradata_provide(java_charset_name)}`    | `UTF8`\n`{fn teradata_provide(lob_support)}`          | `true` or `false` indicating this connection's LOB support\n`{fn teradata_provide(local_address)}`        | Local address of the connection's TCP socket\n`{fn teradata_provide(local_port)}`           | Local port of the connection's TCP socket\n`{fn teradata_provide(original_hostname)}`    | Original specified database hostname\n`{fn teradata_provide(redrive_active)}`       | `true` or `false` indicating whether this connection has Redrive active\n`{fn teradata_provide(remote_address)}`       | Hostname (if available) and IP address of the connected database node\n`{fn teradata_provide(remote_port)}`          | TCP port number of the database\n`{fn teradata_provide(rnp_active)}`           | `true` or `false` indicating whether this connection has Recoverable Network Protocol active\n`{fn teradata_provide(session_charset_code)}` | Session character set code `191`\n`{fn teradata_provide(session_charset_name)}` | Session character set name `UTF8`\n`{fn teradata_provide(sip_support)}`          | `true` or `false` indicating this connection's StatementInfo parcel support\n`{fn teradata_provide(transaction_mode)}`     | Session's transaction mode, `ANSI` or `TERA`\n`{fn teradata_provide(uses_check_workload)}`  | 
`true` or `false` indicating whether this connection uses `CHECK WORKLOAD`\n`{fn teradata_session_number}`                | Database session number if connected to a database Gateway or endpoint session number if connected to an endpoint such as Unity, Session Manager, or Business Continuity Manager\n\n#### Request-Scope Functions\n\nThe following table lists request-scope function escape clauses that are intended for use with the Cursor `.execute` or `.executemany` methods.\n\nThese functions control the behavior of the corresponding Cursor, and are limited in scope to the particular SQL request in which they are specified.\nRequest-scope function escape clauses are removed before the SQL request text is transmitted to the database.\n\nRequest-Scope Function                                 | Effect\n------------------------------------------------------ | ---\n`{fn teradata_agkr(`*Option*`)}`                       | Executes the SQL request with Auto-Generated Key Retrieval (AGKR) *Option* `C` (identity column value) or `R` (entire row)\n`{fn teradata_clobtranslate(`*Option*`)}`              | Executes the SQL request with CLOB translate *Option* `U` (unlocked) or the default `L` (locked)\n`{fn teradata_error_query_count(`*Number*`)}`          | Specifies how many times the driver will attempt to query FastLoad Error Table 1 after a FastLoad operation. Takes precedence over the `error_query_count` connection parameter.\n`{fn teradata_error_query_interval(`*Milliseconds*`)}` | Specifies how many milliseconds the driver will wait between attempts to query FastLoad Error Table 1. Takes precedence over the `error_query_interval` connection parameter.\n`{fn teradata_error_table_1_suffix(`*Suffix*`)}`       | Specifies the suffix to append to the name of FastLoad error table 1. 
Takes precedence over the `error_table_1_suffix` connection parameter.\n`{fn teradata_error_table_2_suffix(`*Suffix*`)}`       | Specifies the suffix to append to the name of FastLoad error table 2. Takes precedence over the `error_table_2_suffix` connection parameter.\n`{fn teradata_error_table_database(`*DbName*`)}`       | Specifies the parent database name for FastLoad error tables 1 and 2. Takes precedence over the `error_table_database` connection parameter.\n`{fn teradata_failfast}`                               | Reject (\"fail fast\") this SQL request rather than delay by a workload management rule or throttle\n`{fn teradata_fake_result_sets}`                       | A fake result set containing statement metadata precedes each real result set. Takes precedence over the `fake_result_sets` connection parameter.\n`{fn teradata_fake_result_sets_off}`                   | Turns off fake result sets for this SQL request. Takes precedence over the `fake_result_sets` connection parameter.\n`{fn teradata_field_quote(`*String*`)}`                | Specifies a single-character string used to quote fields in a CSV file. Takes precedence over the `field_quote` connection parameter.\n`{fn teradata_field_sep(`*String*`)}`                  | Specifies a single-character string used to separate fields in a CSV file. Takes precedence over the `field_sep` connection parameter.\n`{fn teradata_govern_off}`                             | Teradata workload management rules will reject rather than delay a FastLoad or FastExport. Takes precedence over the `govern` connection parameter.\n`{fn teradata_govern_on}`                              | Teradata workload management rules may delay a FastLoad or FastExport. 
Takes precedence over the `govern` connection parameter.\n`{fn teradata_lobselect(`*Option*`)}`                  | Executes the SQL request with LOB select *Option* `S` (spool-scoped LOB locators), `T` (transaction-scoped LOB locators), or the default `I` (inline materialized LOB values)\n`{fn teradata_manage_error_tables_off}`                | Turns off FastLoad error table management for this request. Takes precedence over the `manage_error_tables` connection parameter.\n`{fn teradata_manage_error_tables_on}`                 | Turns on FastLoad error table management for this request. Takes precedence over the `manage_error_tables` connection parameter.\n`{fn teradata_parameter(`*Index*`,`*DataType*`)}`      | Transmits parameter *Index* bind values as *DataType*\n`{fn teradata_provide(request_scope_column_name_off)}` | Provides the default column name behavior for this SQL request. Takes precedence over the `column_name` connection parameter.\n`{fn teradata_provide(request_scope_lob_support_off)}` | Turns off LOB support for this SQL request. Takes precedence over the `lob_support` connection parameter.\n`{fn teradata_provide(request_scope_refresh_rsmd)}`    | Executes the SQL request with the default request processing option `B` (both)\n`{fn teradata_provide(request_scope_sip_support_off)}` | Turns off StatementInfo parcel support for this SQL request. Takes precedence over the `sip_support` connection parameter.\n`{fn teradata_read_csv(`*CSVFileName*`)}`              | Executes a batch insert using the bind parameter values read from the specified CSV file for either a SQL batch insert or a FastLoad\n`{fn teradata_request_timeout(`*Seconds*`)}`           | Specifies the timeout for executing the SQL request. Zero means no timeout. 
Takes precedence over the `request_timeout` connection parameter.\n`{fn teradata_require_fastexport}`                     | Specifies that FastExport is required for the SQL request\n`{fn teradata_require_fastload}`                       | Specifies that FastLoad is required for the SQL request\n`{fn teradata_rpo(`*RequestProcessingOption*`)}`       | Executes the SQL request with *RequestProcessingOption* `S` (prepare), `E` (execute), or the default `B` (both)\n`{fn teradata_sessions(`*Number*`)}`                   | Specifies the *Number* of data transfer connections for FastLoad or FastExport. Takes precedence over the `sessions` connection parameter.\n`{fn teradata_try_fastexport}`                         | Tries to use FastExport for the SQL request\n`{fn teradata_try_fastload}`                           | Tries to use FastLoad for the SQL request\n`{fn teradata_untrusted}`                              | Marks the SQL request as untrusted; not implemented yet\n`{fn teradata_values_off}`                             | Turns off `teradata_values` for this SQL request. Takes precedence over the `teradata_values` connection parameter. Refer to the [Data Types](#DataTypes) table for details.\n`{fn teradata_values_on}`                              | Turns on `teradata_values` for this SQL request. Takes precedence over the `teradata_values` connection parameter. Refer to the [Data Types](#DataTypes) table for details.\n`{fn teradata_write_csv(`*CSVFileName*`)}`             | Exports one or more result sets from a SQL request or a FastExport to the specified CSV file or files\n\nThe `teradata_field_sep` and `teradata_field_quote` escape functions have a single-character string argument. The string argument must follow SQL literal syntax. 
The string argument may be enclosed in single-quote (`'`) characters or double-quote (`\"`) characters.\n\nTo represent a single-quote character in a string enclosed in single-quote characters, you must repeat the single-quote character.\n\n    {fn teradata_field_quote('''')}\n\nTo represent a double-quote character in a string enclosed in double-quote characters, you must repeat the double-quote character.\n\n    {fn teradata_field_quote(\"\"\"\")}\n\n\u003ca id=\"FastLoad\"\u003e\u003c/a\u003e\n\n### FastLoad\n\nThe driver offers FastLoad, which opens multiple database connections to transfer data in parallel.\n\nPlease be aware that this is an early release of the FastLoad feature. Think of it as a beta or preview version. It works, but does not yet offer all the features that JDBC FastLoad offers. FastLoad is still under active development, and we will continue to enhance it in subsequent builds.\n\nFastLoad has limitations and cannot be used in all cases as a substitute for SQL batch insert:\n* FastLoad can only load into an empty permanent table.\n* FastLoad cannot load additional rows into a table that already contains rows.\n* FastLoad cannot load into a volatile table or global temporary table.\n* FastLoad cannot load duplicate rows into a `MULTISET` table with a primary index.\n* Do not use FastLoad to load only a few rows, because FastLoad opens extra connections to the database, which is time consuming.\n* Only use FastLoad to load many rows (at least 100,000 rows) so that the row-loading performance gain exceeds the overhead of opening additional connections.\n* FastLoad does not support all database data types. 
For example, `BLOB` and `CLOB` are not supported.\n* FastLoad requires StatementInfo parcel support to be enabled.\n* FastLoad requires read access to the DBC.SessionInfoV view to obtain the database Logon Sequence Number of the FastLoad job.\n* FastLoad locks the destination table.\n* If Online Archive encounters a table being loaded with FastLoad, online archiving of that table will be bypassed.\n\nYour application can bind a single row of data for FastLoad, but that is not recommended because the overhead of opening additional connections causes FastLoad to be slower than a regular SQL `INSERT` for a single row.\n\nHow to use FastLoad:\n* Auto-commit should be turned off before beginning a FastLoad.\n* FastLoad is intended for binding many rows at a time. Each batch of rows must be able to fit into memory.\n* When auto-commit is turned off, your application can insert multiple batches in a loop for the same FastLoad.\n* Each column's data type must be consistent across every row in every batch over the entire FastLoad.\n* The column values of the first row of the first batch dictate what the column data types must be in all subsequent rows and all subsequent batches of the FastLoad.\n\nFastLoad opens multiple data transfer connections to the database. 
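
The call pattern above can be sketched as plain request-text composition; this is a minimal illustration only, in which the table name `mytable` and its three-column layout are hypothetical, and the `{fn teradata_try_fastload}` request-scope escape function from the table above is prepended to the `INSERT`:

```javascript
// Sketch: compose the request text for a FastLoad batch insert.
// "mytable" and its three-column layout are hypothetical examples.
function fastLoadInsertText(tableName, columnCount) {
  // one question-mark parameter marker per column
  const markers = new Array(columnCount).fill("?").join(", ");
  return "{fn teradata_try_fastload}insert into " + tableName + " values (" + markers + ")";
}

console.log(fastLoadInsertText("mytable", 3));
// {fn teradata_try_fastload}insert into mytable values (?, ?, ?)
```

The same request text would then be executed once per batch of bind values, with auto-commit turned off, and the transaction committed after the last batch.
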
FastLoad evenly distributes each batch of rows across the available data transfer connections, and uses overlapped I/O to send and receive messages in parallel.\n\nTo use FastLoad, your application must prepend one of the following escape functions to the `INSERT` statement:\n* `{fn teradata_try_fastload}` tries to use FastLoad for the `INSERT` statement, and automatically executes the `INSERT` as a regular SQL statement when the `INSERT` is not compatible with FastLoad.\n* `{fn teradata_require_fastload}` requires FastLoad for the `INSERT` statement, and fails with an error when the `INSERT` is not compatible with FastLoad.\n\nYour application can prepend other optional escape functions to the `INSERT` statement:\n* `{fn teradata_sessions(`n`)}` specifies the number of data transfer connections to be opened, and is capped at the number of AMPs. The default is the smaller of 8 or the number of AMPs. We recommend avoiding this function to let the driver ask the database how many data transfer connections should be used.\n* `{fn teradata_error_table_1_suffix(`suffix`)}` specifies the suffix to append to the name of FastLoad error table 1. The default suffix is `_ERR_1`.\n* `{fn teradata_error_table_2_suffix(`suffix`)}` specifies the suffix to append to the name of FastLoad error table 2. The default suffix is `_ERR_2`.\n* `{fn teradata_error_table_database(`dbname`)}` specifies the parent database name for FastLoad error tables 1 and 2. By default, the FastLoad error tables reside in the same database as the destination table.\n* `{fn teradata_govern_on}` or `{fn teradata_govern_off}` specifies whether Teradata workload management rules may delay or reject the FastLoad. 
Takes precedence over the `govern` connection parameter.\n\nAfter beginning a FastLoad, your application can obtain the Logon Sequence Number (LSN) assigned to the FastLoad by prepending the following escape functions to the `INSERT` statement:\n* `{fn teradata_nativesql}{fn teradata_logon_sequence_number}` returns the string form of an integer representing the Logon Sequence Number (LSN) for the FastLoad. Returns an empty string if the request is not a FastLoad.\n\nFastLoad does not stop for data errors such as constraint violations or unique primary index violations. After inserting each batch of rows, your application must obtain warning and error information by prepending the following escape functions to the `INSERT` statement:\n* `{fn teradata_nativesql}{fn teradata_get_warnings}` returns in one string all warnings generated by FastLoad for the request.\n* `{fn teradata_nativesql}{fn teradata_get_errors}` returns in one string all data errors observed by FastLoad for the most recent batch. The data errors are obtained from FastLoad error table 1, for problems such as constraint violations, data type conversion errors, and unavailable AMP conditions.\n\nYour application ends FastLoad by committing or rolling back the current transaction. After commit or rollback, your application must obtain warning and error information by prepending the following escape functions to the `INSERT` statement:\n* `{fn teradata_nativesql}{fn teradata_get_warnings}` returns in one string all warnings generated by FastLoad for the commit or rollback. The warnings are obtained from FastLoad error table 2, for problems such as duplicate rows.\n* `{fn teradata_nativesql}{fn teradata_get_errors}` returns in one string all data errors observed by FastLoad for the commit or rollback. 
The data errors are obtained from FastLoad error table 2, for problems such as unique primary index violations.\n\nWarning and error information remains available until the next batch is inserted or until the commit or rollback. Each batch execution clears the prior warnings and errors. Each commit or rollback clears the prior warnings and errors.\n\n\u003ca id=\"FastExport\"\u003e\u003c/a\u003e\n\n### FastExport\n\nThe driver offers FastExport, which opens multiple database connections to transfer data in parallel.\n\nPlease be aware that this is an early release of the FastExport feature. Think of it as a beta or preview version. It works, but does not yet offer all the features that JDBC FastExport offers. FastExport is still under active development, and we will continue to enhance it in subsequent builds.\n\nFastExport has limitations and cannot be used in all cases as a substitute for SQL queries:\n* FastExport cannot query a volatile table or global temporary table.\n* FastExport supports single-statement SQL `SELECT`, and supports multi-statement requests composed of multiple SQL `SELECT` statements only.\n* FastExport supports question-mark parameter markers in `WHERE` clause conditions. However, the database does not permit the equal `=` operator for primary or unique secondary indexes, and will return database error 3695 \"A Single AMP Select statement has been issued in FastExport\".\n* Do not use FastExport to fetch only a few rows, because FastExport opens extra connections to the database, which is time consuming.\n* Only use FastExport to fetch many rows (at least 100,000 rows) so that the row-fetching performance gain exceeds the overhead of opening additional connections.\n* FastExport does not support all database data types. For example, `BLOB` and `CLOB` are not supported.\n* For best efficiency, do not use `GROUP BY` and `ORDER BY` clauses with FastExport.\n* FastExport's result set ordering behavior may differ from a regular SQL query. 
In particular, a query containing an ordered analytic function may not produce an ordered result set. Use an `ORDER BY` clause to guarantee result set order.\n* FastExport requires read access to the DBC.SessionInfoV view to obtain the database Logon Sequence Number of the FastExport job.\n\nFastExport opens multiple data transfer connections to the database. FastExport uses overlapped I/O to send and receive messages in parallel.\n\nTo use FastExport, your application must prepend one of the following escape functions to the query:\n* `{fn teradata_try_fastexport}` tries to use FastExport for the query, and automatically executes the query as a regular SQL query when the query is not compatible with FastExport.\n* `{fn teradata_require_fastexport}` requires FastExport for the query, and fails with an error when the query is not compatible with FastExport.\n\nYour application can prepend other optional escape functions to the query:\n* `{fn teradata_sessions(`n`)}` specifies the number of data transfer connections to be opened, and is capped at the number of AMPs. The default is the smaller of 8 or the number of AMPs. We recommend avoiding this function to let the driver ask the database how many data transfer connections should be used.\n* `{fn teradata_govern_on}` or `{fn teradata_govern_off}` specifies whether Teradata workload management rules may delay or reject the FastExport. Takes precedence over the `govern` connection parameter.\n\nAfter beginning a FastExport, your application can obtain the Logon Sequence Number (LSN) assigned to the FastExport by prepending the following escape functions to the query:\n* `{fn teradata_nativesql}{fn teradata_logon_sequence_number}` returns the string form of an integer representing the Logon Sequence Number (LSN) for the FastExport. 
Returns an empty string if the request is not a FastExport.\n\n\u003ca id=\"CSVBatchInserts\"\u003e\u003c/a\u003e\n\n### CSV Batch Inserts\n\nThe driver can read batch insert bind values from a CSV (comma separated values) file. This feature can be used with SQL batch inserts and with FastLoad.\n\nTo specify batch insert bind values in a CSV file, the application prepends the escape function `{fn teradata_read_csv(`*CSVFileName*`)}` to the `INSERT` statement.\n\nThe application can specify batch insert bind values in a CSV file, or specify bind parameter values, but not both together. The driver returns an error if both are specified together.\n\nConsiderations when using a CSV file:\n* Each record is on a separate line of the CSV file. Records are delimited by line breaks (CRLF). The last record in the file may or may not have an ending line break.\n* The first line of the CSV file is a header line. The header line lists the column names separated by the field separator (e.g. `col1,col2,col3`).\n* The field separator defaults to the comma character (`,`). You can specify a different field separator character with the `field_sep` connection parameter or with the `teradata_field_sep` escape function. The specified field separator character must match the actual separator character used in the CSV file.\n* Each field can optionally be enclosed by the field quote character, which defaults to the double-quote character (e.g. `\"abc\",123,efg`). You can specify a different field quote character with the `field_quote` connection parameter or with the `teradata_field_quote` escape function. The field quote character must match the actual field quote character used in the CSV file.\n* The field separator and field quote characters cannot be set to the same value. 
The field separator and field quote characters must be legal UTF-8 characters and cannot be line feed (`\\n`) or carriage return (`\\r`).\n* Field quote characters are only permitted in fields enclosed by field quote characters. Field quote characters must not appear inside unquoted fields (e.g. `ab\"cd\"ef,1,abc` is not allowed).\n* To include a field quote character in a quoted field, the field quote character must be repeated (e.g. `\"abc\"\"efg\"\"dh\",123,xyz`).\n* Line breaks, field quote characters, and field separators may be included in a quoted field (e.g. `\"abc,efg\\ndh\",123,xyz`).\n* Specify a `NULL` value in the CSV file with an empty value between commas (e.g. `1,,456`).\n* A zero-length quoted string specifies a zero-length non-`NULL` string, not a `NULL` value (e.g. `1,\"\",456`).\n* Not all data types are supported. For example, `BLOB`, `BYTE`, and `VARBYTE` are not supported.\n* A field length greater than 64KB is transmitted to the database as a `DEFERRED CLOB` for a SQL batch insert. A field length greater than 64KB is not supported with FastLoad.\n\nLimitations when using CSV batch inserts:\n* Bound parameter values cannot be specified in the execute method when using the escape function `{fn teradata_read_csv(`*CSVFileName*`)}`.\n* The CSV file must contain at least one valid record in addition to the header line containing the column names.\n* For FastLoad, the insert operation will fail if the CSV file is improperly formatted and a parser error occurs.\n* For SQL batch insert, some records may be inserted before a parsing error occurs. A list of the parser errors will be returned. 
Each parser error will include the line number (starting at line 1) and the column number (starting at zero).\n* Using a CSV file with FastLoad has the same limitations and is used the same way as described in the [FastLoad](#FastLoad) section.\n\n\u003ca id=\"CSVExportResults\"\u003e\u003c/a\u003e\n\n### CSV Export Results\n\nThe driver can export query results to CSV files. This feature can be used with SQL query results, with calls to stored procedures, and with FastExport.\n\nTo export a result set to a CSV file, the application prepends the escape function `{fn teradata_write_csv(`*CSVFileName*`)}` to the SQL request text.\n\nIf the query returns multiple result sets, each result set will be written to a separate file. The file name is varied by inserting the string \"_N\" between the specified file name and file type extension (e.g. `fileName.csv`, `fileName_1.csv`, `fileName_2.csv`). If no file type extension is specified, then the suffix \"_N\" is appended to the end of the file name (e.g. `fileName`, `fileName_1`, `fileName_2`).\n\nA stored procedure call that produces multiple dynamic result sets behaves like other SQL requests that return multiple result sets. The stored procedure's output parameter values are exported as the first CSV file.\n\nExample of a SQL request that returns multiple results:\n\n`{fn teradata_write_csv(myFile.csv)}select 'abc' ; select 123`\n\nCSV File Name | Content\n------------- | ---\nmyFile.csv    | First result set\nmyFile_1.csv  | Second result set\n\nTo obtain the metadata for each result set, use the escape function `{fn teradata_fake_result_sets}`. 
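
The "_N" file naming rule described above can be sketched as follows; `csvFileName` is an illustrative helper, not part of the driver API:

```javascript
// Sketch of the "_N" naming rule for multiple result sets:
// the first result set uses the specified file name as-is, and each
// subsequent result set inserts "_N" before the file type extension,
// or appends "_N" when there is no extension.
function csvFileName(specifiedName, resultSetIndex) {
  if (resultSetIndex === 0) {
    return specifiedName;
  }
  const nDot = specifiedName.lastIndexOf(".");
  if (nDot < 0) {
    return specifiedName + "_" + resultSetIndex;
  }
  return specifiedName.slice(0, nDot) + "_" + resultSetIndex + specifiedName.slice(nDot);
}

console.log(csvFileName("myFile.csv", 0)); // myFile.csv
console.log(csvFileName("myFile.csv", 1)); // myFile_1.csv
console.log(csvFileName("myFile", 2));     // myFile_2
```
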
A fake result set containing the metadata will be written to a file preceding each real result set.\n\nExample of a query that returns multiple result sets with metadata:\n\n`{fn teradata_fake_result_sets}{fn teradata_write_csv(myFile.csv)}select 'abc' ; select 123`\n\nCSV File Name | Content\n------------- | ---\nmyFile.csv    | Fake result set containing the metadata for the first result set\nmyFile_1.csv  | First result set\nmyFile_2.csv  | Fake result set containing the metadata for the second result set\nmyFile_3.csv  | Second result set\n\nExported CSV files have the following characteristics:\n* Each record is on a separate line of the CSV file. Records are delimited by line breaks (CRLF).\n* Column values are separated by the field separator character, which defaults to the comma character (`,`). You can specify a different field separator character with the `field_sep` connection parameter or with the `teradata_field_sep` escape function.\n* The first line of the CSV file is a header line. The header line lists the column names separated by the field separator (e.g. `col1,col2,col3`).\n* When necessary, column values are enclosed by the field quote character, which defaults to the double-quote character (`\"`). You can specify a different field quote character with the `field_quote` connection parameter or with the `teradata_field_quote` escape function.\n* The field separator and field quote characters cannot be set to the same value. The field separator and field quote characters must be legal UTF-8 characters and cannot be line feed (`\\n`) or carriage return (`\\r`).\n* If a column value contains line breaks, field quote characters, and/or field separators in a field, the value is quoted with the field quote character.\n* If a column value contains a field quote character, the value is quoted and the field quote character is repeated. 
For example, column value `abc\"def` is exported as `\"abc\"\"def\"`.\n* A `NULL` value is exported to the CSV file as an empty value between field separators (e.g. `123,,456`).\n* A non-`NULL` zero-length character value is exported as a zero-length quoted string (e.g. `123,\"\",456`).\n\nLimitations when exporting to CSV files:\n* When the application chooses to export results to a CSV file, the results are not available for the application to fetch in memory.\n* A warning is returned if the application specifies an export CSV file for a SQL statement that does not produce a result set.\n* Exporting a CSV file with FastExport has the same limitations and is used the same way as described in the [FastExport](#FastExport) section.\n* Not all data types are supported. For example, `BLOB`, `BYTE`, and `VARBYTE` are not supported, and if one of these column types is present in the result set, an error will be returned.\n* `CLOB`, `XML`, `JSON`, and `DATASET STORAGE FORMAT CSV` data types are supported for SQL query results and are exported as string values, but these data types are not supported by FastExport.\n\n\u003ca id=\"CommandLineInterface\"\u003e\u003c/a\u003e\n\n### Command Line Interface\n\nThe `teradatasql` package provides a command line interface via the `npx` package runner.\n\nRunning the `teradatasql` package without additional arguments prints a usage message.\n\n    npx teradatasql\n\nAny number of arguments can follow the `teradatasql` package name on the command line, and arguments can be repeated on the command line.\n\nThe command line interface can print the `teradatasql` version number.\n\n    npx teradatasql version\n\nSpecify connection parameters to connect to a database.\n\nConnection parameters begin with `host=` and consist of comma-separated key`=`value pairs. 
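
A connection-parameter argument of this form can be assembled from key/value pairs; `connectionParams` is a hypothetical helper shown only to illustrate the format, including the doubling of a literal comma inside a value:

```javascript
// Sketch: assemble the command line connection-parameter argument from
// key/value pairs. A literal comma inside a value is doubled, since a
// repeated comma is treated as a single literal comma.
function connectionParams(mapParams) {
  return Object.entries(mapParams)
    .map(([key, value]) => key + "=" + String(value).replace(/,/g, ",,"))
    .join(",");
}

console.log(connectionParams({ host: "whomooz", user: "guest", password: "please" }));
// host=whomooz,user=guest,password=please
```
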
A repeated comma `,,` in a connection parameter value is treated as a single literal comma.

    npx teradatasql host=whomooz,user=guest,password=please

This feature serves as a database connectivity test.

SQL requests can be executed after a database connection is established.

    npx teradatasql host=whomooz,user=guest,password=please "select * from DBC.DBCInfo"

<a id="ChangeLog"></a>

### Change Log

`20.0.28` - April 5, 2025
* GOSQL-219 sslnamedgroups connection parameter
* GOSQL-220 connection parameter oidc_prompt

`20.0.27` - April 1, 2025
* GOSQL-207 OIDC token cache for improved Browser Authentication UX

`20.0.26` - March 17, 2025
* Vector data type support for FastLoad
* Build DLL and shared library with standard Go 1.24.1
* Environment variable GODEBUG=fips140=on directs the Go Cryptographic Module to operate in FIPS 140-3 mode, default is off
* No longer uses OpenSSL on Linux, to avoid panic: opensslcrypto: FIPS mode requested (system FIPS mode) but not available in OpenSSL
* client attribute ClientSecProdGrp no longer indicates OpenSSL library on Linux
* Switch to golang.org/x/crypto version 0.36.0
* Requires Node.js v18.20.7 or later and ends support for older versions of Node.js

`20.0.25` - February 25, 2025
* FIPS support
* proxy server support for FastLoad and FastExport
* GOSQL-182 transmit additional Client Attributes
* client attribute ClientSecProdGrp also indicates OpenSSL library on Linux
* Build DLL and shared library with Microsoft Go 1.24 using build tag goexperiment.systemcrypto for FIPS support
* Requires macOS 12.4 Monterey or later and ends support for older versions of macOS
* Requires Linux kernel version 3.2 or later and ends support for older versions of Linux

`20.0.24` - February 3, 2025
* default Linux Kerberos libraries /usr/lib64/libgssapi_krb5.so and /usr/lib64/libgssapi_krb5.so.2

`20.0.23` - January 27, 2025
* client attribute ClientSecProdGrp indicates Go crypto library version
* Build DLL and shared library with Go 1.23.5
* Build DLL and shared library with golang.org/x/crypto v0.32.0
* Requires macOS 11 Big Sur or later and ends support for older versions of macOS

`20.0.22` - January 6, 2025
* Debug logging for Kerberos library dynamic loading and linking

`20.0.21` - December 12, 2024
* Provide driver version number and session number in Teradata Security exceptions
* Add teradatasql package command line interface
* Add connection method `.nativeSQL`
* Add cursor attribute `columntypename`
* Add time zone to ClientAttributesEx

`20.0.20` - October 25, 2024
* GOSQL-212 handle Microsoft Entra OIDC Authorization Code Response redirect error parameters
* Add escape function `{fn teradata_provide(dhke)}`
* NJSD-18 support Electron version 29.1.0
* NJSD-23 support Node.js v16.20.2 and later Node.js LTS releases
* Requires Node.js v16.20.2 or later and ends support for older versions of Node.js
* Replace C foreign function interface `ffi-napi` with `koffi`

`20.0.19` - October 11, 2024
* GOSQL-211 enable logon to database without DHKE bypass after logon to database having DHKE bypass
* Add escape function `{fn teradata_connected}`

`20.0.18` - October 7, 2024
* Omit port suffix from HTTP Host: header when using default port 80 for HTTP or 443 for HTTPS
* Build DLL and shared library with Go 1.22.4 (downgrade) to avoid [Go 1.22.5 performance regression issue #68587](https://github.com/golang/go/issues/68587)

`20.0.17` - October 1, 2024
* GOSQL-196 Go TeraGSS logmech=JWT bypass DHKE for HTTPS connections
* GOSQL-210 provide Server Name Indication (SNI) field in the TLS client hello
* Build macOS shared library with golang-fips go1.22.5-3-openssl-fips

`20.0.16` - September 27, 2024
* GOSQL-177 asynchronous request support
* GOSQL-178 Go TeraGSS logmech=TD2 bypass DHKE for HTTPS connections

`20.0.15` - July 31, 2024
* GOSQL-205 Stored Password Protection for http(s)_proxy_password connection parameters
* GOSQL-206 CRC client attribute
* Build DLL and shared library with Go 1.22.5

`20.0.14` - July 26, 2024
* GOSQL-203 Go module
* Build DLL and shared library with Go 1.21.9
* Requires macOS 10.15 Catalina or later and ends support for older versions of macOS

`20.0.13` - June 24, 2024
* GOSQL-198 oidc_sslmode connection parameter
* GOSQL-199 sslcrc=ALLOW and PREFER soft fail CRC for VERIFY-CA and VERIFY-FULL
* GOSQL-200 sslcrc=REQUIRE requires CRC for VERIFY-CA and VERIFY-FULL

`20.0.12` - April 30, 2024
* GOSQL-31 Monitor partition

`20.0.11` - April 25, 2024
* GOSQL-193 Device Code Flow and Client Credentials Flow

`20.0.10` - April 10, 2024
* GOSQL-185 Use FIPS-140 Compliant Modules
* Build DLL and shared library with Go 1.20.14
* Build DLL and shared library with Microsoft Go for Windows, Linux Intel, Linux ARM
* Build shared library with golang-fips for macOS Intel, macOS ARM

`20.0.8` - March 18, 2024
* GOSQL-121 Linux ARM support
* GOSQL-184 remove DES and DESede
* NJSD-13 remove DES and DESede

`20.0.7` - February 1, 2024
* GOSQL-189 improved error messages for missing or invalid logmech value
* GOSQL-190 Go TeraGSS accommodate DH/CI bypass flag in JWT token header

`20.0.6` - January 19, 2024
* GOSQL-183 TLS certificate verification for Identity Provider endpoints

`20.0.5` - January 17, 2024
* GOSQL-151 proxy server support

`20.0.4` - January 9, 2024
* Build DLL and shared library with Go 1.20.12

`20.0.3` - December 8, 2023
* Improved exception message for query timeout

`20.0.2` - November 16, 2023
* Build DLL and shared library with Go 1.20.11

`20.0.1` - November 14, 2023
* Correct links for sample programs

`20.0.0` - November 8, 2023
* Initial release