Compare Versions - @clickhouse/client
Improvements
- Added a helpful `WARN`-level log message with a suggestion to check the `keep_alive` configuration if the client receives an `ECONNRESET` error from the server, which can happen when the server closes idle connections after a certain timeout and the client tries to reuse such a connection from the pool. This can be especially helpful for new users who might not be aware of this aspect of HTTP connection management. The log message is only emitted if the `keep_alive` option is enabled in the client configuration, and it includes the server's keep-alive timeout value (if available) to assist with troubleshooting. (#597)
How to reproduce the issue that triggers the log message:
const client = createClient({
  // ...
  keep_alive: {
    enabled: true,
    // ❌ DON'T SET THIS VALUE SO HIGH IN PRODUCTION
    idle_socket_ttl: 1_000_000,
  },
  log: {
    level: ClickHouseLogLevel.WARN, // to see the warning logs
  },
})
for (let i = 0; i < 1000; i++) {
  await client.ping({
    // To use a regular query instead of the /ping endpoint,
    // which might be configured differently on the server side
    // and have different timeout settings.
    select: true,
  })
  // Wait long enough to let the server close the idle connection,
  // but not so long that the client removes it from the pool;
  // in other words, try to hit the race condition between the server
  // closing the connection and the client trying to reuse it.
  await sleep(SERVER_KEEP_ALIVE_TIMEOUT_MS - 100)
}
Example log message:
{
  "message": "Ping: idle socket TTL is greater than server keep-alive timeout, try setting idle socket TTL to a value lower than the server keep-alive timeout to prevent unexpected connection resets, see https://c.house/js_keep_alive_econnreset for more details.",
  "args": {
    "operation": "Ping",
    "connection_id": "8dc1c9bd-7895-49b1-8a95-276470151c65",
    "query_id": "beee95af-2e83-4dcb-8e1e-045bd61f4985",
    "request_id": "8dc1c9bd-7895-49b1-8a95-276470151c65:2",
    "socket_id": "8dc1c9bd-7895-49b1-8a95-276470151c65:1",
    "server_keep_alive_timeout_ms": 10000,
    "idle_socket_ttl": 15000
  },
  "module": "HTTP Adapter"
}
Improvements
- The `log.level` default value is now `ClickHouseLogLevel.WARN` instead of `ClickHouseLogLevel.OFF`, to provide better visibility into potential issues without overwhelming users with too much information by default.
const client = createClient({
  // ...
  log: {
    level: ClickHouseLogLevel.WARN, // default is now ClickHouseLogLevel.WARN instead of ClickHouseLogLevel.OFF
  },
})
- Logging is now lazy, meaning log messages are only constructed if the log level is appropriate for the message. This can improve performance in cases where constructing the log message is expensive and the log level is set to ignore such messages. See the `ClickHouseLogLevel` enum for the complete list of log levels. (#520)
const client = createClient({
  // ...
  log: {
    level: ClickHouseLogLevel.TRACE, // to log everything available down to the network level events
  },
})
- Enhanced the logging of the HTTP request / socket lifecycle with additional trace messages and context, such as a Connection ID (UUID), plus Request and Socket IDs that embed the connection ID, for ease of tracing the logs of a particular request across the connection lifecycle. To enable such logs, set the `log.level` config option to `ClickHouseLogLevel.TRACE`. (#567)
[2026-02-25T09:19:13.511Z][TRACE][@clickhouse/client][Connection] Insert: received 'close' event, 'free' listener removed
Arguments: {
  operation: 'Insert',
  connection_id: 'da3c9796-5dc5-46ef-83b0-ed1f4422094c',
  query_id: '9dfda627-39a2-41a6-9fc9-8f8716574826',
  request_id: 'da3c9796-5dc5-46ef-83b0-ed1f4422094c:3',
  socket_id: 'da3c9796-5dc5-46ef-83b0-ed1f4422094c:2',
  event: 'close'
}
[2026-02-25T09:19:13.502Z][TRACE][@clickhouse/client][Connection] Query: reusing socket
Arguments: {
  operation: 'Query',
  connection_id: 'da3c9796-5dc5-46ef-83b0-ed1f4422094c',
  query_id: 'ad0127e8-b1c7-4ed6-9681-c0162f7a0ea9',
  request_id: 'da3c9796-5dc5-46ef-83b0-ed1f4422094c:4',
  socket_id: 'da3c9796-5dc5-46ef-83b0-ed1f4422094c:2',
  usage_count: 1
}
- A step towards structured logging: the client now passes rich context to the logger's `args` parameter (e.g., `connection_id`, `query_id`, `request_id`, `socket_id`). (#576)
Deprecated API
- The `drainStream` utility function is now deprecated, as the client handles draining the stream internally when needed; use `client.command()` instead. (#578)
- The `sleep` utility function is now deprecated, as it is not intended to be used outside of the client implementation. Use `setTimeout` directly, or a more full-featured utility library if you need additional features like cancellation or timer management. (#578)
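As a reference, the deprecated `sleep` helper can be replaced with a few lines over `setTimeout`; this is a minimal sketch, not part of the client API:

```typescript
// Minimal replacement for the deprecated `sleep` utility, built on setTimeout:
function sleep(ms: number): Promise<void> {
  return new Promise((resolve) => setTimeout(resolve, ms))
}

// Usage inside any async context:
async function demo(): Promise<void> {
  await sleep(100)
  console.log('slept for ~100ms')
}
```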
New features
- Added support for the new Disposable API (a.k.a. the `using` keyword). (#500)
async function main() {
  await using client = createClient(…)
  // some code that can throw,
  // but thanks to `using` the client will still get closed;
  // the client is also automatically closed here via [Symbol.asyncDispose]
}
Without the new `using` keyword, you must wrap code that might leak expensive resources, such as sockets and large buffers, in `try` / `finally`:
async function main() {
  let client
  try {
    client = createClient(…)
    // some code that can throw
  } finally {
    if (client) {
      await client.close()
    }
  }
}
New features
- It is now possible to specify custom `parse` and `stringify` functions that will be used instead of the standard `JSON.parse` and `JSON.stringify` methods for JSON serialization/deserialization when working with the `JSON*` family of formats. See `ClickHouseClientConfigOptions.json` and the new custom_json_handling example for more details. (#481, looskie)
- (Node.js only) Added an `ignore_error_response` param to `ClickHouseClient.exec`, which allows callers to manually handle request errors on the application side. (#483, Kinzeng)
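A configuration sketch of the custom JSON handlers. The exact option shape is documented in `ClickHouseClientConfigOptions.json` and the custom_json_handling example, so treat the `parse`/`stringify` keys below as an assumption based on this entry:

```typescript
import { createClient } from '@clickhouse/client'

const client = createClient({
  // ...
  // Assumed shape: functions compatible with JSON.parse / JSON.stringify;
  // e.g., swap in a big-number-safe or faster JSON implementation here.
  json: {
    parse: (text: string) => JSON.parse(text),
    stringify: (value: unknown) => JSON.stringify(value),
  },
})
```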
New features
- Server-side exceptions that occur in the middle of the HTTP stream are now handled correctly. This requires ClickHouse 25.11+. Previous ClickHouse versions are unaffected by this change. (#478)
Bug fixes
- Fixed boolean value formatting in query parameters. Boolean values within `Array`, `Tuple`, and `Map` types are now correctly formatted as `TRUE`/`FALSE` instead of `1`/`0` to ensure proper type compatibility with ClickHouse. (#475, baseballyama)
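To illustrate the fix, here is a standalone sketch (not the client's actual serializer): booleans nested in container parameters are now rendered as ClickHouse `TRUE`/`FALSE` literals rather than `1`/`0`:

```typescript
// Illustrative only: render a flat array parameter, formatting booleans
// as TRUE/FALSE the way the fixed client does for Array/Tuple/Map values.
function formatArrayParam(values: Array<boolean | number | string>): string {
  const formatValue = (v: boolean | number | string): string =>
    typeof v === 'boolean' ? (v ? 'TRUE' : 'FALSE') : JSON.stringify(v)
  return `[${values.map(formatValue).join(',')}]`
}

console.log(formatArrayParam([true, false, 1])) // [TRUE,FALSE,1]
```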
Types
- Added the missing `allow_experimental_join_condition` to the `ClickHouseSettings` typing. (#430, looskie)
- Fixed the `JSONEachRowWithProgress` TypeScript flow after the breaking changes in ClickHouse 25.1. `RowOrProgress<T>` now has an additional variant: `SpecialEventRow<T>`. The library now additionally exports the `parseError` method and the newly added `isRow`/`isException` type guards. See the updated JSONEachRowWithProgress example. (#443)
- Added the missing `allow_experimental_variant_type` (24.1+), `allow_experimental_dynamic_type` (24.5+), `allow_experimental_json_type` (24.8+), `enable_json_type` (25.3+), and `enable_time_time64_type` (25.6+) to the `ClickHouseSettings` typing. (#445)
Improvements
- Added a warning when a socket is closed without the stream being fully consumed (e.g., when using the `query` or `exec` method). (#441)
- (Node.js only) Added an option to use a simple SELECT query for ping checks instead of the `/ping` endpoint. See the new optional argument to the `ClickHouseClient.ping` method and the `PingParams` typings. Note that the Web version always used a SELECT query by default, as the `/ping` endpoint does not support CORS, and that cannot be changed. (#442)
Other
A minor release to allow further investigation regarding uncaught error issues with #410.
Types
- Added the missing `lightweight_deletes_sync` typing to `ClickHouseSettings`. (#422, pratimapatel2008)
Improvements (Node.js)
- Added a new configuration option: `capture_enhanced_stack_trace`; see the JSDoc in the Node.js client package. Note that it is disabled by default due to a possible performance impact. (#427)
- Added more try-catch blocks to the Node.js connection layer. (#427)
Bug fixes
- Fixed an issue with URLEncoded special characters in the URL configuration for username or password. (#407)
Improvements
- (Node.js only) Added support for streaming on 32-bit platforms. (#403, shevchenkonik)
New features
- It is now possible to provide custom HTTP headers when calling the `query`/`insert`/`command`/`exec` methods, using the `http_headers` option. NB: `http_headers` specified this way will override the `http_headers` set on the client instance level. (#394, @DylanRJohnston)
- (Web only) It is now possible to provide a custom `fetch` implementation to the client. (#315, @lucacasonato)
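A usage sketch of the per-call headers; the URL and header values below are placeholders, and headers set on the call take precedence over the instance-level ones:

```typescript
import { createClient } from '@clickhouse/client'

const client = createClient({
  url: 'http://localhost:8123', // placeholder
  http_headers: { 'X-My-Custom-Header': 'instance-level' },
})

async function queryWithHeaders() {
  const rs = await client.query({
    query: 'SELECT 1',
    format: 'JSONEachRow',
    http_headers: { 'X-My-Custom-Header': 'request-level' }, // wins for this call
  })
  return rs.json()
}
```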
New features
- Added support for JWT authentication (a ClickHouse Cloud feature) in both the Node.js and Web API packages (#270). The JWT token can be set via the `access_token` client configuration option:

const client = createClient({
  // ...
  access_token: '<JWT access token>',
})

The access token can also be configured via the URL params, e.g., `https://host:port?access_token=...`. It is also possible to override the access token for a particular request (see `BaseQueryParams.auth` for more details).

NB: do not mix the access token and username/password credentials in the configuration; the client will throw an error if both are set.
- Fixed an uncaught exception that could happen in the case of a malformed ClickHouse response when response compression is enabled. (#363)
New features
- Added `input_format_json_throw_on_bad_escape_sequence` to the `ClickHouseSettings` type. (#355, @emmanuel-bonin)
- The client now exports the `TupleParam` wrapper class, allowing tuples to be properly used as query parameters. Also added support for a JS Map as a query parameter. (#359)
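A parameter-building sketch; the wrapper name `TupleParam` is per this entry, while the surrounding query, URL, and type declarations are illustrative:

```typescript
import { createClient, TupleParam } from '@clickhouse/client'

const client = createClient({ url: 'http://localhost:8123' }) // placeholder URL

async function queryWithTupleAndMap() {
  const rs = await client.query({
    query: `
      SELECT {pair: Tuple(Int32, String)} AS pair,
             {attrs: Map(String, Int32)} AS attrs
    `,
    query_params: {
      pair: new TupleParam([42, 'foo']), // wraps a JS array as a ClickHouse tuple
      attrs: new Map([['a', 1], ['b', 2]]), // a JS Map maps to a ClickHouse Map
    },
    format: 'JSONEachRow',
  })
  return rs.json()
}
```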
Improvements
- The client will throw a more informative error if the buffered response is larger than the maximum allowed string length in V8, which is `2**29 - 24` bytes. (#357)
Bug fixes
- When a custom HTTP agent is used, the HTTP or HTTPS request implementation is now correctly chosen based on the URL protocol. (#352)
New features
- Added support for specifying roles via request query parameters. See this example for more details. (@pulpdrew, #328)
Bug fixes
- (Web only) Fixed an issue where streaming large datasets could provide corrupted results. See #333 (PR) for more details.
New features
- Added `JSONEachRowWithProgress` format support, the `ProgressRow` interface, and the `isProgressRow` type guard. See this Node.js example for more details. It should work similarly with the Web version.
- (Experimental) Exposed the `parseColumnType` function, which takes a string representation of a ClickHouse type (e.g., `FixedString(16)`, `Nullable(Int32)`, etc.) and returns an AST-like object that represents the type. For example:

for (const type of [
  'Int32',
  'Array(Nullable(String))',
  `Map(Int32, DateTime64(9, 'UTC'))`,
]) {
  console.log(`##### Source ClickHouse type: ${type}`)
  console.log(parseColumnType(type))
}

The above code will output:

##### Source ClickHouse type: Int32
{ type: 'Simple', columnType: 'Int32', sourceType: 'Int32' }
##### Source ClickHouse type: Array(Nullable(String))
{
  type: 'Array',
  value: {
    type: 'Nullable',
    sourceType: 'Nullable(String)',
    value: { type: 'Simple', columnType: 'String', sourceType: 'String' }
  },
  dimensions: 1,
  sourceType: 'Array(Nullable(String))'
}
##### Source ClickHouse type: Map(Int32, DateTime64(9, 'UTC'))
{
  type: 'Map',
  key: { type: 'Simple', columnType: 'Int32', sourceType: 'Int32' },
  value: {
    type: 'DateTime64',
    timezone: 'UTC',
    precision: 9,
    sourceType: "DateTime64(9, 'UTC')"
  },
  sourceType: "Map(Int32, DateTime64(9, 'UTC'))"
}

While the original intention was to use this function internally for parsing the `Native`/`RowBinaryWithNamesAndTypes` data format headers, it can be useful for other purposes as well (e.g., interface generation, or custom JSON serializers).

NB: currently unsupported source types to parse:
- Geo
- (Simple)AggregateFunction
- Nested
- Old/new experimental JSON
- Dynamic
- Variant
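A consumption sketch for `JSONEachRowWithProgress`; it assumes an existing Node.js client, and the decoded row shapes follow the `ProgressRow` guard from this entry:

```typescript
import { isProgressRow, type ClickHouseClient } from '@clickhouse/client'

async function streamWithProgress(client: ClickHouseClient): Promise<void> {
  const rs = await client.query({
    query: 'SELECT number FROM system.numbers LIMIT 100',
    format: 'JSONEachRowWithProgress',
  })
  for await (const rows of rs.stream()) {
    for (const row of rows) {
      const decoded = row.json()
      if (isProgressRow(decoded)) {
        // a progress row, e.g. { progress: { read_rows: '...', ... } }
        console.log('progress:', decoded.progress)
      } else {
        // a data row, wrapped as { row: { number: '...' } }
        console.log('data:', decoded)
      }
    }
  }
}
```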
New features
- Added the optional `real_time_microseconds` field to the `ClickHouseSummary` interface (see https://github.com/ClickHouse/ClickHouse/pull/69032)
Bug fixes
- (Node.js) Fixed unhandled exceptions produced when calling `ResultSet.json` if the response data was not, in fact, valid JSON. (#311)
New features
- (Node.js only) It is now possible to disable the automatic decompression of the response stream with the `exec` method. See `ExecParams.decompress_response_stream` for more details. (#298)
Improvements
- `ClickHouseClient` is now exported as a value from the `@clickhouse/client` and `@clickhouse/client-web` packages, allowing for better integration in dependency injection frameworks that rely on IoC (e.g., Nest.js, tsyringe). (@mathieu-bour, #292)
Bug fixes
- Fixed a potential socket hang-up issue that could happen under 100% CPU load (#294).
New features
- (Node.js only) The `exec` method now accepts an optional `values` parameter, which allows you to pass the request body as a `Stream.Readable`. This can be useful for custom insert streaming with arbitrary ClickHouse data formats (which might not be explicitly supported and allowed by the client in the `insert` method yet). NB: in this case, you are expected to serialize the data in the stream in the required input format yourself. See #290 for more details.
Improvements
- (Node.js only) The client package now exports a utility method: `drainStream`.
New features
- It is now possible to get the entire response headers object from the `query`/`insert`/`command`/`exec` methods. With `query`, you can access the `ResultSet.response_headers` property; the other methods (`insert`/`command`/`exec`) return it as part of their response objects as well. For example:

const rs = await client.query({
  query: 'SELECT * FROM system.numbers LIMIT 1',
  format: 'JSONEachRow',
})
console.log(rs.response_headers['content-type'])

This will print `application/x-ndjson; charset=UTF-8`. It can be used in a similar way with the other methods.
Improvements
- Re-exported several constants from the `@clickhouse/client-common` package for convenience:
  - `SupportedJSONFormats`
  - `SupportedRawFormats`
  - `StreamableFormats`
  - `StreamableJSONFormats`
  - `SingleDocumentJSONFormats`
  - `RecordsJSONFormats`
New features
- (Experimental) Added an option to provide a custom HTTP Agent in the client configuration via the `http_agent` option (#283, related: #278). The following conditions apply if a custom HTTP Agent is provided:
  - The `max_open_connections` and `tls` options will have no effect and will be ignored by the client, as those are part of the underlying HTTP Agent configuration.
  - `keep_alive.enabled` will only regulate the default value of the `Connection` header (`true` -> `Connection: keep-alive`, `false` -> `Connection: close`).
  - While the idle socket management will still work, it is now possible to disable it completely by setting the `keep_alive.idle_socket_ttl` value to `0`.
- (Experimental) Added a new client configuration option: `set_basic_auth_header`, which controls whether the `Authorization` header should be set for every outgoing HTTP request (enabled by default). One possible scenario where it is necessary to disable this header is when a custom HTTPS agent is used and the server requires TLS with certificates. For example:

const agent = new https.Agent({
  keepAlive: true,
  keepAliveMsecs: 2500,
  maxSockets: 10,
  maxFreeSockets: 10,
  ca: fs.readFileSync('./ca.crt'),
  cert: fs.readFileSync('./client.crt'),
  key: fs.readFileSync('./client.key'),
})
const client = createClient({
  url: 'https://myserver:8443',
  http_agent: agent,
  // With a custom HTTPS agent, the client won't use the default HTTPS connection implementation; the headers should be provided manually
  http_headers: {
    'X-ClickHouse-User': 'username',
    'X-ClickHouse-Key': 'password',
    'X-ClickHouse-SSL-Certificate-Auth': 'on',
  },
  // Important: the authorization header conflicts with the TLS headers; disable it.
  set_basic_auth_header: false,
})
NB: it is currently not possible to set the `set_basic_auth_header` option via the URL params.
See the doc entry regarding custom HTTP(s) agent usage with code samples.
If you have feedback on these experimental features, please let us know by creating an issue in the repository or by sending a message in the Community Slack (#clickhouse-js channel).
New features
- Added an option to override the credentials for a particular `query`/`command`/`exec`/`insert` request via the `BaseQueryParams.auth` setting; when set, the credentials will be taken from there instead of the username/password provided during the client instantiation. (#278)
- Added an option to override the `session_id` for a particular `query`/`command`/`exec`/`insert` request via the `BaseQueryParams.session_id` setting; when set, it will be used instead of the session id provided during the client instantiation. (@holi0317, #271)
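A sketch of both overrides together; the URL, credentials, and session id below are placeholders, and the `auth` shape is assumed from `BaseQueryParams.auth`:

```typescript
import { createClient } from '@clickhouse/client'

const client = createClient({ url: 'http://localhost:8123' }) // placeholder

async function queryAsOtherUser() {
  return client.query({
    query: 'SELECT currentUser()',
    format: 'JSONEachRow',
    // Overrides the credentials from the client configuration for this request only:
    auth: { username: 'another_user', password: 'another_password' },
    // Overrides the session id from the client configuration for this request only:
    session_id: 'custom-session-id',
  })
}
```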
Bug fixes
- Fixed the incorrect `ResponseJSON<T>.totals` TypeScript type. Now it correctly matches the shape of the data (`T`, default = `unknown`) instead of the former `Record<string, number>` definition. (#274)
Bug fixes
- The `command` method now drains the response stream properly, as the previous implementation could cause the Keep-Alive socket to close after each request.
- (Node.js) Removed an unnecessary error log in the `ResultSet.stream` method if the request was aborted or the result set was closed. (#263)
Improvements
Formal stable release milestone with many improvements and some breaking changes.
Major new features overview:
- Advanced TypeScript support for `query` + `ResultSet`
- URL configuration
From now on, the client will follow the official semantic versioning guidelines.
Deprecated API
The following configuration parameters are marked as deprecated:
- The `host` configuration parameter is deprecated; use `url` instead.
- The `additional_headers` configuration parameter is deprecated; use `http_headers` instead.
The client will log a warning if any of these parameters are used. However, it is still allowed to use `host` instead of `url` and `additional_headers` instead of `http_headers` for now; this deprecation is not supposed to break existing code.
These parameters will be removed in the next major release (2.0.0).
See the "New features" section for more details.
Breaking changes in 1.0.0
- `compression.response` is now disabled by default in the client configuration options, as it cannot be used with `readonly=1` users, and it was not clear from the ClickHouse error message which exact client option was causing the failing query in this case. If you'd like to continue using response compression, you should explicitly enable it in the client configuration.
- As the client now supports parsing the URL configuration, you should specify `pathname` as a separate configuration option (as it would be considered the `database` otherwise).
- (TypeScript only) `ResultSet` and `Row` are now more strictly typed, according to the format used during the `query` call.
- (TypeScript only) Both the Node.js and Web versions now uniformly export correct `ClickHouseClient` and `ClickHouseClientConfigOptions` types specific to each implementation. The exported `ClickHouseClient` no longer has a `Stream` type parameter, as it was unintended to expose it there. NB: you should still use the `createClient` factory function provided in the package.
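If you relied on the old default, response compression can be re-enabled explicitly; a config sketch, with a placeholder URL:

```typescript
import { createClient } from '@clickhouse/client'

const client = createClient({
  url: 'http://localhost:8123', // placeholder
  compression: {
    response: true, // was enabled by default before 1.0.0; now opt-in
  },
})
```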
New features in 1.0.0
Advanced TypeScript support for query + ResultSet
The client will now try its best to figure out the shape of the data based on the `DataFormat` literal specified in the `query` call, as well as which methods are allowed to be called on the `ResultSet`.
Complete reference:
| Format | ResultSet.json<T>() | ResultSet.stream<T>() | Stream data | Row.json<T>() |
|---|---|---|---|---|
| JSON | ResponseJSON<T> | never | never | never |
| JSONObjectEachRow | Record<string, T> | never | never | never |
| All other JSON*EachRow | Array<T> | Stream<Array<Row<T>>> | Array<Row<T>> | T |
| CSV/TSV/CustomSeparated/Parquet | never | Stream<Array<Row<T>>> | Array<Row<T>> | never |
By default, `T` (which represents the JSON type) is still `unknown`. However, considering the `JSONObjectEachRow` example: prior to 1.0.0, you had to specify the entire type hint, including the shape of the data, manually:
type Data = { foo: string }

const resultSet = await client.query({
  query: 'SELECT * FROM my_table',
  format: 'JSONObjectEachRow',
})

// pre-1.0.0: `resultOld` has type Record<string, Data>
const resultOld = resultSet.json<Record<string, Data>>()
// const resultOld = resultSet.json<Data>() // incorrect! The type hint should've been `Record<string, Data>` here.

// 1.0.0: `resultNew` also has type Record<string, Data>; the client inferred that it has to be a Record from the format literal.
const resultNew = resultSet.json<Data>()
This is even more handy in the case of streaming on the Node.js platform:
const resultSet = await client.query({
  query: 'SELECT * FROM my_table',
  format: 'JSONEachRow',
})

// pre-1.0.0
// `streamOld` was just a regular Node.js Stream.Readable
const streamOld = resultSet.stream()
// `rows` were `any` and needed an explicit type hint
streamOld.on('data', (rows: Row[]) => {
  rows.forEach((row) => {
    // without an explicit type hint for `rows`, calling `forEach` and other array methods resulted in TS compiler errors
    const t = row.text
    const j = row.json<Data>() // `j` needed a type hint here; otherwise, it's `unknown`
  })
})

// 1.0.0
// `streamNew` is now StreamReadable<T> (Node.js Stream.Readable with a bit more type hints);
// the type hint for the further `json` calls can be added here (and removed from the `json` calls)
const streamNew = resultSet.stream<Data>()
// `rows` are inferred as Array<Row<Data, "JSONEachRow">> instead of `any`
streamNew.on('data', (rows) => {
  rows.forEach((row) => {
    // `row` is inferred as Row<Data, "JSONEachRow">;
    // no explicit type hints required - you can use `forEach` straight away, and the TS compiler will be happy
    const t = row.text
    const j = row.json() // `j` will be of type Data
  })
})

// The async iterator now also has type hints.
// Similarly to the `on(data)` example above, `rows` are inferred as Array<Row<Data, "JSONEachRow">>
for await (const rows of streamNew) {
  rows.forEach((row) => {
    // `row` is inferred as Row<Data, "JSONEachRow">
    const t = row.text
    const j = row.json() // `j` will be of type Data
  })
}
Calling `ResultSet.stream` is not allowed for certain data formats, such as `JSON` and `JSONObjectEachRow` (unlike `JSONEachRow` and the rest of the `JSON*EachRow` family, these formats return a single object). In these cases, the client throws an error. However, this was previously not reflected at the type level; now, calling `stream` on these formats will result in a TS compiler error. For example:
const resultSet = await client.query({
  query: 'SELECT * FROM table',
  format: 'JSON',
})
const stream = resultSet.stream() // `stream` is `never`
Calling `ResultSet.json` also does not make sense with CSV and similar "raw" formats, and the client throws in that case. Again, it is now typed properly:
const resultSet = await client.query({
  query: 'SELECT * FROM table',
  format: 'CSV',
})
// `json` is `never`; same if you stream CSV and call `Row.json` - it will be `never`, too.
const json = resultSet.json()
Currently, there is one known limitation: as the general shape of the data and the allowed methods are inferred from the format literal, there might be situations where the client fails to infer them, for example:
// assuming that `queryParams` has the `JSONObjectEachRow` format inside
async function runQuery(
  queryParams: QueryParams,
): Promise<Record<string, Data>> {
  const resultSet = await client.query(queryParams)
  // the type hint here will provide a union of all known shapes instead of a specific one
  // inferred shapes: Data[] | ResponseJSON<Data> | Record<string, Data>
  return resultSet.json<Data>()
}
In this case, as it is likely that you already know the desired format in advance (otherwise, returning a specific shape like Record<string, Data> would've been incorrect), consider helping the client a bit:
async function runQuery(
  queryParams: QueryParams,
): Promise<Record<string, Data>> {
  const resultSet = await client.query({
    ...queryParams,
    format: 'JSONObjectEachRow',
  })
  // TS understands that it is a Record<string, Data> now
  return resultSet.json<Data>()
}
If you are interested in more details, see the related test (featuring a great ESLint plugin expect-types) in the client package.
URL configuration
- Added the `url` configuration parameter. It is intended to replace the deprecated `host`, which was already supposed to be passed as a valid URL.
- It is now possible to configure most of the client instance parameters with a URL. The URL format is `http[s]://[username:password@]hostname:port[/database][?param1=value1&param2=value2]`. In almost every case, the name of a particular parameter reflects its path in the config options interface, with a few exceptions. The following parameters are supported:
| Parameter | Type |
|---|---|
| `pathname` | an arbitrary string. |
| `application_id` | an arbitrary string. |
| `session_id` | an arbitrary string. |
| `request_timeout` | non-negative number. |
| `max_open_connections` | positive number. |
| `compression_request` | boolean. See below [1]. |
| `compression_response` | boolean. |
| `log_level` | allowed values: `OFF`, `TRACE`, `DEBUG`, `INFO`, `WARN`, `ERROR`. |
| `keep_alive_enabled` | boolean. |
| `clickhouse_setting_*` or `ch_*` | see below [2]. |
| `http_header_*` | see below [3]. |
| (Node.js only) `keep_alive_idle_socket_ttl` | non-negative number. |
[1] For booleans, valid values are `true`/`1` and `false`/`0`.
[2] Any parameter prefixed with `clickhouse_setting_` or `ch_` will have this prefix removed and the rest added to the client's `clickhouse_settings`. For example, `?ch_async_insert=1&ch_wait_for_async_insert=1` will be the same as:
createClient({
  clickhouse_settings: {
    async_insert: 1,
    wait_for_async_insert: 1,
  },
})
Note: boolean values for `clickhouse_settings` should be passed as `1`/`0` in the URL.
[3] Similar to [2], but for the `http_headers` configuration. For example, `?http_header_x-clickhouse-auth=foobar` will be equivalent to:
createClient({
  http_headers: {
    'x-clickhouse-auth': 'foobar',
  },
})
Important: the URL will always overwrite the hardcoded values, and a warning will be logged in this case.
Currently not supported via the URL:
- `log.LoggerClass`
- (Node.js only) `tls_ca_cert`, `tls_cert`, `tls_key`.
See also: URL configuration example.
Performance
- (Node.js only) Improved performance when decoding the entire set of rows with streamable JSON formats (such as `JSONEachRow` or `JSONCompactEachRow`) by calling the `ResultSet.json()` method. NB: the actual streaming performance when consuming `ResultSet.stream()` hasn't changed. Only the `ResultSet.json()` method used suboptimal stream processing in some instances; now `ResultSet.json()` just consumes the same stream transformer provided by the `ResultSet.stream()` method (see #253 for more details).
Miscellaneous
- Added the `http_headers` configuration parameter as a direct replacement for `additional_headers`. Functionally, it is the same; the change is purely cosmetic, as we'd like to leave the option of implementing a TCP connection open in the future.
Bug fixes
- Fixed an issue where query parameters containing tabs or newline characters were not encoded properly (#249).
This release primarily focuses on improving the Keep-Alive mechanism's reliability on the client side.
New features
- Idle sockets timeout rework: the client now attaches internal timers to idling sockets and forcefully removes them from the pool if a particular socket is considered to have been idling for too long. This additional socket housekeeping is intended to eliminate the "Socket hang-up" errors that could previously still occur with certain configurations. The client no longer relies on the KeepAlive agent for removing idling sockets; in most cases, the server will not close the socket before the client does.
- There is a new `keep_alive.idle_socket_ttl` configuration parameter. The default value is `2500` (milliseconds), which is considered safe, as ClickHouse versions prior to 23.11 had `keep_alive_timeout` set to 3 seconds by default, and `keep_alive.idle_socket_ttl` is supposed to be slightly less than that, allowing the client to remove sockets that are about to expire before the server does so.
- Logging improvements: more internal logs on failing requests; all client methods except `ping` will now log an error on failure. A failed `ping` will log a warning, since the underlying error is returned as part of its result. Client logging still needs to be enabled explicitly by specifying the desired `log.level` config option, as the log level is `OFF` by default. Currently, the client logs the following events, depending on the selected `log.level` value:
  - `TRACE` - low-level information about the Keep-Alive sockets lifecycle.
  - `DEBUG` - response information (without authorization headers and host info).
  - `INFO` - still mostly unused; will print the current log level when the client is initialized.
  - `WARN` - non-fatal errors; a failed `ping` request is logged as a warning, as the underlying error is included in the returned result.
  - `ERROR` - fatal errors from the `query`/`insert`/`exec`/`command` methods, such as a failed request.
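The TTL guidance above translates to a config like this; a sketch with a placeholder URL:

```typescript
import { createClient } from '@clickhouse/client'

const client = createClient({
  url: 'http://localhost:8123', // placeholder
  keep_alive: {
    enabled: true,
    // Keep this slightly below the server's keep_alive_timeout
    // (3000 ms by default before ClickHouse 23.11):
    idle_socket_ttl: 2500,
  },
})
```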
Breaking changes
- The `keep_alive.retry_on_expired_socket` and `keep_alive.socket_ttl` configuration parameters are removed.
- The `max_open_connections` configuration parameter now defaults to 10, as we should not rely on the KeepAlive agent's defaults.
- Fixed the default `request_timeout` configuration value (now it is correctly set to `30_000`; previously `300_000` (milliseconds)).
Bug fixes
- Fixed a bug with `ping` that could lead to an unhandled "Socket hang-up" error propagation.
- The client ensures a proper `Connection` header value considering the Keep-Alive settings. If Keep-Alive is disabled, its value is now forced to `close`.
New features
- If `InsertParams.values` is an empty array, no request is sent to the server and `ClickHouseClient.insert` short-circuits. In this scenario, the newly added `InsertResult.executed` flag will be `false`, and `InsertResult.query_id` will be an empty string.
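A sketch of the short-circuit behavior; the URL and table name are placeholders, and no HTTP request is made when `values` is empty:

```typescript
import { createClient } from '@clickhouse/client'

const client = createClient({ url: 'http://localhost:8123' }) // placeholder

async function insertNothing() {
  const result = await client.insert({
    table: 'my_table',
    values: [], // nothing to insert - the client short-circuits
    format: 'JSONEachRow',
  })
  console.log(result.executed) // false
  console.log(result.query_id) // ''
}
```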
Bug fixes
- The client no longer produces a `Code: 354. inflate failed: buffer error` exception if request compression is enabled and `InsertParams.values` is an empty array (see above).
New features
- It is now possible to set additional HTTP headers for outgoing ClickHouse requests. This might be useful if, for example, you have a reverse proxy with authorization. (@teawithfruit)
const client = createClient({
  additional_headers: {
    'X-ClickHouse-User': 'clickhouse_user',
    'X-ClickHouse-Key': 'clickhouse_password',
  },
})
New features
- (Web only) Allowed modifying the Keep-Alive setting (previously always disabled). Keep-Alive is now enabled by default for the Web version.
import { createClient } from '@clickhouse/client-web'
const client = createClient({ keep_alive: { enabled: true } })
- (Node.js & Web) It is now possible to either specify a list of columns to insert the data into or a list of excluded columns:
// Generated query: INSERT INTO mytable (message) FORMAT JSONEachRow
await client.insert({
  table: 'mytable',
  format: 'JSONEachRow',
  values: [{ message: 'foo' }],
  columns: ['message'],
})

// Generated query: INSERT INTO mytable (* EXCEPT (message)) FORMAT JSONEachRow
await client.insert({
  table: 'mytable',
  format: 'JSONEachRow',
  values: [{ id: 42 }],
  columns: { except: ['message'] },
})
See also the new examples in the repository.
New features
- (Node.js only) The `X-ClickHouse-Summary` response header is now parsed when working with the `insert`/`exec`/`command` methods. See the related test for more details. NB: it is guaranteed to be correct only for non-streaming scenarios. The Web version does not currently support this due to CORS limitations. (#210)
Bug fixes
- Drain the insert response stream in the Web version; required to work properly with `async_insert`, especially in the Cloudflare Workers context.
New features
- Added Parquet format streaming support to the Node.js client. Examples: insert from a file, select into a file.
Bug fixes
- The `pathname` segment of the `host` client configuration parameter is now handled properly when making requests. See this comment for more details.
New features
- Added the missing `default_format` setting, which allows performing `exec` calls without the `FORMAT` clause. See the example.
Breaking changes
Date objects in query parameters are now serialized as time-zone-agnostic Unix timestamps (`NNNNNNNNNN[.NNN]`, optionally with millisecond precision) instead of datetime strings without time zones (`YYYY-MM-DD HH:MM:SS[.MMM]`). This means the server receives the same absolute timestamp the client sent, even if the client's time zone and the database server's time zone differ. Previously, if the server used one time zone and the client another, Date objects would be encoded in the client's time zone and decoded in the server's time zone, creating a mismatch.
For instance, if the server used UTC (GMT) and the client used PST (GMT-8), a Date object for "2023-01-01 13:00:00 PST" would be encoded as "2023-01-01 13:00:00.000" and decoded as "2023-01-01 13:00:00 UTC" (which is 2023-01-01 05:00:00 PST). Now, "2023-01-01 13:00:00 PST" is encoded as "1672606800000" and decoded as "2023-01-01 21:00:00 UTC", the same time the client sent.
Props to @ide for implementing it.
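A sketch of the documented `NNNNNNNNNN[.NNN]` encoding, assuming the format description above; the helper name is illustrative and not the client's internal API.

```typescript
// Illustrative helper: render a Date as a time-zone-agnostic Unix
// timestamp in seconds, keeping millisecond precision only when the
// Date actually carries sub-second information (NNNNNNNNNN[.NNN]).
function toUnixTimestamp(date: Date): string {
  const ms = date.getTime()
  return ms % 1000 === 0 ? String(ms / 1000) : (ms / 1000).toFixed(3)
}

// 2023-01-01 13:00:00 PST is 2023-01-01 21:00:00 UTC:
toUnixTimestamp(new Date(Date.UTC(2023, 0, 1, 21, 0, 0))) // '1672606800'
toUnixTimestamp(new Date(1672606800123))                  // '1672606800.123'
```

Because the value is an absolute timestamp, the server's session time zone no longer affects which instant is stored.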
Introduces web client (using native fetch and WebStream APIs) without Node.js modules in the common interfaces. No polyfills are required.
The web client is confirmed to work with Chrome/Firefox/CloudFlare workers.
It is now possible to implement new custom connections on top of @clickhouse/client-common.
The repository was refactored into three packages:
- `@clickhouse/client-common`: all possible platform-independent code, types and interfaces
- `@clickhouse/client-web`: new web (or non-Node.js env) connection; uses native fetch
- `@clickhouse/client`: Node.js connection as it was before
Node.js client breaking changes
- Changed `ping` method behavior: it will not throw now. Instead, either `{ success: true }` or `{ success: false, error: Error }` is returned.
- Log level configuration parameter is now explicit instead of the `CLICKHOUSE_LOG_LEVEL` environment variable. Default is `OFF`.
- `query` return type signature changed to `BaseResultSet<Stream.Readable>` (no functional changes)
- `exec` return type signature changed to `ExecResult<Stream.Readable>` (no functional changes)
- `insert<T>` params argument type changed to `InsertParams<Stream, T>` (no functional changes)
- Experimental `schema` module is removed
Web client known limitations
- Streaming for select queries works, but it is disabled for inserts (on the type level as well).
- Keep-Alive is disabled and not configurable yet.
- Request compression is disabled and configuration is ignored. Response compression works.
- No logging support yet.
New features
- Expired socket detection on the client side when using Keep-Alive. If a potentially expired socket is detected, and retry is enabled in the configuration, both socket and request will be immediately destroyed (before sending the data), and the client will recreate the request. See `ClickHouseClientConfigOptions.keep_alive` for more details. Disabled by default.
- Allow disabling the Keep-Alive feature entirely.
- New `TRACE` log level.
Examples
Disable Keep-Alive feature
const client = createClient({
keep_alive: {
enabled: false,
},
})
Retry on expired socket
const client = createClient({
keep_alive: {
enabled: true,
// should be slightly less than the `keep_alive_timeout` setting in server's `config.xml`
// default is 3s there, so 2500 milliseconds seems to be a safe client value in this scenario
// another example: if your configuration has `keep_alive_timeout` set to 60s, you could put 59_000 here
socket_ttl: 2500,
retry_on_expired_socket: true,
},
})
Breaking changes
- The `connect_timeout` client setting is removed, as it was unused in the code.
New features
- The `command` method is introduced as an alternative to `exec`. `command` does not expect the user to consume the response stream, and it is destroyed immediately. Essentially, this is a shortcut to `exec` that destroys the stream under the hood. Consider using `command` instead of `exec` for DDLs and other custom commands which do not provide any valuable output.
Example:
// incorrect: the stream is not consumed or destroyed, so the request will eventually time out
await client.exec('CREATE TABLE foo (id String) ENGINE Memory')
// correct: the stream does not contain any useful information and is destroyed immediately
const { stream } = await client.exec('CREATE TABLE foo (id String) ENGINE Memory')
stream.destroy()
// correct: same as exec + stream.destroy()
await client.command('CREATE TABLE foo (id String) ENGINE Memory')
Breaking changes
- Node.js 14 support is dropped, as its maintenance phase ended in April 2023. Node.js 16+ is now required to use the client.
Bug fixes
- Fix NULL parameter binding. As the HTTP interface expects `\N` instead of a `'NULL'` string, it is now correctly handled for both `null` and explicitly `undefined` parameters. See the test scenarios for more details.
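The escaping rule can be sketched as follows; the helper name is hypothetical (the client handles this internally during parameter binding).

```typescript
// Hypothetical helper illustrating the rule: null-ish query parameter
// values must be sent as \N, not as the string 'NULL'.
function formatParam(value: string | number | null | undefined): string {
  if (value === null || value === undefined) {
    return '\\N' // the two-character NULL marker the HTTP interface expects
  }
  return String(value)
}

formatParam(null)      // '\N'
formatParam(undefined) // '\N'
formatParam('NULL')    // the literal string 'NULL' remains ordinary data
```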
Bug fixes
- Fix Node.js 19.x/20.x timeout error (@olexiyb)
New features
- Added support for the `JSONStrings`, `JSONCompact`, `JSONCompactStrings`, `JSONColumnsWithMetadata` formats (@andrewzolotukhin).
New features
- `query_id` can now be overridden for all main client methods: `query`, `exec`, `insert`.
New features
- `ResultSet.query_id` contains a unique query identifier that might be useful for retrieving query metrics from `system.query_log`.
- `User-Agent` HTTP header is set according to the language client spec. For example, for client version 0.0.12 and Node.js runtime v19.0.4 on the Linux platform, it will be `clickhouse-js/0.0.12 (lv:nodejs/19.0.4; os:linux)`. If `ClickHouseClientConfigOptions.application` is set, it will be prepended to the generated `User-Agent`.
- Run tests on `nodejs@v19`.
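The described `User-Agent` format can be sketched like this; the function and its parameters are illustrative, not part of the client's public API.

```typescript
// Build a User-Agent like: clickhouse-js/0.0.12 (lv:nodejs/19.0.4; os:linux)
function buildUserAgent(
  clientVersion: string,
  nodeVersion: string,
  os: string,
  application?: string,
): string {
  const base = `clickhouse-js/${clientVersion} (lv:nodejs/${nodeVersion}; os:${os})`
  // When the application name is configured, it is prepended to the value.
  return application ? `${application} ${base}` : base
}

buildUserAgent('0.0.12', '19.0.4', 'linux')
// 'clickhouse-js/0.0.12 (lv:nodejs/19.0.4; os:linux)'
buildUserAgent('0.0.12', '19.0.4', 'linux', 'my-app')
// 'my-app clickhouse-js/0.0.12 (lv:nodejs/19.0.4; os:linux)'
```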
Breaking changes
- `client.insert` now returns `{ query_id: string }` instead of `void`.
- `client.exec` now returns `{ stream: Stream.Readable, query_id: string }` instead of just `Stream.Readable`.
Breaking changes
- `log.enabled` flag was removed from the client configuration.
- Use the `CLICKHOUSE_LOG_LEVEL` environment variable instead. Possible values: `OFF`, `TRACE`, `DEBUG`, `INFO`, `WARN`, `ERROR`. Currently, there are only debug messages, but we will log more later.
For more details, see PR #110
Remove request listeners synchronously (#124) - closed issue #123
See #121
import { createClient } from '@clickhouse/client'
const client = createClient({
session_id: `<session_id>`,
})
Kudos to @KMahoney
Added SSL/TLS support (basic and mutual), resolving #52
Resolved #116
See #113
- Breaking change: the `Rows` abstraction was renamed to `ResultSet`.
- Breaking change: now, every iteration over `ResultSet.stream()` yields `Row[]` instead of a single `Row`. Please check out an example and this PR for more details. These changes allowed us to significantly reduce overhead on select result set streaming.
- Updated README and examples to cover the changes.
- `split2` is no longer a package dependency.
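The new iteration shape can be sketched with a mock stream; the `Row` type and the async generator below are stand-ins for `ResultSet.stream()`, so no server connection is involved.

```typescript
// Mock of the new streaming shape: each chunk from ResultSet.stream()
// is now an array of rows rather than a single row.
interface Row {
  text: string
}

async function* mockStream(): AsyncGenerator<Row[]> {
  yield [{ text: '{"id":1}' }, { text: '{"id":2}' }]
  yield [{ text: '{"id":3}' }]
}

async function collectIds(): Promise<number[]> {
  const ids: number[] = []
  // Note the Row[] per iteration - iterate the inner array as well.
  for await (const rows of mockStream()) {
    for (const row of rows) {
      ids.push(JSON.parse(row.text).id as number)
    }
  }
  return ids
}
```

Batching rows per chunk is what reduces the per-row overhead mentioned above: consumers handle one array per event instead of one event per row.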
Remove package.json dependency