Refactor: standard install/start/check/stop/load/query interface per system#860
Open

alexey-milovidov wants to merge 7 commits into main from …/data-size
Each local system now exposes a small set of single-purpose scripts with a
stable contract, so they can be driven by a shared lib/benchmark-common.sh
and reused by external tooling (e.g. an online "run query against system X"
service):
install: env prep + system install (idempotent)
start: start daemon (idempotent; empty for stateless tools)
check: trivial query; exit 0 iff responsive
stop: stop daemon (idempotent)
load: runs create.sql + loads data, deletes source files, syncs
query: SQL on stdin; result on stdout; runtime in fractional seconds on the last line of stderr; non-zero exit on error
data-size: prints data footprint in bytes (one integer to stdout)
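A query script satisfying this contract could be sketched as follows. This is a hypothetical illustration written as a shell function rather than a standalone ./query file; ENGINE_CMD is a made-up placeholder for the system's CLI, with "cat" standing in so the sketch runs anywhere.

```shell
# Hypothetical sketch of the per-system "query" contract: SQL on stdin,
# result on stdout, runtime in fractional seconds as the last line of
# stderr, non-zero exit on error. ENGINE_CMD is a placeholder.
run_query() {
  engine="${ENGINE_CMD:-cat}"
  query=$(cat)                                   # SQL arrives on stdin
  start=$(date +%s.%N)
  printf '%s\n' "$query" | $engine || return 1   # result on stdout
  end=$(date +%s.%N)
  # runtime in fractional seconds, as the last line of stderr
  awk -v s="$start" -v e="$end" 'BEGIN { printf "%.3f\n", e - s }' >&2
}
```

Keeping the result on stdout and the timing on stderr is what lets the driver capture both independently.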
Each system's old monolithic benchmark.sh is replaced by a 4-line shim that
sets a couple of env vars (BENCH_DOWNLOAD_SCRIPT, BENCH_RESTARTABLE) and
exec's lib/benchmark-common.sh. The shared driver runs the unified flow:
install -> start+check -> download -> load (timed) -> for each query
{flush caches; optionally stop+start to neutralize warm-process effects;
run query 3x} -> data-size -> stop. Output format ([t1,t2,t3], Load time,
Data size) matches the previous benchmark.sh exactly so cloud-init.sh.in's
log POST to play.clickhouse.com keeps working unchanged.
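The per-query portion of that flow could be sketched like this. Helper and file names are illustrative, cache flushing is elided since it needs root, and "null" stands in for a failed try, matching the framework's error path.

```shell
# Hypothetical sketch of the shared driver's per-query loop: three
# timed tries, optional stop/start between tries when the system is
# restartable, "null" recorded when ./query exits non-zero.
run_tries() {
  qfile="$1"; out=""
  for try in 1 2 3; do
    if [ "$BENCH_RESTARTABLE" = "yes" ]; then ./stop && ./start; fi
    if ./query < "$qfile" > /dev/null 2> /tmp/q.stderr; then
      t=$(tail -n 1 /tmp/q.stderr)   # fractional seconds, by contract
    else
      t="null"
    fi
    out="$out,$t"
  done
  printf '[%s]\n' "${out#,}"         # e.g. [0.123,0.045,0.044]
}
```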
For dataframe/in-process systems (pandas, polars-dataframe, chdb-dataframe,
daft-parquet*, duckdb-dataframe, sirius), the engine is wrapped in a small
FastAPI server (server.py) so the start/stop/query interface still applies.
BENCH_RESTARTABLE=no for these (and for embedded CLIs like duckdb, sqlite,
datafusion, etc.) since restarting a single Python/CLI process between
queries would dominate query time.
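For such a wrapped engine, check and query reduce to HTTP calls against the local server. A hypothetical sketch; the endpoint path and port here are made up for illustration and are not taken from the PR:

```shell
# Hypothetical check/query pair for a FastAPI-wrapped engine, assuming
# a /query endpoint on localhost:18080 that accepts the SQL text as the
# POST body (port and path are illustrative assumptions).
check() { printf 'SELECT 1' | curl -fsS --data-binary @- http://localhost:18080/query > /dev/null; }
query() { curl -fsS --data-binary @- http://localhost:18080/query; }
```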
Scope: 88 local systems refactored. Cloud/managed systems and a handful of
non-functional ones (csvq, dsq, locustdb, mongodb, polars CLI, exasol,
spark-velox) are intentionally left untouched.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Resolves conflict in clickhouse-datalake{,-partitioned}: upstream switched
the datalake variants from filesystem-cache to userspace page-cache (PR #818).
The refactored install/query scripts now adopt the page-cache approach.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
mongodb: query takes a MongoDB aggregation pipeline (Extended JSON, one line) on stdin instead of SQL; these are the same canonical 43 ClickBench queries, just expressed as mongo pipelines. queries.txt is generated from queries.js (the source of truth) by replacing JS-only constructors (NumberLong, ISODate, NumberDecimal) with their EJSON canonical form. The shim sets BENCH_QUERIES_FILE=queries.txt to point the driver at it.

polars: wrapped in a FastAPI server analogous to polars-dataframe, but the load step uses pl.scan_parquet (LazyFrame), so the parquet file remains needed at query time; the load script does NOT delete hits.parquet. data-size returns the on-disk parquet size, since a LazyFrame has no materialized in-memory size.

Both systems now expose the standard install/start/check/stop/load/query/data-size scripts and a 4-line benchmark.sh shim, removing the old benchmark.sh / run.js / query.py / formatResult.js paths.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
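The polars data-size behavior described above could be sketched as a tiny helper. This is a hypothetical illustration, not the actual script; the point is simply that the on-disk parquet size is reported as one integer on stdout.

```shell
# Hypothetical data-size helper: a LazyFrame has no materialized
# in-memory size, so report the on-disk parquet file's byte size as a
# single integer on stdout (GNU stat, with a POSIX wc fallback).
data_size() {
  stat -c %s "$1" 2>/dev/null || wc -c < "$1"
}
```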
alexey-milovidov commented May 7, 2026
…use in query

Per review: clickhouse-local persists table metadata in its --path dir, so the CREATE TABLE only needs to run once during ./load. ./query just runs the query against the persisted table.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
alexey-milovidov commented May 7, 2026
…atively

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
… readiness

Per review (alexey-milovidov): clickhouse start leaves the system in the desired state (server running) even when it returns non-zero with "already running". Make the shared driver tolerate non-zero from ./start and rely on bench_check_loop as the authoritative readiness signal. This lets per-system start scripts stay simple; they just need to make a best-effort attempt to launch.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
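A readiness loop in that spirit might look like the following. The function body is an assumption for illustration; the real bench_check_loop in lib/benchmark-common.sh may differ.

```shell
# Hypothetical readiness loop: ./start is best-effort (non-zero exit
# tolerated, e.g. "already running"); a passing ./check is the
# authoritative signal that the system is up.
bench_check_loop() {
  retries="${1:-30}"
  ./start || true
  i=0
  while [ "$i" -lt "$retries" ]; do
    ./check 2>/dev/null && return 0
    sleep 1
    i=$((i + 1))
  done
  echo "system did not become responsive" >&2
  return 1
}
```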
prmoore77 added a commit to gizmodata/ClickBench that referenced this pull request on May 7, 2026
…ouse#860

Adopts the per-system 7-script interface from ClickHouse#860 for gizmosql/, and replaces the Java sqlline-based gizmosqlline client with the C++ gizmosql_client shell that ships with gizmosql_server.

Scripts (matching the contract from lib/benchmark-common.sh):

benchmark.sh: 4-line shim that exec's ../lib/benchmark-common.sh
install: apt + curl gizmosql_cli_linux_$ARCH.zip; no openjdk, no separate gizmosqlline download
start: idempotent server bring-up (skips if port 31337 is open)
check: cheap TCP probe (auth-gated SQL would need credentials)
stop: kills tracked PID; pkill belt-and-braces fallback
load: rm -f clickbench.db, then create.sql + load.sql via gizmosql_client; deletes hits.parquet and syncs
query: reads one query from stdin, runs it via gizmosql_client with .timer on + .mode trash; emits fractional seconds as the last stderr line (parsed from "Run Time: X.XXs")
data-size: wc -c clickbench.db

Notes:
- BENCH_DOWNLOAD_SCRIPT=download-hits-parquet-single, BENCH_RESTARTABLE=yes (gizmosql is a server, so per-query restart neutralizes warm-process effects, matching the clickhouse/postgres pattern in ClickHouse#860).
- util.sh now exports GIZMOSQL_HOST/PORT/USER/PASSWORD, the env vars gizmosql_client reads natively, so query/load can call gizmosql_client with no flags. The server still receives the username via --username.
- PID_FILE moved to a stable /tmp path (was /tmp/gizmosql_server_$$.pid, which broke across the start/stop process boundary in the new layout).

This PR depends on ClickHouse#860 (which introduces lib/benchmark-common.sh and the contract). Once ClickHouse#860 lands, this PR's diff against main will be only the gizmosql/ files.

Validated locally on macOS with gizmosql v1.22.4: the query script emits the expected fractional-seconds last line with correct stdout/stderr separation, and exits non-zero on error paths.

See https://docs.gizmosql.com/#/client for gizmosql_client docs.
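Extracting the timing in that shape could be sketched as a small helper. This is a hypothetical illustration; the actual gizmosql query script's parsing may differ.

```shell
# Hypothetical helper: pull the seconds value out of a
# "Run Time: 1.23s" timer line in a captured output file and emit it
# as the last line of stderr, per the query contract.
emit_runtime() {
  sed -n 's/.*Run Time: \([0-9.]*\)s.*/\1/p' "$1" | tail -n 1 >&2
}
```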
Summary
Refactors every local system's benchmark.sh into 7 single-purpose scripts (install, start, check, stop, load, query, data-size) with a stable contract, driven by a new shared lib/benchmark-common.sh.

Why
Previously, every system's benchmark.sh bundled installation, server lifecycle, dataset download, data loading, and query dispatch into one script, and run.sh hard-coded the per-query orchestration. There was no programmatic per-query entry point, so run.sh ran all 3 tries inside a single CLI invocation, and OS-cache warmth from try 1 leaked into tries 2/3.

The new per-system interface
install: environment prep + system install (idempotent)
start: start the daemon (idempotent; empty for stateless tools)
check: run a trivial query (e.g. SELECT 1); exit 0 iff responsive
stop: stop the daemon (idempotent)
load: run create.sql, load the data, delete source files, sync
query: SQL on stdin; result on stdout; runtime in fractional seconds (e.g. 0.123) on the last line of stderr
data-size: print the data footprint in bytes (one integer to stdout)

Each system's benchmark.sh becomes a 4-line shim that sets a couple of env vars and exec's the shared driver.

The shared driver runs install → start+check → download → load (timed) → for each query: flush caches; if BENCH_RESTARTABLE=yes, stop+start; run query 3× → data-size → stop. The output log shape (Load time:, [t1,t2,t3] per query, Data size:) is identical to the old benchmark.sh, so cloud-init.sh.in's POST to play.clickhouse.com keeps working unchanged.

BENCH_RESTARTABLE=no for embedded CLIs (duckdb, sqlite, datafusion, …) and dataframe wrappers, since restarting a single CLI/Python process between queries would dominate query time. For these, OS caches are still flushed between queries.

Scope
Refactored (88 systems):
Not refactored (intentionally out of scope):
Validated end-to-end on a 96-core / 185 GB ARM machine. Failed queries surface as null in the output (the framework's error path works). All 88 refactored systems pass bash -n and have executable bits set on the 7 scripts + benchmark.sh.

Bug fixes surfaced during validation
- lib/benchmark-common.sh: data-size now runs before stop (clickhouse and pandas need the server up to report size).
- clickhouse/start: idempotent (was erroring when already running).
- duckdb/load, sqlite/load: rm -f hits.db / mydb for idempotent reruns.
- postgresql/load: -v ON_ERROR_STOP=1 so COPY data errors actually fail the script instead of silently rolling back.
- BENCH_DOWNLOAD_SCRIPT may now be empty for systems that read directly from S3 datalakes / remote services (clickhouse-datalake*, duckdb-datalake*, chyt, …).

Flagged for follow-up review
- duckdb-memory: :memory: semantics force a per-query reload; will inflate timings vs. the original single-process flow.
- cloudberry, greenplum: multi-phase install (reboot between phases); the shim only runs phase 1.
- sirius: GPU-dependent; long-lived duckdb CLI subprocess proxy; review the stdin/sentinel protocol.
- paradedb*, pg_ducklake, pg_mooncake: Docker container created in install, then docker cp in load (a small divergence from the original docker run -v ..., due to the lifecycle order: start runs before download).

Test plan
- bash -n on all 88 systems' scripts

🤖 Generated with Claude Code