A tool for analyzing text against predefined topics using average weight embeddings and cosine similarity.
Fast Topic Analysis identifies topic matches in text with high precision. It uses embedding-based semantic analysis with a clustering approach to detect nuanced topic variations.
Key features:
- Multiple Embeddings Per Topic: Creates several weighted average embeddings for each topic instead of a single representation, capturing different semantic variations
- Embedding Clustering: Groups similar phrases within topics to form coherent semantic clusters using agglomerative or HDBSCAN algorithms
- Cohesion & Silhouette Scoring: Measures cluster quality via per-cluster cohesion and global silhouette score
- Configurable Precision: Offers preset configurations for different use cases (high precision, balanced, performance)
- Fast Processing: Optimized for efficient text analysis with minimal processing time
- Powered by `embedding-utils`: All vector math, clustering, similarity, and embedding generation provided by the `embedding-utils` library
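Conceptually, matching a sentence against a topic reduces to comparing the sentence embedding with each of the topic's cluster centroids. A minimal sketch (function names are illustrative, not the project's; the real logic lives in `run-demo.js` and `embedding-utils`):

```javascript
// Cosine similarity between two equal-length vectors.
function cosineSimilarity(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// A sentence matches a topic if any cluster centroid exceeds the topic's threshold.
function matchTopic(sentenceEmbedding, clusterCentroids, threshold) {
  const best = Math.max(
    ...clusterCentroids.map((c) => cosineSimilarity(sentenceEmbedding, c))
  );
  return { matched: best >= threshold, score: best };
}
```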
The project has two main .js files:
- A generator (`generate.js`) that creates topic embeddings from training data
- An interactive demo (`run-demo.js`) that analyzes text against these topic embeddings

The tool now supports clustering of embeddings within each topic.

Install dependencies:

```bash
npm install
```

Generate the topic embeddings:

```bash
node generate.js
```

This will:
- Clean the `data/topic_embeddings` directory
- Process training data from `data/training_data.jsonl`
- Generate embeddings for each topic defined in `labels-config.js`
- Cluster similar embeddings within each topic
- Save multiple embeddings per topic as JSON files in `data/topic_embeddings/`
You can customize the clustering behavior using command-line arguments:

```bash
# Use a predefined configuration preset
node generate.js --preset high-precision

# Customize individual parameters
node generate.js --similarity-threshold 0.92 --max-clusters 3
```

Available presets:
- `high-precision`: Optimized for maximum accuracy with more granular clusters (`CLUSTERING_SIMILARITY_THRESHOLD=0.95`, `CLUSTERING_MIN_CLUSTER_SIZE=3`, `CLUSTERING_MAX_CLUSTERS=8`)
- `balanced`: Default settings for good precision and performance (`CLUSTERING_SIMILARITY_THRESHOLD=0.9`, `CLUSTERING_MIN_CLUSTER_SIZE=5`, `CLUSTERING_MAX_CLUSTERS=5`)
- `performance`: Optimized for speed with fewer clusters (`CLUSTERING_SIMILARITY_THRESHOLD=0.85`, `CLUSTERING_MIN_CLUSTER_SIZE=10`, `CLUSTERING_MAX_CLUSTERS=3`)
- `legacy`: Disables clustering for backward compatibility (`ENABLE_CLUSTERING=false`)
Command-line options for `generate.js`:

- `--preset, -p <name>`: Use a predefined configuration preset
- `--enable-clustering <bool>`: Enable or disable clustering (`true`/`false`)
- `--similarity-threshold <num>`: Set similarity threshold for clustering (0-1)
- `--min-cluster-size <num>`: Set minimum cluster size
- `--max-clusters <num>`: Set maximum number of clusters per topic
- `--algorithm <name>`: Clustering algorithm to use (`default` or `hdbscan`)
- `--incremental`: Update clusters incrementally (new JSONL entries only)
- `--help`: Show help message
For more options, run:

```bash
node generate.js --help
```

After an initial full generation, you can incrementally update clusters when new training phrases are added:

- Append new phrases to `data/training_data.jsonl`
- Run incremental generation:

```bash
node generate.js --incremental
```

This will:
- Validate the manifest (data integrity + model consistency)
- Embed only the new phrases
- Assign each new embedding to the nearest existing cluster using `assignToCluster()`
- Update cluster centroids via incremental weighted averaging
- Update the manifest for future incremental runs
Incremental mode requires a prior full generation (which creates a manifest file). If training data has been edited (not just appended), or the model/precision has changed, a full regeneration is required.
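The incremental weighted-averaging step can be sketched as follows (`updateCentroid` is a hypothetical helper for illustration, not the project's actual function):

```javascript
// Fold one new embedding into an existing centroid without re-averaging
// the whole cluster: new mean = (old mean * n + x) / (n + 1).
function updateCentroid(centroid, count, newEmbedding) {
  const updated = new Float32Array(centroid.length);
  for (let i = 0; i < centroid.length; i++) {
    updated[i] = (centroid[i] * count + newEmbedding[i]) / (count + 1);
  }
  return updated;
}
```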
Run the interactive demo:

```bash
node run-demo.js
```

The test runner provides an interactive interface to:
- Choose logging verbosity
- Optionally show matched sentences if verbose logging is disabled
- Select a test message file to analyze

You can also specify a test message directly:

```bash
node run-demo.js 1
node run-demo.js message-1.txt
```

Command-line options for `run-demo.js`:

- `--verbose, -v`: Enable verbose logging
- `--quiet, -q`: Disable verbose logging
- `--show-matches, -s`: Show matched sentences
- `--hide-matches, -h`: Hide matched sentences
- `--help`: Show help message
Configuration preferences (last used file, verbosity, etc.) are automatically saved in `run-demo-config.json`.
The first time a model is used (e.g. by `generate.js` or `run-demo.js`), it will be downloaded and cached to the directory specified in `.env`. Subsequent runs are fast because the model is loaded from the cache.
The analysis will show:
- Similarity scores between the test text and each topic cluster
- Which specific cluster matched each sentence
- Execution time
- Total comparisons made
- Number of matches found
- Model information
```
├── data/
│   ├── training_data.jsonl              # Training data
│   ├── incremental-manifest.json        # Incremental processing state
│   └── topic_embeddings/                # Generated embeddings
├── test-messages/                       # Test files
├── modules/
│   ├── embedding.js                     # Thin wrapper around embedding-utils provider
│   ├── manifest.js                      # Incremental manifest operations
│   └── utils.js                         # Utility functions (toBoolean)
├── test/
│   ├── cluster-test.js                  # Unit tests for clustering
│   ├── v030-features-test.js            # Tests for EU v0.3.0 features (assignToCluster, silhouetteScore, HDBSCAN, Float32Array)
│   ├── manifest-test.js                 # Unit tests for manifest module
│   ├── incremental-integration-test.js  # Integration test for incremental gen
│   └── incremental-edge-cases-test.js   # Edge case tests for incremental mode
├── generate.js                          # Embedding generator
├── run-demo.js                          # Interactive analysis demo
└── labels-config.js                     # Topic definitions
```
Change the model settings in `.env` to use different embedding models and configurations:

```bash
# Model and precision
ONNX_EMBEDDING_MODEL="Xenova/all-MiniLM-L12-v2"
ONNX_EMBEDDING_MODEL_PRECISION=fp32

# Available models and their configurations:
# | Model                                        | Precision      | Size                   | Requires Prefix | Data Prefix     | Search Prefix |
# | -------------------------------------------- | -------------- | ---------------------- | --------------- | --------------- | ------------- |
# | Xenova/all-MiniLM-L6-v2                      | fp32, fp16, q8 | 90 MB, 45 MB, 23 MB    | false           | null            | null          |
# | Xenova/all-MiniLM-L12-v2                     | fp32, fp16, q8 | 133 MB, 67 MB, 34 MB   | false           | null            | null          |
# | Xenova/paraphrase-multilingual-MiniLM-L12-v2 | fp32, fp16, q8 | 470 MB, 235 MB, 118 MB | false           | null            | null          |
# | nomic-ai/modernbert-embed-base               | fp32, fp16, q8 | 568 MB, 284 MB, 146 MB | true            | search_document | search_query  |
```

Configure clustering behavior in `.env`:
| Variable | Description | Default | Example |
|---|---|---|---|
| `ENABLE_CLUSTERING` | Enable or disable clustering functionality | `true` | `ENABLE_CLUSTERING=true` |
| `CLUSTERING_ALGORITHM` | Clustering algorithm (`default` or `hdbscan`) | `default` | `CLUSTERING_ALGORITHM=hdbscan` |
| `CLUSTERING_SIMILARITY_THRESHOLD` | Threshold for considering embeddings similar (0-1) | `0.9` | `CLUSTERING_SIMILARITY_THRESHOLD=0.85` |
| `CLUSTERING_MIN_CLUSTER_SIZE` | Minimum number of phrases per cluster | `5` | `CLUSTERING_MIN_CLUSTER_SIZE=3` |
| `CLUSTERING_MAX_CLUSTERS` | Maximum number of clusters per topic | `5` | `CLUSTERING_MAX_CLUSTERS=8` |
Example configuration:

```bash
# Clustering Configuration
ENABLE_CLUSTERING=true
CLUSTERING_ALGORITHM=default
CLUSTERING_SIMILARITY_THRESHOLD=0.9
CLUSTERING_MIN_CLUSTER_SIZE=5
CLUSTERING_MAX_CLUSTERS=5
```

- Change the thresholds defined in `labels-config.js` per topic to change the similarity score that triggers a match.
- Add more test messages to the `test-messages` directory to test against.
- Add more training data to `data/training_data.jsonl` to improve the topic embeddings.
Some models require specific prefixes to optimize their performance for different tasks. When a model has `Requires Prefix: true`, you must use the appropriate prefix:

- Data Prefix: Used when generating embeddings from training data
- Search Prefix: Used when generating embeddings for search/query text

For example, `nomic-ai/modernbert-embed-base` requires:

- `search_document` prefix for training data
- `search_query` prefix for search queries

Models with `Requires Prefix: false` will ignore any prefix settings.
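Applying a prefix can be sketched as below. Note the `"prefix: text"` format is an assumption based on how the nomic models document their prefixes; check `embedding-utils` for the actual behavior:

```javascript
// Prepend a task prefix to text before embedding, when the model requires one.
// The colon-space separator is an assumption, not confirmed project behavior.
function applyPrefix(text, prefix) {
  return prefix ? `${prefix}: ${text}` : text;
}
```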
The training data is a JSONL file. Each line is a JSON object with the following fields:

- `text`: The text to be analyzed
- `label`: The label of the topic

```jsonl
{"text": "amphibians, croaks, wetlands, camouflage, metamorphosis", "label": "frogs"}
{"text": "jumping, ponds, tadpoles, moist skin, diverse habitats", "label": "frogs"}
{"text": "waterfowl, quacking, ponds, waddling, migration", "label": "ducks"}
{"text": "feathers, webbed feet, lakes, nesting, foraging", "label": "ducks"}
{"text": "dabbling, flocks, wetlands, bills, swimming", "label": "ducks"}
```

The training data is used to generate the topic embeddings. The more training data you have, the better the topic embeddings will be.
The labels to be used when generating the topic embeddings are defined in labels-config.js.
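The shape below is illustrative only (the `threshold` field name and values are assumptions); see `labels-config.js` itself for the actual schema:

```javascript
// Hypothetical labels-config.js: each topic maps to the per-topic
// similarity threshold that triggers a match.
module.exports = {
  frogs: { threshold: 0.75 },
  ducks: { threshold: 0.75 },
};
```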
Two clustering algorithms are available, selectable via `--algorithm` or `CLUSTERING_ALGORITHM`:

The default algorithm groups similar embeddings based on cosine similarity:
- Calculate embeddings for all phrases in a topic
- Initialize the first cluster with the first embedding
- For each remaining embedding:
  - Calculate average similarity to each existing cluster
  - If similarity exceeds the threshold, add to the most similar cluster
  - If no cluster is similar enough and we haven't reached max clusters, create a new cluster
  - If we've reached max clusters, add to the most similar cluster regardless of threshold
- Process clusters that are smaller than the minimum size:
  - If the combined small clusters are still smaller than the minimum and we have valid clusters, distribute them to the most similar valid clusters
  - Otherwise, create a new "miscellaneous" cluster containing all small cluster items
- Calculate the average embedding for each final cluster
- Calculate a cohesion score for each cluster (average similarity between all embeddings and the centroid)
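The core loop above can be sketched as follows (a simplified illustration that omits the small-cluster handling; function names are not the project's):

```javascript
// Cosine similarity between two equal-length vectors.
function cosineSimilarity(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Greedy threshold-based clustering: join the most similar cluster if it
// clears the threshold (or max clusters is reached), else start a new one.
function clusterEmbeddings(embeddings, { threshold = 0.9, maxClusters = 5 } = {}) {
  const clusters = [[embeddings[0]]]; // seed with the first embedding
  for (const emb of embeddings.slice(1)) {
    const avgSims = clusters.map(
      (cluster) =>
        cluster.reduce((sum, e) => sum + cosineSimilarity(emb, e), 0) / cluster.length
    );
    const best = avgSims.indexOf(Math.max(...avgSims));
    if (avgSims[best] >= threshold || clusters.length >= maxClusters) {
      clusters[best].push(emb);
    } else {
      clusters.push([emb]);
    }
  }
  return clusters;
}
```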
HDBSCAN is a density-based clustering algorithm that automatically determines the number of clusters:

- Calculate embeddings for all phrases in a topic
- Run HDBSCAN with the configured `minClusterSize` parameter
- Reassign any noise points (not assigned to a cluster by HDBSCAN) to the nearest cluster using `assignToCluster()`
- Recompute centroids and cohesion scores after noise absorption
- If HDBSCAN finds no clusters, fall back to a single cluster containing all embeddings
HDBSCAN is more conservative about cluster formation and works best with larger datasets. Use `--algorithm hdbscan` to enable it.
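Conceptually, the noise-reassignment step is a nearest-centroid lookup. A hedged sketch (this is not the actual `assignToCluster()` API, which lives in `embedding-utils`):

```javascript
// Pick the index of the centroid closest to a noise point's embedding.
// Assumes unit-normalized embeddings, so cosine similarity = dot product.
function nearestCluster(embedding, centroids) {
  const dot = (a, b) => a.reduce((sum, v, i) => sum + v * b[i], 0);
  let best = 0;
  let bestSim = -Infinity;
  centroids.forEach((c, i) => {
    const sim = dot(embedding, c);
    if (sim > bestSim) {
      bestSim = sim;
      best = i;
    }
  });
  return best;
}
```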
- Cohesion Score (per-cluster): Measures how tightly grouped the embeddings are within a cluster. Calculated as the average cosine similarity between each embedding and the cluster's centroid. Higher values (closer to 1.0) indicate tighter clusters.
- Silhouette Score (global): Measures how well-separated clusters are from each other. Ranges from -1 to +1, where higher values indicate better-defined clusters. Returns 0 for single-cluster topics.

Both metrics are saved to the cluster JSON files and displayed during generation.
These scores are useful for:
- Evaluating the quality of clusters
- Comparing clustering algorithms and configurations
- Identifying topics that might benefit from more training data
- Understanding why certain matches might be less reliable than others
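The cohesion calculation described above can be sketched as follows (illustrative helpers, not the project's code):

```javascript
// Cosine similarity between two equal-length vectors.
function cosineSimilarity(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Cohesion: average cosine similarity between each embedding in a
// cluster and the cluster's centroid (element-wise mean).
function cohesionScore(cluster) {
  const dim = cluster[0].length;
  const centroid = new Array(dim).fill(0);
  for (const e of cluster) {
    for (let i = 0; i < dim; i++) centroid[i] += e[i] / cluster.length;
  }
  const total = cluster.reduce((sum, e) => sum + cosineSimilarity(e, centroid), 0);
  return total / cluster.length;
}
```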

