Adds a context menu item to translation files to automatically translate them into other languages using Google Translate, AWS Translate, Azure Translate, DeepL, OpenAI, or Hugging Face (cloud & local, including local AI via Ollama).
- JSON - Standard JSON translation files
- XML - Android strings.xml, iOS plist, generic XML
- YAML - YAML/YML configuration files
- ARB - Flutter Application Resource Bundle
- PO/POT - GNU Gettext translation files
- XLIFF - XML Localization Interchange File Format
- XMB/XTB - XML Message Bundle (Angular)
- Properties - Java/Spring properties files
- CSV/TSV - Comma/Tab-separated values
When localizing an application, if you have a folder named something like translations, languages, or i18n that contains a translation file for each language, you can use this extension to right-click your primary language file and automatically create the other translations. It calls the Google/AWS/Azure/DeepL/OpenAI/Hugging Face translation APIs, and you must have your own API key to make the calls.
Just create empty files with the locale names as filenames and this extension will generate their translations. For example, if you want French, create a file `fr.json`. Right-click `en.json`, pick "Auto Translate", and voilà, you have a version in French.
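For example, a minimal sketch of that setup from the command line (folder and file names are hypothetical):

```shell
# Create an i18n folder with one source file and empty target files.
mkdir -p i18n
printf '{ "greeting": "Hello" }' > i18n/en.json   # source language file
touch i18n/fr.json i18n/es.json i18n/de.json      # empty targets: French, Spanish, German
```

Right-clicking `i18n/en.json` and picking "Auto Translate" would then fill in the three empty files.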
- Multi-format support - Works with JSON, XML, YAML, ARB, PO, XLIFF, Properties, CSV, and more
- Format auto-detection - Automatically detects file format based on extension and content
- Option to keep existing translations, to cut down on data processing when adding new terms
- Option to keep extra translations, if one language has additional unique terms
- Supports nested elements in JSON, XML, and YAML
- Supports named arguments such as: "Zip code {zip} is in {city}, {state}."
- Processes all files simultaneously
- Hugging Face integration - Cloud API with multiple providers or local ONNX runtime for private, offline translation
- Local AI support - Use Ollama or other OpenAI-compatible local servers
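As a sketch of the nested-key and named-argument support, here is a hypothetical `en.json` source and the `fr.json` the extension might generate (the French wording is illustrative):

```json
// en.json (source) — hypothetical example
{
  "address": {
    "summary": "Zip code {zip} is in {city}, {state}."
  }
}

// fr.json (generated) — nesting and {placeholders} are preserved
{
  "address": {
    "summary": "Le code postal {zip} se trouve à {city}, {state}."
  }
}
```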
- Google: https://cloud.google.com/translate/docs/languages
- AWS: https://docs.aws.amazon.com/translate/latest/dg/what-is.html#what-is-languages
- Azure: https://docs.microsoft.com/en-us/azure/cognitive-services/translator/language-support
- DeepL: https://www.deepl.com/docs-api/other-functions/listing-supported-languages/
- OpenAI: https://platform.openai.com/
- Hugging Face: https://huggingface.co/models?pipeline_tag=translation (supports many languages via different models)
Since translation services are not free, you must provide your own API key.
For Google, you only need to provide an API key. Luckily, Google gives a decent number of translations during the trial period. Go here to set up your account and request a key: https://console.developers.google.com/apis/library/translate.googleapis.com

For AWS, you need to provide your access key, secret key, and region. Go here to set up your account and request them: https://aws.amazon.com/premiumsupport/knowledge-center/create-and-activate-aws-account/

For Azure, you need to provide your subscription key and region. Go here to set up your account and request a subscription key: https://azure.microsoft.com/en-us/free/

For DeepL, you need to provide your authentication API key. Go here to set up your account and request an API key: https://www.deepl.com/pro-api?cta=header-pro-api/

For OpenAI, you only need to provide an API key. Go here to set up your account and request a key: https://platform.openai.com/

For the Hugging Face cloud API, you need to provide an API key and a model name. Go here to set up your account and request a key: https://huggingface.co/settings/tokens

For Hugging Face local mode, you only need to provide a model name (no API key required). The model runs locally using transformers.js.
1. Request a Google/AWS/Azure/DeepL/OpenAI/Hugging Face Translate API key (or use Hugging Face local mode, which needs no key)
2. Install this extension
3. Go to VSCode Settings > Extensions > Auto Translate JSON
4. Enter your Google/AWS/Azure/DeepL/OpenAI/Hugging Face API key / access key / subscription key / region / model (or, for Hugging Face local mode, the model name only)
5. (optional) Change the `Source Locale` setting if you don't want English
6. (optional) Set the `Format` setting, or leave it as "auto" for automatic detection
7. Create an empty file for each locale you want to translate into. The locale should correspond to the language code used by the translation service. For example, if you want French, create a file `fr.json` (or `fr.xml`, `fr.yaml`, etc.).
   - If you use Azure and want to translate into Serbian, create a file `sr-Cyrl.json` for the Serbian Cyrillic translation or `sr-Latn.json` for the Serbian Latin translation.
   - If you use AWS or Google and want to translate into Serbian, create a file `sr.json`.
8. Right-click the source translation file (`en.json`, `en.xml`, `en.yaml`, etc.) and pick "Auto Translate"
9. At the prompt, decide whether to preserve previously translated values (i.e. not reprocess them)
10. At the prompt, decide whether to keep extra translations
11. Verify your language files have been updated
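As a minimal sketch, the user or workspace `settings.json` for a Google-based setup might look like this (the key value is a placeholder):

```json
{
  "auto-translate-json.googleApiKey": "YOUR_GOOGLE_API_KEY",
  "auto-translate-json.sourceLocale": "en",
  "auto-translate-json.format": "auto"
}
```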
This extension contributes the following settings (Menu > Preferences > Settings):

- `auto-translate-json.sourceLocale`: A failsafe to prevent processing the wrong file. Defaults to "en" for English. You can change this to any valid two-letter locale code you wish to use.
- `auto-translate-json.mode`: "file" for files in the same folder, like "en.json"; "folder" for files in subfolders, like "en/translation.json"
- `auto-translate-json.format`: File format for translation. Use "auto" for automatic detection based on file extension. Supported formats: json, xml, android-xml, ios-xml, yaml, arb, po, pot, xliff, xmb, xtb, properties, csv, tsv
- `auto-translate-json.startDelimiter`: Start delimiter for named arguments. Defaults to "{". Use "{{" for ngx-translate or transloco.
- `auto-translate-json.endDelimiter`: End delimiter for named arguments. Defaults to "}". Use "}}" for ngx-translate or transloco.
- `auto-translate-json.ignorePrefix`: Translation keys that start with this prefix will be ignored (e.g., "@@" for ARB metadata)
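For instance, if your app uses ngx-translate- or transloco-style placeholders like `{{name}}`, the delimiter settings might be configured as follows (a sketch):

```json
{
  "auto-translate-json.startDelimiter": "{{",
  "auto-translate-json.endDelimiter": "}}"
}
```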
- `auto-translate-json.googleApiKey`: Enter your Google API key in this setting.
- `auto-translate-json.awsAccessKeyId`: Enter your AWS access key Id in this setting.
- `auto-translate-json.awsSecretAccessKey`: Enter your AWS secret access key in this setting.
- `auto-translate-json.awsRegion`: Enter your AWS region in this setting.
- `auto-translate-json.azureSecurityKey`: Enter your Azure security key in this setting.
- `auto-translate-json.azureRegion`: Enter your Azure region in this setting.
- `auto-translate-json.deepLFreeSecurityKey`: Enter your DeepL Free security key in this setting.
- `auto-translate-json.deepLProSecurityKey`: Enter your DeepL Pro security key in this setting.
- `auto-translate-json.openAIKey`: Enter your OpenAI API key in this setting.
- `auto-translate-json.openAIBaseURL`: Base URL for the OpenAI API. Defaults to "https://api.openai.com/v1". Change for local servers.
- `auto-translate-json.openAIModel`: Model to use. Defaults to "gpt-4.1-mini" (recommended for translations).
- `auto-translate-json.openAIMaxTokens`: Maximum tokens per request. Defaults to 1000.
- `auto-translate-json.openAITemperature`: Temperature (0-2). Lower values produce more consistent translations. Defaults to 0.1.
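A hypothetical settings fragment for OpenAI, using the defaults noted above (the key value is a placeholder):

```json
{
  "auto-translate-json.openAIKey": "YOUR_OPENAI_API_KEY",
  "auto-translate-json.openAIModel": "gpt-4.1-mini",
  "auto-translate-json.openAIMaxTokens": 1000,
  "auto-translate-json.openAITemperature": 0.1
}
```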
- `auto-translate-json.huggingFaceApiKey`: Enter your Hugging Face API key in this setting.
- `auto-translate-json.huggingFaceModel`: Model to use (e.g., "Helsinki-NLP/opus-mt-en-fr" for English to French).
- `auto-translate-json.huggingFaceProvider`: Optional inference provider (e.g., "hf-inference", "together", "replicate", "auto"). Leave empty for the default ("auto" selects the first available provider). See the "Using Hugging Face Translation" section below for a detailed explanation.
- `auto-translate-json.huggingFaceLocalModel`: Model to use for local inference (e.g., "Xenova/opus-mt-en-fr"). Runs locally using transformers.js with the ONNX runtime.
This extension supports both Hugging Face cloud API and local on-device inference.
1. Get a Hugging Face API key:
   - Create a free account at huggingface.co
   - Generate an access token at huggingface.co/settings/tokens
2. Choose a translation model:
   - Browse available models at huggingface.co/models?pipeline_tag=translation
   - Popular models include:
     - `Helsinki-NLP/opus-mt-en-fr` (English → French)
     - `Helsinki-NLP/opus-mt-en-es` (English → Spanish)
     - `Helsinki-NLP/opus-mt-en-de` (English → German)
     - `facebook/m2m100_418M` (multilingual)
3. Configure extension settings:
   - `auto-translate-json.apiType`: Set to "HuggingFace"
   - `auto-translate-json.huggingFaceApiKey`: Your Hugging Face API key
   - `auto-translate-json.huggingFaceModel`: Model name (e.g., "Helsinki-NLP/opus-mt-en-fr")
   - `auto-translate-json.huggingFaceProvider`: Optional provider (e.g., "hf-inference", "together", "replicate", "auto"). Leave empty for the default.
About the Provider Parameter: The `huggingFaceProvider` setting is optional and specifies which inference provider to use:

- `"hf-inference"`: Hugging Face's official Inference API (default when available)
- `"together"`: Together AI's inference platform
- `"replicate"`: Replicate's hosted models
- `"auto"`: Automatically selects the first available provider (default if left empty)
You typically don't need to set this unless:
- You have a specific provider preference (e.g., using Together AI credits)
- You encounter issues with the default provider
- You want to explicitly use a particular inference backend
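Putting the cloud settings together, a sketch of a `settings.json` fragment (the token is a placeholder):

```json
{
  "auto-translate-json.apiType": "HuggingFace",
  "auto-translate-json.huggingFaceApiKey": "hf_YOUR_TOKEN",
  "auto-translate-json.huggingFaceModel": "Helsinki-NLP/opus-mt-en-fr",
  "auto-translate-json.huggingFaceProvider": "hf-inference"
}
```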
4. Usage notes:
   - The cloud API requires an internet connection
   - Translation quality depends on the selected model
   - Check the model documentation for supported language pairs
   - Some models support direct language pairs (en→fr); others are multilingual
1. Choose a local ONNX model:
   - Models must be compatible with the `@huggingface/transformers` JavaScript runtime
   - Popular local models use the "Xenova/" prefix:
     - `Xenova/opus-mt-en-fr` (English → French)
     - `Xenova/m2m100_418M` (multilingual)
   - Note: Helsinki-NLP models only work with the cloud API and will fail with a "Could not locate file: tokenizer.json" error when used locally.
2. Configure extension settings:
   - `auto-translate-json.apiType`: Set to "HuggingFaceLocal"
   - `auto-translate-json.huggingFaceLocalModel`: Model name (e.g., "Xenova/opus-mt-en-fr")
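A minimal local-mode settings sketch, assuming the `Xenova/opus-mt-en-fr` model mentioned above:

```json
{
  "auto-translate-json.apiType": "HuggingFaceLocal",
  "auto-translate-json.huggingFaceLocalModel": "Xenova/opus-mt-en-fr"
}
```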
3. First run behavior:
   - Model weights (~300MB) are downloaded and cached automatically
   - Subsequent runs use the cached model
   - No internet connection is required after the initial download
4. Performance considerations:
   - Local inference runs on your CPU (or GPU if available)
   - Translation speed depends on model size and hardware
   - Larger models provide better quality but are slower
   - Recommended for batch translations or privacy-sensitive projects
1. Install Ollama
2. Pull a recommended model: `ollama pull qwen2.5:14b`
3. Configure extension settings:
   - `auto-translate-json.openAIKey`: "ollama" (placeholder)
   - `auto-translate-json.openAIBaseURL`: "http://localhost:11434/v1"
   - `auto-translate-json.openAIModel`: "qwen2.5:14b"
   - `auto-translate-json.openAIMaxTokens`: 512
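As a `settings.json` fragment, the Ollama setup above might look like this (the key is only a placeholder, since Ollama ignores it):

```json
{
  "auto-translate-json.openAIKey": "ollama",
  "auto-translate-json.openAIBaseURL": "http://localhost:11434/v1",
  "auto-translate-json.openAIModel": "qwen2.5:14b",
  "auto-translate-json.openAIMaxTokens": 512
}
```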
This setup is compatible with any OpenAI-compatible server.
- Files must be named with the locale code, which may differ depending on the translation service you use. Please see the supported languages above.
- Keys starting with `@@` (like `@@locale`) are treated as metadata and not translated
- Set `ignorePrefix` to `@@` to skip metadata keys
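For example, in a minimal (hypothetical) ARB file:

```json
{
  "@@locale": "en",
  "greeting": "Hello {name}!"
}
```

With `ignorePrefix` set to `@@`, the `@@locale` key is left untouched while `greeting` is translated and the `{name}` placeholder is preserved.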
- Automatically detects the `<resources>` structure
- Preserves `translatable="false"` attributes
- Preserves headers and metadata
- Handles plural forms
- Supports XLIFF 1.2 and 2.0 formats
- Google: https://cloud.google.com/translate/pricing
- AWS: https://aws.amazon.com/translate/pricing/
- Azure: https://azure.microsoft.com/en-us/pricing/details/cognitive-services/translator/
- DeepL: https://www.deepl.com/pro?cta=header-prices
- OpenAI
- Hugging Face: https://huggingface.co/pricing (cloud API) | Local mode is free but requires local resources
Keep your keys safe!




