Engineering Playbooks

Token-Efficient Prompts with TOON (Token-Oriented Object Notation)

How Token-Oriented Object Notation shrinks prompt payloads, improves LLM accuracy, and fits directly into vibe coding workflows.

By Rev.AISomething · Published November 15, 2025


TL;DR

Token-Oriented Object Notation (TOON) is an open-source serialization format built for LLM inputs. It keeps JSON semantics but removes braces, repeated keys, and excess punctuation—cutting token usage by roughly 30–60 percent on uniform datasets. For vibe coders orchestrating multi-agent flows, TOON is an easy win: install the CLI, convert JSON payloads before prompting, and reclaim both money and context window budget without lossy hacks.


Why Vibe Coders Care About Tokens Again

Larger context windows lulled teams into shoving raw JSON into prompts, but every extra kilobyte still shows up on the cloud invoice. More importantly, bloated inputs dilute model focus: your spec competes with analytics logs, marketing copy, and the inevitable “quick debug note.” Vibe coding thrives on intent-rich prompts, and that means feeding models data that is both precise and cheap.

The TOON benchmark compared multiple formats across 5,016 LLM calls. On mixed-structure datasets it delivered 39.6% fewer tokens than pretty JSON while improving answer accuracy from 69.7% to 73.9%. In other words, you get more headroom and better comprehension at the same time.


Meet TOON in 90 Seconds

TOON keeps YAML-like indentation for nesting, CSV-style rows for uniform arrays, and optional metadata ([N]{fields}) so agents can validate structure. Here’s the canonical “users” example:

{
  "users": [
    { "id": 1, "name": "Alice", "plan": "pro" },
    { "id": 2, "name": "Bob", "plan": "starter" }
  ]
}

Becomes:

users[2]{id,name,plan}:
  1,Alice,pro
  2,Bob,starter

The format is still lossless: decoding back to JSON reproduces the same structure, but you transmit the field list and record count once instead of once per object.
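
If you want to verify that round trip yourself, here is a minimal TypeScript sketch. It assumes the reference library is published as @toon-format/toon with encode and decode exports; check the project README if the package name or API differs.

import { encode, decode } from '@toon-format/toon'

const payload = {
  users: [
    { id: 1, name: 'Alice', plan: 'pro' },
    { id: 2, name: 'Bob', plan: 'starter' },
  ],
}

// encode() emits the tabular form shown above; decode() reverses it.
const toon = encode(payload)
const roundTripped = decode(toon)

// Lossless: the decoded value matches the original input.
console.assert(JSON.stringify(roundTripped) === JSON.stringify(payload))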


Installing the CLI (and Keeping It Disposable)

The reference implementation ships as both a TypeScript library and CLI. For prompts we only need the latter:

pnpm dlx @toon-format/cli data.json -o data.toon
# or keep it fully transient
cat data.json | npx @toon-format/cli --stats > data.toon

Useful flags:

  • `--stats`: prints token estimates and savings versus JSON. Useful for dashboards and FinOps guardrails.
  • `--delimiter "\t"`: switches tabular rows to tab-separated values. Best for datasets with long strings where commas add noise.
  • `--key-folding safe`: collapses single-key wrapper chains into dotted paths. Best for deeply nested analytics blobs that would otherwise indent forever.
  • `--no-strict`: skips validation during decode. Fine for rapid prototyping when you trust upstream data.

Drop-In Workflow for Vibe Coding Pipelines

  1. Keep JSON as the source of truth. Your backend, Supabase exports, or analytics scripts should still speak JSON because every tool understands it.
  2. Add a conversion step before invoking agents. Pipe JSON through @toon-format/cli with the delimiter and key-folding settings that match your data shape.
  3. Track savings. Store the --stats output alongside prompt logs so FinOps knows which flows deserve more optimization work.
  4. Teach your prompt template to expect TOON. Models benefit the most when you explicitly mention the format, e.g. “You will receive TOON-encoded data with [N]{fields} metadata.”
  5. Decode responses server-side if necessary. Route handlers can call the TypeScript API to turn TOON back into JSON before persisting output (see the sketch below).

Because conversion is stateless, you can wedge it into any orchestrator—LangGraph, Windmill, Trigger.dev, or a humble shell script.
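
For step 5, a sketch of the decode side, assuming the same @toon-format/toon package; the handler shape is illustrative rather than tied to any framework.

import { decode } from '@toon-format/toon'

// Hypothetical route handler: an agent replies in TOON, we persist JSON.
export function handleAgentResponse(toonText: string): string {
  let data: unknown
  try {
    data = decode(toonText)
  } catch (err) {
    // Malformed TOON (truncated rows, bad headers) should fail loudly.
    throw new Error(`Agent returned invalid TOON: ${(err as Error).message}`)
  }
  // Downstream services keep speaking JSON.
  return JSON.stringify(data)
}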


Prompt Template Example

You are reviewing customer feedback. Data is provided in TOON format.
Each block declares total rows with [N]{fields}.
Validate that answers respect that structure.

{{feedback_dataset_toon}}

Question: {{question}}
Answer using only values from the dataset.

Notice the explicit instruction about [N]{fields}: LLMs use it to detect truncated arrays or missing fields, something plain CSV cannot express.
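
You can run the same structural check deterministically before the prompt ships. A rough TypeScript sketch, covering only the flat comma-delimited case (the full TOON grammar also allows tab delimiters and nested blocks):

// Matches headers like "users[2]{id,name,plan}:" (simplified grammar).
const HEADER = /^(\w+)\[(\d+)\]\{[^}]+\}:$/

export function checkDeclaredCounts(toon: string): string[] {
  const lines = toon.split('\n')
  const problems: string[] = []
  for (let i = 0; i < lines.length; i++) {
    const match = lines[i].trim().match(HEADER)
    if (!match) continue
    // Count the indented data rows directly below the header.
    let rows = 0
    while (i + 1 + rows < lines.length && /^\s+\S/.test(lines[i + 1 + rows])) {
      rows++
    }
    if (rows !== Number(match[2])) {
      problems.push(`${match[1]}: declared ${match[2]} rows, found ${rows}`)
    }
  }
  return problems
}

An empty result means every declared count matched; anything else is worth rejecting before you spend tokens on it.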


Guardrails: When TOON Shines vs When to Fall Back

  • Uniform arrays (CRM exports, product catalogs): ✅ maximum savings; you declare fields once.
  • Mixed nested data with predictable wrappers: ✅ combine with --key-folding safe to reduce indentation.
  • Deeply nested configs with little repetition: ⚠️ compact JSON can be slightly smaller; benchmark first.
  • Pure tabular reports already in CSV: ⚠️ CSV is marginally smaller but lacks structural metadata.
  • Latency-critical flows on quantized local models: ⚠️ measure time to first token; some runtimes parse minified JSON faster.

The general rule: if you can describe your payload as “rows that share fields,” TOON likely wins. If every object is bespoke, stick with JSON.
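
Benchmarking does not need to be elaborate. A quick sketch comparing rough token footprints; the four-characters-per-token heuristic is a stand-in, so swap in your model's actual tokenizer before trusting the numbers.

import { encode } from '@toon-format/toon'

// Crude proxy: roughly 4 characters per token for English-heavy payloads.
const approxTokens = (s: string): number => Math.ceil(s.length / 4)

export function compareFormats(data: object) {
  return {
    prettyJson: approxTokens(JSON.stringify(data, null, 2)),
    compactJson: approxTokens(JSON.stringify(data)),
    toon: approxTokens(encode(data)),
  }
}

If compactJson comes back smaller than toon for your payload, that is your answer: stay with JSON for that flow.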


Example Automation Script

#!/usr/bin/env bash
set -euo pipefail

# Usage: scripts/json-to-toon.sh input.json [output.toon]
if [[ $# -lt 1 ]]; then
  echo "usage: $0 input.json [output.toon]" >&2
  exit 1
fi

INPUT_JSON=$1
OUTPUT_TOON=${2:-/tmp/payload.toon}

# tee writes the TOON payload to the output file while echoing it
# to stdout so CI logs capture it too.
npx @toon-format/cli "$INPUT_JSON" --delimiter "\t" --key-folding safe --stats \
  | tee "$OUTPUT_TOON"

echo "Converted $(basename "$INPUT_JSON") -> $OUTPUT_TOON"

Drop this into your repo as scripts/json-to-toon.sh and point orchestrated prompts at the generated .toon file. The tee keeps the converted payload visible in CI logs, alongside whatever --stats reports, for auditing.


Integration Ideas for Vibe Coders

  • Agent memory snapshots: Serialize session summaries to TOON before feeding them into chained agents. Shorter payloads mean more iterations before you hit the window limit.
  • Spec + dataset bundles: Merge design briefs with TOON tables so models can answer “build this UI, here is the dataset” without juggling multiple attachments.
  • Prompt diffing: Commit both JSON and TOON versions into your repo; review diffs on the TOON file to ensure fields stay in sync.
  • FinOps gating: Reject pull requests that increase prompt token cost unless the author includes a justification. The CLI’s stats output gives you hard numbers; a minimal budget-gate sketch follows this list.
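
Here is that budget gate as a short TypeScript script you could run in CI. The token heuristic and budget value are placeholders; wire in the CLI's --stats output or your model's tokenizer for hard numbers.

import { readFileSync } from 'node:fs'

// Placeholder budget; tune per prompt flow.
const BUDGET_TOKENS = 8_000

const path = process.argv[2]
if (!path) {
  console.error('usage: node check-prompt-budget.js <payload.toon>')
  process.exit(2)
}

// Crude ~4 chars/token estimate, same caveat as earlier sketches.
const tokens = Math.ceil(readFileSync(path, 'utf8').length / 4)

if (tokens > BUDGET_TOKENS) {
  console.error(`~${tokens} tokens exceeds budget of ${BUDGET_TOKENS}; add a justification to the PR`)
  process.exit(1)
}
console.log(`~${tokens} tokens, within budget`)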

Final Take

Vibe coding is about directing intent, not fighting serialization formats. TOON gives you a deterministic bridge between code and prompts: you keep JSON for APIs, drop a single conversion step for LLM calls, and instantly claw back context budget. Install the CLI, bake it into your agent pipelines, and stop paying extra tokens just to repeat field names the model already knows how to read.

Vibe Coding · Token Optimization · Data Tooling · AI Engineering
