Version: 1.1.0
Tips and best practices for using ZON with Large Language Models.
LLMs learn better from examples than from rules:
❌ Bad:

```python
prompt = "Respond in ZON format"
```

✅ Good:

```python
prompt = """
Respond in ZON format. Example:
users:@(2):id,name,role
1,Alice,admin
2,Bob,user
Now list 3 products:
"""
```
Validate LLM outputs to catch errors:

```python
from zon import validate, zon

schema = zon.object({
    'name': zon.string(),
    'age': zon.number(),
    'role': zon.enum(['admin', 'user'])
})

result = validate(llm_output, schema)
if not result.success:
    # Retry or provide feedback
    print(f"Validation failed: {result.error}")
LLMs often return stringified values:

```python
from zon import decode

# Enable type coercion
data = decode(llm_output, enable_type_coercion=True)
ZON saves 30-50% tokens vs JSON:

```python
from zon import encode, count_tokens
import json

# Compare token counts
json_str = json.dumps(data, indent=2)
zon_str = encode(data)
print(f"JSON: {count_tokens(json_str)} tokens")
print(f"ZON: {count_tokens(zon_str)} tokens")
Feed decode errors back to the LLM so it can self-correct:

```python
from zon import decode, ZonDecodeError

try:
    data = decode(llm_output)
except ZonDecodeError as e:
    # Feed error back to LLM for self-correction
    error_msg = f"Invalid ZON: {e.message}. Please fix."
    llm_output = retry_with_feedback(error_msg)
```

For streaming responses, decode incrementally as chunks arrive:

```python
from zon import ZonStreamDecoder

decoder = ZonStreamDecoder()
for chunk in llm_stream():
    objects = decoder.feed(chunk)
    for obj in objects:
        process(obj)
```
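Under the hood, a stream decoder only needs to buffer input until a complete row arrives, since each ZON record occupies one line. A minimal line-oriented sketch (plain Python; `feed` mimics, but is not, the ZonStreamDecoder API):

```python
class LineBuffer:
    """Accumulate chunks and emit only completed lines (rows).
    Partial trailing data is held until the next feed() call."""
    def __init__(self):
        self._buf = ""

    def feed(self, chunk):
        self._buf += chunk
        # Everything before the last newline is complete; keep the remainder
        *complete, self._buf = self._buf.split("\n")
        return complete

buf = LineBuffer()
rows = []
for chunk in ["1,Ali", "ce,admin\n2,Bob", ",user\n"]:
    rows.extend(buf.feed(chunk))
```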
Place important fields first:

```python
from zon import encode

# Default (alphabetical): active,age,email,id,name,role
data = encode(users)

# Better: id,name,role,age,email,active
# (Prioritize retrieval fields)
```
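One simple way to control field order is to rebuild each record with the priority keys first before encoding. A sketch in plain Python (`reorder` is a hypothetical helper; whether `encode` itself accepts an ordering option is not shown in this doc):

```python
def reorder(record, priority):
    """Return a copy of record with priority fields first,
    and any remaining fields alphabetically after them."""
    rest = sorted(k for k in record if k not in priority)
    return {k: record[k] for k in list(priority) + rest if k in record}

user = {"active": True, "age": 30, "email": "a@x.io",
        "id": 1, "name": "Alice", "role": "admin"}
ordered = reorder(user, ["id", "name", "role"])
```

Since Python 3.7, dicts preserve insertion order, so the reordered dict serializes with its fields in this sequence.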