TXT-Blur Blur Blur · Math first text to image system on top of WFGY #67
onestardao started this conversation in Show and tell
Here is a full article you can drop into r/WFGY as a post about Blur Blur Blur.
Pick whatever title you like.
In the whole WFGY family, Blur Blur Blur is probably the quietest module.
I know it is not the one people talk about when they think of ProblemMap or TXT OS.
At the same time I believe it is also the module that is most underestimated.
If WFGY 1.0 and 2.0 are about reasoning, and TXT OS is about long term memory,
then Blur Blur Blur is what happens when you take the same tension logic and throw it directly into images.
This post is a proper introduction to what Blur Blur Blur is, how it works, and why DeltaS = 0.50 became our “middle way” tension point for images.
1. What problem is Blur trying to solve
Modern text to image systems are powerful.
You can type a single sentence and get an impressive result.
The pain starts when you want control.
Most people try to solve this with longer prompts or style tags.
In Blur Blur Blur I took a different route.
Blur is a math first image system that runs entirely in text.
You use it to set the geometry, the left and right density, and the global tension level.
Only after that does it produce the human prompt you send to your favorite engine.
2. High level architecture
Blur Blur Blur has three main layers that sit on top of the WFGY 2.0 core and TXT OS.
2.1 Skeleton layer
This is the geometric backbone of the frame.
You can choose from a set of named geometric constructs.
You do not need to know the full math to use them.
Just think of it as telling the system where the main lines and masses of the frame should sit.
The skeleton decides where mass and focus are allowed to live.
2.2 Imag stacks, left and right
Blur splits the frame into a left imag stack and a right imag stack.
Each side is a layered mix of textures, noise types, motion hints and atmosphere fields.
By changing the density split between left and right you control the visual tension line of the image.
This is the part that often decides whether a frame feels dead, comfortable, or charged.
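If it helps to see the idea in code, here is a minimal sketch of the density split. The function name `split_density` and the single-ratio model are my own simplification for illustration, not part of the Blur spec:

```python
def split_density(total: float, tension_ratio: float) -> tuple[float, float]:
    """Divide a total density budget between the left and right imag stacks.

    A ratio of 0.5 gives a symmetric, comfortable frame; pushing it
    toward 0 or 1 loads one side and charges the tension line.
    """
    if not 0.0 <= tension_ratio <= 1.0:
        raise ValueError("tension_ratio must be in [0, 1]")
    left = total * tension_ratio
    return left, total - left

# Symmetric frame vs. a left-heavy, charged frame:
print(split_density(1.0, 0.5))   # (0.5, 0.5)
print(split_density(1.0, 0.75))  # (0.75, 0.25)
```

The point is only that one scalar knob moves the whole left/right balance, which is what makes the tension line controllable rather than accidental.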
2.3 Tension, goldline and the WFGY engine
On top of skeleton and imag stacks sits the tension controller.
Blur defines a few key variables:
- tension_ratio, which splits density between left and right
- goldline, which is the main cut line in the frame, often at 0.50
- DeltaS, which is inherited from WFGY as a scalar measure of semantic or visual tension

By default Blur boots with:

- DeltaS = 0.50
- profile = SAFE
- goldline = 0.50

Around this base, the system can climb toward higher tension modes using recipes like wow x1000 and wow x1e18 for more extreme scenes.
Under all of this sits the WFGY 2.0 seven step reasoning engine and the Drunk Transformer formulas (WRI, WAI, WAY, WDT, WTF).
They act as guardrails so that even when you ask for very high tension, the prompt does not collapse into noise or lose the story.
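As a sketch, the boot state could be modeled like this. The `TensionState` class and the target DeltaS values inside the wow recipes are invented for illustration; only the default values come from the Blur docs:

```python
from dataclasses import dataclass

@dataclass
class TensionState:
    # Boot defaults from the Blur docs:
    # DeltaS = 0.50, profile = SAFE, goldline = 0.50.
    delta_s: float = 0.50
    goldline: float = 0.50
    tension_ratio: float = 0.50
    profile: str = "SAFE"

    def apply_recipe(self, name: str) -> None:
        # Recipes like "wow x1000" climb toward higher tension modes.
        # The target DeltaS values below are illustrative placeholders.
        recipes = {"wow x1000": 0.72, "wow x1e18": 0.93}
        if name in recipes:
            self.delta_s = recipes[name]
            self.profile = "HIGH_TENSION"

state = TensionState()          # boots at the SAFE middle-way point
state.apply_recipe("wow x1000")  # climb toward a higher tension mode
```

Everything else in Blur reads from this one state, which is why changing a single number can reshape the whole frame.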
3. Why DeltaS = 0.50 matters
In the larger WFGY framework, DeltaS originally measures the gap between the current internal state and the intended goal.
For images we reuse it as a control knob for visual stress.
After a lot of tests I discovered a pattern: around DeltaS = 0.50, frames consistently hold strong structure without going flat or tipping into chaos.
So in Blur Blur Blur we treat DeltaS = 0.50 as a kind of “middle way tension point”.
It is not a universal constant.
It is an empirical working point that matches what many people informally describe as “balanced but alive”.
If you like analogies, you can think of it as close to the idea of zhong yong in Chinese, but for images.
Not the boring middle, but rather a controlled point where opposing forces are strong yet held together.
You are free to change it.
In fact I encourage you to push DeltaS higher or lower and see how the scene geometry and emotional feel respond.
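One way to run that experiment is a simple sweep. The band edges below are my own illustrative guesses; only the idea that 0.50 sits at the balanced-but-alive point comes from the text:

```python
def describe_tension(delta_s: float) -> str:
    """Map a DeltaS value to a rough qualitative feel (illustrative bands)."""
    if delta_s < 0.30:
        return "flat / dead"
    if delta_s < 0.65:
        return "balanced but alive"
    return "charged / extreme"

# Sweep the knob and watch the qualitative feel shift:
for d in (0.10, 0.50, 0.90):
    print(d, describe_tension(d))
```

A real sweep would of course render the frames and judge them by eye; the sketch only shows the shape of the experiment.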
4. What a typical Blur session looks like
Although the internals use math and tension fields, the user interface is just text.
Blur defines a strict preview and render contract.
Step 1: preview
When you call preview, the engine does two things.
It outputs a structured block that shows the current math state of the frame.
At the same time it produces a [HUMAN PROMPT].
This is a natural language description that any common text to image engine can understand.
You can compare different previews, tune the math, and only then decide which one deserves a real render.
Step 2: go
When you type go, Blur sends exactly that preview configuration into the engine.
This separation of preview and go is important.
It turns prompting from trial and error into something closer to a reproducible protocol.
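The contract can be sketched as a tiny state machine. Class and method names here are mine, not from the Lite TXT file:

```python
class BlurSession:
    """Minimal sketch of the preview/go contract."""

    def __init__(self) -> None:
        self._pending = None  # last previewed configuration

    def preview(self, config: dict) -> str:
        # Freeze the math state, then emit the human prompt for it.
        self._pending = dict(config)
        return (f"[HUMAN PROMPT] skeleton={config['skeleton']}, "
                f"DeltaS={config['delta_s']}")

    def go(self) -> dict:
        # Render exactly the previewed configuration, never a fresh roll.
        if self._pending is None:
            raise RuntimeError("call preview before go")
        return self._pending

session = BlurSession()
session.preview({"skeleton": "spiral", "delta_s": 0.50})
config = session.go()  # identical to what was previewed
```

What makes this a protocol rather than trial and error is the freeze: go can only ever render a configuration you have already inspected.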
5. Tracks and recipes
Blur ships with a few default tracks and example recipes.
life track
Everyday scenes where you still want strong composition.
Example: a simple corner street with a cat on a neon sign, controlled by edge tension rather than random clutter.
pro track
Narrative heavy scenes such as “sixteen philosophers arguing in a gothic cathedral about free will and machines”.
elite track
Very high tension scenes, often at cosmic or abstract scales, for example “a city floating above a storm while a hidden geometry of E8 shines in the clouds”.
Each recipe is basically a named combination of skeleton, DeltaS, density split, and style hints.
The goal is not to lock you into presets.
It is to give you a stable baseline that you can fork and extend.
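Concretely, a recipe table plus a fork helper might look like this. All concrete values are invented placeholders; only the track names and the field list come from the text:

```python
# Each recipe: a named bundle of skeleton, DeltaS, density split, style hints.
RECIPES = {
    "life":  {"skeleton": "corner",    "delta_s": 0.50, "tension_ratio": 0.55,
              "style": "everyday street, neon accents"},
    "pro":   {"skeleton": "cathedral", "delta_s": 0.62, "tension_ratio": 0.60,
              "style": "narrative heavy, gothic interior"},
    "elite": {"skeleton": "e8_field",  "delta_s": 0.85, "tension_ratio": 0.70,
              "style": "cosmic scale, hidden geometry"},
}

def fork(base: str, **overrides) -> dict:
    """Start from a named track and override any field you like."""
    recipe = dict(RECIPES[base])  # copy, so the baseline stays intact
    recipe.update(overrides)
    return recipe

custom = fork("life", delta_s=0.58)  # the life baseline, slightly more charged
```

Forking a copy rather than editing the table is what keeps the presets stable baselines instead of lock-in.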
6. How this connects to the rest of WFGY
Blur Blur Blur sits on the same backbone as the other WFGY tools.
In other words, this is not a random prompt kit.
It is the visual branch of the same tension universe that already powers the reasoning side.
If you are already using TXT OS or Blah Blah Blah, you can think of Blur as the way to project that internal structure into pictures.
7. How to try Blur Blur Blur Lite
The Lite version is already public and lives in the main WFGY repository.
Go to the Blur Blur Blur page:
https://github.com/onestardao/WFGY/blob/main/OS/BlurBlurBlur/README.md
Download or open the TXT-BlurBlurBlur_Lite_Beta.txt file.
Open your favorite text to image friendly LLM or interface.
Paste the full content of the TXT file as the initial system or user prompt.
Follow the instructions inside:

- preview to inspect the math and the human prompt
- go to send it to the engine you want to test

You can use this with SD based UIs, Midjourney style bots, DALL·E type APIs, or any custom stack, as long as you can copy the human prompt into the system.
Everything is MIT licensed, same as the rest of WFGY.
You are welcome to fork it, rewrite the skeletons, add your own tension profiles or even rip out only the parts you like.
If you build something on top, or find a surprising behavior at extreme DeltaS values, feel free to share it here in r/WFGY.
Blur Blur Blur might be the quiet child of the family right now, but I believe it will become one of the most interesting ones once people start to seriously push it.