If you've been following along, you know I've been posting about my Print Order AI tool. For about 2.5 weeks now, I've been working on establishing flash rules from production data.
Flash placement is, so far, one of the hardest things to codify in screen printing automation. Unlike base-first or white-last rules, flash cures don't appear in separation files; they're inserted by rotation setters and/or operators on press. So how do you teach an AI where flashes go?
Two simple dominant patterns emerged first. Pattern A covers 100% cotton dark garments: Base → Flash → Colors → Black → White. The flash immediately after base cures the underbase so top colors lay down clean without lifting. Pattern B handles poly blends: Blocker → Flash → Base → Flash → Colors. The double-flash sequence, confirmed across 30+ validated jobs from operator notes, gives the blocker and LB poly base each a full cure before any color hits the garment.
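To make those patterns concrete, here's what they look like written as plain data. This is just a sketch of mine; the names (PATTERN_A, PATTERNS, the fabric keys) are illustrative, not from the actual tool:

```python
# The two dominant patterns from the validated jobs, as plain data.
PATTERN_A = ["base", "flash", "colors", "black", "white"]    # 100% cotton darks
PATTERN_B = ["blocker", "flash", "base", "flash", "colors"]  # poly blends

PATTERNS = {
    "cotton_dark": PATTERN_A,
    "poly_blend":  PATTERN_B,
}
```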
One separator described flash placement as "sectioning off" design regions, flashing after high-coverage layers to prevent smearing, then printing lighter blends on top. I think of it like slices of a pie, especially with water base.
The key insight: flash rules aren’t about position numbers. They’re about what just printed and what prints next. That conditional logic is what separates a lookup table from a real rules engine. It’s funny when an AI learns something we accept as common sense. Sometimes digging into common sense unearths massive complexity. How do you explain common sense?
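Here's a minimal sketch of that distinction, with all names hypothetical. A lookup table answers "where"; a transition rule answers "between what":

```python
# Position-based lookup: brittle, breaks as soon as the sequence changes length.
FLASH_POSITIONS = {"cotton_dark": [1]}  # "flash goes at slot 1" (0-based)

# Transition-based rule: flash depends on what just printed and what prints next.
def needs_flash(just_printed: str, prints_next: str) -> bool:
    if just_printed == "base" and prints_next in {"color", "black"}:
        return True   # cure the underbase before top colors lay down on it
    if just_printed == "blocker" and prints_next == "base":
        return True   # cure the blocker before the poly base prints over it
    return False
```

The second form survives sequence changes that would silently break the first.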
Why Flash Is Hard to Teach an AI
The hallucination problem. When we hit 90.3% overall accuracy on print order prediction (small sample), the gap was almost entirely phantom flashes: the AI inserting flash cures that the rules engine never called for. The engine knows the right sequence, but the AI added flashes "just to be safe," mimicking patterns it had seen without understanding the underlying logic.
The rules engine itself doesn't hallucinate; it produces deterministic output based on coded rules. The phantom flash problem came from the AI layer that sits on top, interpreting the engine's output and sometimes "helpfully" inserting extra flashes into the final recommendation.
The rules engine outputs a structured sequence, and the AI’s job is to explain and present that sequence, not modify it. Flashes are only inserted by the engine when specific trigger conditions are met (base layer needs cure, blocker-to-base transition, high-coverage color before white highlight).
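Here's a hedged sketch of how those triggers might drive flash insertion. The Layer type, the 50% coverage threshold, and the function name are my assumptions, not the tool's actual internals:

```python
from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    kind: str        # "blocker", "base", "color", or "white"
    coverage: float  # fraction of the design this layer covers

def insert_flashes(layers: list[Layer]) -> list[str]:
    """Build the print sequence, adding a flash only at trigger points."""
    if not layers:
        return []
    sequence = []
    for prev, nxt in zip(layers, layers[1:]):
        sequence.append(prev.name)
        if prev.kind == "base":
            # Trigger 1: the base layer needs a cure before anything prints on it.
            sequence.append("flash")
        elif prev.kind == "blocker" and nxt.kind == "base":
            # Trigger 2: blocker-to-base transition on poly blends.
            sequence.append("flash")
        elif prev.kind == "color" and prev.coverage > 0.5 and nxt.kind == "white":
            # Trigger 3: high-coverage color before a white highlight.
            sequence.append("flash")
    sequence.append(layers[-1].name)
    return sequence
```

Pattern B's double flash falls out of triggers 1 and 2 on its own; no position numbers appear anywhere.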
The engine was already right; we just had to stop the AI from "improving" it.
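The fix can be as simple as a guard between the two layers. A sketch (the function name is mine): treat the engine's sequence as frozen, and reject any AI output that doesn't match it step for step:

```python
def check_no_phantom_flashes(engine_seq: list[str], ai_seq: list[str]) -> None:
    """Fail loudly if the AI layer changed the engine's deterministic sequence."""
    if ai_seq == engine_seq:
        return
    extra = ai_seq.count("flash") - engine_seq.count("flash")
    if extra > 0:
        raise ValueError(f"phantom flash: AI added {extra} flash(es) the engine never called for")
    raise ValueError("AI modified the engine's sequence")
```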
So why include the AI?
Because the rules engine can determine the print sequence, but it can't explain why to an operator or artist, adapt its recommendation when an operator pushes back with context it doesn't have, or handle the edge cases that don't fit neatly into coded rules.
The engine says: Base → Flash → Colors → Black → White.
The AI says: "Base prints first because this is a 60/40 poly blend on black; the LB poly white needs a flash before the top colors, since LB white protects against dye migration. I put the flash after the base so the top colors lay down on a cured underbase. The grays are ordered dark to light because they're low-coverage solids that won't lift, and Spot White is last as a highlight at 9% coverage."
The engine is the authority on what. The AI is the interface for why. That's also what makes it a training tool: new separators don't just get a sequence, they get the reasoning behind it. The AI becomes your helpful, knowledgeable assistant.