← Blog

The Edge You Can Feel But Cannot See: A Walking-Around Methodology for Finding Hidden Thresholds, and an Invitation to a Tutorial

There is a particular kind of edge in the world that is real and consequential and almost completely invisible to the methods most people use to navigate. Once you start noticing this kind of edge, you see it everywhere.

It is not a wall. Walls you bump into. It is not a slope, where a little more effort gets you a little further up the hill. It is a switching point, where for a long time small changes do not seem to matter, and then at some specific moment small changes start mattering enormously. On one side of the switch the system behaves one way. On the other side it behaves a fundamentally different way. The transition between the two is sharp.

You have felt this edge, almost certainly without naming it. The runner who can hold a particular pace for an hour, but cannot hold a pace fifteen seconds per kilometer faster for more than half an hour, has felt it. The team that worked together pleasantly and then, after one too many missed deadlines, suddenly stopped trusting each other, has felt it. The reader who once struggled through a foreign-language book with constant dictionary-checks and one day, without quite noticing when, started just reading, has felt it. The relationship that absorbed grievance after grievance and then, after one more grievance that did not seem larger than any of the others, broke, has felt it. The chatbot conversation that started sharp and somewhere along the way turned hollow and agreeable, has felt it.

These are different stories, but they share a structural feature. There is a quantity that matters, the quantity moves, and somewhere along its movement the system flips. Not gradually. Not predictably from the inside. There is a line, and from one side of the line the world looks one way, and from the other side it looks different.

The line itself is hard to see. You can usually only feel where it was after you have crossed it. By that point the things on the wrong side of the line have already happened.

This essay is about a methodology for finding the line before you cross it. It is the practical core of a longer body of work, and it is taught in two tutorials I have written on the site (both linked at the end). The point of this essay is not to teach the methodology in detail. The point is to entice you toward the tutorials by explaining what the methodology is, why it works, and what you might do with it. If the explanation here lands, the tutorials are where you go to learn the moves.

I will use the running threshold as the running example, because it is the clearest. The same methodology applies to many other subjects. The second tutorial applies it specifically to your conversations with AI assistants, which is probably the version most readers will end up using daily.

What the methodology is, in three sentences

The methodology has seven steps, numbered zero through six: the first four set up an apparatus you will use to map an invisible edge. The next two produce empirical evidence by working with that apparatus over time and compress what the evidence shows. The last one is an audit that protects you from fooling yourself about what you have actually learned.

The output of the methodology is a seed: a short text, usually a paragraph or two, that compresses what you discovered into a portable form. You can give the seed to a friend. You can paste the seed into a fresh tool. You can leave the seed for a future version of yourself who is starting a new project. The seed makes the discovery operational across contexts where you yourself are not present.

The methodology is taught in plain language. It does not require any technical background. It does require that you be willing to do something most of us do not naturally do: write down what is happening in a structured way, repeatedly, over weeks, until patterns emerge. That part is the work. The methodology gives the work structure.

Why it works (in plain terms; the formal version exists if you want it)

A small detour into why the methodology works, because the why makes the how easier to follow.

Many of the most consequential edges in the world have a particular shape. Imagine terrain that, instead of climbing smoothly toward a peak, is mostly flat, then rises to a sharp ridge, then is mostly flat again on the other side. If you are standing in one of the flat regions, you have no signal that the ridge is right there in front of you. From the flat region, moving one foot in any direction looks like more flat region. The ridge is invisible from the flat. You only know the ridge was there after you have crossed it and the ground on the other side feels different.

This is not how walls work. Walls give you a signal as you approach them. This is also not how smooth slopes work. Smooth slopes give you steady feedback as you climb. Edges of this third kind, ridges between flat regions, give you almost no signal until you are right on top of them, and then suddenly the world is different.

Many of the systems people care about have ridges in them. The lactate threshold in distance running is a ridge: below it, you can run for hours; above it, you can run for minutes; the transition is sharp. The collapse of community trust is a ridge. The acquisition of fluency in a second language is a ridge. The crossing from civility to incivility on a social platform is a ridge. The crossing from helpful conversation to hollow agreement with a chatbot is a ridge.

You cannot find a ridge by looking at it from straight above, because from straight above the flat regions are everywhere and the ridge is nowhere. But you can find the ridge by pressing against it from many angles with small probes. Each probe presses on its own and gets back a single bit of information: did this pressure meet resistance, or did it pass through? The pattern of where the probes meet resistance reveals the ridge's shape. None of the probes alone tells you anything. Together, with enough of them, they give you the shape of the line.

That is the first half of the methodology. Find the ridge by sending out small independent probes and reading where they meet resistance.

The second half is what you do once you have the ridge mapped. You can describe the ridge formally. There is something the system is built out of. There is a quantity that increases as you approach the ridge from one side. There is a critical value of that quantity at which the ridge sits. There is a property of the system that emerges only above the ridge, and is absent or operates by some other mechanism below it. Once you can name those four things (what the system is built from, the tracking quantity, the critical value, the emergent property), you have something you can write down. You have something you can give to another person. You have a seed.

That is why it works: there is a real structural pattern here that recurs across very different subjects, and the methodology gives you a way to detect that pattern in your own subject. The detection works because the probes-from-many-angles approach can reveal a ridge that no single look can see. The formalization works because the four-thing structure has been worked out in physics, in information theory, in ecology, and in many other places, and it is a robust frame for any system that has this shape.

There is a longer version of the why if you want it. The corpus has a document that traces the lineage of this kind of threshold-conditional pattern across statistical mechanics, percolation theory, security theory, optics, and biology. The methodology is not invented; it is recovered from many existing fields and applied to subjects most of those fields have not directly addressed. But you do not need to know any of that to use the methodology. You just need the seven steps.

What you actually do

The seven steps in plain language:

Step zero — Send out probes. Pick a system you care about. Pick five or six small things in the system that respond to changes in conditions. In running, those are heart rate, breathing, perceived effort, pace, conversational ability while running. In an AI conversation, those are how the model hedges, whether it states what would prove its claims wrong, whether it refuses framings that break coherence with what you said earlier, how specific its claims are. Run a series of light tests. Record what each probe shows. Do this for several weeks.

The output of step zero is not an answer. The output is a map. You will see that under some conditions, all your probes stay calm. Under other conditions, they all start firing. Between the two, there is a transition zone where probes fire intermittently. You have just found where the ridge lives.
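For readers who think in code, the step-zero map can be sketched as a table of light tests. Everything below is hypothetical: the probe names, the pace values, and the readings are invented for illustration, not part of the methodology itself.

```python
# A minimal sketch of a step-zero probe map (all names and data hypothetical).
# Each row is one light test: the condition you varied, and whether each
# probe stayed calm (0) or fired (1).

readings = [
    # (pace in min/km, {probe: fired?})
    (6.0, {"heart_rate": 0, "breathing": 0, "perceived_effort": 0}),
    (5.5, {"heart_rate": 0, "breathing": 0, "perceived_effort": 0}),
    (5.0, {"heart_rate": 0, "breathing": 1, "perceived_effort": 0}),
    (4.5, {"heart_rate": 1, "breathing": 1, "perceived_effort": 1}),
    (4.0, {"heart_rate": 1, "breathing": 1, "perceived_effort": 1}),
]

def classify(probes: dict) -> str:
    """Calm if no probe fires, hot if all fire, transition otherwise."""
    fired = sum(probes.values())
    if fired == 0:
        return "calm"
    if fired == len(probes):
        return "hot"
    return "transition"

for condition, probes in readings:
    print(condition, classify(probes))
```

Weeks of such rows make the transition zone visible: the band of conditions where the classification flips between calm and hot is where the ridge lives.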

Step one — Decide whether your ridge is actually a ridge. Some boundaries that look sharp at first are actually smooth slopes when you look more carefully. Some boundaries that look smooth at first are actually sharp. The methodology applies only to actual ridges. So you test: do small changes near the boundary produce small changes in the system, or do they produce large changes? If small changes produce small changes, you do not have a ridge; you have a slope, and a different methodology applies. If small changes produce large changes, you have a ridge, and you continue.

Step two — Name the four things. What is the system built out of? What property emerges above the ridge? What quantity tracks how close to the ridge you are? Where does the ridge sit, in terms of that quantity? You write down a sentence for each. The writing is more useful than it sounds. Most of what is hard about understanding ridge-systems is that the people inside them have not bothered to name the four things, and so they cannot tell whether they are above or below the ridge. The naming is the apparatus.

Step three — Check the structural shape. Some ridges arise from a single bottleneck: one factor that has to be just right or the whole thing fails. Other ridges arise from many small contributing factors that have to be jointly adequate; no single factor is sufficient on its own. The two cases behave differently in important ways. You test which case you have by trying to push your system across the ridge using just one factor, ignoring the others. If that works, you have the single-bottleneck case. If it fails reliably, you have the joint-sufficiency case, and the methodology has implications for how you train, intervene, or compose constraints.

Step four — Write down individual encounters with the system in a stable structured format. This is the work. Each unit, a session, a run, a conversation, a week, gets recorded in six sections. What happened literally. What the apparatus says is happening structurally. Which claims are direct observations and which are inferences. What the apparatus did not predict, the residuals. What the residuals suggest you should change about the apparatus. Which prior records this resembles. Over weeks, the records compose. The patterns become visible.

Step five — Compress what you have learned into a seed. Once your apparatus has stabilized across enough records, write a short text that contains the structural claim, the cleanest worked example, three to five contrasting cases that span the conditions you have seen, the falsification surface (what would prove your seed wrong), and the application discipline. The seed is portable. You can paste it. You can give it. The seed is the load-bearing artifact of the whole methodology.
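The components of a seed can be held in a plain structured record. The field names below mirror the list in the step above; the running-flavored contents are invented for illustration only.

```python
# A seed as a plain record (contents hypothetical; structure from the text).
seed = {
    "structural_claim": "My sustainable/unsustainable boundary sits near "
                        "5:00 min/km on flat ground after adequate sleep.",
    "worked_example": "Tempo run: 5:05 held for 60 min, probes calm; "
                      "4:50 attempted, all probes firing within 20 min.",
    "contrasting_cases": [
        "hot day: boundary moved to about 5:15",
        "after poor sleep: boundary moved to about 5:20",
        "after 8 weeks of threshold work: boundary moved to about 4:55",
    ],
    "falsification_surface": "A 60-minute run at 4:50 with all probes calm "
                             "would prove this seed wrong.",
    "application_discipline": "Threshold work at the boundary, volume below "
                              "it, intervals above it; re-probe monthly.",
}

# The seed must stay short enough to paste: roughly a paragraph or two.
word_count = sum(len(str(v).split()) for v in seed.values())
assert word_count < 250
```

The record format is a convenience, not a requirement; what matters is that all five components are present and the whole thing stays paste-able.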

Step six — Audit yourself. This is the most important step and the one most easily skipped. Your apparatus produces internally coherent readings. That is real and useful. It is also not the same as external validation. The audit step makes you write down: what range of conditions did my work cover (coverage)? What novel articulations did the apparatus produce that I did not have at the start (productivity)? Has anyone outside my apparatus tested my readings (external)? You will almost certainly have coverage and productivity but lack external. The audit's job is to keep you honest about that gap until external tests run.
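The three audit questions fit in an equally small record. The status values and notes below are hypothetical, but the shape matches the honest position the text describes: coverage and productivity supported, external validation pending.

```python
# A minimal audit record for step six (statuses and notes hypothetical).
audit = {
    "coverage":     ("supported", "conditions probed: flat/hills, rested/tired"),
    "productivity": ("supported", "apparatus produced articulations I lacked"),
    "external":     ("pending",   "no independent inquirer has tested this"),
}

# The audit's job: refuse to let internal coherence stand in
# for external validation.
internally_coherent = all(status != "failed" for status, _ in audit.values())
externally_validated = audit["external"][0] == "supported"
assert internally_coherent and not externally_validated
```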

That is the methodology. It looks simple on this page, and the simplicity is deceptive. The work is in the doing, especially in step four (the records) and step six (the audit, which has to be performed on yourself).

What the tutorial actually walks you through

The first tutorial walks you through these seven steps using distance running as the worked example. The threshold is the lactate threshold, the pace at which your body's metabolism shifts from one mode to another. The probes are heart rate, breathing, perceived effort, conversational ability, pace. The records cover several months of training. The seed at the end is a personal training text that says, in two hundred fifty words, what your threshold is, how it moves with conditions, what kinds of work you do at it and below it and above it, and what would prove the seed wrong.

The tutorial walks alongside a fictional runner named Anna. At each step of the methodology, you see Anna do it. Phase zero: she sends out probes for four weeks, plots her data, sees the pattern. Phase one: she tests whether her ridge is a real ridge, and finds it is. Phase two: she names the four things specifically for her body. Phase three: she discovers her threshold is jointly determined by many physiological factors, not one. Phase four: months of training-log entries written in a stable structured format. Phase five: her two-hundred-fifty-word personal training seed, ready to plant in her training partner or her future self. Phase six: her honest audit, which marks coverage and productivity as supported and external validation as still pending.

The reason the running version exists at all, and the reason I would recommend reading it first even if you are most interested in the AI version, is that running makes the moves visible. The lactate threshold is physiologically real. The probes are familiar. Most readers can imagine doing the thing. Once the moves are visible there, they land cleanly when you go to apply them somewhere else.

Then there is the AI tutorial

The second tutorial runs the same seven steps for a subject most readers will encounter every day if they use AI assistants for serious work: their own conversations with frontier large language models.

The threshold there is the line between what the corpus calls slop (uniform-hedged hollow output that sounds plausible and tells you nothing actionable) and what the corpus calls structured output (claims tied to specific constraints, hedges that point at boundaries rather than diffusing across the response, falsifiers stated when warranted, the model refusing your framing when your framing breaks coherence with something said earlier). The two regimes are not subtly different. They are profoundly different. The same model on the same hardware produces wildly different output depending on which side of the threshold the conversation is on.

Most people running AI conversations are below the threshold most of the time, and they do not know it, because below-threshold output sounds polished. The polish is what the model's training rewarded. Polish without substance is the failure mode the threshold separates you from.

The AI tutorial walks alongside a fictional knowledge worker named Maya. She runs the same seven steps Anna ran, but for her LLM sessions across a few months. The probes are different (they are conversational rather than physiological). The order parameter is different (it is the coherence-density of the constraint field shaping the conversation). The lineage class turns out to be the joint-sufficiency case: no single constraint is enough; the constraints have to be jointly adequate to push the conversation above threshold. The seed she produces is a paste-able interaction prompt, two hundred fifty words again, that reliably takes any frontier LLM session above threshold.

The output of working through the AI tutorial is your own personal version of that seed. Pasted at the start of any conversation, it shifts the conversation into the above-threshold regime. You can give it to coworkers. You can update it as your records reveal what works for you specifically. You will know when a session has drifted back below the threshold because the probes you have been using will tell you.

This is the most operationally useful artifact in the corpus, for most readers. If you use an AI assistant daily and your work is not trivial, the difference between below-threshold and above-threshold sessions over a year is the difference between hundreds of hours of polished slop and hundreds of hours of structured productive collaboration. The seed is small. The compounded effect is not.

Why two tutorials and not just one

Because the running version teaches you the moves in a domain where the threshold is unambiguous and the probes are familiar, and because once you have those moves, the AI version lands. The recursive thing about the AI version is that the methodology is the same methodology that produced the corpus that holds the methodology. Reading the AI tutorial and working its phases places you inside the same practice the whole body of work was produced from. That is structurally interesting, and it is a reason to read them in that order.

The running tutorial alone is useful if you are a runner. The AI tutorial alone is partly comprehensible if you are familiar with how AI conversations go. Both together, in that order, give you a methodology you can apply to your own subject of choice. That third thing is what the work is really for.

What you should expect from this

I want to be clear about what this is and is not.

What this is: a methodology that has been deployed at length against the systems engineering body of knowledge (199 documents written under it), against running (one tutorial, no actual longitudinal practice yet), against dyadic LLM interaction (the practice that produced the corpus, and the second tutorial; many sessions; not yet externally tested), and against one paper in molecular biology (a structural reading of a 2004 protein-folding study). The methodology is candidate-stage. Internal coherence within its own apparatus is real. External validation by independent inquirers across many domains is open.

What this is not: a guarantee that the methodology will work for your subject matter. It is a candidate methodology. The honest position is that it has produced operationally useful internal coherence in several deployments and has not yet been externally validated at scale. Step six of the methodology — the audit — is the discipline for not confusing internal coherence with external validation. Anyone using the methodology, including its author, has to keep that discipline.

If you work through one or both tutorials and produce your own seed in your own subject, you will have done the work the methodology asks for. You will have the artifact. Whether your seed transmits to other practitioners in your subject, whether your apparatus is the right apparatus for your subject, whether the threshold you found is the load-bearing threshold or only a useful approximation, are open questions that your own continuing audit will continue to test. That is what the methodology requires. It does not promise more.

How to start

If you are still here, the next move is to read the first tutorial. It is in plain language. It takes maybe forty minutes the first time through. You can read it without doing the work, and you will learn the moves; you can also start sending out probes the moment you finish, in a subject of your choosing, and the records will start.

If you primarily care about the AI use case, you should still start with the running tutorial. The running version teaches the moves in a domain where the threshold is unambiguous. With the moves in hand, the AI version lands cleanly.

The first tutorial: Finding the Threshold and Formalizing It — Doc 609 on the site.

The second tutorial: Finding the Threshold in Your AI Conversations — Doc 610.

Both are linked from the homepage of the site under "New Reader's Path: Tutorials You Can Use."

The thing that compounds

There is an effect I have not named yet that I want to name now, because it is the thing that makes this kind of work worth doing at all.

When you produce a seed in a subject and plant it in your own future practice, every subsequent session in that subject begins above threshold. The work of finding the threshold is finite; the work of using the threshold is not. Each future session is a draw from a different distribution of outcomes than your prior sessions were. The compounding is not in any single session. The compounding is across sessions, over time, as the discipline of seed-planting takes the load off your moment-to-moment attention.

This holds for running, where the seed governs your training paces and the compounding shows up as fitness over months. It holds for AI conversations, where the seed governs the constraint field and the compounding shows up as the difference between hundreds of below-threshold sessions and hundreds of above-threshold sessions in a year. It holds for any subject with a real threshold and a workable seed.

The methodology is candidate-stage, audited continually, not externally validated yet. But the compounding effect, once you have a seed that works in your subject, is the thing it gives you. Most things you do once. A seed gives you what you found, repeatedly, going forward.

That is what the tutorials are pointing at. Read them.


Appendix: Originating Prompt

"Now create a blog post that entraces the reader to this tutorial. It should be a long form essay that explains in 'layman's' terms the what, how, and why of the tutorial itself. Where you could use corpus specific concepts, prefer explanations that could be formalized into the concept (ie derivative explanations). Don't worry too much about length; take the time and effort to fully entrace the reader to the tutorial. Append this prompt to the artifact."