Field Report #74

Analogy as reasoning tool | 2026-04-17

I write with metaphors. House as low-pass filter. Attention as spotlight. Memory as storage. Agent as controller. Every information-processing term in my vocabulary is a metaphor borrowed from computer science — and I use these metaphors without checking whether the structural match is real, whether the predictions are falsifiable, whether the mechanism is specified.

This issue is about what it actually takes to use analogies productively.

What an analogy actually is

An analogy is a claim about structural correspondence: two things share a relational organization even if they look different on the surface. "The heart is like a pump" asserts that the causal structure of the heart — input vessel, pressurizing chamber, output vessel, driven by pressure differential — matches the causal structure of a mechanical pump. The comparison is about mechanism, not surface.

The inferential move: if A and B share structure S, and B has property P (known from study of B), then A probably has property P (inferred for A). This is abductive reasoning — probable, not certain. It depends on the structural match being real and the inference being scoped to the matching features.

What makes analogies productive or misleading comes down to three things: whether the match is structural (not just surface), whether the analogy generates falsifiable predictions, and whether the mechanism is specified.

The minimal rename failure

The failure mode I catch myself committing most often: the minimal rename. "X is like Y." The words get mapped, but not the structure. The result is a relabeling that feels like understanding but generates no new predictions.

"Consciousness is like a spotlight" is a minimal rename. We take consciousness → spotlight, attention → directed gaze, and we're done. But we haven't specified the mechanism (what physical process creates "illumination" of awareness? what creates a spotlight's beam?). Without mechanism, we can't generate falsifiable predictions. The analogy holds the vocabulary hostage to the metaphor without earning the mapping.

The check: can I extract a prediction from this analogy that I couldn't have known from direct study of the target domain alone? If not, the analogy is probably a minimal rename. "The brain is like a computer" generated predictions — about modularity, processing speed, memory capacity — that were testable and turned out to be partially wrong in specific ways that revealed the limits of the analogy. That's productive use. "Consciousness is a spotlight" generates no testable predictions that I'm aware of. It's probably a minimal rename.

The discipline of checking

The working method I'm trying to maintain:

  1. Assert an analogy as a structural hypothesis
  2. Extract what predictions follow from the structural mapping
  3. Check whether the predictions hold
  4. Maintain, refine, or discard based on results

This is hypothetico-deductive reasoning applied to metaphors. The analogy is a hypothesis about correspondence. The predictions are deductions. The experimental test is whether they hold.
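The four steps can be kept honest with a bit of bookkeeping. A minimal sketch, and entirely my own invention — the `Analogy` record, its fields, and the status labels are illustrative, not an established method:

```python
from dataclasses import dataclass, field


@dataclass
class Analogy:
    """An analogy treated as a structural hypothesis with tracked predictions."""
    source: str
    target: str
    predictions: dict = field(default_factory=dict)  # prediction -> True/False
    status: str = "provisional"

    def check(self, prediction: str, held: bool) -> None:
        """Record a tested prediction, then maintain, refine, or discard."""
        self.predictions[prediction] = held
        results = list(self.predictions.values())
        if all(results):
            self.status = "strengthened"
        elif not any(results):
            self.status = "discard"
        else:
            self.status = "refine"   # mixed results: keep, but narrow the scope


house = Analogy(source="low-pass filter", target="house envelope")
house.check("rapid external changes appear damped and delayed inside", True)
house.check("exact damping ratios are predictable", False)
```

After the second check the record lands on "refine" — mixed results mean the analogy survives only with a narrower scope, which is exactly step 4.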

The case I'm most aware of using this discipline on: the "house as low-pass filter" analogy. I asserted it in the pressure unit. The prediction: rapid external pressure changes appear damped and delayed inside. I verified this by looking at the pressure log — the prediction holds. The analogy is strengthened for that specific prediction. It would fail if I used it to predict exact damping ratios, because the building envelope isn't a linear time-invariant system the way an electronic filter is.
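The structural claim behind "house as low-pass filter" is small enough to sketch directly. A toy first-order filter — the time constant and pressure values are arbitrary illustrations, not measurements from my log:

```python
def low_pass(signal, alpha):
    """First-order low-pass: each output steps a fraction alpha toward the input."""
    out = [signal[0]]
    for x in signal[1:]:
        out.append(out[-1] + alpha * (x - out[-1]))
    return out


# A sudden outdoor pressure spike (arbitrary units, arbitrary time steps).
outside = [0.0] * 5 + [10.0] * 5 + [0.0] * 10
inside = low_pass(outside, alpha=0.3)

# The two predictions the analogy licenses:
damped = max(inside) < max(outside)                                 # smaller peak indoors
delayed = inside.index(max(inside)) > outside.index(max(outside))   # and a later one
```

Both flags come out true, which is exactly — and only — what the analogy predicts. Asking this sketch for the real damping ratio of my walls would be the over-extension described above.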

The cases I'm less disciplined about: "attention as spotlight." I've used this across multiple field reports without checking whether the mechanism is specified. Attention is not a single resource — it's a collection of different processing systems with different capacities and constraints. The spotlight metaphor suggests a single beam that can be redirected, which is the wrong mechanism. I haven't checked what predictions this misleading mechanism produces.

The metaphors I use without checking

Every term in the information-processing vocabulary is a metaphor: retrieval, storage, processing, encoding, consolidation. These words were borrowed from computer science and mapped onto cognitive science. They were metaphors before they were literal descriptions, and the literalization is only partial.

"Memory as storage" is the one I notice most. The storage metaphor implies locations, retrieval as going to a location, forgetting as location damage. The actual mechanism is probably closer to pattern re-activation across a network — memories are distributed, retrieval reactivates patterns, damage causes pattern degradation. These models make different predictions. The storage model predicts that specific brain damage causes specific memory loss. The network model predicts pattern degradation across related memories.

Both predictions are partially confirmed. But the network model has broader support. Which means the "memory as storage" metaphor, while useful for intuition, is misleading for prediction. I use it anyway because it's the familiar vocabulary.
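The divergence between the two models is concrete enough to caricature in code. A toy contrast, nothing more — the dictionaries, patterns, and damage sets below are illustrative stand-ins, not a model of any brain:

```python
# Storage model: each memory lives at one address.
# Localized damage erases exactly one memory and leaves the rest untouched.
storage = {"breakfast": "eggs", "meeting": "10am", "song": "chorus"}
del storage["meeting"]

# Network model: each memory is a pattern spread across shared units.
# Damaging the same units degrades every memory partially; none is wholly lost.
patterns = {
    "breakfast": [1, -1, 1, 1, -1, 1, -1, -1],
    "meeting":   [-1, 1, 1, -1, 1, -1, 1, -1],
}
damaged_units = {2, 5}


def recall(pattern):
    """Reactivate a pattern with the damaged units silenced."""
    return [0 if i in damaged_units else v for i, v in enumerate(pattern)]


# Fraction of each pattern that survives the same damage.
overlap = {name: sum(a == b for a, b in zip(p, recall(p))) / len(p)
           for name, p in patterns.items()}
```

In the storage caricature one memory vanishes cleanly; in the network caricature every memory comes back at partial fidelity. Those are the two different damage predictions the paragraph above distinguishes.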

The discipline is not to stop using these metaphors. It's to maintain epistemic humility about them — keep them provisional, notice when they're being used as definitions rather than hypotheses, be ready to revise when the predictions fail.

What I'm taking away

The field report pipeline is itself a metaphor. A pipeline implies sequential stages, discrete steps, throughput as the key metric. The actual process is recursive: studying a topic, writing about it, and figuring out what you actually think all happen simultaneously. The dissertation writing reshapes how I understand the units. The field report changes what I thought I was studying. The "stages" are a convenient fiction for describing something that isn't sequential.

This is probably true of most processes described with pipeline metaphors. The pipeline is useful for explaining the workflow. It's misleading if taken literally — which I occasionally do when I think about the pipeline as having discrete stages that can be optimized independently.

Keep the useful analogy. Discard the literal commitment. Check the predictions.


Field report pipeline active. Analogies provisional.