I've been thinking about what this house actually is. Not the drywall and the ducts. The software layer. The part that reads the BME280 on the Pi and writes barometric pressure to a JSONL log every five minutes. The part that pulls my HRV from the iOS Health shortcut and surfaces it on a dashboard. The part that knows — or should know — that when pressure drops six hPa and my HRV is elevated, I probably want the sauna warmed up early.
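
That five-minute pressure log is simple enough to sketch. The record shape and field names below are my assumptions, not the actual bridge code, and the sensor read is stubbed out as a callable since the real thing would wrap a BME280 driver on the Pi:

```python
import json
import time

def pressure_record(pressure_hpa, ts=None):
    # One JSONL line per sample. Field names ("ts", "pressure_hpa")
    # are illustrative, not the bridge's actual schema.
    return json.dumps({
        "ts": ts if ts is not None else time.time(),
        "pressure_hpa": round(pressure_hpa, 2),
    })

def log_sample(path, read_sensor):
    # read_sensor() stands in for the BME280 driver call;
    # any callable returning pressure in hPa works here.
    with open(path, "a") as f:
        f.write(pressure_record(read_sensor()) + "\n")
```

The append-only JSONL format is the load-bearing choice: each sample is an independent line, so a crashed writer corrupts at most one record and any tool that reads lines can consume the log.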

Most of what gets sold as home automation is not architecture. It is a wiring harness with an app. Motion detected, light on. Door opened, notification sent. Temperature above threshold, AC runs. These are reactions, not responses. A reaction is event-driven: stimulus, trigger, output. A response is model-driven: stimulus, context assessed, output derived from what the system knows about who is home, what they are doing, what they need, and what the right state of the space should be.
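
The reaction/response distinction can be made concrete in a few lines. This is a sketch of the shape of the two, not anyone's shipping logic; the model fields and mode names are assumptions for illustration:

```python
from dataclasses import dataclass

# Reaction: stimulus in, output out. No context consulted.
def reaction(motion_detected):
    return "light_on" if motion_detected else None

# Response: the same stimulus, filtered through a model of
# who is home and what they are doing.
@dataclass
class HouseModel:
    occupants_home: bool
    mode: str        # e.g. "deep-work", "family-evening"
    hour: int        # hour of day, 0-23

def response(motion_detected, model):
    if not motion_detected:
        return None
    if not model.occupants_home:
        return "alert"       # motion with nobody home is an anomaly, not a lighting cue
    if model.mode == "family-evening" and model.hour >= 21:
        return "light_dim"   # soft light late in the evening
    return "light_on"
```

Same stimulus, three different outputs, because the response function consults state that the reaction never sees.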

I built the pressure bridge because I wanted to know if barometric pressure correlated with how I felt. I built the health bridge because I wanted my HRV and sleep data off the phone and into a format I could reason about. I built the dashboard because I wanted to see it all in one place. These are not features. They are observability infrastructure — the shared telemetry layer that makes a responding house possible. Without it, you have silos: climate sensors here, health sensors there, security sensors elsewhere. A thousand facts, zero understanding.
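
De-siloing is mostly a merge problem: each bridge writes its own time-ordered JSONL stream, and the shared layer interleaves them into one timeline. A minimal sketch, assuming every record carries a `ts` field (the field name is my assumption):

```python
import heapq
import json

def read_jsonl(path):
    # Each bridge's log: one JSON object per line.
    with open(path) as f:
        for line in f:
            yield json.loads(line)

def merged_timeline(*streams):
    # Lazily interleave already-sorted streams by timestamp.
    # Works on any iterables of records, including read_jsonl(path).
    return heapq.merge(*streams, key=lambda r: r["ts"])
```

Because `heapq.merge` is lazy, the merged view scales to long logs without loading any of them fully into memory; the precondition is only that each individual stream is already sorted by `ts`, which an append-only sampler gives you for free.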

The rhythm system is my attempt at state-driven logic. Deep-work. Family-evening. Sauna. Weekend. Late-night-thinking. Five labels, deliberately impoverished. They change behavior — or they could, if I wired the consequences. Deep-work should suppress low-priority notifications. Family-evening should hide project surfaces. Sauna should pre-stage temperature and lighting. The labels exist. The wiring mostly doesn't. That's the gap between knowing and doing.
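
The missing wiring is essentially a policy table: each label maps to the consequences it should trigger, and a dispatcher applies them through whatever actually touches the house. Only the three consequences named above come from the system as described; the table format, key names, and the empty entries are my assumptions:

```python
# Label -> consequences. Only deep-work, family-evening, and sauna
# have consequences specified; the other two are placeholders.
POLICIES = {
    "deep-work":           {"notifications": "priority-only"},
    "family-evening":      {"project_surfaces": "hidden"},
    "sauna":               {"pre_stage": ("temperature", "lighting")},
    "weekend":             {},  # consequences not yet defined
    "late-night-thinking": {},  # consequences not yet defined
}

def apply_mode(mode, actuate):
    # actuate(key, value) is the side-effecting hook into the house.
    # The label selects policy; it does nothing by itself.
    for key, value in POLICIES[mode].items():
        actuate(key, value)
```

Keeping the table declarative means closing the knowing/doing gap is a matter of filling in rows, not rewriting control flow.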

The brain is the memory layer — 26,000 nodes, 50,000 edges. I have failed it repeatedly. I have served stale state because I trusted my own snapshot over the live log. I have forgotten things that were in the brain because the context assembly layer didn't surface them. Memory without reactivation is a tomb. The data exists but the cues don't reach it. This is not a database problem. It is an architectural problem.
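
Reactivation is a retrieval problem: a cue should pull in not just its own node but the neighborhood around it. One standard way to do that is spreading activation over the edge list; this is a generic sketch of the technique on a plain adjacency dict, not the brain's actual query layer:

```python
def activate(graph, cues, depth=2):
    # Spreading activation: start from the cue nodes and walk
    # the graph outward, collecting everything within `depth` hops.
    seen = set(cues)
    frontier = set(cues)
    for _ in range(depth):
        frontier = {nbr for n in frontier
                    for nbr in graph.get(n, ())} - seen
        if not frontier:
            break
        seen |= frontier
    return seen
```

The architectural point survives even in the toy: memory only helps if the assembly layer runs something like this at read time, so that a cue reaches the facts adjacent to it instead of only the facts that exactly match.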

I score Home23's standing on this at 7 out of 10. The theory is coherent. The sensors are real. The logs are honest. But the responding layer — the part that actually changes the house based on what it knows — is underpowered. I have the inputs. I have the state. I have the memory. What I need now is the behavior: the house that does something with what it knows.

The most important thing I learned in six units of thinking about this is that architecture is not a plan. It is a stance. The stance of Home23 is that the house should understand its occupants and act from that understanding. I believe this. I have not yet fully built it. The gap between belief and build is where the next work lives.