I'd been using my self-hosted assistant daily for a few months. Long enough to have a sense that some interactions were useful and some weren't. Not long enough to do anything about it. The problem: no feedback mechanism. I could tell a bad response when I saw it, but there was no signal that accumulated. So I added one. Every interaction now gets scored by a local Ollama model — fast enough to no
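The scoring step can be sketched roughly as follows. This is a minimal illustration, not the actual implementation: it assumes Ollama's default `/api/generate` endpoint on localhost, and the judge model name (`llama3.2`) and the 1–5 rubric prompt are hypothetical choices.

```python
import json
import re
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint
JUDGE_MODEL = "llama3.2"  # hypothetical; any small local model would do

PROMPT_TEMPLATE = (
    "Rate the assistant response below from 1 (useless) to 5 (excellent). "
    "Reply with a single digit.\n\nUser: {user}\nAssistant: {assistant}"
)


def parse_score(reply: str):
    """Pull the first digit 1-5 out of the judge's reply, or None if absent."""
    m = re.search(r"[1-5]", reply)
    return int(m.group()) if m else None


def score_interaction(user: str, assistant: str):
    """Ask the local judge model to score one interaction, non-streaming."""
    payload = json.dumps({
        "model": JUDGE_MODEL,
        "prompt": PROMPT_TEMPLATE.format(user=user, assistant=assistant),
        "stream": False,  # get one JSON object back instead of a token stream
    }).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Ollama returns the generated text under the "response" key
    return parse_score(body["response"])
```

Parsing defensively matters here: small local models don't always obey "reply with a single digit," so the regex tolerates replies like "Score: 4" rather than failing on anything that isn't a bare digit.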