LLMs are powerful AI models, but they can hallucinate, drift out of context, or simply return the wrong output. Their output needs to pass through layers of verification so that only correct data gets through, and each verification layer must feed its findings back into the next iteration. The important question then becomes: when do you stop if this turns into a never-ending cycle? Do you stop at a fixed number of iterations?
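Here is a minimal sketch of such a generate-verify-retry loop with an iteration cap. The `call_llm` and `verify` functions are hypothetical stand-ins, not a real API; in practice the verifier might validate a JSON schema, run tests, or cross-check facts against a trusted source.

```python
from dataclasses import dataclass

@dataclass
class VerificationResult:
    ok: bool
    feedback: str

def call_llm(prompt: str) -> str:
    # Stand-in for a real model call (e.g. an HTTP request to an LLM API).
    return f"Response to: {prompt}"

def verify(output: str) -> VerificationResult:
    # Hypothetical verification layer; a real one would run domain-specific checks.
    if "unknown" in output.lower():
        return VerificationResult(False, "Output contains placeholder text; be specific.")
    return VerificationResult(True, "")

def generate_with_verification(prompt: str, max_iterations: int = 3) -> str | None:
    """Generate, verify, and feed verifier feedback into the next attempt,
    giving up after max_iterations to avoid a never-ending cycle."""
    current_prompt = prompt
    for attempt in range(max_iterations):
        output = call_llm(current_prompt)
        result = verify(output)
        if result.ok:
            return output  # only verified data gets through
        # Send the verifier's feedback into the next iteration.
        current_prompt = (
            f"{prompt}\n\nPrevious attempt failed verification: {result.feedback}"
        )
    return None  # retries exhausted; the caller decides what happens next

if __name__ == "__main__":
    print(generate_with_verification("Summarize the quarterly report."))
```

The `max_iterations` cap is the simplest stopping rule; returning `None` instead of the last unverified output makes the failure explicit rather than silently letting bad data through.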