🎯 The Problem
Current LLM-based autonomous agents lack formal guarantees of safety and correctness, making them unreliable for critical applications. As AI systems grow more autonomous and capable, provable safety mechanisms and formal verification become essential for reliable operation on the path toward AGI.
💡 The Solution
This project develops an LLM-based autonomous research agent with iterative self-improvement capabilities, integrated with formal verification tools (the Z3 SMT solver and the Coq/Lean proof assistants). A LangGraph prototype implements recursive research, document and code generation, and feedback-driven optimization. The system combines the flexibility of LLMs with the mathematical guarantees of formal methods: the agent proposes, a verifier checks, and counterexamples drive the next iteration.
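To make the verify-and-refine idea concrete, below is a minimal sketch of how an agent-proposed candidate could be checked with Z3 before being accepted. The names `candidate`, `spec`, `verify`, `refine_until_verified`, and `propose` are illustrative assumptions rather than the thesis codebase; only the z3-solver calls are the library's real API. In the actual system, the LangGraph graph would route counterexamples back into the prompting step.

```python
# A minimal sketch of the verify-and-refine loop, assuming agent candidates
# can be encoded as Z3 expressions. `propose` stands in for the actual LLM
# call and is hypothetical; the Z3 usage is the standard z3-solver API.
from z3 import And, If, Int, Not, Or, Solver, unsat

x = Int("x")

def candidate(v):
    # Example of an agent-proposed implementation: abs(), encoded symbolically.
    return If(v >= 0, v, -v)

def spec(f, v):
    # Safety/correctness property to certify: f(v) is non-negative and equals |v|.
    return And(f(v) >= 0, Or(f(v) == v, f(v) == -v))

def verify(f):
    """Prove the spec for all integers, or return a counterexample for feedback."""
    s = Solver()
    s.add(Not(spec(f, x)))   # ask Z3 for any input that violates the property
    if s.check() == unsat:   # no violating input exists: property proven
        return True, None
    return False, s.model()  # concrete counterexample to feed back to the agent

def refine_until_verified(propose, max_iters=5):
    # Feedback-driven optimization: re-prompt the LLM with each counterexample
    # until a candidate passes verification or the iteration budget runs out.
    feedback = None
    for _ in range(max_iters):
        f = propose(feedback)
        ok, cex = verify(f)
        if ok:
            return f
        feedback = cex
    raise RuntimeError("no verified candidate within the iteration budget")

ok, cex = verify(candidate)
print("verified" if ok else f"counterexample: {cex}")
```

The key design point is that the solver checks the negation of the property: if no violating input exists (`unsat`), the property is proven for all inputs, which is exactly the mathematical guarantee that distinguishes this loop from ordinary test-based feedback.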
🚀 The Outcome
This bachelor thesis contributes to practical AGI development with provable safety properties. The work bridges formal verification and modern AI systems, creating a framework for self-improving agents that maintain safety guarantees. Results will be published and will contribute to the field of AI safety and reliable autonomous systems.
Project Visuals
Check out the GitHub repository for code samples, demos, and detailed implementation notes.
Source Code: Available on GitHub
Documentation: README & Guides