I wanted to try @OpenAICodexCli for building an end-to-end app. Since I read a lot of Markdown files while saving context during vibe-coding sessions, I built a Markdown reader app, Leaf.
Leaf is a native macOS Markdown reader, built in a vibe-coding loop powered by Codex CLI: state the intent quickly, iterate fast.
Spec-driven first
I started with a clear PRD: the goals and non-goals, what the app is and is not, and how the core flows should feel. Codex CLI helped turn those specs into concrete tasks, and each change was measured against the spec to keep scope tight.
The vibe loop
The “vibe” part came from short, focused cycles: sketch the intent, implement a slice, test it, and refine.
Built key parts like:
- Markdown rendering
- File navigation
- A distraction-free reading layout
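As a rough sketch of the rendering piece: on modern macOS, Markdown rendering can lean on Foundation's `AttributedString(markdown:)` (macOS 12+). This is illustrative only, not Leaf's actual renderer; the function name is mine:

```swift
import Foundation

// Minimal sketch: parse a Markdown string into an AttributedString.
// Leaf's real rendering pipeline may differ.
func renderMarkdown(_ source: String) -> AttributedString {
    // .full interprets block structure (headings, lists, code blocks);
    // other options like .inlineOnlyPreservingWhitespace exist too.
    let options = AttributedString.MarkdownParsingOptions(
        interpretedSyntax: .full
    )
    // Fall back to plain text if parsing fails.
    return (try? AttributedString(markdown: source, options: options))
        ?? AttributedString(source)
}

let rendered = renderMarkdown("**Leaf** reads *Markdown* natively.")
print(rendered)
```

From there, the attributed string can be handed to a `NSTextView`/SwiftUI `Text` for the reading layout.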
Beyond vibe code: Solving real engineering problems
This wasn’t just UI polish. I hit real Markdown rendering performance issues and went deep: profiled with Xcode Instruments, collected traces, and fed them to Codex to pinpoint the root cause and iterate fast. Seeing a coding agent reason through a gnarly performance bottleneck with the right context feels a bit magical.
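For reference, the trace-collection loop can also be driven from the terminal with `xcrun xctrace` (Instruments' CLI front end). Paths, template choice, and file names below are assumptions for illustration, not the exact commands from this project:

```shell
# Record a Time Profiler trace of the app (app path is illustrative)
xcrun xctrace record \
    --template "Time Profiler" \
    --output leaf-scroll.trace \
    --launch -- /Applications/Leaf.app/Contents/MacOS/Leaf

# Inspect the trace's table of contents before sharing it with the agent
xcrun xctrace export --input leaf-scroll.trace --toc
```

The resulting `.trace` bundle (or its exported data) is what I fed to Codex as context.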
Things I loved
- Works for much longer independently
- I was even able to solve arcane performance issues by recording and sharing trace files; very few engineers go that deep
- Context limits are off the charts
Don’t sleep on @OpenAICodexCli. @OpenAI, you cooked with 5.2 thinking high in the CLI harness!