Stop Prompting. Start Building.
Most people use AI like a vending machine.
You type something in.
Something comes out.
You stare at it and think, “That’s… not quite what I meant.”
That’s not because you’re bad at prompting. It’s because AI is not a designer. It’s an executor.
So the real workflow is not “open AI and ask for stuff.”
The real workflow is:
Design first. Delegate second.
This guide shows you how to do that, end-to-end, from “idea in your head” to “files created in your Obsidian folder by a CLI agent.”
The Big Idea
AI is extremely good at following instructions and extremely bad at guessing what you want.
If you skip planning, the AI will still produce something. It just might be the wrong “something,” confidently, at high speed.
So your job is to turn fuzzy ideas into clear instructions.
You do that in stages:
- You design the work (without AI).
- You use a chat AI to tighten the plan and generate a clean prompt.
- You use a CLI agent to execute that prompt inside your project folder.
- You review and iterate.
Step 1: Think Before You Touch AI
Before you open any AI tool, pause.
On paper, in Notes, or in your head, answer these four questions:
- What are you building?
- Who is it for?
- What is in scope?
- What is out of scope?
This is your blueprint.
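For example, a filled-in blueprint for the project in this guide could look like:
- Building: a beginner-friendly how-to guide that teaches people to plan with a chat AI first, then run a CLI agent that writes Markdown files into an Obsidian folder.
- For: complete beginners.
- In scope: planning, chat AI, Obsidian setup, terminal navigation, CLI execution, the review loop.
- Out of scope: machine learning theory.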
If you skip this, you will spend your time arguing with AI instead of building with it.
Step 2: Use Chat AI as a Design Partner
Now you open a chat-based AI, like:
- ChatGPT (web or app)
- Claude (web or app)
- Gemini (web or app)
Important: you are not using chat AI to “do the whole project” yet.
You are using chat AI to:
- clarify your idea
- expose missing details
- tighten scope
- produce a high-quality system prompt you can hand to a CLI agent
This is where you talk to the AI.
Step 2.1: Describe the Idea
Tell the chat AI what you want to build and why. Keep it simple.
Do
I want to create a beginner-friendly how-to guide that teaches people how to plan with a chat AI first, then run a CLI agent that writes Markdown files into an Obsidian folder.
Do Not
Make me something cool about AI.
Why: the “Do” version gives the AI a clear deliverable. The “Do Not” version forces it to guess what “cool” means.
Step 2.2: Define Scope and Boundaries
This is where you prevent chaos.
Do
This is for complete beginners. Keep it step-by-step.
Only cover the workflow: planning, chat AI, Obsidian setup, terminal navigation, CLI execution, review loop.
Do not go deep into machine learning theory.
Do Not
Cover everything about AI and all tools and all platforms.
Why: “everything” is how you get a 5,000-word tangent you never wanted.
Step 2.3: Ask It to Challenge You
You want the AI to act like a reviewer, not a hype person.
Do
Ask me clarifying questions.
Challenge assumptions.
Point out missing steps a beginner would get stuck on.
Do not just agree with me.
Do Not
Sounds good, just write it.
Why: beginners do not know what they don’t know. This step forces those gaps to surface.
Step 2.4: Iterate Until Clear
You go back and forth until it feels obvious.
A good rule: if you can’t explain it in two sentences, it’s still fuzzy.
Do
Keep asking questions until the plan is specific enough that a beginner could follow it without guessing.
If something is unclear, stop and ask before proceeding.
Do Not
This is probably fine. Just ship it.
Why: “probably fine” is how you end up debugging your own instructions later.
Step 2.5: Ask for the Final Prompt
Now you turn the plan into a reusable instruction set.
This is the handoff point from “chat” to “CLI.”
Do
Turn everything we agreed on into a single system prompt for an AI agent.
Include: scope, tone, file structure, and output rules.
Output it in Markdown so I can save it as system.md.
Do Not
Start writing the project files now in the chat.
Why: the chat is for planning and prompt generation. The CLI is for execution in your folder.
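To make the handoff concrete, here is a rough sketch of the kind of file you are aiming for. Every heading and rule below is a placeholder, not the real prompt; your own plan supplies the content. Once you have a terminal open in your project folder (Step 4), one way to create it is:
cat > system.md <<'EOF'
# Role
You are an agent that writes a beginner-friendly how-to guide as Markdown files in the current folder.
# Scope
Cover: planning, chat AI, Obsidian setup, terminal navigation, CLI execution, the review loop. Do not cover machine learning theory.
# Output rules
Write one Markdown file per major step. Keep the tone simple and step-by-step.
EOF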
Step 3: Install Obsidian and Create Your Project Folder
You want a place where files can be created and reviewed easily. Obsidian is great because it is basically a clean UI on top of normal Markdown files.
Install Obsidian
- Go to: https://obsidian.md
- Download Obsidian for your operating system.
- Install it like any normal app.
- Open Obsidian.
Create a Vault
When Obsidian opens:
- Click Create new vault.
- Name it something like AI-Projects.
- Choose where to store it (Documents is fine). Remember this location; you will need it in a later step.
A vault is just a folder on your computer. Obsidian watches it for Markdown files.
Create a Project Folder Inside the Vault
Inside Obsidian:
- Look at the left sidebar (the File Explorer).
- Right-click your vault name.
- Click New folder.
- Name it based on the topic you are generating (the examples in this guide use a folder named Ollama).
This folder is where your CLI agent will write files.
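Under the hood, the vault is just a normal folder. Assuming you named the vault AI-Projects, kept it in Documents, and named the project folder Ollama (swap in your own names), the layout looks roughly like this:
ls -a ~/Documents/AI-Projects
# .obsidian/   <- Obsidian's own settings folder; leave it alone
# Ollama/      <- your project folder, where the agent will write files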
Step 4: Open a Terminal in That Folder
You want to run commands “inside” the folder so the AI outputs files in the right place.
macOS
Open Terminal, then:
cd path/to/your/ObsidianVault/Ollama
Tip: In Finder, you can right-click a folder and look for something like “New Terminal at Folder” (availability depends on settings and macOS version).
Windows
You have two main options:
Option A: Windows Terminal / PowerShell (native)
Option B: WSL (Linux inside Windows)
If you are a beginner and you plan to do more dev work, WSL is often smoother, but it is still another thing to install. If you want the simplest path, start native.
If you use WSL, your Windows drive is usually mounted like this:
cd /mnt/c/path/to/your/ObsidianVault/Ollama
Linux
Open Terminal, then:
cd path/to/your/ObsidianVault/Ollama
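Whichever operating system you use, confirm you are in the right place before moving on (the path below is an example; yours will differ):
pwd    # should print the full path to your project folder, e.g. .../ObsidianVault/Ollama
ls     # should be empty, or show only files you created yourself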
Step 5: Install and Use AI CLIs (Codex, Claude, Gemini)
Below are common CLI paths people use. Commands can change over time, so treat these as the “current typical approach,” and verify with each tool’s official documentation.
No matter which CLI you use, the pattern is the same (a shell-level sketch follows this list):
- Install the CLI
- Verify it runs
- Run it inside your project folder
- Provide the prompt (often from a file like system.md)
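In shell terms, with a placeholder command name instead of a real tool, the pattern looks like this:
# <your_cli> stands in for whichever agent CLI you install
cd path/to/your/ObsidianVault/Ollama   # always run it from inside the project folder
<your_cli> --help                      # verify the install worked
<your_cli> "$(cat system.md)"          # hand it your prompt (exact syntax varies by tool)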
OpenAI Codex CLI
Typical install (at the time of writing, the official Codex CLI is distributed through npm; check the docs if that changes):
npm install -g @openai/codex
Sanity check:
codex --help
Common execution pattern (an example only; subcommands and flags vary by version, so confirm with codex --help). This passes the contents of system.md to the agent as its opening instruction:
codex "$(cat system.md)"
Note: many agent-style CLIs ask for confirmation before writing files or executing actions. If Codex prompts you to approve steps, that is normal.
If you want “less interactive” behavior, check the CLI help for flags related to approval (search for words like approve, yes, auto, or non-interactive):
codex --help
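The docs will also walk you through authentication. Depending on the version, you may sign in with your ChatGPT account or with an API key; a common convention for OpenAI tooling is the OPENAI_API_KEY environment variable, but treat the exact method as something to confirm in the official docs:
# Assumption: your version accepts an API key through this environment variable.
export OPENAI_API_KEY="paste-your-key-here"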
Anthropic (Claude) CLI or SDK-driven CLI
Anthropic provides official SDKs, and many people use CLI wrappers or simple scripts built around them. (Anthropic also ships its own terminal tool, Claude Code, whose command is claude; check its docs for the current install method.)
Typical install (Python SDK):
pip install anthropic
Sanity check:
python -c "import anthropic; print('anthropic installed')"
If you are using a Claude CLI wrapper (some tools name the command claude), always check:
claude --help
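Authentication follows the same idea. The Anthropic Python SDK reads its key from the ANTHROPIC_API_KEY environment variable by default, and most wrappers do the same (confirm in each tool's docs):
export ANTHROPIC_API_KEY="paste-your-key-here"   # the SDK picks this up automatically
python -c "import anthropic; anthropic.Anthropic(); print('client created')"   # quick check that the SDK can build a client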
Some CLIs have an “allow” or “dangerous” style flag to let the agent execute without constant confirmation. These flags vary by tool and are easy to misuse, so treat them carefully:
- Only use them in a safe folder
- Only when you understand what it will write or run
- Prefer interactive mode when you are new
Google Gemini (Python SDK)
Gemini is commonly used via SDKs, and you may run it through scripts or agent tools that integrate it. (Google also publishes an official Gemini CLI whose command is gemini; check its documentation for the current install method.)
Typical install (Python SDK):
pip install google-generativeai
Sanity check:
python -c "import google.generativeai as genai; print('gemini sdk installed')"
If you are using a Gemini-specific CLI or agent wrapper, use:
<your_command> --help
The command name and interface depend on the wrapper, not just the SDK.
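Authentication is again an API key. The google-generativeai SDK takes it through genai.configure(api_key=...); keeping the key in an environment variable (the name below is a common convention, not a requirement) keeps it out of your commands:
export GOOGLE_API_KEY="paste-your-key-here"
python -c "import os, google.generativeai as genai; genai.configure(api_key=os.environ['GOOGLE_API_KEY']); print('gemini sdk configured')"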
What About Grok and Meta?
As of today, the common situation is:
- Grok: often accessed via web app or API, not a mainstream official “agent CLI” people run locally in a folder.
- Meta: commonly provides model weights (like Llama) and tooling around running models, but not a single universal “chat CLI” for this workflow.
In other words: you can still use them, but it is usually through APIs, local model runners, or third-party tools, not a simple “official CLI agent” you install and run like Codex.
Step 6: Run Your Agent in the Obsidian Folder
This is where the rubber meets the road.
- Save the final prompt you generated in chat into a file named system.md.
- Make sure your terminal is in your project folder:
pwd
You should see something like:
.../ObsidianVault/Ollama
- Run your CLI agent using the prompt file.
Example with Codex:
codex "$(cat system.md)"
- Watch what it creates.
When it writes .md files into that folder, Obsidian will show them almost instantly.
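You can also check the result from the terminal side. The file names below are invented; what you actually get depends entirely on your prompt:
ls
# system.md  01-introduction.md  02-setup.md  03-workflow.md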
Step 7: Review, Fix, Repeat
You are not trying to get perfection in one run.
You are trying to get:
- a solid draft
- correct structure
- consistent formatting
- minimal confusion for beginners
When something looks wrong, you have two choices:
- Fix it manually (fast for small edits)
- Go back to chat AI and adjust the prompt (better for systemic issues)
A healthy workflow looks like a loop (a command-level sketch follows this list):
- run agent
- review output
- improve prompt
- run agent again
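In terminal terms the loop is short. Reusing the example command from Step 5 (swap in your own CLI and flags):
codex "$(cat system.md)"   # run the agent
# review the new .md files in Obsidian
# edit system.md, or regenerate it in chat, then run the agent again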
The Pattern (In Simple English)
Here’s the pattern you are following:
- You plan the work without AI.
- You use chat AI to pressure-test the plan and produce a clean prompt.
- You use a CLI agent to execute that prompt inside a real folder.
- You review results and iterate.
Why this works:
- planning reduces ambiguity
- chat AI is great for thinking and clarity
- CLI agents are great for structured output and speed
- review keeps you in control
Final Thought
You are not “prompting.”
You are building a pipeline:
- Idea → Plan → Prompt → Execution → Files → Review
Once you do that a few times, AI stops feeling random and starts feeling predictable.
And predictable is where the real power is.