Oct 07, 2025
8 min read

Interfaces Are the New Code — And Agents Are Blossoming in Conversation

A response to Harrison Chase's vision for agentic interfaces, exploring how conversational development tools are transforming agent creation, and why the future is conversation-first, not visual-first.

“Workflows give you more predictability at the expense of autonomy, while agents give you more autonomy at the expense of predictability.” — Harrison Chase, Not Another Workflow Builder (LangChain Blog)

Harrison Chase’s recent post strikes a resonant chord: rather than chasing “yet another visual workflow builder,” the frontier lies in better, more expressive interfaces to agentic logic, where code, prompts, and visual tools all cohabit.

While I agree with Harrison, I think there’s a larger point he missed — one that Andrej Karpathy has been emphasizing: English is the new programming language. What I see happening here in Austin goes beyond text-first development: many of us are speaking to our agentic development tools like Claude Code and Cursor, dictating our intent, and watching as these tools generate agent graphs, LangChain workflows, n8n automations, and more. The interface isn’t just text — it’s conversation, and increasingly, it’s voice.

Visual overlays are powerful, yes — but they don’t replace the generative potential of “conversation as interface.” Interfaces are becoming the new code, and visual tools are simply another layer atop that foundation.


From Visual Workflow Builders to Generative Interfaces

Harrison argues that visual workflow builders are hitting their limits: they don’t truly lower the entry barrier, and they collapse under complexity. (LangChain Blog)

I’d extend that: the visual canvas should not be king, but rather one face of a broader generative-development stack. What’s truly transformative is how we’re building the builders themselves.

Here’s the key insight: agentic development tools like Claude Code, Cursor, and Codex are becoming the front end for creating agent frameworks. Developers use these conversational interfaces — often speaking to them via voice transcription — to build LangChain applications, LangGraph agent graphs, n8n workflows, and open-source scaffolds. The interface layer has shifted from “drag nodes” to “describe intent,” and increasingly, to simply speaking your agent architecture into existence.

This creates a fascinating recursive loop: we use agents to build agents. The development workflow itself becomes agentic.

```mermaid
flowchart TB
    Dev["🗣️ Developer Intent<br/>Voice • Text • Conversation"]:::input
    Tool["🤖 Agentic Development<br/>Claude Code • Cursor • Codex"]:::tool
    Code["⚙️ Agent Frameworks<br/>LangChain • LangGraph • n8n"]:::code
    Agent["🔮 Working Agents<br/>Live Execution"]:::agent
    MD["📝 Markdown<br/>Human Readable"]:::doc
    Mermaid["📊 Mermaid<br/>Machine Parseable"]:::visual
    Final["✨ Living Documentation<br/>Human + Machine Understanding"]:::final

    Dev ==> Tool
    Tool ==> Code
    Code ==> Agent
    Tool -.-> MD
    Tool -.-> Mermaid
    Agent -.-> MD
    Agent -.-> Mermaid
    MD ==> Final
    Mermaid ==> Final
    Final -.->|"🔄 Iterate"| Dev

    classDef input fill:#3b82f6,stroke:#1e40af,stroke-width:2px,color:#fff
    classDef tool fill:#f97316,stroke:#c2410c,stroke-width:2px,color:#fff
    classDef code fill:#a855f7,stroke:#6b21a8,stroke-width:2px,color:#fff
    classDef agent fill:#10b981,stroke:#047857,stroke-width:2px,color:#fff
    classDef doc fill:#ec4899,stroke:#be185d,stroke-width:2px,color:#fff
    classDef visual fill:#eab308,stroke:#a16207,stroke-width:2px,color:#fff
    classDef final fill:#6b7280,stroke:#374151,stroke-width:2px,color:#fff
```
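What a conversational session actually emits is ordinary framework code. Below is a minimal sketch of the kind of LangGraph state machine a tool like Claude Code might generate from a spoken intent such as “summarize a meeting transcript, then draft follow-up emails”; the state fields, node names, and stub logic are hypothetical, and it assumes LangGraph’s `StateGraph` API.

```python
# A minimal sketch (not a verbatim tool output) of the kind of LangGraph
# state machine a conversational session might emit from a spoken intent
# like "summarize a meeting transcript, then draft follow-up emails."
# The state fields, node names, and stub logic are hypothetical.
from typing import TypedDict

from langgraph.graph import StateGraph, START, END


class MeetingState(TypedDict):
    transcript: str
    summary: str
    emails: list[str]


def summarize(state: MeetingState) -> dict:
    # A real agent would call an LLM here; a stub keeps the sketch runnable.
    return {"summary": state["transcript"][:200]}


def draft_emails(state: MeetingState) -> dict:
    return {"emails": [f"Follow-up based on: {state['summary']}"]}


builder = StateGraph(MeetingState)
builder.add_node("summarize", summarize)
builder.add_node("draft_emails", draft_emails)
builder.add_edge(START, "summarize")
builder.add_edge("summarize", "draft_emails")
builder.add_edge("draft_emails", END)

app = builder.compile()
print(app.invoke({"transcript": "…meeting notes…", "summary": "", "emails": []}))
```

The point is less this particular graph than the fact that it arrived through conversation and lives on as plain, diffable text.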

My thesis:

  • The interface layer (text + voice + prompt + tool embeddings + generative structure) is itself a development environment.
  • Visual tools should be composable with — not exclusive of — that interface layer.
  • As complexity grows, the textual “source of truth” remains indispensable — visual representations are projections, not the heart.
  • Agentic development tools (Claude Code, Cursor, Codex) orchestrate the creation of agent frameworks (LangChain, LangGraph, n8n), which in turn orchestrate AI capabilities.
  • Voice transcription is emerging as a natural modality for describing complex agent behaviors and workflows.

In other words: the new generation of “low code / no code” is really “smart code + generative interface + conversational development.”


Why Text-Based Agent Graphs Work — and Excel Locally in Austin

In the Austin community, I see teams and individuals achieving fast iteration, composability, and expressive power using text-first agent frameworks, chaining prompts and tools, weaving logic — and then allowing beautiful visualizations to bloom from those constructs.

Some key observations:

  1. Iterative prompt tinkering is natural in text. You can adjust, refine, test flow, branch logic, and tool calls directly in plain text — faster than dragging nodes and wiring edges.

  2. Composability and modularity. You can define sub-agents, subroutines, reusable prompt templates, and abstract pieces — just like code modules — all in text. Visual tools struggle to manage modular logic at scale (see the sketch after this list).

  3. Visualization as a reflection, not an input mode. Once you’ve built your agent graph textually, rendering it as a visual graph gives you clarity, insight, and presentation value — but it’s downstream, not upstream.

  4. Aesthetic, expressive visualizations. The visual outputs (agent graphs, execution traces, attention flows) are not just functional, they are beautiful. They give you intuitions you couldn’t see in pure code. I’ve seen local developers light up when viewing these graphs — they help comprehension, debugging, and even evangelism.
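On point 2, here is a hedged sketch of what text-level modularity can look like in practice, assuming LangChain’s `ChatPromptTemplate` API; the persona string and template wording are invented for illustration.

```python
# Sketch of text-level modularity, assuming LangChain's ChatPromptTemplate
# API; the persona string and template wording are invented for illustration.
from langchain_core.prompts import ChatPromptTemplate

# A reusable prompt fragment, defined once and shared across agents.
RESEARCH_PERSONA = "You are a careful research assistant. Cite your sources."

summarizer = ChatPromptTemplate.from_messages([
    ("system", RESEARCH_PERSONA),
    ("human", "Summarize the following notes:\n{notes}"),
])

critic = ChatPromptTemplate.from_messages([
    ("system", RESEARCH_PERSONA),
    ("human", "List the weaknesses in this summary:\n{summary}"),
])

# Editing RESEARCH_PERSONA updates every agent that uses it, a refactor
# that is one keystroke in text and tedious rewiring on a visual canvas.
print(summarizer.invoke({"notes": "Quarterly results discussion…"}))
```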


Marrying the Two Worlds: Visual + Generative

To be crystal clear: I’m not against visual tools. Rather, I see them as tactical companions to a generative core. Some concepts toward that synergy:

  • Bi-directional sync. Edits made visually should reflect back to the textual representation and vice versa, without friction.
  • Generative “smart nodes.” Instead of rigid boxes, nodes can carry prompt templates, suggest code, and generate branching logic on the fly.
  • Layered abstraction. Allow collapsing and expanding of agent subgraphs, with textual overrides where needed.
  • Machine-readable visualizations. Outputting structured documentation in Markdown and diagrams in Mermaid creates a powerful duality: both formats are natively readable by humans and parseable by machines. GUI tools like Claudia for Claude Code can render these representations, but the source remains textual and generative (a short sketch follows this list).
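To ground that last bullet: LangGraph can already project a compiled graph into Mermaid text, keeping the diagram strictly downstream of the textual source. A minimal sketch, assuming the `get_graph().draw_mermaid()` helper available in recent langgraph releases (the two-node graph is illustrative):

```python
# Sketch: the diagram as a downstream projection of the textual source.
# Assumes LangGraph's get_graph().draw_mermaid() helper (present in recent
# releases); the two-node graph here is illustrative.
from typing import TypedDict

from langgraph.graph import StateGraph, START, END


class State(TypedDict):
    text: str


builder = StateGraph(State)
builder.add_node("plan", lambda s: {"text": s["text"]})
builder.add_node("act", lambda s: {"text": s["text"]})
builder.add_edge(START, "plan")
builder.add_edge("plan", "act")
builder.add_edge("act", END)

# The Mermaid text is generated from the source of truth, never hand-drawn,
# so humans can render it and agents can parse it right back.
print(builder.compile().get_graph().draw_mermaid())
```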

In this vision, visual is the “show” side, generative text is the “source / authoring” side, and both are first-class. The interface becomes a living document that both humans and agents can read, write, and reason about.


A Local Anecdote: Austin’s Agent Builders in Action

Let me share a sketch of what I see weekly at meetups like the AI Middleware Users Group (AIMUG):

  • A group of AI enthusiasts meets up in Austin, each member building agents for niche tasks (e.g., meeting summarization, “smart calendar assistants,” research agents).
  • They open Claude Code, Cursor, or similar agentic development tools — many speaking directly to their IDE via voice transcription, describing the agent behavior they want to create.
  • The agentic tool generates LangChain applications, LangGraph state machines, or n8n workflows through conversational iteration. They refine logic, chain tools, iterate in minutes.
  • Once a few runs are solid, they export or auto-generate a visual graph. That graph is then used for debugging, for explaining to non-technical stakeholders, or for interaction logging.
  • They rarely start by dragging nodes. They prefer conversational fluency — text or voice — to describe their intent.

The result? Rapid prototyping, elegant agent systems, and visual clarity — all while maintaining fine-grained control. The development workflow itself has become agentic.


In Conclusion: Embrace the Interface, Respect the Conversation

I stand in agreement with Harrison: we don’t need yet another workflow builder. What we need is an evolution of the interface layer itself — one that embraces conversation, voice, and generative text as first-class primitives for building agents.

The future I see emerging:

  1. Invest in conversational development — text, voice, and prompt-driven interfaces where English (or any language) becomes the primary programming interface.
  2. Build visual tools that complement rather than constrain — Markdown documentation, Mermaid diagrams, and GUI renderers that serve both humans and machines.
  3. Embrace the recursive loop — use agentic development tools (Claude Code, Cursor, Codex) to build agent frameworks (LangChain, LangGraph, n8n), creating a self-improving ecosystem.
  4. Celebrate machine-readable visualizations — formats that are beautiful for humans but also parseable by agents, creating living documentation.
  5. Lean into the Austin experience: conversational agent design, iterative refinement, then visual exposition.

The future of agent development isn’t visual-first or text-first — it’s conversation-first, where speaking your intent can materialize complex agent architectures, and the interface layer becomes a collaborative space between human creativity and machine capability.


Resources & References

  • Harrison Chase’s Original Post: Not Another Workflow Builder - The LangChain blog post that sparked this discussion
  • AI Middleware Users Group (AIMUG): https://aimug.org - Austin’s community of AI builders and agent developers
