The deepagents Python package provides a streamlined way to create and interact with “Deep Agents” designed for complex, long-horizon tasks. Built on LangGraph, it offers flexibility, extensibility, and powerful integration options. Let’s explore how to get started and what the future holds for this exciting project.
Installation
Getting started with deepagents is straightforward:
Requirements
- Python 3.11 or higher
- Core dependencies automatically installed:
  - langgraph: the agent runtime framework
  - langchain: base LangChain functionality
  - langchain-anthropic: Claude (Anthropic) model integration
Core Package
pip install deepagents
Additional Dependencies
For specific features, you may need additional packages:
# For web search examples
pip install tavily-python
# For MCP (Model Context Protocol) integration
pip install langchain-mcp-adapters
Basic Usage: Creating and Invoking a Deep Agent
Creating and invoking a Deep Agent involves defining its capabilities and then providing it with a task. Here’s a step-by-step guide:
1. Define Custom Tools
First, provide a list of functions or LangChain @tool objects that your agent should have access to:
def internet_search(query: str) -> str:
    """Search the internet for information about a topic."""
    # Implementation using your preferred search API
    return search_results

def calculate(expression: str) -> float:
    """Perform mathematical calculations."""
    # Safe evaluation of mathematical expressions
    return result

tools = [internet_search, calculate]
Even without custom tools, the built-in planning and file system tools are always available.
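For instance, here is a minimal sketch of an agent created with no custom tools at all. The built-in tool names in the comments (write_todos, ls, read_file, write_file, edit_file) are taken from the deepagents README; verify them against your installed version.
from deepagents import create_deep_agent

# No custom tools: the agent still gets the built-in planning tool (write_todos)
# and the virtual file system tools (ls, read_file, write_file, edit_file).
agent = create_deep_agent(
    tools=[],
    instructions="Plan your work with the todo tool and save drafts to files.",
)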
2. Provide Instructions
Pass custom instructions as a string to steer the agent towards a specific task:
instructions = """You are an expert researcher specializing in
renewable energy technologies. Your goal is to provide accurate,
well-researched information with proper citations."""
These custom instructions combine with the built-in system prompt to guide the agent’s behavior.
3. Create the Agent
Call create_deep_agent with your defined tools and instructions:
from deepagents import create_deep_agent
agent = create_deep_agent(
    tools=tools,
    instructions=instructions,
    # Optional parameters
    subagents=[...],     # Custom sub-agents
    model=None,          # Uses Claude Sonnet by default
    state_schema=None    # Custom state schema if needed
)
By default, deepagents uses claude-sonnet-4-20250514 with a high max_tokens value (64,000), optimized for tasks that require extensive writing, such as research reports.
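If you want to pass custom sub-agents, each one is described by a small dict. The sketch below follows the name/description/prompt format shown in the deepagents README; the commented-out tools key is an optional restriction to verify against your version:
research_subagent = {
    "name": "research-agent",
    "description": "Used for in-depth research questions.",
    "prompt": "You are a meticulous researcher. Always cite your sources.",
    # "tools": [internet_search],  # optionally restrict which tools it may use
}

agent = create_deep_agent(
    tools=tools,
    instructions=instructions,
    subagents=[research_subagent],
)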
4. Invoke the Agent
Once created, the agent can be invoked like any other LangGraph agent:
result = agent.invoke({
    "messages": [{"role": "user", "content": "Research the latest breakthroughs in solar panel efficiency"}]
})
# Access the results
final_message = result["messages"][-1]
generated_files = result.get("files", {})
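The same files key can also be used to seed the agent's virtual file system on input; the dict-of-strings format below mirrors the output shown above and the usage in the deepagents README:
# Pass existing files in, let the agent read and edit them, then collect the results
result = agent.invoke({
    "messages": [{"role": "user", "content": "Summarize notes.txt into summary.txt"}],
    "files": {"notes.txt": "Notes on recent solar panel efficiency results..."},
})

updated_files = result.get("files", {})
print(updated_files.get("summary.txt", "(no summary written)"))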
Complete Example
Here’s a complete example bringing it all together:
from deepagents import create_deep_agent
# Define a research tool
def web_search(query: str) -> str:
    """Search the web for information."""
    # Your search implementation
    return f"Search results for: {query}"

# Create the agent
agent = create_deep_agent(
    tools=[web_search],
    instructions="You are a helpful research assistant."
)

# Invoke the agent
result = agent.invoke({
    "messages": [{"role": "user", "content": "What is LangGraph?"}]
})
# Print the response
print(result["messages"][-1].content)
Async Example
For async operation, which is especially useful for I/O-bound tasks:
import asyncio
from deepagents import create_deep_agent
async def async_web_search(query: str) -> str:
    """Async search implementation."""
    # Your async search logic
    await asyncio.sleep(0.1)  # Simulate API call
    return f"Async search results for: {query}"

async def main():
    # Create agent with async tool
    agent = create_deep_agent(
        tools=[async_web_search],
        instructions="You are an async research assistant."
    )

    # Async invocation
    result = await agent.ainvoke({
        "messages": [{"role": "user", "content": "Explain async programming"}]
    })

    print(result["messages"][-1].content)
# Run the async example
asyncio.run(main())
More complex usage scenarios are demonstrated in the project’s examples, such as examples/research/research_agent.py.
LangGraph Interaction and Advanced Features
Deep Agents are LangGraph graphs, which means they inherit all of LangGraph’s powerful capabilities:
ReAct Agent Loop
The core algorithm relies on LangGraph’s create_react_agent prebuilt, implementing a ReAct (“reason and act”) loop, sketched after the list below, in which the LLM:
- Decides whether to stop or take an action
- Executes the action and receives feedback
- Continues the loop with updated context
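Schematically, the loop looks like the sketch below. This is illustrative only; the real loop is implemented inside LangGraph’s prebuilt agent, not code you write yourself:
def react_loop(model, tools_by_name, messages):
    while True:
        response = model.invoke(messages)    # the LLM reasons over the conversation so far
        messages.append(response)
        if not response.tool_calls:          # no tool call requested: the agent stops
            return messages
        for call in response.tool_calls:     # otherwise, run each requested tool
            observation = tools_by_name[call["name"]].invoke(call["args"])
            messages.append({                # feed the result back for the next iteration
                "role": "tool",
                "content": str(observation),
                "tool_call_id": call["id"],
            })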
Standard LangGraph Features
All standard LangGraph features work seamlessly with Deep Agents:
Streaming
Get results incrementally as the agent works:
# Synchronous streaming
for chunk in agent.stream({"messages": [{"role": "user", "content": "..."}]}):
    print(chunk)

# Async streaming
async for chunk in agent.astream(
    {"messages": [{"role": "user", "content": "Research quantum computing"}]},
    stream_mode="values"
):
    if "messages" in chunk:
        chunk["messages"][-1].pretty_print()
Human-in-the-Loop
Allow human intervention in the agent’s process:
# Give the run a thread_id so its checkpointed state can be reviewed and resumed (requires a checkpointer)
config = {"configurable": {"thread_id": "research-1"}}
result = agent.invoke(input_data, config=config)
Memory
Maintain conversation history and state over time using LangGraph’s persistence layer.
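For example, here is a minimal sketch using LangGraph’s in-memory checkpointer. Note the assumption: it presumes create_deep_agent forwards a checkpointer argument to the compiled graph; check the current deepagents signature before relying on it.
from langgraph.checkpoint.memory import MemorySaver
from deepagents import create_deep_agent

# Assumption: create_deep_agent accepts a checkpointer and attaches it to the
# underlying LangGraph graph; verify against your installed version.
agent = create_deep_agent(
    tools=tools,
    instructions=instructions,
    checkpointer=MemorySaver(),
)

# Reusing the same thread_id lets later turns see earlier state
config = {"configurable": {"thread_id": "research-1"}}
agent.invoke({"messages": [{"role": "user", "content": "Track papers on perovskite cells."}]}, config=config)
agent.invoke({"messages": [{"role": "user", "content": "What have we collected so far?"}]}, config=config)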
LangGraph Studio Integration
Visualize and debug agent execution through LangGraph Studio’s interface.
Model Context Protocol (MCP) Integration
The deepagents library supports MCP (Model Context Protocol) integration through the langchain-mcp-adapters package, expanding agent capabilities:
import asyncio
from deepagents import create_deep_agent
from langchain_mcp_adapters.client import MultiServerMCPClient

async def main():
    # Connect to one or more MCP servers (names, commands, and URLs below are placeholders).
    # The API shown follows recent langchain-mcp-adapters releases; older versions
    # used an async context manager instead.
    client = MultiServerMCPClient({
        "database": {"command": "python", "args": ["database_server.py"], "transport": "stdio"},
        "analytics": {"url": "http://localhost:8000/mcp", "transport": "streamable_http"},
    })
    mcp_tools = await client.get_tools()

    # Combine with local tools ("tools" is the list defined earlier)
    all_tools = tools + mcp_tools

    # Create agent with expanded capabilities
    agent = create_deep_agent(
        tools=all_tools,
        instructions="You can now access multiple external systems..."
    )

asyncio.run(main())
This integration allows agents to utilize tools from multiple servers or sources, greatly expanding their capabilities beyond locally defined tools.
Roadmap: Future Development
The deepagents project has an exciting roadmap for future enhancements, as outlined in the project documentation:
Customizing the Full System Prompt
While parts of the system prompt are currently customizable through the instructions
parameter, future releases will give users complete control over the entire system prompt, allowing for:
- Domain-specific agent personalities
- Custom workflow patterns
- Specialized reasoning approaches
Code Cleanliness
Improving internal code quality through:
- Comprehensive type hinting
- Detailed docstrings
- Consistent formatting
- Better error handling and logging
More Robust Virtual File System
Enhancing the current virtual file system to support:
- Sub-directories and hierarchical organization
- Advanced conflict resolution for parallel file edits
- File metadata (creation time, modification history)
- Larger file handling capabilities
Deep Coding Agent Example
Creating a dedicated example showcasing:
- Complex code generation tasks
- Multi-file project management
- Test generation and execution
- Documentation creation
Benchmarking
Evaluating performance through:
- Standardized task benchmarks
- Comparison with other agent frameworks
- Performance optimization opportunities
- Resource usage analysis
Human-in-the-Loop Support for Tools
Adding specific integration for:
- Human review of proposed actions
- Approval workflows for sensitive operations
- Modification of agent-proposed solutions
- Interactive refinement of results
Best Practices and Tips
Based on community experience and the examples provided:
- Start Simple: Begin with basic tools and gradually add complexity
- Use Sub-Agents: Delegate specialized tasks to maintain focus
- Leverage Planning: Let the agent break down complex tasks
- Monitor Token Usage: Deep agents can generate extensive output
- Iterate on Instructions: Refine your agent’s instructions based on results
Community and Contribution
The deepagents project welcomes contributions:
- GitHub: hwchase17/deepagents
- Issues: Report bugs or suggest features
- Pull Requests: Contribute code improvements
- Examples: Share your use cases and implementations
As Harrison Chase discusses in his presentation, the project aims to democratize access to sophisticated agent capabilities that were previously only available in closed systems.
Conclusion
The deepagents library represents a significant step forward in making sophisticated AI agents accessible to developers. By combining:
- Advanced planning capabilities
- Virtual file systems for persistence
- Sub-agent delegation for complex tasks
- Extensive customization options
- Seamless LangGraph integration
deepagents enables the creation of agents that can tackle real-world, complex problems far beyond simple chatbot interactions.
Whether you’re building research assistants, code generators, data analysts, or creative tools, deepagents provides the foundation for creating truly “deep” AI agents.
This concludes our 4-part series on Deep Agents. We hope this series has provided you with a comprehensive understanding of this powerful library and inspired you to build your own deep agents.
Get Started Today:
- Deep Agents GitHub Repository
- Harrison Chase’s Deep Agents Presentation
- Deep Wiki - Deep Agents Documentation
Have you built something interesting with deepagents? Share your projects and experiences with the community!