Hello, Austin Tech Community!
I walked into the Austin AI & Robotics meetup at HICAM on October 30th hoping for real demos and honest technical discussions. What I got was watching someone control an industrial warehouse robot named “Duck 15” using natural language through Claude Desktop, over the network, live. No pre-recorded videos. No smoke and mirrors. Just an actual robot, in an actual warehouse, responding to prompts like “tell me what topics are available” and running complete diagnostic sequences.
10/10 would attend again.
HICAM: Austin’s Manufacturing Innovation Hub
Before we dive into the robotics magic, I have to give props to HICAM (Hybrid Innovation for Collaborative Advanced Manufacturing). This nonprofit is doing important work accelerating advanced manufacturing adoption through three pillars: training (including K-12 partnerships with Austin STEM Center), ecosystem building (events like this one), and community development.
The venue itself impressed me - professional manufacturing equipment, meeting spaces, and a genuinely welcoming atmosphere. They even provided refreshments, which in the past had come out of the organizer’s own pocket. Much respect to the companies stepping up to support community events properly.
The Main Event: Watching Claude Debug a Robot
The evening’s highlight was a presentation and live demonstration of ros-mcp-server, an open-source project that bridges large language models (Claude, GPT, Gemini) with robots running ROS (Robot Operating System).
The ros-mcp-server architecture: connecting LLMs to robots via MCP and ROS
Here’s what caught my attention right away: bidirectional communication with zero code changes to the robot. You just add a rosbridge node that exposes a WebSocket endpoint. That’s it. Your existing robot suddenly speaks LLM. The server translates natural language commands into ROS/ROS2 operations, gives you full visibility into robot state (topics, services, parameters, sensors), supports custom message types, and works across Linux, Windows, and macOS.
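To make “zero code changes” concrete, here’s a minimal sketch of what that WebSocket channel looks like from the outside, using the roslibpy client library. This is my own illustration rather than the server’s internals, and the hostname and topic names are made up - but the shape is the point: subscribe and publish in both directions over one connection, with nothing on the robot beyond rosbridge.

```python
# pip install roslibpy
# Assumes rosbridge is already running robot-side, e.g. on ROS 2:
#   ros2 launch rosbridge_server rosbridge_websocket_launch.xml
import time

import roslibpy

# One WebSocket connection carries all traffic in both directions.
client = roslibpy.Ros(host='duck15.local', port=9090)  # hypothetical hostname
client.run()

# Inbound: subscribe to a sensor topic and print readings as they arrive.
pressure = roslibpy.Topic(client, '/gripper/pressure', 'std_msgs/Float32')
pressure.subscribe(lambda msg: print('pressure:', msg['data']))

# Outbound: publish a command on another topic.
enable = roslibpy.Topic(client, '/gripper/enable', 'std_msgs/Bool')
enable.publish(roslibpy.Message({'data': True}))

time.sleep(2)  # let a few readings arrive before hanging up
client.terminate()
```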
Texas Robotics Innovation
What makes this project particularly exciting for Austin is its connection to our local robotics ecosystem. The ros-mcp-server emerged from collaboration between multiple robotics labs, including partnerships with researchers here in Texas. This is exactly the kind of open-source innovation that’s putting Austin on the map as a robotics hub - not just for companies, but for foundational infrastructure that benefits the entire field.
When Demos Actually Work
You know how demos usually go at meetups? Either they’re pre-recorded or they fail spectacularly. This one was live, over the network, controlling a real industrial robot in an Austin warehouse.
The robot, affectionately named “Duck 15” (because it lives on the loading dock), responded to natural language queries through Claude Desktop. First up was system exploration: “Tell me what topics are available.” Claude used the MCP server to query the robot, got back around 300 topics, and summarized them intelligently.
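Under the hood, that topic query is an introspection call against rosbridge’s rosapi node - the kind of thing you can reproduce in a few lines. Here’s a sketch using roslibpy (the MCP server’s actual internals may differ):

```python
import roslibpy

client = roslibpy.Ros(host='duck15.local', port=9090)  # hypothetical hostname
client.run()

# rosapi exposes introspection as ordinary ROS services; /rosapi/topics
# returns every topic name alongside its message type.
topics_srv = roslibpy.Service(client, '/rosapi/topics', 'rosapi/Topics')
result = topics_srv.call(roslibpy.ServiceRequest())  # blocks until the reply

for name, msg_type in zip(result['topics'], result['types']):
    print(f'{name}  ({msg_type})')

client.terminate()
```

The LLM’s contribution is the step after this: turning a raw dump of ~300 topic names into a summary a human actually wants to read.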
But here’s where it got really impressive. The presenter had Claude run a complete diagnostic sequence on Duck 15’s dual-zone vacuum gripper system. I watched Claude connect to the gripper, enable both vacuum zones, check pressure readings against expected thresholds, and when it found low pressure, isolate each zone independently. Then it commanded the gripper through four different configurations, grabbed a camera image for each one, and compared them to reference photos stored in the “skills” document.
The conclusion? Zone One had a leak.
This wasn’t a simple API call. This was the LLM reasoning through a multi-step diagnostic procedure, referencing documentation, making decisions based on sensor readings, and providing actionable conclusions. The kind of thing you’d normally need to write custom code for.
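For contrast, here’s roughly what that custom code might look like if you wrote the diagnostic by hand. Every topic name, helper, and threshold below is invented (Duck 15’s real interface wasn’t shared), and I’ve left out the camera-comparison step - the point is how much of this procedure Claude instead absorbed from a manual:

```python
# Hypothetical hand-written version of the dual-zone vacuum diagnostic.
# All names and thresholds are invented for illustration.
MIN_VACUUM_KPA = -60.0  # pressure a healthy zone should reach (made-up value)

def check_zone(robot, zone: int) -> bool:
    """Enable one vacuum zone in isolation and verify it pulls to pressure."""
    robot.publish(f'/gripper/zone{zone}/enable', {'data': True})
    reading = robot.read(f'/gripper/zone{zone}/pressure', timeout=2.0)
    robot.publish(f'/gripper/zone{zone}/enable', {'data': False})
    return reading <= MIN_VACUUM_KPA  # vacuum: more negative is stronger

def diagnose(robot) -> str:
    # Test each zone independently to localize a fault.
    failures = [z for z in (1, 2) if not check_zone(robot, z)]
    return 'pass' if not failures else f'leak suspected in zone(s) {failures}'
```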
Skills Documents: The Secret Sauce
The presenter mentioned they’re using Anthropic’s “Skills” feature - essentially markdown files with instructions that Claude references. They loaded the robot’s user manual into the project as a skill document, giving Claude the context to understand expected pressure ranges for suction cup testing, know which camera views correspond to which gripper configurations, and recognize what constitutes a “pass” versus “fail” state.
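I didn’t get a look at the actual skill file, but based on the description, the relevant section might look something like this (contents invented for illustration):

```markdown
# Skill: Duck 15 gripper diagnostics

## Suction cup test
1. Enable both vacuum zones, then read each zone's pressure topic.
2. A healthy zone reaches at least -60 kPa within 2 seconds (pass);
   anything weaker is a fail - isolate each zone and retest.
3. Command the gripper through configurations A-D, capture a camera
   image in each, and compare against the reference photos below.
```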
This hybrid approach - AI for reasoning and perception, deterministic code for safety and motion control - feels like the right architecture for production robotics. You get the adaptability of AI without gambling on safety-critical operations.
Multi-Robot Orchestration That Just Works
A video demonstration showed two robots in NVIDIA Isaac Sim working together. The prompt was simple: “What can you see?” The first robot reported a box blocking its view. Then: “Move the other robot to clear the box and tell me what you see now.”
Live demo: Claude controlling a robot in NVIDIA Isaac Sim through natural language
Both robots coordinated through the same MCP server. One moved the obstruction, the other confirmed the view was clear. No custom orchestration code. No elaborate planning system. Just natural language describing the desired outcome and letting the AI figure out how to coordinate.
The Real-World Application: Contoro Robotics
The presenter works at Contoro Robotics, an Austin company automating trailer and container unloading. And let me tell you - after hearing about this problem, I have newfound respect for warehouse workers.
Picture a 53-foot shipping container, floor-loaded top to bottom with mixed-SKU cargo boxes. It arrives in Texas summer heat. Someone’s job is to climb inside that metal box and manually unload every single box.
The presenter called it “the worst job in the warehouse,” and from the audience reactions, nobody disagreed. Even better: when they told people what they were automating, the common response was “I did that job when I was younger - nobody should ever have to do it.”
How They’re Solving It
Contoro’s approach combines several layers of technology. On the machine learning side, computer vision identifies boxes, estimates poses, and plans grasps. It has to handle varying container conditions - different box sizes, shifting loads, changing packaging - and they’re achieving 97-98% accuracy in the first week of a deployment, climbing to 99.5% with further refinement.
The hardware is their proprietary Duo-Grasp system - a two-point suction gripper that handles boxes 8-30 inches, up to 80 pounds. The articulated design grips from two sides for stability, which turns out to be critical when you’re dealing with inconsistent packaging.
They’re also doing human-in-the-loop training. The system starts with teleoperation training, the AI progressively learns to work autonomously, human operators handle exceptions, and eventually a single operator can oversee 10+ robots. But underneath all the AI, they’re still using traditional robotics algorithms for the heavy lifting - inverse kinematics for motion planning, proven navigation and manipulation algorithms, and deterministic safety systems.
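That split is worth making concrete. Inverse kinematics, for example, stays closed-form math rather than anything learned - here’s the textbook solution for a planar two-link arm (a generic illustration, not Contoro’s code):

```python
import math

def two_link_ik(x: float, y: float, l1: float, l2: float):
    """Closed-form IK for a planar 2-link arm: joint angles to reach (x, y).

    Deterministic and exhaustively testable - exactly the kind of routine
    you keep out of the LLM's hands. Raises ValueError if the target is
    unreachable. Returns one of the two solution branches, in radians.
    """
    d2 = x * x + y * y
    # Law of cosines gives the elbow angle.
    c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError('target out of reach')
    theta2 = math.acos(c2)                      # elbow
    theta1 = math.atan2(y, x) - math.atan2(     # shoulder
        l2 * math.sin(theta2), l1 + l2 * math.cos(theta2))
    return theta1, theta2

print(two_link_ik(1.0, 1.0, 1.0, 1.0))  # reach (1, 1) with unit-length links
```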
The performance metrics are impressive: 300-350 cases per hour (often exceeding this with smaller boxes), 99%+ success rates in production, 5-minute setup time, and it works with standard warehouse power (480V/30A/3-phase).
The key insight here is to use AI where it excels - perception, adaptation, reasoning - while preserving deterministic algorithms where you need guaranteed behavior for motion control and safety systems.
Contoro’s robot in action: automating trailer unloading with computer vision and the Duo-Grasp system
The Safety Question
Someone from the audience asked the critical question: “What about safety? LLMs hallucinate.”
The answer was thoughtful and practical. Safety lives in the deterministic tools, not in the LLM. The ros-mcp-server implements whitelisting and blacklisting of ROS topics and services. If the LLM tries to command something dangerous, the deterministic code says “no.” The dangerous robot arm topics? Blacklisted. The language model can query sensor data all day, but critical control surfaces require explicit permission. All safety checks happen in tested, deterministic code.
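The pattern is easy to picture in code. Here’s my own sketch of the idea - not the actual ros-mcp-server implementation, and with invented topic names - showing how the gate runs in plain Python before anything reaches the robot:

```python
# Sketch of a deterministic permission gate in front of LLM tool calls.
# Topic lists are invented; a real deployment would load them from config.
READ_ONLY_ALLOWED = {'/gripper/pressure', '/camera/image_raw', '/joint_states'}
WRITE_BLOCKED_PREFIXES = ('/arm/', '/base/velocity')

def subscribe(topic):             # placeholder for a real rosbridge subscribe
    return f'subscribed to {topic}'

def publish(topic, payload):      # placeholder for a real rosbridge publish
    return f'published {payload} to {topic}'

def handle_tool_call(action: str, topic: str, payload: dict = None):
    """Every tool call the LLM makes passes through this deterministic code."""
    if action == 'subscribe' and topic in READ_ONLY_ALLOWED:
        return subscribe(topic)                     # sensors: query all day
    if action == 'publish' and not topic.startswith(WRITE_BLOCKED_PREFIXES):
        return publish(topic, payload)              # permitted control surface
    raise PermissionError(f'{action} on {topic} is not permitted')
```

A hallucinated command to a blacklisted arm topic dies here as a `PermissionError` the model has to explain, instead of a motion the robot has to survive.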
This feels like the right architecture: let AI handle perception and high-level reasoning, but keep safety rails in predictable, testable code.
Community Energy
What made this meetup special wasn’t just the tech - it was the Austin robotics ecosystem showing up. Job opportunities were flying around: Paradigm Robotics is working on firefighting robots, Far Monaco is doing last-mile delivery, and Contoro has an open perception engineer role (talk to Dao if interested).
The community is already buzzing with ideas for weekly or bi-weekly robotics build sessions, workshops across the robotics stack, a Discord server for ongoing collaboration, and potentially adopting an open-source project as a group.
The vibe reminded me of early Austin LangChain meetups - people genuinely excited to learn together and build cool stuff.
Why This Matters
The ros-mcp-server represents something bigger than just “LLMs controlling robots.” It’s about democratizing access to sophisticated robot programming.
Before MCP and projects like this, connecting AI to robots meant custom integrations for each robot platform, deep ROS expertise, significant development overhead, and limited natural language interfaces. Now you add one node (rosbridge), point an MCP server at it, and start commanding robots through natural language with full bidirectional communication.
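In practice, “point an MCP server at it” is a few lines of client configuration. Here’s the general shape of a Claude Desktop entry - the command and arguments depend entirely on how you installed ros-mcp-server, so treat these as placeholders and follow the repo’s docs:

```json
{
  "mcpServers": {
    "ros-mcp-server": {
      "command": "uv",
      "args": ["run", "--directory", "/path/to/ros-mcp-server", "server.py"]
    }
  }
}
```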
This lowers the barrier for roboticists who want natural language debugging, AI developers exploring embodied AI, researchers studying human-robot interaction, and companies seeking more intuitive robot interfaces.
Getting Started
Interested in experimenting? The project provides comprehensive documentation at github.com/robotmcp/ros-mcp-server, no-robot testing using turtlesim, Docker containers for easy setup, and examples for various robot configurations.
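If you want the flavor without hardware, turtlesim plus rosbridge really is the whole test rig. A minimal sketch, assuming ROS 2 (package and message-type names differ slightly on ROS 1 and across distros):

```python
# Terminal 1: ros2 run turtlesim turtlesim_node
# Terminal 2: ros2 launch rosbridge_server rosbridge_websocket_launch.xml
# Then: pip install roslibpy, and run this script.
import time

import roslibpy

client = roslibpy.Ros(host='localhost', port=9090)
client.run()

# turtlesim listens for velocity commands on /turtle1/cmd_vel.
cmd_vel = roslibpy.Topic(client, '/turtle1/cmd_vel', 'geometry_msgs/Twist')

for _ in range(5):  # drive the turtle in a gentle arc
    cmd_vel.publish(roslibpy.Message({
        'linear': {'x': 1.0, 'y': 0.0, 'z': 0.0},
        'angular': {'x': 0.0, 'y': 0.0, 'z': 0.5},
    }))
    time.sleep(1)

client.terminate()
```

Once that works, pointing the MCP server at the same rosbridge port gives Claude the identical view of the simulated turtle.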
Contributing to the Future of Robotics
Here’s where this gets really exciting for Austin: the ros-mcp-server is actively looking for contributors, and this is a genuine opportunity to shape foundational robotics infrastructure.
For developers, they need bug reports from real-world testing (especially important for production use), improvements to test coverage, tutorials and documentation for different robot platforms, new features (Action support and permission controls are on the roadmap), and examples for additional robot configurations.
Roboticists can help by trying the server with your robot platform and reporting what works and what doesn’t, contributing robot-specific setup guides, and sharing integration patterns for different ROS packages.
Researchers have opportunities for academic collaboration, contributing benchmark datasets for LLM-robot interaction, and building safety and reliability testing frameworks.
This isn’t just another open-source project - it’s infrastructure. Like the early days of Linux or the web protocols, we’re watching the foundation being built for how AI will interact with physical systems. Being part of that early contribution community means your name in the commit history of a project that could become standard infrastructure, direct collaboration with Texas robotics labs and industry leaders, real impact on how millions of future robots will be programmed, and connection to Austin’s growing robotics ecosystem.
The project maintainers are actively responsive (I’ve seen PRs merged same-day), and they’re explicitly welcoming first-time contributors. If you’ve ever wanted to get into robotics but felt like the barrier was too high - this is your entry point.
Check out the GitHub repository, look at the issues tagged “good first issue,” and jump in. The Austin robotics community is building something important here, and there’s room for everyone at the table.
Final Thoughts
This meetup exemplified what makes Austin’s tech community special: accessible venues like HICAM, companies supporting community events, passionate technologists sharing openly, and genuine enthusiasm for building the future.
The intersection of LLMs and robotics is moving from research novelty to production deployment. Projects like ros-mcp-server are providing the infrastructure that makes this transition accessible to everyone, not just well-funded research labs.
If you’re in Austin and interested in robotics, AI, or manufacturing technology, I highly recommend future Austin AI & Robotics meetups. The combination of live demos, practical applications, and community energy makes for an evening well spent.
Until next time, keep building!
Colin McNamara
Special thanks to HICAM for hosting, Florent for organizing, and the presenters for sharing their work openly. And seriously, if you’re interested in that perception engineer role at Contoro, go talk to Dao.