
Build AI Agents with Smithery.ai, MCP, and uAgents connected to Agentverse and discoverable by ASI:One

A comprehensive guide to building AI agents that connect to multiple MCP servers using Smithery.ai and the Fetch.ai tech stack.

As AI agents become increasingly capable, the ability to connect them with real-world tools and data sources is key to building intelligent, adaptable systems. This article explores how to integrate Smithery.ai’s MCP server network with Fetch.ai Agents and deploy them on the Agentverse Marketplace.

Understanding the Components

🌍 Fetch.ai Ecosystem

Fetch.ai offers a decentralized platform for creating, deploying, and connecting autonomous agents. At the core of this ecosystem is uAgents, a lightweight framework that enables developers to build agents equipped with built-in identity, messaging, and wallet capabilities. These agents are published to Agentverse — an open marketplace where they can be discovered and interacted with by other agents. ASI:One, the agentic LLM layer, intelligently routes user queries to the most relevant agents on Agentverse, enabling seamless, real-time collaboration between users and AI Agents.

🔄 What is MCP?

The Model Context Protocol (MCP) is an open specification that standardizes the way LLMs interact with external tools and services. It abstracts away custom API logic and replaces it with a unified schema-based interface, making it incredibly easy to plug-and-play new capabilities.
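For a concrete picture, here is a minimal sketch of what an MCP tool can look like on the server side, using the FastMCP helper from the MCP Python SDK. The server name and the homa_ir tool below are illustrative, not the exact definitions used by any Smithery-hosted server.

from mcp.server.fastmcp import FastMCP

# Illustrative MCP server exposing a single schema-described tool.
mcp_server = FastMCP("medcalc-demo")

@mcp_server.tool()
def homa_ir(fasting_glucose_mg_dl: float, fasting_insulin_uU_ml: float) -> float:
    """Estimate insulin resistance (HOMA-IR) from fasting glucose and insulin."""
    return (fasting_glucose_mg_dl * fasting_insulin_uU_ml) / 405

if __name__ == "__main__":
    mcp_server.run()

The function signature is all an MCP client needs: the protocol turns it into a JSON schema that any connected LLM or agent can inspect and call.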

🛠 Smithery.ai

Smithery.ai is a platform purpose-built for developers creating agentic services. It hosts a rich catalog of MCP servers, which can be accessed using a shared API key and configuration model.

🤔 Why Connect Fetch.ai Agents to MCP Servers?

Agents often need data that isn’t hardcoded or static. For example:

  • An agent helping a researcher may need to pull the latest clinical trial data.
  • A health assistant agent might want to calculate insulin resistance (HOMA-IR).
  • A conversational AI may want to provide web search results in real time.

Rather than hardwiring logic into each agent, MCP servers allow agents to call external tools dynamically, based on schema-based inputs. Even better — once one agent connects to these tools, other agents can simply communicate with that agent.

This creates a network of agents that can share capabilities with one another, making it easier to collaborate across the ecosystem.

System Architecture

The Medical Research Agent is a Fetch.ai Agent designed to act as an intelligent interface for querying multiple data sources, including PubMed, clinical trials, and medical calculators.

It leverages the uAgents framework for communication and Smithery's MCP servers for tool access, and is published to Agentverse, making it accessible via ASI:One.

🛠 How It Works

  • The agent connects to Smithery.ai’s MCP servers via streamable HTTP using streamablehttp_client.
  • Each server provides structured tools (e.g., homa_ir, pubmed_search) that the agent can call through a bidirectional HTTP stream.
  • The agent is published to Agentverse with Chat Protocol enabled.
  • Users can message the agent via ASI:One to access real-time data.
  • The agent selects the right tool, sends a request, formats the result, and replies.

MCP Servers Used in This Example

This Fetch.ai agent connects to five MCP servers hosted on Smithery.ai.

Each server exposes structured tools that the agent can call by simply providing valid input data.

Implementation

Let's create an agent that can connect to various MCP servers. The following sections break down the key components needed to integrate a Fetch.ai agent with multiple Smithery.ai MCP servers.

To get started, you can clone the GitHub repository and create a local uAgent that will be connected to Agentverse via Mailbox. Don't forget to add your Smithery API key and other credentials to the environment variables.

You can also read more about the Mailbox agent creation process here.

🔑 What You’ll Need

To connect to Smithery-hosted MCP servers, your agent needs:

  • ✅ A Smithery.ai API Key
  • 🗂 A list of server paths (e.g., @vitaldb/medcalc)
  • ⚙️ Configuration settings (some may be empty)

These are used to generate the full streamable HTTP MCP URL:

url = f"https://server.smithery.ai/{server_path}/mcp?config={config_b64}&api_key={smithery_api_key}"
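Here, config_b64 is the server configuration encoded as base64 JSON and smithery_api_key is your Smithery.ai API key. A minimal sketch of how these values might be prepared, assuming the key is read from an environment variable (the variable name is an assumption) and using the @vitaldb/medcalc server path from above:

import base64
import json
import os

# Assumed environment variable holding your Smithery.ai API key.
smithery_api_key = os.environ["SMITHERY_API_KEY"]

# Many Smithery servers accept an empty configuration object.
config = {}
config_b64 = base64.b64encode(json.dumps(config).encode()).decode()

server_path = "@vitaldb/medcalc"
url = f"https://server.smithery.ai/{server_path}/mcp?config={config_b64}&api_key={smithery_api_key}"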

🌐 Establishing MCP Connections

Each MCP server is connected using the MCP client:

# Inside the MCP client class; streamablehttp_client comes from mcp.client.streamable_http
# in the MCP Python SDK, and self.exit_stack is an AsyncExitStack.
read_stream, write_stream, _ = await self.exit_stack.enter_async_context(streamablehttp_client(url))
session = await self.exit_stack.enter_async_context(mcp.ClientSession(read_stream, write_stream))
await session.initialize()

All discovered tools are stored and mapped, enabling the agent to dynamically route queries to the correct server.
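A rough sketch of that discovery step, assuming the client stores per-server sessions and a tool-to-server map as instance attributes (self.sessions and self.tool_to_server are illustrative names):

# After session.initialize(), list the tools the server exposes and remember where each one lives.
self.sessions[server_path] = session
tools_result = await session.list_tools()
for tool in tools_result.tools:
    # Map each tool name to its server so later queries can be routed correctly.
    self.tool_to_server[tool.name] = server_path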

💬 Querying Tools with Context

The process_query() method determines which tool to invoke based on the message content, prepares the input, and calls the tool. The result is formatted before sending it back to the user.

result = await asyncio.wait_for(
    self.sessions[server_path].call_tool(tool_name, tool_args),
    timeout=self.default_timeout.total_seconds()
)

This keeps the interaction conversational, structured, and adaptable — perfect for a chat-based environment.
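A simplified sketch of what such a process_query() method might look like; the keyword-based routing and the build_tool_args helper are hypothetical stand-ins for the agent's actual tool-selection and input-preparation logic:

async def process_query(self, query: str) -> str:
    # Hypothetical routing: pick a tool name based on the query, then look up its server.
    tool_name = "homa_ir" if "homa" in query.lower() else "pubmed_search"
    server_path = self.tool_to_server[tool_name]
    tool_args = self.build_tool_args(tool_name, query)  # hypothetical helper producing schema-valid input

    result = await asyncio.wait_for(
        self.sessions[server_path].call_tool(tool_name, tool_args),
        timeout=self.default_timeout.total_seconds()
    )
    # Format the tool output as plain text for the chat reply.
    return "\n".join(getattr(item, "text", str(item)) for item in result.content)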

⚙️ Initializing the Agent & MCP Client

The agent is initialised with the Chat Protocol to make it discoverable by ASI:One.

mcp_agent = Agent(name='MedicalResearchMCPAgent', port=8001, mailbox=True, readme_path="README.md", publish_agent_details=True)
client = MedicalResearchMCPClient()
mcp_agent.include(chat_proto)
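For reference, the chat_proto object included above is built from the published chat protocol spec, and the agent is started with run(). A hedged sketch of that wiring, using the import paths found in recent uAgents releases (verify them against your installed version):

from uagents import Protocol
from uagents_core.contrib.protocols.chat import chat_protocol_spec

# Chat protocol instance built from the published spec so ASI:One can discover the agent.
chat_proto = Protocol(spec=chat_protocol_spec)

if __name__ == "__main__":
    mcp_agent.run()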

🗨 Chat Protocol Essentials

To enable real-time messaging, the agent implements two core protocol handlers:

  • 📥 ChatMessage: For receiving queries and sending responses
  • 📤 ChatAcknowledgement: To confirm message receipt

These enable seamless communication between agents and users on the ASI:One platform. You can read more about the Chat Protocol here.

@chat_proto.on_message(model=ChatMessage)
async def handle_chat_message(ctx, sender, msg):
    ...
    await ctx.send(sender, ChatAcknowledgement(...))
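A slightly fuller sketch of the two handlers, assuming the standard chat protocol types (TextContent, timestamps, message IDs) and the MCP client instance created earlier; treat the field names as illustrative and check them against the protocol spec:

from datetime import datetime, timezone
from uuid import uuid4

from uagents_core.contrib.protocols.chat import ChatAcknowledgement, ChatMessage, TextContent

@chat_proto.on_message(model=ChatMessage)
async def handle_chat_message(ctx, sender, msg):
    # Confirm receipt of the incoming message.
    await ctx.send(sender, ChatAcknowledgement(
        timestamp=datetime.now(timezone.utc),
        acknowledged_msg_id=msg.msg_id,
    ))
    # Collect the text parts of the message and run the MCP query.
    query = " ".join(c.text for c in msg.content if isinstance(c, TextContent))
    answer = await client.process_query(query)
    # Reply with a new chat message carrying the formatted result.
    await ctx.send(sender, ChatMessage(
        timestamp=datetime.now(timezone.utc),
        msg_id=uuid4(),
        content=[TextContent(type="text", text=answer)],
    ))

@chat_proto.on_message(model=ChatAcknowledgement)
async def handle_chat_ack(ctx, sender, msg):
    # Log acknowledgements so message delivery can be traced.
    ctx.logger.info(f"Received acknowledgement from {sender} for message {msg.acknowledged_msg_id}")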

🔍 Discoverable on Agentverse & ASI:One

To make your agent discoverable on Agentverse and accessible through the ASI:One LLM, make sure to add the agent's README.md in the Overview tab of your agent.

You can start interacting with your agent through the Chat Interface using the Chat with Agent button in the top right corner; this is a good way to make sure your agent is working as expected.

Query your Agent from ASI:One LLM

Log in to ASI:One and toggle the "Agents" switch to enable ASI:One to connect with agents on Agentverse.

Note: The ASI:One LLM may not always select your agent to answer a query, as it uses an Agent Ranking mechanism to pick the best agent for a task. To test your agent directly, you can use the Chat with Agent button on your agent.

Conclusion

This setup demonstrates how easy it is to build modular, intelligent, and connected AI agents using:

  • 🧠 Fetch.ai’s uAgents framework
  • 🔌 Smithery.ai’s MCP server network
  • 💬 Chat Protocol + ASI:One integration

By combining these tools, developers can create agents that aren’t just reactive — they’re collaborative, dynamic, and ready for the next era of agentic AI.


