Overview

eigi.ai agents become powerful when connected to your existing systems. Use Dynamic API Tools for REST integrations and MCP (Model Context Protocol) for advanced AI tool connections.

Dynamic API Tools

What Are API Tools?

API Tools let your agents interact with external services during conversations:
  • Check order status in your database
  • Book appointments in your calendar
  • Look up customer information in your CRM
  • Process payments through payment gateways
  • Send emails or SMS notifications

Creating an API Tool

1. Define the Endpoint: Specify the URL, HTTP method, and headers.
2. Configure Parameters: Define what data the tool needs.
3. Map Responses: Tell the agent how to use the response.
4. Test the Tool: Validate it works before using it in production.
5. Assign to Agent: Add the tool to your agent's capabilities.

API Tool Configuration

Endpoint Settings

| Setting | Description | Example |
|---|---|---|
| Name | Descriptive tool name | "Check Order Status" |
| Description | What the tool does (for the LLM) | "Looks up order status by order ID" |
| URL | API endpoint | https://api.example.com/orders |
| Method | HTTP method | GET, POST, PUT, DELETE |
| Headers | Request headers | Authorization, Content-Type |
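
Putting these settings together, a complete tool definition might look like the following. The field names here are illustrative, not the exact eigi.ai configuration schema:

```json
{
  "name": "Check Order Status",
  "description": "Looks up order status by order ID",
  "url": "https://api.example.com/orders",
  "method": "GET",
  "headers": {
    "Authorization": "Bearer YOUR_API_TOKEN",
    "Content-Type": "application/json"
  }
}
```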

Authentication

Support for multiple auth methods:

API Key

Pass API key in header or query parameter

Bearer Token

JWT or OAuth bearer tokens

Basic Auth

Username and password authentication

Custom Headers

Any custom authentication headers
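
A sketch of how each auth method shapes the outgoing request headers. This is illustrative only; eigi.ai constructs these headers for you from the tool configuration:

```python
import base64

def api_key_headers(key):
    # API key passed in a custom header (header name varies by API)
    return {"X-API-Key": key}

def bearer_headers(token):
    # JWT or OAuth bearer token
    return {"Authorization": f"Bearer {token}"}

def basic_auth_headers(user, password):
    # Username and password encoded per the HTTP Basic scheme
    credentials = base64.b64encode(f"{user}:{password}".encode()).decode()
    return {"Authorization": f"Basic {credentials}"}

print(bearer_headers("your_token"))
```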

Parameter Types

How Data Gets to Your API

Fixed values that don't change (static parameters):
{
  "source": "voice_agent",
  "version": "1.0"
}

Values extracted from the conversation by the AI:
  • "Get the order ID the customer mentioned"
  • "Extract the appointment date and time"
  • "Capture the email address provided"

Direct input from the caller:
  • DTMF digits entered
  • Specific prompted responses

Data from the phonebook contact (template variables):
  • {{customer_id}}
  • {{account_number}}
  • {{email}}
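
The four parameter sources above could be merged into one request payload along these lines. The merge logic and template substitution are an illustrative assumption, not eigi.ai internals; the {{field}} syntax mirrors the phonebook variables shown above:

```python
import re

def render_templates(params, contact):
    """Replace {{field}} placeholders with phonebook contact data."""
    def substitute(value):
        if isinstance(value, str):
            return re.sub(r"\{\{(\w+)\}\}",
                          lambda m: str(contact.get(m.group(1), "")), value)
        return value
    return {k: substitute(v) for k, v in params.items()}

static = {"source": "voice_agent", "version": "1.0"}   # fixed values
ai_extracted = {"order_id": "ORD-12345"}               # pulled from conversation
caller_input = {"zip_code": "94107"}                   # e.g. DTMF digits
templated = {"customer_id": "{{customer_id}}"}         # phonebook variable

contact = {"customer_id": "CUST-42"}
payload = {**static, **ai_extracted, **caller_input,
           **render_templates(templated, contact)}
print(payload)
```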

Response Handling

Processing API Responses

Configure how your agent uses the response:
// API Response
{
  "order_id": "ORD-12345",
  "status": "shipped",
  "tracking_number": "1Z999AA10123456784",
  "estimated_delivery": "2025-12-06"
}

// Agent says:
"Your order ORD-12345 has shipped! The tracking number is
1Z999AA10123456784 and it should arrive by December 6th."

Error Handling

Define fallback behavior:
| Scenario | Configuration |
|---|---|
| API Timeout | "I'm having trouble looking that up. Let me try again." |
| Not Found | "I couldn't find an order with that number. Can you verify it?" |
| Server Error | "Our system is temporarily unavailable. Can I take a message?" |
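
The mapping above can be sketched as a simple dispatch from API outcome to spoken fallback. The function and structure are hypothetical, not part of the eigi.ai API:

```python
FALLBACKS = {
    "timeout": "I'm having trouble looking that up. Let me try again.",
    "not_found": "I couldn't find an order with that number. Can you verify it?",
    "server_error": "Our system is temporarily unavailable. Can I take a message?",
}

def agent_reply(status_code):
    # None signals that the request timed out before any response arrived
    if status_code is None:
        return FALLBACKS["timeout"]
    if status_code == 404:
        return FALLBACKS["not_found"]
    if status_code >= 500:
        return FALLBACKS["server_error"]
    return "OK"  # successful lookup continues the normal flow

print(agent_reply(404))
```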

MCP (Model Context Protocol)

What is MCP?

MCP is a standard for connecting AI models to external tools and data sources:

Tool Discovery

MCP servers automatically expose available tools to your agent

Rich Capabilities

Access databases, file systems, specialized AI tools, and more

Standardized

Works with any MCP-compatible server

Real-Time

Persistent connections for fast tool execution

MCP Server Types

| Type | Description | Use Case |
|---|---|---|
| STDIO | Standard input/output | Local tools, CLI integrations |
| HTTP | REST-based servers | Remote services, cloud tools |

Adding MCP Servers

Configuration

1. Add Server: Provide the server name and connection details.
2. Configure Connection: Set up the STDIO command or HTTP endpoint.
3. Test Connection: Verify the server connects successfully.
4. Discover Tools: See what tools the server provides.
5. Assign to Agent: Enable the MCP server for your agent.

Connection Settings

# STDIO Server Example
name: "Database Tools"
type: stdio
command: "npx"
args: ["@your-org/db-mcp-server"]
env:
  DATABASE_URL: "postgresql://..."

# HTTP Server Example
name: "Calendar Integration"
type: http
url: "https://mcp.yourservice.com"
headers:
  Authorization: "Bearer your_token"

MCP Performance

Optimized Initialization

eigi.ai optimizes MCP for voice conversations:

Connection Pooling

Reuse connections across calls to reduce latency

Smart Caching

Cache tool definitions and reduce startup time

Lazy Loading

Initialize servers only when needed

Health Monitoring

Automatic reconnection if servers disconnect

Performance Metrics

| Metric | Target |
|---|---|
| Connection Time | < 500 ms for cached servers |
| Tool Execution | < 200 ms overhead |
| Reconnection | Automatic within 5 seconds |

Tool Usage in Conversations

How Agents Use Tools

The LLM decides when to use tools based on conversation context:

Customer: "What's the status of my order 12345?"

Agent (internally):
  1. Recognizes the need for an order lookup
  2. Calls the check_order_status tool with order_id="12345"
  3. Receives a response with status and tracking
  4. Formulates a natural response

Agent (to customer): "I found your order! It shipped yesterday and should arrive by Friday. Would you like the tracking number?"
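
This internal flow can be sketched as follows: the model names a tool, the platform executes it silently, and only the natural-language reply reaches the caller. The tool stub and dispatch table are illustrative, not the eigi.ai runtime:

```python
def check_order_status(order_id):
    # Stand-in for the real API tool call
    return {"status": "shipped", "eta": "Friday"}

TOOLS = {"check_order_status": check_order_status}

def handle_turn(tool_name, arguments):
    result = TOOLS[tool_name](**arguments)   # silent tool execution
    # The LLM then formulates a natural response from the structured result:
    return (f"I found your order! It {result['status']} "
            f"and should arrive by {result['eta']}.")

print(handle_turn("check_order_status", {"order_id": "12345"}))
```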

Silent Tool Calls

Tools are called silently in the background. Customers only hear the agent’s natural response—never technical details about tool execution.

Multiple Tools

Combining Capabilities

Agents can use multiple tools in a single conversation:
1. Check customer identity (CRM lookup)
2. Verify account status (Billing API)
3. Check order history (Orders API)
4. Book follow-up appointment (Calendar tool)
5. Send confirmation email (Email tool)

Tool Chaining

Tools can be called sequentially based on conversation flow:
Customer asks about upgrading →
  → Check current plan (API) →
  → Get available upgrades (API) →
  → Apply upgrade if confirmed (API) →
  → Send confirmation (Email)
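
The upgrade flow above, sketched as sequential calls where each step's output feeds the next. All four functions are hypothetical stubs standing in for the real API and email tools:

```python
def get_current_plan(customer_id):
    return "basic"

def get_available_upgrades(plan):
    return ["pro", "enterprise"] if plan == "basic" else []

def apply_upgrade(customer_id, plan):
    return {"customer_id": customer_id, "plan": plan}

def send_confirmation(email, details):
    return f"Confirmation sent to {email}: upgraded to {details['plan']}"

# Each result feeds the next tool call in the chain
plan = get_current_plan("CUST-42")
upgrades = get_available_upgrades(plan)
if upgrades:  # customer confirms the first option
    details = apply_upgrade("CUST-42", upgrades[0])
    print(send_confirmation("alice@example.com", details))
```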

Testing Tools

Validation

Before going live, test your tools:

Direct Testing

Test API calls with sample data

Mock Responses

Simulate responses for edge cases

Conversation Testing

Full conversation tests with real tool calls

Error Simulation

Test error handling and fallbacks
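
Error simulation might look like the sketch below: substitute a failing stub for the real API and assert that the fallback phrase is used. The function names are hypothetical:

```python
def check_order(order_id, api_call):
    try:
        return api_call(order_id)
    except TimeoutError:
        # Fallback phrase configured for the timeout scenario
        return "I'm having trouble looking that up. Let me try again."

def failing_api(order_id):
    raise TimeoutError  # simulate an unreachable backend

def healthy_api(order_id):
    return "Your order has shipped."

assert check_order("ORD-1", failing_api).startswith("I'm having trouble")
assert check_order("ORD-1", healthy_api) == "Your order has shipped."
print("error-simulation checks passed")
```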

Monitoring

Tool Performance

Track how your tools perform:
| Metric | Description |
|---|---|
| Call Volume | How often each tool is used |
| Success Rate | Percentage of successful calls |
| Latency | Average response time |
| Errors | Error types and frequencies |

MCP Server Status

Monitor MCP server health:
  • Connection status (connected/disconnected)
  • Last activity timestamp
  • Error logs
  • Tool availability

Best Practices

Clear Descriptions: Write tool descriptions that help the LLM understand when to use them.
Handle Errors Gracefully: Always configure fallback responses for API failures.
Test Edge Cases: Verify behavior with missing data, timeouts, and errors.
Never expose sensitive credentials in tool configurations. Use environment variables or secure credential storage.

Troubleshooting

Agent isn't using a tool:
  • Check the tool description; the LLM may not understand when to use it
  • Verify the tool is assigned to the agent
  • Test with explicit prompts mentioning the tool's function

MCP server won't connect:
  • Verify connection settings (URL, command, arguments)
  • Check server logs for errors
  • Ensure required environment variables are set
  • Test the server independently first

Tool calls are slow:
  • Optimize your API endpoints
  • Consider caching frequently accessed data
  • Use connection pooling for MCP servers
  • Monitor and address timeout issues