Overview
eigi.ai agents become powerful when connected to your existing systems. Use Dynamic API Tools for REST integrations and MCP (Model Context Protocol) for advanced AI tool connections.
Dynamic API Tools
What Are API Tools?
API Tools let your agents interact with external services during conversations:
- Check order status in your database
- Book appointments in your calendar
- Look up customer information in your CRM
- Process payments through payment gateways
- Send emails or SMS notifications
Creating an API Tool
1. Define the Endpoint: Specify the URL, HTTP method, and headers.
2. Configure Parameters: Define what data the tool needs.
3. Map Responses: Tell the agent how to use the response.
4. Test the Tool: Validate that it works before using it in production.
5. Assign to Agent: Add the tool to your agent’s capabilities.
API Tool Configuration
Endpoint Settings
| Setting | Description | Example |
|---|---|---|
| Name | Descriptive tool name | “Check Order Status” |
| Description | What the tool does (for LLM) | “Looks up order status by order ID” |
| URL | API endpoint | https://api.example.com/orders |
| Method | HTTP method | GET, POST, PUT, DELETE |
| Headers | Request headers | Authorization, Content-Type |
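Taken together, an endpoint definition might look roughly like this (the field names below are illustrative, not eigi.ai’s exact schema):

```typescript
// Illustrative only: field names are hypothetical, not eigi.ai's exact schema.
interface ApiToolConfig {
  name: string;        // Descriptive tool name
  description: string; // Tells the LLM what the tool does and when to use it
  url: string;         // API endpoint
  method: "GET" | "POST" | "PUT" | "DELETE";
  headers: Record<string, string>;
}

const checkOrderStatus: ApiToolConfig = {
  name: "Check Order Status",
  description: "Looks up order status by order ID",
  url: "https://api.example.com/orders",
  method: "GET",
  headers: {
    "Content-Type": "application/json",
    Authorization: "Bearer ${API_TOKEN}", // see Authentication below
  },
};
```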
Authentication
Support for multiple auth methods:
- API Key: Pass API key in header or query parameter
- Bearer Token: JWT or OAuth bearer tokens
- Basic Auth: Username and password authentication
- Custom Headers: Any custom authentication headers
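A rough sketch of how each method typically ends up on the outgoing request (the config shape here is hypothetical):

```typescript
// Hypothetical auth config shapes; only the resulting headers/query params matter.
type AuthConfig =
  | { type: "api_key"; in: "header" | "query"; name: string; value: string }
  | { type: "bearer"; token: string }
  | { type: "basic"; username: string; password: string }
  | { type: "custom_headers"; headers: Record<string, string> };

// Translate an auth config into request headers and query parameters.
function applyAuth(auth: AuthConfig): { headers: Record<string, string>; query: Record<string, string> } {
  switch (auth.type) {
    case "api_key":
      return auth.in === "header"
        ? { headers: { [auth.name]: auth.value }, query: {} }
        : { headers: {}, query: { [auth.name]: auth.value } };
    case "bearer":
      return { headers: { Authorization: `Bearer ${auth.token}` }, query: {} };
    case "basic":
      // btoa encodes "user:pass" as Base64 (available in browsers and Node 16+).
      return { headers: { Authorization: `Basic ${btoa(`${auth.username}:${auth.password}`)}` }, query: {} };
    case "custom_headers":
      return { headers: auth.headers, query: {} };
  }
}
```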
Parameter Types
How Data Gets to Your API
Static Values
Fixed values that are the same on every call.
LLM Parameters
Values extracted from the conversation by the AI:
- “Get the order ID the customer mentioned”
- “Extract the appointment date and time”
- “Capture the email address provided”
User Input
Direct input from the caller:
- DTMF digits entered
- Specific prompted responses
Contact Variables
Data from the phonebook contact, such as `{{customer_id}}`, `{{account_number}}`, or `{{email}}`.
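The four parameter types could be declared together along these lines (a sketch only; the property names are made up for illustration):

```typescript
// Hypothetical parameter definitions for a "book_appointment" tool.
type ToolParameter =
  | { source: "static"; name: string; value: string }                      // fixed value
  | { source: "llm"; name: string; instruction: string }                   // extracted from the conversation
  | { source: "user_input"; name: string; prompt: string; dtmf?: boolean } // collected from the caller
  | { source: "contact"; name: string; variable: string };                 // pulled from the phonebook contact

const bookAppointmentParams: ToolParameter[] = [
  { source: "static", name: "location_id", value: "downtown-01" },
  { source: "llm", name: "appointment_time", instruction: "Extract the appointment date and time" },
  { source: "user_input", name: "confirmation", prompt: "Press 1 to confirm", dtmf: true },
  { source: "contact", name: "customer_id", variable: "{{customer_id}}" },
];
```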
Response Handling
Processing API Responses
Configure how your agent uses the response.
Error Handling
Define fallback behavior:
| Scenario | Configuration |
|---|---|
| API Timeout | “I’m having trouble looking that up. Let me try again.” |
| Not Found | “I couldn’t find an order with that number. Can you verify it?” |
| Server Error | “Our system is temporarily unavailable. Can I take a message?” |
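In code, such a mapping might look something like this (a minimal sketch; the status-to-message mapping is assumed, not eigi.ai’s exact behavior):

```typescript
// Hypothetical fallback mapping: what the agent says when the API call fails.
const fallbackResponses = {
  timeout: "I'm having trouble looking that up. Let me try again.",
  not_found: "I couldn't find an order with that number. Can you verify it?",
  server_error: "Our system is temporarily unavailable. Can I take a message?",
};

// Pick a spoken fallback from the HTTP status (undefined means the request timed out).
function fallbackFor(status?: number): string {
  if (status === undefined) return fallbackResponses.timeout;
  if (status === 404) return fallbackResponses.not_found;
  return fallbackResponses.server_error;
}
```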
MCP (Model Context Protocol)
What is MCP?
MCP is a standard for connecting AI models to external tools and data sources:
- Tool Discovery: MCP servers automatically expose available tools to your agent
- Rich Capabilities: Access databases, file systems, specialized AI tools, and more
- Standardized: Works with any MCP-compatible server
- Real-Time: Persistent connections for fast tool execution
MCP Server Types
| Type | Description | Use Case |
|---|---|---|
| STDIO | Standard input/output | Local tools, CLI integrations |
| HTTP | REST-based servers | Remote services, cloud tools |
Adding MCP Servers
Configuration
1. Add Server: Provide server name and connection details.
2. Configure Connection: Set up the STDIO command or HTTP endpoint.
3. Test Connection: Verify the server connects successfully.
4. Discover Tools: See what tools the server provides.
5. Assign to Agent: Enable the MCP server for your agent.
Connection Settings
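The exact fields depend on the transport. A rough sketch of what STDIO and HTTP connection settings might look like (field names are illustrative, not eigi.ai’s exact schema):

```typescript
// Hypothetical connection settings for the two MCP server types.
type McpServerConfig =
  | {
      name: string;
      transport: "stdio";
      command: string;             // executable to launch
      args: string[];              // command-line arguments
      env?: Record<string, string>;
    }
  | {
      name: string;
      transport: "http";
      url: string;                 // remote MCP endpoint
      headers?: Record<string, string>;
    };

// Local STDIO server (here the public filesystem MCP server, as an example).
const localFiles: McpServerConfig = {
  name: "local-files",
  transport: "stdio",
  command: "npx",
  args: ["-y", "@modelcontextprotocol/server-filesystem", "/data/docs"],
};

// Remote HTTP server for a cloud service.
const crmServer: McpServerConfig = {
  name: "crm",
  transport: "http",
  url: "https://mcp.example.com/crm",
  headers: { Authorization: "Bearer ${MCP_TOKEN}" },
};
```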
MCP Performance
Optimized Initialization
eigi.ai optimizes MCP for voice conversations:
- Connection Pooling: Reuse connections across calls to reduce latency
- Smart Caching: Cache tool definitions and reduce startup time
- Lazy Loading: Initialize servers only when needed
- Health Monitoring: Automatic reconnection if servers disconnect
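Conceptually, pooling and lazy loading amount to connecting a server the first time a tool needs it and reusing that connection afterwards. A simplified sketch of the idea (not eigi.ai’s actual implementation):

```typescript
// Conceptual sketch of lazy initialization plus connection reuse.
class McpConnectionPool<Conn> {
  private connections = new Map<string, Promise<Conn>>();

  constructor(private connect: (serverName: string) => Promise<Conn>) {}

  // Lazy loading: the server is only connected on first use.
  // Connection pooling: later calls reuse the same pending or open connection.
  get(serverName: string): Promise<Conn> {
    let conn = this.connections.get(serverName);
    if (!conn) {
      conn = this.connect(serverName);
      this.connections.set(serverName, conn);
    }
    return conn;
  }

  // Health monitoring hook: drop a dead connection so the next call reconnects.
  invalidate(serverName: string): void {
    this.connections.delete(serverName);
  }
}
```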
Performance Metrics
| Metric | Target |
|---|---|
| Connection Time | < 500ms for cached servers |
| Tool Execution | < 200ms overhead |
| Reconnection | Automatic within 5 seconds |
Tool Usage in Conversations
How Agents Use Tools
The LLM decides when to use tools based on conversation context:
Customer: “What’s the status of my order 12345?”
Agent (internally):
- Recognizes need for order lookup
- Calls the `check_order_status` tool with `order_id="12345"`
- Receives response with status and tracking
- Formulates natural response
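This exchange follows the generic function-calling pattern. The message shapes below are purely illustrative, not eigi.ai’s internal format:

```typescript
// Illustrative function-calling exchange (generic shapes, not eigi.ai's internal format).
const exchange = [
  { role: "user", content: "What's the status of my order 12345?" },
  {
    // The LLM decides to call a tool instead of answering directly.
    role: "assistant",
    tool_call: { name: "check_order_status", arguments: { order_id: "12345" } },
  },
  {
    // The platform executes the API call and returns the result to the model.
    role: "tool",
    name: "check_order_status",
    content: { status: "shipped", tracking_number: "1Z999AA10123456784" },
  },
  {
    // The LLM turns the result into a natural spoken reply.
    role: "assistant",
    content: "Your order 12345 has shipped. Would you like the tracking number?",
  },
];
```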
Silent Tool Calls
Tools are called silently in the background. Customers only hear the agent’s
natural response—never technical details about tool execution.
Multiple Tools
Combining Capabilities
Agents can use multiple tools in a single conversation.
Tool Chaining
Tools can be called sequentially based on conversation flow.
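For example, looking up an order can feed its result into the next tool call. A hypothetical chained flow (callTool and the tool names here are stand-ins for illustration):

```typescript
// Hypothetical chained calls within one conversation: each tool's output feeds the next step.
async function rescheduleDelivery(orderId: string) {
  // 1. Look up the order.
  const order = await callTool("check_order_status", { order_id: orderId });

  // 2. Use the result to find available delivery slots.
  const slots = await callTool("get_delivery_slots", { zip: order.shipping_zip });

  // 3. Book the first available slot (in practice, the one the caller chooses).
  return callTool("book_delivery_slot", { order_id: orderId, slot_id: slots[0].id });
}

// Stand-in for the platform's tool execution; the signature is assumed for illustration.
declare function callTool(name: string, args: Record<string, unknown>): Promise<any>;
```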
Testing Tools
Validation
Before going live, test your tools:
- Direct Testing: Test API calls with sample data
- Mock Responses: Simulate responses for edge cases
- Conversation Testing: Full conversation tests with real tool calls
- Error Simulation: Test error handling and fallbacks
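A minimal sketch of a mock-response test, assuming the tool logic is wrapped in a function whose HTTP call can be stubbed (all names here are hypothetical):

```typescript
// Sketch of a mock-based edge-case test: the HTTP call is stubbed, so no real API is hit.
type FetchLike = (url: string) => Promise<{ status: number; json: () => Promise<unknown> }>;

async function checkOrderStatus(orderId: string, fetchImpl: FetchLike) {
  const res = await fetchImpl(`https://api.example.com/orders/${orderId}`);
  if (res.status === 404) return { error: "not_found" as const };
  if (res.status >= 500) return { error: "server_error" as const };
  return { order: await res.json() };
}

// Simulated "order not found" response.
const mock404: FetchLike = async () => ({ status: 404, json: async () => ({}) });

checkOrderStatus("99999", mock404).then((result) => {
  console.assert("error" in result && result.error === "not_found", "expected the not-found path");
});
```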
Monitoring
Tool Performance
Track how your tools perform:
| Metric | Description |
|---|---|
| Call Volume | How often each tool is used |
| Success Rate | Percentage of successful calls |
| Latency | Average response time |
| Errors | Error types and frequencies |
MCP Server Status
Monitor MCP server health:
- Connection status (connected/disconnected)
- Last activity timestamp
- Error logs
- Tool availability
Best Practices
Troubleshooting
Tool not being called
- Check the tool description; the LLM may not understand when to use it
- Verify the tool is assigned to the agent
- Test with explicit prompts mentioning the tool’s function
MCP server not connecting
- Verify connection settings (URL, command, arguments)
- Check server logs for errors
- Ensure required environment variables are set
- Test the server independently first
Slow tool responses
- Optimize your API endpoints
- Consider caching frequently accessed data
- Use connection pooling for MCP servers
- Monitor and address timeout issues

