# MCP Tool Execution and Components

This page explains how MCP tools execute and how to choose the right component type for your use case. After reading this page, you will be able to:

- Describe the request/response execution model
- Choose the right component type for a use case
- Interpret MCP server traces for debugging and monitoring

## How components map to MCP tools

Each MCP tool is implemented as a single Redpanda Connect component. The component type determines what the tool can do. The following table shows which component types are available and their purposes:

| Component type | Purpose as an MCP tool |
|---|---|
| Processor | Transforms, validates, or computes data. Calls external APIs. Returns results to the AI client. |
| Output | Writes data to external systems (Redpanda topics, databases, APIs). Can include processors for transformation before writing. |
| Input | Reads data from external systems. Returns the read data to the AI client. |
| Cache | Stores and retrieves data for use by other tools. |

Most MCP tools are processors. Use outputs when you need to write data. Use inputs when you need to read from external data sources.

## The MCP execution model

When an AI client calls an MCP tool, the MCP server handles the request in a specific sequence:

1. The AI client sends a JSON request to the MCP server with the tool name and parameters.
2. The MCP server finds the corresponding component configuration.
3. The MCP server executes the component with the input data.
4. The component runs to completion and returns a result.
5. The MCP server sends the result back to the AI client.
6. The component instance is torn down.

This execution model has several important characteristics:

- **Stateless execution:** Each tool invocation is independent. Tools do not maintain state between calls.
If you need state, use an external store such as a cache, database, or Redpanda topic.
- **Synchronous by default:** Tools run synchronously from the AI client's perspective. The client waits for the response before continuing.
- **Timeout boundaries:** Tools should complete quickly. Avoid long-running operations or handle them asynchronously. Set explicit timeouts on external calls.
- **No continuous processing:** Unlike a traditional Redpanda Connect pipeline, MCP tools do not poll for messages or maintain connections between invocations. They start, execute, and stop.

MCP tools use an agent-initiated execution model where agents invoke tools on demand. Redpanda also supports pipeline-initiated integration, where pipelines call agents using the `a2a_message` processor. For guidance on choosing between these patterns, see Integration Patterns Overview.

## Choose the right component type

Every MCP tool is implemented as a single component. Choosing the right component type is a critical design decision that affects what your tool can do and how it behaves.

### Decision framework

To choose the right component type, ask what the tool's primary purpose is. Use the following table to match your tool's intent to a component type:

| Question | Component type |
|---|---|
| Does the tool compute or transform data and return results? | Processor |
| Does the tool call external APIs and return the response? | Processor |
| Does the tool write data to an external system (database, topic, API)? | Output |
| Does the tool read data from an external source and return it? | Input |
| Does the tool store and retrieve temporary data for other tools? | Cache |

The core principle is to choose the component type that matches the tool's primary intent.

## Processor tools

Processor tools transform, validate, compute, or fetch data and return results to the AI client. This is the most common tool type. See the processors reference for available processors.
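As a hedged sketch of what a processor tool definition can look like, the following uses a `mapping` (Bloblang) processor to validate a record and return structured feedback. The tool name and the `meta` wrapper fields here are illustrative assumptions, not the exact schema — see Create an MCP Tool for the authoritative format.

```yaml
# Hypothetical processor tool file. The label and meta fields are
# illustrative assumptions; check the Remote MCP docs for the exact schema.
label: validate_order
meta:
  mcp:
    enabled: true
    description: Validate an order record and report any missing required fields.
mapping: |
  # Bloblang: check for required fields and return structured feedback.
  let doc = this
  let missing = ["id", "customer", "items"].filter(f -> !$doc.exists(f))
  root.valid = $missing.length() == 0
  root.missing = $missing
```

Because the processor only examines the input and returns a result, nothing is written anywhere — which is exactly the shape the validation use case below calls for.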
### When to choose a processor tool

Choose a processor tool when the tool's purpose is to:

- Compute or transform data
- Call an external API and return the response
- Validate inputs and return errors or results

#### Use case: Fetch and transform external data

Consider a scenario where an AI agent needs current weather data to answer a user's question about whether to bring an umbrella. The following prompts should trigger this type of tool:

- "What's the weather in Berlin?"
- "Is it raining in Tokyo right now?"
- "Get me the current temperature for Seattle."

A processor is the right choice because the tool fetches data from an API, transforms it into a useful format, and returns it.

#### Use case: Validate and normalize data

Consider a scenario where an AI agent needs to validate user-submitted data and return structured feedback about any issues. The following prompts should trigger this type of tool:

- "Validate this customer record before saving."
- "Check if this order has all required fields."
- "Normalize this JSON and tell me what's missing."

A processor is the right choice because the tool examines data, applies validation rules, and returns results. No data is written anywhere.

## Output tools

Output tools write data to external systems. Use them when the primary purpose is to create a side effect such as persisting data, publishing an event, or triggering an action. See the outputs reference for available outputs.

### When to choose an output tool

Choose an output tool when the tool's purpose is to write data to Redpanda, a database, or an external API. The side effect (writing) should be the primary intent, not incidental. You can use a `processors` section within the output to transform data before writing.

Output tools are appropriate when you want the AI to trigger real-world actions.

### Understanding tool response vs. side effect

Output tools have two outcomes: the side effect (data is written to the destination) and the tool response (the AI client receives confirmation that the write succeeded). The AI client does not receive the written data back; it receives status information. If you need to return the written data, consider using a processor tool instead.

#### Use case: Publish events to Redpanda

Consider a scenario where an AI agent needs to publish order events to Redpanda for downstream processing. The following prompts should trigger this type of tool:

- "Publish this order to Redpanda."
- "Send the order event to the orders topic."
- "Record this new order for processing."

An output is the right choice because the purpose is to write data to Redpanda. The AI needs to create a persistent record, not just compute something.

#### Use case: Transform and publish

Output components can include a `processors` section that transforms data before writing to the destination. This is a single output component, not a combination of component types.

Consider a scenario where an AI agent asks an LLM to summarize a document, then stores both the original and the summary in Redpanda. The following prompts should trigger this type of tool:

- "Summarize this document and save it."
- "Process this feedback with GPT and store the analysis."
- "Analyze this text and publish the results."

An output with processors is the right choice because the primary intent is to store data. The processors provide pre-processing before writing. The execution flow for this pattern is as follows:

1. The AI client calls the tool with input data.
2. The `processors` section transforms the data.
3. The output component writes the transformed data to the destination.
4. The tool returns a response to the AI client.

For implementation examples, see outputs with processors in the tool patterns guide.

## Input tools

Input tools read data from external sources and return it to the AI client. They're useful when you need to query or fetch existing data.
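A minimal sketch of an input tool, assuming the `redpanda` input component; the tool name and `meta` wrapper fields are illustrative, and the exact tool-file schema may differ from what is shown.

```yaml
# Hypothetical input tool file: return recent events from a topic.
# The label and meta fields are illustrative assumptions.
label: query_user_events
meta:
  mcp:
    enabled: true
    description: Read recent events from the user-events topic and return them.
redpanda:
  seed_brokers: [ "${REDPANDA_BROKERS}" ]
  topics: [ "user-events" ]
  consumer_group: mcp-query-user-events  # each invocation advances offsets
```

With a consumer group configured, repeated invocations advance through the stream instead of re-reading the same records. How the read is bounded (for example, a message count or a timeout) depends on the MCP server's tool configuration and is left out of this sketch.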
See the inputs reference for available inputs.

### When to choose an input tool

Choose an input tool when the tool's purpose is to:

- Read and return data from an external source
- Consume messages from a Redpanda topic
- Build a query-style tool that retrieves existing data

### Bounded vs. unbounded reads

Input tools must return a finite result. Use bounded reads that fetch a specific number of messages or read until a condition is met, for example "get me the latest N events" or "read messages from the last hour". Unbounded reads that poll continuously are not appropriate for MCP tools because the tool would never return a response to the AI client.

### Latency and scope considerations

Keep these factors in mind when building input tools:

- Input tools may have variable latency depending on the data source.
- Scope your reads appropriately. Don't try to read entire topics.
- Consider consumer group behavior: with a consumer group, each invocation advances through the stream. Without one, each invocation may read the same data.

#### Use case: Query recent events

Consider a scenario where an AI agent needs to retrieve recent user activity events to understand user behavior. The following prompts should trigger this type of tool:

- "Show me recent user events."
- "Get the last 10 login events."
- "What events happened in the user-events topic recently?"

An input is the right choice because the tool reads from an existing data source (a topic) and returns what it finds.

## Cache tools

Cache tools store and retrieve temporary data that other tools can access. They're useful for sharing state between tool calls or storing frequently accessed data. See the caches reference for available caches.

### When to choose a cache tool

Choose a cache tool when the tool's purpose is to:

- Store temporary data that expires after a set time
- Share state between multiple tool calls in a conversation
- Reduce repeated calls to slow external APIs by caching results
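A hedged sketch of a cache tool backed by the in-memory cache with a TTL; the tool name and `meta` wrapper fields are illustrative assumptions.

```yaml
# Hypothetical cache tool file: short-lived session state shared between tools.
# The label and meta fields are illustrative assumptions.
label: session_preferences
meta:
  mcp:
    enabled: true
    description: Store and retrieve session-scoped user preferences.
memory:
  default_ttl: 5m  # entries expire automatically after five minutes
```

An in-memory cache is per-instance and non-durable; for state that must survive restarts or be shared across replicas, an external cache component such as Redis is the safer choice.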
#### Use case: Session state management

Consider a scenario where an AI agent needs to remember user preferences across multiple tool calls within a conversation. The following prompts should trigger this type of tool:

- "Remember that I prefer metric units."
- "Store my timezone as America/New_York."
- "Save this search filter for later."

A cache is the right choice because the data is temporary, session-scoped, and needs to be accessible by other tools during the conversation.

#### Use case: API response caching

Consider a scenario where an AI agent frequently looks up the same reference data (such as exchange rates or product catalogs) and you want to avoid repeated API calls. The following prompts should trigger cache usage:

- "Get the current exchange rate" (cached for 5 minutes)
- "Look up product details" (cached for 1 hour)
- "Check inventory levels" (cached briefly to reduce load)

A cache is the right choice because you want to store API responses temporarily and serve them on subsequent requests without hitting the external API again.

## Component selection summary

The following table summarizes when to use each component type:

| Component | Primary intent | Example tools | Returns |
|---|---|---|---|
| Processor | Compute, transform, validate, fetch | Weather lookup, data validation, API calls | Computed result |
| Output | Write data with side effects | Publish events, store records, trigger webhooks | Write confirmation |
| Output + processors | Transform then write | Summarize and store, enrich and publish | Write confirmation |
| Input | Read and return data | Query recent events, search logs | Retrieved data |
| Cache | Store and retrieve temporary data | Session state, API response caching | Cached value or confirmation |

For implementation examples and common patterns, see MCP Tool Patterns.

## Observability

MCP servers automatically emit OpenTelemetry traces for monitoring and debugging. For detailed information about traces, spans, and the trace structure, see Transcripts and AI Observability.
To monitor MCP server activity, consume traces, and debug failures, see Monitor MCP Server Activity.

## Service account authorization

When you create an MCP server or AI agent, Redpanda Cloud automatically creates a service account to authenticate requests to your cluster. The service account is created with the following:

- **Name:** Prepopulated as `cluster-<cluster-id>-<resource-type>-<resource-name>-sa`, where `sa` stands for service account. For example:
  - MCP server: `cluster-d5tp5kntujt599ksadgg-mcp-my-test-server-sa`
  - AI agent: `cluster-d5tp5kntujt599ksadgg-agent-my-agent-sa`

  You can customize this name during creation.
- **Role binding:** Cluster scope with the Writer role for the cluster where you created the resource. This allows the resource to read and write data, manage topics, and access cluster resources.

### Manage service accounts

You can view and manage service accounts created for MCP servers and AI agents at the organization level in **Organization IAM > Service account**. This page shows additional details not visible during creation:

| Field | Description |
|---|---|
| Client ID | Unique identifier for OAuth2 authentication |
| Description | Optional description of the service account |
| Created at | Timestamp when the service account was created |
| Updated at | Timestamp of the last modification |

From this page you can:

- Edit the service account name or description
- View and manage role bindings
- Rotate credentials
- Delete the service account

Deleting a service account removes authentication for the associated MCP server or AI agent. The resource can no longer access cluster data.

### Customize role bindings

The default Writer role provides broad access suitable for most use cases. If you need more restrictive permissions:

1. Exit the cluster and navigate to **Organization IAM > Service account**.
2. Find the service account for your resource.
3. Edit the role bindings to use a more restrictive role or scope.

For more information about roles and permissions, see Role-based access control.
## Next steps

- Create an MCP Tool
- MCP Tool Design
- MCP Tool Patterns
- Troubleshoot Remote MCP Servers