If you are currently configuring MCP to connect an LLM to your desktop environment, you are likely looking at your existing FDC3 intent registry as a ready-made menu of “AI Tools.” The logical first step is a direct 1:1 mapping: if your app supports ViewChart and ViewNews, why not just expose those as tools to the model?
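For concreteness, here is roughly what that naive 1:1 mapping looks like. This is a minimal sketch assuming the MCP TypeScript SDK’s McpServer registration style and an FDC3 DesktopAgent the server process can reach (in practice the desktop container provides that bridge); the server name and tool wording are illustrative.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

// Assumed bridge to the desktop's FDC3 DesktopAgent; in a real deployment the
// interop platform provides this connection.
declare const fdc3: {
  raiseIntent(intent: string, context: object): Promise<unknown>;
};

const server = new McpServer({ name: "desktop-tools", version: "1.0.0" });

// One MCP tool per FDC3 intent -- the LLM becomes the workflow orchestrator.
server.tool(
  "ViewChart",
  "Open a chart window for an instrument",
  { ticker: z.string().describe("Instrument ticker, e.g. AAPL") },
  async ({ ticker }) => {
    await fdc3.raiseIntent("ViewChart", {
      type: "fdc3.instrument",
      id: { ticker },
    });
    return {
      content: [{ type: "text" as const, text: `Chart opened for ${ticker}` }],
    };
  }
);

// ...repeated for ViewNews, ViewBlotter, and every other atomic intent.
```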
In the presentation below from OSSF 2025 in NYC, Bob Myers and Kalin Kostov explain why this won’t always work:
The core conflict is between Deterministic Desktop Workflows and Probabilistic LLM Planners.
When a user wants to “Analyze a Portfolio,” the desktop behavior must be deterministic: a specific layout must load, specific windows must open, and context must pass reliably between them. If you expose granular FDC3 intents to the MCP server, you are effectively forcing the LLM to act as your workflow orchestrator.
You are asking the model to chain three separate function calls (e.g., ViewChart → ViewBlotter → ViewNews) in the correct order, as sketched in code after the list below. In production, this approach introduces significant friction:
- State Drift: The model might successfully open a chart but fail to pass the context to the subsequent tools, leaving the user with a disjointed UI.
- Hallucinated Parameters: Without rigorous schema validation, the LLM often “guesses” context data (like ticker symbols) that strictly typed desktop apps will reject.
- Latency & Token Waste: Generating multiple sequential tool calls requires multiple round-trips to the LLM, introducing unacceptable latency for what should be a snappy UI interaction.
The Solution: Composite Intents
The video details a more robust pattern: shifting the orchestration logic out of the prompt and back into the code.
Instead of exposing 50 atomic intents to the MCP server, the presenters advocate for exposing a smaller set of Composite Intents or workflows.
The Composite Architecture:
- The LLM Signal: The model makes a single tool call, such as AnalyzePortfolio.
- The Desktop Orchestrator: io.Connect receives this intent and triggers a deterministic sequence: restoring a workspace, broadcasting context to multiple channels, and focusing the correct windows (see the sketch after this list).
- The Result: The user gets a predictable, complex UI result from a single, low-latency AI inference.
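As a rough sketch of the orchestrator side, the handler below registers the composite intent and runs the deterministic sequence in code. The io.workspaces and io.channels calls are modeled on io.Connect’s Workspaces and Channels APIs, but treat the exact names and the "portfolio-analysis" layout as assumptions for this sketch rather than a verbatim implementation.

```typescript
import { addIntentListener } from "@finos/fdc3";

// The io.Connect API object injected by the container; typed loosely here
// because the exact surface is an assumption for this sketch.
declare const io: any;

await addIntentListener("AnalyzePortfolio", async (context) => {
  // 1. Deterministically restore the saved layout: chart, blotter, news.
  const workspace = await io.workspaces.restoreWorkspace("portfolio-analysis");

  // 2. Broadcast the portfolio context once so every app in the layout syncs.
  await io.channels.publish(context, "red");

  // 3. Bring the layout into focus. No further LLM calls are needed.
  await workspace.frame.focus();
});
```

On the MCP side, this means the server exposes a single AnalyzePortfolio tool instead of three atomic ones, so the model only has to select and parameterize one call.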
Making Your App.json AI-Ready
To implement this via the MCP server, your FDC3 app.json definitions need to evolve. The metadata you wrote for humans is now a “system prompt” for an AI.
- Descriptions are Prompts: The description field in your intent definition is critical for the LLM’s planner. Vague descriptions like “Shows a chart” lead to poor tool selection. The video discusses how to write descriptions that guide the AI on when and why to use a specific tool.
- Strict Schema Validation: You must enforce strict input schemas to constrain the model’s output to valid FDC3 context data (e.g., forcing an fdc3.instrument context type).
- Bi-Directional Data: If your AI workflow involves RAG (e.g., “Summarize this portfolio”), your intents must return structured JSON data that the MCP server can serialize back into the LLM’s context window (an illustrative definition follows this list).
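Pulling those three points together, here is an illustrative definition for the composite intent, written as a TypeScript literal for readability. The field layout follows the FDC3 2.x app-directory shape (interop.intents.listensFor), but the placement of the description and the fdc3.portfolio.summary result type are assumptions; check the app.json schema your container actually uses.

```typescript
// Illustrative app definition fragment -- field placement may differ per platform.
export const portfolioWorkspaceApp = {
  appId: "portfolio-workspace",
  interop: {
    intents: {
      listensFor: {
        AnalyzePortfolio: {
          displayName: "Analyze Portfolio",
          // Written for the LLM's planner: say when and why to pick this tool.
          description:
            "Use when the user asks to review, analyze, or summarize a portfolio. " +
            "Restores the full analysis layout (chart, blotter, news) in one step.",
          // Constrain inputs to a standard FDC3 context type so the model
          // cannot invent free-form parameters.
          contexts: ["fdc3.portfolio"],
          // Declare a structured result so RAG-style flows can serialize the
          // answer back into the model's context window (name is an assumption).
          resultType: "fdc3.portfolio.summary",
        },
      },
    },
  },
};
```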
Deep Dive Resources
The transition to MCP is not just about installing a server; it requires curating the API surface area you expose to the model.
More FDC3 videos are available on the FINOS Resources page: FDC3 Resources
We highly recommend reading the companion article here:
From FDC3 to MCP – How to Make Desktop Workflows AI-Ready