io.Intelligence Framework: AI Web, io.Assist, and AI Server Now Available

We’re excited to announce the release of three new modules in the io.Intelligence framework: AI Web, io.Assist, and AI Server. Together with our previously released MCP-Core and Working Context offerings, these modules round out the intended architecture for io.Intelligence and give teams everything they need to get started building AI assistants and agents on top of io.Connect.

AI Web is the web SDK for teams that want to build their own assistant or copilot experience. It provides agent conversations, thread management, MCP connectivity, prompt management, and optional Working Context support without forcing you into a prebuilt UI. AI Web handles the infrastructure while leaving layout, interaction design, and product behavior in your hands.
AI Web Documentation

io.Assist is the ready-made assistant for teams that want to ship fast without giving up the io.Intelligence stack underneath. The first delivery is for Angular: configure it at bootstrap, pass the active user through the component input, and you have a working assistant UI with threads, prompts, tool traces, MCP Apps, and Working Context support. It is the quickest path from “we want an assistant” to a working experience inside a real product.
io.Assist Documentation
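The two-step flow above (configure once at bootstrap, then pass the active user through the component input) can be sketched roughly as follows. Every name here, `configureAssist`, `startSession`, the config fields, is a placeholder for illustration; the real io.Assist surface is defined in the io.Assist documentation.

```typescript
// All identifiers are placeholders for illustration only; the actual
// io.Assist configuration surface is described in its documentation.
interface AssistConfig {
  serverUrl: string;        // hypothetical AI Server endpoint field
  workingContext?: boolean; // hypothetical Working Context opt-in flag
}

interface ActiveUser {
  id: string;
  displayName: string;
}

let config: AssistConfig | null = null;

// Step 1: configure once at application bootstrap.
export function configureAssist(c: AssistConfig): void {
  config = c;
}

// Step 2: the host component passes the active user through an input,
// after which the assistant session (threads, prompts, tool traces)
// is ready to use.
export function startSession(user: ActiveUser): string {
  if (!config) {
    throw new Error("io.Assist must be configured at bootstrap first");
  }
  return `session for ${user.displayName} against ${config.serverUrl}`;
}
```

The point of the sketch is the ordering constraint: configuration happens once at bootstrap, and only then can components bind a user and get a live session.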

AI Server is the backend layer that frontend assistant clients rely on. The first released item is AI Mastra Bridge, a backend package that turns a Mastra application into a clean bridge service capable of streaming agent runs, exposing agents, and managing conversation threads over a stable HTTP API that implements the io.Intelligence Agent Protocol.
AI Server Documentation
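From a client's perspective, a streamed agent run looks like any other streaming HTTP response. The sketch below shows one plausible way to consume such a stream; the endpoint URL, event names, and payload shapes are assumptions for illustration, not the actual io.Intelligence Agent Protocol.

```typescript
// Illustrative only: event types and field names are assumptions,
// not the actual io.Intelligence Agent Protocol.
type AgentEvent =
  | { type: "token"; text: string }
  | { type: "tool_call"; name: string }
  | { type: "done" };

// Parse one server-sent-events style chunk into typed events.
export function parseAgentChunk(chunk: string): AgentEvent[] {
  return chunk
    .split("\n")
    .filter((line) => line.startsWith("data: "))
    .map((line) => JSON.parse(line.slice("data: ".length)) as AgentEvent);
}

// A client might then stream a run like this (URL is hypothetical):
export async function streamRun(
  prompt: string,
  onEvent: (e: AgentEvent) => void
): Promise<void> {
  const res = await fetch("https://bridge.example.com/runs", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    for (const event of parseAgentChunk(decoder.decode(value))) {
      onEvent(event);
    }
  }
}
```

Keeping the chunk parser a pure function, separate from the network loop, makes the protocol handling easy to test independently of a live bridge.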

This release also includes MCP Apps support, with the ability to render MCP Apps inline or in workspaces. We believe this represents a significant step forward for AI UX patterns in the enterprise.

Full documentation is available at: https://docs-ai.interop.io

What’s ahead:

Our roadmap includes Code Mode, which allows agents to write code that calls tools programmatically instead of requiring each tool to be described individually to the LLM. This significantly reduces the amount of context a model needs to hold at once. We’re also investing in context window management and OTEL-based observability to give teams full visibility as they deploy io.Intelligence in production.
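The idea behind Code Mode can be illustrated with a toy example. The registry and tool names below are hypothetical, not the shipped API: instead of serializing every tool schema into the model's context, the agent emits a small program against a single generic call surface, so context cost stays flat as the tool catalog grows.

```typescript
// Hypothetical tool registry; names and shapes are illustrative only.
const tools: Record<string, (...args: any[]) => unknown> = {
  getQuote: (symbol: string) => ({ symbol, price: 101.5 }),
  round: (n: number) => Math.round(n),
};

// In classic tool calling, every tool's schema must be described to the
// LLM up front. In Code Mode, the model instead writes code against one
// generic call(name, ...args) surface.
export function call(name: string, ...args: unknown[]): unknown {
  const tool = tools[name];
  if (!tool) throw new Error(`unknown tool: ${name}`);
  return tool(...args);
}

// Example of agent-written code composing two tools programmatically,
// without either schema occupying the model's context window:
export function agentProgram(): number {
  const quote = call("getQuote", "ACME") as { price: number };
  return call("round", quote.price) as number;
}
```

With hundreds of tools, the classic approach pays a context cost per tool on every request, while the Code Mode approach pays only for the code the agent actually writes.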

To learn more or explore how io.Intelligence can support your AI strategy, reach out to our team. If you have questions, comments, or ideas for improvement, comment here or reply to the email thread.