Measuring ROI

Have you been asked to measure ROI or to prove efficiency gains?

Is it taken for granted that connecting systems and better UX will bring benefits (saving time, reducing errors), or do your stakeholders expect evidence, e.g. metrics, estimates, surveys, or something else? If so, what do you use?

This is something we often think about. We often frame it as outcome vs output (or leading/lagging indicators). The outcome is the thing we are trying to influence; an output is something that influences the outcome. It is often very hard to measure our impact on the outcome directly, so we use outputs as proxy measures.

As an example, we may ultimately be trying to improve best execution (the outcome), but there are many things that can influence that, our software being just one of them. So what we do is try to identify outputs that could act as leading indicators.

You can start off simply with user adoption metrics:

  • How many of our users rely on our platform on a regular basis, measured as the percentage of the user base onboarded (the simplest measure)
  • Number of key FDC3 invocations across the platform (see the sketch after this list)
  • Usage of the legacy app that you are trying to migrate people away from
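
To make the FDC3 measure concrete, here is a minimal TypeScript sketch, assuming an FDC3 2.x desktop agent is present; `recordMetric` is a hypothetical hook into whatever analytics sink you already have:

```typescript
import { raiseIntent } from "@finos/fdc3";
import type { Context, IntentResolution } from "@finos/fdc3";

// Hypothetical hook into your analytics sink (logging, OTel, a REST endpoint...).
declare function recordMetric(name: string, tags: Record<string, string>): void;

// Wrap raiseIntent so every invocation is counted before being forwarded.
export async function raiseIntentCounted(
  intent: string,
  context: Context
): Promise<IntentResolution> {
  recordMetric("fdc3.intent.raised", { intent, contextType: context.type });
  return raiseIntent(intent, context);
}
```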

As you get more sophisticated, you could think about performance stats:

  • Latency on your key workflows is probably where I would start.
  • Then look at measures like the average time from order arrival until first execution (there is a sketch after this list)
  • This would need to be broken down by asset class or product type to be truly useful
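
As an illustration (not a prescription), the time-to-first-execution measure can start as simply as stamping arrivals and emitting a tagged timing on the first fill. `emitTiming` and the two event hooks below are hypothetical placeholders for your own event model and metrics sink:

```typescript
// Hypothetical metrics hook; replace with whatever sink you use.
declare function emitTiming(
  name: string,
  valueMs: number,
  tags: Record<string, string>
): void;

const arrivals = new Map<string, number>(); // orderId -> arrival timestamp (ms)

export function onOrderArrived(orderId: string): void {
  arrivals.set(orderId, performance.now());
}

export function onFirstExecution(orderId: string, assetClass: string): void {
  const arrivedAt = arrivals.get(orderId);
  if (arrivedAt === undefined) return; // arrival never seen; skip rather than guess
  arrivals.delete(orderId); // ensures only the *first* execution is timed
  emitTiming("order.time_to_first_execution_ms", performance.now() - arrivedAt, {
    assetClass, // tag so the numbers can be broken down by asset class
  });
}
```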

After that, I would see if there are some quality-related measures, e.g. how often a trader accepts an algo suggestion provided by the system versus overriding it and going with their gut instead (frequent overrides could indicate issues with your suggestion logic).
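
A sketch of how that could be captured, again with a hypothetical `emitCount` hook; the acceptance rate is then derived downstream from the two tagged series:

```typescript
// Hypothetical metrics hook; replace with whatever sink you use.
declare function emitCount(name: string, tags: Record<string, string>): void;

// Call this wherever a trader acts on an algo suggestion, so accepts and
// overrides can be compared per algo over time.
export function onSuggestionResolved(algo: string, accepted: boolean): void {
  emitCount("algo.suggestion.resolved", {
    algo,
    outcome: accepted ? "accepted" : "overridden",
  });
}
```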

None of these outputs should be taken in isolation; they work in concert with each other and should be seen as related. However, any one individual output getting worse should be taken as a potential early warning that something is amiss and needs further investigation.


Typically we get customers to estimate how much budget they’ve had to steal from strategic budgets to fund unplanned re-work that was a consequence of “saving money” by not doing user research at the beginning of, and during, the release process.


Measuring pre- and post-interop user productivity improvements has been a challenge. Our stakeholders expect us to measure it via metrics. I’m not sure there is a silver bullet out there, but pre-interop process measures (where they existed) would make the task somewhat easier, in the sense that there would be a precedent to compare against. Where nothing is measured today, the expectation is to start now and keep improving by identifying the bottlenecks. Simple numbers, like the time taken for a workflow from start to finish, would be a great stepping stone in the right direction.

We have tried to use general web analytics tools like Google Analytics, but it has been a struggle to get everything instrumented just right.


Interesting discussion. You’re all basically describing exactly the gaps we’ve been trying to solve with io.Insights, so I thought I’d share how we are approaching it.

At a high level, io.Insights is an OpenTelemetry-based observability layer baked into io.Connect Desktop/Browser. Instead of wiring Google Analytics or hand-rolled logging into every app, the platform itself emits metrics, traces and logs about app usage and user journeys, and you can plug that straight into whatever stack you already use (Prometheus, Grafana, Jaeger, Datadog, etc.).

You can use its out-of-the-box metrics to get started quickly and understand leading indicators like which apps/workspaces are actually used, how often, how long they run, crashes/errors, etc.

Because the io.Connect APIs themselves are instrumented, calls to interop methods, intents, contexts, etc. create spans such as interopio.api.interop.invoke and interopio.api.contexts.update.
Those spans can be turned into metrics (“traces as metrics”), so you can, for example, easily count:

  • Number of specific intents invoked across the platform

  • Which apps act as providers vs consumers

  • Error rates / latency for key interop hops

    This gives you the platform-level view of FDC3 usage Nicholas mentioned (outputs that are plausible leading indicators for outcomes like better execution or fewer errors); a generic sketch of the span-to-metric pattern follows.
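
For anyone curious what this looks like mechanically, here is a generic OpenTelemetry JS sketch of the span-to-metric pattern (io.Insights does this for you out of the box; this is not its actual implementation). A custom SpanProcessor counts finished spans by name, so interop calls surface as a counter:

```typescript
import { metrics, SpanStatusCode } from "@opentelemetry/api";
import type { Context } from "@opentelemetry/api";
import { ReadableSpan, Span, SpanProcessor } from "@opentelemetry/sdk-trace-base";

const meter = metrics.getMeter("interop-usage");
const interopCalls = meter.createCounter("interop_calls_total", {
  description: "Finished interop spans, by operation and error status",
});

export class SpanCountProcessor implements SpanProcessor {
  onStart(_span: Span, _context: Context): void {}

  onEnd(span: ReadableSpan): void {
    // Count only platform API spans, e.g. interopio.api.interop.invoke.
    if (span.name.startsWith("interopio.api.")) {
      interopCalls.add(1, {
        operation: span.name,
        error: String(span.status.code === SpanStatusCode.ERROR),
      });
    }
  }

  shutdown(): Promise<void> {
    return Promise.resolve();
  }
  forceFlush(): Promise<void> {
    return Promise.resolve();
  }
}
```

Register the processor on your tracer provider and the counter becomes queryable in Prometheus/Grafana like any other metric.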

You can use the same “traces as metrics” approach to time workflows, establish baselines, and get hard data that the interop workflow optimizations you have made are indeed improving the lives of your users; a sketch of the producing side follows.
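
On the producing side, timing a workflow is just wrapping it in a span. A hedged sketch (the workflow name and attribute here are made up; the API is the standard OpenTelemetry one):

```typescript
import { trace, SpanStatusCode } from "@opentelemetry/api";

const tracer = trace.getTracer("workflows");

// Placeholder for your existing workflow steps.
declare function executeWorkflow(): Promise<void>;

export async function runTicketToOrderWorkflow(assetClass: string): Promise<void> {
  // The span measures everything between here and span.end().
  await tracer.startActiveSpan("workflow.ticket_to_order", async (span) => {
    span.setAttribute("assetClass", assetClass);
    try {
      await executeWorkflow();
    } catch (e) {
      span.setStatus({ code: SpanStatusCode.ERROR });
      throw e;
    } finally {
      span.end();
    }
  });
}
```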

Note that io.Insights Traces and Logs are currently in beta and provided on demand to customers. They will be made GA in Q1 2026.

Let me know if you have any questions or want to discuss specific use cases. I’m happy to explore with you how io.Insights can help you put this observability in place, prove ROI, and measure the benefits of your work.


I really hope interop.io releases some ready-to-use Grafana templates to speed up this process.
