Have you been asked to measure ROI or prove efficiency gains?
Is it taken for granted that connecting systems and better UX will bring benefits (saving time, reducing errors), or do your stakeholders expect evidence, e.g. metrics, estimates, or surveys? If so, what do you use?
This is something we often think about. We frame it as outcome vs. output (or lagging vs. leading indicators). The outcome is the thing we are ultimately trying to influence; the output is something that influences the outcome. It is often very hard for us to measure impact on the outcome directly, so we use outputs as proxy measures.
As an example, we may ultimately be trying to improve best execution (the outcome), but many things can influence that, our software being just one of them. So what we do is try to identify outputs that could serve as leading indicators.
You can start off simply with user adoption metrics:
- Percentage of the user base onboarded and relying on the platform on a regular basis (the simplest measure)
- Number of key FDC3 invocations across the platform
- Usage of the legacy app that you are trying to migrate people away from
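A minimal sketch of how adoption metrics like these could be aggregated from platform telemetry. The event schema and field names here are illustrative assumptions, not part of the FDC3 standard or any particular platform:

```python
# Hypothetical usage events captured by the platform's own telemetry;
# the "user"/"event" fields and event names are illustrative assumptions.
events = [
    {"user": "alice", "event": "fdc3.raiseIntent"},
    {"user": "alice", "event": "legacy.appOpened"},
    {"user": "bob",   "event": "fdc3.raiseIntent"},
    {"user": "carol", "event": "fdc3.broadcast"},
]
user_base = {"alice", "bob", "carol", "dave"}  # everyone onboarded so far

# Adoption: share of the user base seen in the telemetry at all.
active_users = {e["user"] for e in events}
adoption_pct = 100 * len(active_users) / len(user_base)

# Key FDC3 invocations vs. residual legacy-app usage.
fdc3_calls = sum(1 for e in events if e["event"].startswith("fdc3."))
legacy_calls = sum(1 for e in events if e["event"].startswith("legacy."))

print(f"adoption: {adoption_pct:.0f}%")  # 3 of 4 users active -> 75%
print(f"FDC3 invocations: {fdc3_calls}, legacy usage: {legacy_calls}")
```

In practice you would bucket these by week or month so the trend (adoption rising, legacy usage falling) is visible, not just the snapshot.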
As you get more sophisticated, you could look at performance stats:
- Latency on your key workflows is probably where I would start.
- Then look at measures like average time from order arrival until first execution
- This would need to be broken down by asset class or product type to be truly useful
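The "time from order arrival to first execution, broken down by asset class" measure above can be sketched in a few lines; the order-record format is an assumption for illustration:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical order records; the field names and timestamps are
# illustrative, not taken from any real OMS.
orders = [
    {"asset_class": "equity", "arrived": "2024-03-01T09:00:00", "first_exec": "2024-03-01T09:00:02"},
    {"asset_class": "equity", "arrived": "2024-03-01T09:05:00", "first_exec": "2024-03-01T09:05:06"},
    {"asset_class": "fx",     "arrived": "2024-03-01T09:01:00", "first_exec": "2024-03-01T09:01:01"},
]

# Group arrival-to-first-execution latency (seconds) by asset class.
latencies = defaultdict(list)
for o in orders:
    seconds = (datetime.fromisoformat(o["first_exec"])
               - datetime.fromisoformat(o["arrived"])).total_seconds()
    latencies[o["asset_class"]].append(seconds)

for asset_class, samples in latencies.items():
    print(f"{asset_class}: avg {sum(samples) / len(samples):.1f}s "
          f"over {len(samples)} orders")
```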
After that, I would see if there are some quality-related measures, e.g. how often a trader accepts an algo suggestion provided by the system vs. overriding it and going with their gut instead (frequent overrides could indicate issues with your suggestion logic).
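That acceptance-vs-override measure reduces to a simple rate over a decision log; the log format below is a hypothetical sketch:

```python
# Hypothetical decision log: one entry per algo suggestion shown to a trader.
decisions = ["accepted", "accepted", "overridden", "accepted", "overridden"]

accept_rate = decisions.count("accepted") / len(decisions)
print(f"suggestion acceptance rate: {accept_rate:.0%}")  # 3/5 -> 60%
```

A falling acceptance rate is exactly the kind of single-output early warning mentioned below: it doesn't tell you what is wrong, only that the suggestion logic deserves a closer look.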
None of these outputs should be taken in isolation; they work in concert with each other and should be viewed as related. However, any individual output getting worse should be treated as a potential early warning that something is amiss and needs further investigation.
3 Likes
Typically we get customers to estimate how much budget they’ve had to steal from strategic initiatives to fund unplanned rework that was a consequence of “saving money” by not doing user research at the beginning of, and during, the release process.
2 Likes
Measuring pre- vs. post-interop user productivity improvements has been a challenge. Our stakeholders expect us to measure it via metrics. I'm not sure there is a silver bullet out there that can help with this, but pre-interop process measures (if they existed) would make the task somewhat easier, in the sense that there would be a precedent to compare against. Where nothing is measured currently, the expectation is to start now and keep improving by identifying the bottlenecks. Simple numbers, like the time taken for a workflow from start to finish, would be a great stepping stone in the right direction.
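One lightweight way to start collecting those start-to-finish workflow timings is to wrap workflow entry points with a timing decorator. This is a sketch, not code from any specific product; the workflow name and function are made up:

```python
import time
from functools import wraps

# Accumulated durations per workflow name, in seconds.
workflow_timings: dict[str, list[float]] = {}

def timed_workflow(name):
    """Record the wall-clock duration of each invocation under `name`."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                workflow_timings.setdefault(name, []).append(
                    time.perf_counter() - start)
        return wrapper
    return decorator

@timed_workflow("order-entry")
def submit_order(ticker, qty):
    time.sleep(0.01)  # stand-in for the real workflow body
    return f"submitted {qty} {ticker}"

submit_order("ABC", 100)
samples = workflow_timings["order-entry"]
print(f"order-entry avg: {sum(samples) / len(samples) * 1000:.1f} ms")
```

Even crude timings like this give you the pre-interop baseline the post says is missing, so the post-interop comparison has something to stand on.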
We have tried to use general web analytics tools like Google Analytics, but it has been a struggle to get everything instrumented just right.