Calabrio needed a better understanding of user sentiment for its software. Prior to this project, the company had never had any standard measures in place.
Create a system to track user sentiment over time related to software changes and updates
Identify key pain points for users across all product lines over time
Establish a baseline understanding of users' perceptions of different product lines to feed into other tools, like the product health score
I am responsible for all aspects of the ongoing product UX metrics, from fielding the surveys to making product recommendations. Based on the need for a short survey, I recommended using a revised UMUX-Lite to measure user sentiment over time, with a focus on ease and utility.
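For context, the standard UMUX-Lite is just two 7-point items, one on perceived usefulness and one on ease of use, which is what makes it practical for short, recurring surveys. Below is a minimal sketch of the usual 0-100 rescaling, assuming the standard two-item, 7-point version; the function name and sample data are illustrative, not Calabrio's actual pipeline:

```python
from statistics import mean

def umux_lite_score(usefulness: int, ease: int, scale_max: int = 7) -> float:
    """Rescale the two UMUX-Lite items to a 0-100 score.

    Each item is a Likert rating from 1 to scale_max; the standard
    version of the instrument uses a 7-point scale.
    """
    max_sum = 2 * (scale_max - 1)  # highest achievable raw sum above the floor
    return (usefulness - 1 + ease - 1) / max_sum * 100

# Example: average score across a batch of survey responses,
# e.g. one month's worth for a single product line.
responses = [(6, 5), (7, 7), (4, 3), (5, 6)]  # (usefulness, ease) pairs
monthly_score = mean(umux_lite_score(u, e) for u, e in responses)
print(f"UMUX-Lite: {monthly_score:.1f}")
```

Because the score is a simple linear rescaling of the two ratings, month-over-month movement in the score maps directly to movement in the underlying responses, which keeps trend lines easy to explain to stakeholders.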
The UX metrics gave us a way to measure change in our software over time and to compare different products against each other.
The research also uncovered key difficulties users were having that were not showing up in other user outreach channels. With those findings, we moved forward with changes in specific areas, like the log-in experience, to make the software easier to use.
Because this was the first time Calabrio had collected regular metrics on its software over time, I wanted to make sure the findings were accessible and actionable for everyone, from product owners to executive leadership. In addition to regular metrics and insights decks, I also integrated the metrics with our PowerBI product dashboards (a sketch of one way to do this follows the list below). Some of the assets I created are:
Regular monthly slide for the board and executive leadership
Monthly user feedback and insights reports
Yearly insight assessment for roadmap support
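For the dashboard integration, one common option is the Power BI REST API's push-dataset endpoint, which accepts new rows over HTTPS. Here is a minimal sketch of that approach; the access token, dataset ID, table name, and row schema are all placeholders invented for illustration, and this is one integration option rather than a record of how Calabrio's pipeline was actually built:

```python
import requests

# Placeholders: obtain a real Azure AD access token and the IDs of an
# existing Power BI push dataset/table in your workspace.
ACCESS_TOKEN = "<azure-ad-token>"
DATASET_ID = "<dataset-id>"
TABLE = "UmuxScores"

url = (
    "https://api.powerbi.com/v1.0/myorg/"
    f"datasets/{DATASET_ID}/tables/{TABLE}/rows"
)

# Hypothetical monthly scores, one row per product line.
rows = [
    {"product": "Product A", "month": "2023-06", "umuxLite": 74.2},
    {"product": "Product B", "month": "2023-06", "umuxLite": 68.9},
]

resp = requests.post(
    url,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={"rows": rows},
    timeout=30,
)
resp.raise_for_status()  # a 200 response means the rows landed in the dataset
```

Pushing scores into the same dashboards product teams already use keeps the UX metrics next to the operational data, rather than in a separate report people have to remember to open.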
The primary challenge for this project was creating culture change within an organization that was not used to collecting user metrics on its products.
Because of this challenge, I spent a lot of time upfront discussing the metrics with different product teams, and after results started rolling in, I held ongoing discussions on how to interpret the scores. Instead of treating scores as "good" or "bad", teams were encouraged to think about how they could address usability problems to improve the software (and, with it, the scores).