Conversations Everywhere - Measuring integration success

  • 26 January 2018


We all make assumptions about user behavior and about the effects our communication channels have on our products and services. Even if those assumptions are often close to the truth, we must measure and validate them.



Only then can we be sure our efforts are in line with business goals. The same is true for social layer integrations: data-driven improvement is a must if we want to call them a success.



Analyzing direct effects

Channel integrations using community content come in many shapes and forms. Some employ links to the community the user can click through, while others only display content or offer community functionality to be used right there and then. In the first case, traffic analysis tools such as Google Analytics or Adobe SiteCatalyst work well for gathering and analyzing click data across the full customer journey. A/B testing variations of the widget is another popular option. Since measuring traffic is not possible for isolated content and functionality, research methods such as user surveys or focus groups can help in those scenarios. In any case, think about the direct effects an integration can have on user behavior, and prepare ways to quantify them. You’ll be glad you did.
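
To make the A/B testing idea concrete: a simple way to compare two widget variants is a two-proportion z-test on their click-through rates. Below is a minimal sketch using only Python’s standard library; the click and view counts are made up for illustration.

```python
# Hypothetical example: comparing click-through rates of two widget
# variants with a two-proportion z-test (Python standard library only).
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(clicks_a, views_a, clicks_b, views_b):
    """Return the z statistic and two-sided p-value for the difference
    between two click-through rates."""
    p_a = clicks_a / views_a
    p_b = clicks_b / views_b
    # Pooled proportion under the null hypothesis of no difference.
    pooled = (clicks_a + clicks_b) / (views_a + views_b)
    se = sqrt(pooled * (1 - pooled) * (1 / views_a + 1 / views_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Illustrative numbers, not real data: variant A got 120 clicks out of
# 2,400 views, variant B got 158 clicks out of 2,350 views.
z, p = two_proportion_z_test(120, 2400, 158, 2350)
print(f"z = {z:.2f}, p = {p:.4f}")
```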



Best practices in traffic analytics

Tracking user behavior with Google’s or Adobe’s software is near-universal. But did you know these tools are often set up in ways that make measuring traffic effects for integrations restricted, or even impossible? Here are two best practices to prevent this. First, don’t fall into the trap of directing site and community traffic into separate properties or views: if traffic data measured across domains is not stored in one bucket, analyzing the full customer journey becomes nearly impossible. Second, make sure all inbound community links carry properly tagged source and campaign identifiers. Only then will you be able to identify the customers benefiting from the integration and learn how their journey has improved.
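
To illustrate the second tip, here is a minimal sketch of tagging inbound community links with UTM-style source and campaign identifiers, using only Python’s standard library. The parameter names follow the common utm_* convention; the values are assumptions for illustration, so use whatever naming scheme your analytics setup expects.

```python
# A minimal sketch of tagging inbound community links with source and
# campaign identifiers (UTM parameters). The values below are
# illustrative, not prescribed.
from urllib.parse import urlencode, urlparse, urlunparse, parse_qsl

def tag_community_link(url, source, medium, campaign):
    """Append utm_source, utm_medium and utm_campaign to a URL,
    preserving any query parameters it already has."""
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))
    query.update({
        "utm_source": source,
        "utm_medium": medium,
        "utm_campaign": campaign,
    })
    return urlunparse(parts._replace(query=urlencode(query)))

print(tag_community_link(
    "https://community.example.com/topic/123",
    source="service-page",
    medium="widget",
    campaign="faq-widget",
))
# https://community.example.com/topic/123?utm_source=service-page&utm_medium=widget&utm_campaign=faq-widget
```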



Shortcomings of direct metrics

Say we want to add a “frequently asked questions” community widget to our customer service page, listing frequently asked and answered question topics. If the integration is aimed at directing quality traffic towards the customer community, it makes sense to measure clicks as a direct KPI. While this tells us how well the widget performs, we shouldn’t lose sight of our end goal. In this case, the relevant business metric to monitor will probably be call deflection. Are all those clicks really helpful to the end user, i.e. is the community capable of solving the issue at hand? Or will they still call customer service afterwards?
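
One way to connect the click KPI to call deflection is to check, for every user who clicked through to the community, whether a support call still followed within some window. The sketch below assumes you can export (user, timestamp) pairs for both widget clicks and support calls; the identifiers, timestamps and the three-day window are all hypothetical.

```python
# Rough sketch: of the users who clicked through to the community, how
# many still called customer service within a follow-up window?
from datetime import datetime, timedelta

FOLLOW_UP_WINDOW = timedelta(days=3)

# (user_id, timestamp) pairs, e.g. exported from your analytics tool.
widget_clicks = [
    ("u1", datetime(2018, 1, 10, 9, 30)),
    ("u2", datetime(2018, 1, 10, 11, 5)),
    ("u3", datetime(2018, 1, 11, 14, 20)),
]
support_calls = [
    ("u2", datetime(2018, 1, 11, 10, 0)),  # u2 still called: not deflected
]

calls_by_user = {}
for user, ts in support_calls:
    calls_by_user.setdefault(user, []).append(ts)

# A click counts as deflected if no call followed within the window.
deflected = sum(
    1 for user, clicked_at in widget_clicks
    if not any(clicked_at <= call <= clicked_at + FOLLOW_UP_WINDOW
               for call in calls_by_user.get(user, []))
)
print(f"Deflection rate: {deflected / len(widget_clicks):.0%}")  # 67%
```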



Benefits from indirect metrics

Direct metrics describing user behavior are only part of the story. We should also monitor relevant circumstantial metrics that support our hypotheses for wanting to expand the social layer. Let’s continue with the FAQ widget example. Tracking the number of calls to customer support would be a great metric, and community activity indicators such as the number of question topics started can also help. Assuming the widget has the positive effects we expect, both should be influenced from the moment it is introduced in the customer journey. Inferential statistics can then provide informed clues about the integration’s success. Keeping track of launch dates for integrations will therefore prove a smart move.
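
As a hypothetical illustration of such inferential statistics, the sketch below compares average daily call volume before and after a launch date using Welch’s t-test from scipy. The numbers are made up; in practice you would pull daily counts from your call logs around the logged launch date.

```python
# Hypothetical sketch: did daily call volume change after the widget
# launched? Welch's t-test does not assume equal variances.
from scipy import stats

calls_before = [212, 198, 230, 205, 221, 199, 218, 226, 210, 204]  # daily counts
calls_after  = [188, 176, 195, 181, 190, 172, 185, 179, 191, 183]

t_stat, p_value = stats.ttest_ind(calls_before, calls_after, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

A small p-value suggests the drop in calls is unlikely to be chance alone. Keep in mind that other changes around the launch date could also explain it, so treat the result as a clue, not proof.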



Action plan

Turn statistics into actionable insights by establishing a sense of the expected impact before you start. Use the following tips to make sure you’re prepared:




  1. List all direct and indirect KPIs that would tell you something about the expected effects on community success and/or business goals.
  2. Verify the KPI data is accessible to you and reliable for your use case.
  3. Establish a baseline for these KPIs at least a month prior to the integration’s release to production.
  4. Start monitoring/logging these KPIs at set intervals and stick with the scheme for at least 6-8 weeks (a minimal logging sketch follows this list).
  5. Plan ahead for evaluating these KPIs, so you can decide whether to continue the experiment and what to adjust.
  6. Reserve double capacity from the expert colleagues involved in integration development (e.g. front-end development, UX, data science, copywriting), both for the initial release and for executing proposed improvements after 6-8 weeks.
  7. Build, test, and release your integration on your customer channel.
  8. Log the exact date of the push to production.
  9. Share and celebrate this go-live moment with your inSided consultant.
  10. Iterate with small improvements after analyzing your KPIs.
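
For step 4, here is a minimal logging sketch, assuming Python and a local CSV file; the KPI names and the fetch_kpis function are placeholders for your own data sources.

```python
# A minimal sketch of step 4: logging a daily KPI snapshot to a CSV file
# so you can build a baseline before launch and compare afterwards.
import csv
from datetime import date
from pathlib import Path

LOG_FILE = Path("kpi_log.csv")
KPI_NAMES = ["widget_clicks", "support_calls", "community_topics_started"]

def fetch_kpis():
    # Placeholder: replace with calls to your analytics / support systems.
    return {"widget_clicks": 0, "support_calls": 0, "community_topics_started": 0}

def log_daily_snapshot():
    new_file = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["date"] + KPI_NAMES)
        kpis = fetch_kpis()
        writer.writerow([date.today().isoformat()] + [kpis[k] for k in KPI_NAMES])

if __name__ == "__main__":
    log_daily_snapshot()  # run once per day, e.g. from a scheduled job
```

Run it daily, starting at least a month before go-live, and the baseline from step 3 comes along for free.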
