
Marketing Attribution for not-for-profits

Multi-touch attribution (MTA) is all about evaluating the effectiveness of marketing investments by examining the role they play in generating steps in the customer journey. It was initially developed for ecommerce, but not all MTA is about sales.


In the case of not-for-profit organisations (NFPs), it could be about attributing marketing to successful volunteer sign-ups, or donations. Not all customer journey steps are equal – and for accurate results, it is important to ensure that the approach used includes all the relevant channels, and that calculations can include every event in the journeys towards the target outcomes.


The greater the transparency, the better the insight – and the more opportunities there are to optimise marketing to help achieve NFPs’ marketing goals.


UniFida has recently been working for an NFP, an alcohol awareness charity whose objective is to lead people towards areas of its website that offer support. In this case we have been able to plot the customer journeys leading up to the use of one or other of these tools, and from these evaluate the contribution of the charity's different digital marketing campaigns.


The same approach can be used for evaluating marketing in a number of other areas, for example test drives for cars, university courses, or media content. Whatever the application, under the bonnet the technology and the analytics remain very much the same.


How does MTA work?


1 Collating online and offline customer journeys


This requires a customer data platform (CDP) or equivalent to bring together, for instance, web browsing actions with email opens, and perhaps phone calls and direct mail sends. Each journey will end with the designated result, but the steps to get there need to be assembled in time sequence.


This requires a 100% data feed from the website (from which we collect first-party data), a feed from the email service provider, and a contact history file for the direct mail if needed. The average number of steps in a journey is around three, but the average is deceptive: journeys can range from one step up to ten or more. Existing customers also usually take more steps than new ones.


In this example, the journey started on 12 March and ended with a sale on 1 June. The recipient opened two emails and a catalogue, and made two visits to the website (one via PPC, the other direct).
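A journey like this can be assembled by merging the separate feeds into one time-ordered list. Here is a minimal sketch in Python; the dates and the simple (timestamp, channel, action) format are invented for illustration, and a real CDP would join much richer records on a shared customer ID:

```python
from datetime import datetime

# Illustrative feeds for a single customer (invented dates).
web_events = [
    (datetime(2023, 5, 20), "PPC", "site visit"),
    (datetime(2023, 6, 1), "Direct", "site visit"),
]
email_events = [
    (datetime(2023, 3, 12), "Email", "open"),
    (datetime(2023, 4, 2), "Email", "open"),
]
mail_events = [
    (datetime(2023, 4, 18), "Catalogue", "send"),
]

def assemble_journey(*feeds):
    """Merge the event feeds and sort the steps into time sequence."""
    steps = [event for feed in feeds for event in feed]
    return sorted(steps, key=lambda step: step[0])

journey = assemble_journey(web_events, email_events, mail_events)
for timestamp, channel, action in journey:
    print(timestamp.date(), channel, action)
```

The sort by timestamp is what turns three unrelated feeds into a single journey ending at the designated result.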


2 Weighting each step in the customer journey


Not all steps are equal, and they can play different roles: initiating a journey, maintaining a customer's interest, or helping to reach the end result. We use an algorithm to weight the steps, and the weighting also takes account of the role each one plays. We split the roles into Initialiser, Holder and Closer.


We train the algorithm to take account of the particular characteristics of the journeys for each individual client, as they can vary considerably. Higher-priced items, or choices with more consequences such as university courses, tend to have longer journeys as more consideration is usually required. The algorithm responds to the time periods before and after each step as well as, for a browsing step, the level of engagement with the website.


To continue with the same purchase example above, we have now added in the scores for each step in the journey. The IHC score is the combination of the Initialiser, Holder and Closer scores. Each column adds up to a total of 2, so a 0.4978 IHC score gives that step a quarter of the sale value. The two emails received some credit as Initialisers, but not as much as they would have done if the sale date had been earlier, whereas the Closer rewards went to the catalogue and PPC.
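As an illustration of the arithmetic, a step's share of the sale is its IHC score divided by the column total of 2. The step names and every score except the 0.4978 figure below are invented for the sketch:

```python
# Invented IHC scores for a five-step journey; only the 0.4978 figure
# comes from the worked example, and the column total of 2 follows the text.
ihc_scores = {
    "email open 1": 0.2511,
    "email open 2": 0.2511,
    "catalogue": 0.5000,
    "ppc visit": 0.4978,
    "direct visit": 0.5000,
}

def step_credit(ihc, all_scores, sale_value):
    """A step's share of the sale value is its IHC score over the column total."""
    return sale_value * ihc / sum(all_scores)

sale_value = 100.0
credit = step_credit(0.4978, ihc_scores.values(), sale_value)
print(round(credit, 2))  # 24.89: roughly a quarter of the sale value
```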


3 Aggregating the results


Some marketers are interested in puzzling over individual journeys to understand how channels work together for different customers. Others want accurate answers to questions like ‘How did the email test campaign do?’, or ‘Do I get better returns from using Facebook at certain times of the year?’.


To answer these kinds of questions we have to aggregate up the results. If we take an email campaign, for example, it will have contributed to steps in many different customer journeys, some of which will have led to a successful outcome, and usually a much larger number that have not.


We only look at the opened emails within successful journeys and give a value to each of these steps: the value of the individual outcome multiplied by the fraction of the overall journey that the step contributed. So, mathematically, if the outcome is worth £50, and the step has contributed 30% of the journey, then that step is judged to be worth £15.


In this way every step in a successful journey gets a value, and these can be summed to give a value to the overall campaign. This example helps explain how fundamental the weighting approach is to the evaluation of campaign results: give the step just 10% of the journey and it is only worth £5.
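The arithmetic in the two paragraphs above can be written out directly (currency units omitted; the 50, 30% and 10% figures are from the worked example):

```python
def step_value(outcome_value, journey_fraction):
    """Value credited to a step: outcome value times its share of the journey."""
    return outcome_value * journey_fraction

# A 50-unit outcome with a 30% contribution...
print(step_value(50, 0.30))  # 15.0
# ...and the same step at only a 10% contribution.
print(step_value(50, 0.10))  # 5.0
```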


Aggregation can provide answers to a number of different questions. It can sum results up to an individual campaign, or to a channel, usually for a given time period, such as PPC in July. By looking at different time periods for the same channel we can understand the impact of seasonality on response. But we can also turn the data around and look at how different customer groups respond to different types of marketing.


We have already mentioned the differences in behaviour between new and existing customers. In the charity example above, we might also ask whether different campaigns lead people to different tools in different parts of the website. A car vendor might, for example, want to know which media are better at driving test drives for different models.


Here is an example of an aggregated report, taking the channel view for a particular time period. Each channel has a share of the overall value based on the steps it contributed that led up to completed sales. It is interesting to see that, for every channel, the number of sales impacted is greater than its attributed share of sales. So, if in this case the marketer decided to stop sending out catalogues, then 26,478 sales would have been impacted, and many of them might not have happened.
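A minimal sketch of this kind of aggregation, with invented step-level records: for each channel it sums the credited value and counts the distinct sales touched, which is why "sales impacted" can exceed a channel's attributed share of the value.

```python
from collections import defaultdict

# Invented step-level records: (channel, credited value, sale id).
credited_steps = [
    ("PPC", 12.5, "sale-1"),
    ("Email", 7.5, "sale-1"),
    ("Catalogue", 30.0, "sale-1"),
    ("PPC", 20.0, "sale-2"),
    ("Email", 5.0, "sale-3"),
]

def channel_report(steps):
    """Per channel: total credited value, and count of distinct sales touched."""
    value = defaultdict(float)
    sales_touched = defaultdict(set)
    for channel, credit, sale_id in steps:
        value[channel] += credit
        sales_touched[channel].add(sale_id)
    return {channel: (value[channel], len(sales_touched[channel])) for channel in value}

report = channel_report(credited_steps)
for channel, (total_value, sales_count) in report.items():
    print(channel, total_value, sales_count)
```

Filtering the records by date before aggregating gives the per-period views, such as PPC in July, described above.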


4 Avoiding the ‘black box’ label


There is a natural fear of basing decision making on results created by unknown algorithms that live inside impenetrable black boxes. Happily, this does not need to apply to MTA, as long as the technology can provide a table that shows the values given to every step in every customer journey. With the individual scores visible, the results can be challenged.


We sometimes set up manual scoring systems that give declining weightings to events that are further away in time from the ‘result’. This is a common-sense approach, and is useful for comparing its outcome with what the algorithms have come up with. For the more mathematically inclined, it is also important to be able to challenge how the algorithms themselves are programmed and to receive back a full explanation.
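One way to sketch such a manual scheme, purely as an illustration (the half-life parameter and the exponential-decay shape are assumptions of this sketch, not UniFida's actual weights):

```python
def decay_weights(days_before_result, half_life_days=14.0):
    """Weight each step so credit halves every half_life_days before the
    result, then normalise the weights so they sum to 1."""
    raw = [0.5 ** (days / half_life_days) for days in days_before_result]
    total = sum(raw)
    return [weight / total for weight in raw]

# Steps 0, 7 and 28 days before the outcome (illustrative gaps).
weights = decay_weights([0, 7, 28])
print([round(weight, 3) for weight in weights])
```

Because every weight is visible and the formula fits on one line, the result is easy to challenge, which is exactly the transparency the manual benchmark is meant to provide.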


5 When does MTA not apply?


MTA works when there is a direct link between the person making the customer journey and the steps that take place. So it is not applicable to TV or outdoor advertising, for instance. For these indirect channels econometrics is the right approach, and the good news is that we can now merge the outputs from the econometrics with the MTA results to provide a truly 360-degree view.



Proof of concept


UniFida can deliver the expertise and technology ‘out of the box’ to help you automate your attribution. We can start with a low-cost proof of concept to demonstrate how attribution can be calculated for your business.


To learn more about UniFida, click here

