
3 Tips for Putting the “C”, “R”, “M” in Your Program Measurement

Nov 15, 2012


As a strategist, I live and die by having access to timely, relevant, and accurate data to advise clients on program performance and planning. And when it comes to customer relationship marketing (CRM), the marketing aspect cannot be successfully executed unless that “M” is also driven by measurement. More robust tracking and interaction management bring so many different ways to spin and track data that it can be hard to see the forest for the trees. So below are three quick tips for approaching the CRM data playground.


Create key performance metrics in advance that will guide your program, and structure reporting to track performance against those metrics. An example: do you measure your direct mail acquisition list performance based on breakeven or net income per name mailed? (Let’s say you don’t want to mail lists that don’t break even in two years or less.) Well, in a CRM-enabled world, you may want to consider what “breakeven” really means. Maybe you have a list that doesn’t do great in direct mail, but names acquired from that list end up being REALLY great captain prospects for events, or turn out to be heavily cross-referenced with your online program, so mailing them ends up bolstering their response to emails. You will want to create reporting metrics that take that into account, which may influence your offline list selection.
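To make that blended view of “breakeven” concrete, here is a minimal sketch in Python. Every figure is made up for illustration: the same acquisition list misses breakeven when judged on direct mail alone, but clears it once cross-channel revenue attributed to those names is credited.

```python
# Hypothetical illustration with invented numbers: comparing a list on a
# direct-mail-only view vs. a blended view that credits cross-channel value.

def net_per_name(revenue, cost, names_mailed):
    """Net revenue per name mailed: (revenue - cost) / names mailed."""
    return (revenue - cost) / names_mailed

# Direct-mail-only view of one acquisition list (made-up numbers)
dm_only = net_per_name(revenue=18_000, cost=22_000, names_mailed=10_000)

# Blended view: same list and cost, but also crediting event and online
# revenue later attributed to names acquired from it (also made up)
blended = net_per_name(revenue=18_000 + 9_000, cost=22_000, names_mailed=10_000)

print(f"DM-only net per name: ${dm_only:.2f}")   # negative: looks like a loser
print(f"Blended net per name: ${blended:.2f}")   # positive: worth keeping
```

The metric itself is simple; the decision-changing part is which revenue streams your reporting lets you attribute back to the list.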


Reporting is not the same as analytics. In a CRM-enabled world, you are going to be able to run A LOT of reports. Don’t let yourself drown in data. Building on the “C” above, look for opportunities to create reports that don’t just track trends but drive insight into actual performance, and this will require the human factor. So, sure, you can run a campaign report that shows how a campaign did across channels (direct mail, telemarketing, email, social media). And let’s say you ran an email test that did not win over your control. Reporting on this might lead you to conclude that the test is a loser and to move on. Analytics, meaning digging into the data and having a human spend some time looking into it, might actually reveal that while the test did not work in email, folks who got the test creative and also received a direct mail piece for the campaign gave at overall higher rates to the campaign AND drove more revenue.
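The pattern above can be sketched as a toy cross-tab. Every number here is invented purely to show the shape of the analysis: the pooled report says the test lost, while the direct-mail overlap shows it winning.

```python
# Hypothetical sketch with made-up numbers: a top-line report says the
# email test lost, but slicing by direct-mail overlap tells another story.

# (got_test_creative, got_direct_mail) -> (gifts, names in cell)
cells = {
    (False, False): (150, 10_000),
    (False, True):  (200, 10_000),
    (True,  False): (90,  10_000),   # test underperforms on its own...
    (True,  True):  (240, 10_000),   # ...but outperforms when paired with mail
}

# Reporting view: test vs. control, all email recipients pooled
for is_test, label in ((False, "control"), (True, "test   ")):
    gifts = sum(g for (t, _), (g, n) in cells.items() if t is is_test)
    names = sum(n for (t, _), (g, n) in cells.items() if t is is_test)
    print(f"{label}: {gifts / names:.2%}")   # test loses the pooled comparison

# Analytics view: only the names that also got the direct mail piece
for is_test, label in ((False, "control + DM"), (True, "test + DM   ")):
    g, n = cells[(is_test, True)]
    print(f"{label}: {g / n:.2%}")           # test wins inside the overlap
```

A standard campaign report only ever shows the first loop; it takes a person deciding to cut the data by the second dimension to surface the interaction.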


Make it count. This is the same concept as relevancy. Identify the data and reports/insight tracking that will actually impact decision making, and focus on getting at that info. Do you really need to spend hours creating individual donor performance reports if you only look at performance at a segment or package level? How important is it to track, say, gross revenue per name mailed if that’s not how your organization makes decisions? (The key metric to you might be net revenue per name mailed.) Do you agonize over understanding the performance of a segment with fewer than 300 names in it? Unless it’s a major donor segment, why bother? Conversely, if you are tracking social media interactions, how are you using that information to understand program performance? For example, when segmenting your email audience, are you using what you’ve tracked about social media engagement to create email versions full of social media calls to action and engagement paths?

