By Jamie Rapperport, Co-founder and CEO of Eversight
The term “big data” has become ubiquitous as it has permeated nearly every industry over the last 10 years. While definitions abound, its fundamental implications remain constant: it represents a cross-functional focus on leveraging exponentially growing volumes of data to increase operational performance and ROI. Though it hit the mainstream only recently, big data has been a mainstay of brick-and-mortar retail -- where data is created every time a consumer makes a purchase -- for years.
Today, most major retailers use sophisticated analytics to better understand their customers and deliver more personalized shopping experiences. But while big data continues to provide many benefits and has become operationally critical to most brick-and-mortar retailers, the industry is also becoming familiar with its limitations. This is particularly true, and perhaps most critical, in the $300 billion retail trade promotions space. Despite the use of sophisticated data analytics, more than half of all in-store promotions fail to deliver a significant ROI, with many actually losing money for the manufacturer.
One of the main reasons this continues to happen is the use (or misuse) of post-event analysis or trade promotion optimization (TPO) solutions -- systems originally designed to measure the results of promotional events. These systems enable retailers and CPGs to apply econometric regression to sort through large volumes of aggregated data; these “insights” are then used to plan promotional calendars. While TPO solutions are sufficient for their intended purpose (i.e., determining the performance of promotional spend after the fact), they fall short when it comes to discovering new promotions that could potentially work better. As a result, promotional calendars look the same as they did last year (and the year before that).
Let’s take a closer look at three culprits behind the limitations of big data in retail promotions: aggregation, granularity and homogeneity.
Ironically, a downside to big data is the sheer volume that’s collected. With the number of data points a modern retailer collects daily, it can be difficult to determine which data sets are most relevant for identifying future consumer buying behavior. To get above the noise, data from various sources is aggregated to control for numerous external factors, such as differences in weather, geography, and competition. But aggregation discards valuable insight: retailers may know only how a particular discount level performed, with no view into how variations in offer structure affected performance. This is a major challenge in developing optimal promotions, as offer structure alone can swing consumer response by as much as 200 percent.
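To make the aggregation problem concrete, here is a minimal sketch using entirely hypothetical numbers (the product, framings, and unit counts are invented for illustration, not drawn from real retail data). Two framings of the same 20 percent discount perform very differently, but rolling the records up by discount level erases that signal:

```python
# Hypothetical promotion records: the same 20% effective discount,
# offered under two different framings. All figures are illustrative.
records = [
    {"discount": 0.20, "framing": "2 for $4.00", "units": 310},
    {"discount": 0.20, "framing": "2 for $4.00", "units": 290},
    {"discount": 0.20, "framing": "$2.00 each",  "units": 110},
    {"discount": 0.20, "framing": "$2.00 each",  "units": 90},
]

# Aggregated view: one number per discount level -- the framing signal vanishes.
by_discount = {}
for r in records:
    by_discount.setdefault(r["discount"], []).append(r["units"])
agg = {d: sum(u) / len(u) for d, u in by_discount.items()}
print(agg)  # {0.2: 200.0} -- "a 20% discount averages 200 units"

# Granular view: keep framing as a dimension and the gap reappears.
by_framing = {}
for r in records:
    by_framing.setdefault(r["framing"], []).append(r["units"])
gran = {f: sum(u) / len(u) for f, u in by_framing.items()}
print(gran)  # {'2 for $4.00': 300.0, '$2.00 each': 100.0}
```

In the aggregated table both framings collapse into a single "20 percent off" row averaging 200 units, while the granular view shows a threefold difference between them -- exactly the kind of insight lost when data is rolled up.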
Complicating things further is the fact that today’s systems simply weren’t designed to capture the level of detail required to understand how the differences in the way an offer is “framed” impact its overall effectiveness. Thanks to advances in behavioral economics, we now know that consumers respond to far more than just price when making purchase decisions; they use social, cognitive and behavioral cues to sort through the overwhelming amount of information they are confronted with on a daily basis. Unfortunately, retailers are often in the dark about the impact of leading text, calls-to-action and artwork -- each of which can heavily influence consumer response. The few retailers that have invested in more robust infrastructure still struggle, as even sophisticated models can’t decipher the critical nuances that can make or break large promotional campaigns.
Quite possibly the biggest limitation of big data’s application to retail promotions is its inability to support offer innovation. No matter the scale and volume, using data that describes past transactions to determine go-forward strategy inherently limits insights to only what can be culled from that data. In other words, the data can only tell you about what’s already been tried in the past, not what might work in the future. This means that possibilities for novel offer structures, cross-merchandising, discount levels, and so on can’t be found in that data. Furthermore, as you continue to optimize in a “backwards-looking” mode using transaction data, the resulting set of promotions on your calendar will continue to converge, and the data will become increasingly homogeneous. This explains why certain brands only run one or two types of promotions; they simply have no other data points to look at.
So what’s the solution? Instead of relying on “what worked okay last year,” some companies are beginning to adopt an approach similar to A/B testing (a methodology that ecommerce players have been effectively using for years), running structured experiments for the purposes of finding new, more effective promotions. Taking advantage of the recent growth of omnichannel, these CPGs are digitally testing new offer ideas by serving dozens of variations with different offer structures, discount depths, product combinations, marketing language, images, etc., to balanced groups of shoppers online. Based on how consumers engage with each offer, brands gather insights regarding which kinds of promotional tactics work best. Structured experimentation of this sort ensures the creation of data that is specifically tailored to the task at hand -- in this case, deploying more effective promotions.
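The experimentation loop described above can be sketched in a few lines. This is a generic A/B-style sketch under assumed inputs -- the variant names and the simulated engagement log are hypothetical, and this is not a description of any vendor’s actual system. The key mechanics are deterministic, balanced assignment of shoppers to offer variants, and a per-variant engagement tally:

```python
import hashlib

# Hypothetical offer framings to test against each other.
VARIANTS = ["$1 off", "2 for $5", "Buy 2, get 1 free"]

def assign_variant(shopper_id: str) -> str:
    """Hash the shopper ID to assign a variant -- deterministic (a returning
    shopper always sees the same offer) and roughly balanced across groups."""
    digest = hashlib.sha256(shopper_id.encode("utf-8")).hexdigest()
    return VARIANTS[int(digest, 16) % len(VARIANTS)]

# Simulated engagement log: (shopper_id, engaged_with_offer).
# Real data would come from online offer impressions and redemptions.
log = [(f"shopper-{i}", i % 4 == 0) for i in range(1000)]

# Tally impressions and engagements per variant.
shown = {v: 0 for v in VARIANTS}
engaged = {v: 0 for v in VARIANTS}
for shopper_id, did_engage in log:
    v = assign_variant(shopper_id)
    shown[v] += 1
    engaged[v] += int(did_engage)

# Engagement rate per variant; the best performer guides the next calendar.
rates = {v: engaged[v] / shown[v] for v in VARIANTS}
best = max(rates, key=rates.get)
```

Because assignment is derived from a hash rather than stored state, the same shopper is consistently bucketed without a lookup table, and each new variant added to `VARIANTS` generates fresh data about offers that have never been run before -- directly addressing the homogeneity problem described earlier.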
Why look for a needle in a haystack when you can design precisely the needle you need?