A recent article in VentureBeat said that marketing automation tools had only a 3 percent penetration rate at non-tech companies. Meanwhile, marketers are clamoring for ways to act upon data.
The weakest link in the chain of digital analytics has, more often than not, been the “make necessary changes” part. It’s now been several years since marketers began to understand that having the information alone didn’t really help the business. Recommendations became important, and after recommendations, action.
Action is messy. Until recently, it hasn’t had much to do with automation. It required getting marketers, developers, creatives, and business owners to agree on what changes were needed based on the data, then pushing through the often laborious process of actually implementing those changes and trying to tell whether there was a meaningful difference between the before and after states. Too often these efforts fell apart in partisan bickering between teams and a widespread refusal to take risks.
When we talk about marketing automation today, we are referring to SaaS offerings like Eloqua, Hubspot, Leadsius, Act-on and others that build a form of call-and-response matrix into marketing efforts. The easiest way to understand this is to compare it to what used to happen if you were reading a comic book when you were a kid, and saw an ad to “send away” for something either free or cheap. You would do that, and then you’d get more offers from the same company in the mail, as they hoped you’d soon spend more.
Much more dimensional and sophisticated versions of this are being played out by marketing automation tools, and according to the VentureBeat article, there’s plenty of room to grow.
A recent example of how one company is addressing a call for marketing automation is Tealium’s AudienceStream. Tealium already has a key foothold in the tag management industry, and that puts it at an important juncture of data collection. AudienceStream links the collected data from many sources (legacy of Tealium’s TMS) and allows the marketer to quickly set rules, thresholds and triggers that communicate via new APIs to marketing-action software already in the market. In other words, an AudienceStream powers an Eloqua. Once the rules are set, AudienceStream can communicate with a tool like Eloqua and help determine what message goes out to what user without continuing human intervention.
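The rules/thresholds/triggers pattern described above can be sketched in a few lines. Everything here is hypothetical – the class and function names are invented, not Tealium’s or Eloqua’s actual APIs – but it illustrates how a visitor profile crossing a threshold can fire a marketing action without continuing human intervention:

```python
# Hypothetical sketch of a rules/thresholds/triggers engine. Class and
# function names are invented; this is not Tealium's or Eloqua's real API.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class AudienceRule:
    name: str
    condition: Callable[[Dict], bool]   # the threshold test on a visitor profile
    action: Callable[[Dict], None]      # the trigger fired when the test passes

def evaluate(profile: Dict, rules: List[AudienceRule]) -> List[str]:
    """Fire every rule whose threshold the profile crosses; return their names."""
    fired = []
    for rule in rules:
        if rule.condition(profile):
            rule.action(profile)
            fired.append(rule.name)
    return fired

# Example rule: a visitor who viewed the pricing page 3+ times gets a nurture email.
rules = [
    AudienceRule(
        name="hot-lead",
        condition=lambda p: p.get("pricing_page_views", 0) >= 3,
        # Stub standing in for a call to the marketing-action tool's API:
        action=lambda p: print("POST /email/send ->", p["email"]),
    )
]

profile = {"email": "visitor@example.com", "pricing_page_views": 4}
fired = evaluate(profile, rules)
print(fired)  # the stub's POST line prints first, then ['hot-lead']
```

Once rules like this are set, the evaluation loop runs on every profile update – that is the “call-and-response matrix” with the human taken out of the inner loop.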
We’re not at the stage yet where entire site pages and app screens are being re-made on the spot based on very fresh data. We are at a stage where certain updatable modules on sites, and certain marketing messages can be automated and substituted based on data. The reason why this market sector has such growth potential is that it actually fixes a real problem.
While we’ve had lots of time to gnaw on old chestnuts like page views and unique visitors, we’ve hardly gotten to a point where we can say we have organized, incremental methods that improve marketing velocity. And we know that most of the drag comes from friction between different teams with different agendas.
Marketing automation has no agenda except to respond to data and seek a return on marketing content. It frees up humans to do more strategic work. It may have only a small percentage of the market today, but as marketers get more and more familiar with successes based on these tools, that percentage is likely to begin growing rapidly in the near future.
Think of 2014 as the year when marketing automation finally got some of the recognition it deserves.
I was sitting in my office last week working on a targeted email when I realized something so fundamental – it’s a bit embarrassing to admit. As a data-driven marketing guy, you’d think I’d realize the most fundamental building block of any conversion starts with accurate “top of the funnel” CRM contact data. With garbage in you only get garbage out.
There’s a lot of talk about convergence of all things: the convergence of all the systems we use, and the convergence of roles – especially those of marketing and sales – alongside building a culture of measurement.
As marketers struggle with all of the new tools, we need to review the most fundamental component of marketing, yet one of the most overlooked – quality information, our small data. Without good contact information these systems are just plain dumb, and they cost us more than they help. According to Gartner, up to 50 percent of contact information becomes inaccurate and out of date in any given year, which only compounds the issue. The top-of-the-funnel data just has to be solid. And we have to be agile and act on it quickly.
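To see how quickly that decay compounds, here is a quick back-of-the-envelope calculation, assuming a constant rate at the high end of the Gartner figure:

```python
# Back-of-the-envelope math on contact decay, assuming a constant annual rate
# at the high end of the Gartner figure quoted above (50 percent per year).

def still_accurate(list_size: int, annual_decay: float, years: int) -> int:
    """Estimated records still accurate after `years` of geometric decay."""
    return round(list_size * (1 - annual_decay) ** years)

for years in (1, 2, 3):
    print(years, still_accurate(10_000, 0.50, years))
# A 10,000-contact list keeps roughly 5,000, 2,500, then 1,250 good records.
```

In other words, a list left alone for two years may be mostly garbage – which is why acting on it quickly matters.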
A Typical Day
At our consulting company, Efectyv Digital, we use a number of tools to help us target and engage our current customers, and find new ones. We use a marketing automation system, in our case HubSpot, and a bunch of Google tools, SEO, pay-per-click (PPC), analytics, and Viralheat for social analytics. We also use various email products so we can test and optimize the send and open rates with tuned messages.
Like many of our clients, we are a B2B firm; we build marketing lists and segment and send targeted emails about our services to specific personas from those lists. The messages vary by role, industry, and need. They contain calls to action and other things you’d recognize as conversion events. We track our funnel and outbound conversions – and let me say we could do much better. Our list bounce rates are high and our open rates are low. We’ve hired an outbound lead generation person, and we’ve seen similar results.
We wondered whether the issue was with our contact data, or with our offers, messages, or timing (we had just started using alerts). To test, we decided to take it one step at a time and look specifically at the contact information contained within a few of the popular lead-generation tools to see why our conversion rates were so low.
It didn’t take us long to confirm, as we suspected, that our data just sucked and we needed to start making it better. Here’s our analysis. At least step one. We’ll always work on our messages.
The Simple Test
While there are scores of products on the market, including LinkedIn, Zoom, and One Source, and some great new start-ups that have various degrees of content mash-ups like Tempo and Refresh, we chose to test three of the more popular systems that come integrated with CRM systems, including D&B 360, which has contact and company information – mostly generated manually; Data.com, the roots of which are crowdsourced with Jigsaw data acquired by Salesforce; and InsideView, which claims to rely on technology to deliver results. The levels of integration vary, depending on the CRM system, Dynamics, Oracle, Sugar, or Salesforce.
We used a real person and a real institution, in this case Krystin Mitchell, senior vice president of human resources at 7-Eleven Inc. Since 7-Eleven’s revenue is in excess of $80 billion and they’re public, we thought they might be a good test to see how we can find her in our test systems.
The results. So, where is Krystin? According to the company’s current Web page, she is indeed at 7-Eleven, but Krystin Mitchell does not appear in Data.com’s “Find Contacts” results for a 7-Eleven search. When we broadened the search, we found 16 wrong results with her name, company, and email address. That’s crazy and not acceptable. I can see why our emails bounce.
We then tested the trusty old standby, D&B 360. Since so much commerce is based on its data, it has to yield accurate results, right? D&B is the gold standard of contact data – the truth, built with human editorial control – so we thought we’d get correct results. But even with D&B, Krystin Mitchell did not appear in a “Build a List” custom search…although this time 65 wrong contacts came up instead.
To find her we needed to run a general people search, but as with Data.com it yielded multiple, duplicate results and various kinds of wrong contact info – the sort that defeats the whole purpose of a contact tool. Interesting, and again not acceptable. More bounces and wrong numbers.
We then moved to our free version of InsideView, which works with Salesforce.com. They are one of the companies that include data from multiple sources (thousands), and then validate it through a technology they call “entity triangulation.” Using analytics, this process is designed to determine the relative “truth” about people, content, and key event information. For our test about Krystin they got it right, listing her correct title and correct company and contact information, which match the company website and public disclosure. It was CRM Intelligence and we now use their Target product to build our lists and are getting much better results.
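As a purely illustrative sketch – InsideView’s actual “entity triangulation” is proprietary and certainly more sophisticated – one naive way to triangulate a contact across sources is to treat each source’s answer as a vote and keep the majority value per field. The records below are invented:

```python
# Purely illustrative "triangulation": treat each source's answer as a vote
# and keep the majority value per field. InsideView's actual method is
# proprietary; the records here are invented for the example.
from collections import Counter

def triangulate(records):
    """Merge contact records by majority vote on each field."""
    fields = set().union(*(r.keys() for r in records))
    merged = {}
    for f in fields:
        votes = Counter(r[f] for r in records if f in r)
        merged[f] = votes.most_common(1)[0][0]
    return merged

sources = [
    {"name": "Krystin Mitchell", "company": "7-Eleven", "title": "SVP, Human Resources"},
    {"name": "Krystin Mitchell", "company": "7-Eleven", "title": "SVP, Human Resources"},
    {"name": "Krystin Mitchell", "company": "Acme Corp", "title": "SVP, Human Resources"},  # stale record
]
print(triangulate(sources)["company"])  # 7-Eleven
```

The point is not the algorithm but the principle: with multiple sources, a stale record can be outvoted instead of propagated.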
All in all, we’ve analyzed vendor results many times, testing with different contacts and companies, and a great percentage of the time the fundamental results differed between vendors. I admit it’s hard to really know where someone is, but that’s what we need. I’m optimistic about new tools just coming to market, like marketing automation company Autopilot’s new prospecting tool for sales and marketing, which the company claims is generating up to 42 percent reply rates on cold emails.
So, while this column is about conversion marketing and analytics, and I usually write about more meta subjects, I thought I’d share some personal, real-world issues that impact marketing, and ultimately sales – in this case, our sales. We expect to at least double our conversion rate by spending more time creating quality data and lists.
We are drowning in data. It is no simple feat to filter this sea. But, it seems to me that we need to get the basics right about “small data” before we talk about optimizing big data, real-time data, and the impact of attribution models. B2B or B2C, quality contact information is fundamental. It’s best to walk before we run and finally sprint to the holy grail of real-time conversions, and revenue falling from the trees.
Recently, a customer suffered a costly setback in a CPM campaign — and I suspect they’re not the only ones with the problem. In fact, I think there’s ample evidence to suggest the CPM and PPC world is rife with opacity and strange artifacts.
Here’s what happened: the advertiser runs a professional services firm and wanted their ads to appear at certain sites where they believed customers might be found. That sounds simple enough. They contacted a relatively small ad network that claimed to specialize. The CPM program (which charges per impression, not per click-through) ran for several months and cost nearly twenty thousand dollars. It was canceled due to non-performance.
Why did this happen?
A review of the firm’s Google Analytics revealed the traffic sources from the campaign and the number of impressions from each venue. It turned out that over 20 percent of the impressions delivered came from a site owned by the ad network itself. This seemed an anomaly. Upon further investigation, the site proved to be a page full of videos for a variety of brands. It had no other content besides the videos, and there seemed little reason for anyone to actually visit the site. So where did all the impressions come from?
It’s really anyone’s guess. But one might conclude this URL was used simply to generate impressions with no actual visitors – or at least, no visitors relevant to the campaign. When the facts were presented to the network, they credited the customer for all the impressions from that site.
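The check that surfaced this is simple enough to script. The figures below are invented for illustration; the idea is just to total impressions by source and flag any single venue with a suspiciously large share:

```python
# Totals are invented for illustration: sum impressions by source and flag any
# single venue delivering more than a threshold share of the campaign.

def flag_concentrated_sources(impressions, threshold=0.20):
    """Return sources delivering more than `threshold` of all impressions."""
    total = sum(impressions.values())
    return [site for site, n in impressions.items() if n / total > threshold]

campaign = {
    "news-site.example": 150_000,
    "blog-network.example": 140_000,
    "video-portal.example": 130_000,
    "adnetwork-owned.example": 320_000,  # the network's own video page
    "long-tail (other)": 110_000,
}
print(flag_concentrated_sources(campaign))  # ['adnetwork-owned.example']
```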
Score a win for analytics!
But what if you’re running multiple campaigns and don’t have time to run down each and every bad batch of impressions? How would you know where your impressions were coming from, whether they were actually visible, and whether the site was of any real quality relative to your goals?
comScore, the well-known media measurement firm, may have some answers. While they recently came out with a product called Validated Media Essentials (vME) designed to help media sellers increase advertising inventory and revenue, they also have a product called Validated Campaign Essentials (vCE) that is designed to help buyers fight opacity in networked advertising.
Here are some of the things it can do:
- Enables in-flight campaign management and optimization.
- Evaluates audience delivery, viewability, brand safety, geographic delivery, engagement and non-human traffic.
- Reports data by publisher, placement and creative.
The net result of these kinds of insights is that buyers can increase campaign effectiveness and, just as importantly, cut down on wasted impressions (and dollars).
In the newest version of vCE, you can gain access to important data based on an accounting of impressions delivered across a variety of dimensions, such as ads delivered in-view, in the right geography, in a brand safe environment and absent of non-human traffic. It also evaluates the degree to which validated impressions reached the campaign target audience.
Why does this matter?
Geographic data is key because ad networks use a wide variety of sites to deliver impressions. But if you have an interest in generating traffic within a specific region, your ad network won’t necessarily target geography, or at least not with sufficient accuracy. Tools like vCE 2.0 help you understand where the traffic is coming from. comScore accomplishes this in part by relying on a panel of known users: by extrapolating data from these users (there are over a million in the U.S.), they can give you a good idea of what the geographic profile looks like for your impressions.
Viewability is also a key metric, yet your typical analytics tool will not provide any insight into this. The term refers to whether or not your ad, even if served, actually showed up in the screen seen by the user. Think of it this way: many pages are longer than a single screen, and often you have to scroll to see the entire page. What if your ad was served, but was too far down the page? And what if the user left the page without ever seeing it? vCE lets you know whether or not the ad was actually capable of being seen, or if it was instead hidden “below the fold” (to use an old newspaper phrase) and never actually viewed by a person. You should not have to pay for ads never capable of being seen.
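The geometric core of a viewability test can be sketched as follows. Real measurement standards (such as the IAB’s 50-percent-in-view-for-one-second guideline) are stricter; this illustration only checks whether the ad slot overlapped the portion of the page the viewport actually showed:

```python
# Geometric core of a viewability test: did the ad slot overlap the part of
# the page the viewport actually showed? Real standards (e.g. the IAB's
# 50-percent-in-view-for-one-second guideline) are stricter than this.

def is_viewable(ad_top, ad_height, scroll_y, viewport_height):
    """True if any part of the ad falls inside the visible window."""
    viewport_bottom = scroll_y + viewport_height
    ad_bottom = ad_top + ad_height
    return ad_top < viewport_bottom and ad_bottom > scroll_y

# An ad 2,400px down a page never scrolled past an 800px viewport is not viewable.
print(is_viewable(ad_top=2400, ad_height=250, scroll_y=0, viewport_height=800))  # False
print(is_viewable(ad_top=300, ad_height=250, scroll_y=0, viewport_height=800))   # True
```

Tools like vCE run this kind of measurement in the page itself, so the buyer doesn’t have to trust the server’s “ad delivered” count.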
Brand safety is often overlooked, but it’s the equivalent of insisting that your billboard for milk does not appear on top of a slaughterhouse. How can you know that the sites where your ad is seen don’t in fact make your ad reflect poorly on your company? What is the quality of that site? Does it have lots of negative comments in its forums? Is it badly formatted, possessed only of thin, irrelevant content? Does it seem to present an unsavory image for your brand? Make sure your ads are appearing only in places that make you seem both relevant and elegant.
Non-human traffic is also a major problem, as pointed out in my example above. There’s no single reason why a particular ad network might generate impressions to non-human traffic, and we’re not here to determine whether this is ever done on purpose to inflate impressions. But it’s not an insignificant number of impressions in many cases. And without a way to track it down, you’ll pay for ads that somehow got served, but in a manner not associated with human activity. Unfortunately, bots today do not buy things. Perhaps one day they shall (as we enter the Age of Drones) but right now they are not very good prospects. There is no reason to pay for these impressions.
Piercing the Veil
Many buyers believe, mistakenly, that CPM campaigns are a set-it-and-forget-it proposition, and that they are getting a clean deal from their ad network. But there are too many variables involved, and ad networks sometimes prefer opacity to accountability.
You may not need a specialized tool like vCE — as indicated above, you can certainly learn much just from reviewing your standard analytics with care. But vCE and tools like it go beyond what analytics alone can provide. Especially if you’re managing high-volume campaigns, it would be a mistake to rely simply on reporting from the ad network. Self-reporting is too one-sided. You need a way to arbitrate your buy. Using third party tools to measure ad buys is just as important as using third party analytics to understand traffic patterns.
Don’t rely on first impressions when it comes to ad impressions. Dig deeper and you’ll uncover more value.