Wednesday, January 30, 2013

How Should Brands Give Back in a Multicultural Market?

Marketers frequently ask how their brands can give back to a community to create good will and enhance their brand's position. The answer to the question is complex, but one of the ways of trying to address it is by asking consumers of different cultural and acculturation backgrounds how they rate different actions that brands can take in order to give back.

In 2012, with the cooperation of Research Now and the leadership of +Melanie Courtright, we again collected an online national sample composed of Hispanics and Asians born in the US and those born abroad, in addition to African Americans and Non-Hispanic Whites. We used country of birth as a proxy for acculturation to see if responses varied accordingly.

We asked respondents to rate different actions that companies can take to give back as follows:
“When a brand gives back to a community, which of the following are most and least important contributions from your perspective? Please rank in order, from 1, most important, to 5, least important, each of the following items.”

The following chart shows the total across all respondents (indicated by the blue bars) and for each of the culturally unique groups (indicated by the colored lines) for the rank of “Most Important” in regard to the following possible brand actions:

  • Provide jobs
  • Give scholarships
  • Help clean the environment
  • Keep jobs in the local community, and
  • Employees get free time to do community service



The rank shown is just the “Most Important” for each of the items. The totals for each culturally unique or acculturation group add to slightly more than 100% because each item was rated independently.

The first surprise is that the differences across culturally unique and acculturation groups are relatively small and that these cultural groups agree on the priority of the items. The number one priority across the board is that the most important contribution brands can make is to provide jobs to the community, followed by keeping jobs in the local community. It is perhaps not surprising that these two items have the highest priority given the economic downturn that most Americans have experienced in the recent past.

At some distance, the next two priorities for brands are helping clean the environment and giving scholarships. This does not mean that these are unimportant brand contributions, but that jobs are the more pressing contribution at this time.

Interestingly, giving employees free time to do community service was ranked as top by the smallest proportion of respondents in each cultural group. This is perhaps due to the lack of visibility that such action may have as a contribution.

What are the lessons from these findings?

  1. Cultural groups and those at different levels of acculturation tend to agree on the approaches that brands need to take to give back to the communities where they operate. Clearly, however, the implementation of providing jobs has to be tailored by cultural group in order to satisfy the expressed sentiment of these consumers. Creating jobs is not enough; the jobs created must satisfy each of these segments individually.
  2. In times of economic distress, some actions that consumers feel are important take a back seat to the more pressing issues of the moment. While cleaning the environment and giving scholarships are important, jobs take precedence in economic downturns.
  3. Marketers are encouraged to emphasize how their brands contribute to employment of these different cultural groups with specific emphasis on the local community.

The data for this study was collected by Research Now of Dallas, Texas, thanks to the generous initiative of +Melanie Courtright. Research Now contributed these data to the research efforts of the Center for Hispanic Marketing Communication at Florida State University (+Hispanic FSU). This online survey included the responses of 936 Asians (398 US born), 458 African Americans, 833 Hispanics (624 US born), and 456 non-Hispanic Whites. This national sample had quotas for US region, age, and gender to increase representativeness.

AdWords for video makes reporting more insightful, purposeful and beautiful

Building a brand online is about creating authentic connections with your audience. Since launching AdWords for video last year, we’ve helped more brands capture the power of sight, sound, and motion in a simple and easy way. Today, we’re helping brands further understand the impact of their campaigns by bringing three new measurement features to AdWords for video that make reporting more consistent with other media, more goal-oriented and just plain prettier.

Reach & Frequency Reporting: Speak the same measurement language across media
AdWords for video now displays reach and frequency metrics in your campaign reporting interface. These metrics give you more insight into how many unique viewers have seen your ad and the average number of times they’ve seen it, helping you better measure against other media such as TV. To view these metrics on a campaign, ad or targeting group level, just click on Columns >> Customize Columns and look under the Performance section.
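In spirit, these two metrics reduce to a simple computation over an impression log. Here is a minimal sketch; the log format and function are illustrative, not how AdWords actually computes them:

```python
from collections import Counter

def reach_and_frequency(impression_log):
    """Compute reach (unique viewers) and average frequency
    (impressions per unique viewer) from a list of viewer IDs,
    one entry per ad impression."""
    counts = Counter(impression_log)
    reach = len(counts)
    avg_frequency = sum(counts.values()) / reach if reach else 0.0
    return reach, avg_frequency

# Example: 6 impressions served to 3 unique viewers
log = ["u1", "u2", "u1", "u3", "u1", "u2"]
print(reach_and_frequency(log))  # (3, 2.0)
```

Reach tells you how many distinct people saw the ad; average frequency tells you how often each of them saw it, which is what makes the numbers comparable to TV reach/frequency planning.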


Column Sets: Tell us your marketing goals, and we’ll pull the metrics
To help you organize the metrics that matter most to your campaign, we’re introducing the Column Sets feature, which groups relevant metrics by marketing objective. All you need to do is select your advertising goal and we’ll show you useful reporting columns for your account. For example:
  • Want to build brand awareness? Select the Branding objective in the “Columns” drop down to see how broadly your video ad was viewed. We’ll automatically show unique viewers, average view frequency and average impression frequency. 
  • Want to optimize for conversions? Select the Website Traffic and Conversions objective to see how your video ads drove viewers to action. We’ll show you website traffic, number of conversions, cost-per-conversion, and your conversion rate from people who viewed your ad. 
  • Want to grow your audience? Select the Audience objective to understand how your video ads drove people to watch and engage with more of your content. We’ll show you follow-on subscribers and follow-on views.
  • Want to drive more views? Select the Views objective to understand the follow-on actions viewers take such as when a viewer goes to your channel to watch more videos. We’ll show you follow-on views and unique viewers.
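Conceptually, Column Sets are just a lookup from objective to a list of reporting columns. The sketch below restates the four objectives above as such a mapping; the column names are taken from the bullets and are illustrative, not the AdWords report schema:

```python
# Hypothetical objective -> reporting-column mapping, following the
# four objectives described above.
COLUMN_SETS = {
    "Branding": [
        "unique_viewers", "avg_view_frequency", "avg_impression_frequency"],
    "Website Traffic and Conversions": [
        "website_traffic", "conversions", "cost_per_conversion",
        "conversion_rate"],
    "Audience": ["follow_on_subscribers", "follow_on_views"],
    "Views": ["follow_on_views", "unique_viewers"],
}

def columns_for(objective):
    """Return the reporting columns for a chosen marketing objective,
    or an empty list for an unknown objective."""
    return COLUMN_SETS.get(objective, [])

print(columns_for("Audience"))  # ['follow_on_subscribers', 'follow_on_views']
```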

GeoMap: Visualize your views
Where in the world are your views coming from? With the new AdWords for video visualization feature, you can tell with a mere glance. Just select the Campaign tab and click “Map View” to generate a beautiful snapshot that displays view activity on an interactive map. You can even click on regions to drill down to states and provinces globally, and to the DMA-level in the U.S. These geographic insights can help you understand which of your ad messages are resonating with specific markets.


We hope these new features help you easily compare campaigns across platforms, discover new metrics and derive actionable insights. Head over to AdWords for video to try them out today!

Posted by David Tattersall, YouTube Product Manager, recently watched “Top Gear - Reliant Robin Space Shuttle”

Tuesday, January 29, 2013

Win moments that matter in 2013 with Learn with Google webinars

What was your business’ New Year’s resolution, and how do you plan to keep it? At Google, ours is to help make the web work for you. Our new series of Learn with Google webinars will teach you how to use digital to build brand awareness, and they’ll give you the tools you need to drive sales. By tapping into technology that works together across your business needs, you can resolve to win moments that matter in 2013.

Check out our upcoming live webinars:

Build Awareness

02/12 [Multiscreen] Brand Building in a Multiscreen World
02/20 [YouTube] How to Build your Business with YouTube Video Ads
03/05 [Social] How to Use Google+ and Make Social Work for You
03/12 [Mobile] Understanding Mobile Ads Across Marketing Objectives
03/27 [Wildfire by Google] The Call for Converged Media

Drive Sales

01/29 [Analytics] Google Tag Manager: Technical Implementation *today*
02/07 [Search] Your Shelf Space on Google: Get Started with Google Shopping
02/26 [YouTube] From Awareness to Sales: Making the Most of Video Remarketing
02/27 [Search] What's New and Next in AdWords
03/06 [Display] Biggest Loser: Digital Ad Spend Edition
03/13 [Mobile] The Full Value of Mobile
03/20 [Display] Getting Started with Dynamic Remarketing

Visit our webinar site to register for any of the sessions and to access past webinars on-demand. You can also stay up-to-date on the schedule by adding our Learn with Google Webinar calendar to your own Google calendar to automatically see upcoming webinars.

During our last series of webinars, attendees had the chance to win a Nexus 7. Our lucky winner was Donella Cohen, who is happily enjoying her new tablet. Check out our upcoming webinars for another chance to win!

Learn with Google is a program to help businesses succeed through winning moments that matter, enabling better decisions and constantly innovating. We hope that you’ll use these best practices and how-to’s to maximize the impact of digital and grow your business. We’re looking forward to seeing you at an upcoming session!

Posted by Erin Molnar, Learn With Google

Monday, January 28, 2013

Legacy impression share columns to be retired next week

On November 7th we rolled out several improvements and changes for AdWords impression share (IS) reporting. As we mentioned in that initial rollout, we’re continuing with our plans to phase out the old IS columns on February 4th.

Any saved reports using the old IS columns will need to be updated to use the new columns. If you don’t remove/replace those columns before they’re retired on February 4th, you won’t be able to run those saved reports.

Upgrading to the new, network-specific columns comes with a number of improvements including:

  • Distinct search and display columns. We’ve added new columns to cleanly separate search and display impression share.
  • “Hour of day” segmentation. We’ve enabled “Hour of day” segmentation so you can evaluate how your ad coverage varies by the hour.
  • Filters, charts and rules. With the new IS columns you can apply filters, see charts, and apply automated rules using IS metrics.
  • Device segmentation.  Starting today you can also segment your IS columns by device so you can see coverage for desktop, mobile and tablet devices separately.
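Under the hood, impression share is simply the impressions you received divided by the impressions you were estimated to be eligible to receive, and the new columns slice that ratio by network and device. A rough sketch with illustrative field names (not the AdWords report schema):

```python
def impression_share(rows):
    """Compute impression share (impressions / eligible impressions)
    per (network, device) segment. Each row is a dict; the field
    names here are illustrative only."""
    totals = {}
    for r in rows:
        key = (r["network"], r["device"])
        served, eligible = totals.get(key, (0, 0))
        totals[key] = (served + r["impressions"],
                       eligible + r["eligible_impressions"])
    return {k: served / eligible
            for k, (served, eligible) in totals.items()}

rows = [
    {"network": "search", "device": "mobile",
     "impressions": 80, "eligible_impressions": 100},
    {"network": "search", "device": "mobile",
     "impressions": 20, "eligible_impressions": 100},
    {"network": "display", "device": "desktop",
     "impressions": 30, "eligible_impressions": 60},
]
print(impression_share(rows))
# {('search', 'mobile'): 0.5, ('display', 'desktop'): 0.5}
```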

For more information on what’s changing and what you may need to do, visit our AdWords help center, which has additional details.

Posted by Rob Newton, Product Marketing Manager

Dashboards, Advanced Segments, And Custom Reports For Your Business Needs

We’ve heard you loud and clear that getting started on Google Analytics can be challenging. It’s such a robust tool with a variety of reports, filters, and customizations that for a new user it can be overwhelming to figure out where to look first for the data and insights that will enable you to make better decisions. For more advanced users it can be time consuming to build out different variations of reports and dashboards to get the clearest snapshot of your performance. That is why we’ve created the Google Analytics Solution Gallery.

The Google Analytics Solution Gallery hosts the top Dashboards, Advanced Segments and Custom Reports, which you can quickly and easily import into your own account to see how your website is performing on key metrics. It helps you filter through the noise and see the metrics that matter for your type of business: Ecommerce, Brand, or Content Publisher. If you're not familiar with Dashboards, Advanced Segments and Custom Reports, check out these links to our help center for detailed descriptions of how they work and the insights they can help provide.

Solution examples
Here are a few examples of the solutions that you can download into your account to see how the analysis works with your data.
  • Social sharing report - Content is king, but only if you know what it's up to. Learn what content visitors are sharing from your website and how they're sharing it. 
  • Publisher dashboard - Bloggers can use this dashboard to see where readers come from and what they do on your site.
  • Engaged traffic advanced segment - Measure traffic from high-value visitors who view at least three pages AND spend more than three minutes on your site. Why do these people love your site? Find out!
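The "engaged traffic" rule above (at least three pages AND more than three minutes) is easy to express as a session filter. This sketch assumes a simple session record with made-up field names, not Google Analytics' own:

```python
def is_engaged(session):
    """Flag a session as 'engaged traffic': at least 3 pageviews
    AND more than 3 minutes (180 s) on site."""
    return session["pageviews"] >= 3 and session["duration_s"] > 180

sessions = [
    {"pageviews": 5, "duration_s": 240},  # engaged: deep AND long
    {"pageviews": 2, "duration_s": 400},  # long visit, too few pages
    {"pageviews": 4, "duration_s": 90},   # enough pages, too short
]
engaged = [s for s in sessions if is_engaged(s)]
print(len(engaged))  # 1
```

Note the AND: a segment using OR instead would also sweep in shallow-but-slow or fast-but-deep visits, which is a much weaker signal of engagement.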


How do I add these to my account?
We’ve designed it so it’s easy to get started. Simply go to the Google Analytics Solution Gallery and pick from the drop-down menu the solutions that will be most helpful for your business: Publisher, Ecommerce, Social, Mobile, Brand, etc. Hit “Download” for the solution you want to see in your account. If you are not already logged into Google Analytics, we’ll ask you to sign in. You’ll then be asked to confirm that you want to add the solution to your account and to choose which Web Profile to apply it to. After you select that, it will be in your account and your own data will populate the report.

We’re planning on expanding on this list of top solutions throughout the year so be sure to check back and see what we’ve added. A big thank you to Justin Cutroni & Avinash Kaushik for supplying many of the solutions currently available.

Posted by Ian Myszenski, Google Analytics team

Sunday, January 27, 2013

Big Data Is A Retail Bank Marketing Mirage

Over the past week, I have reached out to many of my banking industry colleagues in the U.S. and abroad asking for examples of where 'big data' is being used effectively in retail banking. 


The response was underwhelming to say the least, as the majority of banking leaders provided examples of 'works in progress' or 'initial wins', with some of the most mentioned case studies being in the areas of risk and fraud prevention as opposed to marketing. 


In addition to a post on big data by Aite Group's Ron Shevlin on The Financial Brand, and widely covered discussions about 'big data hype' on blogs from Gartner and CapGemini this past week, most industry leaders believe banks need to focus on data close to home before expanding their pursuit of the next shiny object. To this end, a friend from the U.K. offered to provide a guest post on the topic from his perspective as a supplier to the financial services industry.


By Darren Oddie, CEO and co-founder of AGILEci

Consumer banking behavior is changing rapidly before our eyes. Will this changing consumer behavior mean that incumbent retail banking 'zombies' may become corpses walking the halls of banking, as energizing and engaging competitors take enlightened customers away from them?

I firmly believe that many retail bankers are operating on autopilot in an increasingly dynamic and complex environment. They are trying to understand, develop, deliver and manage new solutions with buzzwords such as cloud, big data, mobile, social, NFC and mobile wallets to name a few.

I'm going to highlight one of these trending terms within the context of retail bank marketing, and the mots du jour are 'big data'. 

Why Big Data is a Bank Marketing Mirage

We hear it, we allegedly see it, but we can't touch it. We can't touch it because we don't know what 'it' is. There are official and unofficial definitions of big data mostly distributed by vendors who want to sell a 'solution' to the industry. 'It's big', 'It's fast', 'It's varied', 'It's unstructured', 'It's social', 'It's not technology', 'It's data programming', 'It's a process', 'It's statistics', 'It's analytics', 'It's hype', 'It's bullshit' . . . and so on.

If a bank were truly using big data, bank marketers would be engaging customers in ways that were unforeseen only a few years ago and their technologies would enable this. Retail banking would be operating faster than the speed of changing customer behaviors, similar to the post a couple weeks ago on this site written by Scott Bales from Movenbank entitled, 'Finding Serendipity in Big Data'.

Full digitization of financial services offerings would be available to the majority of customers and the bank would be in constant omni-channel dialogue with their customers to self-individualize their chosen offerings as shown in the illustration below. Those who wanted physical interactions would be able to have physical units. Those who wanted digital could have digital. Those who wanted everything, well . . . they could have everything. This wouldn't be an issue for banks, as their technology would be agile enough to individualize every interaction and every offering.



Customers would have adopted their physical modus operandi because it would be truly self-personalized, cheaper, with much better service and with richer benefits. The automated banking service would encompass full and transparent management of personal and business finances, within the personalized context of the individual customer (not the customer group).

Much of the big data use to date has been around risk monitoring and fraud control. Bank marketing big data examples that are currently expounded tend to rely on using data for customer engagement and satisfaction. 'We'll send you an individualized statement of your account', 'We'll ping you with an offer as you walk past your favorite store' and 'Access personalized offers online, via mobile or at the point of sale' are not examples of big data in action. They are examples of taking structured and/or unstructured data, analyzing it and using it for marketing purposes.

Data may be pulled from disparate sources and targeted at a customer; however, it's unlikely that this communication is truly at an individual level, real-time, automated and as sophisticated as payment scheme processing, authorizations and risk/fraud management are today. The day that a retail bank's marketing infrastructure is fully data integrated and as sophisticated as a payment scheme's processing infrastructure is the day that I believe big data is alive and well in retail bank marketing.

Individualized engagement and dialogue to create self-personalized products is my idea of applied big data. It's not just about real-time, 1:1 push and pull marketing. Retail bank marketers need to realize that they are operating at a speed that is slower than the changing behaviors of their customers. The more they talk about big data and don't deliver the way other industries are delivering, the further away from the reality of truly next generation products and engagement they will be.

The consumer is becoming more aware of what is possible with today's insight and computing capabilities, while retail bankers are looking more like zombies: bereft of consciousness, barely able to respond to surrounding stimuli.

Non-Bank Big Data Examples


Let's take a look at two companies that retail bank marketers can learn from.

Borders Group, Inc. and Blockbuster are great examples of companies that failed to keep up with the digitization of consumer behavior. Once seen as the corporate face of physical sales on main street, they battled for survival and then collapsed. They went from market dominance to death in a relatively short time, with no strategic reinvention. They are the most cited, but by no means the first or last, case studies that could be used.

Conversely, Amazon and Apple grew from nothing to prominence in digital sales globally (books and music among other things) in a relatively short period of time in comparison to the growth of most banks. This marketplace distribution disruption didn't happen overnight, so retailers had every chance to fight back as some still are.

Why do the examples above matter to retail bank marketers? The key for me is the amount of publicity that retailers receive in terms of public sentiment along the lines of, "I love the store experience and enjoy going there (Borders), but I never buy anything from them". In retail, this phenomenon is called 'showrooming', where a customer visits a bricks and mortar establishment only to buy online later. Whether for convenience or to drive down pricing, this activity is disrupting the distribution model.

Retail Industry Parallel to Banking


Are visits to your branch as frequent as in the past? Are your customers still using checks as much as they did in the past? What about those customers that are already 100% digital and haven't visited a branch for years? 

Trendsetter customer behaviors are likely to become mainstream customer behaviors at some future date similar to the trendsetting mobile banking customer or the trendsetting photo check deposit customer. When will these behaviors become mainstream? I have no idea. But people who follow the financial services industry would say it is sooner than most retail bank marketers would hope for (see recent post entitled, 'From Passbook to Mobile: The Evolution Of The Bank Account' by author and Movenbank founder Brett King).

Relying on incremental technology advancements and talking about or playing along with the latest fads, such as big data, will not put you ahead of the new competitive aggressors. We arguably already have examples of fintech start-ups big enough to enter the mainstream banking sector such as PayPal globally, Intuit in the U.S., Square in the U.S., Fidor Bank in Germany and M-Pesa in Kenya.

Even newer and smaller start-ups such as Simple, Bluebird from American Express and Walmart, GoBank from GreenDot and the soon to be introduced Movenbank should be watched for innovations and trends that can quickly move market share.

The specific challenge for retail bank marketers is to realize the potential of big data (or whatever you want to call expanded customer insight) and to stay up with, or eventually move ahead of, customers and the competition. Find ways to use the data at your disposal today more effectively and efficiently. Find ways to interact and communicate with customers in the manner they prefer in real time. Become proactive as opposed to reactive to customer needs.

The ultimate goal is to not sit on the sideline and become a dinosaur that is driven to extinction by an unforeseen player that created a new future.

About the Author:



Darren Oddie is the CEO and co-founder of AGILEci, the only business intelligence consultancy and customized software provider uniquely designed for marketers. Darren has held senior marketing positions at Visa, American Express, GlaxoSmithKline and Reuters. He has worked across all marketing disciplines for 20 years. He holds an MBA from the University of Cape Town. He also manages a customer insight blog for marketers.


Friday, January 25, 2013

Digital Analytics Association Awards Are Back

It’s that time of year again - award season. No, not Hollywood awards, Digital Analytics awards! 

The Digital Analytics Association has announced its list of nominees for the DAA Awards of Excellence. These awards celebrate the outstanding contribution to our profession of individuals, agencies, vendors and practitioners.

This year we’re honored to be nominated for two awards.


Google Tag Manager has been nominated for New Technology of the Year. Launched in October 2012, Google Tag Manager has helped many companies simplify the tag management process.

Google, as an organization, has been nominated in the category Agency/Vendor of the year. 

We’re incredibly humbled by these nominations - thank you. Our goal is to provide all businesses with the ability to improve their performance using data. We’re excited to be part of this community and we look forward to an even more amazing future.

In addition, a few Googlers have been nominated for individual awards:

Eduardo Cereto Carvalho and Krista Seiden have been nominated for Digital Analytics Rising Star.

Our Analytics Advocate, Justin Cutroni and our Digital Marketing Evangelist, Avinash Kaushik, who travel the world sharing Analytics love have each been nominated as Most Influential Industry Contributor (individual).

If you’re a DAA member make sure you vote by February 6. Winners will be announced at the 2013 DAA Gala in San Francisco on April 16. Tickets are available now.

Posted by the Google Analytics Team

lastminute.com finds that traditional conversion tracking significantly undervalues non-brand search


The following post originally appeared on the Inside AdWords Blog.

Understanding the true impact of advertising
Advertisers have a fundamental need to understand the effectiveness of their advertising. Unfortunately, determining the true impact of advertising on consumer behavior is deceptively difficult. This difficulty in measurement is especially applicable to advertising on non-brand (i.e. generic) search terms, where ROI may be driven indirectly over multiple interactions that include downstream brand search activities. Advertising effectiveness is often estimated using standard tracking processes that rely upon ‘Last Click’ attribution. However, ‘Last Click’ based tracking can significantly underestimate the true value of non-brand search advertising. This fact was recently demonstrated by lastminute.com, a leading travel brand, using a randomized experiment - the most rigorous method of measurement.


Experimental Approach
lastminute.com recently conducted an online geo-experiment to measure the effectiveness of their non-brand search advertising on Google AdWords.  The study included offline and online conversions.  The analysis used a mathematical model to account for seasonality and city-level differences in sales.  Cities were randomly assigned to either a test or a control group. The test group received non-brand search advertising during the 12 week test period, while the control group did not receive such advertising during the same period. The benefit of this approach is that it allows statements to be made regarding the causal relationship between non-brand search advertising and the volume of conversions - the real impact of the marketing spend.
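A toy version of the test/control comparison can make the idea concrete. This is far simpler than the seasonality- and city-adjusted model the study used, and the numbers and city assignment below are made up:

```python
def geo_experiment_lift(city_sales, treated):
    """Estimate lift as the ratio of mean sales in test cities to
    mean sales in control cities, minus 1. A toy stand-in for the
    adjusted model described above."""
    test = [s for c, s in city_sales.items() if c in treated]
    control = [s for c, s in city_sales.items() if c not in treated]
    return (sum(test) / len(test)) / (sum(control) / len(control)) - 1

# Made-up weekly conversions for six cities; non-brand search ads
# ran only in the randomly assigned test group.
sales = {"A": 110, "B": 121, "C": 99, "D": 100, "E": 110, "F": 90}
treated = {"A", "B", "C"}
print(round(geo_experiment_lift(sales, treated), 2))  # 0.1 (a 10% lift)
```

Because cities were randomized, the difference between the groups can be attributed to the advertising itself rather than to attribution rules such as 'Last Click'.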

Download the full lastminute.com case study here.

Findings
The results of the experiment indicate that the overall effectiveness of the non-brand search advertising is 43% greater [1] than the estimate generated by lastminute.com’s standard online tracking system.

The true impact of the non-brand search advertising is significantly larger than the ‘Last Click’ estimate because it accounts for
  • upper funnel changes in user behavior that are not visible to a ‘Last Click’ tracking system, and
  • the impact of non-brand search on sales from online and offline channels.
This improved understanding of the true value of non-brand search advertising has given lastminute.com the opportunity to revise their marketing strategy and make better budgeting decisions.


How can you benefit?
As proven by this study, ‘Last Click’ measurement can significantly understate the true effectiveness of search advertising. Advertisers should look to assess the performance of non-brand terms using additional metrics beyond ‘Last Click’ conversions. For example, advertisers should review the new first click conversions and assist metrics available in AdWords and Google Analytics. Ideally, advertisers will design and carry out experiments of their own to understand how non-brand search works to drive sales.

Read more about AdWords Search Funnels
Read more about Google Analytics Multi-Channel Funnels

-- Anish Acharya, Industry Analyst, Google; Stefan F. Schnabl, Product Manager, Google; Gabriel Hughes, Head of Attribution, Google; and Jon Vaver, Senior Quantitative Analyst, Google contributed to this report.

[1] This result has a 95% Bayesian confidence interval of [1.17, 1.66].

Posted by Sara Jablon Moked, Google Analytics Team

Thursday, January 24, 2013

Increasing Your Analytics Productivity With UI Improvements

We’re always working on making Analytics easier for you to use. Since launching the latest version of Google Analytics (v5), we’ve been collecting qualitative and quantitative feedback from our users in order to improve the experience. Below is a summary of the latest updates. Some you may already be using, but all will be available shortly if you’re not seeing them yet. 


Make your dashboards better with new widgets and layout options



Use map, device and bar chart widgets to create a perfectly tailored dashboard for your audience. Get creative with these and produce, share and export custom dashboards that look exactly how you want, with the metrics that matter to you. We have also introduced improvements to customize the layout of your dashboards to better suit individual needs. In addition, dashboards now support advanced segments!

Get to your most frequently used reports quicker

You’ll notice we’ve made the sidebar of Google Analytics even more user-friendly, including quick access to your all-important shortcuts:


If you’re not already creating Shortcuts, read more about them and get started today. We have also enabled shortcuts for real-time reports, so you can, for example, save a view of a specific region and see its traffic in real time.

Navigate to recently used reports and profiles quicker with Recent History


Ever browse around Analytics and want to go back to a previous report? Instead of digging for it, use Recent History to jump straight back.

Improving search functionality



Better Search allows you to search across all reports, shortcuts and dashboards all at once to find what you need.

Keyboard shortcuts

In case you've never seen them, Google Analytics does have some keyboard shortcuts. Be sure you’re using them to move around faster. Here are a few useful ones:

Search: s or / (open the quick search list)
Account list: Shift + a (open the quick account list)
Set date range: d + t (set the date range to today)
On-screen guide: Shift + ? (view the complete list of shortcuts)

Easier YoY Date Comparison


The new quick-selection option lets you choose “previous year” to prefill the date range, making year-over-year analysis faster.

Export to Excel & Google Docs 

Exporting keeps getting better, and now includes native Excel XLSX support and Google Docs:


We hope you find these improvements useful. As always, feel free to let us know how we can make Analytics even easier to use, so you can get the information you need and take action faster.

Posted by Nikhil Roy, Google Analytics Team

lastminute.com Finds That Traditional Conversion Tracking Significantly Undervalues Non-brand Search

Understanding the true impact of advertising

Advertisers have a fundamental need to understand the effectiveness of their advertising. Unfortunately, determining the true impact of advertising on consumer behavior is deceptively difficult. This difficulty in measurement is especially applicable to advertising on non-brand (i.e. generic) search terms, where ROI may be driven indirectly over multiple interactions that include downstream brand search activities. Advertising effectiveness is often estimated using standard tracking processes that rely upon ‘Last Click’ attribution. However, ‘Last Click’ based tracking can significantly underestimate the true value of non-brand search advertising. This fact was recently demonstrated by lastminute.com, a leading travel brand, using a randomized experiment - the most rigorous method of measurement.




Experimental Approach

lastminute.com recently conducted an online geo-experiment to measure the effectiveness of their non-brand search advertising on Google AdWords.  The study included offline and online conversions.  The analysis used a mathematical model to account for seasonality and city-level differences in sales.  Cities were randomly assigned to either a test or a control group. The test group received non-brand search advertising during the 12 week test period, while the control group did not receive such advertising during the same period. The benefit of this approach is that it allows statements to be made regarding the causal relationship between non-brand search advertising and the volume of conversions - the real impact of the marketing spend.

Full lastminute.com case study

Findings

The results of the experiment indicate that the overall effectiveness of the non-brand search advertising is 43% greater1  than the estimate generated by lastminute.com’s standard online tracking system.

The true impact of the non-brand search advertising is significantly larger than the ‘Last Click’ estimate because it accounts for
  1. upper funnel changes in user behavior that are not visible to a ‘Last Click’ tracking system, and
  2. the impact of non-brand search on sales from online and offline channels.
This improved understanding of the true value of non-brand search advertising has given lastminute.com the opportunity to revise their marketing strategy and make better budgeting decisions.




How can you benefit?

As this study shows, ‘Last Click’ measurement can significantly understate the true effectiveness of search advertising. Advertisers should look to assess the performance of non-brand terms using additional metrics beyond ‘Last Click’ conversions. For example, advertisers should review the new first click conversions and assist metrics available in AdWords and Google Analytics. Ideally, advertisers will design and carry out experiments of their own to understand how non-brand search works to drive sales.

Read more on AdWords Search Funnels
Read more on Google Analytics Multi Channel Funnels

Anish Acharya, Industry Analyst, Google; Stefan F. Schnabl, Product Manager, Google; Gabriel Hughes, Head of Attribution, Google; and Jon Vaver, Senior Quantitative Analyst, Google contributed to this report.

1 This result has a 95% Bayesian confidence interval of [1.17, 1.66].

Posted by Jeremy Tully, Inside AdWords Crew

Wednesday, January 23, 2013

Multi-armed Bandit Experiments

This article describes the statistical engine behind Google Analytics Content Experiments. Google Analytics uses a multi-armed bandit approach to managing online experiments. A multi-armed bandit is a type of experiment where:
  • The goal is to find the best or most profitable action
  • The randomization distribution can be updated as the experiment progresses
The name "multi-armed bandit" describes a hypothetical experiment where you face several slot machines ("one-armed bandits") with potentially different expected payouts. You want to find the slot machine with the best payout rate, but you also want to maximize your winnings. The fundamental tension is between "exploiting" arms that have performed well in the past and "exploring" new or seemingly inferior arms in case they might perform even better. There are highly developed mathematical models for managing the bandit problem, which we use in Google Analytics content experiments.

This document starts with some general background on the use of multi-armed bandits in Analytics. Then it presents two examples of simulated experiments run using our multi-armed bandit algorithm. It then addresses some frequently asked questions, and concludes with an appendix describing computational and theoretical details.

Background

How bandits work

Twice per day, we take a fresh look at your experiment to see how each of the variations has performed, and we adjust the fraction of traffic that each variation will receive going forward. A variation that appears to be doing well gets more traffic, and a variation that is clearly underperforming gets less. The adjustments we make are based on a statistical formula (see the appendix if you want details) that considers sample size and performance metrics together, so we can be confident that we’re adjusting for real performance differences and not just random chance. As the experiment progresses, we learn more and more about the relative payoffs, and so do a better job in choosing good variations.
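A minimal sketch of this kind of update, assuming binomial conversions with uniform Beta(1, 1) priors and Monte Carlo sampling (the production formula may differ in its details):

```python
import random

def serving_weights(successes, trials, draws=10000, seed=0):
    """Estimate, for each arm, the posterior probability that it is the best.

    Sketch only: assumes binomial conversions with a Beta(1, 1) prior on
    each arm's conversion rate, and estimates the probability by sampling.
    """
    rng = random.Random(seed)
    wins = [0] * len(successes)
    for _ in range(draws):
        # Draw one plausible conversion rate per arm from its Beta posterior.
        samples = [rng.betavariate(s + 1, t - s + 1)
                   for s, t in zip(successes, trials)]
        wins[samples.index(max(samples))] += 1
    return [w / draws for w in wins]

# e.g. original: 40/1000 conversions (4%); variation: 50/1000 (5%).
weights = serving_weights([40, 50], [1000, 1000])
```

An arm that has performed better so far wins more of the posterior draws, so it receives a larger share of tomorrow's traffic, which is exactly the exploit/explore balancing described above.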

Benefits

Experiments based on multi-armed bandits are typically much more efficient than "classical" A-B experiments based on statistical-hypothesis testing. They’re just as statistically valid, and in many circumstances they can produce answers far more quickly. They’re more efficient because they move traffic towards winning variations gradually, instead of forcing you to wait for a "final answer" at the end of an experiment. They’re faster because samples that would have gone to obviously inferior variations can be assigned to potential winners. The extra data collected on the high-performing variations can help separate the "good" arms from the "best" ones more quickly.
Basically, bandits make experiments more efficient, so you can try more of them. You can also allocate a larger fraction of your traffic to your experiments, because traffic will be automatically steered to better performing pages.

Examples

A simple A/B test

Suppose you’ve got a conversion rate of 4% on your site. You experiment with a new version of the site that actually generates conversions 5% of the time. You don’t know the true conversion rates of course, which is why you’re experimenting, but let’s suppose you’d like your experiment to be able to detect a 5% conversion rate as statistically significant with 95% probability. A standard power calculation1 tells you that you need 22,330 observations (11,165 in each arm) to have a 95% chance of detecting a .04 to .05 shift in conversion rates. Suppose you get 100 visits per day to the experiment, so the experiment will take 223 days to complete. In a standard experiment you wait 223 days, run the hypothesis test, and get your answer.
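For readers who prefer Python, the footnoted power.prop.test calculation can be reproduced with the standard two-proportion sample-size formula (no continuity correction), using only the standard library:

```python
from statistics import NormalDist

def sample_size_per_arm(p1, p2, alpha=0.05, power=0.95):
    """Per-arm sample size for detecting a shift from p1 to p2.

    This mirrors the formula behind R's power.prop.test, which the
    article used for its power calculations.
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    num = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
           + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5)
    return (num / abs(p1 - p2)) ** 2

n = sample_size_per_arm(0.04, 0.05)   # just under 11,166; the article reports 11,165 per arm
```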

Now let’s manage the 100 visits each day through the multi-armed bandit. On the first day about 50 visits are assigned to each arm, and we look at the results. We use Bayes' theorem to compute the probability that the variation is better than the original2. One minus this number is the probability that the original is better. Let’s suppose the original got really lucky on the first day, and it appears to have a 70% chance of being superior. Then we assign it 70% of the traffic on the second day, and the variation gets 30%. At the end of the second day we accumulate all the traffic we’ve seen so far (over both days), and recompute the probability that each arm is best. That gives us the serving weights for day 3. We repeat this process until a set of stopping rules has been satisfied (we’ll say more about stopping rules below).

Figure 1 shows a simulation of what can happen with this setup. In it, you can see the serving weights for the original (the black line) and the variation (the red dotted line), essentially alternating back and forth until the variation eventually crosses the line of 95% confidence. (The two percentages must add to 100%, so when one goes up the other goes down). The experiment finished in 66 days, so it saved you 157 days of testing.
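The day-by-day loop can be simulated directly. This is a simplified sketch, with uniform Beta(1, 1) priors, Monte Carlo serving weights, and no two-week minimum, so its numbers will not match the figures exactly:

```python
import random

def simulate_bandit(p_orig=0.04, p_var=0.05, visits_per_day=100,
                    confidence=0.95, draws=2000, max_days=365, seed=1):
    """Day-by-day sketch of the two-armed bandit loop described above."""
    rng = random.Random(seed)
    rates = [p_orig, p_var]
    s, t = [0, 0], [0, 0]          # successes and trials per arm
    weights = [0.5, 0.5]
    for day in range(1, max_days + 1):
        # Split today's traffic according to yesterday's weights.
        n0 = round(visits_per_day * weights[0])
        for arm, visits in enumerate([n0, visits_per_day - n0]):
            t[arm] += visits
            s[arm] += sum(rng.random() < rates[arm] for _ in range(visits))
        # Recompute the posterior probability that each arm is best.
        wins = [0, 0]
        for _ in range(draws):
            samples = [rng.betavariate(s[a] + 1, t[a] - s[a] + 1)
                       for a in range(2)]
            wins[samples.index(max(samples))] += 1
        weights = [w / draws for w in wins]
        if max(weights) >= confidence:
            break
    return day, weights.index(max(weights))

days, winner = simulate_bandit()
```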




Figure 1. A simulation of the optimal arm probabilities for a simple two-armed experiment. These weights give the fraction of the traffic allocated to each arm on each day.

Of course this is just one example. We re-ran the simulation 500 times to see how well the bandit fares in repeated sampling. The distribution of results is shown in Figure 2. On average the test ended 175 days sooner than the classical test based on the power calculation. The average savings was 97.5 conversions.





Figure 2. The distributions of the amount of time saved and the number of conversions saved vs. a classical experiment planned by a power calculation. Assumes an original with 4% CvR and a variation with 5% CvR.

But what about statistical validity? If we’re using less data, doesn’t that mean we’re increasing the error rate? Not really. Out of the 500 experiments shown above, the bandit found the correct arm in 482 of them. That’s 96.4%, which is about the same error rate as the classical test. There were a few experiments where the bandit actually took longer than the power analysis suggested, but only in about 1% of the cases (5 out of 500).

We also ran the opposite experiment, where the original had a 5% success rate and the variation had 4%. The results were essentially symmetric. Again the bandit found the correct arm 482 times out of 500. The average time saved relative to the classical experiment was 171.8 days, and the average number of conversions saved was 98.7.

Stopping the experiment

By default, we force the bandit to run for at least two weeks. After that, we keep track of two metrics.
The first is the probability that each variation beats the original. If we’re 95% sure that a variation beats the original then Google Analytics declares that a winner has been found. Both the two-week minimum duration and the 95% confidence level can be adjusted by the user.

The second metric that we monitor is the "potential value remaining in the experiment", which is particularly useful when there are multiple arms. At any point in the experiment there is a "champion" arm believed to be the best. If the experiment ended "now", the champion is the arm you would choose. The "value remaining" in an experiment is the amount of increased conversion rate you could get by switching away from the champion. The whole point of experimenting is to search for this value. If you’re 100% sure that the champion is the best arm, then there is no value remaining in the experiment, and thus no point in experimenting. But if you’re only 70% sure that an arm is optimal, then there is a 30% chance that another arm is better, and we can use Bayes’ rule to work out the distribution of how much better it is. (See the appendix for computational details).

Google Analytics ends the experiment when there’s at least a 95% probability that the value remaining in the experiment is less than 1% of the champion’s conversion rate. That’s a 1% improvement, not a one percentage point improvement. So if the best arm has a conversion rate of 4%, then we end the experiment if the value remaining in the experiment is less than .04 percentage points of CvR.
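Under the same Beta-posterior assumptions, the value-remaining metric can be sketched as a Monte Carlo computation (a simplified illustration, not the production formula):

```python
import random

def value_remaining(successes, trials, draws=10000, seed=0):
    """95th percentile of the relative improvement available over the champion.

    Each posterior draw yields (best rate - champion rate) / champion rate;
    assumes Beta(1, 1) priors on each arm's conversion rate.
    """
    rng = random.Random(seed)
    # Champion = arm with the highest observed conversion rate.
    rates = [s / t for s, t in zip(successes, trials)]
    champ = rates.index(max(rates))
    vals = []
    for _ in range(draws):
        samples = [rng.betavariate(s + 1, t - s + 1)
                   for s, t in zip(successes, trials)]
        vals.append((max(samples) - samples[champ]) / samples[champ])
    vals.sort()
    return vals[int(0.95 * draws)]

# Stop when the 95th percentile falls below 0.01, i.e. below 1% of the
# champion's conversion rate as a relative improvement.
v = value_remaining([400, 520], [10000, 10000])
```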

Ending an experiment based on the potential value remaining is nice because it handles ties well. For example, in an experiment with many arms, it can happen that two or more arms perform about the same, so it does not matter which is chosen. You wouldn’t want to wait until you had identified "the" optimal arm, because with two arms effectively tied there is no single optimal arm to find. You just want to run the experiment until you’re sure that switching arms won’t help you very much.

More complex experiments

The multi-armed bandit’s edge over classical experiments increases as the experiments get more complicated. You probably have more than one idea for how to improve your web page, so you probably have more than one variation that you’d like to test. Let’s assume you have 5 variations plus the original. You’re going to compare each variation to the original, so we need some sort of adjustment to account for multiple comparisons. The Bonferroni correction is an easy (if somewhat conservative) adjustment, which can be implemented by dividing the significance level of the hypothesis test by the number of comparisons. Thus we do the standard power calculation with a significance level of .05 / (6 - 1), and find that we need 15,307 observations in each arm of the experiment. With 6 arms that’s a total of 91,842 observations. At 100 visits per day the experiment would have to run for 919 days (over two and a half years). In real life it usually wouldn’t make sense to run an experiment for that long, but we can still do the thought experiment as a simulation.
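The Bonferroni-adjusted sample size can be checked with the same two-proportion formula that underlies R's power.prop.test:

```python
from statistics import NormalDist

def sample_size_per_arm(p1, p2, alpha, power=0.95):
    # Two-proportion sample-size formula (no continuity correction),
    # matching R's power.prop.test.
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    num = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
           + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5)
    return (num / abs(p1 - p2)) ** 2

# Bonferroni: split alpha = .05 across the 5 comparisons against the original.
n = sample_size_per_arm(0.04, 0.05, alpha=0.05 / 5)   # ~15,307 per arm
total = 6 * n                                          # ~91,842 observations in all
```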

Now let’s run the 6-arm experiment through the bandit simulator. Again, we will assume an original arm with a 4% conversion rate, and an optimal arm with a 5% conversion rate. The other 4 arms include one suboptimal arm that beats the original with conversion rate of 4.5%, and three inferior arms with rates of 3%, 2%, and 3.5%. Figure 3 shows the distribution of results. The average experiment duration is 88 days (vs. 919 days for the classical experiment), and the average number of saved conversions is 1,173. There is a long tail to the distribution of experiment durations (they don’t always end quickly), but even in the worst cases, running the experiment as a bandit saved over 800 conversions relative to the classical experiment.





Figure 3. Savings from a six-armed experiment, relative to a Bonferroni adjusted power calculation for a classical experiment. The left panel shows the number of days required to end the experiment, with the vertical line showing the time required by the classical power calculation. The right panel shows the number of conversions that were saved by the bandit.

The cost savings are partially attributable to ending the experiment more quickly, and partly attributable to the experiment being less wasteful while it is running. Figure 4 shows the history of the serving weights for all the arms in the first of our 500 simulation runs. There is some early confusion as the bandit sorts out which arms perform well and which do not, but the very poorly performing arms are heavily downweighted very quickly. In this case, the original arm has a "lucky run" to begin the experiment, so it survives longer than some other competing arms. But after about 50 days, things have settled down into a two-horse race between the original and the ultimate winner. Once the other arms are effectively eliminated, the original and the ultimate winner split the 100 observations per day between them. Notice how the bandit is allocating observations efficiently from an economic standpoint (they’re flowing to the arms most likely to give a good return), as well as from a statistical standpoint (they’re flowing to the arms that we most want to learn about).





Figure 4. History of the serving weights for one of the 6-armed experiments.

Figure 5 shows the daily cost of running the multi-armed bandit relative to an "oracle" strategy of always playing arm 2, the optimal arm. (Of course this is unfair because in real life we don’t know which arm is optimal, but it is a useful baseline.) On average, each observation allocated to the original costs us .01 of a conversion, because the conversion rate for the original is .01 less than arm 2. Likewise, each observation allocated to arm 5 (for example) costs us .03 conversions because its conversion rate is .03 less than arm 2. If we multiply the number of observations assigned to each arm by the arm’s cost, and then sum across arms, we get the cost of running the experiment for that day. In the classical experiment, each arm is allocated 100 / 6 visits per day (on average, depending on how partial observations are allocated). It works out that the classical experiment costs us 1.333 conversions each day it is run. The red line in Figure 5 shows the cost to run the bandit each day. As time moves on, the experiment becomes less wasteful and less wasteful as inferior arms are given less weight.





Figure 5. Cost per day of running the bandit experiment. The constant cost per day of running the classical experiment is shown by the horizontal dashed line.

1The R function power.prop.test performed all the power calculations in this article.
2See the appendix if you really want the details of the calculation. You can skip them if you don’t.

Posted by Steven L. Scott, PhD, Sr. Economic Analyst, Google

Tuesday, January 22, 2013

Google Tag Manager: Technical Implementation Deep Dive Webinar

Just three months ago we launched Google Tag Manager to make it easier for marketers (or anyone in the organization) to add and update website tags, such as conversion tracking, site analytics, remarketing, and more. The tool provides an easy-to-use interface with templates for tags from Google and templates for other vendors’ tags, as well as customizable options for all your tagging needs. This minimizes site-coding requirements and simplifies the often error-prone tagging process.

In November, we held an introductory webinar (watch the recording here, plus read Q&A), and next week we’re holding a second webinar going beyond the basics and diving into the technical details and best practices for how to implement Google Tag Manager. This webinar will be hosted by Rob Murray, our Engineering Manager, and Dean Glasenberg, Sales Lead.

Webinar: Google Tag Manager Technical Implementation
Date: Tuesday, January 29, 2013
Time: 10 am PST / 1pm EST / 6pm GMT
Register here: http://goo.gl/17OFd
Recommended Audience: IT or webmaster team members

During the webinar we’ll go through a step-by-step process for implementation, and we’ll cover some more advanced topics (e.g. deploying more complex tags). We’ll introduce the role of a Data Layer and use it in conjunction with Events to show how you can set up a site to gather detailed usage metrics, for example, to help you understand why users are dropping off at a specific page.  We’ll also show you how common browser Developer Tools, as well as the Google Tag Manager Debug mode, can be used to help verify that your tags are working correctly (and fix them if they’re not).

Hope to see you on Tuesday!

Monday, January 21, 2013

Optimistic Forecast for FinTech Providers


A new report, released today by the William Mills Agency, reveals that spending by financial institutions is recovering as the economy and industry rebound. The tenth annual ‘Bankers as Buyers’ study shares in-depth insights and research from more than thirty individuals and organizations regarding the technology, services and solutions banks and credit unions are expected to invest in during 2013.


This report is a compilation of viewpoints from many of the most influential research and fintech support institutions in the country and is available as a free download here.


In this year's report, IDC Financial Insights projects that technology spending will increase to $57 billion, with much of the spending expected to occur in the ‘second tier’ of financial institutions ($1 billion - $10 billion) as opposed to the largest banks.

"As technology continues to be central to customer interactions and an improved customer experience, we are constantly reminded that technology in not a banking department, but is everywhere . . . including in the hands of consumers”, states Scott Mills, president of the Williams Mills Agency. “Demographic and behavioral changes, combined with changing technology preferences and the need for improved trust and brand loyalty will force banks and credit unions to evaluate the role of technology in the delivery of services", adds Mills.

Additional findings of this year’s ‘Bankers as Buyers’ report include:
      • A total of 14,210 financial institutions make up today’s depository landscape, which is down 3.7 percent from 2011 according to the FDIC and CUNA.
      • While much of the focus on payments technology is on mobile, organizations are also looking at improvements in online payments, ACH, P2P and prepaid cards to attract customers.
      • Mobile banking gained a stronger foothold in 2012, as FIs strived to meet increasing consumer demand for anytime, anywhere financial services.
      • Consumer mobile banking is now used by 33% of mobile consumers according to Javelin Strategy and Research.
      • According to the 2012 KPMG Community Banking Outlook Survey, 47 percent of responding institutions identified regulatory and legislative pressures as the most significant barrier to growth over the upcoming year.
      • Raymond James predicts North American IT spending will continue to grow at a relatively modest three-year compound annual growth rate of 3.1 percent.
      • Branch/teller capture will have a 98 percent expected adoption rate in 2013 and 2014 according to Celent.
      • Cloud computing has seen rapid acceptance, with many banks inquiring about alternative cloud strategies, according to Dan Holt, president of CSI.
      • Being able to leverage ‘big data’ will be increasingly important to profitably serving both retail and small business customers according to Jim Swift, CEO of Cortera.
      • Mobile Remote Deposit Capture (RDC) is being considered by 80 percent of financial institutions according to Celent.
Spending Outlook

As mentioned above, IDC Financial Insights expects North American financial institution technology spending to increase to $57 billion, with the largest financial organizations seeing slower growth rates than their smaller counterparts. This trend is expected to continue in 2014 and 2015 as shown below.



This post recaps some of the spending highlights from the 'Bankers as Buyers' report, including those in the areas of mobile banking, compliance and security, and payments. Additional areas of spending covered in significant detail in the 'Bankers as Buyers' study include:
      • Analytics/Big Data
      • Small Business
      • Branch Technology
      • Cloud Computing
      • Community Banking
      • Loyalty Programs
      • Personal Financial Management (PFM)

Mobile Spending

This year's report emphasizes that, with the penetration and use of smartphones and tablets continuing to increase, mobile banking technology is expected to impact all aspects of technology spending in financial services in the coming years. “Mobile payments are a major driver behind mobile banking and a potential customer retention and revenue tool for financial institutions”, states Richard Crone, founder of Crone Consulting, LLC.

Ron Shevlin, senior analyst at Aite Group, agrees, saying, “Aite Group anticipates that mobile banking users will triple between 2012 and 2016 in the U.S.” He continues, “Tablets will become financial management devices, and smartphones will become financial transaction devices. FIs need to invest accordingly.”

Many others in the ‘Bankers as Buyers’ study point to tablet growth as the foundation for the next phase of mobile investment by banks and credit unions. With growth of this device category far surpassing that of smartphones, financial institutions are currently behind the eight ball, lagging in both offerings and functionality. In fact, some mid-tier banks still do not offer a customized application for tablets, deferring instead to a repurposed mobile or web application.



According to David Peterson, executive vice president for Q2 in Austin, TX and a report contributor, “The key for financial institution executives is to understand and leverage the tablet, smartphone and other devices that customers use, and present them with the right capabilities for the right device.”

Additional areas of technology investment for mobile in 2013 will be focused on remote deposit capture capabilities (beyond check capture), improved mobile alert functionality and voice recognition.

The increased technology investment by mid-tier institutions may reflect a need to catch up: many community banks have lagged their larger counterparts and credit unions in mobile banking offerings. With mobile banking becoming the primary way many consumers interact with their bank on a transactional basis, hesitation to respond to consumer behavioral trends could have a significant impact on customer acquisition growth in the future.

Compliance and Security

Compliance and security costs continue to put a strain on financial institutions of all sizes according to the study. Beyond the extensive investment in human resources required to keep abreast of requirements, data management tools are being used to comply with new regulations and to monitor all areas of the organization for potential security breaches.

Some institutions are adjusting to the new regulatory reality, however, with some costs seemingly being reduced over time. According to report contributor Jimmy Sawyers from Sawyers & Jacobs, LLC, “Some institutions are getting innovative (around the cost of compliance). They are starting to do more with less and adapting to the new playing field.”

Unfortunately, the same can’t be said for security costs, which are increasing and remain a very high priority for all institutions given the growing threat from a highly creative fraud community. All is not bad news on the security front, however: according to Javelin Research, the report indicates a direct correlation between superior security and customer loyalty. In other words, the investment in security may have a consumer payback.

Payments Technology

While the majority of the focus around payments technology is on mobile, financial institutions are also looking to improve online payments, ACH, P2P and are spending funds to develop prepaid offerings according to this year’s report.

“The challenge banks have is in trying to better understand how people will transact in the future”, said David Wilkes, CEO of Fuze Networks and one of the report’s contributors. “The reality is that there is really no such thing as an ‘unbanked’ consumer.” While some may interact with their financial provider in a non-traditional manner, there is some form of payments system supporting virtually every consumer.

While many theories of how the payments marketplace will finally settle exist, the competition (and the need to keep up with new entrants and innovation from traditional players) will require significant investment to support the payments process.

“Payments will continue to evolve”, says John Balose from ORCC. “Fifteen years ago, few people were using online payments. Mobile solutions have changed everything. It’s a very fractured market.” According to the report, there are currently nearly 50 digital wallet providers, with more expected to emerge.

It is clear from the report that financial organizations may want to opt for playing a game of ‘payments roulette’, placing smaller bets on a variety of potential outcomes, hoping to hit the jackpot when the competitive dust settles. One thing is clear, however. Financial institutions should not sit on the sideline and wait for a winner. By then it may be too late.

Additional Insights

Beyond the insights collected for the development of this year’s ‘Bankers as Buyers’ report, William Mills provides four feature articles from some of the best minds in the FI space. These must-read articles, included in the free download, are:

‘U.S. Banks and Core Replacement’ - Jeanne Capachin

‘Technology in Wealth Management: Opportunity or Threat?’ – JP Nicols

‘Mobile Payments Offer a Variety of Payment Opportunities’ – Richard Crone and Heidi Liebenguth

‘Top Ten Trends Impacting Bank Technology for 2013’ – Jimmy Sawyers


FREE Downloadable Report

Bankers as Buyers 2013: William Mills Agency (January, 2013) 


Contributors to Report

Aite Group, American Banker, BankInfoSecurity, Banno, Jeanne Capachin, CARDFREE, Celent, Clientific, Comscore, Cortera, CSI, Credit Union National Association, Crone Consulting, Finovate Group, Federal Deposit Insurance Corporation, Federal Reserve Bank of Cleveland, First Annapolis Consulting, Fuze Networks, IDC Financial Insights, Jack Henry Banking, Javelin Strategy and Research, KPMG, Mercator Advisory Group, MoneyDesktop, Morgan Stanley, Online Banking Report, ORCC, ProfitStars, Q2 Banking, Raymond James, Sawyers & Jacobs, Symitar, Wells Fargo and Zions Bank.