I recently attended my second session of the first day at Search Engine Strategies in NYC. Ads in a Quality Score World was overcrowded with PPC fans looking for insights from agency and engine representatives. Unlike our PPC panel at SearchFest, the participants provided helpful insights and specifics (as much as possible) into the black box known as Quality Score. Overall, the key takeaway was that you shouldn’t worry about Quality Score when developing and managing your PPC campaigns…just use the filter of the user’s experience and perspective. This is not earth-shattering for me, as that has always been my philosophy with SEO, and it has carried through to our client work at Anvil Media. For the unwashed masses, however, it may have shed a good degree of light.
Joshua Stylman of Reprise Media opened with background on Quality Score, including some of its unintended consequences: artificial PPC inflation, engines (rather than users) defining quality, and testing being penalized. As each of the engine reps communicated in response, the engines are trying not to define Quality Score as anything other than the best possible user experience. They also clearly communicated that testing is not only not penalized, but highly encouraged. This makes complete sense in retrospect, as they stand to make more money long term, but the audience (myself included) was quite skeptical, based on historical PPC campaign performance. Regardless, Stylman was the first to intimate that Quality Score puts the “marketing” back in search engine marketing, as it weeds out the posers. He encouraged the audience to focus on the holistic campaign (from keyword to conversion), and the rest of the panelists agreed.
Andrew Goodman of Page Zero Media shared a few specific examples of campaigns that performed better overall in terms of CPC and CTR but saw conversions drop sharply after Quality Score was factored in. He discussed the two different Quality Scores (one for minimum bid and the other for position) and how predictive data affects performance during testing. This came up repeatedly during the Q&A, where exact vs. broad match reared its ugly head: the engines weren’t able to directly answer what their policy was when, for example, a term like “ring” performed better than “5 carat engagement ring.” Their basic response was that they would have to look at the specific campaign, and the difference could be caused by anything, including how relevant the specific keywords were to the landing pages.
Jonathan Mendez of OTTO Digital expanded on the previous themes, including the differences between Quality Score types and criteria. Mendez encouraged the audience to “ignore the score” when developing campaigns and go with a holistic approach that meets users’ needs and expectations. A “duh” of sorts, but it certainly raised a few eyebrows in the process. His philosophy was completely validated by the engine representatives. He also dove into an example of contextual relevance in the mobile market, using a DoCoMo/Starbucks SMS messaging concept, and applied it to PPC nicely. Not surprisingly, his example of reinforcement (carrying search terms over to the landing page header and copy) generated a lift of more than 71 percent.
The engine representatives available for Q&A included Nick Fox from Google, Brian Boland from Microsoft and Gulshan Verma from Yahoo! They all said roughly the same thing: it’s all about the user experience. Companies that succeed in making their campaigns relevant will ultimately succeed when it comes to Quality Score. Furthermore, they collectively agreed that testing is a great thing and encouraged everyone to do more of it, as there is no penalty. Some of the audience shook their heads in disagreement. Fox from Google mentioned that the two Quality Scores will be merged sometime in the future, for what that’s worth. Boland from Microsoft warned advertisers to be careful when asking for personal information, and said that Microsoft plans to publish best practices for landing pages, as it, too, will move to a Quality Score model. Verma from Yahoo! intimated that Yahoo!’s Quality Score operates at a much more granular level than perhaps the other engines’. Overall, refreshing insights into the Quality Score black box.