Traffick - The Business of Search Engines & Web Portals

Archive for March, 2011

Google Suggest No Longer Riddled with ‘Scam’

Thursday, March 31st, 2011

More than a year ago, we covered the growing problem of thousands of legitimate businesses being implicitly accused of being "scams" by virtue of the self-reinforcing nature of autocomplete in Google Suggest. This created a nasty snowball effect: consumers would latch onto that word wherever it showed up on the list and click on it, virtually guaranteeing that it would bubble up to second or third on the list. ("What? JetBlue scam!? I'd better find out about this!!") Moreover, opportunists (essentially networks of SEO's who created websites with ads and affiliate links around such search phrases) both seeded and furthered the problem.

It wasn’t fair to many of the businesses involved. So belatedly, Google has apparently blocked the word scam from appearing in Google Suggest.

Does that mean Google is censoring what you see? Yes, but that doesn’t mean Google is censoring search results. It’s censoring what it “suggests” based on your initial keystrokes, and that is generally accepted to be driven by keyphrase popularity. This doesn’t wildly distort your search experience. You can still search for the phrase “jetblue scam,” or any-other-company scam, if you want to.
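
To make the mechanism concrete: nobody outside Google knows how this is actually implemented, but conceptually it amounts to a display-layer filter on the suggestion list, not on the underlying index or the results. A toy sketch in Python (the blocklist and function name are my own invention, not Google's):

```python
# Toy illustration only, not Google's actual code: suppress a blocked term from
# autocomplete suggestions while leaving search results themselves untouched.

BLOCKED_TERMS = {"scam"}  # hypothetical blocklist

def filter_suggestions(candidates):
    """Drop any popularity-ranked suggestion that contains a blocked term."""
    return [phrase for phrase in candidates
            if not any(term in phrase.lower().split() for term in BLOCKED_TERMS)]

# Suggestions stay ranked by query popularity; only the display is censored.
print(filter_suggestions(["jetblue flights", "jetblue scam", "jetblue baggage fees"]))
# -> ['jetblue flights', 'jetblue baggage fees']
```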

As some have pointed out, this doesn’t put an end to the issue, and some online reputations are bound to be besmirched unfairly with words like ripoff, scandal, lawsuit, etc. Some individuals will have to fight off unfair character assassinations as well, and Google will have to deal with its potential role in spreading perceptions of individuals being “judged guilty until proven innocent”.

I think it’s likely that Google will quietly but heavily censor Google Suggest for certain accusatory or inflammatory words.

What the Radian6 Acquisition Really Means

Wednesday, March 30th, 2011

Radian6, a Canadian social media monitoring startup, has been acquired by Salesforce for $320 million, $250 million of that in cash. This is a relatively rare slam dunk for the Canadian venture capital community (Radian6's investors include BCE Capital, Brightspark, and Summerhill Ventures), which often has difficulty finding enough domestic pure digital media and software deals to assess.

Like many, I can't comment directly on the quality of the platform or the ROI it provides to customers. More than anything, the company seems to have established a lead and momentum by getting into the right space at the right time, and then executing brilliantly. The analysis we're hearing in the blogosphere is that they are part of "helping companies get more social". Partly true.

But what they really are is Google 2.0.

I don’t mean they were poised to become the next Google. Google is worth north of $150 billion.

But Google Search, intellectually, is founded on PageRank. And PageRank was an innovation that sought to solve, in part, the difficulty of assessing reputation online. Google’s solution was relatively narrow in 1998, yet it was still a huge leap forward from the spam-ridden previous generation of entrants in the web search game.
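
For readers who want the intuition behind that innovation, here is a minimal power-iteration sketch of PageRank over a toy link graph. The graph, damping factor, and iteration count are illustrative defaults, not Google's production values.

```python
# Minimal PageRank sketch: each page's rank is repeatedly redistributed along
# its outbound links, so a link acts as a transferable vote of reputation.
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links out to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                continue  # dangling pages simply pass nothing along in this sketch
            share = damping * rank[page] / len(outlinks)
            for target in outlinks:
                new_rank[target] += share
        rank = new_rank
    return rank

toy_web = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
print(pagerank(toy_web))
```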

Radian6, and companies like it, take that slice of what Google is and get heavily involved in mapping the future. Who’s talking about my company? If you were to rate how much reputation we have, and what type of reputation we have, as if you were some kind of neutral arbiter, what would our score be? How can we do better?

Consumers — end users — are ultimately the ones gaining massive amounts of control as the old corporate bullhorn gets inverted. It is companies and vendors (and people who are selling themselves) who will be subjecting themselves to the court of public opinion. Online reputation is complex. Tools to measure it — Google included — will become more complex.

Investments in adjusting to this new consumer-empowerment “empire” make a lot more sense than continuing to pour all the same media buying dollars into the same old bullhorn.

As companies like Radian6 work on understanding the complexities of reputation, one day, collectively they may surpass today’s search engines. Or they’ll be working alongside search engines, patenting and pioneering whole new ways of curating, listening, and rating vendors, companies, and self-promoters.

The complexities of measuring and acting upon measures of corporate reputation are 100,000X more complex today than when Google got started. These principles will soon apply to businesses of all sizes in all verticals — not just “corporates”. They’ll also apply to individuals, whether they’re subject matter experts, people seeking dates, job applicants, or students applying to Ivy League colleges.

Companies like Radian6 address just a portion of that addressable market. They work alongside the “places” and “environments” where reputation is created — whether that be the open Web, or Google, Twitter, Facebook, or Yelp. What they have accomplished to date is just the tip of the iceberg.

Focus Your SEO More on Revenue

Tuesday, March 29th, 2011

To this day, many SEO’s are over-focused on keyword rank, to the exclusion of other analytics. Measures of effectiveness of organic site visitors — especially revenue conversions — should take up much more of your mental bandwidth in discussing your site’s SEO, especially if it’s an ecommerce site.

In many situations, it can be tempting to cater to what search engines and users "really" want: information, resources, tools, tips, widgets, templates, and general expressions of your giving nature and good citizenship. Logically, this makes sense. You can often rank well using these methods. Resources can act as linkbait.

But what happens when you log into your Analytics and sort through the top 500 search referral terms, sorted by clicks? Then do the same, sorting by revenue? Go back 6, 12, or 24 months.

You may find something shocking. Your “resource” area and your “free widget” that are responsible for all that attention and traffic may be generating a big fat goose egg in terms of sales.

You console yourself with the indirect effect. With the brand boost. Because someone’s got your free widget or PDF on their desktop, the sales cycle will churn slowly, they will like you, and eventually come back and buy something.

But what do the numbers say? Sure, it may be hard to attribute perfectly when someone's buying from you 12-18 months down the road. But you know what? Some analytics packages can attribute those conversions even with such a long time lag. If you put them all together and the result is still zero, then it's close enough to zero to be telling you something. The "helpful information" doesn't convert!

Meanwhile, littered throughout your other stats are incredibly high-converting, consistent revenue search referral queries married to product and product category landing pages. It’s harder to get these to rank, but not impossible. The search volumes are lower, so you can’t brag about your traffic. All you do is make money.

This is not an uncommon scenario.

Have you been assuming that your informational and resource-offering outreach efforts have been a great SEO strategy that will convert to revenue “indirectly, somehow,” because “you hear a lot of great things from people about them”?

In the end it may be helpful to remind yourself that as an online business, you are either a resource or a store. Granted, maybe you're really a bit of both. And sure, it might make sense to delight customers, gain positive PR (public relations), and pump up your stats by offering some resources. But don't kid yourself too much: the store parts relate tightly to purchase intent, while all too often the resource parts bear little or no relationship to it.

For starters, stop sorting your stats in order of clicks. And care about rank reports only if you’re pulling up revenue reports at the same time.
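
A rough sketch of that exercise, assuming you have exported query-level data to a CSV; the file name and column names (query, clicks, revenue) are hypothetical and will differ by analytics package:

```python
# Sketch: contrast "top queries by clicks" with "top queries by revenue".
# Assumes a hypothetical export with columns: query, clicks, revenue.
import csv

with open("organic_query_report.csv", newline="") as f:
    rows = list(csv.DictReader(f))

for row in rows:
    row["clicks"] = int(row["clicks"])
    row["revenue"] = float(row["revenue"])

by_clicks = sorted(rows, key=lambda r: r["clicks"], reverse=True)[:500]
by_revenue = sorted(rows, key=lambda r: r["revenue"], reverse=True)[:500]

# The interesting part: traffic magnets that never sell anything.
zero_revenue_traffic = [r["query"] for r in by_clicks if r["revenue"] == 0]
print("High-click, zero-revenue queries:", zero_revenue_traffic[:20])
print("Top revenue queries:", [r["query"] for r in by_revenue[:20]])
```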

Your Paid Search Campaign: It’s An Asset (Literally)

Tuesday, March 8th, 2011

Can an advertising campaign be an asset?

I don’t mean asset as in “Say, Biff, you’re a real asset to Norelco. Another quarterly sales increase, and thanks for taking care of my dog in July!”

I recently stumbled on this idea posed as an accounting question in places like Harvard Business School (with thanks to Philip D. Broughton, What They Teach You at Harvard Business School: My Two Years Inside the Cauldron of Capitalism). Could a medical company list its late-night infomercial campaign on its books as an asset, because it had data about the predictable response attributable to the campaign – based on tailored 800 numbers? You would have to treat regular TV ads as an expense. But in the Harvard Business School case, some interpreters of tax laws believed that “where the revenue from an ad was measurable, you could treat it as an asset and depreciate it over time.”

Imagine that!

I have no comment on tax laws or accounting strategies, of course. But as a thought exercise in examining the (even metaphorical) assets a company invests in so that it can get wind under its wings, this is an immensely useful concept.

If an infomercial campaign could be treated as an asset, what about a keyword search campaign where the predicted cost per acquisition and response behavior is spread across thousands of keywords and hundreds of ads, with statistical confidence in the outcomes increasing over time? It’s not a stretch to suggest that the keyword-driven digital ad campaign is not only an asset, but a response engine that is much more versatile and diversified, and often more reliable than a simple infomercial campaign.

Super Crunchers vs. Superstitious Bystanders

There's nothing particularly new about direct marketing intelligence or "database marketing." Even the 1.0 version of this has long been a carefully guarded secret of the companies you may have received a lot of postal mail from over the years. In today's "Super Crunchers" era, data miners bring increasing levels of sophistication to the task. They use data to hone existing campaigns, but also to put feelers out for emerging or latent customer demands.

And it doesn’t begin and end with searches performed on major search engines like Google. If an online retailer is large enough (think Amazon), imagine what trends you could tap into just by looking at their site search query data.

Marketing intelligence can be laborious if you bring old-school instincts to the task. You can spend years in focus groups, or hanging out in nightclubs trying to assess what style of boots or what kinds of makeup the cool kids are wearing. And yet search marketers understand that a great deal of that information is available to us in seconds in the form of search queries – what John Battelle famously called “the database of intentions.”

Consumers never launched a full-scale revolt against the old market research methods, and they won't launch one against the new, either. The biggest sin in the marketplace isn't asking the questions or digging up the data from people marginally willing to part with it; it's not asking, not digging, and getting it wrong. Customers hate you most when you flog the wrong stuff to them.

Today, some of the world’s richest businesses are so info-driven that their actual lines of business seem almost secondary to their data optimization approaches. Seth Godin rightly notices that “Zara is an information business that happens to sell clothes.” That resonates powerfully in our world. All search marketing clients are potential Zaras. An information business that happens to sell flowers. An information business that happens to sell travel, banking, candy. You get the idea. And search marketing professionals get to watch, and participate. We understand the power of that data. We build long-lasting, highly granular advertising assets for visionary companies willing to invest in them.

Get a Head Start, or Don’t Win

If you’re into Web analytics, one thing you’ll notice is that aggressive customer acquisition through online direct response gets you a major lift in (free) direct navigation and organic search brand queries. Those benefits don’t start to really kick in until you’ve been investing in this for three to four years. But they do come eventually. One thing is for certain: if you play it too cautiously in the first couple of years, down the road, you won’t enjoy any pleasant upside surprises. You don’t reap what you don’t sow.

Über-riches await those at the top of the information heap, those who survey the whole landscape like they’re living in an Alan Parsons Project song. Godin notes that “what’s now” is companies who have “information about information” – “it’s what Facebook and Google and Bloomberg do for a living.” You can’t be them, but you can care about building your own information asset.

Granular response data offers an incredible head start – Godin again – to those who own it and understand how to use it. Head starts make fortunes. Think about the stock market. It’s illegal to trade on insider information about a takeover or disappointing assay results. You’d get too rich trading on the options with information others didn’t have fair access to. Even the watered-down (legal) version of “an information edge” drives billions of dollars in securities research every year.

A paid search campaign builds statistical confidence over time. If your investment in discovering what works turns up a factoid about the expected return on the search phrase “turnip juice stain,” you can project future returns with some degree of confidence. Spread that knowledge across many other specific data points, get a two-year lead on competitors, and the gap may never close.
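
As a back-of-the-envelope illustration, with entirely made-up numbers, here is how the confidence bracket around one phrase's projected return might be computed:

```python
# Sketch: confidence in a keyword's expected return grows as clicks accumulate.
# All figures are hypothetical; "turnip juice stain" stands in for any long-tail phrase.
import math

def conversion_interval(conversions, clicks, z=1.96):
    """95% normal-approximation confidence interval for a conversion rate."""
    rate = conversions / clicks
    margin = z * math.sqrt(rate * (1 - rate) / clicks)
    return max(rate - margin, 0.0), rate + margin

clicks, conversions = 1200, 36        # observed for one phrase over time
avg_order_value, cpc = 80.00, 0.45    # hypothetical economics
low, high = conversion_interval(conversions, clicks)

# Projected profit per 1,000 future clicks, bracketed by the interval.
for label, rate in (("pessimistic", low), ("optimistic", high)):
    profit = 1000 * (rate * avg_order_value - cpc)
    print(f"{label}: {profit:,.0f} profit per 1,000 clicks")
```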

If your funnel is highly functional, then the lead compounds into market share dominance and you begin lapping your competitors.

In light of that, what are companies doing to build, nurture, and recognize the asset that is paid search? What should they be doing? Are they doing it?

I’ll examine this in the next column.

This article originally appeared at ClickZ on July 30, 2010. Reprinted by permission.

Google’s Mechanical Turk

Wednesday, March 2nd, 2011

You can’t run a web search index as massive as Google’s without algorithms, scale, and massive automation.

You also can’t run it without editorial judgments. And “computers alone” do not determine what ranks.

You can't ever describe a composite concept like "relevance" or "quality" fully or "accurately," because these are inherently subjective qualities. All you can do, as a scientist, is develop measuring tools that capture a version of it. And in so doing, the scale used to measure becomes, for all intents and purposes, synonymous with that quality — at least for the purposes of the study, and to those consuming its outcomes.

Think of proving that “religiosity is highly correlated with a gene that also causes you to have green hair.” You could isolate that gene and figure out if this was true, but hold on: what the heck is religiosity, anyway?

It turns out the definition would be arbitrary. A group of social scientists (maybe using past literature, an expert panel, or other means) would create a weighted scale to measure it from a composite of factors, based on discoverable facts or askable questions. ("How many times did you attend church or a religious institution in the past month?" "On a scale of 1 to 10, how important do you believe it is that your choice of employment be consistent with time for religious practices?") For rigor, you might create a long list of factors; if you didn't care too much about methodology, you would just come up with a rough approximation of what we generally see as the quality of religiosity, and the scale itself would have to do in terms of being true to that concept.
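
To make the idea of a composite scale concrete, here is a toy sketch; the factors and weights are arbitrary, which is exactly the point:

```python
# Toy composite "religiosity" scale: normalize a few askable factors to 0..1,
# then combine them with (arbitrary) weights into a single score.
FACTORS = {
    "attendance_last_month": 0.5,      # count of visits, capped at 10
    "importance_rating_1_to_10": 0.3,  # survey answer
    "daily_practice": 0.2,             # 0 or 1
}

def composite_score(answers):
    normalized = {
        "attendance_last_month": min(answers["attendance_last_month"], 10) / 10,
        "importance_rating_1_to_10": answers["importance_rating_1_to_10"] / 10,
        "daily_practice": float(answers["daily_practice"]),
    }
    return sum(weight * normalized[name] for name, weight in FACTORS.items())

print(composite_score({"attendance_last_month": 4,
                       "importance_rating_1_to_10": 7,
                       "daily_practice": 1}))  # -> 0.61
```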

So what about the “quality” of the content on web pages in relation to informational queries?

As many know, Google has long employed human raters to make notes on specific pages, to help it better design the algorithm and assess where certain websites stand in relation to generally accepted notions of quality, spam or not, and so on.

Quality raters are generally asked to answer simple questions, but it can get as sophisticated as requiring them to know the difference between "thin" and "thick" affiliates. Presumably, then, they could be asked to take on still more nuanced judgments.

There is nothing to say, then, that Google doesn’t run additional pilot projects using human raters to crack down on certain areas where poor quality is creating generalized malaise and spam complaints.

And what additional qualities might they look for? In the case of the latest "Farmer" update, intended to lower the overall rankings of companies dubbed "content farms" — allegedly, producers of lower-quality, SEO-friendly written content that falls below the editorial standards of "true" editorial organizations — you could begin to feed human raters questions like: on a scale of one to five, how "content farmy" does this page feel? (I'm paraphrasing, obviously.) That data could then be fed back into algorithm design.
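
Purely as an illustration of how rater labels could feed back into algorithm design (the feature, the data, and the fitting method here are all my invention, not Google's), a toy regression might look like this:

```python
# Sketch: turn 1-to-5 "how content-farmy is this page?" rater labels into a crude
# automated signal by fitting a line against a single hypothetical page feature.

# (ads_per_word, rater_score) pairs collected from human quality raters
samples = [(0.01, 1), (0.02, 2), (0.05, 3), (0.08, 4), (0.12, 5), (0.03, 2)]

# Ordinary least squares by hand for one feature: score ~ slope * feature + intercept
n = len(samples)
mean_x = sum(x for x, _ in samples) / n
mean_y = sum(y for _, y in samples) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in samples)
         / sum((x - mean_x) ** 2 for x, _ in samples))
intercept = mean_y - slope * mean_x

def predicted_farminess(ads_per_word):
    """Apply the fitted relationship to pages the raters never saw."""
    return slope * ads_per_word + intercept

print(round(predicted_farminess(0.09), 2))
```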

Presumably, web search algorithms evolve over time to encompass many arbitrary but important factors to ensure that users find content that is as relevant and useful as possible.

But much like trends in geopolitics that result in a shift in coverage emphasis on 24-hour news channels, specific themes and trends can increasingly pose themselves as the proverbial “growing threat” to the integrity of search results. Often these are composite, hard-to-define-with-laser-precision efforts to subvert the “spirit” of search algorithms by publishers seeking to tailor content to the “letter” of those algorithms.

If, for example, it turns out you can get an unfair advantage in search engine rankings just by studying the frequencies of many highly specific "question and answer" style search queries, and you fill the "answer" void with tailored but untrustworthy, slapped-together answers, then you just might create a business around that. And a person (not a search engine) might argue that there are better sources of information generally, where that material is covered (if indirectly, or in harder-to-find places), and that a search engine's algorithm should be tweaked to give the latter a fighting chance over the former.

In that process, human quality raters could be getting new questions about how deep, rich, and authoritative a piece is. New factors could come into play, such as answer length, external validation, and just about anything you or a scientist could dream up.

Those raters might also be asked just to do a specific job in “calling out” a page or website. Similar to asking a quality rater to say if something is “spam” or not, and defining a “thin affiliate” as “spam” for said purposes, you could ask them to define whether an answer or how-to article was “low quality” or “original”, with further instructions that state that if something looks “content farmy,” then it may be low quality or unoriginal.

That human input could then be used as a shortcut for downgrading whole websites, or pages with certain qualities from those websites, at least on an “all else being equal” basis (the downgrades could be overruled by other strong quality and relevance signals).
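
A toy sketch of that "all else being equal" logic, where every signal name and threshold is hypothetical:

```python
# Sketch: a site-wide demotion learned from human ratings drags a page's score
# down unless strong page-level quality signals overrule it. Hypothetical values.

def adjusted_score(page_score, domain_penalty, page_quality_signal):
    OVERRIDE_THRESHOLD = 0.9  # strong relevance/quality signals win out
    if page_quality_signal >= OVERRIDE_THRESHOLD:
        return page_score     # the genuinely good page survives the site-wide slap
    return page_score * (1.0 - domain_penalty)

print(adjusted_score(page_score=0.72, domain_penalty=0.4, page_quality_signal=0.95))  # kept
print(adjusted_score(page_score=0.72, domain_penalty=0.4, page_quality_signal=0.55))  # demoted
```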

On the whole, this paints a picture of a fallible process that still moves in vaguely the right direction, as far as “quality” and “relevance” are concerned. It’s not perfect, but if the goal is to plug certain opportunistic SEO loopholes so we do a better job of highlighting great content, it’s a nice way to reward the producers of original, “real” content and give them the courage to stick with that process for the long term, rather than worrying about being outranked in the short-term by cynical opportunists.

Like “religiosity,” the process of determining “relevance” and “quality” is not entirely subjective, because you can create and refine the scale and the composite measuring stick that tells you who has more of it.

But certainly there is an arbitrariness to it, so that some will feel wronged by the process. Like the “highly religious” person who scores lower on the official scale of “religiosity” because they don’t live near a church and don’t have a car, some useful web pages might not fare well when the algorithm finds them sharing qualities in common with other low-quality sites. With the Web being the scale it is, there are bound to be many shortcomings in ranking algorithms, especially on “long tail,” infrequently-searched terms.

If the situation gets bad enough, the search engines need to take big shortcuts to make fewer obvious mistakes. One enormous shortcut involves looking globally at "domain authority" as a major factor in measuring the trustworthiness or quality of a website's content. While it would be nice to think that the algorithm can adjust to accurately assess the quality and helpfulness of specific pages of content, it's fair to say that it can't do a great job of that in many cases. So huge shortcuts are taken; some sites are effectively greenlighted until they turn too greedy for their own good; and so the dance continues. One day, certain sites — like The Huffington Post or Squidoo — will be flying high. The next day, they take a big hit across the board. The day after: who knows.

How “algorithmic” are those shifting fortunes? What exact mechanisms are leading to site-wide and brand-wide promotions and demotions? Many people are curious to know the details. Google is unlikely to provide specifics.

Not without cause, good-sites-gone-bad are often “slapped” by an across-the-board downgrade in response to open scheming and boasting by their owners or partners — or SEO’s who find means of becoming publishers on those sites — about how great they are at leveraging their site’s high trust to generate advertising or affiliate revenues, even for lower-quality or tailored pages. High trust where? In Google.

The parts of those sites that actually play by the rules — the real Squidoo lenses, the good articles on HuffPo — then need to be preserved so the “slap” doesn’t throw the baby out with the bathwater. Hard work, even for Google.

This puts every content-driven, advertising-driven business in a perilous position. Google is careful to describe certain directed “anti-spam,” “anti-low quality” initiatives in such a way that demonizes the “offenders”. Usually, this is in line with consumer protection, but every so often, you worry that just about any site could find itself on the wrong end of a new definition of what counts as “original” “quality” “content”.

Certainly, Yelp isn’t feeling too confident about its relationship with Google, despite its obvious leadership status in local business reviews.

And certainly, these "growing threat" sweeps should never be undertaken simply by asking whether "someone could build an entire business" around "getting unfairly high rankings in search engines." That test, of course, is subjective, and could apply to anyone.

Imagine if you built a database of filmographies and biographical information pertaining to motion pictures and their stars. Imagine if you built a user-constructed encyclopedia that had nearly definitive information on an incredible range of subjects. Imagine if you built the first major website that offered comprehensive information and user reviews of every travel destination in the world. That would give you great organic search traffic! But is it wrong? I hope not!

Of course, by any algorithmic test you can imagine — for now — these kinds of sites would rank well in Google – across tens of thousands of pages of useful content. Whole businesses could be built around them (and have been).

But in a heartbeat, this can change. Computers don’t make that decision — not on their own.

That’s the scary part.

 

