Traffick - The Business of Search Engines & Web Portals

Archive for the ‘Search Engines’ Category

Blekko Update: Mechanical Turk Meets Algorithmic Web Search

Friday, August 13th, 2010

Blekko, the stealth search engine startup co-founded by Rich Skrenta (of ODP fame), recently launched a limited private beta. I've been able to give it a spin and can confirm that it still feels like early days for the project, but its core metaphors and functions are nailed down solidly.

As close observers now know, the service lets users apply pre-set (or custom) "slashtags" to reduce a whole-web search to a narrower slice of relevant websites. "Pre-sets" include /people, /stock, /weather, /define, and so on. These work somewhat like certain Google pre-sets, which are activated either by knowing the nomenclature or automatically, based on the style of your query. Blekko's pre-sets already go beyond some of what Google offers; as they develop, they could constitute a genuinely useful toolkit.

Some quirky areas they're working on are near and dear to the founder's heart: because he's always aimed to pitch the product to search pros and SEOs, Blekko has spent considerable time building out SEO-candy features. For example, a slashtag called /rank (and no doubt other features in development) reveals the factors in ranking for certain phrases, which would let you understand why some sites rank higher than others. Go ahead and game Blekko if you want — people won't be using it to search the whole web, so they're unlikely to include your site in a slashtag site list if it's spammy.

More germane to the spirit of the project are "topic" slashtags (from /arts to /discgolf to /yoga) and "user" slashtags (anything users customize). The idea is basically to metasearch all the sites included in the list, and none of the rest of the web. (Or call it a mini-index or a multi-site search if you will.)
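To make the metaphor concrete, here is a minimal sketch of what a slashtag boils down to conceptually: a trailing /tag on a query resolves to a curated list of hosts, and results from the whole-web index are filtered down to those hosts. This is my own illustration, not Blekko's actual implementation; the tag names and site lists below are hypothetical (the /ugc list echoes the custom slashtag I describe later in this post).

```typescript
// Conceptual sketch only: a slashtag is a named list of hosts, and a tagged
// query is an ordinary whole-web search filtered down to that list.
interface Result {
  url: string;
  title: string;
}

// Hypothetical slashtag definitions (site lists invented for illustration).
const slashtags: Record<string, string[]> = {
  ugc: ["yelp.com", "chowhound.com", "tripadvisor.com"],
  yoga: ["yogajournal.com", "yogabasics.com"],
};

// "best ramen /ugc" -> { terms: "best ramen", tag: "ugc" }
function parseQuery(raw: string): { terms: string; tag?: string } {
  const match = raw.match(/^(.*?)\s*\/(\w+)\s*$/);
  return match ? { terms: match[1], tag: match[2] } : { terms: raw };
}

// Keep only results whose host appears in the tag's curated list; with no
// tag (or an unknown one), fall back to the whole-web results.
function filterBySlashtag(results: Result[], tag?: string): Result[] {
  const hosts = tag ? slashtags[tag] : undefined;
  if (!hosts) return results;
  return results.filter((r) =>
    hosts.some((h) => new URL(r.url).hostname.endsWith(h))
  );
}
```

In practice, Blekko appears to apply this kind of restriction against its own crawl at query time rather than post-filtering someone else's results, which is exactly why the full web index discussed next matters.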

It's important to note that Blekko needs to maintain its own full web index to offer this to users and partners as original technology; after all, when you enter a site to add to a list, it has to have been spidered and already included in the master list of sites. One of the key guiding principles Skrenta mentioned to me in the early going was that so few search startups are entering Google's arena of "whole web search" that the field is starved of healthy competition. He's right. To keep costs down, Blekko's index is of course much smaller than Google's. That means it's less comprehensive, but it makes the project feasible and further limits the incentives for spammers.

So many other search startups have been content to set their sights lower, tinkering with mere "features." Yet somewhat disturbingly, in a recent TechCrunch interview, Skrenta's emphasis was on slashtags as a "feature". It's still unclear, then, whether Blekko is a big idea or a small one.

A few points are worth mentioning as far as this core functionality goes.

1. There is some question as to who Blekko is competing with and where its positioning lies. Clearly, the identified shortcomings of mega web indexes are junk, spam, and an inadequate commitment to personalization.

2. To address that, some return to the curated web must be contemplated. Fixed directories had their day and were susceptible to claims of corruption. Moreover, as Steve Thomas of a startup called Moreover once noted, directories and other curated sources suffer from the “fixed taxonomy problem”. What if your own list or own approach would be different? Shouldn’t you be able to follow an “editor” who “slashes” the web in a way that you approve of? Shouldn’t you be able to contribute your own value as an editor to the community, without being stuck on categories you don’t actually love because someone else got there first?

3. The many pre-set slashtags are interesting. They suggest yet another attempt could be made to intelligently curate the web, using people who are good at putting together the lists. Someone would need to admit that it was opinionated categorization, of course. There's something disingenuous about Google News's helpful reminder that the whole thing is "generated by a computer program" with no editorial judgment involved. Presumably Blekko suggests we might (or must inevitably) go in a different direction. Admitting to curation might be a start!

So Blekko is a way to customize the web and shrink the universe. It's a cousin of the first wave of "peer-to-peer" search (OpenCola) and of shared bookmarking services (Backflip, HotLinks).

Here’s the rub at this stage. When I try to use certain slashtags in an intuitive way to find content, it doesn’t work very well.

I built my own slashtag called /ugc to encompass a number of user review sites I like, such as Yelp and Chowhound (and Tripadvisor and…). When I tried to find certain restaurants, no matter what I typed, I still got a mess of irrelevant results. And this is arguably not an advanced search problem, but one of the simpler ones you'll throw at an engine. I'd be way better off just going directly to Yelp or Chow.com, or to their mobile apps. Then I discovered the pre-set slashtag called /reviews. It didn't work any better. At the end of the day, I know how Yelp, Tripadvisor, and HomeStars work when I visit them directly. I don't know how Blekko works, or it currently just doesn't do the job. In many important ways it offers a less rich experience than the individual sources, which provide navigation, community, and more. It's a search engine, of course, so it helps you find stuff. Which, in this day and age, isn't all that impressive to the average person… especially if it does a poor job of that.

Similar problems came up when I tried intuitive-seeming searches using the /vegan slashtag. With a very specific intent, that tag might work. 90% of the time, it's safe to say you'll be just as frustrated as you would be using Google.

One generous reviewer noted that "slashtags aren't perfect". In my opinion, they don't really work as billed, and might never work intuitively. As a user, when I see a slashtag like /liberal or /local or /review, I think of an attribute, not a simple notation that the web universe will be shrunk to a curated list of websites, however small or large. The state of the semantic web is in shambles; there is no question about that. So it is indeed pie-in-the-sky to expect key attributes of pages (such as "this is a consumer review") to be widely and universally available to search engines and users. But I get overexcited when I see a slashtag like /funny or /green. I expect it to work magically. Of course, it doesn't.

Granted, the project is still in beta, and even the individual site searches on the underlying sites are often weak and cluttered until you become a power user. At HomeStars, we quibbled for a couple of years over the best way of handling the fact that 40% or more of queries are on a company name, but generic words in company names often overlap with review content, producing the "wrong" rank order for that user's intent. Those problems are solvable to an extent, so with more curation, more tweaking, more feedback on the beta, and so on, Blekko may yet solve many of them.

I definitely empathize with the challenge, having been involved in a site that often fails even at making the search and navigational experience work for a narrow subset of users. Novices come and go who think they can write code that will "fix" search and make it "work". It does work, for 0.5% of users. Then it breaks for the other 99.5%. Everyone has opinions about rank order, and pretty much everything else to do with search. You don't "solve" it with a couple of pointers; it's incredibly complex and subjective.

Ruminating on all of that, it’s clear that Blekko is an enormous idea, not a small idea or a simple feature. As a result, it opens up enormous cans of worms — as it should.

Other (Googly) ways of assessing relevance and quality are going to eventually need to be built into an engine like Blekko, regardless. Google has a light-year’s worth of head start in gathering data about user behavior, click trails and paths, and other signals that help propel one page or site higher than another. Even if you cut out 95% of Google’s index, or 99%, Google’s proprietary data and ranking technologies would be immensely helpful to ranking. Heck, you can use Google site search for your own site and tap into the same technology.

Meanwhile, Google is moving forward to aggregate content and serve up types of content with certain attributes, drawing from qualifying lists of underlying data providers. One such area is user review aggregation. They’re making a botch of it now, and are heavy-handedly trying to funnel users into their own review app. But they’re iterating fast.

In other words, Google is going hard after solving this type of problem in areas where it really matters. Blekko is experimenting with how the problem itself is posed, and where to take it next. And that is bound to be strikingly different in tone and execution than Google’s method.

Rich Skrenta is right. The field of large scale search engines is in desperate need of innovation and genuine competition. Blekko can act as an incubator for better ideas about search, but perhaps just as importantly, different ideas about search.

Google’s Shuttered Projects: Does it Come Down to Trust?

Monday, August 9th, 2010

Danny Sullivan offers a thoughtful piece “celebrating” and reviewing Google’s failed (closed) projects, accompanied by “celebratory” (rationalizing, explaining) quotes from Google.

Of course, there can be any number of reasons why pilot projects fail to take off. On the flip side, Google's dominance in some of the lines of business that started out in Google Labs, or as otherwise alpha-level functionality, is nothing short of astounding — think Google Maps.

But is there a deeper reason for Google's inability to gain traction with what often seemed to be promising initiatives? With their tech savvy and reach, they should have an advantage in promotion, as well as the deep pockets to see projects through to completion.

One thing that comes to mind is user trust. Is there an unconscious upper threshold, in many users' minds, on how many of a single company's products they will adopt? For business users, is that perception particularly acute? As in: "I love Google Search and YouTube, and Google Maps I can't live without. I'm willing to try Google 411… but when it comes to this latest invention, I think I'm going to take a pass and stick with [independent provider x]."

In the old school of American political science (pluralism, neo-pluralism, Robert Dahl's Who Governs?), the idea was that contemporary America was reasonably democratic even though there were elites (C. Wright Mills). As long as no one elite had total control over all facets of an economic region (for Dahl it was New Haven, CT), there was something analogous to the checks and balances the Founding Fathers of the country envisioned. That concept (wish?) still resonates with many people. In fact, it's baked into the American psyche.

Why wouldn’t people approach their business relationships or technology adoption decisions the same way? (“I’m using five Google products all the time, but something about this new one bothers me. I think I’ll take my business to the other guys for a change.”) It’s hugely different when a startup starts doing well with a similar product. Why? Well, the underdog syndrome has consumers cheering that entity, community, product on to greater heights. When a product is Google-owned, you can no longer cheer for Page and Brin (though the company would like us to still see them all as underdogs). And the developers of Google’s products (even the widely loved GMail and Chrome) are rarely charismatic, or even visible. They are software developers: that’s what they do. They don’t feel the need to be hybrid entrepreneur-evangelist-developers. With a new or acquired Google product (say Aardvark?), it’s not cool to recommend it to friends anymore, because you’re recommending “just another Google product”. You make a conscious or unconscious choice not to work too hard at adopting that thing. The underdog syndrome, too, is baked into the American psyche.

Beyond trust, there is the matter of being top of mind in a category. Foursquare gets to be top of mind for something specific. If the same product is owned by Google, the positioning gets muddied.

So is any of this true? Do new Google initiatives now fail because people don’t trust Google to be a powerful elite in the ecosystem across too many categories? Are users consciously wanting to spread the love, if not to the tiniest garage startups, at least to other elites?

Let’s go through Danny’s list product by product, to find out.

Google Wave: Beyond the product being potentially too advanced and new to achieve quick adoption by the average (especially non-technical) user, there is a huge trust component to sharing internal company messaging across different communication styles, with different objects, and so on. At the risk of oversimplifying, the product plays in the general category of the old-school "office intranet". Those kinds of systems operated in an atmosphere akin to "lockdown," and the service providers took security seriously. Even if similar players in the category, like 37signals, strike some as off-puttingly casual, the fact remains that Wave entered a paranoid market category with a freewheeling, experimental attitude. If people are going to be freewheeling in their work and project environments, they'll do it in ways they've already grown accustomed to (even using Skype or various cobbled-together solutions, or several types of solution at once). Despite the Cluetrain 2.0 style lectures by "how to set up a wiki" evangelists near the end of the Web 2.0 hype phase, it's clear that many aren't adopting. For many, the alternative to going whole hog into a single, consolidated system appears to be using multiple solutions clumsily and sometimes badly; going whole hog, though, feels akin to totalitarianism. Inhibited by trust issues? Yes.

SearchWiki: Did Google ever really seriously intend to keep this feature? Was adoption really the problem, or was this one of the many weak experiments in the Google Search world intended to act as a trial balloon to see how much genuine behavior would result, as opposed to self-dealing and spam? Leaving the feature in wouldn’t hurt anything, but Google must have felt it was a distraction to users. I’m sure they collected enough data to learn a few things during the time it was running. Trust issues? No.

Google Audio Ads: This was a very ambitious initiative. Nothing has changed in Google's long-term thinking: they'd like to broaden the idea of advertising auctions to any conceivable medium. There's a slight snag, though. This works best on sites Google owns. To enable ad auctions on publishing platforms and content Google does not own, Google needs to sign partnerships. The end result of such partnerships may be that the publishers and providers won't partner with Google for prime inventory, only remnant inventory. Take the analogy of a billboard-owning company like CBS or Pattison. The best-case scenario for Google would be to run an auction where advertisers could bid on the inventory digitally, and run the ads digitally. But they're still at the mercy of the providers. Would the providers be resistant to such partnerships? Absolutely. They don't see their interests as aligned with Google's, especially if in the short term their revenues are not threatened by the old model, and especially if they believe they could create their own bidding technology when their world becomes digitized. Google, in short, faces competition from ad networks and other technology platform providers, and resistance from publishers and media sources. They'll continue to study ways to control more of their inventory directly, in partnerships where the other party is eager to try something out long term. Closing Google Audio Ads doesn't mean Google is backing away from the larger strategy. It does point to some of the roadblocks they'll face in attempting to play a middleman or platform role for media assets they do not own. Did trust issues derail this one? In a strange way, yes.

Google Video. Google Video got its butt whipped by YouTube, so Google bought YouTube. The user culture at YouTube was much more vibrant. Google Video got hammered by a high spam to content ratio, in both videos and comments. On YouTube, when it came to the user base, there was a lot of “there there”; on Google Video, it felt like no one was home. YouTube’s culture was edgy and fast-moving. They also moved quickly to become the standard for embedded videos.  As a result of all this, Google Video failed to gain momentum while YouTube momentum was huge. Trust issues? Pretty much. But also, users simply couldn’t strongly identify Google’s brand with video. (Not until after the YouTube acquisition.)

Dodgeball. Why don't we chalk this one up primarily to the "first draft effect"? In the first attempt to identify opportunities in a category, services don't always find their feet. The valuable experience (and fundability) the founders gain from being acquired by Google helps them succeed much bigger on their next venture, especially if it plays in a similar space. On top of that, Foursquare being identifiable as a standalone thing, and coming along at a time when tweets helped to augment what it does, helped its momentum as an agenda-setter in the very space it wanted to define for itself. One part experience, one part positioning. You can't entirely rule out the possibility that users feel better about sharing their location when it's somehow a "cool" service that isn't owned by the big guys. Typical early adopters of mobile and social apps are not exactly careful about their privacy; but, as with everything else, they'd prefer the exploitation of their willingness to trust to at least be spread around a bit. Trust issues? Not entirely, but somewhat.

Jaiku. Google decided not to devote engineering resources to a Twitter clone, which just might indicate that it’s the user base and the data, not the amazing innovation and code base, that hold the key to Twitter’s current market value. In other words, there is room in the world for only one “show about nothing”. Trust issues? Hard to say, but don’t rule them out as a factor in slowing adoption had Jaiku gotten far enough along.

That’s enough examples to make the point. The many advantages Google’s reach, brand, and resources bring to a project can be offset by users’ unspoken “single overlord adoption threshold”. Both impulses will be at work when Google rolls out its mega social networking initiative near the end of this year. And indeed, it’s hard to completely untangle the issue of “I don’t trust you” from the positioning issue of “I trust you to do certain things really well, and I don’t believe you’re actually good at these other things.” These issues are inevitable as a company grows very large. It doesn’t mean anyone at Google is actually untrustworthy. It speaks more to an inherent fact of contemporary customer psychology, where the very big will do anything to mask or soften that bigness in order to appear humble and scrappy.

Google, Caffeinated

Wednesday, June 9th, 2010

Search is in our veins as surely as that morning cup of java is a required kickstart for many of us. Being caffeinated will be the only way to make it through the remainder of the week, with the SMX Advanced and SES Toronto conferences in high gear. (Toronto’s main festivities start tomorrow. For delegates, there is a pre-party at the Charlotte Room tonight at 7:00 p.m.)

And I can only assume that you'd need to be highly caffeinated if you're one of the very few hopping from Seattle to Toronto to attend both conferences.

In keeping with the times, Google’s search index is now fully caffeinated. A new indexing architecture has gone live. Overall, Google’s message is that it promotes “freshness” in search results, but that we shouldn’t misinterpret this to mean it affects the ranking algorithm.

Vanessa Fox is one of the very few commentators to add significant insight on the Caffeine project. In a recent piece, she quotes Google's Matt Cutts:

“It’s important to realize that caffeine is only a change in our indexing architecture. What’s exciting about Caffeine though is that it allows easier annotation of the information stored with documents, and subsequently can unlock the potential of better ranking in the future with those additional signals.”

In SEO, it's always important to be able to read the tea leaves. "Subsequently can unlock the potential of better ranking in the future with those additional signals"? This means, of course, that major algorithmic evolution, and major volatility in search rankings, awaits: no doubt to the benefit of companies that understand how to marry timeless elements with freshness, vibrancy, and sociability. Better "annotation" of pages and elements will mean, over the long term, more accurate rankings.
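To illustrate what "annotation of the information stored with documents" might mean in practice, here is a purely speculative sketch, mine and not Google's: a per-document record whose extra fields become candidate ranking signals. Every field name and weight below is invented for illustration.

```typescript
// Speculative illustration: richer per-document annotations stored at index
// time give a ranking function more signals to draw on later. Nothing here
// reflects Google's actual schema or weights.
interface IndexedDocument {
  url: string;
  fetchedAt: Date;        // freshness
  countries: string[];    // a page can be associated with multiple countries
  inboundLinks: number;
  socialMentions: number; // the "no social, no see" idea
}

// Toy scoring function: the point is that once annotations exist, adding a
// new signal is a small change, not a re-architecture of the index.
function toyScore(doc: IndexedDocument, now: Date): number {
  const ageDays = (now.getTime() - doc.fetchedAt.getTime()) / 86_400_000;
  const freshness = Math.max(0, 30 - ageDays) / 30; // decays to zero after ~30 days
  return (
    freshness +
    Math.log1p(doc.inboundLinks) +
    0.5 * Math.log1p(doc.socialMentions)
  );
}
```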

So of course, the release of Caffeine is a harbinger of a new phase of evolution in Google's means of sorting remarkable, relevant wheat from spammy, counterfeit chaff. Of course, then, ranking and algorithm changes come with this territory. Don't be alarmed, cough cough, but they do!

Even the mention that pages can now be associated with "multiple countries" in Google's architecture ("not that they couldn't be before!") is evidence that the old Google indexing environment (and by extension, the ranking algorithm) wasn't up to the task, and had many more holes than anyone would let on.

In a recent talk, I pointed to the importance of social media savvy as a direct and indirect driver of search visibility. The talk was entitled “No Social, No See” (with apologies to Bob Marley).

Certainly, these trends will spur the development of a range of new third-party tools and agency services. Perhaps most importantly, though, corporate cultures — all corporate cultures, if they want to play Google’s game — will have to evolve from within. Means of providing freshness, vibrancy, and original content will have to be developed — in some cases, from scratch. In other cases, by changing how you think.

These are exciting times.

Say Your Final Prayers, Black Hat SEOs: Guest Post by Dr. Ken Evoy

Tuesday, June 1st, 2010

[Editorial note:  Dr. Ken Evoy is one of the most successful entrepreneurs in our space. Back when a great many small businesses (today's "savvier, more experienced online businesses") were coming onstream, Evoy's company, SiteSell.com, was providing one of the first "bibles" of Internet Marketing, Make Your Site Sell!. Many Internet marketers cut their teeth on that book. SiteSell morphed into a tools-and-process company with Site Build It!, which over 40,000 customers use to build their own independent e-businesses. Recently, Ken and I engaged in some friendly discussion about some of the blackest of black hat tactics still lurking around, and practitioners' ham-fisted attempts to profit from them. Some of them are quite harmful to the ecosystem, but most importantly, after a time they stop working. And the howls and screams begin. We shouldn't listen to those howls and screams. They're about search engines doing their job: protecting consumers and competing businesses from shady, spammy, and often borderline illegal marketing tactics.

--Andrew]

One Day, the SEOers Will Get It: But Not Today

[by Dr. Ken Evoy, Guest Post for Traffick.com]

I am constantly amazed by the "psychology of the herd." Any herd. From Bernie Madoff's super-rich herd to Toronto Maple Leafs fans (who believe that one day they will win the Stanley Cup — sorry, Andrew), the ability of bandwagon psychology to blind us to the obvious is powerful stuff.

The practitioners of Search Engine Optimization ("SEO"; hence "SEOers") form such a herd. This herd is the subject of this article.

SIDEBAR:  I define SEO as the manipulation of search engines to produce search results which rank their Web pages (or those of their clients) “incorrectly high.”  I do not consider pure “white hat” practices to fall within the definition.

Pure white hats do not mislead the engines. The emphasis is on "keeping it real" (e.g., quality content and links, optimal site architecture, etc.). Their practices fall comfortably within search engine guidelines.

Some SEOers fool themselves into thinking they are white hats. Here are two tests to help you find out if your white hat is, in fact, a little gray or worse…

1)  The true white hat NEVER talks about doing anything to “avoid detection by Google.”

2) The truest of white hats can answer "better" to the following question… "If Google announced that it was launching absolutely perfect, human-level artificial intelligence tomorrow, would your sites perform better or worse?" Only the purest of white hats are ready for such an occurrence. Their sites should expect to see more traffic, IF their content is original and of high quality.

With that understanding in hand, when I use the term “SEOer,” I include black, gray and all but the purest-of-white-hat practitioners.

From the early days of keyword stuffing and doorway pages to today, the arms race between SEOers and search engines continues. But it fascinates me that SEOers just… don't… get… it. It's not a fair race.

NO SEO black hat trick has ever survived the propellerheads who work for the search engines. The tricks don't survive because they compromise the quality of search results.

What do you think would happen if an SEOer’s techniques succeed and degrade the quality of the SERPs?  Nothing, at first.  But when the engines find you, and they will, they will hurt you.  Why?  It’s elementary…

If you degrade their quality, surfers are less likely to keep coming back.  Soon, you will be delivering fewer targeted surfers to your REAL customers, the advertisers.  In other words…

If you degrade the quality of their SERPs, you threaten their very business.

Case in point…

I once knew two brilliant black hats.  They were WAY beyond what anyone was doing or writing about.  No one who is SERIOUSLY good at SEO writes about it.

These two guys were math wizards, the types who got 100 in advanced calculus… in kindergarten. Nassim Nicholas Taleb-type brains.

THESE guys, I respected.  I didn’t agree with them, but I respected them. They broke no laws. They shattered every search engine guideline ever published (and a whole bunch more, I’d wager). They made 7 figures/year.

I would ask/tell them… “Why don’t you guys just take it to Wall Street? You’re smart enough and you won’t be facing off against hundreds of Computer Science PhDs whose entire lives revolve around making search better. Even if they don’t target your techniques directly, you will get swept away by the ever-improving ability to detect real, genuine and excellent content. You can’t beat that forever.”

I understand why they thought I was wrong.  No one had ever outsmarted them.

But this was not a 1-on-1 fight. One got knocked out of the box around two years ago. I can't reach, and have not heard from, the other, who used to like to e-mail me about how well he was doing.

It is simply not possible to stay ahead. The sheer complexity of ARTIFICIALLY ranking high goes up, up and up, making it continuously harder to manipulate.

Recently, a very large ring of sploggers was dismantled by Google. This ring consisted of thousands of people creating countless mini money sites. They admitted that they took content from top-ranking sites for high-paying keywords, then altered it just enough that Google could not recognize it. They then created huge fake link networks (often coordinating efforts with each other) to prop the "money page" up in the SERP for that keyword.

The amount of spammy content and links must have been staggering. The major "gurus" had high traffic, and they coached countless thousands of others on how to do it. They taught this system publicly, which shows the degree of invulnerability they felt.

Note that this was not high-tech cutting edge like the black hats who should have gone to Wall Street.  This was low-tech link-bombing, the simple overwhelming of one factor, inbound links, by thousands of people.

Well, it turns out they were not invulnerable. Reality is reality. You can recognize it earlier and adapt to it, or you can ignore it until it’s two inches away from your nose, when you can’t ignore it anymore, and it stomps you.  The sploggers have been self-admittedly stomped, their blogs full of the gory stories of rankings and traffic disappearing.

Bottom line? No matter how cutting edge or low-tech-brute-force your black hat may be, it won’t survive.

I was slammed when I published that message in “The Tao of SBI!” in 2005, explaining why SEO is doomed.  Everything in that book is happening. SEOers are morphing their job descriptions, some now calling it SEM, rather than admit that SEO is dead.

Look at just a few trends since then… universal search, personalized search, Google Suggest, tools to help us search smarter. We all get something a little different from Mother Google, including influences from our social networks, and on and on and on… And what about AI, where Google is doing cutting-edge work?

Can SEOers really keep up and manipulate all of this? As they say in The Sopranos… fuhgeddaboudit.

And still, SEOers don’t get that it will soon be impossible to manipulate rankings.

There's good news for all of us, though. Lao-Tzu (604 BC – 531 BC) said the following in the "Tao Te Ching", 2,500+ years before I said it in "The Tao Of SBI!" in 2005 AD (so I can't take the credit)…

One who wins the world does so by not meddling with it.

One who meddles with the world loses it.

The good news?  In the midst of the increasing complexity emerges the simplicity of how to generate free, targeted search engine traffic…

Add Value.  Real Content. What people want. Love what you do, but make sure you can make some money at it (if that’s important to you — if it’s not, it’s a hobby but you likely still want an audience).

Sure, get the on-page stuff "right enough" to let Google know the page is about "Anguilla beaches" and not "Chinese restaurants." Secure some quality inbound links, not article-marketing spam or other link tricks (the next bad group to howl about Google depriving them of a living)…

Watch your snowball grow as Google sends you more traffic because…

You are delivering exactly what they want. As you deliver more of it, traffic increases further. Google tracks behavior and sees that visitors like your site, improving your rankings and increasing your traffic. And round it goes, like a snowball gathering momentum.

What started this virtuous circle of success?  You gave Google exactly what their searchers want, which is exactly what Google wants.

And that takes me way back to when I wrote my first book, Make Your Site SELL! in 1997.  Although I had been successful with black hat tricks when “working just for myself,” the critical question I had to answer for future readers of MYSS! was…

“Is this just a game, like counting cards in Vegas?”

“Or are search engines going to turn out to be real important for the folks I’m writing this book for?”

If you “play” only for yourself, you can afford to play a game if that is what you want.  But if you are making business recommendations to people who will act upon them, you can only make one recommendation…

“Stop playing games against the engines and work WITH them.  Give them what they want and you will never have a sleepless night.”

Life becomes so simple and worry-free.

[Editorial note: Ken, I am a Montreal Canadiens fan. -- Andrew]

Brief bio of Ken Evoy:

SiteSell.com sold over 100,000 copies of Make Your Site SELL!, which received high critical acclaim in the late '90s. The company self-published several other successful "Make Your ____ Sell" books. They stopped when Dr. Ken Evoy realized that most people need more than a book to build a business online.

As he says, “They need the step-by-step and the tools to execute.”  SiteSell’s flagship product, SBI!, is based upon simple, “keep it real” principles.  Ken calls SBI! “the living test tube that proved up the decision to give humans and search engines what they want.” But even that, he says, is only one small part of the bigger picture of building an e-business.

Google’s Browser-Based Data Collection Opt-Out: Staying Ahead of Regulation

Tuesday, May 25th, 2010

Google is rolling out an opt-out browser add-on to allow any web user to block their data from being shared with websites using Google Analytics. This goes beyond the requisite privacy policies and disclosure on websites using Google tracking tools, and seems to get ahead of most possible forms of policy regulation on user privacy. In other words, Google has chosen the rights of users over the wishes of marketers. They have a long history of doing so. It’s not purely selfless: it’s also turned out to be the right stance to take on a wide variety of issues. Despite whatever controversies Google may have faced in recent years, overall their reputation has been along the lines of “pretty mature and respectful, for a company of this size”.

What? Google interested in privacy?

The casual observer might not associate Google with a forward-thinking approach to privacy, given recent controversies and given that you inherently give up a significant chunk of your personal privacy if you use many Google tools and services. That’s by dint of their overwhelming scale and scope in many walks of digital life; many of those worlds overlap and collide.

But for years, discussions of user data and opt-outs were virtually nonexistent in the traditional marketing world, and at some of Google’s competitors. That world has made rumblings of someday “pushing” the Googles and Yahoos of the world into a more marketer-friendly stance when it comes to sharing user data.

Google pushed back in many small ways. For example, using the Google AdWords Conversion Tracker once required advertisers to display a prominent logo that led to more information about data collection. Not exactly the kind of thing you'd want prospective purchasers to pore over while perusing your landing page, but Google made advertisers do it anyway. (That requirement was later relaxed somewhat.)

Compared with competitors and mainstream thinking in direct marketing, Google didn't even budge, didn't even dabble, in areas like demographic targeting in its core offering to advertisers. That's begun to change, of course. But it took them a lot longer than many would have expected to loosen up on that front.

Again, I’m not saying that your privacy is safe in a Google-centric world. But Google should be rightly recognized for taking a strong position here and going beyond what might be forced on them. If you use a major browser (Chrome, IE, or Firefox, with Safari coming soon), you can simply set it so no website ever gets your session data to use with Google Analytics. That data loss is a major threat to marketers trying to attribute their spending efforts, or to otherwise make sense of user behavior.
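As a side note for site owners: separate from the user-side browser add-on, Google's Analytics documentation also describes a page-level opt-out flag that a site can set before its tracking code runs. Here is a minimal sketch; the property ID is a placeholder, and the stored-preference check is just one hypothetical way a site might trigger it.

```typescript
// Page-level Google Analytics opt-out (distinct from the browser add-on):
// setting this window flag before the tracking snippet runs tells the GA
// script not to collect or send data for that property.
const GA_PROPERTY_ID = "UA-XXXXXX-Y"; // placeholder; use your own property ID

function optOutOfGoogleAnalytics(): void {
  (window as any)["ga-disable-" + GA_PROPERTY_ID] = true;
}

// Hypothetical trigger: honor a visitor preference saved earlier on the site.
if (window.localStorage.getItem("analytics-opt-out") === "true") {
  optOutOfGoogleAnalytics();
}
```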

Now Google knows that probably only 2% max of users will bother, so at this stage it’s the same as various security settings in the browsers that 90% of users don’t play with. But if that’s the case, then why don’t other analytics vendors come out with a similar plug-in? The answer seems clear: Google has made a point of always getting ahead of the curve on these issues, so they don’t run into a backlash. Other vendors have generally taken a more short-sighted (or simply unethical) view of users’ privacy rights and concerns.

Is a self-interested other shoe likely to drop? Possibly. Once the opt-out functionality is firmly in place, what’s to say that Google won’t move forward more aggressively with demographic and behavioral targeting? It’s quite possible that in taking care of users and regulatory concerns first, Google can more confidently offer more data to marketers. It will be interesting to watch.

And here I almost got through this whole post without saying “Facebook”.

 

