
Search Engine Optimisation In Personalised SERPs

Technical aspects of what we learnt


Here at Hull SEO, there didn't seem to be much evidence that the computer OS or browser type had any significant role in the re-ranking processes or the mean averages. As mentioned earlier, further testing could include isolating aspects such as the Google Toolbar being installed, the state of JavaScript & so forth.

There was also an interesting finding in that the lone Safari-on-Mac setup had the cleanest data, meaning that when they looked at the mean average rankings, this setup had the rankings that best represented the average ranking. Safari has been known not to be compatible with Google personalized search, which may have been relevant.

What they learned so far

At this point there aren't likely any major effects relating to the technical setup of the searcher in question.

How much flux is there in the rankings?

There was certainly considerable movement in the rankings, to the extent that no two result sets were the same. Sometimes there were minor adjustments, & other times movements from 9th up to 2nd, which is a healthy move considering the position above the fold.

What is worth noting is that this was not noticeably more pronounced in profiles with personalized search ON than when it was disabled; re-ranking existed with & without personalized search.

Ultimately, while the data showed a fair amount of re-ranking, there was not enough to truly reshape one's SEO programs or reporting. That is to say, those potential behavioural re-rankings are not generating a level of flux that inhibits valuations. Not that those behavioural signals aren't having a pre-delivery ranking effect; it's simply that they don't seem to be playing a major role in re-ranking by personalized search or query analysis.
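As a rough illustration of how this kind of flux could be quantified, the following Python sketch (with invented result sets, not the test data) compares two snapshots of the same SERP & reports how far each URL moved:

# Hypothetical example: measure re-ranking "flux" between two snapshots of
# the same query (e.g. personalized search ON vs OFF, or two data centers).

def rank_flux(serp_a, serp_b):
    """Return {url: (old_pos, new_pos, delta)} for URLs present in both lists."""
    pos_a = {url: i + 1 for i, url in enumerate(serp_a)}
    pos_b = {url: i + 1 for i, url in enumerate(serp_b)}
    return {url: (pos_a[url], pos_b[url], pos_b[url] - pos_a[url])
            for url in pos_a if url in pos_b}

# Two illustrative top-10 result sets (the URLs are made up).
baseline = [f"site{i}.example.com" for i in range(1, 11)]
personalized = ["site1.example.com", "site9.example.com", "site2.example.com",
                "site3.example.com", "site5.example.com", "site4.example.com",
                "site6.example.com", "site8.example.com", "site10.example.com",
                "site7.example.com"]

for url, (old, new, delta) in sorted(rank_flux(baseline, personalized).items(),
                                     key=lambda kv: kv[1][0]):
    print(f"{url}: {old} -> {new} ({delta:+d})")

A move such as the 9th-to-2nd jump mentioned above would show up here as a delta of -7.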

Top canines & usual suspects - There was a tendency for the top 10 results to be re-ranked over complete

upheaval across the top 20 placing. For the most part the first page rankings remained consistent as a group in the majority

of query spaces & there were nominal placement of URLs not found across all the results.

This was even more evident in the top 3-4 placed URLs for most of the queries; the top results were often unchanged or merely interchanged. Given the tendencies noted at this point, there is little evidence of severe re-rankings such as pages ranking 20th moving in & out of the top 10.

They can also note that the weaker listings in the top 10 are the ones most likely to be moved out of the top 10 when any re-ranking beyond the usual suspects (the common URLs) occurs. This means they are still interested in ranking in the top 4 on a mean average (querying a set of DCs for ranking reports), as those positions are seldom, if ever, dropped from the top 10 in re-ranking scenarios.
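To make the "usual suspects" idea concrete, a sketch along these lines (again with made-up result sets) could identify the URLs common to every top-10 sample for a query & flag the weaker listings that only appear in some of them:

# Hypothetical sketch: find the "usual suspects" (URLs present in every
# top-10 sample for a query) & the weaker listings that drift in & out.

from collections import Counter

samples = [
    ["a.example", "b.example", "c.example", "d.example", "e.example",
     "f.example", "g.example", "h.example", "i.example", "j.example"],
    ["a.example", "c.example", "b.example", "d.example", "f.example",
     "e.example", "g.example", "i.example", "k.example", "h.example"],
    ["b.example", "a.example", "c.example", "d.example", "e.example",
     "g.example", "f.example", "h.example", "j.example", "l.example"],
]

usual_suspects = set(samples[0]).intersection(*map(set, samples[1:]))
appearances = Counter(url for serp in samples for url in serp)

print("Common to all samples:", sorted(usual_suspects))
print("Only in some samples: ",
      sorted(url for url, count in appearances.items() if count < len(samples)))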

What is affecting the rankings (& what are the effects)?

Considering that the effects of having personalized search turned on were often minimal, there seem to be other factors at play here - some of the causation could be related to:

> Behavioural - data other than search history could also be a factor, as previous searches made prior to the experiments, logged in or not, could have an effect (query analysis comes to mind). In future rounds, ensuring that respondents restarted their computers/browsers & started new search sessions would limit this effect better.

There were also instances where personalization-enabled results & then paused-state results (for the same user) showed considerable retention of personalized results (or at least of the ranking anomalies). This could insinuate a level of non-search-history-related signals as well. Another consideration is that they haven't yet looked into the strongest-performing URLs from the queries to establish the relative competitiveness of the query spaces; more competitive search terms may show greater (or lesser) levels of re-ranking.

& that is this set of information - keep in mind these are generic informational searches. None of the queries tested

involved a high level of QDF (query deserves freshness) nor geographic triggers. They do know that these factors can easily

generate a higher level of SERP re-ranking & flux.

Personalization seems to have the greatest effect on the weakest URLs in the result sets. The ranking anomalies they noted in the data were often found in both the enabled & the disabled personalized search settings. Generally speaking, any personalization re-ranking was minimal, & the dampening effects, while evident, seem to be relatively benign in nature.

How can they make the most of it?

As far as tracking SEO projects is concerned, I would be wary of any single data set & be sure to try to isolate Google data centers when doing ranking/competitive analysis, & use a mean average as your primary indicator. This also highlights the need to geographically target data centers & ensure strong rankings across your target markets.
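A minimal sketch of that kind of reporting, assuming the per-data-center positions have already been collected by whatever rank-checking tool is in use (the keywords & numbers below are invented), might look like this:

# Hypothetical sketch: combine rankings observed across several Google data
# centers into a single mean average position per keyword.

from statistics import mean

# keyword -> {data center label: observed position}
rankings = {
    "blue widgets":   {"dc-east": 3, "dc-west": 4, "dc-eu": 3, "dc-apac": 5},
    "widget reviews": {"dc-east": 8, "dc-west": 7, "dc-eu": 9, "dc-apac": 8},
}

for keyword, by_dc in rankings.items():
    positions = list(by_dc.values())
    print(f"{keyword}: mean {mean(positions):.1f} "
          f"(best {min(positions)}, worst {max(positions)})")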

While they only looked at a handful of international data, searching on the Google.com domain showed no major re-rankings beyond what they were seeing elsewhere. While slightly more movement was evident among international respondents, it was not enough to ultimately skew SEO efforts.

Summary - Adapting the SEO plan

At this point there may be evidence to warrant further inquiry, but not enough to abandon rankings as an indicator in your SEO programs. If anything, there is evidence that makes a top ranking (1-4) more valuable than ever. These positions were shown to be the strongest, with the least amount of movement due to re-ranking.

Above the fold still holds value

What is also important is how one valuates these rankings. Identifying target markets & getting mean search ranking data from these locales is an important aspect for consideration. This is because any deviations from re-ranking are relatively stable, & setting a baseline from target (geographic) locations should be enough to gauge efficacy in targeting (the rest can be established through analytics).
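One way such a geographic baseline could be kept, sketched here with made-up locales & positions, is to record a mean position per target market & flag later readings that drift beyond an agreed tolerance:

# Hypothetical sketch: keep a per-locale baseline position for each keyword
# & flag later observations that drift beyond an agreed tolerance.

baseline = {  # keyword -> {locale: baseline mean position}
    "blue widgets": {"UK": 3.0, "US": 5.0, "CA": 4.0},
}
TOLERANCE = 2.0  # how many positions of drift we still treat as "stable"

def check(keyword, locale, observed):
    base = baseline[keyword][locale]
    drift = observed - base
    status = "OK" if abs(drift) <= TOLERANCE else "INVESTIGATE"
    print(f"{keyword} [{locale}]: baseline {base:.1f}, observed {observed}, "
          f"drift {drift:+.1f} -> {status}")

check("blue widgets", "UK", 4)   # within tolerance
check("blue widgets", "US", 9)   # worth a closer look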

The core take-aways from this round are:

> No two SERPs were the same (personalization ON or not)

> Personalization re-rankings are minimal (for informational queries)

> Establish geographic baselines (or even segment the data)

> Top 4 positions are primary targets

> Top 10 are secondary targets

> Top 20 may be leveraged through behavioural optimization

Personalization re-rankings are minimal - from what they could see (using informational queries), the effects of personalization were minimal. This may be due to a lack of history around the queries used, but they did use terms loosely related to topics the respondents would naturally be searching on. Even factoring in room for error, there is no evidence to show that personalization is drastically changing the ranking landscape.

Establish geographic baselines - obviously this applies to the core/secondary terms; tracking the long tail this way wouldn't be cost effective. Generate the terms that become the baselines; valuating long-tail terms should ultimately be handled through analytics data.

Top 4 positions are primary targets - the data showed that the top rankings 1-4 (above the fold) are more stable, as far as being re-ranked is concerned, than rankings 5-10. This means not only is ranking analysis still a viable SEO program metric, but in all likelihood these top rankings have more value than ever. They also seem to have stronger resistance to personalization/ranking anomalies.
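That difference in stability could be checked with something like the sketch below, which uses invented observations to compare how much positions 1-4 move versus positions 5-10 across repeated samples of the same query:

# Hypothetical sketch: compare the spread of positions 1-4 against positions
# 5-10 across repeated samples of the same query.

from statistics import pstdev

# url -> observed positions across samples (the numbers are invented)
observations = {
    "strong1.example": [1, 1, 2, 1],
    "strong2.example": [2, 3, 1, 2],
    "mid1.example":    [5, 7, 6, 9],
    "mid2.example":    [8, 6, 10, 7],
}

def band(position):
    return "1-4" if position <= 4 else "5-10"

for url, positions in observations.items():
    print(f"{url}: positions {positions}, spread {pstdev(positions):.2f}, "
          f"bands seen {sorted({band(p) for p in positions})}")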

Top 10 are secondary targets - as noted, there is still value to be had in top 10 rankings, as they generally remained within the top 10, merely re-ranked throughout the data sets. That being said, when re-ranking outside of the top 10 occurred, it was more often positions 5-10 that were the likely candidates for demotion. If you aren't in the top 4, then ensuring your page is one of the stronger listings will better ensure that potential personalization/re-ranking doesn't affect your listing.

Top 20 may be leveraged - while they haven't conducted research into the top 20 listings at this time, they can extrapolate within reason that the stronger 11th-20th ranked pages would have an obvious likelihood of migrating into the top 10 in personalized search situations. If you can't break the top 10, be a strong contender to ensure the best chance of capitalizing on potential opportunities.

by: WillLewis10