This really was one of the best sessions I attended. [Rae Hoffman](http://www.sugarrae.com/) and [Roger Montti](http://www.martinibuster.com/), both long-time WebmasterWorld members and moderators, gave presentations with more information than I could even keep up with.
Rae (who also provides [seo consulting services](http://www.sugarrae.com/blog/consulting/)) was kind enough to [publicly link to her presentation](http://www.sugarrae.com/pubconvegas2006raehoffman.ppt) (PowerPoint file) on [Delegating Link Development](http://www.sugarrae.com/blog/back-from-vegas-pubcon-download-the-presentation/). That presentation is worth reading through a few times to absorb all the information. A couple of big highlights of her talk for me:
* Training a link developer is not a light task. But given the right person, the right training, and efficient management and monitoring, an in-house hire will quickly do better than outsourcing can.
* It is easier to train someone who is inexperienced with marketing but familiar with the internet than someone experienced with marketing but lacking experience with the internet.
Her list of interview questions for a potential in-house link monkey was also quite helpful:
1. What is your favorite search engine?
2. What is a blog? A message board? A link?
3. Three favorite websites?
4. Do you use IM?
(And Rae made it clear that IM use is a positive thing as an indicator of computer and internet familiarity.)
She had even more questions in her presentation (linked above) which I won’t duplicate here. She also recommended having a computer handy and asking the applicant to perform a specific task on the internet, such as “Can you find me a {Brand} {Model} digital camera that I can actually buy?” to see if they can tell the difference between an e-commerce site and an affiliate/content site.
Joel Lesser of [LinksManager.com](http://blog.linksmanager.com/?cat=2) spoke mostly about reciprocal linking. This is a topic that I felt (and he confirmed) was mostly taboo in the SEO community. He recognized that full duplex linking schemes with no editorial discretion will cause problems with search engines but also made a case for limited, on-topic reciprocal linking.
Joel, correctly, points out that the nature of the internet allows for the organic growth of reciprocal linking, and he argues that the search engines therefore can’t completely devalue these links. Unfortunately, his proof that recips still have value is that “I can’t tell you what the search engines do or don’t do.” This strikes me as dodging the burden of proof, a logical tactic that sets off “scam alarms” in my head. The whole “no one can prove that recips are bad” argument struck me as a [Russell’s teapot](http://en.wikipedia.org/wiki/Russell%27s_teapot) approach to something that can actually be independently tested to a reasonable degree of satisfaction.
Ignoring that one aspect of his presentation, however, Joel did bring back to the table the non-search engine benefits of relevant reciprocal links:
* Cost effective
* Provides qualified traffic independent of search engines
* Provides value to your users by connecting them to other relevant resources
Another interesting aspect of his presentation was the idea of alternate forms of publishing reciprocal links. Most of these alternatives blend the links as sidebar resources or as contextual links inside of your content.
Finally, he wrapped up with some link request etiquette.
* Use link request forms whenever available
* Don’t send out link requests longer than 3 sentences
* Don’t require that links be placed on a page with *x* PageRank.
Roger Montti, who (surprise, surprise) also provides [link development services](http://www.martinibuster.com/), gave a presentation full of alternative link building ideas. If this blog entry weren’t serving as my own personal notes, I would think twice about posting a lot of his ideas. :)
He discussed things to look for when you are going to be buying a text link (or even a banner link) from a site:
1. **Relevance**
2. No mention of PageRank
3. No ads for non-relevant sites
4. Year-long contracts
Smaller magazines that publish offline often have a poorly developed online presence, so they will be happy to sell a banner ad or other link for relatively low dollar amounts.
Buying websites is a good way to accumulate their links and direct that link popularity to your own site. Older sites that are inactive or under-performing are good candidates for a low-dollar (less than $1,000) purchase.
Site of the month/week/day/second sites (and also newsletters) are handy for getting some traffic and links. If they do not permanently archive the links, you may still see the readers of those sites re-publishing links to your site if it’s any good. Ideally you would find a “site of the whatever” site that is specifically focused on your niche.
Sponsorships of sites, groups, and events all provide opportunities for static links, often from .org or .edu domains. Check your competitors’ backlinks for .org and .edu domains to see how they are getting those links.
He also discussed creating satellite informational sites as a means of acquiring inbound links. I agree that the method still works, but I have anxiety about its long-term value if you take any shortcuts on quality with those sites.
Although it seemed a bit of a catch-all session (and mostly named to give each presenter a nod), the [Feeds, Blogs, News, and Social Search](http://pubcon.com/sessions.cgi?action=view&record=61) session was quite informative.
[Niall Kennedy](http://www.niallkennedy.com/) presented, mostly, a basic overview of feeds, defining them for attendees who were *completely* new to the world of feeds. The depth in which he covered the topic, however, was impressive. I think Brett would do well to have Niall kick off any presentation on feeds/content syndication.
There were a few gems (for me) from his presentation. First, a helpful code snippet for linking to alternate language versions of the same document. This code can be read by search engine spiders and by some browsers (like Firefox).
Place in the `<head>` of the document a `link` element for each translation, something along these lines:

```html
<link rel="alternate" hreflang="es" href="http://www.example.com/es/" />
```
Niall reminded the attendees of the branding value of including a logo in your feed and the importance of validating your feed using a service like [feedvalidator.org](http://feedvalidator.org) or [installing a feed validator locally](http://sourceforge.net/projects/feedvalidator). He provided a list of sites to which you should publish/ping, but personally I think [ping-o-matic](http://pingomatic.com/) does the trick for everything.
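For the curious, a ping is just a tiny XML-RPC call. Here is a minimal sketch of the standard `weblogUpdates.ping` payload, POSTed to Ping-O-Matic’s XML-RPC endpoint (the blog name and URL below are placeholders):

```xml
<?xml version="1.0"?>
<!-- POST to http://rpc.pingomatic.com/ with Content-Type: text/xml
     to announce that your site has new content -->
<methodCall>
  <methodName>weblogUpdates.ping</methodName>
  <params>
    <!-- first parameter: your site's name (placeholder) -->
    <param><value>Example Blog</value></param>
    <!-- second parameter: your site's URL (placeholder) -->
    <param><value>http://www.example.com/</value></param>
  </params>
</methodCall>
```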
Don’t forget to subscribe to your own feed in the major online feed readers out there, including [My Yahoo!](http://my.yahoo.com/), [Google Reader](http://www.google.com/reader), and [Bloglines](http://www.bloglines.com/). Take advantage of any tagging/categorization/rating features of those services to flesh out your own feed.
Niall also made a point of discouraging the creation of new tags, attributes, and categories for your feed. Take advantage of existing standards and extended vocabularies, as that increases the chance of broad support in the various readers and aggregators.
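As an illustration (my example, not Niall’s): rather than inventing a custom author element, an RSS 2.0 feed can borrow the established Dublin Core vocabulary with a namespace declaration:

```xml
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Example Blog</title>
    <link>http://www.example.com/</link>
    <description>Placeholder feed</description>
    <item>
      <title>Example post</title>
      <!-- dc:creator is the widely supported Dublin Core element,
           preferable to a homegrown author tag -->
      <dc:creator>J. Doe</dc:creator>
    </item>
  </channel>
</rss>
```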
[Rick Klau](http://www.rklau.com/tins/), VP of Business Development over at [FeedBurner](http://www.feedburner.com/), gave a presentation on trends they are seeing at FeedBurner and their own impressive growth.
One of the first, and most important, points he made is that it is *vital* for feed owners to make sure that their feeds can be auto-discovered in all the major browsers. IE7 and Firefox 2 (ahem, and Safari) all have built-in readers, so it’s also important to check your feed in all of those apps.
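Auto-discovery just means advertising the feed with a `link` element in your page’s `<head>`. A minimal sketch (the feed title and URL here are placeholders):

```html
<!-- Browsers and crawlers find the feed via this link element;
     use type="application/atom+xml" for an Atom feed -->
<link rel="alternate" type="application/rss+xml"
      title="Example Blog RSS Feed"
      href="http://www.example.com/feed.xml" />
```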
When considering using a third party to host your feed, you should map a subdomain (like feeds.example.com) to it so that you retain control over your feed’s address. Although you might expect a FeedBurner representative to suggest otherwise, Rick made it clear that they have no intention of locking users into their service and that they believe firmly that publishers should maintain control regardless of which service they choose.
[Owen Byrne](http://www.digg.com/about/owen), co-founder of Digg, spoke about the state of Digg and a little of their history.
Some of the scalability lessons he has learned with Digg are invaluable and reveal his software engineering background:
* Avoid premature optimization
Get the code out there, then see what needs to be optimized.
* Cache, cache, and more cache
I take this to mean that they do a lot of writing to disk instead of hammering the database for every single page load. He also mentioned [memcached](http://en.wikipedia.org/wiki/Memcached) in this context.
* Hardware is cheap, downtime is not
Normally this argument goes “Hardware is cheap, developers are expensive,” but I prefer weighing it against the opportunity cost of downtime. :)
* Lots of servers – spares, monitoring, testing, development
He said (if I recall correctly, I didn’t make a note of this) that Digg has something like 90 servers but that many of those are spares or development mirrors of the production servers.
Chris Tolles, VP of marketing at [topix.net](http://www.topix.net/), gave some background about [his work at the ODP](http://dmoz.org/profiles/tolles.html) and his motivations for building an algorithmically edited news aggregator.
Topix.net provides over 50,000 feeds and ranks very well for locality + “news” searches. Chris attributes this to the “freshness” of their content: even though they are republishing other sites’ content, they are doing it so quickly and frequently that the search engines love it.
Topix saw its growth stagnate somewhat. In response, they added a message board for every news item and locality. You can see a [U.S. map of forum activity on topix.net here](http://www.topix.net/forum/geo) (broken in Safari, use Firefox).
On the subject of getting feed readers, aggregators, and search engines to recognize updated content correctly, Chris recommends the appropriate use of the feed’s date tags (presumably `<pubDate>` and `<lastBuildDate>` in RSS 2.0) to mark changes.
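Assuming those RSS 2.0 elements are the ones he meant, using them correctly just means keeping the dates honest (all values below are placeholders):

```xml
<channel>
  <title>Example News Feed</title>
  <!-- lastBuildDate: the last time any content in the channel changed -->
  <lastBuildDate>Thu, 16 Nov 2006 09:00:00 GMT</lastBuildDate>
  <item>
    <title>Example story</title>
    <!-- pubDate: when this particular item was published -->
    <pubDate>Thu, 16 Nov 2006 08:30:00 GMT</pubDate>
  </item>
</channel>
```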
Finally, some of the “value added” by topix.net is simply that they are categorizing the content of others. That small bit of differentiation makes it possible for them to rank as well as, or better than, the original content.
I’ll be publishing my notes from all the PubCon sessions (at least, those for which my notes are understandable). The keynote this morning by [Guy Kawasaki](http://blog.guykawasaki.com/) was excellent.
The Art of Innovation
1. Make meaning
Make good things, end bad things. Make people more productive.
2. Make mantra
Guiding light that always stays consistent. 2 or 3 words. NOT a mission statement.
Wendy’s: “Healthy Fast Food”
Nike: “Authentic Athletic Performance”
FedEx: “Peace of Mind”
“Mission statements are bullshit.”
3. Jump to the next curve
No ice harvester became an ice maker, and no ice maker became a refrigerator company.
4. Roll the DICEE
D (Depth): Great products grow with you.
I (Intelligent): Three different sizes of batteries.
C (Complete): Pre-sales, sales, after-sales.
E (Elegant): The Nano.
E (Emotive): Generate emotions.
5. Don’t worry, be crappy
Version 1 of a technology means you never have to say sorry. We ship, then we test. Ship revolutionary stuff with elements of crap to it.
6. Polarize People
“Toyota! For all the money you have, why did you buy a car designer that was fired by Volvo?” (In reference to the Scion xB.)
7. Let a hundred flowers blossom
Customers you weren’t expecting buying your product in large quantities is not a problem. Go to the people who are buying it and ask them, “Why are you buying our product?” Then give them more reasons along those lines.
8. Churn, baby, churn
9. Niche thyself
(Chart with “Value to the customer” along the bottom and “Uniqueness” on the left.)
High value, not unique: compete on price.
No value, very unique: you’re stupid.
No value, not unique: dotcom.
High value, very unique: the target. (Fandango is his example.)
10. Follow the 10/20/30 Rule
“I pitch therefore I am.”
10 slides
20 minutes
30-point font