> I've always found the spidering of one's own site to be highly
> ineffective and prone to error. Some of the problems I've seen (and
> I've done 10 page to 2+ million page sites):
>
> - navigation/copyright/boilerplate text frequently ends up prominent
> in the result or used in query evaluation ranking (such content should
> be excluded)

Better search engines give you a way around this. For example, Ultraseek
has stop/start tags you can place around such content so it doesn't
index it. Heck, even Fluid Dynamics Search Engine, a $40 Perl app, does
that.

Somebody who's serious about search needs to spend some time looking at
the architecture of their individual documents to ensure users get to
just the information they need. This goes from excluding boilerplate to
making sure documents have meaningful titles and descriptions (how many
sites have you gone to where a search results in 400 documents, all of
which seem to have the same title and generic description?).

> - heavy load needs to be managed tightly so as to not conflict with
> customers of the site (how many customers pull every page daily?);

Solution: run the search engine on a separate box. CPUs are cheap ...

> - spiders generally cannot tell when a page is no longer reachable via
> the site navigation (forgotten content still searchable);

Then maybe this content should be pulled off the site, in which case the
search engine would drop it after X number of attempts.

> You could put an event driven indexer into your content publishing
> system. Base it on the Verity, Autonomy, or Lucene engines (or
> something else), and construct the 'document' that they see to contain
> the specific content minus all of the wrapper. You will only be
> indexing/removing the content that has changed moments after it has
> been changed.

This sort of leads into the old dynamic/static publishing question -
this model only works if all your content is dynamic.
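To make the two ideas above concrete - stripping marked-off boilerplate
before indexing, and indexing only documents that have actually changed -
here is a minimal Python sketch. The `<!--stopindex-->`/`<!--startindex-->`
markers are modeled on the Ultraseek-style tags mentioned above (exact tag
names vary by engine), and `EventDrivenIndexer` is a toy in-memory stand-in
for a real engine like Lucene, not anyone's actual API:

```python
import re

# Hypothetical stop/start markers; real engines spell these differently.
STOP, START = "<!--stopindex-->", "<!--startindex-->"
_MARKED = re.compile(re.escape(STOP) + r".*?" + re.escape(START), re.DOTALL)

def strip_boilerplate(html: str) -> str:
    """Drop any region fenced off by stop/start markers before indexing."""
    return _MARKED.sub(" ", html)

class EventDrivenIndexer:
    """Toy index: the CMS calls publish()/unpublish() instead of a crawler
    re-fetching the whole site every night."""

    def __init__(self):
        self.docs = {}    # path -> indexable text (boilerplate removed)
        self.mtimes = {}  # path -> last-seen modification timestamp

    def publish(self, path: str, html: str, mtime: float) -> bool:
        # Skip unchanged documents, as a timestamp-aware crawler would.
        if self.mtimes.get(path) == mtime:
            return False
        self.docs[path] = strip_boilerplate(html)
        self.mtimes[path] = mtime
        return True

    def unpublish(self, path: str) -> None:
        # The CMS tells us a page is gone; no guessing from dead links.
        self.docs.pop(path, None)
        self.mtimes.pop(path, None)

    def search(self, term: str):
        return [p for p, text in self.docs.items() if term in text.lower()]
```

A CMS would call `publish()` from its save hook and `unpublish()` on
delete, so nav text and copyright lines never enter the index and
forgotten pages drop out the moment they are pulled.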
Although we've never played with it, Ultraseek does have an API that,
theoretically, would let your CMS notify the search engine of a new
document as soon as it's published (rather than waiting for, say, your
nightly crawl).

> Think of it the same way as publishing your site. Do you publish _all_
> the contents every night (and _only_ every night), throwing out the
> old and completely replacing it? Or do you publish only the documents
> that have changed, whenever you need to?

Good point. How you implement search should probably depend on how you
publish your pages. But Ultraseek doesn't re-index every page every
night - it only grabs files with a new or changed timestamp.

No, I don't work for Verity - I just like my Ultraseek :-).

Adam Gaffin
Executive Editor, Network World Fusion
[EMAIL-REMOVED] / (508) 490-6433 / http://www.

"I programmed my robotic dog to bite the guy who delivers the
electronic mail." -- Kibo