I’m working with a unit on campus who have some stimulating challenges ahead of them. UBC’s Careers Online is one of the university’s most visited websites (about 1,000 distinct visits per day from students). Most of the visitors who log on to the site are after one thing — job postings — and tend to avoid learning anything about effective strategies for career development, the application process, or professional conduct.
The Career Services unit has redeveloped the site to emphasize the teachable moments that present themselves during the job search process, cleverly deploying motivational hooks to lure visitors into exploring the literally thousands of useful links carefully positioned throughout the site.
While they constitute a tremendous resource for Career Services’ clients, those links are a potential nightmare in terms of maintenance. Dead links will pile up and spread like rust on a Ford Pinto, with grave effects on the usefulness of the pages. The unit also needs a means of gathering, categorizing and assessing new resources as they become available.
I worked with CS when they were developing the site — mainly by providing them with weblog and wiki spaces, a wee bit of training and support, and getting the hell out of the way. I’ve been asked to do something similar for the upkeep and maintenance phase… I’m fairly clear on a strategy for collective research and aggregation (an RSS and social bookmark frenzy). But I have yet to identify a link checking package. I’ve come across a few open source tools, but they are abandoned 0.4 builds (or lower), which makes me nervous (unless I have evidence that they work effectively). I’ve seen a few commercial packages with nice feature sets, and the prices aren’t too prohibitive, but again I’d be more confident if I knew that these systems worked well.
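For what it’s worth, the core of what a link checker does is not much code. Here’s a minimal sketch in Python — purely illustrative, not any of the packages mentioned above, and the sample URL list is a made-up stand-in for links harvested from the site’s pages:

```python
# Minimal link-checker sketch: given a list of URLs, report any that
# fail to respond or that return an HTTP error code.
from urllib.request import Request, urlopen
from urllib.error import URLError, HTTPError

def check_link(url, timeout=10):
    """Return (url, status): status is an HTTP code or an error string."""
    # HEAD keeps things light -- we only want the status, not the body.
    req = Request(url, method="HEAD",
                  headers={"User-Agent": "link-checker-sketch/0.1"})
    try:
        with urlopen(req, timeout=timeout) as resp:
            return url, resp.status
    except HTTPError as e:
        return url, e.code            # e.g. 404, 500
    except URLError as e:
        return url, "error: %s" % e.reason

# Example (requires network access):
#   print(check_link("http://validator.w3.org/checklink"))
```

A real tool layers scheduling, crawling, and reporting on top of this, which is exactly the part worth paying for (or vetting carefully in an open source project).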
Does anyone have recommendations (or horror stories) for link checking software?
There’s always the W3C utility at
http://validator.w3.org/checklink
IIRC, Dreamweaver has something similar, if you have access to the .html source.
Oh – and I use a bookmarklet to test any page’s links… Let’s see if this comment will get through the filters…
javascript:void(document.location='http://validator.w3.org/checklink?url='+document.location)
Sorry, the code is here:
http://oncampus.richmond.edu/~emiles/linkchecker.html
So I’m not exactly clear about your scenario, but here goes:
– one option would be to do the link checking directly within del.icio.us via something like http://www.unixdaemon.net/delicious_checker.html
– but if you are publishing the links to HTML pages on your site manually somehow, and want to check those pages, either do as others suggest and use one of the web page authoring packages with a built-in link checker, or try a free standalone program like Xenu (http://home.snafu.de/tilman/xenulink.html) that works OK (I’ve never been that keen on Xenu’s output format, but hey, it’s free).
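One way to bridge the two options: most bookmark services (del.icio.us included) can export your bookmarks as a standard Netscape-style bookmarks HTML file, and pulling the URLs out of that is trivial, so you can feed them to whatever checker you end up with. A sketch, assuming Python — the export filename would be whatever your service gives you:

```python
# Sketch: extract all hrefs from a bookmarks HTML export so the URLs
# can be handed to a link checker.
from html.parser import HTMLParser

class HrefCollector(HTMLParser):
    """Collect the href attribute of every <a> tag encountered."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def extract_links(html_text):
    parser = HrefCollector()
    parser.feed(html_text)
    return parser.links

# Usage: extract_links(open("delicious_export.html").read())
# (the filename is hypothetical)
```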
And don’t I recognize the woman in those e-strategy articles you pointed to 😉
Cheers, Scott