Whither Web 2.0?

Posted: August 16th, 2011, by Chris Blanton

It is now the last week of LIBR 559M, the final week of the summer term and the end of the academic year, which in the bigger scheme of things means September is just around the corner. Not too early, in fact, to start thinking about 2012. For those who follow such things, 2012 is the year that the Mayan long count ends, which by some accounts foretells the end of the world. But more relevantly for this class, 2012 – specifically October 1, 2012 – marks the end of Web 2.0.

Or maybe not. What October 1, 2012 really marks is the date on which, as the tech commentator Christopher Mims (quoted by John Naughton) predicts, the now-declining frequency of appearances of the term “Web 2.0” in Google searches will reach zero. It might be more accurate to say that this is not so much the end of Web 2.0 the technology as the end of Web 2.0 the buzzword. What is certain is that there will still be tools and services that allow people to create and share content on-line, and that people will still be meeting and forming communities, though whether they will even be calling what they do social networking – or calling the tools that they are using social media – is an open question.

One of the affordances – or perhaps even obligations – of a course like this is the opportunity for each student to wrestle with the questions about the lasting utility or value of the social media we have been studying (the question “are social media a fad?” is as foundational to this course as, say, “what is art?” might be to another).

Is Web 2.0 a buzzword? Almost beyond question. The shoulders of the information superhighway are littered with discarded pieces of eJargon (or do I mean iJargon? it all starts to run together after a while). But also indisputably, there is something, however fuzzy, hidden behind the term, whether it’s AJAX programming that lets your browser run software “in the cloud” or a participatory ethic that lets the user control the transaction. Perhaps, as Naughton muses, it “is simply to say that it’s ‘the web done properly’.”

Are social media a fad? Perhaps, if only to the extent that some Web 2.0 services and products may have promised more than they ultimately delivered, and that many people signed up for services following the example of friends or family or celebrities, only to discard them after a while, like hula hoops and pet rocks. Think of how many people you know who started a blog only to abandon it after a few posts, or who signed up for a social networking site only to become bored with it and stop using it. Other services have shown tremendous staying power: Wikipedia, YouTube, and Flickr, just to name a few.

And what about Library 2.0? Individual libraries have had successful ventures into social media that re-engaged their patrons and sparked the imagination of the surrounding community. However, I’m inclined to think that if libraries are a dying institution, Library 2.0 as a programme is not going to save them, and that if libraries are thriving (and many of them actually seem to be), Library 2.0 can’t take the credit. I say this not least of all because the libraries that I visit on a regular basis, both public and academic, are such bustling places exactly because they are valued physical (not virtual) spaces.

In this course I’ve read and watched a fair number of tech pundits and futurists, so I am now inspired to take my turn and pull out my own doubtlessly unreliable crystal ball. (After all, participation is the name of the game, n’est-ce pas?) I predict that the future growth of the social web will be constrained over time by diminishing returns. At some point (soon), most everyone in the “have” nations who wants a networked computer will have one. This is rapidly becoming true for mobile telephones, even in much of the developing world, and a similar saturation with smartphones will follow. There is also a limit to how many meaningful connections one person can have with other people, even on line – there is even a limit to the number of meaningless ones!

Meanwhile, the number of connections between machines will continue to increase, resulting in a state of hyperconnectivity. Much of the communication between these devices will be simple data, telemetry and the like, but increasingly machines will also autonomously query each other for information with semantic content. These transactions will ultimately become very complex – not to the point envisioned by some of the more utopian visions of the Semantic Web, but complex enough to challenge some of our current notions about information and agency.

Like the waves of technology that preceded it, this wave will make some people a lot of money. It will be accompanied by a healthy dollop of hype, some of it undeserved, and by its own buzzwords, some of which will be sillier than others. It will leave librarians and other information professionals struggling for a while to make sense of how the new technologies affect our institutions, but we will ultimately figure it out, in no small part because of the experience we will have gained with Web 2.0 and all the other technologies we have had to master before it. We will understand the changing information needs of our users and will learn how to help them (and their machines). We quite possibly will continue to read books. And at the end we will be … the hyperconnected librarian?

References

John C. Abell (2008). The end of Web 2.0. Wired Magazine. Retrieved 16 Aug 2011.

Cisco Systems (2011). Entering the Zettabyte Era. Retrieved 16 Aug 2011.

David Chartier (2008). No off switch: “Hyperconnectivity” on the rise. Ars Technica. Retrieved 16 Aug 2011.

John Naughton (2011). The death of Web 2.0 is nigh. The Observer / guardian.co.uk. Retrieved 16 Aug 2011.

Aggregation, the semantic web, and The Daily Me

Posted: August 13th, 2011, by Chris Blanton

It was many years ago, long before even the advent of Web 2.0, that I remember first hearing Internet pundits telling us that soon we would be able to subscribe to The Daily Me, a virtual newspaper delivered to our computer screen and customized exactly to our individual requirements. Don’t like sports? There’s no sports page. Are you a fan of the team MODO Hockey in the Swedish Premier League? Here are the latest scores and highlights from Örnsköldsvik, in a sidebar right on your front page. Are you interested in art forgery, Antarctic exploration, the future of the Canadian potash industry? No matter how specialized your interests, The Daily Me would find the breaking news stories on your topics and put them right there, front and centre, for you to read in the order you want to read them.

Didn’t quite work out that way. Instead, we got RSS feeds at first, which I could never get enthusiastic about. The first browser I had with built-in RSS aggregation was, I think, Apple’s Safari, which came preconfigured with (among others) a feed for the BBC: the first time I clicked on the RSS menu, it told me I had something like 10,000 unread BBC news stories. I was sufficiently intimidated that I never opened that particular menu again.

The only hope I can hold out that The Daily Me will ever become reality is if the semantic web ever gets some traction. As far as I can tell, the aggregators I’ve seen so far just replace the experience of taking your sippy cup from one fire hose to another with the experience of directing all the fire hoses at you simultaneously: equally unsatisfying, if perhaps more efficient. What I would like to see in an aggregator is something that would open each channel, keep whatever comes through that I’m interested in, and discard the rest. For bonus marks, it would then combine related items into threads, sort of the way that Google News does. But for this to be possible, the incoming content would have to be tagged in a semantically meaningful way. Not necessarily meaningful to me — I don’t need to know how to read MARC records to use an OPAC — but meaningful to the aggregator.
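
To make this concrete, here is a minimal sketch of the kind of filtering and threading I have in mind. Everything in it is hypothetical — the item shape, the tag vocabulary, and the topics are invented for illustration — but it shows the core behaviour: open each channel, keep only the items whose tags match my interests, and group the survivors into threads.

```typescript
// Hypothetical sketch: filter semantically tagged feed items and
// group the survivors into threads. The FeedItem shape and the tag
// vocabulary are inventions for illustration, not a real standard.
interface FeedItem {
  title: string;
  source: string;
  tags: string[]; // semantic tags supplied by the publisher
}

// Topics this (imaginary) reader cares about.
const interests = new Set(["art-forgery", "antarctic-exploration", "potash"]);

// Keep only items carrying at least one tag the reader cares about.
function filterByInterest(items: FeedItem[]): FeedItem[] {
  return items.filter((item) => item.tags.some((t) => interests.has(t)));
}

// "Bonus marks": group related items into threads, keyed here by the
// first matching tag — a crude stand-in for real semantic clustering.
function threadItems(items: FeedItem[]): Map<string, FeedItem[]> {
  const threads = new Map<string, FeedItem[]>();
  for (const item of items) {
    const key = item.tags.find((t) => interests.has(t)) ?? "misc";
    threads.set(key, [...(threads.get(key) ?? []), item]);
  }
  return threads;
}

// Usage: one combined "Daily Me" front page drawn from many channels.
const incoming: FeedItem[] = [
  { title: "Forged Vermeer surfaces at auction", source: "BBC", tags: ["art-forgery"] },
  { title: "Hockey roundup", source: "SVT", tags: ["sports"] },
];
console.log(threadItems(filterByInterest(incoming)));
```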

The semantic web as envisioned by Tim Berners-Lee and his followers may still be many years away; it may end up being the flying car of the information age (that is, a futurist’s prediction that proves impractical in the real world). But even a partial implementation of standards for machine-to-machine exchange of semantic markup would go a long way toward making better aggregators: not just better personal aggregators for web-skeptics like me, but also the kinds of professional-grade aggregators that information organizations might employ as the keystones of the web portals of the future.
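
What might such tagging look like on the wire? One building block that already exists is the Dublin Core vocabulary, whose dc:subject element can be embedded in RSS 1.0 feeds and gives an aggregator exactly the kind of machine-meaningful hook described above. A minimal sketch, with an invented feed entry:

```typescript
// Sketch: pull machine-readable subject tags out of one feed entry.
// dc:subject is a real Dublin Core element; the entry itself is invented.
const DC_NS = "http://purl.org/dc/elements/1.1/";

const entryXml = `
<item xmlns:dc="${DC_NS}">
  <title>Forged Vermeer surfaces at auction</title>
  <dc:subject>art forgery</dc:subject>
  <dc:subject>auctions</dc:subject>
</item>`;

// DOMParser is built into every modern browser.
const doc = new DOMParser().parseFromString(entryXml, "application/xml");
const subjects = Array.from(
  doc.getElementsByTagNameNS(DC_NS, "subject"),
  (el) => el.textContent ?? ""
);

console.log(subjects); // ["art forgery", "auctions"]
```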

Creation: the Librarian as Muse

Posted: August 6th, 2011, by Chris Blanton

Many Internet users have shifted from passively consuming content to creating it, but even creators have information needs, and the librarian can have a role as a facilitator of the creative process. The users of social media want to learn how to use new tools, they want to find ideas to inspire creative work, they want to find the raw materials to make mashups and other creations, and they may want to find new ways to share their finished work and possibly even profit financially from it. Currently, they probably won’t think of the librarian as the first person to turn to for help; if libraries develop services and practices with these people in mind, however, that may change.

A librarian, for example, doesn’t need to become a graphic designer, but should be able to help a patron locate a source for a type font or search an archive of stock images. A librarian on duty in the information commons may even be asked for help installing a font on a patron’s laptop. (Perhaps one day this will seem as routine a part of our job as refilling the inkwells in the reading room might have been in an earlier era.)

Here are some of the ways librarians can facilitate creation:

  • Help patrons find the digital “raw materials” in the local collection (clip art, sound effects, photo stock) and on line
  • Offer how-to information through the library’s on-line presence (web site, YouTube channel)
  • Offer on-site workshops and informal opportunities for creators to meet F2F and share ideas
  • Incorporate digital creation opportunities into children’s and YA programming
  • Provide facilities for recording sound and images (a public library’s main branch or an academic library might go so far as to have a digital production facility)
  • Ensure that public multi-purpose computer workstations have software for editing images, audio, and video (just as they typically have word processing software today)
  • Explain how copyright and Creative Commons licensing work
  • Exhibit work created using the library’s facilities

Collaboration: what makes a project a candidate for collaboration?

Posted: August 1st, 2011, by Chris Blanton

One of the questions facing us as information professionals is, when do we collaborate, and when do we work alone? In some cases, we won’t have the choice. Some external force, perhaps a manager or funding agency, will tell us we have to collaborate. In other circumstances a new task brings with it the question, “do I tackle this on my own, or should I seek out collaborators and approach the problem as a collective?”

The rule for deciding when a project is a good candidate for collaboration can be formulated in economic terms: one collaborates when the benefits outweigh the costs of doing so. Such a model is deceptively simple, because costs and benefits can be hard to measure, subjective, or intangible. Costs and benefits can accrue on multiple axes, with costs in one dimension being compensated for by benefits in another. The personality of the prospective collaborator is a key factor, especially in artistic productions and academic research. (King and Snell provide an in-depth description of the process of screening and selecting collaborators in the natural sciences.) Except in certain highly structured settings, I suspect most people use a heuristic approach rather than a formal cost calculation when deciding whether to collaborate, with the decision being highly influenced by personal experiences and prejudices about collaboration.

Mary Frank Fox and Catherine Faver divide the costs of collaboration into process costs and outcome costs. Process costs are incurred in the ongoing operation of the collaboration, and in particular in maintaining a team, with its required channels of communication and transactions (costs which may be measured in time, money, or emotional wear and tear on the participants). Outcome costs refer to any reduction in the value of the final product relative to the value that would have been realized without collaboration. Examples of outcome costs include a team of scientists who let themselves be “scooped” by a rival investigator, or a wiki page that is unreadable because of differences in objectives and writing style among the authors.

One type of process cost arises from the diminishing returns of adding members to a team, as Frederick P. Brooks illustrates in his classic book on software engineering, The Mythical Man-Month. When knowledge workers are added to a project, the contribution per worker decreases, and in some cases becomes negative, as time spent on communication and coordination outweighs the contribution of the new additions to the group. This situation is exacerbated when people joining the project must master a new body of highly specific or technical knowledge before they can contribute, which means each prospective collaborator has a lengthy period of learning during which their peers must dedicate time to mentoring them.
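
Brooks’s point can be put in arithmetic terms: in a fully connected team of n people, the number of pairwise communication channels is n(n − 1)/2, so the coordination burden grows roughly with the square of the team size. A quick back-of-the-envelope sketch:

```typescript
// Pairwise communication channels in a fully connected team of n people:
// n * (n - 1) / 2, i.e. roughly quadratic growth in coordination burden.
function channels(n: number): number {
  return (n * (n - 1)) / 2;
}

// Doubling the team from 5 to 10 more than quadruples the channels.
for (const n of [2, 5, 10, 20]) {
  console.log(`${n} people: ${channels(n)} channels`);
}
// 2 people: 1 channels
// 5 people: 10 channels
// 10 people: 45 channels
// 20 people: 190 channels
```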

Thus one has to weigh the network effect (where the value of the network increases with the number of connections) against the costs of those connections and the potential for interference effects (as, for example, having so many cooks in the kitchen at one time that they are tipping over each other’s sauce pans and blocking access to the refrigerator). Collaboration is most effective when the task can be partitioned into sub-tasks, or when the team members have complementary, rather than redundant, skills (see again King and Snell).

Web 2.0 applications tilt the balance in favour of collaboration; they lower the cost of participation and widen the pool (mixed metaphor) of potential collaborators. However, there are still likely to be tasks where sustained, concentrated effort by a single person is the most effective approach.

Sometimes an organization may choose to undertake a project in a collaborative model, and deliberately incur extra costs or delays in doing so, because of longer term objectives. For example, the organization may have goals such as:

  • fostering a culture of collaboration within the organization
  • building teams that will be expected to work together on other projects in the future, and that will be able to profit from the experiences gained and relationships built while working on the initial project
  • building a feeling of ownership in the end product (for example, a tool or process), by including the end users in the development
  • obtaining funding by including participants from a potential funding agency
  • engaging the clientele of an organization as participants, to retain loyalty, increase mindshare, or recruit brand ambassadors

Works cited

Brooks, F. (1975). The Mythical Man-Month: Essays on Software Engineering. Reading, MA: Addison-Wesley.

Fox, M. F., & Faver, C. A. (1984). “Independence and cooperation in research: The motivations and costs of collaboration.” The Journal of Higher Education, 55(3), 347-359.

King, Z., & Snell, S. (2008). “Knowledge Workers and Collaboration: the HR Agenda.” Paper for the Centre for Strategic Management and Globalization’s mini-conference on HRM, Knowledge Processes and Organizational Performance. Retrieved 1 August 2011 from http://www.printedelectronics.net/documents/CBSconferenceKingSnell.pdf

What is Web 2.0?

Posted: July 25th, 2011, by Chris Blanton

Web 2.0, or so we are told, underlies the recent proliferation of online social media. Is it a technology, a business model, a cultural shift, or maybe a bit of all three? Here’s a quick look at all three facets, along with a few arguments that suggest ways in which Web 2.0 is more akin to what came before than some of its champions might like us to believe. Revolutionary, or evolutionary? You be the judge.

A technology

Have you ever heard the slogan “Web 2.0 = Ajax”? There is a grain of truth to it: Ajax is an enabling technology without which Web 2.0 as we know it wouldn’t be possible. Ajax (Asynchronous JavaScript and XML) allows JavaScript code running in the user’s web browser to exchange small amounts of data asynchronously with the server, letting the browser perform transactions without having to reload the entire page. You don’t need a technology like Ajax to have interactivity in a web site, but it simply would not have been possible to deploy complex applications like Google Docs over the web without Ajax or something like it.
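
For readers who have never seen the pattern spelled out, here is a minimal sketch of that asynchronous exchange, using the browser’s standard XMLHttpRequest object. The endpoint URL and element id are invented for illustration; the point is that the callback patches one element of the page when the response arrives, with no page reload.

```typescript
// Minimal Ajax round trip: fetch a small piece of data asynchronously
// and patch it into the page without reloading. The URL ("/api/greeting")
// and the element id ("greeting") are hypothetical.
function updateGreeting(): void {
  const xhr = new XMLHttpRequest();
  xhr.open("GET", "/api/greeting", true); // true = asynchronous
  xhr.onreadystatechange = () => {
    // readyState 4 = request complete; status 200 = OK
    if (xhr.readyState === 4 && xhr.status === 200) {
      const target = document.getElementById("greeting");
      if (target) {
        target.textContent = xhr.responseText; // update just this element
      }
    }
  };
  xhr.send();
}

updateGreeting();
```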

Déjà vu: Web 2.0 is not really a fundamentally new Internet architecture. Web traffic still relies on the HTTP protocol, and HTML markup (or XHTML or XML) remains the basic structure for encoding web pages. JavaScript and Adobe Flash also predate the emergence of Web 2.0. Even Ajax just provides an additional layer of complexity running on top of existing Internet protocols.

A business model

A cynic might say that the business model of Web 2.0 is all about getting your customers to create your content for you. A true believer would say it is a new way of doing business where customers interact with businesses and each other to create value in ways that would not have been possible before.

Déjà vu: Haven’t we been here before? Many of the same symptoms that preceded the dot-com crash in 2000 are now visible in the Web 2.0 world: unquestioning acceptance of hype, too many vendors crowding into a limited and unstable market, and companies exploding from startup to multi-billion-dollar market valuations in a matter of months or years — often without ever recording a single quarter’s profit. There’s even a term for it now: Bubble 2.0.

A cultural paradigm

Social networking is so closely intertwined with Web 2.0 that we sometimes see people using the two terms synonymously. Even on web sites whose primary purpose is not social, the prevalence of Web 2.0 enabling technology allows site owners to tack on social media components at a modest marginal cost. Communities of users grow up around these sites, often bringing people together who would never have had the opportunity to meet or interact in person.

Déjà vu: Tim Berners-Lee, inventor of the World Wide Web, has responded to Web 2.0 boosters by saying that the Web was never designed as a one-way communication medium. Interactivity, participation, and collaboration have been implicit in the design of the Web since its inception: “If Web 2.0 for you is blogs and wikis, then that is people to people. But that was what the Web was supposed to be all along.” (quoted in Anderson, 2007). Where Web 2.0 does differ is that new tools for web application design do seem to be lowering the barriers to participation (consider the wider uptake of Facebook and Twitter compared to traditional blogging or the publication of personal web sites). This may perhaps be the most lasting accomplishment of Web 2.0.

References

Anderson, N. (2007). “Tim Berners-Lee on Web 2.0: ‘nobody even knows what it means’.” Ars Technica (online periodical). http://arstechnica.com/business/news/2006/09/7650.ars

Dvorak, J. (2007). Bubble 2.0 coming soon. PC Magazine. Republished online at: http://www.pcmag.com/article2/0,2817,2164136,00.aspx

Foley, S. (2011). Bubble 2.0: will the new dotcom boom go bust? The Independent. Online edition: http://www.independent.co.uk/news/business/analysis-and-features/bubble-20-will-the-new-dotcom-boom-go-bust-2216115.html

Casey, M. E., & Savastinuk, L. C. (2007). Library 2.0: A Guide to Participatory Service. Medford, NJ: Information Today.

The social network of things

Posted: July 21st, 2011, by Chris Blanton

Following along the thread of hyperconnectivity, I thought I’d share this somewhat amusing attempt to convey what life might be like in a world where all of our everyday devices were networked together.

http://www.wired.com/beyond_the_beyond/2011/04/design-fiction-ericsson-social-web-of-things/

Whimsy (and anthropomorphism) aside, I think it’s dubious to suggest that machines are capable of social interaction. But would this kind of hyperconnectivity have an effect on how people interact with social media in general? Would the increasing traffic of interaction with networked machines start to use up some of the networking capacity that people previously dedicated to their on-line friends? Would hyperconnectivity facilitate interaction between people, and between people and institutions, or would people end up just staying home in front of the TV and ordering takeout?

About me and welcome

Posted: July 16th, 2011, by Chris Blanton

I am a student in LIBR 559M, “Social Media for Information Professionals,” and this is my blog.

To plagiarize my Twitter profile, I am “by day, a technical communications specialist in Ottawa for Ericsson Canada, and by night a second-year MLIS student at UBC.” On paper — or perhaps I should say in a computer file — I must look something like the very model of a postmodern iSchool student: extensive background in XML, structured authoring, electronic document distribution, and all of that. But secretly I’m the kind of person that library schools try to screen out at all costs: you know, the one who wants to become a librarian because they like books and don’t like people (and did I mention the cats? I have three of them). I am overstating the case a little for dramatic effect, because I’m not really a misanthrope, but you get the general idea: I’m not exactly the person you’d vote most likely to be your library’s standard-bearer for Web 2.0. (For an interesting empirical study of the personality types of librarians who are (and are not) likely to get involved in Library 2.0, see: Aharony, N. (2009). Web 2.0 use by librarians. Library & Information Science Research, 31(1), 29-37.)

But enough about me — let’s move on to the term “hyperconnected” and why it’s in the title of my blog. Because I work in the telecommunications industry, I am regularly reminded that the Internet is reaching a turning point: soon there will be more devices connected to the Internet than users. In the next decade, the biggest contributor to the growth of the Internet will be the addition of assorted smart devices, many of them machines we would not normally think of as network entities (refrigerators, bread machines, fire hydrants, and the like). In the hyperconnected network, the user isn’t just an atomic point in the network, but a small cloud of interconnected devices, linked by Ethernet and Bluetooth and technologies that haven’t even been designed yet. Layered on top of this trend, the proliferation of social media and interactive networked applications continues. Each user is potentially connected not only to more people, but to the same people through an increasing number of media.

How will all of these changes affect the professional life and work environment of the librarian, archivist, or curator — and will they really make as much difference as the futurists say they will? This course promises to offer some fieldwork in the digital ecosystem. So this week we are getting our nets, specimen jars, and tranquilizer darts ready. Next week, we start looking for answers.
