Xanadu, lost? Google as “Memex”?

The readings in this section were fascinating. Ted Nelson’s philosophy of Xanalogical structure for the World Wide Web provides valuable historical context. As powerful as the web is, its visionaries had even bigger ideas.

As I read, I considered how a male, Western construct shaped hypertext. It set the parameters through which we operate, excluding other cultural approaches to text and discourse. Large segments of the global community struggle to access the web; what feels ‘natural’ in one culture may not feel ‘natural’ in another.

Hypertext is not an entirely original idea. Annotating text has precedent in ancient documents such as the Torah. In addition, anonymous influencers surely helped develop technologies; for all we know, Nelson’s esteemed wife and partner may have played a great, unsung role in the shaping of hypertext. As is often the case, history leaves out many key players.

It seems now that Nelson was left out as commercial interests dominated the web. I sympathize with Nelson’s lamentations that the web could have created new forms of literature entirely (perhaps these are yet to come), but I was disappointed by his assertion that “fonts and glitz, rather than content connective structure, prevail.”

Typography and visual elements are powerful communication tools. Aren’t stories what this is all about? Divergent points of view could have ultimately helped Nelson achieve his vision, albeit perhaps in a different way. A more inclusive and cooperative strategy could have led to other discoveries and collaborations and may have afforded Nelson his ideology. As it stands today, transclusion has not caught on and some say it never will.

Alongside Nelson’s vision, I took a great interest in the vision put forward by Vannevar Bush in “As We May Think.” The 1945 article superbly imagined a future of tools to realize hyper-mediated and hyper-textual environments, however distant a reality those may have been at the time. Consider:

  • “…will the author of the future cease writing by hand or typewriter and talk directly to the record?” (voice dictation tools);
  • “The camera hound of the future wears on his forehead a lump a little larger than a walnut.” (GoPro camera);
  • “…we can enormously extend the record; yet even in its present bulk we can hardly consult it.” (information overload); and
  • “Selection by association, rather than indexing, may yet be mechanized.” (search engines).

My favourite prediction in this article is the coining of the term “memex”:

“Consider a future device for individual use, which is a sort of mechanized private file and library… A memex is a device in which an individual stores all his books, records and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility.” (emphasis mine)

Initially, I mused that Google is our virtual Memex: a searchable, ‘mechanized’ (digitized) filing and library system. On further consideration, I concluded that today the hand-held device (i.e., the smartphone and iterations such as the tablet) better fits the description. Most of us are holding a Memex in our hands!

That could change again, though. Just this past week, Google filed a patent that could allow us to record and replay our memories with Google Glass and similar technologies. Perhaps the future of hypertext is filed, stored, retrievable and personal hypermedia. Maybe Nelson would consider these “deep” structures; maybe not. Either way, if virtual spaces outlast devices, Google may emerge as our ultimate Memex after all.

Julia

References

Baker, M. (2014). Transclusion Will Never Catch On. Every Page is Page One. Accessed July 28, 2015 from http://everypageispageone.com/2014/09/15/transclusion-will-never-catch-on/

Bush, V. (1945). As We May Think. The Atlantic Monthly, 176(1), 101-108. Accessed July 28, 2015 from http://www.theatlantic.com/doc/194507/bush

Nelson, T. (1999). Xanalogical Structure, Needed Now More than Ever: Parallel Documents, Deep Links to Content, Deep Versioning and Deep Re-Use. ACM Computing Surveys 31(4), np. Accessed July 28, 2015 from http://www.cs.brown.edu/memex/ACM_HypertextTestbed/papers/60.html

Tamblyn, T. (2015). Google’s ‘Black Mirror’ Patent Could Let Us Record And Replay Our Memories. The Huffington Post UK. Accessed July 28, 2015 from http://www.huffingtonpost.co.uk/2015/07/24/google-s-black-mirror-patent-could-let-us-record-and-replay-our-memories_n_7863152.html

Image credit: http://www.huffingtonpost.co.uk/2015/07/24/google-s-black-mirror-patent-could-let-us-record-and-replay-our-memories_n_7863152.html

3 thoughts on “Xanadu, lost? Google as “Memex”?”

  1. Julia,

    As an amateur web developer and someone who is simply interested in web programming technology, I am embarrassed to admit that I had no idea what Xanadu was all about. Reading through Nelson’s article, it may have seemed like a good idea to attach an “IP” address to every single character in a document before the web even existed. But it hardly seems possible today, now that the web has grown to the point where we have actually run out of IPv4 addresses and had to implement IPv6. To push this even further, that was a few years ago! And that’s when we give an IP address to an entire video or an entire document. Imagine if each character, or even each word, had an IP address. I think the problem with Xanadu is that if such a system existed today, the addresses would be so complex that the memory needed to store them would far exceed the data a Xanadu system is representing. But no one, it seems, could have predicted how far the web would go. I’m sure Vint Cerf (considered the “father of the internet”) never imagined running out of IPv4 addresses.
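    A quick back-of-the-envelope sketch makes the overhead concrete (this is only an illustration of the argument above, not part of any real Xanadu design; the choice of a 16-byte, IPv6-sized address per character is my assumption):

    ```python
    # If every character in a document carried its own IPv6-sized address,
    # how much address storage would that take relative to the text itself?

    IPV6_ADDRESS_BYTES = 16  # an IPv6 address is 128 bits = 16 bytes

    def address_overhead(text: str) -> float:
        """Ratio of per-character address storage to (UTF-8) text storage."""
        text_bytes = len(text.encode("utf-8"))
        address_bytes = len(text) * IPV6_ADDRESS_BYTES
        return address_bytes / text_bytes

    # For plain ASCII text, each 1-byte character drags along 16 bytes
    # of address: a 16x overhead before any content is stored.
    print(address_overhead("As We May Think"))  # -> 16.0
    ```

    So even under these toy assumptions, the addressing metadata dwarfs the content it points to, which is the commenter’s point.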

    I should also mention that, from a technical point of view, you could possibly have a hyperlinked system like Xanadu using our current HTTP/IPv4 system. And in a sense, the packet system already does something similar by slicing up our data and sending it through the internet in tiny addressed packets instead of all at once. This is called packet switching and is done through TCP/IP (Transmission Control Protocol / Internet Protocol). So, in a sense, the Xanadu system exists in a similar form today. Here is some info on packet switching:
    https://www.youtube.com/watch?v=dtpvGmQTXTA
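    The idea can be sketched in a few lines: data is sliced into small numbered packets that may arrive in any order, and the receiver reassembles them by sequence number. (A toy illustration of the concept only; real TCP/IP adds headers, checksums, retransmission, and much more.)

    ```python
    # Toy packet switching: slice data into numbered packets, then
    # reassemble the original bytes even if packets arrive out of order.

    def to_packets(data: bytes, size: int = 4):
        """Slice data into (sequence_number, payload) packets."""
        return [(i, data[i:i + size]) for i in range(0, len(data), size)]

    def reassemble(packets):
        """Rebuild the original data from packets received in any order."""
        return b"".join(payload for _, payload in sorted(packets))

    packets = to_packets(b"Xanadu lives on in TCP/IP")
    packets.reverse()  # simulate out-of-order arrival
    print(reassemble(packets))  # -> b'Xanadu lives on in TCP/IP'
    ```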

    You know, I read Bush’s article too. To be blunt, I thought it was rather uninteresting until, near the middle of the article, I decided to look back at the first page. I was quite shocked to learn that it was written in 1945! He had amazing clairvoyance in predicting what our information society would look like today. It makes me wonder: is there an academic paper today that accurately predicts the state of the information society 50 years from now?

  2. Hi Daniel,

    Your comment is an education in itself! I am not too familiar with programming technology, but am grateful that readings such as these are able to place where we are into context. The challenges of trying to hyperlink the vast amounts of information out there today would be overwhelming – and I wonder, would it even be necessary? Perhaps there were players who imagined the state of the web and blocked the development of Xanadu-like initiatives? The multiple forces and stakeholders at work are probably impossible to pin down. Who knew where the web would end up? Well… speaking of visionaries, I don’t know which academic paper to turn to, but here is one visionary to start with: Ray Kurzweil. I heard him speak in 2012 at FITC (http://fitc.ca/speaker/ray-kurzweil/) and recently noticed some of his associates on Twitter writing via his Singularity University project (http://singularityu.org/). Maybe that’s a start!

    Thanks,
    Julia

    I also found the readings on the origin of the internet fascinating. Engelbart’s conceptual framework was difficult to get into at first, but I persisted with it and found it very rewarding. His framework was based on the human-technology system he called H-LAM/T, for Human using Language, Artifacts, Methodology, in which he is Trained. Engelbart’s work is intended to further augment the H-LAM/T system to work on complex problems. He focused his work on developing a system for manipulating language based on his Neo-Whorfian Hypothesis: “Both the language used by a culture, and the capability for effective intellectual activity are directly affected during their evolution by the means by which individuals control the external manipulation of symbols.”

    Engelbart takes an experimental approach to enhancing this system, using as his research subjects the team of programmers working on enhancing it. Engelbart calls this a bootstrapping approach: once the system is refined, the solutions developed can then be applied to more general problems.

    Engelbart published his conceptual framework in 1962. What makes Engelbart’s vision for hypertext especially fascinating is that he did develop it into a working system. His 1968 demonstration of the system can be viewed at http://web.stanford.edu/dept/SUL/library/extra4/sloan/mousesite/1968Demo.html. In this demo, he uses video conferencing, screen sharing, hyperlinks, and a mouse, which he is credited with inventing. He shows how to use the computer as an idea processor, and how his research team uses it in their daily work.


    References:

    Engelbart, D. (1962). “A conceptual framework for the augmentation of man’s intellect.” Retrieved August 5, 2015, from http://web.archive.org/web/20080331110322/http:/www.bootstrap.org/augdocs/friedewald030402/augmentinghumanintellect/ahi62index.html

    Engelbart, D. C., English, W. K. (1968, December 8). A Research Center for Augmenting Human intellect [Video file]. Retrieved Aug 5, 2015 from http://web.stanford.edu/dept/SUL/library/extra4/sloan/mousesite/1968Demo.html
