I’ve recently started subscribing to Jon Udell’s blog. One of his recent posts likens our own information publishing to a cell – in the sense that it has a membrane through which we detect interactions with the outside world.
A compelling visual, no doubt – I think it’s a great way to describe, to those who have not really thought about it, how their information is aggregated, redistributed, and shared once they send it out. Shortly after he wrote about this illustrative analogy, he was informed that his own site was blocking crawlers via his robots.txt file. The irony was not lost on him:
A comment from Mark Middleton perfectly illustrates the point I was making the other day about visualizing your published surface area. I started this blog in December, and ever since I’ve been running with a robots.txt file that reads:

User-agent: *
Disallow: /
In other words, no search engine crawlers allowed. Of course that’s not what I intended. I’d simply assumed that the default setting was to allow rather than to block crawlers, and it never occurred to me to check. In retrospect it makes sense. If you’re running a free service like WordPress.com, you might want to restrict crawling to only the blogs whose authors explicitly request it.
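To see concretely why that two-line file shuts everything out, here is a minimal sketch using Python’s standard `urllib.robotparser`, fed the exact rules quoted above (the `ExampleBot` user-agent name is a made-up stand-in for any crawler, and example.com stands in for the blog’s address):

```python
# Sketch: how a well-behaved crawler interprets "User-agent: * / Disallow: /".
# The rules are the ones quoted from the post; the bot name and domain
# are hypothetical placeholders.
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: *
Disallow: /
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# "Disallow: /" under "User-agent: *" puts every path off-limits
# to every crawler that honors robots.txt.
print(parser.can_fetch("ExampleBot", "https://example.com/"))        # False
print(parser.can_fetch("ExampleBot", "https://example.com/post/1"))  # False
```

Flipping the rule to `Disallow:` (empty) would allow everything – which is presumably what most bloggers assume is the default.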
WordPress.com’s policy notwithstanding, the real issue here is that these complex information membranes we’re extruding into cyberspace are really hard to see and coherently manage.
We’re all learning, probing, and figuring out this new medium, even 10-15 years on. We’re struggling with the abundance of information and, concurrently, the distinct lack thereof. We can connect with people from anywhere, at any time, assuming they’re connected and watching the same streams of information. And yet we cannot see who’s watching, who’s aggregating and saving for later.
Thanks, Jon, for the nice analogy. I’ll use it myself – with a link back, of course, so you can sense it.