The IPv6 specification has been around for about 9 years now. People understand the problems with IPv4. Yet the world is struggling to move to IPv6.
Ruby has been around for about 14 years now. It had enough time to evolve before it was widely adopted.
Java has been around for about 12 years. Adoption was quite fast. Now, although newer versions of Java are being released, it is slowly losing its charm. There are known problems that would better not have been there in the first place, but it is now too late or too difficult to correct them.
Browsers were meant for browsing documents containing hypertext. HTTP is a request/response application-level protocol. People are now using both for things they were not designed to do: browsers as application platforms (mashups from multiple sites) and HTTP for pushing data from the server, aka Comet.
I can go on. But, do you see a pattern here? Or are these just random bits from history?
The point I am trying to make is that it is very difficult for a technology to evolve once it has been widely adopted. In such cases, it is considered acceptable to work around the technology without breaking it, by exploiting loopholes!
So do we blame adoption before a technology matures? Or should there be a way for technology to mature and evolve without carrying its sins forward? Is it possible to mandate a 'big bang' when specifications move from one version to another, or to allow radical changes in technology without backward compatibility? How can this be achieved while ensuring minimal side effects? Does the answer lie in The Tipping Point?!
Seems like an interesting problem to solve.
I recently blogged about Slaves of technology, where I mentioned that over-dependence on technology might create problems.
Here's what I found today regarding responsibilities of humans in technological evolution. This is an audio excerpt from Ray Kurzweil's interview in Accelerating Change 2005.
There was a time when there was no electricity. Then came this wonder. Today we take it for granted, although there are occasions when we have to go without power. It is a dreadful feeling.
Now we are talking of cyborgs and things like pervasive computing (which means you don't even know you are using it). There are these wonders called Digital Homes, where doors may not even have knobs. Everything is controlled using biometric recognition, or perhaps directly by the brain!
In fact, if you read Ray Kurzweil's articles, you will go a step further into this. Kurzweil talks of virtual worlds and singularity.
All of this sounds really great! But is this over-reliance of man on computers and technology? Is there a limit to comfort? Can we live without these things once they are available?
The more we rely on technology, the riskier it gets and the more vulnerable we are to attacks. Consider a world without paper money, for example. There is just e-cash. Sounds great?
Now suppose the bank where you store your money suffers a massive virus attack and everything goes down in a moment. What happens? What if the home with no door knobs experiences a failure and you cannot open the door? This might sound silly (perhaps for lack of a better example), but the problems are quite clear.
This does not in any way mean that reliance on technology is bad. After all, that is how man progresses. Technology is a tool; it facilitates something that is otherwise tough or impossible to do. But over-reliance is bad. What exactly is over-reliance, then?
Well, it is really tough to draw the line. It becomes an ethical discussion if we pursue this further. But the point is worth considering.
If you have read Kurzweil's articles, try answering this question: “If you are given the freedom to live till eternity, what will you do?” Something you should really consider!