A while back you would still hear a lot of talk about Web 2.0. It excites a lot of people, but it troubles me. It strikes me as a very rigid form of constrained thinking: coloring inside the lines. If you insist on buzzwords, I’m not so much interested in Web 2.0 as I am in Net 2.0.
We seem to have forgotten that the Web is only one application of the Internet, or Net for short. There was a time when the Web didn’t exist and we used the Net for other things: Telnet (and its modern avatar SSH), FTP, Email, News, and IRC all come to mind. These are still around, but to varying degrees they have lost the importance they once had. Email is still going strong, but competitors are starting to chip away at it, making its future dominance unclear. The same will be true for the Web some day. Today the Web certainly dominates the Net in the amount of time spent using it, even if it doesn’t dominate in the number of bytes transferred. Its dominance is reflected in the fact that we tend to conflate the terms Net and Web. However, there’s good reason to believe this will not always be the case. The Net has changed before, and it will change again. The Web is not the Net. Who will be the next Gopher?
I should point out that the inventor of the Web, Tim Berners-Lee, has some thoughts of his own on the difference between the Web and the Net. He foresees the need for a Semantic Web, which he would rather call a “Giant Global Graph”. I’m not here to disagree with him; I’m merely trying to suggest we open our eyes as wide as possible. In fact, I do believe the Semantic Web, with its focus on computers rather than humans, will provide us with some large advances in computing. But it is not the horizon, only another step along the way.
The Web was designed to deliver us a “web” of hyperlinked documents, containing text and images, each with an address. It does this quite well. Originally Tim Berners-Lee conceived of a medium for writing as well as reading, which I believe was the original intent of the HTTP POST verb. The Web never really got this “produce as well as consume” spirit until wikis came around. Wikipedia showed what you could accomplish with true read-and-write collaboration on hyperlinked documents. It would not be possible to replicate the success of Wikipedia without its technological underpinning, the wiki-augmented Web. When used appropriately, the Web is a great medium for both producing and consuming content.
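To make the “write as well as read” point concrete, here is a minimal sketch of what that POST verb looks like on the wire. The endpoint and form field are hypothetical (there is no real wiki.example.org); the point is simply that HTTP has always carried a verb for sending content to the server, not just fetching it.

```python
# A sketch of the "write" half of HTTP: the raw request a client might
# send to edit a hypothetical wiki page. No real server is assumed here;
# the host, path, and form field are illustrative only.
body = "text=The Web is not the Net."
request = (
    "POST /wiki/Net_2.0 HTTP/1.1\r\n"        # POST: submit content, don't just GET it
    "Host: wiki.example.org\r\n"
    "Content-Type: application/x-www-form-urlencoded\r\n"
    f"Content-Length: {len(body)}\r\n"
    "\r\n"                                    # blank line separates headers from body
    f"{body}"
)
print(request.splitlines()[0])  # → POST /wiki/Net_2.0 HTTP/1.1
```

A wiki is essentially this request plus version control; the protocol support for producing content was there from the start, even if the culture wasn’t.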
The Web was intended for documents, but often we use it for applications. Gmail and TurboTax are applications, not documents, yet they are built with Web technologies and their document-centric focus. This technological mismatch makes itself painfully felt in many ways. The phrase “don’t hit your browser’s back button” is a laughable but still all-too-common symptom of it. There are protocols designed to build applications that can be consumed remotely over the Net; X is one such example. X served its purpose for many years: the ability to consume a graphical application running on a remote server. But X is now an ancient technology, and new thinking is sorely needed in this area. The web development world is now focused on building Single Page Apps, but these are still built on an HTML/HTTP foundation, with all of the limitations inherent to it.
What if we agreed we didn’t have to make everything a web page? What if we were designing a collaboration medium from scratch on the Net? What if we removed all the limits? You might imagine that such a medium would be a full 3-D environment with graphics and sound, deliberately mimicking the real world. Such media exist on the Net: commercially with Second Life, in research with Croquet. These media can display documents, just like the Web can, and can address them too, but they aren’t limited to documents alone. They are not the pinnacle of what we might use the Net for, but they do show us that we can use the Net for things beyond just email and hyperlinked documents. I’m not recommending these platforms themselves; what I am recommending is the wider viewpoint they take.
Skeptics of new Net technologies may point out that the browser has become a universal consumer of Net content, and that any new medium on the Net not viewable in a browser is a non-starter. In the days of slow internet connections, when it wasn’t possible or easy to download a large client program quickly, this was true. Today, however, with fast broadband connections in the home the norm, if your content is good enough users will download the needed client. Second Life and Croquet require downloads. Games, which are increasingly social online applications these days, require downloads or a boxed purchase in the store. Some games are so addictive that people actually pay real money to download their large clients. Your new idea can be the same; it doesn’t have to be shoehorned into “the Browser”.
Despite its faults, it’s easy to see why the Web gets so much attention. It’s made some people very rich, and made many others a comfortable living, myself included. So long as this is true, people will continue to develop for and use the Web. As is so often the case, money trumps technical concerns. Even today, I see so many small companies trying to re-inflate the “.com” bubble, believing that any idea will sell merely because it is on “the Web”. But the increasingly crowded space of today’s Web brings competition, which means making money there is far from guaranteed. Some day, a new killer application will appear that doesn’t use the Web, and the innovator will make a fortune as a result. The copycats will come in, and the cycle will repeat.
I have no doubt that the Web will improve as time goes on. WebSockets, WebRTC, HTML5, ES5: these are all useful advances. But do we still have to see everything through the lens of a markup language and a stateless textual transfer protocol? A little revolution every now and then is a good thing.
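WebSockets are a telling example of how hard the Web has to work to escape its stateless roots: a connection starts as ordinary HTTP and is then upgraded into a long-lived, two-way channel. The sketch below computes the server’s `Sec-WebSocket-Accept` handshake value as defined in RFC 6455; the GUID is fixed by that spec, and the sample key is the one used in the RFC itself.

```python
import base64
import hashlib

# The RFC 6455 handshake that upgrades a stateless HTTP request into a
# stateful WebSocket connection. The server proves it understood the
# upgrade by hashing the client's key with a GUID fixed by the spec.
GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

def websocket_accept(client_key: str) -> str:
    """Compute the Sec-WebSocket-Accept header value for a client key."""
    digest = hashlib.sha1((client_key + GUID).encode("ascii")).digest()
    return base64.b64encode(digest).decode("ascii")

# Sample key from RFC 6455 section 1.3:
print(websocket_accept("dGhlIHNhbXBsZSBub25jZQ=="))
# → s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

It works, but it is still HTTP contorting itself into something it wasn’t designed to be, which is rather the point of this essay.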