The transport user has little interest in planes, trains and automobiles, but wants to get from A to B quickly, safely, comfortably, securely and as cheaply as possible. The information user has similar requirements.
Books and newspapers provided one-to-many communication of bulk information. Postal services provided personalised communication, and emerged when the cost of delivering letters became acceptable with efficient transport. Telegrams provided a rapid way of getting a message to distant sites. Telephony personalised communication further and provided the first example of synchronous communication, where both (or several) participants are present at the same time. Radio and television were the last major additions to traditional communication applications, adding real-time dissemination of information. So we can think about communication mechanisms in terms of two dimensions: one-to-one versus one-to-many, and synchronous versus asynchronous.
On the Internet, the source and sink of information is the computer. Applications have emerged that are related to pre-computer network communications, but are different because the computer mediates and manipulates the information transmitted.
File transfer allows people to move large documents and programs around the network. Electronic mail permits this to be done asynchronously, and personally. Bulletin boards allow efficient many-to-many communication. Archive servers allow browsing of large amounts of information in a way more akin to unstructured library access. Wide Area Information Servers permit index-based searching of information resources, and retrieval via keywords or any other predetermined relevant detail in the data.
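To give a flavour of the keyword retrieval such servers provide, here is a minimal sketch of an inverted index in Python; the document names and contents are invented for the example, and real servers index far larger collections.

```python
# A toy inverted index: map each keyword to the set of documents
# containing it, then answer queries by intersecting those sets.
# Document names and contents here are invented for illustration.

documents = {
    "rfc-guide.txt": "internet protocols and packet routing",
    "library.txt": "browsing large information archives",
    "audio.txt": "real time audio over packet networks",
}

# Build the index: keyword -> set of document names.
index = {}
for name, text in documents.items():
    for word in text.split():
        index.setdefault(word, set()).add(name)

def search(*keywords):
    """Return the documents containing every given keyword."""
    results = [index.get(word, set()) for word in keywords]
    return set.intersection(*results) if results else set()

print(search("packet"))             # {'rfc-guide.txt', 'audio.txt'}
print(search("packet", "routing"))  # {'rfc-guide.txt'}
```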
The "killer application" is the World-Wide Web, which allows browsing of a worldwide hypermedia database. The number of WWW servers on the Internet rose from 130 in June 1993 to 3,100 in June 1994. Before the Web, access to information resources had been through command-line interfaces familiar to users of the 1960s and 1970s, but alien to new users accustomed to graphical user interfaces.
It is also possible to convert existing documents, from most word-processor formats, into the Web's markup language, HTML. Work at Southampton University on authoring tools shows that the imposition of a single tool does not work; the Web, which permits the cutting and pasting of existing material into new structures, is ideal here. The main problem with the Web is its size, and the difficulty new wanderers have in finding anything. University College London has developed a tool for generating tours of the Web to enable experts to guide novices.
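To give a flavour of what such a conversion involves, here is a minimal sketch that turns plain text into HTML, treating each blank-line-separated paragraph as an HTML paragraph; real converters from word-processor formats must of course handle far richer structure than this.

```python
# A minimal plain-text to HTML converter: each blank-line-separated
# paragraph becomes a <p> element inside a skeleton HTML document.

import html

def text_to_html(text, title="Converted document"):
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    body = "\n".join(f"<p>{html.escape(p)}</p>" for p in paragraphs)
    return (f"<html><head><title>{html.escape(title)}</title></head>\n"
            f"<body>\n{body}\n</body></html>")

print(text_to_html("First paragraph.\n\nSecond paragraph."))
```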
The Internet is packet-based, breaking communication into short bursts of data. Any packet can be sent at any time to any place: this is connectionless, or datagram, communication. Each packet is routed along a path that can be recomputed between one packet and the next, giving easy automatic repair around broken components in the middle of the network and, by the same technique, very easy addition of new sites at the edges. This has been enhanced by the addition of multicast, so that any packet can be delivered to any number of places almost simultaneously, although there is no guarantee that anything sent reaches its destination.
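A minimal sketch of datagram communication, using Python's socket interface: each packet is sent independently, with no call set-up and no delivery guarantee. The addresses and port here are invented placeholders, and the multicast group address is an arbitrary example.

```python
# Connectionless (datagram) communication: every packet stands alone.

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# Unicast: one packet to one destination; the next packet could just
# as easily go somewhere else, over a different route.
sock.sendto(b"hello A", ("192.0.2.1", 5000))
sock.sendto(b"hello B", ("192.0.2.2", 5000))

# Multicast: a single send, delivered to every member of the group.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 16)
sock.sendto(b"hello everyone", ("224.1.1.1", 5000))
```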
The Internet has carried audio and video traffic around the world for more than four years. The problem is that this traffic requires guarantees: a minimum bandwidth below which audio, and even compressed video, becomes incomprehensible; and, for human interaction, a maximum delay above which conversation becomes intolerable.
To meet these guarantees, queues in the network must be served more frequently for traffic classes that need more capacity.
The delay increase seen by this traffic is then affected only by the amount of other, similar traffic on the network. So long as this stays within tolerable limits, the receiver can adapt continuously, for instance within silences in audio or between video frames, and all is well. Meanwhile, any spare capacity carries the old best-effort traffic as before.
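A toy illustration of this kind of weighted queue service, with invented class names and weights; real schedulers, including the Class Based Queueing scheme described next, are considerably more sophisticated.

```python
# A toy weighted round-robin scheduler: queues for traffic classes
# that need more capacity are served proportionally more often.

from collections import deque

queues = {"audio": deque(), "video": deque(), "best-effort": deque()}
weights = {"audio": 3, "video": 5, "best-effort": 1}  # shares of the link

def next_packets():
    """One scheduling round: each class sends up to its weight in packets."""
    sent = []
    for cls, q in queues.items():
        for _ in range(weights[cls]):
            if q:
                sent.append(q.popleft())
    return sent

queues["audio"].extend(["a1", "a2", "a3", "a4"])
queues["best-effort"].extend(["d1", "d2"])
print(next_packets())  # ['a1', 'a2', 'a3', 'd1'] - audio gets 3 slots, data 1
```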
UCL, with colleagues at Lawrence Berkeley Laboratory in California, has developed a scheme called Class Based Queueing that enhances the Internet to allow this. Yet when the total amount of traffic is higher than capacity, things go wrong. What can we do?
We could re-engineer the network so that there is enough capacity. This is feasible only while most people's access speed is limited by the "subscriber loop", or tail circuits that go to their homes/offices.
When we all have fibre to the home/desktop, the potential for drowning the Net is alarming.
We could police the traffic, by asking people who have real-time requirements to make a "call set-up" as they do on the telephone network. When the Net is full, calls are refused, unless someone is prepared to pay a premium, and to incur the wrath of other users by causing them to be cut off.
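A toy admission-control check along these lines; all the capacities, bandwidths and names are invented for illustration.

```python
# A new real-time flow is admitted only if capacity remains, or if the
# caller pays a premium, in which case an existing call is cut off.

LINK_CAPACITY = 10_000  # kbit/s, an assumed figure
admitted = []           # (caller, bandwidth) of current calls

def request_call(caller, bandwidth, pays_premium=False):
    used = sum(bw for _, bw in admitted)
    if used + bandwidth <= LINK_CAPACITY:
        admitted.append((caller, bandwidth))
        return "admitted"
    if pays_premium and admitted:
        victim = admitted.pop(0)  # cut off the oldest call
        admitted.append((caller, bandwidth))
        return f"admitted, pre-empting {victim[0]}"
    return "refused: network full"

print(request_call("alice", 8_000))                     # admitted
print(request_call("bob", 4_000))                       # refused: network full
print(request_call("carol", 4_000, pays_premium=True))  # admitted, pre-empting alice
```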
We could simply bill people more as the Net gets busier. This model, proposed by economists at Harvard, is similar to models of charging for road traffic proposed by the transport studies group at UCL. Since we have already re-programmed the routers to recognise real-time traffic, we have the ability to charge on the basis of logging this traffic. This is the preferred approach.
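A toy sketch of such usage-based charging, computing a bill from router logs of real-time traffic with a price that rises as the network gets busier; the rates, the log entries and the pricing formula are all assumptions for illustration only.

```python
# Bill a user from logged real-time traffic: busier network -> higher
# price per megabyte (here, up to 3x an assumed base rate).

def charge(log_entries):
    """log_entries: (megabytes_sent, network_load) pairs, load in [0, 1]."""
    base_price = 0.01  # currency units per megabyte, an assumed figure
    total = 0.0
    for megabytes, load in log_entries:
        total += megabytes * base_price * (1 + 2 * load)
    return total

log = [(100, 0.2), (50, 0.9)]  # a quiet period, then a busy one
print(f"{charge(log):.2f}")    # 1.40 + 1.40 = 2.80
```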
The current model for charging for traffic in the Internet is that sites attach via some "point of presence" of a provider, and pay a flat fee per month according to line speed, whether they use it to capacity or not. For existing applications and traffic levels, this model works well. The network provider maximises profit by admitting all data traffic, and simply giving everyone a fair share. As each user's share of the capacity falls, its utility falls, so users are prepared to pay less; but the increase in income from the additional users outweighs this. The underlying cost of adding a user to the Internet is so low that this is always true.
The addition of multicast (multiple-destination delivery of data) to the Internet has led to another refinement of the model: receivers, rather than senders, decide which data they need, and how much.
In radio and television this has been known for some time. In computer networks, where we wish to find information, it is less obvious. However, if we disseminate information, then the easiest way to cope with very large-scale groups is to copy the television model, and allow people to "tune in". This does not mean that we cannot charge, or that we have no security.
These two requirements, charging and security, are easily met. Receivers specify which groups they join, and this information propagates through the routers, which already support audio, video and data distribution at various quality levels. As the routers log the data, they add tags (classifiers) recording which users are receiving which information, and when. This also allows users with less capable equipment to specify a lower-grade delivery, so the network can save resources by delivering only a subset of the data: lower-resolution video, for instance, or a lower frame rate.
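A sketch of this receiver-driven selection using IP multicast group membership in Python; the group addresses and layer names are invented, and the assumption here is that the sender puts each quality layer on its own multicast group so that a receiver "tunes in" to only as many layers as it can handle.

```python
# Receiver-driven quality selection: join only the multicast groups
# (quality layers) this receiver's equipment can cope with.

import socket
import struct

LAYERS = {                       # quality layer -> multicast group
    "base":        "224.1.1.1",  # lowest resolution / frame rate
    "enhancement": "224.1.1.2",  # extra detail for capable receivers
}

def join(sock, group):
    """Ask the network to deliver this group's packets to us."""
    mreq = struct.pack("4s4s", socket.inet_aton(group),
                       socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", 5000))

join(sock, LAYERS["base"])           # every receiver takes the base layer
# join(sock, LAYERS["enhancement"])  # only well-equipped receivers add this
```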
For security, we encrypt all data at source, and only users with the correct key can decrypt the data. This relies on a more permissive approach to privacy in networks than has been politically feasible for some time, but it is essential if business is to take full advantage of the information superhighway.
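A minimal sketch of this encrypt-at-source model; Fernet, from the third-party Python "cryptography" package, stands in here for whichever cipher a real system would use, and the key-distribution step is assumed.

```python
# Encrypt at source; anyone may receive the packets, but only holders
# of the correct key can make sense of them.

from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()  # distributed only to paying subscribers
cipher = Fernet(key)

packet = cipher.encrypt(b"frame 1 of the video stream")

# A key-holder recovers the data.
print(cipher.decrypt(packet))  # b'frame 1 of the video stream'

# A receiver with the wrong key gets nothing useful.
intruder = Fernet(Fernet.generate_key())
try:
    intruder.decrypt(packet)
except InvalidToken:
    print("undecipherable without the correct key")
```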
Dr Jon Crowcroft is senior lecturer, computer science department, University College, London.