Cost-quality tradeoffs in the Internet

Jean-Chrysostome Bolot
INRIA, B.P. 93, 06902 Sophia-Antipolis Cedex, France

Computer Networks and ISDN Systems 28 (1996) 645-651

Abstract

The data delivery service currently offered in the Internet is a point-to-point best effort service. Work is underway in various IETF (Internet Engineering Task Force) working groups to include new services for applications that require a specific quality of service. Examples of such applications include the so-called real-time applications such as audio and videoconferencing, distributed interactive simulation, etc. The cost in terms of network resources for providing these services is higher than that for providing best effort service. Thus, the existence of new services presents a tradeoff between quality and cost. Furthermore, recent work has shown that many real-time applications can adapt their output rates (and hence their network resource requirements) depending on network conditions (and hence on the available network service). Such rate adaptive applications present a further tradeoff between cost and quality of the data sent into the network. In this paper, we describe these tradeoffs and analyze their impact on the applications and on the network. We illustrate our points with recent results and measurements obtained with IVS, which is a rate-adaptive audio and videoconference application for the Internet developed at INRIA.

Keywords: Internet; Adaptive applications; Videoconferencing; Integrated services Internet

1. Introduction

The Internet uses datagram switching as a means of dynamically allocating network resources on a demand basis. Datagram switching provides flexible resource allocation, but it provides little control over the packet delay and loss processes at the switches. Specifically, when the scheduling discipline at the switches is the FIFO discipline, the departure times of packets from a particular connection depend very strongly on the order of arrival of the packets from other connections at the switch. This makes it essentially impossible to provide the guarantees, typically expressed in terms of minimum bandwidth or maximum delay, associated with real-time applications. Two approaches have emerged to tackle this problem. One approach is to augment the current best effort service to include new services that provide the performance guarantees desired by the applications. Examples of such services identified within the IETF include the guaranteed service for intolerant applications (i.e. applications that require hard performance guarantees), and the predictive service for tolerant applications (i.e. applications that can live with fairly reliable guarantees). Another approach is to adapt the applications to the service provided by the network. An example of this is given by the so-called rate adaptive applications, which adjust their output rate based on network conditions. These applications trade off bandwidth (and hence cost in terms of network resources)

0169-7552/96/$15.00 © 1996 Elsevier Science B.V. All rights reserved.

for the quality of the data delivered to the destinations. The cost of implementing the first approach is high, since it requires that the architecture of the Internet be changed, and that admission control, policing, reservation, pricing, and/or sophisticated scheduling mechanisms be implemented in the network. The second approach does not require special support from the network, and hence it can be implemented in the current Internet. However, it is not clear whether real-time applications in general can be made to adapt to network conditions, and whether adaptation alone is enough to provide the desired performance levels. We discuss these issues next. In Section 2, we describe our experience with adaptive applications in the current Internet. In Section 3, we describe how this experience can lead to an understanding of the need for new services, and how these services could be offered in an integrated services Internet.

2. Supporting real-time applications in the current Internet

The current Internet offers a single class best effort service. From a connection's point of view, this amounts in practice to offering a channel with time-varying capacity. This capacity is not known in advance since it depends on the (a priori unknown) behavior of other connections throughout the network. To avoid rate mismatch (and hence network congestion), it is crucial that applications adapt to the capacity available in the network at any given time. One way to do this is via a control mechanism which adjusts the rate at which packets are sent over a connection based on feedback information about the network state.

Feedback control mechanisms are already used in the Internet to control sources of non real-time traffic. The best example is the window control mechanism of TCP. There, the feedback information is packet losses detected by timeouts or multiple acknowledgements at the source, and the control scheme is Jacobson's dynamic window scheme [12]. The idea of using similar control mechanisms for sources of real-time traffic is not new. Consider for example the case of video sources. Video has conventionally been transmitted over specific networks which provide connections with constant or nearly constant capacity channels (e.g. telephone or CATV networks). However, the rate of a video sequence can vary rapidly with time due to the effects of scene complexity and motion. The problem, therefore, is to obtain from the variable rate sequence a constant rate stream that can be sent into the network. This is typically done by sending the variable rate stream into a buffer which is drained at a constant rate. The amount of data in the buffer is used as feedback information by a controller which adapts the output rate of the coder in order to prevent buffer overflow or underflow [4]. Feedback mechanisms for video sources have also been proposed for networks with variable capacity channels. There, the goal is to adjust the output rate of video coders based on feedback information about changing network conditions, i.e. changing capacity in the network [9,25]. For both fixed and variable capacity channels, the feedback control mechanism trades off image quality for network resource (specifically bandwidth) requirements, the goal being to maximize the perceptual quality of the image received at the destinations while minimizing the bandwidth used by the video transmission.

We next illustrate this tradeoff with measurements and results obtained with IVS. IVS is a software videoconference system for the Internet developed at INRIA [23]. It includes an H.261 video codec [10] and a panoply of audio codecs. IVS data is sent over the Internet using IP multicast, UDP and RTP. A central part of the video codec is the compression/coding algorithm. In H.261, a picture is divided into blocks of 8 × 8 pixels. A discrete cosine transform (DCT) converts the blocks of pixels into blocks of frequency coefficients. These coefficients are quantized and then encoded using a Huffman encoding technique. In addition, images can be coded using intraframe or interframe coding. The former encodes each picture in isolation. The latter encodes the difference between successive pictures.
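The constant-rate buffer smoothing described above can be sketched as a small control loop. This is a toy model for illustration only: the rate formula, thresholds, and constants below are our assumptions, not code or parameters from any actual codec.

```python
# Toy sketch of constant-rate smoothing: a coder emits a variable number of
# bits per frame into a buffer drained at a constant channel rate; the
# controller coarsens the quantizer when the buffer fills and refines it
# when the buffer empties, trading image quality for a bounded buffer.

BUFFER_CAPACITY = 100_000   # bits; arrivals beyond this would overflow
DRAIN_PER_FRAME = 8_000     # constant channel rate, in bits per frame time

def frame_bits(scene_complexity: float, quantizer: int) -> int:
    """Toy coder model: more scene complexity -> more bits;
    a coarser (larger) quantizer -> fewer bits."""
    return int(scene_complexity * 40_000 / quantizer)

def simulate(complexities, quantizer=4):
    """Return the (buffer occupancy, quantizer) trajectory over the frames."""
    buffer_bits = BUFFER_CAPACITY // 2
    history = []
    for c in complexities:
        buffer_bits = min(BUFFER_CAPACITY, buffer_bits + frame_bits(c, quantizer))
        buffer_bits = max(0, buffer_bits - DRAIN_PER_FRAME)
        # Feedback: adapt the quantizer to the buffer occupancy.
        if buffer_bits > 0.75 * BUFFER_CAPACITY and quantizer < 31:
            quantizer += 1      # coarser coding: lower rate, lower quality
        elif buffer_bits < 0.25 * BUFFER_CAPACITY and quantizer > 1:
            quantizer -= 1      # finer coding: higher rate, higher quality
        history.append((buffer_bits, quantizer))
    return history
```

Feeding the loop a complex scene drives the quantizer up until the coder's output rate matches the drain rate; a quiet scene drives it back down, which is exactly the quality-for-bandwidth tradeoff at work.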
Fig. 1. Structure of the H.261 coder.

It turns out to be surprisingly easy to change the output rate of the coder by adjusting parameters of the coder. In IVS, we adjust three such parameters, namely the refresh rate, the quantizer, and the movement detection threshold. The refresh rate characterizes the rate at which frames are grabbed from the camera. Decreasing the refresh rate decreases the average output rate of the coder. However, it also decreases the frame rate and hence the quality of the video delivered at the receivers. The quantizer characterizes the granularity used to encode the coefficients obtained from the discrete cosine transformation. Increasing the quantizer is equivalent to encoding the frequency coefficients more coarsely, and thus reducing the quality of the transmitted image. However, it is also equivalent to reducing the number of bits used to encode pixels, and thus reducing the output rate of the coder. The movement detection threshold characterizes the blocks in a frame which are "sufficiently different" from those in the previous frame. Increasing the threshold value decreases the number of blocks which are considered to have changed since the last frame. However, it also decreases the sensitivity of the coder to movement and yields lower image quality. Thus adjusting the parameters of the video coder is an easy way, particularly in a software coder such as IVS, to trade off a lower output rate (i.e. lower bandwidth requirements) for a lower image quality. Which of the three parameters above is adjusted depends on the specific requirements of a video application. The refresh rate is adjusted if the precise rendition of individual images is important. The quantizer and the movement detection threshold are adjusted if the frame rate or the perception of movement is important. Other ways to control the output rate of video coders are described in [14,15,7].

Unfortunately, it is not so easy to modify the parameters of an audio coder to adjust its output rate. This is essentially because different compression schemes based on very different principles are used to obtain audio streams with different bandwidth requirements. One way around this problem is to use a panoply of audio codecs. IVS and other audioconference systems such as VAT [13] typically use PCM (at 64 kb/s), ADPCM (between 16 kb/s and 48 kb/s), GSM (at 11 kb/s), and LPC (below 5 kb/s) codecs. This makes it possible to choose the coding scheme most appropriate for the bandwidth available in the network at any given time.
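A back-of-the-envelope model makes the direction of each knob concrete. The rate formula below is our own illustrative choice, built only so that each parameter moves the output rate the way the text describes; it is not the H.261 or IVS rate model.

```python
# Toy model of the three rate-control knobs discussed above (illustrative
# formula, not a real codec model).

def estimated_rate_kbps(refresh_hz: float, quantizer: int, move_thresh: int,
                        scene_activity: float = 0.5) -> float:
    """Rate grows with the frame grab rate, shrinks with a coarser quantizer,
    and shrinks with a higher movement-detection threshold (fewer blocks
    are considered changed and re-sent)."""
    bits_per_block = 120.0 / quantizer               # coarser quantizer -> fewer bits
    changed_fraction = scene_activity / (1 + move_thresh)  # higher threshold -> fewer blocks
    return refresh_hz * bits_per_block * changed_fraction

base = estimated_rate_kbps(refresh_hz=10, quantizer=4, move_thresh=1)
# Each knob lowers the rate at the cost of a different aspect of quality:
assert estimated_rate_kbps(5, 4, 1) < base    # lower refresh rate -> lower frame rate
assert estimated_rate_kbps(10, 8, 1) < base   # coarser quantizer -> blockier images
assert estimated_rate_kbps(10, 4, 3) < base   # higher threshold -> less motion sensitivity
```

The choice among the knobs mirrors the application requirements above: an application that values individual image fidelity would leave the quantizer alone and lower the refresh rate, and vice versa.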

At this point, we have shown that videoconference applications in general can adjust their bandwidth requirements. To use these applications over the Internet, however, we need a feedback mechanism to elicit information about the state of the network, and a control algorithm to adjust the audio or video output rate accordingly. The goal of the feedback mechanism is to estimate the state of the network, or rather its impact on the quality of the image received at the destinations. Since the number of destinations can be large and might even be unknown (recall that most audio and videoconference applications are expected to run in a multicast environment), the mechanism must scale well with the size of the multicast group. One such mechanism is described in [2]. Furthermore, we need to identify variables to evaluate the perceived quality of the images received at the destinations. The Mean Opinion Score (MOS) has been used extensively as a subjective measure to design and compare video coding algorithms [11]. However, a MOS-based feedback mechanism would be impractical, since it would have to include the user in some kind of continual rating procedure. We thus have to rely on objective measures. Unfortunately, objective measures typically do not reflect the user's perception of an image [11]. The signal to noise ratio (SNR) is a measure of the spatial quality of the image. However, it is an imperfect measure because the perceptual quality in a sequence of frames depends on the quality of each frame in the sequence. Furthermore, it cannot be computed by the receivers since it requires that the original image be available. Another objective measure is the frame rate, i.e. the rate at which video frames arrive at the destinations. Yet another measure is the loss rate of the packets on the paths between the source and the destinations. We have chosen in IVS feedback information based on measured packet loss rates at the destinations. Specifically, each receiver

measures an average packet loss rate observed during a given time interval. It then considers the network to be unloaded, reasonably loaded, or overloaded (i.e. congested) depending on the measured rate. The control algorithm at the source gradually increases the audio or video bandwidth until the network is reasonably loaded. The source then transmits at this rate, continually polling its receivers to ensure that the network does not become congested. If the network is detected by one or more of the receivers to be congested, the source then reduces its output rate. Of course, a lower bandwidth at the source translates into a lower image quality for all destinations. However, it also translates into lower bandwidth requirements, and hence lower losses and delays in the network. It remains to quantify this bandwidth-gained/quality-lost tradeoff. The problem again is to find a way to estimate the quality of the video data delivered to the receivers. We argued earlier that the loss rate and the frame rate are good indications of this quality. Experiments carried out with colleagues at University College London (UCL) show that the control mechanism decreases the bandwidth requirements as well as the loss rate at the receiver, as long as the video traffic makes up a significant part of the total traffic on the path from INRIA to UCL.
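The control loop just described can be sketched as follows. The loss-rate thresholds, the additive increase step, and the multiplicative decrease are our illustrative choices; the text does not specify IVS's exact constants.

```python
# Illustrative sketch of loss-driven source rate control (constants are ours).

UNLOADED_MAX = 0.02    # loss rate below which a receiver sees the network as unloaded
CONGESTED_MIN = 0.10   # loss rate above which a receiver sees congestion

MIN_RATE_KBPS = 10.0
MAX_RATE_KBPS = 1000.0

def next_rate(rate_kbps: float, receiver_loss_rates) -> float:
    """One control step at the source, given the loss rates reported
    by the receivers in the multicast group."""
    if any(l >= CONGESTED_MIN for l in receiver_loss_rates):
        # At least one receiver is congested: back off multiplicatively.
        return max(MIN_RATE_KBPS, rate_kbps / 2)
    if all(l <= UNLOADED_MAX for l in receiver_loss_rates):
        # Network unloaded everywhere: probe for more bandwidth additively.
        return min(MAX_RATE_KBPS, rate_kbps + 10.0)
    return rate_kbps  # reasonably loaded: hold the current rate

rate = 100.0
rate = next_rate(rate, [0.01, 0.00, 0.015])   # unloaded: increase
rate = next_rate(rate, [0.05, 0.01])          # reasonably loaded: hold
rate = next_rate(rate, [0.02, 0.20])          # one congested receiver: back off
```

Note that the worst receiver governs the decrease, which is what makes a single congested path lower the quality seen by all destinations, as the text observes.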

3. Supporting real-time applications in an integrated services Internet

Our experience and that of others obtained with various audio and video transmission systems for the Internet such as VAT [13], NV [7], or NEVOT [20] indicates that the rate adjustment of audio and video coders makes it possible to maintain videoconferences of reasonable quality over the Internet. Of course, the audio and video signals (and therefore the user satisfaction) are degraded as the available capacity decreases, i.e. as the network load increases. It is not clear, however, whether this degradation is still tolerable when the available capacity is "very small". Consider for example the case of audio data. Known audio compression algorithms require a bandwidth equal to at least a few hundred bits per second [11]. There is clearly a problem if the available capacity in the network is lower than this minimum value. Ergonomic studies and anecdotal evidence from the Internet suggest (although this question would certainly benefit from further work) that users find audio and video data useful as long as the information content is above some minimum level which depends on the task at hand [26,1]. Therefore, there is a floor to the rate at which a real-time application can transmit and still send a useful stream of information. This raises the problem of which application to satisfy when two applications compete for the same bandwidth and their combined minimum bandwidth requirements, i.e. combined floor rates, exceed the available bandwidth. The problem can be resolved either by turning off one of the applications (this is generally done by the end user who realizes that the network does not provide the desired service), or by preventing this situation from happening in the first place. This latter solution is typically implemented by means of admission control mechanisms. However, it is important to note that the above problem is not likely to occur frequently (and hence admission control is not likely to be required or useful) if the floor rates are low and if the network is dimensioned appropriately. Unfortunately, it is not clear yet which fraction of applications will be rate adaptive, and what their floor rates will be. The answers to these questions impact the way the network architecture should be designed to provide the services required by the applications. Scott Shenker has recently developed a simple model which brings out this impact [21]. Consider a network with a single bottleneck link with bandwidth b shared by N applications. For simplicity, all applications are assumed to be identical. The quality of an application as observed by a user of this application is captured by a function referred to as a utility function U which depends on the service provided by the network. In our case, we assume that the service is completely characterized by the available capacity.
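This model can be explored numerically. The sketch below evaluates the total utility T(N) = N × U(b/N) for a concave utility and for a utility that is convex near the origin; both utility functions are our own illustrative choices, not taken from Shenker's model.

```python
# Numerical sketch of the utility model: N identical applications share a
# bottleneck of bandwidth b, each obtaining utility U(b/N).
import math

b = 100.0  # bottleneck bandwidth

def U_concave(x):
    """E.g. elastic data applications: diminishing returns everywhere."""
    return math.log(1 + x)

def U_convex_origin(x):
    """E.g. rate adaptive real-time applications: little utility until
    the per-application rate passes a 'floor'."""
    return x**2 / (1 + x**2)

def T(U, N):
    """Total network utility with N applications sharing the link."""
    return N * U(b / N)

# With a concave U, total utility keeps growing as applications are added...
assert all(T(U_concave, n + 1) > T(U_concave, n) for n in range(1, 200))

# ...but with a U convex near the origin, T(N) rises and then falls, so
# admitting applications beyond some N0 lowers total utility.
values = [T(U_convex_origin, n) for n in range(1, 200)]
peak = values.index(max(values))
assert 0 < peak < len(values) - 1 and values[-1] < values[peak]
```

For the convex-near-origin case above, T(N) = b²N/(N² + b²), which peaks at N0 = b and then decays: past the peak, every additional application hurts more than it helps.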
The utility for one application is U(b/N), and the total network utility is T(N) = N × U(b/N). The question then is how to maximize T(N). The answer to this question depends on the shape of the utility function U. Refer to Fig. 2. Shenker has shown that T(N) increases monotonically if U is concave as in case (a). However, T(N) first increases but then decreases as N exceeds some value N0 if U is convex near the origin as in case (b). It is clear that in the latter case, the number of applications sharing the bandwidth

Fig. 2. Example utility functions: quality as a function of service (bandwidth), cases (a) and (b).
should not exceed N0, i.e. admission control should be exercised at the entrance to the network. We mentioned earlier that rate adaptive audio and video applications are characterized by a curve as shown in case (b) above. This suggests that admission control mechanisms, and more generally that services other than the current best effort service, be implemented in the Internet to obtain an integrated services Internet¹. This is indeed the solution advocated in [3]. A widely held approach divides the services to be provided in an integrated packet switched network into a few basic types of services [3,8]. One type is the guaranteed service, which provides an application with guaranteed performance such as bandwidth, delay, or loss. Another type is the predictive or statistical service, which provides an application with statistical performance guarantees. Yet another type of service is the best effort service. One possible solution to providing these different services is as follows. Each output queue in a router would be organized as parallel FIFO queues sharing the output link bandwidth via a Fair Queueing or equivalent packetized head-of-line processor sharing scheduling policy [19]. In this policy, each queue is guaranteed a particular minimum bandwidth. Thus, the flows in each FIFO queue are isolated, which in turn allows different flows to be associated with services with different quality of service (QoS) characteristics. The traffic from applications requiring a guaranteed service must be regulated, or shaped, before entering the network, for example via a leaky bucket mechanism. The network can then reserve, using protocols

such as RSVP [27] or ST-2 [22], appropriate size buffers and minimum guaranteed bandwidth such that guaranteed end-to-end delays are satisfied². A videoconference application using bandwidth reservation with ST-2 is described in [6]. The traffic for applications requiring a statistical service is segregated and managed based on statistical characteristics and desired QoS. The statistical characteristics and QoS requirements of an application would be described by measures such as the effective bandwidth [16], and they would be part of a contract negotiated before data transmission. In practice, applications with similar QoS requirements could simply share FIFO queues. The best effort traffic can either be allocated a fraction of the bandwidth, or receive service only after the guaranteed and statistical traffic.

It is clear that, all other things being equal, it would be desirable to provide different types of services in the Internet rather than only the best effort service. However, it is also clear that providing these extra services is costly. The implementation of mechanisms such as the Fair Queueing discipline does involve extra cost compared to the FIFO discipline. However, the impact of providing new services is much more far-reaching. For example, it is then necessary to provide, in addition to the new services, incentives to encourage applications to request the proper service class for their requirements. In the absence of such incentives, applications could request the guaranteed service no matter what their requirements. One way of providing incentives is via pricing. Currently, most users (or organizations in general) are

¹ We point out again, however, that this conclusion does not necessarily hold if most of the applications which are expected to use the integrated Internet are rate adaptive applications with very low floor rates.

² Note, however, that the end-to-end delays here do not include the delay incurred by the as-yet-unshaped traffic in the leaky bucket, which can be extremely high for bursty traffic.


charged a flat fee to access and use the network. However, it would be desirable to charge more to users and applications that request the guaranteed service. This would provide the desired incentive in the sense that only those applications that do need the performance guarantees would be ready to pay for them [5]. The introduction of such usage-based pricing might change the (erroneous, but surprisingly common) "Internet is free" mentality which has been responsible in part for the explosive increase of the Internet in recent years. Furthermore, it requires that tariffing, billing, and monitoring mechanisms be implemented in the Internet. Even though several have argued that the overhead associated with the billing mechanisms would be manageable [18], several problems might prove difficult to solve. For example, suggestions that tariffs should be based on traffic characteristics could be hard to implement in practice given the difficulties experienced in measuring and identifying such characteristics [17]. One might then wonder whether the benefits expected from the new Internet architecture are worth the pain associated with the change. Much of the work being done in a number of IETF working groups is in fact related to answering this question. The currently held view is that the architecture of the Internet will be changed to provide services similar to the guaranteed and statistical services described above in addition to the current best effort service. Recall that the discussion on the evolving Internet architecture was triggered in this paper by the work in Section 2 on rate adaptive applications. One question then is whether such applications still have a place in the new architecture. Clearly, the results from Section 2 are still applicable to those applications that request only the best effort service. Applications that request the statistical or guaranteed service have to trade off the cost (i.e.
the charge) associated with the service requested, and the quality obtained when using this service. One way to do this in a flexible manner is to use, for example in the case of a video application, a layered or hierarchical video coding scheme. In such a scheme, the stream of video data generated by an application is split into multiple streams, each one with different bandwidth requirements than the original stream. In the two layer case, the base stream includes the low resolution information, and it can be decoded into a meaningful signal. The other stream includes enhancement information. The idea then is to transmit the important stream (the base stream in the two layer case) using the guaranteed service, and the enhancement stream using the best effort service. Note that a similar idea can already be implemented in the current Internet [24]. For example, the data from the base stream could be sent such that it is resilient (up to a point) to loss. One way to do this is to add redundancy, e.g. using forward error correction techniques. In any case, the ability to trade off cost and quality makes rate adaptive applications attractive in an integrated services network.

Acknowledgements

The insights obtained with IVS are the result of joint work with Thierry Turletti, Christian Huitema, and Ian Wakeman. The ideas presented in the paper have been shaped very much by work carried out in the End-to-End, RSVP, and Integrated Services IETF working groups, and by discussions with members (in addition to those mentioned above) of the networking groups at INRIA and UCL and with colleagues at Bellcore and at the AT&T Bell Laboratories.

References

[1] U. Bilting, A. Sasse, C-D. Schulz and T. Turletti, Remote seminars through multimedia conferencing: Experiences from the MICE project, Proc. INET '94, Prague, June 1994.
[2] J-C. Bolot, T. Turletti and I. Wakeman, Scalable feedback control for multicast video distribution in the Internet, Proc. ACM Sigcomm '94, London, September 1994, pp. 58-67.
[3] R. Braden, D. Clark and S. Shenker, Integrated services in the Internet architecture: an overview, RFC 1633, 1994.
[4] C-T. Chen and A. Wong, A self-governing rate buffer control strategy for pseudoconstant bit rate video coding, IEEE Trans. Image Process. 2 (1) (1993) 50-59.
[5] R. Cocchi, S. Shenker, D. Estrin and L. Zhang, Pricing in computer networks: Motivation, formulation, and example, IEEE/ACM Trans. Networking 1 (6) (1993) 614-626.
[6] C. Elliott, High quality multimedia conferencing through a long-haul packet network, Proc. ACM Multimedia '93, Los Angeles, Calif., August 1993, pp. 91-98.
[7] R. Frederick, Experiences with real-time software video compression, Proc. 6th Packet Video Workshop, Portland, Oreg., September 1994.
[8] M.W. Garrett, ATM service architecture: From applications to scheduling, Contribution to the ATM Forum, available from URL ftp://thumper.bellcore.com/pub/mwg/ATMEqos.mw
[9] M. Gilge and R. Gusella, Motion video coding for packet-switching networks - An integrated approach, Proc. SPIE Conf. on Visual Communications and Image Processing, Boston, Mass., November 1991.
[10] Recommendation H.261: Video codec for audiovisual services at p×64 kb/s, CCITT White Book, 1990.
[11] N. Jayant, J. Johnston and R. Safranek, Signal compression based on models of human perception, Proc. IEEE 81 (10) (1993) 1385-1422.
[12] V. Jacobson, Congestion avoidance and control, Proc. ACM Sigcomm '88, Stanford, Calif., August 1988, pp. 314-329.
[13] V. Jacobson and S. McCanne, VAT, Manual pages, Lawrence Berkeley Laboratory, University of California, Berkeley, Calif.
[14] K. Jeffay, D.L. Stone, T. Talley and F.D. Smith, Adaptive, best-effort delivery of digital audio and video across packet-switched networks, Proc. 3rd Int. Workshop on Network and OS Support for Digital Audio and Video, San Diego, Calif., November 1992.
[15] H. Kanakia, P. Mishra and A. Reibman, An adaptive congestion control scheme for real-time packet video transport, Proc. ACM Sigcomm '93, San Francisco, Calif., September 1993, pp. 20-31.
[16] F.P. Kelly, Effective bandwidths at multi-class queues, Queueing Systems 9 (1991) 5-16.
[17] W. Leland, M. Taqqu, W. Willinger and D. Wilson, On the self-similar nature of Ethernet traffic, Proc. ACM Sigcomm '93, San Francisco, Calif., September 1993, pp. 183-193.
[18] J. MacKie-Mason and H. Varian, Pricing the Internet, available from URL ftp://gopher.econ.lsa.umich.edu/pub/Papers/Pricing_the_Internet.ps.Z.
[19] A. Parekh and R.G. Gallager, A generalized processor sharing approach to flow control in integrated services networks, Proc. IEEE Infocom '93, San Francisco, Calif., March 1993, pp. 521-530.
[20] H. Schulzrinne, Voice communication across the Internet: a network voice terminal, Research Report, Department of Electrical Engineering, University of Massachusetts at Amherst, July 1992.
[21] S. Shenker, Fundamental design issues for the future Internet, presentation at the July 1993 IETF meeting.
[22] C. Topolcic, Experimental Internet Stream Protocol Version 2 (ST-2), Internet RFC 1190, October 1990.
[23] T. Turletti, H.261 software codec for videoconferencing over the Internet, INRIA Research Report 1834, January 1993.
[24] T. Turletti and J-C. Bolot, Issues with multicast video delivery in heterogeneous networks, Proc. 6th Packet Video Workshop, Portland, Oreg., September 1994.
[25] I. Wakeman, Packetized video - Options for interaction between the user, the network, and the codec, Comput. J. 36 (1) (1993).
[26] F. Wilson, I. Wakeman and W. Smith, Quality of service parameters for commercial application of video telephony, Proc. Human Factors in Telecommunication Symp., Darmstadt, Germany, March 1993.
[27] L. Zhang, S. Deering, D. Estrin, S. Shenker and D. Zappala, RSVP: a new Resource ReSerVation Protocol, IEEE Network (September 1993).

Jean-Chrysostome Bolot received his M.S. and Ph.D. from the University of Maryland at College Park in 1988 and 1991, respectively. Since then, he has been working at INRIA in Sophia Antipolis (France) in the high speed networking research group. His research interests include resource allocation and control algorithms, multimedia applications, and traffic measurement and analysis.