Communications Network Research Institute

QoS Framework for Multimedia Streaming Applications

Stuart Wallace
2 Year MPhil Research Masters Student (1st January 2008 - 31st December 2009)
Funding Agency: Science Foundation Ireland - Research Frontiers Program Award


With the ever-increasing demand placed on networks by multimedia streaming applications, it is important that the service is delivered to the end user with an acceptable quality. Due to the nature of video, it is very difficult to determine the quality of a stream at the end user. For this reason, a video stream's overall quality, known as the Quality of Service (QoS), is split into two separate entities: the Quality of Delivery (QoD) and the Quality of Experience (QoE). The QoD is based on the end-to-end transmission over the network in terms of reliability, packet loss, throughput and delay. The QoE is based on the quality of the received multimedia stream as perceived by the user. QoE is usually determined by running large-scale tests in which many participants view video streams and supply a quality score. It is apparent that the relationship between the QoD and the QoE is extremely complex and depends upon many independent factors. These problems are further compounded by parameters specific to each stream, including the codec used to generate the stream, error concealment and error correction techniques, the packetization scheme employed, and its configuration.

Description of Project

This project proposes to develop a QoS evaluation framework for streamed multimedia applications which explicitly considers network transmission impairments and end-user perceived quality. The work will develop a framework that maps the distribution of subjective and objective quality caused by spatio-temporal transmission impairments and inter-frame dependencies. It is initially proposed to investigate the use of statistical techniques to realize this mapping.


January 2008

At this early stage of the project, work has been undertaken to investigate how the QoS is affected by network parameters and induced impairments. To realize this, a simple test bed has been implemented to perform live system tests. On the wired side, the test bed consists of a server running Windows Server 2003 with Darwin Streaming Server, a Cisco Aironet 1200 Access Point (modified to operate as an IEEE 802.11e/WMM (WiFi MultiMedia) enabled QAP), a Windows XP PC that monitors the network traffic using the CNRI Wireless Resource Monitor, and a Windows XP PC that generates video traffic. The wireless side of the test bed consists of a Windows XP computer fitted with a Netgear Dual Band Wireless PC Card. Further tests will introduce more clients on the wireless side of the test bed.

Fig 1.0: Experimental Testbed.

Initial testing focuses on the effects of varying the Contention Window (CW), Arbitration Inter-Frame Space Number (AIFSN), and Transmission Opportunity (TXOP) values of the IEEE 802.11e/WMM standard on the QoD.

Cisco 1200 AP

To function as an IEEE 802.11e/WMM enabled QAP, the firmware on the Cisco 1200 Access Point needed to be upgraded. This was achieved by downloading the latest firmware version from the Cisco website and using a Trivial File Transfer Protocol (TFTP) server to load it onto the Access Point. Once this was done, the 802.11e and WMM parameters were set using the Cisco web browser interface. The EDCA values can be changed either through the web browser interface or via a Telnet session on the QAP.

Traffic Generator

Currently the testbed utilizes the Distributed Internet Traffic Generator (D-ITG) to generate the simulated video traffic. This tool allows simulated video traffic to be generated with varying packet sizes and for different durations, and should aid in examining video quality when transported over networks with varying loads.
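As an illustration, the packet rate D-ITG must sustain follows directly from the target bitrate and payload size. The sketch below builds an ITGSend command line using D-ITG's documented options for destination (`-a`), protocol (`-T`), constant payload size (`-c`), constant packet rate (`-C`), and duration in milliseconds (`-t`); the helper name, destination address, and example rates are ours, and exact flags may vary between D-ITG versions:

```python
def itgsend_cmd(dest: str, bitrate_kbps: int, payload_bytes: int, duration_s: int) -> list[str]:
    """Build an ITGSend command line for constant-rate UDP traffic.

    The packet rate follows from the target bitrate and payload size:
    pps = bitrate_bps / (payload_bytes * 8).
    """
    pps = round(bitrate_kbps * 1000 / (payload_bytes * 8))
    return ["ITGSend",
            "-a", dest,                    # destination host (hypothetical address below)
            "-T", "UDP",                   # transport protocol
            "-c", str(payload_bytes),      # constant payload size in bytes
            "-C", str(pps),                # constant packet rate in packets/s
            "-t", str(duration_s * 1000)]  # duration in milliseconds

# e.g. a 2048 kbps stream with 1024-byte packets requires 250 packets/s
print(" ".join(itgsend_cmd("192.168.0.10", 2048, 1024, 60)))
```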


Wireless Resource Monitor

The Wireless Resource Monitor is used to monitor the real-time load on the network at any given time. This tool is also useful as it allows the nature of the wireless load to be analyzed, recorded, and replayed at a later stage.

June 2008

Experiment 1

Examine the effects of varying the packet size and EDCA values on the packet delivery rate of a simulated video stream.

RTPsend was used to transmit a simulated video stream at a rate of 7168 kbps with packet sizes of 128 bytes, 256 bytes, 512 bytes and 1024 bytes. RTPdump was then used to capture the received packets at the client side. This test was carried out for all combinations of CWmin = 2, 4, 6, 8 and AIFSN = 2, 4, 6, 8.

Overall, 64 tests (4 packet sizes × 4 CWmin values × 4 AIFSN values) were carried out to complete the parameter set.
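The parameter set above can be enumerated directly; a minimal sketch confirming the test count:

```python
import itertools

packet_sizes = (128, 256, 512, 1024)  # bytes
cwmin_values = (2, 4, 6, 8)
aifsn_values = (2, 4, 6, 8)

# Every (packet size, CWmin, AIFSN) combination run in Experiment 1
tests = list(itertools.product(packet_sizes, cwmin_values, aifsn_values))
print(len(tests))  # 64
```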

A purpose-built Perl script was then used to compare the transmitted video file with the file captured at the client side and compute the packet loss rate.

These values were then imported into MATLAB for graphical representation.
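The comparison the Perl script performs can be sketched (in Python here, for illustration) by matching RTP sequence numbers between the transmitted and captured traces; the function name and inputs are assumptions, as the actual script operates on RTPdump capture files:

```python
def packet_loss_rate(sent_seqs, received_seqs):
    """Fraction of transmitted packets missing from the client-side capture,
    matched by RTP sequence number."""
    sent = set(sent_seqs)
    received = set(received_seqs)
    lost = sent - received
    return len(lost) / len(sent)

# e.g. 1000 packets sent, 950 captured at the client -> 5% loss
print(packet_loss_rate(range(1000), range(950)))  # 0.05
```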

Below are the graphs of the packet loss rate for two of the tests carried out.

Fig 2.0: Packet loss rate vs. CWmin and Packet Size when AIFSN = 2

Fig 2.1: Packet loss rate vs. CWmin and Packet Size when AIFSN = 4


Experiment 2

Examine the packet loss rate experienced by the video stream with the default 802.11b and 802.11e EDCA settings, compared to the packet loss rate with the WLAN Radio Resource Controller (WRRC) active.

Using the default EDCA values for 802.11b {AIFSN = 2, CWmin = 5, CWmax = 10}, a constant 2 Mbps video stream was generated on the Video Class of Service. A traffic ramp from 64 kbps to saturation was generated on the Best Effort Class of Service, and the effect of the increasing load on the available capacities was monitored. When the ramp had completed its run, the WRRC was employed to dynamically modify the EDCA values, and the available capacities were again monitored with the ramp running. Packet sizes of 128 bytes and 1024 bytes were used for the Best Effort traffic; 1400-byte packets were used for the video stream.
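The offered-load ramp can be sketched as a simple schedule. The step size and nominal saturation point below are assumptions for illustration only; the report states just that the ramp runs from 64 kbps to saturation:

```python
def ramp_schedule(start_kbps=64, step_kbps=64, saturation_kbps=6000):
    """Offered-load steps (kbps) rising linearly from the starting rate
    until the nominal saturation point (both values assumed here)."""
    return list(range(start_kbps, saturation_kbps + 1, step_kbps))

steps = ramp_schedule()
print(steps[0], steps[-1], len(steps))
```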

This experiment was then carried out again using the default 802.11e EDCA values, shown here in the form CoS{AIFSN, CWmin, CWmax}: Voice{1, 3, 4}, Video{1, 4, 5}, Background{3, 5, 7}, Best Effort{7, 5, 10}.
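For reference, CWmin and CWmax are conventionally expressed as exponents in 802.11, with the window size in slots given by CW = 2^n − 1. The sketch below expands the values above on the assumption that the report follows that convention:

```python
EDCA_DEFAULTS = {  # CoS: (AIFSN, CWmin exponent, CWmax exponent), values from the report
    "Voice":       (1, 3, 4),
    "Video":       (1, 4, 5),
    "Background":  (3, 5, 7),
    "Best Effort": (7, 5, 10),
}

def cw_slots(exponent: int) -> int:
    """Contention window size in slots for a CW exponent: CW = 2^n - 1."""
    return 2 ** exponent - 1

for cos, (aifsn, cwmin, cwmax) in EDCA_DEFAULTS.items():
    print(f"{cos}: AIFSN={aifsn}, CWmin={cw_slots(cwmin)} slots, CWmax={cw_slots(cwmax)} slots")
```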

The figures below show the effect of the increasing load on the Best Effort and Video Classes. The red plot denotes the load on the Class of Service while the green plot denotes the available capacity.

Fig 3.0(a,b): Effects of traffic load on available capacity, 802.11b, 128 byte packets. (a)Best Effort (b)Video

Fig 3.1(a,b): Effects of traffic load on available capacity, 802.11e, 128 byte packets. (a)Best Effort (b)Video

Fig 3.2(a,b): Effects of traffic load on available capacity, 802.11b, 1024 byte packets. (a)Best Effort (b)Video

Fig 3.3(a,b): Effects of traffic load on available capacity, 802.11e, 1024 byte packets. (a)Best Effort (b)Video

It can be clearly observed from these graphs that, with the WRRC activated, there is a large improvement in the capacity available to the video stream. It is also interesting to note that the default 802.11e EDCA settings heavily penalize the Best Effort traffic with little improvement in the capacity available to the video stream.



Further Work