
Understanding network performance bottlenecks

Understanding the causes of degraded network performance, and how to mitigate them, is a high-stakes endeavor for many companies. For example, delaying Internet search results by only a few hundred milliseconds can cost a search provider like Google or Bing millions of dollars: in 2009, Microsoft Bing found that introducing 500 ms of extra delay translated into 1.2% less advertising revenue. Another experiment, performed by Google, showed that if a small amount of latency had been introduced for Google's whole user base, the company would have lost 0.75% of its 2009 revenue, or $75M per year [6].

The Internet is a large interconnection of thousands of independent networks: Internet Service Providers (ISPs) like AT&T and Telenor; content providers such as Google and Yahoo; and corporate, government, and campus networks. Two independent networks can interconnect via dedicated long-haul leased lines or at shared colocation sites; the latter are referred to as Internet Exchange Points (IXPs) [1]. All these networks interconnect to facilitate the end-to-end (e2e) delivery of data, and they constantly strive to improve delivery performance, e.g. by reducing e2e delay and avoiding loss. Such goals are usually pursued by overprovisioning the underlying network infrastructure. Despite these efforts, users still occasionally suffer degraded network performance, in forms such as long website loading times, hiccups during Skype calls, and an inability to access online content.

In this thesis work, we intend to investigate end-to-end delay and packet loss in the Internet and try to localize their sources. The common wisdom is that bottlenecks and congested links usually sit at the edge of the network, i.e. between your home and your ISP. We want to check whether this common wisdom holds. The results of this work will contribute to our understanding of the current state of the Internet; inform application developers and network protocol designers; and hopefully foster future research on network design and transport protocols.

For more information please contact Ahmed Elmokashfi (ahmed@simula.no)  and Andreas Petlund (apetlund@ifi.uio.no).

 

What you should know:

A general understanding of IP networks

The ability to program in any language (preferably Perl or Python)

 

What you will do:

Design and run active measurements from a diverse set of hosts located around the globe. These will be NorNet nodes [2], PlanetLab nodes [3], and possibly other hosts rented from commercial cloud providers such as Amazon EC2 [4].
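A first step in such a design could be a simple scheduler that spreads the target websites across the available vantage points. The sketch below is illustrative only: the node and target names are made up, and a real harness would obtain node lists from the NorNet and PlanetLab testbeds.

```python
from itertools import cycle

def assign_targets(vantage_points, targets):
    """Assign measurement targets to vantage points round-robin,
    so each node probes a roughly equal share of the websites."""
    schedule = {vp: [] for vp in vantage_points}
    for vp, target in zip(cycle(vantage_points), targets):
        schedule[vp].append(target)
    return schedule

if __name__ == "__main__":
    # Hypothetical node and site names, for illustration only.
    plan = assign_targets(["nornet-1", "planetlab-1"],
                          ["example.com", "example.org", "example.net"])
    print(plan)
```

Each node would then run its share of the measurements in parallel, which keeps per-node probing load low when the target list grows to a few thousand sites.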

You will measure loss and delay between these hosts and a few thousand websites. Loss and delay will be measured in a hop-by-hop fashion using existing tools like MTR [5].
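As a sketch of what the measurement pipeline might look like, the parser below extracts per-hop loss and average RTT from the textual report printed by `mtr --report`. The column layout (Loss%, Snt, Last, Avg, Best, Wrst, StDev) is assumed from common mtr releases and may vary, so a real harness should validate it before parsing.

```python
import re

# One hop line of an `mtr --report` table, e.g.:
#   3.|-- 203.0.113.7               20.0%    10   31.4  30.8  28.9  35.0   2.1
HOP_RE = re.compile(
    r"^\s*(?P<hop>\d+)\.\|--\s+(?P<host>\S+)\s+"
    r"(?P<loss>[\d.]+)%\s+(?P<sent>\d+)\s+"
    r"(?P<last>[\d.]+)\s+(?P<avg>[\d.]+)"
)

def parse_mtr_report(text):
    """Return a list of (hop, host, loss_pct, avg_rtt_ms) tuples."""
    hops = []
    for line in text.splitlines():
        m = HOP_RE.match(line)
        if m:
            hops.append((int(m.group("hop")), m.group("host"),
                         float(m.group("loss")), float(m.group("avg"))))
    return hops

# Sample report text (addresses are documentation prefixes, not real hosts).
SAMPLE = """\
HOST: vantage1                    Loss%   Snt   Last   Avg  Best  Wrst StDev
  1.|-- 192.168.1.1                0.0%    10    1.2   1.3   1.1   1.9   0.2
  2.|-- 10.10.0.1                  0.0%    10    8.5   9.1   8.2  12.4   1.3
  3.|-- 203.0.113.7               20.0%    10   31.4  30.8  28.9  35.0   2.1
"""

if __name__ == "__main__":
    for hop in parse_mtr_report(SAMPLE):
        print(hop)
```

In a live measurement, the report text would come from running `mtr --report -c <count> <target>` on each vantage point and collecting the output centrally.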

Use a set of statistical and topology-mapping methods to attribute delay and loss to the last mile, core networks, or IXPs.
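One simple statistical heuristic of this kind (our own illustrative assumption, not a method prescribed by the project) is to flag the first hop where the loss rate jumps noticeably above everything seen before it, then map the hop's position along the path to a coarse network segment. Real attribution would combine this with AS-level topology maps and IXP prefix lists as in [1].

```python
def locate_loss(per_hop_loss, threshold=5.0):
    """per_hop_loss: loss percentages along the path, first hop first.
    Return the 1-based index of the first hop whose loss exceeds the
    maximum seen so far by more than `threshold` points, or None."""
    seen_max = 0.0
    for i, loss in enumerate(per_hop_loss, start=1):
        if loss - seen_max > threshold:
            return i
        seen_max = max(seen_max, loss)
    return None

def classify_hop(hop_index, path_length):
    """Crude positional labels: first two hops as the last mile, final
    two as the destination edge, everything in between as the core."""
    if hop_index <= 2:
        return "last-mile"
    if hop_index >= path_length - 1:
        return "destination edge"
    return "core"

if __name__ == "__main__":
    losses = [0.0, 0.0, 0.0, 12.0, 11.0, 10.0]
    hop = locate_loss(losses)
    print(hop, classify_hop(hop, len(losses)))  # hop 4, in the core
```

The threshold guards against measurement noise: ICMP rate limiting at a single router often shows transient per-hop loss that does not persist to later hops, so persistent loss from some hop onward is a stronger signal.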

 

What you will learn:

A better understanding of IP routing and Internet architecture, which is very useful to students seeking PhD positions or a career related to computer networks

An opportunity to run a large-scale, real-world experiment

Improved understanding and use of statistical methods

Depending on the outcome and funding availability, a possibility to continue as a PhD student at Simula

 

References:

[1] Brice Augustin, Balachander Krishnamurthy, and Walter Willinger. IXPs: mapped? In Proc. Internet Measurement Conference (IMC) 2009, pp. 336-349.

[2] NorNet. http://www.nornet-testbed.no

[3] PlanetLab. http://www.planet-lab.org

[4] Amazon Elastic Compute Cloud (Amazon EC2). http://aws.amazon.com/ec2/

[5] MTR. http://www.bitwizard.nl/mtr/

[6] M. Mayer. In Search of... A better, faster, stronger Web. In Proc. Velocity 2009, June 2009.

Published Sep. 24, 2013 12:14 - Last modified Sep. 3, 2015 09:15

Scope (credits)

60