IEEE 2014 NS2 NETWORKING PROJECTS Simultaneously reducing latency and power consumption in OpenFlow switches

Category: Engineering

Published on September 20, 2014

Author: IEEEBEBTECHSTUDENTPROJECTS

Source: slideshare.net

Description

To get any project for CSE, IT, ECE, or EEE, contact me at 09666155510 / 09849539085, or mail us at ieeefinalsemprojects@gmail.com. Visit our website: www.finalyearprojects.org

GLOBALSOFT TECHNOLOGIES
IEEE PROJECTS & SOFTWARE DEVELOPMENTS
IEEE FINAL YEAR PROJECTS | IEEE ENGINEERING PROJECTS | IEEE STUDENTS PROJECTS | IEEE BULK PROJECTS | BE/BTECH/ME/MTECH/MS/MCA PROJECTS | CSE/IT/ECE/EEE PROJECTS
CELL: +91 98495 39085, +91 99662 35788, +91 98495 57908, +91 97014 40401
Visit: www.finalyearprojects.org  Mail to: ieeefinalsemprojects@gmail.com

Simultaneously Reducing Latency and Power Consumption in OpenFlow Switches

Abstract

The Ethernet switch is a primary building block for today's enterprise networks and data centers. As network technologies converge upon a single Ethernet fabric, there is ongoing pressure to improve the performance and efficiency of the switch while maintaining flexibility and a rich set of packet processing features. The OpenFlow architecture aims to provide flexibility and programmable packet processing to meet these converging needs. Of the many ways to create an OpenFlow switch, a popular choice is to make heavy use of ternary content addressable memories (TCAMs). Unfortunately, TCAMs can consume a considerable amount of power and, when used to match flows in an OpenFlow switch, put a bound on switch latency. In this paper, we propose enhancing an OpenFlow Ethernet switch with per-port packet prediction circuitry in order to simultaneously reduce latency and power consumption without sacrificing the rich policy-based forwarding enabled by the OpenFlow architecture. Packet prediction exploits the temporal locality in network communications to predict the flow classification of incoming packets. When predictions are correct, latency can be reduced, and significant power savings can be achieved by bypassing the full lookup process. Simulation studies using actual network traces indicate that correct prediction rates of 97% are achievable using only a small amount of prediction circuitry per port. These studies also show that prediction circuitry can help reduce the power consumed by a lookup process that includes a TCAM by 92% and simultaneously reduce the latency of a cut-through switch by 66%.

Existing System

The Ethernet switch is a primary building block for today's enterprise networks and data centers. As network technologies converge upon a single Ethernet fabric, there is ongoing pressure to improve the performance and efficiency of the switch while maintaining flexibility and a rich set of packet processing features. The OpenFlow architecture aims to provide flexibility and programmable packet processing to meet these converging needs. Of the many ways to create an OpenFlow switch, a popular choice is to make heavy use of ternary content addressable memories (TCAMs). Unfortunately, TCAMs can consume a considerable amount of power and, when used to match flows in an OpenFlow switch, put a bound on switch latency.

Proposed System

In this paper, we propose enhancing an OpenFlow Ethernet switch with per-port packet prediction circuitry in order to simultaneously reduce latency and power consumption without sacrificing the rich policy-based forwarding enabled by the OpenFlow architecture. Packet prediction exploits the temporal locality in network communications to predict the flow classification of incoming packets. When predictions are correct, latency can be reduced, and significant power savings can be achieved by bypassing the full lookup process. Simulation studies using actual network traces indicate that correct prediction rates of 97% are achievable using only a small amount of prediction circuitry per port. These studies also show that prediction circuitry can help reduce the power consumed by a lookup process that includes a TCAM by 92% and simultaneously reduce the latency of a cut-through switch by 66%.
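To make the idea concrete, the sketch below models the per-port predictor as a small, LRU-managed table keyed by a hash of the packet's match fields: a hit returns the cached flow action and bypasses the TCAM lookup, while a miss falls back to the full lookup and trains the table. This is only a minimal illustration in Java (the language listed in the software configuration below); the class names, the 32-entry table size, and the hash-based key are assumptions made for the example, not details of the paper's circuit-level design, which would also verify a prediction against the full header before acting on it.

// --- PortPredictor.java (illustrative sketch, not the paper's implementation) ---
import java.util.LinkedHashMap;
import java.util.Map;

public class PortPredictor {

    /** Result of flow classification (here just an output port). */
    public static final class FlowAction {
        public final int outputPort;
        public FlowAction(int outputPort) { this.outputPort = outputPort; }
    }

    /** Stand-in for the TCAM-backed flow table whose lookup we try to avoid. */
    public interface TcamTable {
        FlowAction lookup(long headerKey);
    }

    private static final int TABLE_ENTRIES = 32; // "small amount of prediction circuitry" (assumed size)

    // LRU-ordered cache: recently seen flows are kept, older ones are evicted.
    private final Map<Long, FlowAction> predictions =
            new LinkedHashMap<Long, FlowAction>(TABLE_ENTRIES, 0.75f, true) {
                @Override
                protected boolean removeEldestEntry(Map.Entry<Long, FlowAction> eldest) {
                    return size() > TABLE_ENTRIES;
                }
            };

    private long predictedHits = 0; // packets that bypassed the full lookup
    private long fullLookups = 0;   // packets that needed the TCAM

    /** Combine OpenFlow-style match fields into one key (hash collisions ignored in this sketch). */
    public static long headerKey(long srcIp, long dstIp, int srcPort, int dstPort, int proto) {
        long key = srcIp;
        key = 31 * key + dstIp;
        key = 31 * key + srcPort;
        key = 31 * key + dstPort;
        key = 31 * key + proto;
        return key;
    }

    /**
     * Classify an incoming packet. On a prediction hit, the TCAM lookup (and its
     * latency and power cost) is skipped; on a miss, fall back to the full lookup
     * and remember the result for the next packet of the same flow.
     */
    public FlowAction classify(long headerKey, TcamTable tcam) {
        FlowAction hit = predictions.get(headerKey);
        if (hit != null) {
            predictedHits++;
            return hit;                     // full lookup bypassed
        }
        fullLookups++;
        FlowAction action = tcam.lookup(headerKey);
        predictions.put(headerKey, action); // train the predictor
        return action;
    }

    /** Fraction of packets classified without touching the TCAM. */
    public double predictionRate() {
        long total = predictedHits + fullLookups;
        return total == 0 ? 0.0 : (double) predictedHits / total;
    }
}

A trace-driven simulation of the kind described above would call classify() for each packet in a recorded trace and then read predictionRate() to estimate how often the TCAM can be left idle; the LRU table is one simple way to capture the temporal locality the scheme relies on.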

SYSTEM CONFIGURATION:

HARDWARE CONFIGURATION:
- Processor: Pentium IV
- Speed: 1.1 GHz
- RAM: 256 MB (minimum)
- Hard Disk: 20 GB
- Keyboard: Standard Windows keyboard
- Mouse: Two- or three-button mouse
- Monitor: SVGA

SOFTWARE CONFIGURATION:
- Operating System: Windows XP
- Programming Language: Java
- Java Version: JDK 1.6 and above
