IPv6 Structure and Dual Stack

Checkpoint:
telnet route-server.ip.att.net
  1. IPv6 (beta): the 6Bone test network used prefix 3ffe::/16 (6Bone test addresses).
  2. IPv6 (RIR): the 2001::/16 block is used to allocate IPv6 addresses to service providers (the range currently assigned to providers).
  3. IPv5 (ST2: the ST and SCMP protocols) coexisted with IPv4 but was not an IPv4 successor; IPv5 is unrelated to IPv6.
  4. Hierarchical IPv6 addressing (the concept is as follows):
     Internet -> 1st-level ISP -> 2nd-level ISP -> ... -> Nth-level ISP -> Organization -> Site -> Host
  5. IPv6 stateless address autoconfiguration vs. DHCPv6 stateful configuration.
  6. Simple and efficient IPv6 header: a fixed-length (40-byte) header with few fields, aligned for 64-bit memory fetches. Processing extension headers may cost extra CPU cycles, but IPv6 drops the per-hop IP header checksum entirely.
  7. Flow Label: identifies a flow so that the same QoS treatment can be applied to all packets of that flow; the IETF initially left its detailed semantics undefined.
  8. Next Header: chains optional extension headers after the base header.
  9. Mobility (using the Routing option and the Destination Options extension header).
  10. Built-in IPsec support for IPv6.
  11. Rich transition mechanisms: dual stack, tunnels, and NAT-PT (an IPv4/IPv6 proxy).

2. IPv6 Dual Stack

Today the Internet is predominantly based on IPv4. As a result, most end systems and network devices can exchange packets with one another in a single global network. There is, however, no single global IPv6 network on the same scale, and it will take some time before one exists. New applications should therefore be designed to work in every environment: IPv4-only, IPv6-only, or mixed, as is the case for group collaboration applications.

When designing applications to use IPv6, several characteristics must be taken into account. First, the transport module should be separated from the rest of the application's functional modules.
This separation makes the application independent of the underlying network system: if the network protocol changes, only the transport module needs to be modified. The transport module should provide a communication-channel abstraction with basic channel operations and generic data structures for representing addresses. These abstractions can be instantiated by different implementations depending on the network protocol required at any given moment. The application deals only with this generic communication-channel interface, without knowing which network protocol is in use. With this design, when a new network protocol is added, developers only need to implement a new instance of the channel abstraction that handles the features of the new protocol.
Once the transport module has been designed, some implementation details depend on the type of node that will run the application: IPv4-only, IPv6-only, or dual stack.

Within the transport module, the new API extensions for IPv6 must be used, and it is strongly recommended to make the program protocol independent (for instance, with the BSD socket API, prefer getaddrinfo and getnameinfo over gethostbyname and gethostbyaddr).
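As a rough illustration of this recommendation, the sketch below uses Python's socket module (a thin wrapper over the BSD socket API) to connect without hard-coding any address family; the host and port are placeholders, and a real transport module would add timeouts and logging.

```python
import socket

def connect_any(host, port):
    """Open a TCP connection using whichever address family resolves,
    trying the resolver's results (IPv6 and/or IPv4) in order."""
    last_err = None
    for family, socktype, proto, _name, addr in socket.getaddrinfo(
            host, port, socket.AF_UNSPEC, socket.SOCK_STREAM):
        try:
            s = socket.socket(family, socktype, proto)
            s.connect(addr)
            return s          # caller is responsible for closing it
        except OSError as err:
            last_err = err
    raise last_err or OSError("no usable address")
```

Because getaddrinfo hides the family behind the returned tuples, the same code works unchanged on IPv4-only, IPv6-only, and dual-stack nodes.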

The new IPv6 functions are only usable if the protocol is supported by all the installed systems. IPv6 has now reached maturity, and most popular systems enable it by default in their standard distributions. However, IPv6 support does not force its use: only after the complete network configuration is defined will applications use IPv4 or IPv6.
Protocol-independent code is feasible if the design is based on the principal building block for transitioning: the dual stack.


A dual-stack node maintains two protocol stacks operating in parallel and can therefore communicate over either protocol. The operating system on a dual-stack node represents IPv4 peers with IPv4-mapped IPv6 addresses when an IPv6 application communicates with an IPv4 remote application. The following sections analyze the connections of IPv6 server and IPv6 client applications running on a dual-stack node.

If a server application on a dual-stack node is bound to the IPv6 wildcard address and a well-known port, it can accept connections from both IPv4 and IPv6 clients, see Figure .

When an IPv4 client connects to this IPv6 server, the dual-stack kernel presents the client's IPv4 address as an IPv4-mapped IPv6 address, since the IPv6 server can only handle IPv6 connections. The communication between the IPv4 client and the IPv6 server takes place over IPv4 datagrams, but the server will not know it is talking to an IPv4 client unless it explicitly checks the address.

When an IPv6 client application runs on a dual-stack node, it can connect to either an IPv6 or an IPv4 server. Because the client node runs dual stack, the client will try to open an IPv6 connection to the server. If the server runs over IPv4, the address returned by the resolver is the IPv4-mapped IPv6 one. Communication between the IPv4 server and the IPv6 client then takes place over IPv4, but the IPv6 client will not know it unless it explicitly checks.
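The explicit check mentioned above can be sketched with Python's standard ipaddress module, which exposes the ::ffff:a.b.c.d mapping directly; the sample addresses are documentation prefixes, not real hosts.

```python
import ipaddress

def real_peer(addr_text):
    """Return (family, address): unwrap an IPv4-mapped IPv6 address
    (::ffff:a.b.c.d) back to plain IPv4, otherwise report as-is."""
    addr = ipaddress.ip_address(addr_text)
    if addr.version == 6 and addr.ipv4_mapped is not None:
        return ("IPv4", str(addr.ipv4_mapped))
    return ("IPv%d" % addr.version, str(addr))

print(real_peer("::ffff:192.0.2.7"))   # an IPv4 peer as seen by an IPv6 socket
print(real_peer("2001:db8::1"))        # a native IPv6 peer
```

An application would run such a check on the address returned by accept() or getpeername() before assuming the transport is really IPv6.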


Figure – Protocol independent application on dual-stack host

Even when applications are written following protocol-independent rules, other issues must be considered, such as moving binary code between IPv4-only, dual-stack, and IPv6-only nodes.
Conditional-compilation options (#ifdefs in C) can be placed throughout the code to select the proper environment. If IPv6 is not supported, the IPv4-only code is selected at installation time. If the installed system does support IPv6 (at kernel level), either the IPv6 or the IPv4 code can be selected for compilation, depending on the application's requirements. Note that kernel support for IPv6 only means the IPv6 option can be activated; while the option is disabled, the node uses only the IPv4 stack. During the transition period nodes typically run dual stack, with both IPv4 and IPv6 active. The drawback of conditional compilation is that the code quickly becomes littered with compilation options and harder to follow and maintain.

If an application is compiled on a dual-stack system and the binary is moved to an IPv4-only node without IPv6 kernel support, the source code must be recompiled against the original IPv4 API: the binary generated on the dual stack uses new system functions that the IPv4-only node does not provide.
If a binary compiled on a dual stack is moved to a node that has IPv6 kernel support but the IPv6 stack disabled, no recompilation is required. Since the IPv6 stack is not active, the node cannot establish IPv6 connections; however, the resolver may still return an IPv6 address for a query, so the application must be prepared to discard that IPv6 address and select the IPv4 one to open connections.
All these alternatives are summarized in Table 4.

Table – IPv4 or IPv6 activation

In summary, if applications follow the recommendations above (a separate, protocol-independent transport module that provides a generic communication API), adapting them to new network protocols is easy. Moreover, the generic communication API can be implemented as a communication library shared by many applications, which encourages code reuse and makes the communication modules easy to maintain.

High Performance Solution

(Translated from "High Performance Solution")

From an EMC technology watcher

A revolution has started to free data center networks from the tyranny of inflexibility, complexity, vendor stranglehold, and high costs.

It’s time to decouple network services from the underlying physical hardware.

It’s time to never have to think about the network again.

It’s time to virtualize the network. But…

Adapted from the Nicira home page


Introduction

I came across network virtualization purely by chance. One day I happened to see a news item: Xsigo had launched the industry's first fully virtualized data-center network solution.
As it happened, Xsigo's solution was based on InfiniBand, so I took a closer look, and discovered the water runs very deep here.
Traditional IT heavyweights and networking giants alike are eyeing this field and positioning themselves, while countless startups are digging in as well. In the spirit of knowing at least a little about every technology, I decided to investigate.

This post is a summary of that wading in. Network virtualization now involves a great many technologies, each of which deserves its own article; limited by time and expertise, I can only give an overall, simplified introduction. Readers interested in a particular technology can search for more detailed material.


What Is Network Virtualization

First we need to settle a question: what is network virtualization? Simply put, it means decoupling logical networks from the underlying physical network.
The concept has been around for a while; VLAN, VPN, VPLS and similar technologies can all be counted as network virtualization.
In the last year or two, the "cloud computing" wave has swept the IT industry, and almost all IT infrastructure is being deliberately pushed toward the "cloud". Throughout the growth of cloud computing, virtualization has been one of its key drivers.
Server and storage virtualization are already flourishing parts of the infrastructure, yet the network, equally fundamental, has kept operating the traditional way. In this environment the network is ripe for a transformation that better fits the needs of cloud computing and the Internet.
In the cloud era the definition of network virtualization has not changed, but what it covers has grown considerably.

Network virtualization for cloud computing must solve an end-to-end problem, which I divide into three parts:

(1) The first part is inside the server. As more and more servers are virtualized, the network extends into the hypervisor: the communication endpoint is no longer the server but the virtual machines running on it. A packet flows out of (1) the VM's virtual NIC, through (2) the virtual switch inside the hypervisor, out of (3) the server's physical NIC, and up to (4) the access switch. Along this path, the I/O performance of the virtual switch and NIC and the network attachment of virtual machines are the main research topics.

(2) The second part is the server-to-network connection. Technologies such as 10 Gb Ethernet and InfiniBand keep raising the bandwidth a single cable can carry. For simplicity, converging the data network and the storage network onto one connection technology has become a trend.

(3) The third part is network switching, where the physical network and the logical network must be cleanly separated to satisfy the multi-tenant, on-demand nature of cloud computing, while remaining highly scalable.

Below I discuss the main technologies and standards of network virtualization around these three areas.

Inside the Server

I/O Virtualization

When multiple virtual machines share a server's physical NIC, a mechanism is needed that preserves I/O efficiency while letting the VMs share the physical NIC safely.
I/O virtualization exists to solve this class of problems; it spans a family of solutions from the CPU down to the devices.

1. From the CPU's point of view, the performance problem of VMs accessing physical NICs and other I/O devices is solved by supporting DMA directly between VM memory and the device. Intel VT-d and AMD IOMMU address this through a DMA remapping mechanism.
DMA remapping solves two problems: it creates a protected, securely isolated DMA region for each VM, and it provides the mechanism that translates a VM's Guest Physical Address into the host's Host Physical Address.

2. From the VM's point of view of device access, the traditional approach is for VMs to share a physical NIC through the hypervisor, which must handle concurrent access and isolation among VMs.
The hypervisor then easily becomes a performance bottleneck. To improve performance, one approach lets a VM bypass the hypervisor and drive the physical NIC directly, usually called PCI pass-through; VMware, Xen and KVM all support it.
The drawback is that each VM then typically monopolizes a PCI slot, so it is not a complete solution: costly and poorly scalable.

The other approach is for the device itself, such as the NIC, to present virtualization to the operating system or hypervisor above it: one Ethernet card exposes several independent virtual PCIe devices to upper-layer software and provides virtual channels for concurrent access.

This approach is the industry's mainstream direction and has been standardized, chiefly as
SR-IOV (Single Root I/O Virtualization) and
MR-IOV (Multi-Root I/O Virtualization).
Good introductory articles on these technologies are available online; readers who want to go further can start with http://t.cn/aK72XS

Virtual Access

In traditional server virtualization, once a packet sent from a VM's virtual NIC leaves the server's physical NIC and reaches the upstream access switch, the identity of the VM has been masked: the access switch can only see aggregate traffic from a server's physical NIC and cannot distinguish the traffic of an individual VM inside the server, so QoS and security isolation cannot be enforced at the traditional network-device level.
Virtual access aims to bring VM traffic under the management of conventional network switching gear by tagging each VM's traffic.
Cisco and HP each proposed their own solution to the virtual-access problem: Cisco's is VN-Tag, and HP's is VEPA (Virtual Ethernet Port Aggregator). To win the say over next-generation network access, neither giant would yield; each submitted its scheme for standardization, as 802.1Qbh and 802.1Qbg respectively.

 

 

 

 

 

There is also a good article introducing virtual access; for a deeper look, see:
(http://t.cn/hGWnOQ)

Network Connection

Connection technologies have always evolved in pursuit of higher bandwidth, for example InfiniBand and 10 Gb Ethernet. In a traditional enterprise data center, the server connections to the storage network and to the data network are separate and of different kinds:
the storage network runs on Fibre Channel, the data network on Ethernet (iSCSI can carry SCSI over IP, but its performance still falls far short of Fibre Channel).

The trend in data-center connectivity is to converge the storage network and the data network onto a single cable, so servers can configure network ports flexibly and IT deployment is simplified. Both FCoE on Ethernet and InfiniBand itself make this convergence possible.

InfiniBand

InfiniBand originated at the end of the last century as a high-speed, advanced I/O standard jointly developed by seven companies: Compaq, HP, IBM, Dell, Intel, Microsoft and Sun. Initially named System I/O, it was renamed InfiniBand in October 1999. InfiniBand is a cabled interconnect with high throughput and low latency; a single port of an InfiniBand adapter can reach 20 Gbps. It was first used mainly in high-performance computing systems, and as hardware costs fell it gradually entered enterprise data centers. Exploiting InfiniBand hardware requires a full software stack to drive and use it, the best known being OFED (OpenFabrics Enterprise Distribution), which implements RDMA (remote direct memory access) over InfiniBand devices. RDMA's hallmarks are zero-copy and OS bypass: data moves directly between the device and application memory without CPU intervention or context switches.
OFED also implements a series of other stacks, such as IPoIB (IP over InfiniBand) and SRP (SCSI RDMA Protocol), which lay the groundwork for converging storage and data networks over InfiniBand.
OFED is developed by the OpenFabrics Alliance. OpenFabrics was originally called OpenIB; from 2006 OpenIB began supporting Ethernet in addition to InfiniBand, and as its scope grew the name changed from OpenIB to OpenFabrics. OFED is now supported by the mainstream Linux distributions and has been integrated into Microsoft Windows Server.



Figure 1 – OFED software stack

 

FCoE

Just when everyone thought InfiniBand was the future of data-center connectivity, the arrival of 10 Gb Ethernet revealed another option. Ethernet seems to have no performance ceiling; it is now close to InfiniBand (see http://www.cisco.com/en/US/prod/collateral/ps10265/le_32804_pb_hpc10ge.pdf), and gradually upgrading an existing network to 10 Gb Ethernet is easier for users to accept.
FCoE offers another way to converge the data-center data and storage networks: it maps Fibre Channel directly onto the Ethernet wire, so Fibre Channel becomes another network protocol running on Ethernet alongside ordinary IP networking. FCoE integrates easily with the software and management tools already running on traditional Fibre Channel networks, so it can replace Fibre Channel links to the storage network. Though a latecomer, FCoE has grown extremely fast: compared with InfiniBand, which requires brand-new links, enterprise IT would rather upgrade the Ethernet it already has. With the two close in performance, FCoE appears to offer the better price/performance ratio.


Network Switching

The problem at this layer is upgrading the existing data network to meet the needs of new services, and network virtualization is the key direction of that change. There are currently two approaches: one adds new protocols on top of the existing infrastructure to solve the new problems; the other tears everything down, aiming to design an entirely new model of network switching.

As virtualized data centers spread, their characteristics create new demands on the network. Physical machines mostly stay put, but a defining feature of virtualization is that virtual machines can migrate. When a VM migrates across networks or across data centers, new requirements appear: the VM's IP address must not change across the migration, and applications inside the VM that operate at Layer 2 (the link layer) must still be able to communicate across networks and data centers after the move.

In this area Cisco has rolled out a series of solutions: OTV, LISP and VXLAN.

OTV

OTV stands for Overlay Transport Virtualization. By extending the link-layer network, it lets a LAN span data centers. Many applications rely on broadcast and link-local multicast; by stretching Layer 2, OTV can deliver broadcast and multicast across sites, so the virtual machines those applications run in keep working after migrating between data centers.
Extending the Layer 2 network effectively extends its associated IP subnets as well, which requires corresponding changes to IP routing. That raises a new problem, which LISP solves.


LISP

LISP stands for Locator/ID Separation Protocol. A traditional IP address carries two meanings: who you are (ID) and where you are (locator). The consequence is that when your location changes (the locator changes), the IP address must change with it. LISP's goal is to separate ID from locator and relate the two through a mapping system, so virtual machines and servers can keep the same IP address while migrating to a different place in the network.




Figure 2 – OTV and LISP in use

 

VXLAN

VXLAN's purpose is to create more logical networks in a cloud environment. In a multi-tenant cloud, every tenant needs a logical network that is well isolated from the others. In traditional networks that isolation is provided by VLANs; unfortunately the VLAN identifier in the IEEE 802.1Q standard is only 12 bits, limiting a domain to at most 4K VLANs. To remove this limit, Cisco and VMware introduced VXLAN this year: using MAC in User Datagram Protocol (MAC-in-UDP) encapsulation, it adds a 24-bit segment identifier, and it replaces broadcast with UDP multicast to confine packets to the target segment. VXLAN vastly expands the number of logical networks a cloud environment can support, and because a logical segment can span subnets, virtual machines can migrate between different subnets.
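To make the 24-bit segment identifier concrete, here is a small Python sketch (using only the struct module) that builds and parses the 8-byte VXLAN header as later standardized in RFC 7348; it is illustrative only, not a full encapsulation, and the layout is an assumption drawn from that later standard rather than from this article.

```python
import struct

VXLAN_FLAG_I = 0x08  # "VNI present" flag in the first byte

def vxlan_header(vni):
    """Pack the 8-byte VXLAN header: flags(8) reserved(24) VNI(24) reserved(8)."""
    if not 0 <= vni < 1 << 24:
        raise ValueError("VNI must fit in 24 bits")
    return struct.pack("!II", VXLAN_FLAG_I << 24, vni << 8)

def vxlan_vni(header):
    """Extract the 24-bit VNI from a packed VXLAN header."""
    _flags_word, vni_word = struct.unpack("!II", header)
    return vni_word >> 8

hdr = vxlan_header(5000)
print(len(hdr), vxlan_vni(hdr))   # 8 5000
```

The 24-bit field is what lifts the 4K VLAN ceiling: it allows about 16 million distinct segments per domain.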

 



Figure 3 – VXLAN frame format



Figure 4 – Extending the network with VXLAN

 

NVGRE

None of the big IT vendors wants to casually surrender control of the next-generation cloud network. Soon after Cisco introduced VXLAN, Microsoft, together with Intel, HP and Dell, proposed the NVGRE standard.
NVGRE stands for Network Virtualization using Generic Routing Encapsulation. It solves the same problem as VXLAN in a slightly different way, using the low 24 bits of the GRE (Generic Routing Encapsulation) key as the tenant network identifier.

 

Everything above adds new protocols to the existing infrastructure to solve newly arising problems. The rise of Software Defined Networking (SDN) in recent years instead hopes to change the current model of network switching at its root.

The best-known piece of SDN is OpenFlow.

OpenFlow

The OpenFlow effort grew out of Stanford's "Clean Slate" program, initially as the design of a new experimentation environment for the Internet. Today's testbeds have neither enough real users nor large enough topologies to test the performance and functionality of new protocols; the best method is to embed an experimental network running the new protocol inside a production network and use the real environment to validate the protocol's feasibility and expose its problems.
OpenFlow's entry point is the switching equipment already deployed on the Internet. Whether switch or router, the core state lives in flow tables, which implement functions such as forwarding, accounting and filtering. Flow-table formats differ by vendor, but a set of common functions present in almost every switch and router can be abstracted out. OpenFlow proposes a generic flow-table design that every vendor's equipment can support. By controlling the flow table, network traffic can be partitioned into distinct flows, grouped and isolated from one another, and processed and steered as needed. More importantly, the flow table supports remote access and control. Each entry in an OpenFlow flow table has three parts: rules, actions and counters. Rules define the flow; actions are behaviors such as forward or drop; the counter part is used mainly for traffic statistics.

With OpenFlow we can define special rules in a live network: by installing different flow entries we can send matching traffic along arbitrary paths of our choosing, slicing the physical network into several distinct virtual logical networks. OpenFlow thus turns the traditional Internet into a dynamically programmable, software-defined network. OpenFlow's growth has been extraordinarily fast; even Cisco has now begun to embrace it.

Summary

Network virtualization is one of the hottest directions in IT today and a core technology of cloud infrastructure. It touches an extremely broad set of topics, and this article has offered only a shallow introduction.

 

Notes

Beyond the big companies fighting for control of network virtualization, many startups are working hard to open up opportunities. Here is a brief summary of the small and mid-size companies I know of, for reference:

Nicira: a secretive company focused on OpenFlow.

Big Switch: provides OpenFlow-based network virtualization solutions.

Juniper Networks: supports OpenFlow.

Open vSwitch: an open-source virtual switch; a software switch that runs inside the hypervisor, and already the default switch of XenServer 6.0.

ConteXtream: borrowing ideas from grid computing, it builds a virtual, abstract network over the traditional network with a DHT (Distributed Hash Table), addressing cloud providers' challenges in network flexibility, multi-tenancy and scalability.

Embrane: provides on-demand virtualized network services such as load balancing, firewalls and VPN.

Xsigo: provides InfiniBand-based full data-center virtualization.

NextIO: provides PCIe-based I/O virtualization products.

I hope this rough overview draws more experts into the discussion, so that the concept eventually gains a clear, precise definition. After all, technology buzzwords have to land in practice: what we need is "cloud computing", not "dizzy computing" or "cloud but no computing".

Reposted BFD white paper

BFD Technology White Paper

Keywords: BFD

Abstract: BFD is a standard protocol for fast failure detection. This paper introduces how BFD works and its typical deployment scenarios.

Terms:

Term Full name
BFD Bidirectional Forwarding Detection
UDP User Datagram Protocol

Contents

1 Overview

1.1 Background

1.2 Advantages

2 How BFD Works

2.1 Overview of the BFD Mechanism

2.2 BFD Packets

2.2.1 BFD Control Packets

2.2.2 BFD Echo Packets

2.3 BFD Session Establishment

2.4 Timer Negotiation

2.5 Failure Detection

3 Typical Deployments

3.1 Routing Protocols with BFD

3.2 Fast Reroute with BFD

4 References

1 Overview

1.1 Background

To protect critical applications, networks are designed with some extra backup links; when a failure occurs, devices must detect it quickly and switch traffic onto the backup link to speed up network convergence. Some link types (such as POS) have hardware mechanisms for fast failure detection, but others (such as Ethernet) do not. In that case applications must rely on the upper-layer protocol's own mechanisms for failure detection, whose detection times are one second or more, which is intolerable for certain applications.
Some routing protocols such as OSPF and IS-IS offer a Fast Hello feature to speed up detection, but it still only reaches one-second precision, and Fast Hello serves only its own protocol; it cannot provide fast failure detection to others.

1.2 Advantages

BFD was created against this background. It provides a general, standardized, media-independent and protocol-independent fast-failure-detection mechanism with the following strengths:

It can detect failures on any type of bidirectional forwarding path between network devices, including directly connected physical links, virtual circuits, tunnels, MPLS LSPs, multihop routed paths, and unidirectional links.

It can serve different upper-layer applications with a consistent fast detection time. Detection times below one second speed up network convergence, shorten application outages, and improve network reliability.

2 How BFD Works

2.1 Overview of the BFD Mechanism

BFD establishes a session between two network devices to monitor the bidirectional forwarding path between them, on behalf of upper-layer applications. BFD itself has no neighbor-discovery mechanism; the upper-layer application it serves notifies it of the neighbor information needed to set up the session.

Once the session is established, BFD packets are sent rapidly and periodically; if no BFD packet is received within the detection time, the bidirectional forwarding path is declared failed and the served application is notified so it can react. The session workflow is illustrated below with OSPF-BFD interworking.

1. BFD session establishment

Figure 1 – BFD session establishment

(1) OSPF discovers a neighbor through its own Hello mechanism and establishes the adjacency;

(2) having formed the new adjacency, OSPF passes the neighbor information (including destination and source addresses) to BFD;

(3) BFD establishes a session from the received neighbor information.

2. BFD failure handling

Figure 2 – BFD failure handling

(1) a failure occurs on the monitored link;

(2) BFD detects the link failure and tears down the BFD neighbor session;

(3) BFD notifies the local OSPF process that the BFD neighbor is unreachable;

(4) the local OSPF process tears down the OSPF adjacency.

BFD has two operating modes: asynchronous mode and demand mode.

Comware currently supports only asynchronous mode. In this mode both ends of a session periodically send BFD control packets and judge the session state by whether the peer's control packets arrive.

Comware also supports the echo function. With echo enabled, one end of the session periodically sends BFD Echo packets; the peer does not process them but simply loops each one back to the sender. The sender judges the session state by whether its Echo packets come back.

The two ends of a BFD session may be on a directly connected segment (one IP hop apart) or on different segments. The echo function can only detect failures on a directly connected segment, i.e. BFD Echo packets are single-hop; BFD control packets can detect failures on both directly connected and non-directly-connected segments, i.e. they may be sent single-hop or multihop.

2.2 BFD Packets

2.2.1 BFD Control Packets

A BFD control packet consists of a mandatory section and an optional authentication section.

The mandatory section is formatted as in Figure 3:

Figure 3 – BFD control packet

The optional authentication section is formatted as in Figure 4:

Figure 4 – BFD control packet (authentication section)

The fields of the BFD control packet are described in Table 1:

Table 1 – BFD control packet fields

Field Meaning
Vers BFD protocol version number; currently 1
Diag Diagnostic code giving the reason for the sender's last session Down
Sta Sender's current BFD session state: 0 AdminDown, 1 Down, 2 Init, 3 Up
P Set when session parameters have changed
F If a received control packet has the P field set, the F field is set in the next control packet sent, as the acknowledgement
C Set when the BFD implementation is independent of the control plane
A Set when the packet carries an authentication section and the session requires authentication
D Set when the sender wishes to run in demand mode; clear when it does not wish to or cannot
R Reserved; set to 0 on transmit, ignored on receipt
Detect Mult Detection-time multiplier
Length Length of the BFD control packet, in bytes
My Discriminator A unique nonzero value generated by the sender to identify the BFD session
Your Discriminator The My Discriminator value from control packets received from the session peer, or 0 if none has been received
Desired Min TX Interval Minimum interval, in microseconds, at which the sender wishes to transmit control packets
Required Min RX Interval Minimum interval, in microseconds, between received control packets that the sender can support
Required Min Echo RX Interval Minimum interval, in microseconds, between received BFD Echo packets that the sender can support; 0 means Echo packets are not supported
Auth Type Authentication type
Auth Len Length of the optional authentication section, including the Auth Type and Auth Len fields, in bytes
BFD control packets are carried in UDP with destination port 3784; the source port is chosen from the range 49152–65535.
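The mandatory section described above can be packed with Python's struct module. This is a sketch of the 24-byte layout from the BFD drafts cited in the references (flag bits other than Sta are left clear here for brevity), not a working BFD implementation.

```python
import struct

def bfd_control(sta, detect_mult, my_disc, your_disc,
                des_min_tx, req_min_rx, req_min_echo_rx=0,
                vers=1, diag=0):
    """Pack the 24-byte mandatory section of a BFD control packet.
    sta: 0 AdminDown, 1 Down, 2 Init, 3 Up. Intervals in microseconds."""
    byte0 = (vers << 5) | (diag & 0x1F)        # Vers(3 bits) | Diag(5 bits)
    byte1 = (sta & 0x3) << 6                   # Sta(2 bits); P/F/C/A/D left 0
    length = 24                                # mandatory section only
    return struct.pack("!BBBBIIIII", byte0, byte1, detect_mult, length,
                       my_disc, your_disc,
                       des_min_tx, req_min_rx, req_min_echo_rx)

pkt = bfd_control(sta=3, detect_mult=3, my_disc=0x11111111,
                  your_disc=0x22222222,
                  des_min_tx=100000, req_min_rx=100000)
print(len(pkt))   # 24
```

Such a payload would then be placed in a UDP datagram with destination port 3784 and a source port from 49152–65535, as described above.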

2.2.2 BFD Echo Packets

BFD Echo packets provide a failure-detection method that does not depend on BFD control packets. The local end both sends and receives them; the remote end does no processing and simply returns each packet on the reverse path. The BFD protocol therefore does not define the format of Echo packets; the only requirement is that the sender can tell from the packet contents which session it belongs to.

BFD Echo packets are carried in UDP with destination port 3785; the destination IP address is the address of the sending interface, and the source IP address comes from configuration (it should be chosen so as not to trigger ICMP redirects).

2.3 BFD Session Establishment

Note:

only session establishment and failure detection via control packets are described below.

Before a session is established, an end is in either active or passive mode. A device in active mode sends BFD control packets before the session is up, regardless of whether it has received any from the peer. A device in passive mode does not send control packets before the session is up; it starts only after receiving a control packet from the peer.

At least one of the two ends must be in active mode for a session to come up. The following describes session establishment with both ends active; the process with one active and one passive end is essentially the same.

Figure 5 – BFD session establishment

BFD uses a three-way handshake to bring up a session. A sender fills the Sta field of each control packet with its current local session state; the receiver drives its state machine from the Sta field of received control packets together with its own current state.

Router A's and Router B's BFD, once notified by the upper-layer application, send control packets with state DOWN. Router B's state changes mirror Router A's.

On receiving a control packet with peer state DOWN, Router B moves its local session from DOWN to INIT, and its subsequent control packets carry Sta = 2, indicating INIT. Router A's state changes mirror Router B's.

On receiving a control packet with peer state INIT, Router A moves its local session from INIT to UP, and its subsequent control packets carry Sta = 3, indicating UP. Router B's state changes mirror Router A's.

With both sides UP, the session is established and link monitoring begins.
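The three-way handshake above can be sketched as a small state machine. This is an illustrative simplification of the receive-side transitions only; AdminDown handling, timers, and packet transmission are omitted.

```python
# Simplified BFD receive-side state transitions: next local state given
# the current local state and the Sta value in a received control packet.
DOWN, INIT, UP = 1, 2, 3

def next_state(local, peer_sta):
    if local == DOWN:
        if peer_sta == DOWN:
            return INIT
        if peer_sta == INIT:
            return UP
        return DOWN                              # peer Up ignored while Down
    if local == INIT:
        return UP if peer_sta in (INIT, UP) else INIT
    return DOWN if peer_sta == DOWN else UP      # local is UP

# Three-way handshake with both ends starting Down:
a = b = DOWN
b = next_state(b, a)   # B hears A's Down -> Init
a = next_state(a, b)   # A hears B's Init -> Up
b = next_state(b, a)   # B hears A's Up   -> Up
print(a, b)            # 3 3
```

The trace mirrors the exchange in the text: Down packets move the peer to Init, Init packets move the peer to Up, and an Up packet confirms the session.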

2.4 Timer Negotiation

Before a session is established, BFD control packets are sent at one-second intervals to limit packet traffic; once the session is up, they are sent at the negotiated interval to achieve fast detection. The control-packet transmit interval and the detection time are negotiated through the packet exchange while the session is being established. For the lifetime of a session these timers can be renegotiated at any time without affecting session state. Timer negotiation for each direction of a session proceeds independently, and the two directions may use different values.

The control-packet transmit interval is the larger of the local Desired Min TX Interval and the peer's Required Min RX Interval; that is, the slower side determines the transmit rate.

The detection time is the peer's Detect Mult multiplied by the negotiated peer control-packet transmit interval.

If the local Desired Min TX Interval is increased, the actual local transmit interval must not change until a control packet with the F field set is received from the peer. This ensures the peer has enlarged its detection time before the local end slows its transmissions; otherwise the peer's detection timer could expire in error.

If the local Required Min RX Interval is decreased, the local detection time must not change until a control packet with the F field set is received from the peer. This ensures the peer has already reduced its transmit interval before the local end shortens its detection time; otherwise the local detection timer could expire in error.

However, if Desired Min TX Interval is decreased, the local control-packet transmit interval shrinks immediately; and if Required Min RX Interval is increased, the local detection time grows immediately.

The negotiation process after a parameter change is detailed below:

Figure 6 – BFD detection-time negotiation

Router A and Router B establish a BFD session with Desired Min TX Interval and Required Min RX Interval (TX and RX below) both 100 ms on each side, and Detect Mult 3 on each side. By the negotiation rules, Router A's transmit interval is the larger of Router A's TX and Router B's RX, i.e. 100 ms; Router B's transmit interval is likewise 100 ms; and both sides' detection timeouts are 300 ms.

Now suppose Router A's TX and RX are raised to 150 ms.

(1) Router A compares its own RX (150 ms) with Router B's TX (100 ms) and changes its local detection time to 450 ms. At the same time it sends the peer a control packet with the P field set (TX and RX both 150 ms).

(2) On receiving it, Router B replies to Router A with a control packet with the F field set (TX and RX both 100 ms). It also compares the RX in the received packet with its own TX; since that RX is larger, Router B's transmit interval becomes 150 ms. Comparing its own RX with the peer's TX, it raises its detection time to 450 ms as well.

(3) Router A receives the peer's F-field-set control packet. Comparing the RX in the packet with its own TX, it computes the new transmit interval of 150 ms.

(4) Timer negotiation completes: both sides' transmit intervals are 150 ms and both detection times are 450 ms.
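The negotiation rules reduce to two max() computations. This small sketch reproduces the worked example (values in milliseconds); it only computes the negotiated values and does not model the P/F handshake that sequences when they take effect.

```python
def bfd_timers(local_tx, local_rx, peer_tx, peer_rx, peer_detect_mult):
    """Return (transmit_interval, detection_time) for the local end.
    Transmit interval: max(local Desired Min TX, peer Required Min RX).
    Detection time: peer Detect Mult x negotiated peer transmit interval."""
    tx_interval = max(local_tx, peer_rx)
    peer_tx_interval = max(peer_tx, local_rx)
    return tx_interval, peer_detect_mult * peer_tx_interval

# Both sides at TX = RX = 100 ms, Detect Mult 3:
print(bfd_timers(100, 100, 100, 100, 3))   # (100, 300)
# After Router A raises its TX and RX to 150 ms:
print(bfd_timers(150, 150, 100, 100, 3))   # Router A: (150, 450)
print(bfd_timers(100, 100, 150, 150, 3))   # Router B: (150, 450)
```

Note how the slower side wins both comparisons, exactly as the rules above state.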

2.5 Failure Detection

Once the BFD session is established and timer negotiation is complete, both ends send control packets at the negotiated interval. Each received control packet resets the detection timer and keeps the session in the UP state. If no control packet arrives within the detection time, the BFD session transitions to DOWN and the upper-layer application it serves is notified of the failure so it can take appropriate action. After the local session goes DOWN, control packets sent to the peer carry Sta = 1, informing the peer of the DOWN state, and the peer's BFD session transitions to DOWN as well.

3 Typical Deployments

3.1 Routing Protocols with BFD

Figure 7 – Routing protocols with BFD

Routers A and B are interconnected through a Layer 2 switch, run a routing protocol, and are mutually reachable at the network layer.

Because the connection runs through a Layer 2 switch, a failure of the link between Router A and Router B may not bring an interface DOWN, so it can only be detected by the protocol's own keepalives. Running BFD between Router A and Router B detects the failure quickly; once the routing protocol is notified by BFD, it can compute new routes sooner, shortening convergence time.

3.2 Fast Reroute with BFD

Figure 8 – Fast reroute with BFD

As networks grow rapidly, IP networks increasingly carry voice, video and other services. These services demand high network reliability, so carrier networks require ever faster convergence.

Applying BFD to routing protocols, together with fast-convergence techniques for those protocols, greatly improves convergence speed, but it still cannot meet the outage-time requirements of new services such as voice and video.

Fast reroute combined with BFD can meet those requirements: a backup path is computed in advance, a failure of the primary path is discovered quickly, and the switchover to the backup path happens directly in the forwarding plane, without waiting for control-plane convergence, dramatically shortening the service outage.

4 References

Katz D., Ward D., "Bidirectional Forwarding Detection", draft-ietf-bfd-base-05.

Katz D., Ward D., "Generic Application of BFD", draft-ietf-bfd-generic-02.

Katz D., Ward D., "BFD for IPv4 and IPv6 (Single Hop)", draft-ietf-bfd-v4v6-1hop-05.

Katz D., Ward D., "BFD for Multihop Paths", draft-ietf-bfd-multihop-03.


Configuring DHCP Snooping, and Using It with the Cisco IOS DHCP Server

DHCP SNOOPING

In brief, with DHCP snooping the switch watches DHCP activity and allows DHCP server replies (OFFER and ACK) through only on designated switch ports, blocking all others, so clients obtain leases only from the designated server.
A Cisco switch intercepts DHCP server packets arriving on untrusted ports; any such packet from an untrusted port is dropped, and the port can enter the errdisable state. As legitimate replies are seen, the switch also builds a binding database of completed DHCP leases, recording MAC address, IP address, lease time and so on.

Note: one important point: if the DHCP server is not attached to this switch, the uplink port must be allowed to pass the DHCP server's replies, otherwise no PC on the whole switch will receive dynamic IP address information!
A typical configuration follows:

! Global: enable DHCP snooping. This line is required.
ip dhcp snooping

! Limit DHCP snooping to VLAN 100 and VLAN 200. Also required; this can name a single VLAN or a range "x-x".
ip dhcp snooping vlan 100 200

! The DHCP server port, or the uplink. This is the crucial part; without it nobody gets an address!
interface GigabitEthernet0/1
ip dhcp snooping trust

! An ordinary user port. This is the default and can be omitted: once DHCP snooping is enabled, all ports are untrusted by default.
interface FastEthernet 0/1
no ip dhcp snooping trust

! On untrusted ports, the rate of DHCP packets can be limited with:
ip dhcp snooping limit rate "rate, 1-4294967294 packets"

! When a DHCP request is seen on an untrusted port, the switch can insert its own MAC address and port identifier into the packet as Option 82,
! so the request can still reach a trusted DHCP server. This is enabled by default and controlled with:
[no] ip dhcp snooping information option

! DHCP snooping show command; with the binding keyword it displays all DHCP leases snooped and stored on the switch:

show ip dhcp snooping [binding]

Whether you need ip dhcp snooping information option is actually a complicated question.

When this option is enabled, DHCP requests from clients get an Option 82 header added by the switches they pass through, carrying:
* the Circuit ID sub-option
* the Remote ID sub-option
* VLAN-mod-port

When this information reaches the DHCP server, and the server is Option 82 capable, it uses the carried information, matched against predefined policy, to decide how to assign an IP address to that machine.

If your DHCP server is not Option 82 capable, the information is useless.

But if you use the DHCP server built into Cisco IOS, the situation is completely different:
if the relayed Option 82 arrives with the giaddr field set to 0.0.0.0, the Cisco DHCP server drops the packet outright and the client gets no response. So when you use a Cisco DHCP server, there are two different fixes:

1.
Add:
ip dhcp relay information trust-all (global)
or
ip dhcp relay information trusted (per interface)

2. Or instead use:
no ip dhcp snooping information option

So in your environment, a few questions need answering first:
1.
Which kind of DHCP server do you use? Do you need the Option 82 information?
2.
After crossing that many VLANs, is the information inside Option 82 still correct?
3.
Will the firewall in the middle block DHCP packets that carry Option 82?
Suggestion: don't try to cross that many VLANs at first; test thoroughly in a single-switch environment, then expand outward.
The simplest debugging is to capture DHCP packets on every switch: are the packet contents right, and at which stage do they disappear?


EAP-TLS

The EAP-TLS authentication protocol is based on the 802.1X/EAP architecture, which has three components: the client (supplicant), the authenticator, and the authentication server. The client and the authentication server must support EAP-TLS; the authenticator (a switch, AP, etc.) does not care which EAP method is in use.

The first figure below shows how EAP-TLS completes the 802.1X/EAP authentication process; the same flow applies to LEAP and EAP-MD5.

The next figure explains the EAP-TLS exchange in more detail. As part of an EAP Request message, the RADIUS server presents its server certificate to the client and requests the client's certificate. After checking the validity of the server certificate, the client answers in an EAP Response message with its own certificate and begins negotiating the cipher and compression algorithms. Once the server has validated the client certificate, it replies to the client with the cipher and compression algorithms to be used for the session.


Testing Juniper Networks M40 Router MPLS Interoperability with Cisco Systems 7513 and 12008 Routers

Mark Anderson, Systems Engineer
Annette Kay Donnell, Marketing Engineer
Juniper Networks, Inc.

Executive Summary

Juniper Networks, Inc. conducted a series of interoperability tests at our facilities. The purpose was to demonstrate that the MPLS implementation of the M40 Internet router interoperates with Cisco Systems 7513 and 12008 routers. You can use the procedures in this paper to replicate the tests.
The interoperability tests cover the creation and use of label-switched paths (LSPs), including the control plane, where the Resource Reservation Protocol (RSVP) sets up LSPs, and the forwarding plane, which forwards packets through LSPs. They also cover the IS-IS traffic engineering extensions. The tests were successful: they demonstrated that the Juniper Networks and Cisco Systems MPLS implementations are interoperable.

Scope

Appendix A lists the specific configuration used for all three configuration tests. This configuration enables you to test the MPLS interoperability between the M40 router running JUNOS Internet software release 3.4, Cisco Systems 7513 routers running IOS 12.0(6)S, and 12008 routers running IOS 12.0(7)S. In particular, these tests verify LSPs and RSVP, as well as IS-IS traffic engineering extensions.
Features not tested include Juniper Networks enhancements, such as MPLS Fast Reroute, Circuit Cross Connect, and OSPF traffic engineering. Other features not tested include the set of tag-switching features that Cisco Systems calls MPLS features, such as MPLS VPN.

These tests do not include performance testing.
Test Environment

The test environment consists of a four-router network topology: two M40 routers, one Cisco Systems GSR 12008, and one Cisco Systems 7513 (Figure 1).

Figure 1: MPLS Interoperability Test Topology



LSP Interoperability Test

This test determines whether you can create an LSP across an arbitrarily heterogeneous path, as well as whether static routing correctly operates.

Parameters
Appendix A lists the configurations for the four routers in the LSP and static routing tests.

To demonstrate interoperability over an arbitrary path, a set of permutations of possible paths is created (Table 1). This set represents all the possible paths. Each member of the set contains a different combination of routers for three different label switching router (LSR) functions: ingress, transit, and egress. These six paths are overlaid on top of the topology in Figure 2.

Table 1: LSP Created
Ingress Transit Egress
Cisco Systems Cisco Systems Juniper Networks
Cisco Systems Juniper Networks Juniper Networks
Juniper Networks Cisco Systems Juniper Networks
Cisco Systems Juniper Networks Cisco Systems
Juniper Networks Juniper Networks Cisco Systems
Juniper Networks Cisco Systems Cisco Systems




To test different means of path selection, the LSPs in Figure 2 are configured in a variety of ways. Most LSPs are configured with explicit paths to constrain them to a single path. Others, like the C2-J2-J1 LSP, are configured to dynamically choose a path that meets the bandwidth requirements of the LSP.

IP traffic is sent over all the LSPs using a Smartbits (SMB) 6000. The SMB 6000 is connected through a Gigabit Ethernet port directly to each router. To route IP traffic through the LSPs, each Gigabit Ethernet circuit is advertised via BGP.

Results
Testing demonstrates that the MPLS LSP implementations and static routing implementations are interoperable.

For each Gigabit Ethernet circuit, the BGP next hop advertised is the loopback interface of the router connected to it. Likewise, every LSP is established between the loopback interfaces of the originating and terminating routers. Any LSP originating on an M40 router dynamically uses this information to route traffic over the LSP. When a Cisco Systems-initiated LSP is configured with autoroute announce, the same behavior occurs on the Cisco Systems router.

Implementation Differences
The two implementations differ in how traffic is routed through the LSP.

In JUNOS software, there are three ways to route traffic into LSPs.
The primary means is with BGP. If any BGP route exists using the LSP destination as the BGP next hop, then BGP automatically uses the LSP to reach the next hop instead of using IGP.

JUNOS software enables you to use IGP next hops in addition to BGP next hops to route traffic through LSPs.

You can statically map routes into an LSP.

In Cisco Systems IOS, there are two ways to route traffic into LSPs: static routes and automatic routes. Cisco Systems’ Autoroute feature sends all traffic to a specific destination through the same LSP and routes to all the IGP interfaces on the destination router. There is no Cisco Systems equivalent to the JUNOS default of using solely BGP to place routes through the LSP.

RSVP Interoperability

By successfully creating and configuring the LSPs in the above LSP test, the interoperability of RSVP signalling is tested indirectly.

Parameters
Appendix A lists the configurations for the four routers in the RSVP test.

Results
Testing demonstrates that the RSVP implementations are interoperable. Specifically, the following RSVP objects are interoperable between Juniper Networks and Cisco Systems routers.

Explicit-Route
Label
Label-Request
Session

The only object not interoperable is the Record-Route object. This exception, which is being addressed by both Juniper Networks and Cisco Systems, does not affect the creation or use of LSPs.

Implementation Differences
Juniper Networks supports the RSVP Hello packet and Cisco Systems does not. This difference does not prevent Cisco Systems from creating and using LSPs.

To signal penultimate hop popping, a Cisco egress router sends label 0 to the penultimate hop in the control plane, whereas a Juniper Networks egress router sends label 3 (implicit NULL). When a Cisco egress router is connected to a penultimate Juniper Networks router, it is recommended that you use the following IOS command to require the egress router to send label 3 (as defined by the standard).

mpls traffic-eng signalling advertise implicit-null

IS-IS Traffic Engineering Extensions Interoperability

The inspection of the Traffic Engineering Database (TED) during the LSP test indicates whether Cisco Systems and Juniper Networks are sending the correct information in the sub-TLVs in the new IS-IS extended IS reachability TLV type 22. The identical use of the information in the TED is established by adding a few more steps to the existing test as described in the Parameters section.

Parameters
Appendix A lists the basic configurations for the four routers in the tests, though the following changes are made.

Bandwidth Reservation
Some additional LSPs are created between J1 and C2 to determine whether the routers in the path correctly update the unreserved bandwidth in the sub-TLV. The configuration of one of these LSPs is as follows.

label-switched-path to-C2-10

Wireless PEAP

1 Certificate acquisition

Certificates are used for mutual authentication between the terminal and the network. The RADIUS server first requests a server certificate from the CA, which vouches for the RADIUS server's legitimacy. The client downloads the CA root certificate from the CA and uses it to verify that the certificate presented by the RADIUS server is legitimate (in general, if the terminal does not need to authenticate the network, the root certificate need not be downloaded and installed).

2 Wireless association

The client establishes a physical association with the AP using open-system authentication (OPEN SYSTEM).

3 Authentication initialization

1) The client sends the AP an EAPoL-Start message to begin 802.1X login.

2) The AP sends the client an EAP-Request/Identity message, asking the client to send up its user information.

3) The client answers the AP's request with an EAP-Response/Identity containing the user's network identifier. For EAP-MSCHAPv2 authentication, the user ID is entered or configured manually on the client, usually in the form username@domain, where username is the identity the service provider issued to the user and domain is the provider's domain name (e.g. "CCHAHA.com").

4) The AP forwards the EAP-Response/Identity to the RADIUS authentication server in EAP-over-RADIUS format, with the relevant RADIUS attributes attached.

5) On receiving the client's EAP-Response/Identity, the RADIUS server determines from its configuration that EAP-PEAP is to be used and sends the AP a RADIUS Access-Challenge containing the EAP-Request/PEAP/Start message for the client, indicating that it wishes to begin EAP-PEAP authentication.

6) The AP delivers the EAP-Request/PEAP/Start to the authenticating client.

4. TLS Tunnel Establishment

7) On receiving the EAP-Request/PEAP/Start message, the client generates a random number, a list of the cipher suites it supports, the TLS protocol version, a session ID, and a compression method (currently all NULL), encapsulates them in an EAP-Response/Client Hello message, and sends it to the AP.

8) The AP forwards the EAP-Response/Client Hello to the RADIUS server in EAP-over-RADIUS format, adding the relevant RADIUS attributes.

9) On receiving the client's Client Hello message, the RADIUS server selects a set of cipher suites it supports from the client's list and combines it with a server-generated random number, the server certificate (containing the server's name and public key), a certificate request, and a Server_Hello_Done attribute to form a Server Hello message, which it encapsulates in an Access-Challenge message and sends to the client.

10) The AP extracts the EAP field from the RADIUS message, encapsulates it in an EAP-Request message, and sends it to the client.

Note: a certificate is too large for a single message to carry, so in the actual flow steps 10 and 11 are followed by three further fragmented IP messages whose purpose is to deliver the server certificate to the client.

11) On receiving the message, the client verifies that the server's certificate is valid (using the root certificate obtained from the CA, checking mainly that the validity period and the name are correct), thereby authenticating the network and assuring itself of the server's legitimacy. If the certificate is valid, the client extracts the public key from the server certificate, generates a random pre-master-secret, and encrypts it with the server's public key. It then encapsulates the encrypted ClientKeyExchange, the client certificate (the attribute is set to 0 if there is none), and a TLS finished attribute in an EAP-Response/TLS OK message and sends it to the authenticator AP. If no certificate is installed on the client, the validity of the server certificate is not checked, i.e. the network is not authenticated.

12) The AP forwards the EAP-Response/TLS OK to the RADIUS server in EAP-over-RADIUS format, adding the relevant RADIUS attributes.

13) On receiving the client's message, the RADIUS server decrypts the ClientKeyExchange with the private key corresponding to its certificate, obtaining the pre-master-secret. It then processes the pre-master-secret together with the random numbers generated by the client and server to produce the encryption key, the encryption initialization vector, and the HMAC key. At this point both sides have securely negotiated a set of cryptographic parameters and the TLS tunnel is established; the rest of the authentication is encrypted and verified with the negotiated keys. Using the HMAC key, the RADIUS server computes a secure digest of the messages to be authenticated inside the TLS tunnel and attaches it to them; using the encryption key and initialization vector, it encrypts those messages, encapsulates them in an Access-Challenge message, and sends them to the client.
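Step 13's derivation of the encryption key, initialization vector, and HMAC key can be sketched as follows. This is a simplified stand-in for the real TLS PRF (defined in RFC 2246), and the function and variable names are illustrative, not the protocol's:

```python
import hashlib
import hmac

def derive_key_material(pre_master_secret: bytes, client_random: bytes,
                        server_random: bytes, length: int) -> bytes:
    """Illustrative stand-in for the TLS PRF: stretch the pre-master
    secret plus both random values into key material (the real PRF is
    more involved)."""
    material = b""
    counter = 0
    while len(material) < length:
        counter += 1
        material += hmac.new(pre_master_secret,
                             bytes([counter]) + client_random + server_random,
                             hashlib.sha256).digest()
    return material[:length]

# Both sides hold the same three inputs, so both derive identical keys.
pms = b"pre-master-secret-chosen-by-client"
c_rand, s_rand = b"client-random", b"server-random"
keys = derive_key_material(pms, c_rand, s_rand, 72)
enc_key, enc_iv, hmac_key = keys[:32], keys[32:40], keys[40:72]
```

Because the derivation is deterministic in the shared secrets, the client and server arrive at the same keys without those keys ever crossing the wire.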

5. Authentication Process

14) The AP extracts the EAP field from the RADIUS message, encapsulates it in an EAP-Request message, and sends it to the client.

15) On receiving the RADIUS server's message, the client generates the encryption key, the initialization vector, and the HMAC key in the same way as the server, uses the corresponding keys and methods to decrypt and verify the message, and then produces an authentication response, encrypts and integrity-protects it with the keys, and encapsulates it in an EAP-Response message to the AP. The AP forwards the EAP-Response to the RADIUS server in EAP-over-RADIUS format, adding the relevant RADIUS attributes. This exchange repeats until authentication completes. (Note: the exchange varies with the inner authentication method; the usual methods are PEAP-MSCHAPv2 or GTC, which IBM LDAP supports. PEAP-GTC simply runs the GTC/OTP procedure as an extra step inside PEAP. Note also that after the password is sent, a one-byte packet containing the value 0 must be sent before SUCCESS is returned and the network becomes reachable. Each method has its own flow; if SIM authentication is used, additional exchanges with the HLR/AUC are required, with the AS acting as the authentication server.) During authentication, the RADIUS server sends the client the PMK from which the over-the-air data-encryption keys (unicast and multicast) are derived.

16) When the server has authenticated the client successfully, it sends the AP an Access-Accept message containing the MPPE attributes supplied by the authentication server.

17) On receiving the RADIUS Access-Accept message, the AP extracts the key in the MPPE attribute as the PMK for WPA encryption and sends an EAP-Success message to the client.


Original text:

PEAP Authentication Process

PEAP authentication begins in the same way as EAP-TLS (Figure 37):

1. The client sends an EAP Start message to the access point

2. The access point replies with an EAP Request Identity message

3. The client sends its network access identifier (NAI), which is its username, to the access point in an EAP Response message

4. The access point forwards the NAI to the RADIUS server encapsulated in a RADIUS Access Request message

5. The RADIUS server will respond to the client with its digital certificate

6. The client will validate the RADIUS server’s digital certificate

From this point on, the authentication process diverges from EAP-TLS

7. The client and server negotiate and create an encrypted tunnel

8. This tunnel provides a secure data path for client authentication

9. Using the TLS Record protocol, a new EAP authentication is initiated by the RADIUS server

10. The exchange will include the transactions specific to the EAP type used for client authentication

11. The RADIUS server sends the access point a RADIUS ACCEPT message, including the client’s WEP key, indicating successful authentication


Wireless LAN Security White Paper


 



A Comprehensive Review of 802.11 Wireless LAN Security
and the Cisco Wireless Security Suite


1. Introduction

Since the ratification of the IEEE 802.11b standard in 1999, wireless LANs have become more prevalent. Today, wireless LANs are widely deployed in places such as corporate office conference rooms, industrial warehouses, Internet-ready classrooms, and even coffeehouses.

These IEEE 802.11-based wireless LANs present new challenges for network administrators and information security administrators alike. Unlike the relative simplicity of wired Ethernet deployments, 802.11-based wireless LANs broadcast radio-frequency (RF) data for the client stations to hear. This presents new and complex security issues that involve augmenting the 802.11 standard.

Security in the IEEE 802.11 specification—which applies to 802.11b, 802.11a, and 802.11g—has come under intense scrutiny. Researchers have exposed several vulnerabilities in the authentication, data-privacy, and message-integrity mechanisms defined in the specification. This white paper:

  • Reviews the authentication and data-privacy functions described in Clause 8 of the IEEE 802.11 specification
  • Describes the inherent security vulnerabilities and management issues of these functions
  • Explains how security issues can be addressed effectively only by augmenting the 802.11 security standard
  • Examines Cisco Systems architecture for enhanced security on wireless LANs—including the Cisco Wireless Security Suite
  • Looks ahead to long-term security enhancements

2. 802.11 Authentication and Its Weaknesses

Wireless LANs, because of their broadcast nature, require the addition of:

  • User authentication to prevent unauthorized access to network resources
  • Data privacy to protect the integrity and privacy of transmitted data

The 802.11 standard defines two mechanisms for client authentication: open authentication and the less secure shared key authentication. Two further methods are also in common use on wireless networks: hiding the Service Set Identifier (SSID) and filtering on the client's Media Access Control (MAC) address.

This section explains each approach and its weaknesses.

The use of Wired Equivalent Privacy (WEP) keys can function as a type of access control, because a client that lacks the correct WEP key cannot send data to or receive data from an access point.

WEP, the encryption scheme adopted by the IEEE 802.11 committee, provides encryption with 40 bits or 104 bits of key strength. A subsequent section of this paper discusses WEP and its weaknesses in greater detail.

2.1. Service Set Identifier

The SSID is a construct that allows logical separation of wireless LANs. In general, a client must be configured with the appropriate SSID to gain access to the wireless LAN. The SSID does not provide any data-privacy functions, nor does it truly authenticate the client to the access point.

2.2. 802.11 Station Authentication

Authentication in the 802.11 specification is based on authenticating a wireless station or device rather than the user of the device. The specification provides for two modes of authentication: open authentication and shared key authentication.

The 802.11 client authentication process consists of the following transactions (Figure 1); the client first authenticates to the access point and then associates with it:

1. Client broadcasts a probe request frame on every channel

2. Access points within range respond with a probe response frame

3. The client decides which access point (AP) is the best for access and sends an authentication request

4. The access point will send an authentication reply

5. Upon successful authentication, the client will send an association request frame to the access point

6. The access point will reply with an association response

7. The client is now able to pass traffic to the access point


Figure 1   802.11 Client Authentication Process


The next four subsections will detail each of the individual processes for client authentication.

2.2.1. Probe Requests and Responses (Steps 1 and 2)

Once the client becomes active on the medium, it searches for access points in radio range using the 802.11 management frames known as probe request frames. The probe request frame is sent on every channel the client supports in an attempt to find all access points in range that match the SSID and client-requested data rates (Figure 2).

All access points that are in range and match the probe request criteria will respond with a probe response frame containing synchronization information and access point load. The client can determine which access point to associate to by weighing the supported data rates and access point load. Once the client determines the optimal access point to connect to, it moves to the authentication phase of 802.11 network access.


Figure 2   Probe Request Frame


2.2.2. Open Authentication

Open authentication is a null authentication algorithm. The access point will grant any request for authentication. It might sound pointless to use such an algorithm, but open authentication has its place in 802.11 network authentication. Authentication in the 1997 802.11 specification is connectivity-oriented. The requirements for authentication are designed to allow devices to gain quick access to the network. In addition, many 802.11-compliant devices are hand-held data-acquisition units like bar code readers. They do not have the CPU capabilities required for complex authentication algorithms.

Open authentication consists of two messages:

  • The authentication request (Figure 3)
  • The authentication response (Figure 4)


Figure 3   Open Authentication Request



Figure 4   Open Authentication Response


Open authentication allows any device network access. If no encryption is enabled on the network, any device that knows the SSID of the access point can gain access to the network. With WEP encryption enabled on an access point, the WEP key itself becomes a means of access control. If a device does not have the correct WEP key, even though authentication is successful, the device will be unable to transmit data through the access point. Neither can it decrypt data sent from the access point (Figure 5).


Figure 5   Open Authentication with Differing WEP Keys


2.2.3. Shared Key Authentication

Shared key authentication is the second mode of authentication specified in the 802.11 standard. Shared key authentication requires that the client configure a static WEP key. Figure 6 describes the shared key authentication process.

1. The client sends an authentication request to the access point requesting shared key authentication

2. The access point responds with an authentication response containing challenge text (sent unencrypted)

3. The client uses its locally configured WEP key to encrypt the challenge text and replies with a subsequent authentication request

4. If the access point can decrypt the authentication request and retrieve the original challenge text, then it responds with an authentication response that grants the client access

WEAKNESS: plain-text + key = cipher-text; given any two of the three, the remaining one can be derived

Figure 6   Shared Key Authentication Process


2.2.4. MAC Address Authentication

MAC address authentication is not specified in the 802.11 standard, but many vendors—including Cisco—support it. MAC address authentication verifies the client’s MAC address against a locally configured list of allowed addresses or against an external authentication server (Figure 7).
MAC authentication is used to augment the open and shared key authentications provided by 802.11, further reducing the likelihood of unauthorized devices accessing the network.


Figure 7   MAC Address Authentication Process


2.3. Authentication Vulnerabilities

2.3.1. Use of SSID

The SSID is advertised in plain-text in the access point beacon messages (Figure 8).
Although beacon messages are transparent to users, an eavesdropper can easily determine the SSID with the use of an 802.11 wireless LAN packet analyzer, like Sniffer Pro. Some access-point vendors, including Cisco, offer the option to disable SSID broadcasts in the beacon messages. The SSID can still be determined by sniffing the probe response frames from an access point (Figure 9).

The SSID is not designed, nor intended for use, as a security mechanism.

In addition, disabling SSID broadcasts might have adverse effects on Wi-Fi interoperability for mixed-client deployments. Therefore, Cisco does not recommend using the SSID as a mode of security.


Figure 8   SSID in an Access Point Beacon Frame



Figure 9   SSID in an Access Point Probe Response Frame


2.3.2. Open Authentication Vulnerabilities

Open authentication provides no way for the access point to determine whether a client is valid.
This is a major security vulnerability if WEP encryption is not implemented in a wireless LAN. Cisco does not recommend deploying wireless LANs without WEP encryption. In scenarios in which WEP encryption is not needed or is not feasible to deploy, such as public wireless LAN deployments, strong higher-layer authentication can be provided by implementing a Service Selection Gateway (SSG).

2.3.3. Shared Key Authentication Vulnerabilities

Shared key authentication requires that the client use a preshared WEP key to encrypt challenge text sent from the access point. The access point authenticates the client by decrypting the shared key response and validating that the challenge text is the same.

The process of exchanging the challenge text occurs over the wireless link and is vulnerable to a man-in-the-middle attack. An eavesdropper can capture both the plain-text challenge and the cipher-text response. WEP encryption is done by performing an exclusive OR (XOR) function on the plain-text with the key stream to produce the cipher-text. It is important to note that if the plain-text and the cipher-text are XORed, the result is the key stream. Therefore, an eavesdropper can easily derive the key stream just by sniffing the shared key authentication process with a protocol analyzer (Figure 10).


Figure 10   Vulnerability of Shared Key Authentication
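The derivation in Figure 10 can be reproduced in a few lines. The RC4 key stream is simulated here with random bytes; the point is only the XOR relationship among challenge, response, and key stream:

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key_stream = os.urandom(32)   # stands in for RC4(IV || WEP key)

challenge = b"802.11 shared-key challenge text"   # sent in the clear
response = xor(challenge, key_stream)             # sent encrypted

# The eavesdropper captures both and XORs them to recover the key stream:
recovered = xor(challenge, response)

# The recovered stream now decrypts any frame of equal length encrypted
# under the same IV/WEP-key pair:
sniffed = xor(b"confidential payload bytes......", key_stream)
plain = xor(sniffed, recovered)
```

Note that the attacker never needs the WEP key itself; the key stream alone is enough to read or forge frames under that IV.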


2.3.4. MAC Address Authentication Vulnerabilities

MAC addresses are sent in the clear as required by the 802.11 specification. As a result, in wireless LANs that use MAC authentication, a network attacker might be able to subvert the MAC authentication process by "spoofing" a valid MAC address.

MAC address spoofing is possible in 802.11 network interface cards (NICs) that allow the universally administered address (UAA) to be overwritten with a locally administered address (LAA). A network attacker can use a protocol analyzer to determine a valid MAC address in the basic service set (BSS) and an LAA-compliant NIC with which to spoof the valid MAC address.

3. WEP Encryption and Its Weaknesses

WEP is based on the RC4 algorithm, which is a symmetric key stream cipher. As noted previously, the encryption keys must match on both the client and the access point for frame exchanges to succeed. The following section will examine stream ciphers and provide some perspective on how they work and how they compare to block ciphers.

3.1. Stream Ciphers and Block Ciphers

A stream cipher encrypts data by generating a key stream from the key and performing the XOR function on the key stream with the plain-text data. The key stream can be any size necessary to match the size of the plain-text frame to encrypt (Figure 11).


Figure 11   Stream Cipher Operation
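A minimal sketch of the operation in Figure 11, with the key stream simulated by random bytes rather than generated by a real cipher:

```python
import os

def xor_stream(data: bytes, key_stream: bytes) -> bytes:
    # Encryption and decryption are the same XOR operation.
    return bytes(d ^ k for d, k in zip(data, key_stream))

frame = b"variable-length 802.11 payload"
stream = os.urandom(len(frame))     # key stream sized to match the frame
cipher = xor_stream(frame, stream)
plain = xor_stream(cipher, stream)  # XORing again recovers the plain-text
```

The key stream simply grows to match the frame, which is why stream ciphers need no padding.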


Block ciphers deal with data in defined blocks, rather than frames of varying sizes. The block cipher fragments the frame into blocks of predetermined size and performs the XOR function on each block. Each block must be the predetermined size, and leftover frame fragments are padded to the appropriate block size (Figure 12).

For example, if a block cipher fragments frames into 16 byte blocks, and a 38-byte frame is to be encrypted, the block cipher fragments the frame into two 16-byte blocks and one six-byte block. The six-byte block is padded with 10 bytes of padding to meet the 16-byte block size.


Figure 12   Block Cipher Operation
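The 38-byte example can be checked with a short sketch; the zero-byte padding used here is illustrative (real block ciphers typically use a scheme such as PKCS#7):

```python
BLOCK = 16

def pad_to_blocks(frame: bytes, block: int = BLOCK) -> list:
    """Split a frame into fixed-size blocks, padding the tail with zero
    bytes (padding scheme is illustrative)."""
    pad_len = (-len(frame)) % block
    frame += bytes(pad_len)
    return [frame[i:i + block] for i in range(0, len(frame), block)]

# The 38-byte frame from the text: two full 16-byte blocks plus a
# 6-byte fragment padded with 10 bytes.
blocks = pad_to_blocks(bytes(range(38)))
```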


The process of encryption described above for stream ciphers and block ciphers is known as Electronic Code Book (ECB) mode encryption. With ECB mode encryption, the same plain-text input always generates the same cipher-text output. As Figure 13 illustrates, the input text of "FOO" always produces the same cipher-text. This is a potential security threat because eavesdroppers can see patterns in the cipher-text and start making educated guesses about what the original plain-text is.


Figure 13   Electronic Code Book Encryption
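A toy keyed XOR stands in for a real block cipher below; any deterministic cipher exhibits the same ECB property:

```python
def toy_block_encrypt(block: bytes, key: bytes) -> bytes:
    # Toy stand-in for a real block cipher: any deterministic keyed
    # transform shows the same ECB behavior.
    return bytes(b ^ k for b, k in zip(block, key))

key = b"0123456789abcdef"       # 16-byte block key
block = b"FOO" + b"_" * 13      # the plain-text "FOO", padded to one block
c1 = toy_block_encrypt(block, key)
c2 = toy_block_encrypt(block, key)   # same plain-text, same key
# In ECB mode, identical plain-text blocks always produce identical
# cipher-text, which is what lets an eavesdropper spot patterns.
```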


There are two encryption techniques to overcome this issue:

  • Initialization vectors
  • Feedback modes

3.1.1. Initialization Vectors

An initialization vector (IV) is used to alter the key stream. The IV is a numeric value that is concatenated to the base key before the key stream is generated. Every time the IV changes, so does the key stream. Figure 14 shows the same plain-text "FOO" with the XOR function performed with the IV-augmented key stream to generate different cipher-text. The 802.11 standard recommends that the IV change on a per-frame basis. This way, if the same packet is transmitted twice, the resulting cipher-text will be different for each transmission.


Figure 14   Encryption with an Initialization Vector
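A sketch of the idea, with SHA-256 standing in as the key-stream generator (WEP itself runs RC4 over the IV concatenated with the key):

```python
import hashlib

def key_stream(base_key: bytes, iv: bytes, n: int) -> bytes:
    # Illustrative generator: the IV is concatenated with the base key
    # before the stream is produced, so a new IV yields a new stream.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(iv + base_key + bytes([counter])).digest()
        counter += 1
    return out[:n]

base = b"40-bit-key"                                # shared base key (illustrative)
s1 = key_stream(base, (1).to_bytes(3, "big"), 16)   # 24-bit IV = 1
s2 = key_stream(base, (2).to_bytes(3, "big"), 16)   # 24-bit IV = 2
# A new IV yields a new key stream, so a repeated packet encrypts differently.
```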


The RC4 IV is a 24-bit value (Figure 15) that augments a 40-bit WEP key to 64 bits and a 104-bit WEP key to 128 bits. The IV is sent in the clear in the frame header so the receiving station knows the IV value and is able to decrypt the frame (Figure 16).

Although 40-bit and 104-bit WEP keys are often referred to as 64-bit and 128-bit WEP keys, the effective key strength is only 40 bits and 104 bits, respectively, because the IV is sent unencrypted.


Figure 15   Initialization Vector in a WEP-Encrypted Frame



Figure 16   Initialization Vector in an 802.11 Protocol Decode


3.1.2. Feedback Modes

Feedback modes are modifications to the encryption process to prevent a plain-text message from generating the same cipher-text during encryption. Feedback modes are generally used with block ciphers, and the most common feedback mode is known as cipher block chaining (CBC) mode.

The premise behind CBC mode is that a plain-text block has the XOR function performed with the previous block of cipher-text. Because the first block has no preceding cipher-text block, an IV is used to change the key stream. Figure 17 illustrates the operation of CBC mode. Other feedback modes are available, and some will be discussed later in this paper.


Figure 17   CBC Mode Block Cipher
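CBC chaining can be sketched as follows, again with a toy keyed transform standing in for the block cipher:

```python
def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def toy_encrypt(block: bytes, key: bytes) -> bytes:
    # Toy keyed transform standing in for a real block cipher.
    return xor(block, key)

def cbc_encrypt(blocks, key, iv):
    out, prev = [], iv
    for block in blocks:
        prev = toy_encrypt(xor(block, prev), key)  # chain in previous cipher-text
        out.append(prev)
    return out

key = b"k" * 16
iv = b"i" * 16                 # the IV seeds the first block's chaining
same = b"SAME_BLOCK_16byt"
c = cbc_encrypt([same, same], key, iv)
# Unlike ECB, two identical plain-text blocks give different cipher-text.
```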


3.2. Statistical Key Derivation—Passive Network Attacks

In August 2001, cryptanalysts Fluhrer, Mantin, and Shamir determined that a WEP key could be derived by passively collecting particular frames from a wireless LAN. The vulnerability lies in how WEP implements the key scheduling algorithm (KSA) of the RC4 stream cipher. Several IVs (referred to as weak IVs) can reveal key bytes after statistical analysis. Researchers at AT&T/Rice University, as well as the developers of the AirSnort application, exploited this vulnerability and verified that WEP keys of either 40- or 128-bit key length can be derived after as few as 4 million frames. For high-usage wireless LANs, this translates to roughly four hours until a 128-bit WEP key is derived.

This vulnerability renders WEP ineffective. Using dynamic WEP keys can mitigate this vulnerability, but reactive efforts only mitigate known issues. To eliminate this vulnerability, a mechanism that strengthens the WEP key is required.

3.3. Inductive Key Derivation—Active Network Attacks

Inductive key derivation is the process of deriving a key by coercing information from the wireless LAN and is also referred to as an active network attack. As mentioned in the section on stream ciphers, encryption is accomplished by performing the XOR function with the stream cipher to produce the cipher-text. Inductive network attacks work on this premise.

Man-in-the-middle attacks, a form of inductive key derivation attack, are effective in 802.11 networks because of the lack of effective message integrity. The receiver of a frame cannot verify that the frame was not tampered with during its transmission. In addition, the Integrity Check Value (ICV) used to provide message integrity is based on the 32-bit cyclic redundancy check (CRC32) checksum function. The CRC32 value is vulnerable to bit-flipping attacks, which render it ineffective. With no effective mechanism to verify message integrity, wireless LANs are vulnerable to man-in-the-middle attacks, which include bit-flipping attacks and IV replay attacks.

3.3.1. Initialization Vector Replay Attacks

The initialization vector (IV) replay attack is a network attack that has been practically implemented, not just theorized. Although various forms of the network attack exist, the one that clearly illustrates its inductive nature is described below and illustrated in Figure 18:

1. A known plain-text message is sent to an observable wireless LAN client (an e-mail message)

2. The network attacker will sniff the wireless LAN looking for the predicted cipher-text

3. The network attacker will find the known frame and derive the key stream

4. The network attacker can "grow" the key stream using the same IV/WEP key pair as the observed frame

This attack is based on the knowledge that the IV and base WEP key can be reused or replayed repeatedly to generate a key stream large enough to subvert the network.


Figure 18   Initialization Vector Reuse Vulnerability


Once a key stream has been derived for a given frame size, it can be "grown" to any size required. This process is described below and illustrated in Figure 19:

1. The network attacker can build a frame one byte larger than the known key stream size; an Internet Control Message Protocol (ICMP) echo frame is ideal because the access point solicits a response

2. The network attacker then augments the key stream by one byte

3. The additional byte is guessed because only 256 possible values are possible

4. When the network attacker guesses the correct value, the expected response is received: in this example, the ICMP echo reply message

5. The process is repeated until the desired key stream length is obtained


Figure 19   "Growing" a Key Stream
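The growing loop can be sketched against a simulated oracle; all names here are illustrative:

```python
import os

# The access point acts as an oracle: a frame decrypts correctly (and, for
# an ICMP echo request, solicits a reply) only when the trailing byte was
# encrypted with the right key-stream byte.
true_stream = os.urandom(40)          # the key stream the attacker wants
known = bytearray(true_stream[:8])    # recovered from one known frame

def oracle_accepts(cipher: bytes) -> bool:
    # The attacker sends an all-zero plain-text of this length, so a
    # correctly encrypted frame equals the key stream itself.
    return cipher == true_stream[:len(cipher)]

# Grow the stream one byte at a time; each byte takes at most 256 guesses.
while len(known) < 16:
    for guess in range(256):
        if oracle_accepts(bytes(known) + bytes([guess])):  # reply observed
            known.append(guess)
            break
```

The cost is linear in the target length (at most 256 guesses per byte), which is why the attack is practical rather than merely theoretical.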


3.3.2. Bit-Flipping Attacks

Bit-flipping attacks have the same goal as IV replay attacks, but they rely on the weakness of the ICV. Although the data payload size may vary, many elements remain constant and in the same bit position. The attacker will tamper with the payload portion of the frame to modify the higher layer packet. The process for a bit-flipping attack is listed below and in Figure 20:

1. The attacker sniffs a frame on the wireless LAN

2. The attacker captures the frame and flips random bits in the data payload of the frame

3. The attacker modifies the ICV (detailed later)

4. The attacker transmits the modified frame

5. The receiver (either a client or the access point) receives the frame and calculates the ICV based on the frame contents

6. The receiver compares the calculated ICV with the value in the ICV field of the frame

7. The receiver accepts the modified frame (it passes the Layer 2 check)

8. The receiver de-encapsulates the frame and processes the Layer 3 packet

9. Because bits are flipped in the Layer 3 packet, the Layer 3 checksum fails

10. The receiver IP stack generates a predictable error

11. The attacker sniffs the wireless LAN looking for the encrypted error message

12. Upon receiving the error message, the attacker derives the key stream as with the IV replay attack


Figure 20   Bit-Flipping Attack


The basis for this attack is the failure of the ICV. The ICV is in the WEP-encrypted portion of the frame, so how is the attacker able to modify it to match the bit-flipped changes to the frame? The process of flipping bits is:

1. A given frame (F1 in Figure 21) has an ICV (C1)
(the original frame and its ICV value)

2. A new frame (F2) is generated, the same length as F1, with the bits to be flipped set
(the frame used for the bit flip)

3. Frame F3 is created by performing the XOR function on F1 and F2
(the data frame after bit flipping)

4. The ICV for F3 is calculated (C2)
(the new ICV value computed from the bit-flipped data frame)

5. ICV C3 is generated by performing the XOR function on C1 and C2
(ICV C3 is computed by XORing the original ICV C1 with the ICV C2 produced from the frame modified by bit flipping)


Figure 21   ICV Weakness
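The forgery works because CRC-32, unlike a keyed hash, is linear over XOR: for equal-length messages, crc(F1 XOR F2) equals crc(F1) XOR crc(F2) XOR crc(Z), where Z is an all-zero message of the same length. A quick check with Python's zlib (the 64-byte frames are arbitrary):

```python
import os
import zlib

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

f1 = os.urandom(64)   # original frame (F1)
f2 = os.urandom(64)   # bit-flip mask (F2)
f3 = xor(f1, f2)      # tampered frame (F3)

# Because CRC-32 is affine over XOR, a valid ICV for the tampered frame
# can be computed from per-part CRCs without knowing the WEP key:
forged_icv = zlib.crc32(f1) ^ zlib.crc32(f2) ^ zlib.crc32(bytes(64))
```

The crc(Z) term accounts for CRC-32's initial and final XOR constants; the attacker can precompute it from the frame length alone.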


3.4. Static WEP Key Management Issues

The 802.11 standard does not specify key management mechanisms. WEP is defined to support only static, preshared keys. Because 802.11 authentication authenticates a device and not the user of the device, the loss or theft of a wireless adapter becomes a security issue for the network. The loss of an adapter and the compromising of the existing key presents network administrators with the tedious task of manually rekeying all wireless devices in the network.

This task might be acceptable for small deployments but is not realistic in midsize and large deployments in which the number of wireless users can reach into the thousands. Without a mechanism to distribute or generate keys, administrators must watch wireless NICs closely.

4. Secure 802.11 Wireless LANs with Cisco Wireless Security Suite

Cisco recognizes the vulnerabilities in 802.11 authentication and data privacy. To give customers a secure wireless LAN solution that is scalable and manageable, Cisco has developed the Cisco Wireless Security Suite. This suite of security enhancements augments 802.11 security by implementing prestandards enhancements to 802.11 authentication and encryption.

Some mistakenly believe WEP to be the only component to wireless LAN security, but wireless security actually consists of three components:

  • The authentication framework
  • The authentication algorithm
  • The data privacy or encryption algorithm

All three of these components are included in the Cisco Wireless Security Suite:

  • 802.1X authentication framework—The IEEE 802.1X standard provides a framework for many authentication types at the link layer
  • Extensible Authentication Protocol (EAP) Cisco authentication algorithm—The EAP Cisco Wireless authentication type, also called Cisco LEAP, supports centralized, user-based authentication with the ability to generate dynamic WEP keys

  • Cisco Key Integrity Protocol (CKIP)—Cisco has implemented two components to augment WEP encryption (the open-standard counterpart is called TKIP):
    • Message Integrity Check (MIC)—The MIC function provides effective frame authenticity to mitigate man-in-the-middle vulnerabilities
    • Per-Packet Keying—Per-packet keying provides every frame with a new and unique WEP key that mitigates WEP key derivation attacks
    • Broadcast Key Rotation—Dynamic key rotation for broadcast and multicast traffic (the AP dynamically generates distinct keys for broadcast and multicast packets)

4.1. Cisco Wireless Security Suite Components

4.1.1. 802.1X Authentication

The 802.1X authentication framework is included in the draft for 802.11 MAC layer security enhancements currently being developed by the IEEE 802.11 Task Group i (TGi). The 802.1X framework provides the link layer with extensible authentication, normally seen in higher layers (Figure 22).


Figure 22   802.1X Layers


802.1X requires three entities:

  • The supplicant—Resides on the wireless LAN client
  • The authenticator—Resides on the access point
  • The authentication server—Resides on the RADIUS server

These entities are logical entities on the network devices. The authenticator creates a logical port per client, based on the client’s association ID (AID). This logical port has two data paths. The uncontrolled data path allows network traffic through to the network. The controlled data path requires successful authentication to allow network traffic through (Figure 23).


Figure 23   802.1X Ports


The supplicant becomes active on the medium and associates to the access point. The authenticator detects the client association and enables the supplicant’s port. It forces the port into an unauthorized state so that only 802.1X traffic is forwarded. All other traffic is blocked. The client may send an EAP Start message, although client initiation is not required (Figure 24).

The authenticator replies with an EAP Request Identity message back to the supplicant to obtain the client’s identity. The supplicant’s EAP Response packet containing the client’s identity is forwarded to the authentication server.

The authentication server is configured to authenticate clients with a specific authentication algorithm. Currently, 802.1X for 802.11 LANs does not stipulate a specific algorithm to use. However, this paper focuses on Cisco LEAP authentication and assumes that Cisco LEAP credential verification occurs.

The end result is a RADIUS-ACCEPT or RADIUS-REJECT packet from the RADIUS server to the access point. Upon receiving the RADIUS-ACCEPT packet, the authenticator transitions the client's port to an authorized state, and traffic may be forwarded.


Figure 24   802.1X and EAP Message Flow


802.1X provides the means for a wireless LAN client to communicate with an authentication server to validate the client credentials. 802.1X is extensible and allows a variety of authentication algorithms to operate over it.

4.1.2. The EAP Cisco Authentication Algorithm

Cisco designed the Cisco LEAP authentication algorithm to provide easy-to-implement, strong authentication. Cisco LEAP, like other EAP authentication variants, is designed to function on top of the 802.1X authentication framework. What makes the Cisco LEAP algorithm so compelling is its robust features.

4.1.2.1. Mutual Authentication

Many authentication algorithms exist, each with an ideal use. In the world of wireless LANs, the client needs to be certain that it is communicating with the intended network device. The lack of physical connectivity between the client and the network requires the client to authenticate the network as well as to be authenticated by the network. Therefore, Cisco has designed Cisco LEAP to support mutual authentication.

4.1.2.2. User-Based Authentication

802.11 authentication is device-based. The user of the device is invisible to the authenticator, and so unauthorized users can access the network simply by gaining access to an authorized device. Laptops with 802.11 NICs using static WEP with 802.11 authentication create network vulnerability if the laptop is stolen or lost. Such an event would require the network administrator to rapidly rekey the wireless network and all clients.

The scenario is all too common and is a major barrier to deployment for wireless LANs. Cisco has responded by implementing Cisco LEAP, which is based on authenticating the user rather than the wireless LAN device.

4.1.2.3. Dynamic WEP Keys

User-based mutual authentication provides an easy-to-administer and secure authentication scheme, yet a mechanism is still needed to manage WEP keys efficiently. This need has driven the requirement for the authentication algorithm to generate keying material for dynamic WEP keys. Cisco LEAP employs its user-based nature to generate unique keying material for each client. This relieves network administrators from the burden of managing static keys and manually rekeying as needed.

802.1X session timeouts force the client to reauthenticate to maintain network connectivity. Although reauthentication is transparent to the client, the process of reauthentication in an algorithm that supports dynamic WEP will generate new WEP keys at every reauthentication interval. This is an important feature in mitigating statistical key derivation attacks and is critical for Cisco WEP enhancements (described in detail later).

4.1.3. Data Privacy with TKIP

Previous sections of this paper have highlighted network attacks on 802.11 security and shown WEP to be ineffective as a data-privacy mechanism. Cisco has implemented prestandards enhancements to the WEP protocol that mitigate existing network attacks and address its shortcomings. These enhancements to WEP are collectively known as the Temporal Key Integrity Protocol (TKIP). TKIP is a draft standard with Task Group i of the IEEE 802.11 working group. Although TKIP is not a ratified standard, Cisco has implemented a prestandards version of TKIP to protect existing customer investments in Cisco Aironet® wireless products.

TKIP provides two major enhancements to WEP:

  • A message integrity check (MIC) function on all WEP-encrypted data frames
  • Per-packet keying on all WEP-encrypted data frames

Cisco also adds a third feature not specified in the IEEE 802.11 Task Group i draft: broadcast key rotation.

4.1.3.1. Message Integrity Check

The MIC augments the ineffective integrity check function (ICV) of the 802.11 standard. The MIC is designed to solve two major vulnerabilities:

  • Initialization vector/base key reuse—The MIC adds a sequence number field to the wireless frame. The access point will drop frames received out of order.
  • Frame tampering/bit flipping—The MIC feature adds a MIC field to the wireless frame. The MIC field provides a frame integrity check not vulnerable to the same mathematical shortcomings as the ICV.

Figure 25 shows an example of a WEP data frame. The MIC adds two new fields to the wireless frame: a sequence number and the integrity check field (Figure 26)


Figure 25   Example of WEP Frame Format



Figure 26   Example of WEP Frame Format with MIC Enabled


The sequence number is a sequential counter that increases in value on a per-frame, per-association basis. The access point will discard frames received that have an out-of-order sequence number. The MIC field is calculated based on the fields in Figure 27.


Figure 27   MIC Value Derivation


Modifications to any of the fields will result in a discrepancy in the calculated MIC on the receiver. As a result, the receiver will drop the frame.
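A minimal receiver-side sketch of the sequence-number check and MIC comparison described above. HMAC-SHA1 is used here only as a stand-in keyed hash, and the protected-field layout is an assumption for illustration; neither reflects the actual Cisco TKIP MIC algorithm.

```python
import hashlib
import hmac

def mic(key: bytes, dst: bytes, src: bytes, payload: bytes) -> bytes:
    # Keyed hash over the protected fields; HMAC-SHA1 is a stand-in for
    # the real MIC function, which this paper does not specify.
    return hmac.new(key, dst + src + payload, hashlib.sha1).digest()[:8]

def accept_frame(key: bytes, dst: bytes, src: bytes, payload: bytes,
                 seq: int, mic_field: bytes, last_seq: int) -> bool:
    if seq <= last_seq:          # out-of-order sequence number: drop the frame
        return False
    # Any tampering with the protected fields changes the computed MIC
    return hmac.compare_digest(mic(key, dst, src, payload), mic_field)
```

Both checks act at the receiver: a replayed or reordered frame fails the sequence test before the MIC is even computed.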

4.1.3.2. Per-Packet Keying

The vulnerabilities described in the Fluhrer, Mantin, and Shamir paper, as well as the AirSnort tool that implements the attack, render WEP ineffective for data privacy and encryption. Using WEP key rotation schemes via 802.1X reauthentication can mitigate the vulnerabilities but does not resolve the underlying weaknesses.

The IEEE has adopted into the Task Group i a WEP enhancement that changes the transmit WEP key on a per-packet basis. Cisco was instrumental in devising and co-developing this enhancement and has implemented it on Cisco clients and access points.

In a Cisco implementation of 802.11 WEP encryption, an IV is generated randomly and concatenated with the WEP key. The two values are processed by the WEP algorithm to generate the key stream. The key stream is mixed with the plain-text to generate the cipher-text.

The Cisco implementation of per-packet keying augments the process by hashing the WEP key and the IV to create a new packet key. The original IV is then concatenated with the packet key and processed normally (Figure 28).
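The key-mixing step can be sketched as follows. SHA-256 stands in for Cisco's undisclosed hash function, and the key and IV sizes are illustrative, so this shows only the structure: hash the IV with the base key, then concatenate the original IV with the resulting packet key.

```python
import hashlib

def per_packet_key(base_key: bytes, iv: bytes) -> bytes:
    # Hash the IV together with the base WEP key to derive a fresh
    # packet key (SHA-256 is a stand-in for the actual hash function)
    return hashlib.sha256(iv + base_key).digest()[:len(base_key)]

def rc4_seed(base_key: bytes, iv: bytes) -> bytes:
    # The original IV is concatenated with the packet key and then
    # processed as in standard WEP
    return iv + per_packet_key(base_key, iv)
```

Because the packet key depends on the IV, two frames with different IVs never feed the same seed to the WEP algorithm, even under one base key.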


Figure 28   Per-Packet Keying


To effectively use the 24-bit IV space, Cisco has also adopted IV sequencing. Cisco clients and access points implement IV sequencing by starting an IV counter and increasing the IV value by one for each frame. If the client and access point both initiated their IV counters at zero, they would send the same IV/base WEP key pairs through the hashing algorithm and generate the same packet keys. To overcome this problem, Cisco IV sequencing is directional. For example, client-to-access-point frames may use an even-numbered IV, and access-point-to-client frames may use an odd-numbered IV.

Per-packet keying will not generate the same packet key as long as unique IV/base WEP key pairs are used. With a static WEP key, this only allows for 2^24 possible unique packet keys. Because the IV space recycles when it is exhausted, IV/base WEP key pairs will be reused. To get around this limitation, the base WEP key should be changed before the IV space is exhausted. Cisco LEAP session timeouts accommodate this requirement. Once the base WEP key is changed, new IV/base WEP key pairs are used, and unique packet keys will be generated.
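The directional IV scheme and its wraparound can be sketched as below. The even/odd assignment follows the example in the text; the class and method names are invented for illustration.

```python
class DirectionalIVCounter:
    """24-bit IV counter: even IVs for client-to-AP frames,
    odd IVs for AP-to-client frames."""

    IV_SPACE = 1 << 24  # 2^24 possible IV values

    def __init__(self, is_client: bool):
        self.iv = 0 if is_client else 1

    def next_iv(self) -> int:
        value = self.iv
        # Step by two to stay on one parity; wrap when the space is exhausted
        self.iv = (self.iv + 2) % self.IV_SPACE
        return value
```

Once the counter wraps, IV/base-key pairs repeat, which is exactly why the base WEP key must be rotated before exhaustion.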

4.1.3.3. Broadcast Key Rotation

802.1X authentication types that support user-based WEP keys provide WEP keys for unicast traffic only. To provide encryption for broadcast and multicast traffic, the Cisco Wireless Security Suite requires that one of two options be selected:

  • Employ a static broadcast key configured on the access point
  • Enable broadcast key rotation for dynamic broadcast key generation

A static broadcast key must be configured on an access point for 802.1X clients to receive broadcast and multicast messages. In wireless LAN deployments in which Cisco TKIP enhancements are implemented, a static broadcast key will go through the per-packet keying process. This reduces the opportunity for statistical key derivation attacks, but because the base broadcast key remains static, the IV space will recycle, causing key streams to be reused. Statistical attacks may take much longer to execute, but they are still possible.

Static broadcast key deployments might be required in some instances. Broadcast keys are sent from the access point to the client encrypted with the client’s unicast WEP key. Because the broadcast keys are distributed after authentication, access points do not have to be configured with the same broadcast key.

Cisco recommends enabling broadcast key rotation on the access points. The access point generates broadcast WEP keys using a seeded pseudorandom number generator (PRNG). The access point rotates the broadcast key after a configured broadcast WEP key timer expires. This process should generally be in sync with the timeouts configured on the RADIUS servers for user reauthentication.
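A minimal sketch of timer-driven broadcast key rotation as described above. The rotation interval, key length, and injectable clock are illustrative choices, not the access point's actual implementation.

```python
import secrets
import time

class BroadcastKeyRotator:
    """Rotate the broadcast WEP key after a configured timer expires."""

    def __init__(self, interval_s: float, key_len: int = 13,
                 clock=time.monotonic):
        self.interval_s = interval_s
        self.key_len = key_len
        self.clock = clock
        self._rotate()

    def _rotate(self):
        # secrets stands in for the access point's seeded PRNG
        self.key = secrets.token_bytes(self.key_len)
        self.next_rotation = self.clock() + self.interval_s

    def current_key(self) -> bytes:
        if self.clock() >= self.next_rotation:
            self._rotate()
        return self.key
```

In practice the interval would be aligned with the RADIUS reauthentication timeout, as the text recommends.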

Broadcast key rotation is designed for 802.1X-enabled access point deployments. In mixed static WEP/802.1X deployments, broadcast key rotation may cause connectivity problems in static WEP clients. Therefore, Cisco recommends that broadcast key rotation be enabled when the access point services an 802.1X exclusive wireless LAN.

5. Cisco LEAP Architecture

The EAP Cisco Wireless or Cisco LEAP algorithm provides user-based mutual authentication. It also provides keying material to the client and RADIUS server for the generation of WEP keys. This section will examine Cisco LEAP, from protocol message exchanges to how to implement the algorithm on RADIUS servers, access points, and client devices.

5.1. Cisco LEAP Authentication Process

Cisco LEAP is a user-based authentication algorithm that is secure enough to implement in hostile wireless LAN deployments. Based on these user requirements, and the need for single-sign-on (SSO) capabilities, Cisco built Cisco LEAP around the premise of Microsoft Challenge Handshake Authentication Protocol (MS-CHAP).

Cisco LEAP is a password-based algorithm. It preserves the integrity of the password during wireless authentication by converting the password to a secret key value so that wireless eavesdroppers cannot sniff Cisco LEAP authentication and see a user’s password transmitted across the wireless link. The secret key value is the result of a mathematical function called a hash function. A hash function is an algorithm that one-way encrypts data. The data cannot be decrypted to derive the original input. Cisco LEAP uses secrets in the form of the Microsoft NT key format. The Windows NT key is a Message Digest Algorithm 4 (MD4) hash of an MD4 hash of the user’s password (Figure 29).
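The double-hash construction can be sketched as follows. SHA-256 stands in for MD4, which is absent from many modern hashlib builds; only the structure (a hash of a hash of the UTF-16LE password) follows the text.

```python
import hashlib

def nt_key(password: str, hash_fn=hashlib.sha256) -> bytes:
    # Windows NT key = hash(hash(UTF-16LE password)); the real algorithm
    # is MD4, for which SHA-256 is a stand-in here
    first = hash_fn(password.encode("utf-16-le")).digest()
    return hash_fn(first).digest()
```

The password itself never appears on the air; only values derived from this key take part in the challenge/response exchange.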


Figure 29   Windows NT Key


Use of the Windows NT key allows Cisco LEAP to use existing Windows NT Domain Services authentication databases as well as Windows 2000 Active Directory databases. In addition, any Open Database Connectivity (ODBC) database that uses MS-CHAP passwords can also be used.

Cisco has developed drivers for most versions of Microsoft Windows (Windows 95, 98, Me, 2000, NT and XP) and uses the Windows logon as the Cisco LEAP logon. A software shim in the Windows logon allows the username and password information to be passed to the Cisco Aironet client driver. The driver will convert the password into a Windows NT key and hand the username and Windows NT key to the Cisco NIC. The NIC executes 802.1X transactions with the AP and the authentication, authorization, and accounting (AAA) server.


Note:    Neither the password nor the password hash is ever sent across the wireless medium.


Reauthentication and subsequent WEP key derivation follow a similar process. The transaction is WEP-encrypted with the existing client WEP key, and the client’s port on the access point does not transition to a blocking state. It will remain in the forwarding state until the client explicitly sends an EAP Logoff message or fails reauthentication.

5.2. Cisco LEAP Deployment

Cisco designed Cisco LEAP to provide strong, easy-to-deploy, and easy-to-administer wireless security. Cisco offers third-party NIC support and RADIUS support to allow customers to use their existing investments in wireless clients as well as existing RADIUS servers. In addition, Cisco provides deployment best practices guidance to ensure customer success with Cisco Aironet products and the Cisco LEAP algorithm.

5.2.1. Third-Party Support

Cisco offers Cisco LEAP RADIUS support on the:

  • Cisco Secure Access Control Server (ACS) v2.6 and v3.0 platforms
  • Cisco Access Registrar v1.7 or later

To service customers with existing RADIUS servers, Cisco has partnered with Funk Software and Interlink Networks. Cisco LEAP support is available on:

  • Funk Steel Belted RADIUS v3.0
  • Interlink Merit v5.1

In addition, third-party client support is available from Apple Computers for its AirPort wireless adapters.

In addition, the following sections present some salient points for deploying wireless LANs.

5.2.2.1. Use Strong Passwords for LEAP Authentication

Cisco LEAP is a password-based algorithm. To minimize the possibility of a successful dictionary attack, use strong passwords, which are difficult to guess. Some characteristics of strong passwords include:

  • A minimum of ten characters
  • A mixture of uppercase and lowercase letters
  • At least one numeric character or one non-alphanumeric character (Example: !#@$%)
  • No form of the user’s name or user ID
  • A word that is not found in the dictionary (domestic or foreign)

Examples of strong passwords:

  • cnw84FriDAY, from “cannot wait for Friday”
  • 4yosc10cP!, from “for your own safety choose 10 character password!”
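The characteristics above can be expressed as a simple checker. The dictionary-word test is omitted from this sketch, and the function name is invented for illustration.

```python
import re

def is_strong(password: str, username: str = "") -> bool:
    """Check a password against the strong-password characteristics
    listed above (dictionary check omitted)."""
    return (
        len(password) >= 10                                   # minimum length
        and re.search(r"[a-z]", password) is not None         # lowercase
        and re.search(r"[A-Z]", password) is not None         # uppercase
        and re.search(r"[0-9]|[^A-Za-z0-9]", password) is not None  # digit/symbol
        and (not username or username.lower() not in password.lower())
    )
```

Both example passwords from the text pass this check.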

5.2.2.2. Avoid Using MAC and Cisco LEAP Authentication on the Same RADIUS Server

In scenarios where MAC address authentication uses the same ACS as Cisco LEAP, be sure that each MAC address entry is configured with a separate, strong password for the required MS-CHAP/CHAP field. If the MAC address itself is used as the password, an eavesdropper can spoof a valid MAC address and use it as a username and password combination for Cisco LEAP authentication.

Refer to http://www.cisco.com/warp/public/cc/pd/witc/ao1200ap/prodlit/wrsec_an.htm for Cisco Wireless Security Suite configuration details.

5.2.2.3. Use RADIUS Session Timeouts to Rotate WEP Keys

Cisco LEAP and EAP Transport Layer Security (TLS) support session expiration and 802.1X reauthentication by using the RADIUS session timeout option (RADIUS Internet Engineering Task Force option 27). To avoid IV reuse (IV collisions), rotate the base WEP key before the IV space is exhausted.

For example, the worst-case scenario for a reauthentication time is a station in a service set running at the maximum packet rate (for 802.11 stations, 1000 frames per second):

  • 2^24 frames (16,777,216) / 1000 frames per second ~= 16,777 seconds or 4 hours 40 minutes.

Normal frame rates will vary by implementation, but this example serves as a guideline for determining the session timeout value.
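The guideline calculation above can be expressed directly:

```python
def rekey_interval_seconds(iv_bits: int = 24, frames_per_second: int = 1000) -> int:
    """Worst-case seconds to exhaust the IV space at a given frame rate."""
    return (2 ** iv_bits) // frames_per_second

# 2^24 frames / 1000 frames per second = 16,777 seconds (about 4 h 40 min)
```

Setting the RADIUS session timeout below this value ensures the base WEP key rotates before any IV is reused.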

5.2.2.4. Deploy Cisco LEAP on a Separate Virtual LAN (VLAN)

Deploying Cisco LEAP wireless LAN users on a separate VLAN allows Layer 3 access lists to be applied to the wireless LAN VLAN if required, without affecting wired clients. In addition, intrusion-detection systems can be installed on wireless LAN VLANs to monitor wireless LAN traffic.

6. What Lies Ahead

WEP encryption and 802.11 authentication are known to be weak. The IEEE is enhancing WEP with TKIP and providing robust authentication options with 802.1X to make 802.11-based wireless LANs secure. At the same time, the IEEE is looking toward stronger encryption mechanisms. The IEEE has adopted the use of the Advanced Encryption Standard (AES) in the data-privacy section of the proposed 802.11i standard.

6.1. AES Overview

The Advanced Encryption Standard (AES) is the next-generation encryption function approved by the National Institute of Standards and Technology (NIST). NIST solicited the cryptography community for new encryption algorithms. The algorithms had to be fully disclosed and available royalty free. The NIST judged candidates on cryptographic strength as well as practical implementation. The finalist, and adopted method, is known as the Rijndael algorithm.

Like most block ciphers, AES requires a feedback mode to avoid the risks associated with electronic code book (ECB) mode. The IEEE is deciding which feedback mode to use for AES encryption. The two contenders are:

  • Offset code book (OCB)
  • Cipher block chaining counter mode (CBC-CTR) with cipher block chaining message authenticity check (CBC-MAC), collectively known as CBC-CCM

The two modes are similar but differ in implementation and performance.

6.1.1. AES-OCB Mode

AES-OCB is a mode that operates by augmenting the normal encryption process by incorporating an offset value. The routine is initiated with a unique nonce (the nonce is a 128-bit number) used to generate an initial offset value. The nonce has the XOR function performed with a 128-bit string (referred to as value L). The output of the XOR is AES-encrypted with the AES key, and the result is the offset value. The plain-text data has the XOR function performed with the offset and is then AES-encrypted with the same AES key. The output then has the XOR function performed with the offset once again. The result is the cipher-text block to be transmitted. The offset value changes after processing each block by having the XOR function performed on the offset with a new value of L (Figure 30).
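The per-block OCB steps described above can be sketched as follows. A one-way hash stands in for the AES block operation, so this illustrates only the XOR/encrypt/XOR structure, not an invertible cipher; all names are illustrative.

```python
import hashlib

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def toy_block_encrypt(key: bytes, block: bytes) -> bytes:
    # One-way stand-in for AES on a 16-byte block (not a real cipher)
    return hashlib.sha256(key + block).digest()[:16]

def ocb_initial_offset(key: bytes, nonce: bytes, L: bytes) -> bytes:
    # Offset = E_K(Nonce XOR L), per the description above
    return toy_block_encrypt(key, xor(nonce, L))

def ocb_encrypt_block(key: bytes, offset: bytes, plaintext: bytes) -> bytes:
    # C = Offset XOR E_K(P XOR Offset)
    return xor(offset, toy_block_encrypt(key, xor(plaintext, offset)))
```

After each block, the offset would be updated by XORing it with a new value of L, so no two blocks share an offset.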


Figure 30   AES-OCB Encryption


OCB mode also includes a MIC function. The MIC is calculated by performing the XOR function on the following values:

  • All plain-text blocks except the final one
  • The final plain-text block with the XOR function performed with the appropriate offset value
  • The final cipher-text block
  • The final offset value

The result from this XOR function is AES-encrypted using the AES key. The first 64 bits of the resulting 128-bit output are the MIC value inserted into the AES-encrypted frame (Figure 31). Note that the MIC is not included in the encrypted portion of the frame. Encrypting the MIC is not required because the MIC itself is the result of AES encryption.


Figure 31   AES-Encrypted Frame


AES-OCB is a new mode, which the cryptographic community regards as an inherent weakness. However, the mode’s author, Phil Rogaway, a well-respected member of that community, has cryptographically proven its strength. OCB mode is known to be efficient and fast. One major benefit is that the MIC can be calculated in the same processing function as the encryption, minimizing encryption overhead and maximizing data throughput.

6.1.2. AES-CCM Mode

AES-CCM mode is an alternative to OCB mode for AES encryption. CCM mode is the combination of cipher block chaining counter mode (CBC-CTR) and CBC message authenticity check (CBC-MAC). The functions are combined to provide encryption and message integrity in one solution.

CBC-CTR encryption operates by using IVs to augment the key stream. The IV increases by one after encrypting each block. This provides a unique key stream for each block (Figure 32).


Figure 32   CBC-CTR Encryption


CBC-MAC operates by using the result of CBC encryption over frame length, destination address, source address, and data. The resulting 128-bit output is truncated to 64 bits for use in the transmitted frame.
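Both halves of CCM can be sketched in the same toy style as the OCB example, again with a hash standing in for AES, so this shows structure rather than real encryption; the input layout of the MAC is a simplification.

```python
import hashlib

def toy_block_encrypt(key: bytes, block: bytes) -> bytes:
    # Stand-in for AES on one 16-byte block (not a real cipher)
    return hashlib.sha256(key + block).digest()[:16]

def ctr_keystream(key: bytes, iv: int, nblocks: int) -> bytes:
    # The IV increases by one after each block, giving a unique
    # key stream per block, as described for CBC-CTR above
    out = b""
    for i in range(nblocks):
        out += toy_block_encrypt(key, (iv + i).to_bytes(16, "big"))
    return out

def cbc_mac(key: bytes, header_and_data: bytes) -> bytes:
    # CBC chaining over length/addresses/data, truncated from 128 to 64 bits
    mac = bytes(16)
    for i in range(0, len(header_and_data), 16):
        block = header_and_data[i:i + 16].ljust(16, b"\x00")
        mac = toy_block_encrypt(key, bytes(x ^ y for x, y in zip(mac, block)))
    return mac[:8]
```

The cost the text notes is visible here: every frame runs the block operation once per block for encryption and again per block for the MAC.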

AES-CCM uses cryptographically known functions but has the weakness of requiring two operations for encryption and message integrity. This is computationally expensive and adds a significant amount of overhead to the encryption process.

7. Summary

Wireless LAN deployments should be made as secure as possible. Standard 802.11 security is weak and vulnerable to numerous network attacks. This paper has highlighted these vulnerabilities and described how the Cisco Wireless Security Suite can augment 802.11 security to create secure wireless LANs.

Some Cisco security enhancement features might not be deployable in some situations because of device limitations, such as application-specific devices (ASDs, for example 802.11 phones capable of static WEP only) or mixed-vendor environments. In such cases, it is important that the network administrator understand the potential WLAN security vulnerabilities.

Cisco strives to educate and inform customers and clients about Cisco wireless LAN solutions, and to provide design and deployment guidance to allow them to make decisions that best suit their needs.

Cisco recommends using the Cisco Wireless Security Suite to provide wireless LAN users with the most secure environment possible—abandoning legacy authentication and encryption, wherever possible, in favor of strong authentication and encryption.

Cisco is committed to providing customers with interoperable wireless LAN solutions. The Cisco Wireless Security Suite offers many prestandard features that will be upgradeable to interoperable versions once the standards are ratified. This arrangement allows for deployment of secure wireless LANs today with the prospect of interoperable wireless LANs tomorrow.

8. Appendix A—EAP Authentication Types

8.1. EAP Transport Layer Security

EAP Transport Layer Security (TLS) (RFC2716) is a Microsoft-supported EAP authentication algorithm based on the TLS protocol (RFC2246). TLS is the current version of Secure Socket Layer (SSL) used in most Web browsers for secure Web application transactions. TLS has proved to be a secure authentication scheme and is now available as an 802.1X EAP authentication type. EAP-TLS is supported in the Microsoft XP platform, and support is planned for legacy Microsoft operating systems as well. Including a supplicant on the client operating system eases deployment and alleviates single-vendor constraints.

8.1.1. TLS Overview

EAP-TLS is based on SSL v3.0. To better understand EAP-TLS operation, this section focuses on the operation of TLS with respect to SSL. TLS is designed to provide secure authentication and encryption for a TCP/IP connection. To provide this functionality, TLS comprises three protocols:

  • Handshake protocol—The handshake protocol negotiates the parameters for the SSL session. The SSL client and server negotiate the protocol version and encryption algorithms, authenticate each other, and derive encryption keys.
  • Record protocol—The record protocol facilitates encrypted exchanges between the SSL client and the server. The negotiated encryption scheme and encryption keys are used to provide a secure tunnel for application data between the SSL endpoints.
  • Alert protocol—The alert protocol is the mechanism used to notify the SSL client or server of errors as well as session termination.

TLS authentication is generally split into two methods: server-side authentication and client-side authentication. Server-side authentication uses public key infrastructure (PKI), namely PKI certificates. Client-side authentication can also use PKI certificates, but this is optional. EAP-TLS uses client-side certificates.

8.1.2. PKI and Digital Certificates

PKI encryption is based on asymmetric encryption keys. A PKI user has two keys: a public key and a private key. Any data encrypted with the public key can be decrypted only with the private key, and vice versa. For example: George gives Martha his public key. Martha then sends George an e-mail encrypted with his public key. For George to read the message, he has to decrypt the message with his private key. Because George is the only person with access to his private key, only he can decrypt the message (Figure 33).
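The George/Martha exchange can be illustrated with textbook RSA and deliberately tiny primes (real PKI keys are far larger; the numbers here are the standard classroom values, not a usable keypair):

```python
# Toy RSA keypair for George
p, q = 61, 53
n = p * q                          # public modulus (3233)
e = 17                             # George's public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # George's private exponent

message = 42
ciphertext = pow(message, e, n)    # Martha encrypts with George's public key
recovered = pow(ciphertext, d, n)  # only George's private key decrypts it
```

Anyone can produce `ciphertext` from the public key, but only the holder of `d` can recover the message, which is exactly the asymmetry the paragraph describes.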


Figure 33   Public Key Encryption


Digital certificates are data structures distributed by a certificate authority that bind a public key to a user. A digital certificate is generally made up of the following pieces of information:

  • Certificate version
  • Serial number
  • Certificate issuer
  • User
  • User’s public key
  • Validity period
  • Optional extensions
  • Signature algorithm
  • Signature

The digital signature is derived by combining the certificate version, serial number, issuer, user, user’s public key, and validity period and running the values through a keyed hash function. The certificate authority keys the hash with its own private key (Figure 34).
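A sketch of the keyed hash over the certificate fields. HMAC stands in for the certificate authority's private-key signature operation, and the field encoding is invented for illustration.

```python
import hashlib
import hmac

def sign_certificate(ca_key: bytes, version: bytes, serial: bytes,
                     issuer: bytes, user: bytes, pubkey: bytes,
                     validity: bytes) -> bytes:
    # Combine the listed certificate fields and run them through a keyed
    # hash; a real CA would sign with its asymmetric private key instead
    fields = b"|".join([version, serial, issuer, user, pubkey, validity])
    return hmac.new(ca_key, fields, hashlib.sha256).digest()
```

Any change to a covered field (for example, a tampered validity period) yields a different signature, which is how verifiers detect forgery.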


Figure 34   Digital Signature


8.1.3. TLS Authentication Process

The TLS process begins with the handshake process:

1. The SSL client connects to a server and makes an authentication request

2. The server sends its digital certificate to the client

3. The client verifies the certificate’s validity and digital signature

4. The server requests client-side authentication

5. The client sends its digital certificate to the server

6. The server verifies the certificate’s validity and digital signature

7. The encryption and message integrity schemes are negotiated

8. Application data is sent over the encrypted tunnel via the record protocol
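For comparison, Python's ssl module sets up the client side of this handshake; the certificate file paths in the comment are hypothetical.

```python
import ssl

# Client-side TLS context mirroring the handshake steps above
ctx = ssl.create_default_context()  # server certificate is required and verified (steps 2-3)

# Client-side authentication (steps 4-5) would load a certificate and key, e.g.:
# ctx.load_cert_chain("client-cert.pem", "client-key.pem")  # hypothetical paths
```

Cipher negotiation (step 7) and the encrypted record exchange (step 8) happen automatically once a socket is wrapped with this context.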

8.1.4. EAP-TLS Authentication Process

The EAP-TLS authentication process is as follows (Figure 35):

1. The client sends an EAP Start message to the access point

2. The access point replies with an EAP Request Identity message

3. The client sends its network access identifier (NAI), which is its username, to the access point in an EAP Response message

4. The access point forwards the NAI to the RADIUS server encapsulated in a RADIUS Access Request message

5. The RADIUS server will respond to the client with its digital certificate

6. The client will validate the RADIUS server’s digital certificate

7. The client will reply to the RADIUS server with its digital certificate

8. The RADIUS server will validate the client’s credentials against the client digital certificate

9. The client and RADIUS server derive encryption keys

10. The RADIUS server sends the access point a RADIUS ACCEPT message, including the client’s WEP key, indicating successful authentication

11. The access point sends the client an EAP Success message

12. The access point sends the broadcast key and key length to the client, encrypted with the client’s WEP key.


Figure 35   EAP-TLS Authentication Process


8.2. EAP SIM Architecture

The EAP subscriber identity module (SIM) authentication algorithm is designed to provide per-user/per-session mutual authentication between a wireless LAN (WLAN) client and an AAA server. It also defines a method for generating the master key used by the client and AAA server for the derivation of WEP keys. The Cisco implementation of EAP SIM authentication is based on the most recent IEEE draft protocol. This section will take a closer look at EAP SIM, from protocol message exchanges to how to implement EAP SIM on the AAA servers, access points, and client devices.

8.2.1. Global System for Mobile Communications

EAP SIM authentication is based on the authentication and encryption algorithms stored on the Global System for Mobile Communications (GSM) SIM, which is a Smartcard designed according to the specific requirements detailed in the GSM standards. GSM authentication is based on a challenge-response mechanism and employs a shared secret key, Ki, which is stored on the SIM and otherwise known only to the GSM operator’s Authentication Center (AuC). When a GSM SIM is given a 128-bit random number (RAND) as a challenge, it calculates a 32-bit response (SRES) and a 64-bit encryption key (Kc) using an operator-specific confidential algorithm. In GSM systems, Kc is used to encrypt mobile phone conversations over the air interface.

8.2.2. EAP SIM Authentication Process

EAP SIM authentication provides a hardware-based authentication method secure enough to implement in potentially hostile public wireless LAN deployments. It allows GSM mobile operators to reuse their existing authentication infrastructure for providing access to wireless networks, mainly in public access “hot spots." EAP SIM combines the data from several GSM “triplets" (RAND, SRES, Kc), obtained from an AuC, to generate a more secure session encryption key. EAP SIM also enhances the basic GSM authentication mechanism by providing for mutual authentication between the client and the AAA server.

On the client side, the EAP SIM protocol, as well as the code needed to interface with a Smartcard reader and the SIM, is implemented in the EAP SIM supplicant. The supplicant code is linked into the EAP framework provided by the operating system; currently, supplicants exist for Microsoft Windows XP and 2000. The EAP framework handles EAP protocol messages and communications between the supplicant and the AAA server; it also installs any encryption keys provided by the supplicant in the client’s WLAN radio card.

On the network side, the EAP SIM authenticator code resides on the service provider’s AAA server. Besides handling the server side of the EAP SIM protocol, this code is also responsible for communicating with the service provider’s AuC. In a Cisco implementation of EAP SIM, the AAA server communicates with a Cisco IP Transfer Point (ITP), which acts as a gateway between the IP and Signaling System 7 (SS7) networks. The Cisco ITP translates messages from the AAA server into standard GSM protocol messages, which are then sent to the AuC.

802.1X authentication using the Cisco implementation of EAP SIM proceeds as follows (Figure 36):

1. An EAP-over-LAN (EAPOL) Start message from the client starts the authentication protocol and indicates to the access point that the client wants to authenticate using EAP.

2. In response, the access point sends an EAP Identity Request message to the client. At this point, the client has not yet been assigned an IP address, and the access point blocks all messages from the client except for those necessary for authentication (EAP and EAP SIM protocol messages).

3. The client responds to the access point’s request with an EAP Identity Response message containing the user’s network identity. This identity is read from the SIM card, using a card reader attached to (or incorporated into) the client. It is of the form 0<IMSI>@<realm>, where <IMSI> is the International Mobile Subscriber Identity (as used in GSM networks) and <realm> is the operator’s domain name string (voicestream.com, for example). The network identity is stored on the SIM and determined by the service provider; it may differ from the user’s login credentials and is used mainly to authenticate access to the WLAN.

4. The access point forwards the EAP Identity Response to the AAA server using a RADIUS protocol message with Cisco vendor-specific attributes.

5. The AAA server determines that the user intends to use EAP SIM authentication based on its configuration parameters or on the identity passed to it and invokes its EAP SIM extension code. This code then starts the EAP SIM extension protocol by sending an EAP SIM Start request back to the client. It may also generate a GetAuthInfo message to the AuC requesting a (configurable) number of GSM triplets; this step may be delayed until after a response to the EAP SIM Start message is received to ensure that the client indeed supports the EAP SIM protocol.


Note:    Depending on the realm (domain) contained in the identity string, the AAA request might need to be proxied from the local AAA server to the service provider’s AAA server.


6. The GetAuthInfo message is routed to the IP Transfer Point Mobile Application Part (ITP MAP) proxy, which acts as a gateway to the service provider’s SS7 network. The ITP translates the request into a standard GSM MAP GetAuth request before sending it to the AuC.

7. On receiving the EAP SIM Start request, the client reads a 128-bit (16-byte) random number generated on the SIM and passes it back to the AAA server in the EAP SIM Start response.

8. Once the AAA server has received the client’s EAP SIM Start response and the response from the AuC containing a sufficient number of GSM triplets (typically two to three), it then constructs an EAP SIM Challenge message that contains the random numbers (RAND) received from the AuC and a 160-bit (20-byte) message-authentication code (MAC_RAND).

9. The client passes the EAP SIM Challenge request to the SIM card, which first calculates its own MAC_RAND. The AAA server is validated if the result matches the MAC_RAND received from the server. Only then does the SIM calculate the GSM result (SRES) and encryption key (Kc) for each of the RANDs it received, as well as a 160-bit (20-byte) message-authentication code (MAC_SRES) based on these results and the user identity. Only MAC_SRES is returned to the AAA server (and therefore exposed on the radio link) in the EAP SIM Challenge response. The SIM also calculates cryptographic keying material, using a secure hash function on the user identity and the GSM encryption keys, for the derivation of session encryption keys.

10. When the AAA server receives the client’s EAP SIM Challenge response, it calculates its own MAC_SRES and compares it to the one received from the client. If both match, the client is authenticated and the AAA server also calculates the session encryption keys. It then sends a RADIUS ACCEPT message to the access point, which contains an encapsulated EAP Success message and the (encrypted) client session key.

11. The access point installs the session key for the client’s association ID and forwards the EAP Success message to the client. It then sends its broadcast key, encrypted with the client’s session key, in an EAP Key message to the client. It also unblocks the data path for the client so that IP traffic can flow between the client and the rest of the network.

12. Upon receiving the EAP Success message, the EAP SIM supplicant returns the session encryption key calculated by the SIM to the EAP framework, which installs it on the client’s WLAN radio card.

13. The client is now able to securely send and receive network traffic.


Figure 36   EAP SIM Authentication



Note:    The client’s session key is never sent across the radio link and can therefore not be snooped by network attackers listening in on the message traffic. Similarly, the results of the GSM authentication algorithm (SRES, Kc) are never exposed to listeners over the radio link. EAP SIM, therefore, exposes even less information to network attackers than the standard GSM authentication for wireless phones.


All message authentication codes described above are calculated using a secure keyed hashing algorithm, HMAC-SHA1 (steps 4 and 5). A hash function is an algorithm that one-way encrypts data so that it cannot be decrypted to derive the original input data. The algorithm uses the user’s identity, the random number generated by the SIM, the GSM encryption keys Kc, and other data to calculate the authentication codes and encryption keys used in EAP SIM.
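A sketch of the keyed-hash computation over the challenge results. The paper does not give the draft's exact input layout, so the concatenation below is an assumption; only the use of HMAC-SHA1 follows the text.

```python
import hashlib
import hmac

def mac_sres(master_key: bytes, sres_values, identity: bytes) -> bytes:
    # HMAC-SHA1 over the concatenated SRES results and the user identity;
    # the precise field ordering in the EAP SIM draft may differ
    return hmac.new(master_key,
                    b"".join(sres_values) + identity,
                    hashlib.sha1).digest()
```

The AAA server computes the same value from its own copies of the triplets; a match proves the client holds the SIM without ever sending SRES or Kc over the air.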

The Cisco implementation of EAP SIM is particularly secure because the results of the GSM authentication algorithm (SRES, Kc) never leave the SIM and therefore remain inaccessible even if network attackers manage to compromise the EAP SIM supplicant code. This is made possible by a partnership between Cisco and Gemplus, a world leader in Smartcard technology and leading supplier of SIM chips to the GSM industry. Other implementations of EAP SIM, using standard GSM SIM chips or software-based SIM emulators, are possible but are inherently less secure than the Cisco solution.

8.3. Protected EAP

Protected EAP (PEAP) is a draft EAP authentication type that is designed to allow hybrid authentication. PEAP employs server-side PKI authentication. For client-side authentication, PEAP can use any other EAP authentication type. Because PEAP establishes a secure tunnel via server-side authentication, non-mutually-authenticating EAP types can be used for client-side authentication, such as EAP generic token card (GTC) for one-time passwords (OTP) and EAP MD5 for password-based authentication.

PEAP is based on server-side EAP-TLS, and it addresses the manageability and scalability shortcomings of EAP-TLS. Organizations can avoid the issues associated with installing digital certificates on every client machine as required by EAP-TLS and select the method of client authentication that best suits them.

8.3.1. PEAP Authentication Process

PEAP authentication begins in the same way as EAP-TLS (Figure 37):

1. The client sends an EAP Start message to the access point

2. The access point replies with an EAP Request Identity message

3. The client sends its network access identifier (NAI), which is its username, to the access point in an EAP Response message

4. The access point forwards the NAI to the RADIUS server encapsulated in a RADIUS Access Request message

5. The RADIUS server will respond to the client with its digital certificate

6. The client will validate the RADIUS server’s digital certificate

From this point on, the authentication process diverges from EAP-TLS

7. The client and server negotiate and create an encrypted tunnel

8. This tunnel provides a secure data path for client authentication

9. Using the TLS Record protocol, a new EAP authentication is initiated by the RADIUS server

10. The exchange will include the transactions specific to the EAP type used for client authentication

11. The RADIUS server sends the access point a RADIUS ACCEPT message, including the client’s WEP key, indicating successful authentication


Figure 37   PEAP Authentication


9. Appendix B—Cisco Wireless Security Suite in Bridging Deployments

The authentication and TKIP WEP enhancements are primarily focused on addressing infrastructure basic service sets. Cisco recognizes the need for enhanced security in point-to-point and point-to-multipoint bridging environments and has added features to the bridge firmware to allow wireless bridge links to take advantage of Cisco LEAP authentication and TKIP WEP enhancements.

Figure 38 illustrates a typical point-to-point bridging scenario. The root bridge is configured to support 802.1X authentication and the TKIP WEP enhancements, including per-packet keying, the MIC, and broadcast key rotation.


Figure 38   Cisco LEAP and TKIP and Bridge Links


The non-root bridge is statically configured with a username and password. The non-root bridge must also be configured to support per-packet keying and the MIC function. As with a NIC-based client, the broadcast key will be sent via the wireless link to the non-root bridge, encrypted with the dynamic WEP key of the non-root bridge.

Enabling Cisco LEAP and TKIP WEP enhancements allows the wireless bridge link to use dynamic WEP keys with administrator-controlled reauthentication (and WEP rekey) intervals.

10. Appendix C—Useful Links

Cisco Wireless LAN Security Web site

http://www.cisco.com/go/aironet/security

Cisco Aironet Wireless LAN Security Overview

http://www.cisco.com/warp/public/cc/pd/witc/ao350ap/prodlit/a350w_ov.htm

SAFE: Wireless LAN Security in Depth

http://www.cisco.com/warp/public/cc/so/cuso/epso/sqfr/safwl_wp.htm

Intercepting Mobile Communications: The Insecurity of 802.11

http://www.isaac.cs.berkeley.edu/isaac/wep-draft.pdf

Your 802.11 Wireless Network Has No Clothes

http://www.cs.umd.edu/%7Ewaa/wireless.pdf

Cisco response to Your 802.11 Wireless Network Has No Clothes

http://www.cisco.com/warp/public/cc/pd/witc/ao350ap/prodlit/1327_pp.htm

An Initial Security Analysis of the IEEE 802.1x Standard

http://www.cs.umd.edu/~waa/1x.pdf

Cisco response to An Initial Security Analysis of the IEEE 802.1x Standard

http://www.cisco.com/warp/public/cc/pd/witc/ao350ap/prodlit/1680_pp.htm

Using the Fluhrer, Mantin, and Shamir Attack to Break WEP

Authentication with 802.1x and EAP Across Congested WAN Links

http://www.cisco.com/warp/public/cc/pd/witc/ao350ap/prodlit/authp_an.htm

Configuring the Cisco Wireless Security Suite

http://www.cisco.com/warp/public/cc/pd/witc/ao1200ap/prodlit/wrsec_an.htm

IEEE 802.11 Working Group Web site

http://grouper.ieee.org/groups/802/11/


Wireless Signal Triangulation

Triangulation estimates the distance from a signal source to a receiver using RSSI. A single receiver is not enough to determine the source's position; in general, three or more fixed receivers are needed to estimate position from distance.

Wireless Signal Fingerprinting

This method also uses RSSI readings from several receivers to compute a probable position, but instead of converting signal strength directly into a distance estimate, the readings are matched against a pre-built model. That model must first be created through an on-site survey ("Site Calibration"), with a separate model built for each environment: because of partitions, differing building materials, and the scattering and attenuation they cause, RSSI values differ in every indoor environment. When such a system is first deployed, someone must walk the floor carrying a device that transmits a wireless signal, building the model for that environment and recording the corresponding RSSI values.

To improve accuracy, the measured RSSI data points undergo further statistical analysis, so live readings are compared not against raw RSSI values but against computed "features": the RF fingerprint of each point. Because the match is made against an RF fingerprint, three or more receivers are not strictly required; two can already produce a position fix, although three are still recommended where high accuracy is needed.

Challenges of Triangulation

In open space with no obstructions, triangulation gives fairly accurate results. In a typical office, however, partitions, elevators, equipment, and other fixtures introduce many environmental unknowns that cause scattering, multi-path, and attenuation.

Scattering

When a signal hits office walls or other irregular surfaces, it produces many reflected signals of varying strength. These take longer to reach the receiver and arrive weaker, so the positioning system estimates a distance greater than the actual one.

Multi-path

The many fixtures in an office let a single transmitted signal reach the receiver over several different paths. Although there is only one source, the receiver may observe several different RSSI values; since triangulation depends on RSSI, the estimated distance becomes inaccurate and the true distance cannot be judged.

Attenuation

A signal may pass straight through partition walls to the receiver, but it arrives attenuated. Triangulation estimates attenuation using the natural free-space loss of an unobstructed path, so in this situation it overestimates the actual distance.

Site Calibration for Fingerprinting

To overcome the scattering, multi-path, and attenuation that trouble triangulation, fingerprinting samples the RSSI values on site, after all of those effects, and builds a location model for that environment. When the positioning system is set up, a walk-through survey establishes the RF signature of each reference point: the user loads a floor plan and then walks the site with a wireless device at a steady pace so the receivers can record statistics for each reference point. For best accuracy, one reference point roughly every 10 feet is recommended; between reference points, the system interpolates to fill in the data. The more calibration data collected, the higher the accuracy.

Accuracy Comparison

Positioning accuracy is closely tied to the number, density, and placement of receivers and to the variables of the site. Because of scattering, multi-path, and attenuation, field measurements put the average accuracy of triangulation at about 30 feet. Fingerprinting averages within 10 feet in field tests, a large improvement in wireless positioning accuracy. It also permits fewer than three receivers when deployment cost matters, directly cutting spending on receiver hardware.

Recommendations for the Enterprise

Fingerprinting removes the limitations of triangulation and improves accuracy; where deployment cost is a concern, the receiver count can be reduced at the price of lower accuracy. Its drawback is the one-time reference-point survey.

Many vendors of wireless network management and intrusion-detection systems are actively adding location features, each with different methods and technologies: some use classic triangulation, others fingerprinting. Enterprises can select and deploy these systems according to their actual environment and other considerations.

As wireless networks become widely adopted, wireless environments grow more complex, and enterprise security needs increase, every vendor is investing in device-location technology. With today's 10-foot positioning accuracy, combined with enforcement of the corporate wireless policy, rogue wireless devices can be kept outside the enterprise security gate.

Wireless

1. Common Terms

WIRELESS SPECTRUM

The wireless spectrum; WLANs commonly use the license-free ISM bands:

1. 900 MHz: the 902~928 MHz band, familiar from everyday cordless phones.
2. 2.4 GHz: the 2.4000~2.4835 GHz band carries much WLAN traffic, including 802.11b, 802.11g, and 802.11n. Channels are 22 MHz wide, 14 channels in total, of which only the 3 non-overlapping channels (1, 6, 11) are commonly used. The usual WLAN transmission technique in this band is DSSS (Direct Sequence Spread Spectrum).
3. 5 GHz: used by 802.11a and 802.11n; channels are 20 MHz wide, with 23 non-overlapping channels. Modulation in this band uses OFDM (Orthogonal Frequency Division Multiplexing).

Access Point

AP for short: a wireless network (radio) transceiver whose main role is bridging and access between the wired and wireless networks.

Ad Hoc Network

A wireless network architecture in which wireless hosts communicate with one another directly, peer to peer only, with no AP.

Bandwidth

Defined as the spread-spectrum width available for data transmission; it guarantees the maximum rate a signal can achieve through the medium, undiminished by increases or decreases in power.

Bridge

A network device that connects networks, forwarding frames at the media access control layer.

Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA)

Carrier sense multiple access with collision avoidance (a version of CSMA that avoids collisions): the wireless medium access method defined by the IEEE 802.11 standard.

Carrier Sense Multiple Access with Collision Detection (CSMA/CD)

Carrier sense multiple access with collision detection (a version of CSMA that detects collisions): the Ethernet medium access method defined by the IEEE 802.3 standard.

Complementary Code Keying (CCK)

A modulation technique used by the IEEE 802.11b specification, supporting rates of 5.5 Mbps and 11 Mbps.

dBi

Decibels relative to an isotropic antenna, commonly used to measure antenna gain; higher dBi, and thus higher gain, gives better radio coverage.

Differential Binary Phase Shift Keying (DBPSK)

A modulation technique used by the IEEE 802.11 wireless specification, supporting rates up to 1 Mbps.

Differential Quadrature Phase Shift Keying (DQPSK)

A modulation technique used by the IEEE 802.11 radio specification, supporting rates up to 2 Mbps.

Direct Sequence Spread Spectrum (DSSS)

A form of spread-spectrum radio transmission that spreads the signal continuously and evenly across the full width of a frequency band.

Directional Antenna

Transmit power is aimed in one direction, increasing range; the coverage angle narrows with distance. Directional antennas include yagi, patch, dish, and grid types.

Diversity Antennas

A smart antenna system: two antennas continuously sense the transmitted radio signal, and the system automatically receives on whichever antenna carries the stronger signal.

Dynamic Host Configuration Protocol (DHCP)

A protocol, used by many different operating systems, that automatically assigns IP addresses from a configured range to devices on the network. A device keeps its assigned address for an explicit administrative lease period.

Frequency Hopping Spread Spectrum (FHSS)

A form of spread-spectrum radio transmission in which transmitter and receiver hop between channels synchronously in a predetermined pattern.

Fresnel Effect

An effect in radio propagation along an otherwise unobstructed line-of-sight path, where objects near the path interfere with the radio transmission.

Gain

A way of increasing transmission distance by concentrating the radio signal in a single direction, mostly achieved with directional antennas. Gain does not increase signal strength, but it changes the direction in which the signal travels; increasing gain narrows the coverage angle.

Hidden Node

A station on a wireless network that tries to transmit to other stations but, because of where it sits within the shared radio area, cannot be sensed by them; the resulting failed transmissions and receptions degrade network efficiency.

Line of Sight

An unobstructed, visually straight path between transmitter and receiver. Line of sight is a prerequisite for long-distance directional wireless links; because of the Earth's curvature, line-of-sight wireless equipment not mounted on towers has a maximum range of about 16 miles (26 km).

Modulation

Data rides on a carrier signal. Modulation is a technique for altering that signal; changing the signal reduces transmission errors and lets the signal travel farther. Modulation works on three properties:
Amplitude: the variation in the wave's height.
Phase: the stage within the interval between two crests, used to define distinct waveforms (two-phase, four-phase, and so on).
Frequency: the number of complete waves per second.
Three modulation technologies are common in wireless networks: DSSS, OFDM, and MIMO.
1. DSSS: Direct Sequence Spread Spectrum combines two techniques, encoding and modulation. Encoding turns each bit of information into a chip sequence (for example, representing a 1 by 11101) so that even if interference corrupts part of a bit's transmission, its value can still be recognized; codes such as the Barker code (more chips, lower data rate) or the CCK code (fewer chips, higher data rate) are used. Modulation then spreads the spectrum using phase shifts, as in BPSK and QPSK: BPSK spreads with two phases 180 degrees apart, while QPSK uses 90-degree shifts to produce four signal waveforms.

2. OFDM: Orthogonal Frequency Division Multiplexing achieves high throughput with subchannels; a single 20 MHz channel can carry data on 52 subcarriers of roughly 300 kHz each.

3. MIMO: Multiple-Input Multiple-Output is the modulation technique of 802.11n. It uses multiple antennas (typically 2~3) to transmit and receive; the benefit is more than 100 Mbps carried over a single channel at once, and even non-MIMO devices gain roughly 30% in transmission efficiency.

Multipath

The echo phenomenon produced when radio waves reflect off different objects.

Range

The linearly measurable distance a signal travels after the sender transmits it.

Receiver Sensitivity

The sensitivity threshold at which a weak signal can be received and correctly converted into valid data.

Roaming

The ability for a user to move to different locations through wireless access devices while remaining connected to the LAN.

Wired Equivalent Privacy (WEP)

An encryption security technique defined in the wireless 802.11 standard to keep data safe, offered in 40-bit and 128-bit variants.

2. WLAN Transmission Technologies:

Microwave
Mainly used to link LANs between buildings; this requires dish antennas, which must have line of sight.

Spread Spectrum:
The most widely used WLAN transmission technology today, originally developed by the military to avoid signal congestion and eavesdropping. There are two spread-spectrum techniques:

Frequency Hopping Spread Spectrum

The signal is broadcast over a series of frequencies. The transmitter first listens to a channel; if the channel is idle, the signal is sent on it, and if the channel is already in use, the transmitter hops to another channel. The receiver must therefore know the transmitter's hopping sequence, and both ends must switch channels in synchrony to exchange data.

Direct Sequence Spread Spectrum

Redundant data bits, called "chips", are added to each signal; each data bit carries at least 10 chips, so the receiver can recover the data correctly from this coding.

Infrared Ray
Uses infrared light to carry data, in two forms:

Diffused infrared

A non-line-of-sight transmission mode: within a bounded area, data can be delivered by reflection off object surfaces.

Directed infrared

Transmits data in a straight line; if any obstacle lies in the way, the receiver gets nothing, so the environment must provide line of sight.

3. WLAN Transmission Technology Comparison

                   Spread Spectrum                Microwave                   Infrared
Frequency          902MHz~928MHz;                 18.825GHz~19.205GHz         3x10^14 Hz
                   2.4GHz~2.4385GHz;
                   5.725GHz~5.825GHz
Range              105~800 ft, or 50,000 sq ft    40~130 ft, or 5,000 sq ft   30~80 ft
Line of sight      No                             No                          Yes
Transmit power     Under 1 W                      25 mW                       N/A
Indoor use         Antenna required               No                          Yes
Data rate          2~5 Mbps                       3.3 Mbps                    1.1524 Mbps

Types of Wireless LANs

Ad-Hoc LAN

Several portable nodes (for example, notebook computers) in one small area communicate peer-to-peer, without the aid of a wired or wireless backbone, as illustrated.

Infrastructured wireless LAN

Includes a wired or wireless backbone. Network traffic falls into two kinds: uplink (station to backbone) and downlink (backbone to station). The attachment point to the backbone is called an Access Point; the Access Point can be a base station on the wired backbone or a wireless bridge, as illustrated.

(4) How IEEE 802.11 Networks Work

The WLAN standard IEEE 802.11 was formally ratified in July 1997. It has the following characteristics:

1. Physical Layer

Uses the 2.4 GHz band with two spread-spectrum techniques, Frequency Hopping and Direct Sequence.

FHSS

Frequency-Hopping Spread Spectrum: with both ends synchronized, the two parties transmit using a narrowband carrier that hops in a specific pattern. To any non-participating receiver, the hopping FHSS signal is merely impulse noise. The spread signal can be specially designed to avoid noise, or to provide non-repeating one-to-many channels. The hopping signals must comply with FCC requirements: 75 or more hopping frequencies must be used, and the maximum time on one frequency before hopping to the next (the dwell time) is 400 ms.

DSSS

Direct Sequence Spread Spectrum represents each original "1" or "0" with 10 or more chips, turning the original high-power, narrowband signal into a wider, lower-power one. The number of chips per bit is called the spreading ratio: a higher spreading ratio improves resistance to noise and interference, while a lower one allows more users.

In practice, DSSS spreading ratios are quite small: nearly all 2.4 GHz WLAN products use spreading ratios below 20, and the IEEE 802.11 standard uses only 11.
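The 11-chip spreading just described can be sketched in a few lines of Python. This is an illustration of the spread/despread idea only, using one common binary form of the 802.11 Barker sequence; real radios correlate analog chips rather than lists of integers.

```python
# One common binary representation of the 11-chip Barker code used by 802.11 DSSS.
BARKER = [1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0]

def spread(bits):
    """Send each 1 as the Barker sequence and each 0 as its complement."""
    out = []
    for b in bits:
        out += BARKER if b else [c ^ 1 for c in BARKER]
    return out

def despread(chips):
    """Recover each bit by correlating an 11-chip block against the code."""
    bits = []
    for i in range(0, len(chips), 11):
        matches = sum(c == p for c, p in zip(chips[i:i + 11], BARKER))
        bits.append(1 if matches > 5 else 0)  # majority vote
    return bits

tx = spread([1, 0, 1])
tx[3] ^= 1                 # flip one chip to simulate interference
print(despread(tx))        # the corrupted chip does not change the decoded bits
```

Flipping a single chip still decodes correctly, which is exactly the noise immunity that the spreading ratio buys.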

2. MAC Layer

CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance) is similar to Ethernet's standard CSMA/CD. The difference is that Ethernet relies on collision detection, and detecting collisions during radio transmission is very difficult, so wireless uses collision avoidance instead.

The technique first senses the electromagnetic energy in the working band; if it exceeds a threshold, the channel is judged occupied. It is therefore a listen-before-talk design: before transmitting, the sender must sense whether the channel is in use, and if not, transmission proceeds. If a signal is already on the channel, the sender waits a short random period and retries; this waiting period is called the "backoff". Backoff does not guarantee that collisions never happen, so to keep frames from being lost to collisions and interference, the receiver returns an acknowledgment immediately after successfully receiving a frame; if the sender gets no acknowledgment, it waits another backoff period and retransmits the frame.

This mechanism works together with RTS/CTS control frames: before sending data, the sender transmits an RTS control frame, and the destination replies with a CTS control frame. Any neighboring station that hears either of these frames must wait a further period and send nothing, so collisions can at most occur on the RTS/CTS control frames themselves, and the collision probability naturally drops a great deal.
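The listen-then-back-off behaviour above can be sketched as a toy simulation. All the numbers here (contention window, retry limit, slot times) are invented for illustration and are not the 802.11 values.

```python
import random

random.seed(42)  # deterministic demo

def transmit(medium_busy, max_retries=5, cw=7):
    """Listen before talking; back off a random number of slots when busy.

    medium_busy: function(slot) -> True if the channel is occupied then.
    Returns the slot at which the frame finally went out, or None on give-up.
    """
    slot = 0
    for _ in range(max_retries):
        if not medium_busy(slot):            # carrier sense: channel idle?
            return slot                      # transmit (receiver would ACK)
        slot += 1 + random.randint(0, cw)    # busy: random backoff, then retry
    return None

# Channel occupied for the first 3 slots, then free.
print(transmit(lambda s: s < 3))
```

The random backoff is what spreads competing senders apart in time; the real protocol additionally grows the contention window after each failed attempt.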

3. Transmission Speed

Spread-spectrum 802.11 supports two rates, 1 and 2 Mbps, with faster rates under continuous development

(802.11b, 802.11a, 802.11g, 802.11n, then 802.11ac)

4. Maximum Power Limit

Because WLANs use public frequencies that require no FCC license, the maximum power is limited to 1 W to prevent mutual interference.

802.11 wireless LAN card: 30 mW

GSM mobile phone: 600 mW

PHS mobile phone: 10 mW
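The milliwatt figures above map onto the dBm scale used later in this document via 10 * log10(P / 1 mW); a quick sketch with the listed example values:

```python
import math

def mw_to_dbm(p_mw):
    """Convert milliwatts to dBm: 10 * log10(P / 1 mW)."""
    return 10 * math.log10(p_mw)

# The device powers listed above:
for name, mw in [("802.11 WLAN card", 30), ("GSM phone", 600), ("PHS phone", 10)]:
    print(f"{name}: {mw_to_dbm(mw):.1f} dBm")
```

Note the logarithmic scale: the 1 W legal limit is 30 dBm, while a 30 mW WLAN card is only about 14.8 dBm.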

5. Power Management

WLAN devices are generally light and battery-powered, so power management is a necessity: a device enters a sleep state when idle and can be woken when needed, without affecting its network connection.

6. CTS/RTS

This mechanism solves the hidden terminal problem. In the figure, Station A and Station B both send packets to the Access Point at the same time; when Station A wants to send, it first sends an RTS to the Access Point, and on receiving it the Access Point immediately sends a CTS to everyone else wanting to transmit to it, so that they stop sending packets to the Access Point.

(3) Authentication, Accounting, Authorization

Given how wireless networks are used, controlling access to the network becomes an important issue. 802.1X, driven by Microsoft and Cisco, was designed to solve it: a RADIUS agent is built into the access point so users can authenticate against a RADIUS server, which determines their privileges and records their traffic usage.

However, 802.1X client software is not yet widespread; at present only Windows XP provides it, so 802.1X adoption remains limited.

A VPN is another solution, and a VPN combined with IPsec also solves the security problem at the same time; but because a VPN must encrypt and decrypt data, it consumes more system resources and is unsuitable for some mobile devices.

Some brands of switch now provide a RADIUS agent along with a quite friendly user interface, a fine choice until 802.1X becomes widespread.

(4) Deploying a Wireless Network

When is a WLAN needed?

A WLAN is never a replacement for the wired LAN; it compensates for the wired LAN's shortcomings and extends the network. The following situations may call for one:

Users in public areas

Environments where wired LAN installation is constrained

As a backup system for the wired LAN

3. WLAN RF Principles

(a) Wavelength: the distance between two crests.
(b) Frequency: the number of complete waves per second, measured in Hz; one wave per second is 1 Hz, a million per second is 1 MHz.
(c) Amplitude: the variation in the wave's height.
(d) EIRP (Effective Isotropic Radiated Power): simply put, the final output power. The FCC sets its limits on an antenna's final output in terms of EIRP. A simple formula worth remembering (it comes up on the exam):
EIRP = Transmitter output power - cable loss + antenna gain
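The EIRP formula above can be written directly as a function; the numbers in the example are illustrative values, not regulatory limits.

```python
def eirp_dbm(tx_power_dbm, cable_loss_db, antenna_gain_dbi):
    """EIRP = transmitter output power - cable loss + antenna gain."""
    return tx_power_dbm - cable_loss_db + antenna_gain_dbi

# Example: a 100 mW transmitter (20 dBm), 3 dB of cable loss, 6 dBi antenna.
print(eirp_dbm(20, 3, 6))  # -> 23 (dBm of effective radiated power)
```

Because everything is in decibels, the arithmetic is simple addition and subtraction, which is why the formula is easy to apply on an exam.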
(e) Free Path Loss Model: like dropping a stone into still water, the ripples spread outward and gradually fade; radio waves propagate the same way.
(f) Absorption: one of the factors affecting a wireless signal; it reduces the amplitude, and the absorbed energy becomes heat, just as a microwave oven heats food.
(g) Reflection: when a signal reaches a reflective surface, such as a mirror, it reflects; the receiver then gets the same signal at different times, causing interference.
(h) Scattering: rain, snow, and similar factors in the air scatter the signal so that it no longer travels along its original direction.
(i) Refraction: the signal bends as it passes through a medium such as water.
(j) Line of Sight: two antennas aimed straight at each other. Even then, the effects introduced earlier (absorption, reflection, scattering, etc.) still apply, and one more factor deserves attention: the Earth itself is round, so there are angle effects at the surface that also influence the signal.
(k) Received Signal Strength Indicator: RSSI for short, used to gauge signal strength. Each vendor usually has its own scale, so values cannot be compared across vendors; the indicator is usually expressed in dBm.
(l) Signal-to-Noise Ratio: SNR for short, commonly used to gauge the clarity of a signal.
(m) Link Budget: used to calculate how much power must be transmitted to reach the receiving end:
Received Power (dBm) = Transmitted Power (dBm) + Gains (dB) - Losses (dB)
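The link-budget formula can likewise be expressed as a tiny function; the figures below are example values only.

```python
def received_power_dbm(tx_dbm, gains_db, losses_db):
    """Received Power (dBm) = Transmitted Power (dBm) + Gains (dB) - Losses (dB)."""
    return tx_dbm + gains_db - losses_db

# Example: 20 dBm transmitter, 6 dB of total antenna gain,
# 80 dB of path and cable losses.
print(received_power_dbm(20, 6, 80))  # -> -54 (dBm at the receiver)
```

A link works when this result stays above the receiver sensitivity defined earlier in the glossary, with some margin to spare.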

4. WLAN Technologies and Topologies
(1) General Wireless Topologies:
WPAN, wireless personal area network: range under 10 m; 802.15; Bluetooth is the representative example.
WLAN, wireless local area network: range under 100 m; the 802.11 family.
WMAN, wireless metropolitan area network: 802.16; WiMAX is the representative example.
WWAN, wireless wide area network: usually paid services, such as GSM and CDMA.

(2) Original 802.11 Topologies:
There are two main architectures, "Ad Hoc Mode" and "Infrastructure Mode":

Ad Hoc: two computers connected directly; one sets the SSID and the other connects to it (they must be on the same subnet). This is called a Basic Service Set (BSS), and because it is entirely independent it can also be called an Independent Basic Service Set (IBSS).

Infrastructure: simply put, an architecture with an AP. The AP plays two roles at once: like a hub, operating half-duplex over the air, and like a bridge, converting signals between wireless and wired. Some Infrastructure terms to remember:

STA (Station): a client in the wireless network

Infrastructure Device: an AP in the wireless network

BSA (Basic Service Area): the area covered by a single AP

ESA (Extended Service Area): the area of the same LAN covered by two or more APs

Distribution System: the system that carries wireless clients' traffic through the controller into the wired network

SSID (Service Set Identifier): the combination of MAC and network name that lets wireless clients identify a network and connect to it

(3) Vendor-Specific Topology Extensions:
Vendor-defined topologies, so they do not apply to every vendor:
Workgroup Bridges: an AP bridging two wired segments. Cisco defines two kinds, "aWGB" and "uWGB": with all-Cisco equipment it is an Autonomous Workgroup Bridge, and with other equipment a Universal Workgroup Bridge.

Repeaters: an AP with no wired connection relaying (amplifying) the signal between two wireless networks. The ideal overlap is 50%, and a repeater's throughput is half the original figure, because it must transmit and receive at the same time.

Outdoor Wireless Bridges: longer-distance outdoor bridges.

Outdoor Mesh Networks: a bridge between two networks running through multiple APs, so the link does not fail because one AP has a problem.

5. Antenna Communications
(1) Principles of Antennas
An antenna's radiated signal oscillates; the direction of this electromagnetic variation is called its polarization. Three polarizations to know:
1. "Vertical": polarization varying up and down
2. "Horizontal": polarization varying side to side
3. "Circular": polarization varying in a circular direction
For Cisco, all equipment uses vertically polarized antennas.

(2) Common Antenna Types
Antennas divide simply into two classes, omnidirectional and directional:
1. "Omnidirectional Antennas": suited to full indoor coverage; the antenna's horizontal pattern (H-plane) and vertical pattern (E-plane) help in choosing a suitable model.

2. "Directional Antennas": suited to mounting on walls or poles, to wireless bridges between two sites, or to strengthening the signal in a particular spot. Note also that antenna gain is measured in dBi.

(3) Antenna Connectors and Hardware
Other antenna accessories include:
Attenuators: useful when an overly strong signal interferes with other segments
Amplifiers: installed between the AP and the antenna to strengthen the signal
Lightning Arrestors: protect equipment from being destroyed by lightning surge currents
Splitters: when a single antenna run would leave a dead spot, a splitter can feed antennas at two locations, though the signal on both runs drops

Overview of the 802.11 WLAN Protocols
The wireless protocols define transmitted data sizes, modulation techniques, and so on. The commonly used ones are 802.11a/b/g/n:
(1) 802.11a Protocol: OFDM / 5.0 GHz / BPSK, QPSK, 16-QAM, 64-QAM / 6, 9, 12, 18, 24, 36, 48, 54 Mbps
(2) 802.11b Protocol: DSSS / 2.4 GHz / DBPSK, DQPSK / 1, 2, 5.5, 11 Mbps
(3) 802.11g Protocol: DSSS, OFDM / 2.4 GHz / BPSK, DQPSK / 1, 2, 5.5, 11 Mbps with DSSS; 6, 9, 12, 18, 24, 36, 48, 54 Mbps with OFDM
(4) 802.11n Protocol: MIMO (with OFDM)
802.11n operates with MIMO (Multiple-Input Multiple-Output), a multi-antenna architecture; transmitting and receiving on several antennas at once is achieved through transmit beamforming (TxBF). There are three MIMO schemes:
1. "Precoding"

2. "Spatial Multiplexing"

3. "Diversity Coding"

802.11n has two channel widths, 20 MHz and 40 MHz; 40 MHz is reached by bonding two 20 MHz channels together.
For frame delivery there are two speed-ups: one is block acknowledgement, sending several frames before the acknowledgment for a frame arrives; the other is RIFS (Reduced Interframe Space), which shortens the waiting time between frames.

Wireless Traffic Flow and AP Discovery
(1) Wireless Frame Transmission
There are three kinds of frames:
Management: used to join and leave a wireless network
Control: used to report that frames have been received
Data: data frames carry the actual data
Frame transmission:
Because wireless networks run CSMA/CA (Carrier Sense Multiple Access / Collision Avoidance), a waiting period is required before every frame is sent; even with no traffic on the network, a station cannot simply throw data out. This gap is called the "IFS", the Interframe Space, of which there are three kinds:
1. Short interframe space (SIFS): higher priority, used to deliver frames quickly, for example ACK responses
2. Point-coordination interframe space (PIFS): used when the AP coordinates the network
3. Distributed-coordination interframe space (DIFS): the ordinary frame spacing, commonly used for data frames.

(2) Wireless Frame Headers
Wireless headers are basically all alike; their main job is to state what kind of frame this is (management, control, data) and to carry MAC address information. A wireless frame carries three MAC addresses:
Destination Address (DA): the address of the delivery target; Transmitter Address (TA): the AP transmitting the radio signal; Source Address (SA): the originating computer. Sometimes there is a fourth MAC address, the Receiving Address (RA): the AP responsible for receiving the signal.

(3) Frame Types
The frame body divides into "Management", "Control", and "Data" as briefly described above; a few additions:
1. Management frames manage the life of a connection: a Beacon lets clients within range know this AP is available; Probe Request (Response) searches for a connection; Authentication Request (Response) validates it; Association Request (Response) and Reassociation Request (Response) negotiate and establish it; Deauthentication leaves it.

2. Control frames differ between two operating environments:
- DCF (Distributed Coordination Function): individual stations control the connection; common frames are ACK to acknowledge receipt, RTS (Request to Send) to request transmission, and CTS (Clear to Send) to reply that the RTS sender may transmit

- PCF (Point Coordination Function): the AP controls the connection, with frames such as CF-Poll (Contention Free) announcing that transmission may begin, CF-ACK acknowledging the CF-Poll, and PS-Poll (Power Save), by which a computer in power-save mode tells the AP it has resumed

(4) A Wireless Connection
A computer joins a wireless AP as follows:
Code:
Client (((((((((( AP
                          <-- Beacons
Probe Request -->
                          <-- Probe Response
Authentication Request -->
                          <-- Authentication Response
Association Request -->
                          <-- Association Response
Request to Send -->
                          <-- Clear to Send
Data -->
                          <-- ACK
The client then decides its transmission rate, consulting the RSSI and SNR values.

Additional Wireless Technologies
(1) Cordless Phone: operates between 1.8 and 1.9 GHz

(2) Bluetooth: operates at 2.4 GHz using frequency hopping (FHSS); the current top transmission rate is 4 Mbps. A well-liked feature is the piconet, a Bluetooth micro-network that connects up to 8 devices at once (one master, seven slaves). Defined in 802.15.1.
(3) ZigBee: operates at 868/915 MHz and 2.4 GHz; defined in 802.15.4; its main features are low-power transmission, low data volume, and secure reliability

(4) WiMAX: in full, Worldwide Interoperability for Microwave Access; operates at 10~66 GHz and is defined in 802.16, with a theoretical rate up to 40 Mbps. Without line of sight (non-LOS) it reaches 3~4 miles at about 30 Mbps; with line of sight (LOS) it reaches 40 Mbps.

Delivering Packets from the Wireless to the Wired Network

1. The Wireless Network Road Trip
This walkthrough assumes the thin-AP architecture. Client-to-AP communication was covered above; after that exchange, how does traffic get from wireless to wired? As follows:
Code:
Client )))))) AP ------------- WLC ------------- Switch ----------- Gateway
ARP Request (TA, SA, DA) -->
AP runs the Frame Check Sequence and waits one SIFS
                          <-- ACK for the frame transmission
AP forwards to the WLC over LWAPP -->
WLC re-encapsulates the frame in 802.3 format -->
Switch floods it out every port except the source port -->
Gateway replies to the ARP Request with a unicast
                          <-- WLC re-encapsulates in 802.11 format and sends it to the AP over LWAPP
                          <-- ARP Response (TA, SA, DA; the gateway's MAC is now known)
(2) Using VLANs to Add Control

Why would a wireless network need VLANs? The reason is simple: when an AP runs two SSIDs, it already behaves like a wireless VLAN environment, called a logical VLAN. But when those two wireless segments connect through the AP's wired link to the WLC, how do you keep the two segments separate? By configuring VLANs on the wired switch.

(3) Configure VLANs and Trunks
A quick review of the VLAN configuration commands.
Create two VLANs:
conf t
vlan 10,20
exit
end
show vlan brief
Assign a port to a VLAN:
conf t
interface f0/5
switchport mode access
switchport access vlan 10
end
show interface status

Configure trunk ports:
interface range f0/1-3
switchport trunk encapsulation dot1q
switchport mode trunk
switchport nonegotiate
switchport trunk native vlan 1
end
show interface trunk
Cisco Wireless Networks

1. Cisco Wireless Networks Architecture
(1) The Need for Centralized Control
A single AP (called an autonomous or FAT AP) can usually solve a wireless environment's needs on its own, and that is a fine architecture; but in a large environment, once expansion and management are considered, centralized management becomes necessary. The AP is made lighter (a lightweight or thin AP) and connected to a central control platform behind it for configuration and management, which satisfies large environments and scalability needs.
(2) The Cisco Solution
Cisco's wireless solution is called CUWN (Cisco Unified Wireless Network), which as the name suggests is a centrally and uniformly managed network architecture. Its members are:
Wireless Client: a computer that needs wireless access; it can also perform network management
APs in the CUWN: in a Cisco environment, APs are best built as thin APs paired with a WLC (Wireless LAN Controller), communicating via LWAPP (Lightweight AP Protocol). LWAPP has two modes, Layer 2 and Layer 3; operating at Layer 2 requires the AP and WLC to be on the same segment. Cisco recommends Layer 3, which lets thin APs and the WLC operate across segments in a larger architecture.
A thin AP is typically responsible for: frame exchange, client-side exchange, transmitting beacons, buffering frames for clients in power-save mode, responding to probe requests, monitoring channel noise, and providing real-time quality information.
WLC in the CUWN: the WLC receives not only client data but also signal-strength readings (RSSI and SNR) as input for coverage planning. One WLC can attach up to 300 APs; in environments beyond 300 APs, multiple WLCs can be deployed and grouped with the WCS (Wireless Control System) to manage a larger environment. The WLC typically handles: association, roaming association, authentication, frame translation, and frame bridging.
Part of what the AP sends the WLC is control traffic used for RRM (Radio Resource Management), supporting traffic observation, dynamic channel assignment, interference detection and avoidance, and TPC (Transmit Power Control).
(3) Supporting Multiple Networks
In a multi-network wireless environment, one AP can support up to 512 SSIDs (VLANs), and one WLC likewise up to 512 VLANs; but chained together, the WLC allows a single thin AP only 16 VLANs. The benefit of VLANs is that one AP can be partitioned into different domains (for example, separating guest and employee networks), and different QoS policies can be set per VLAN.
(4) The CUWN Architecture
Cisco's unified wireless network architecture has five major functions:
1. Wireless Clients: with a Cisco NIC you can manage the client with the ADU (Aironet Desktop Utility); you can also install the SSC (Secure Services Client) to configure profiles for Cisco devices
2. Access Points: in the CUWN architecture, the idea is "zero-touch management" of APs; you just cable the AP, and everything else is managed through the WLC. AP models:
1130AG - FAT, Thin, HREAP / indoor / abg / 54M
1240AG - FAT, Thin, HREAP / rugged indoor / abg / 54M
1250AP - FAT, Thin / rugged indoor / abgn / 300M
1300AP - FAT, Thin, bridge / outdoor / bg / 54M
1400 - bridge only / outdoor / abg / none
3. Network Unification: this is the WLC. Choosing a WLC is usually simple; it depends on how many APs you need to manage. Models:
4400 - 100 APs / standalone appliance
3750G - 50 APs / 2U appliance
WiSM - 3600 APs (300 per blade) / module that slots into the 6500 series or an ISR
2106 WLC - 6 APs / desktop appliance
WLCM - 6 APs / module
4. Network Management: the WCS (Wireless Control System) is software installable on Windows or Linux platforms; one WCS can manage 3000 thin APs or 1250 autonomous APs, and through WCS Navigator the APs under different controllers can be seen, up to 30,000
5. Network Services: other wireless security modules, such as ASA and IDS security services

2. Controller Discovery and Association
(1) Understanding the Different LWAPP Modes
LWAPP operates in two modes, Layer 2 and Layer 3; the difference is only whether the AP and WLC talk by MAC address or by IP, and Layer 3 packets travel over UDP. Wireless delivery works as follows:
Code:
Client )))))) AP ------------- WLC ------------- Switch ----------- Gateway
Client sends an 802.11 packet -->
AP encapsulates the 802.11 frame in LWAPP inside an Ethernet frame -->
WLC removes the 802.3 and LWAPP and processes the 802.11
WLC re-encapsulates in 802.3 and adds 802.1Q -->
The packet travels through the switch to its destination -->
(2) LWAPP AP Discovers a Controller
Layer 2 mode:
After boot the AP enters discovery mode. It first sends a Layer 2 discovery broadcast; if a WLC in the environment receives it, the WLC responds. If not, the AP checks whether its config holds an IP address; if it has none, it requests one via DHCP and then tries to reach a WLC using that IP. If that also fails, the AP falls back to Layer 2 and repeats the cycle until it finds a WLC.
Layer 3 mode:
The AP has three ways to find a controller:
1. Subnet broadcast: the AP sends a broadcast for controllers on its own segment, and also listens over the air for controller information carried in other APs' transmissions, called OTAP (Over-the-Air Provisioning)
2. DHCP Option 43: a vendor-specific option through which the AP learns the management-interface controller information directly
3. DHCP DNS entry: the AP first obtains an IP and DNS entry via DHCP, then asks DNS for the controller's IP address and sends a discovery query to each answer.
What happens after controllers are found is called "AP priming": the AP lists all the controllers it found and stores the list in NVRAM, so after a reboot it only needs to replay the list with a broadcast to find one.
(3) LWAPP AP Chooses a Controller and Joins It
If it holds a primed controller list, the AP sends Join Requests in order to the primary, secondary, and tertiary controllers. Without a primed list, it sends the Join Request to the Master Controller (a mobility domain has only one Master Controller). The controller's response to the request completes the join.
(4) LWAPP AP Receives Its Configuration
Right after joining a controller, the AP compares image versions. If they differ, it immediately upgrades or downgrades and reboots, then repeats the earlier flow: discovery, join, image comparison. If the image versions match, it moves on to the next step, "config data": the AP sends a Configure Request (usually empty) to the controller, the controller replies with the relevant configuration, and the AP stores it in NVRAM; after a reboot the AP loads and runs the config on its own.
(5) Redundancy for APs and Controllers
AP redundancy: must occur within the same wireless domain, and is usually handled by the APs themselves; for example, if one AP dies, nearby APs raise their own power to increase coverage, or change their channel, to provide backup.
Controller redundancy: configure primary, secondary, and tertiary controllers on the AP. Other usable redundancy mechanisms include LAG (Link Aggregation) and multiple AP managers.
(6) The AP Is Joined, Now What?
Most people just want to get on the network once associated, but different AP modes offer different functions:
Local mode: most APs run in this mode; every 180 seconds the AP scans all channels (60 ms per scan). It can also serve for site survey, and even as an IDS, inspecting management packets
Monitor mode: the AP is passive, sends nothing on its own, and accepts no client connections; usable for rogue-AP hunting, IDS, troubleshooting, and site survey
Sniffer mode: differs from monitor mode in that it captures all packets, like Wireshark; the captured packets can be handed to other devices or used for troubleshooting and digital forensics
Rogue detection mode: a special mode for AP-to-WLC cooperation; the AP turns off its RF, listens only to ARP information from the wire, and checks it against the controller's list of legitimate MACs for illegitimate ARP requests; if one appears, the WLC raises an alarm
H-REAP mode: H-REAP (Hybrid Remote Edge AP) applies when the AP-to-controller connection must cross a WAN; watch the WAN link speed in this mode, since a slow link may prevent a successful join
Bridge mode: an AP running as a bridge can accept direct client connections and picks suitable paths with AWPP (Adaptive Wireless Path Protocol) (Cisco calls the indoor version iMesh and the outdoor version mesh)

3. Adding Mobility with Roaming
(1) Understanding Roaming
Roaming simply means staying connected while moving across different APs. It sounds simple, but a lot of synchronization technology sits behind it. Two terms to introduce:
Mobility Group: controllers in the same group share information about clients that may roam
Mobility Domain: composed of different mobility groups, sharing client information across groups. A controller can belong to only one mobility group and one mobility domain.
(2) Types of Roaming
For controllers to support roaming there are several prerequisites:
- same mobility domain
- same code version
- same LWAPP mode
- same ACL
- same SSID (WLAN)
Only then is roaming supported, and it operates in two ways, L2 and L3:
Layer 2 roaming: the client moves to another AP but remains in the same VLAN and the same subnet. If L2 roaming happens between APs under the same controller it is called "intracontroller roaming", with a gap of about 10 ms; between APs under different controllers it is "intercontroller roaming", about 20 ms.
Layer 3 roaming: the difference with L3 roaming is that the connection must be maintained across different subnets on different controllers. The solution is to tunnel between the controllers so the client keeps its original IP. Tunnels come in two kinds, asymmetric and symmetric; Cisco defaults to asymmetric:
- Asymmetric tunneling: named for the fact that the source IP differs; the anchor controller tunnels this differing source IP to the foreign controller for the connection
- Symmetric tunneling: makes the source IPs match by having the foreign controller build the tunnel back to the anchor; this prevents other devices on the network from treating the mismatched source IP as an attack

4. Simple Network Configuration and Monitoring with the Cisco Controller
(1) Controller Terminology
A few terms used in the WLC:
- Interface: logical; it includes the VLAN and other information
- Port: conceptually a physical connector, bound together with VLANs and SSIDs
- Dynamic interface: an administrator-defined interface using 802.1Q headers, similar in concept to a subinterface
- Management interface: the interface that can talk to every physical port, and the only one that can be pinged continuously
- AP-manager interface: operates at Layer 3 and is used to communicate with APs
- Virtual interface: used to manage Layer 3 security and mobility-manager communications; it also carries the DNS gateway hostname and the wireless authentication website (usually IP 1.1.1.1)
- Service port: the out-of-band management and maintenance interface, and the only port already active in boot mode
(2) Connecting to the Controller
Connecting works like most devices: attach over the serial port for the CLI, or configure over an HTTPS connection to the initial IP 192.168.1.1. On a router we save the configuration with:
copy run start
On the WLC the command becomes:
save config
(3) Configuring the Controller Using the Web Interface
The basic setup order in the web interface is:
1. Create the controller interface: first define the interface for the VLAN
2. Create the WLAN and bind it to the interface: set the WLAN profile name and SSID and tie them to the interface defined in the previous step
3. Adjust the security settings: choosing Layer 2 is enough, because Layer 3 has no policy by default

5. Migrating Standalone APs to LWAPP
(1) Connecting to a Standalone AP
There are several ways to connect to an AP: console, telnet, HTTP, or SSH. If the AP has no IP, just attach it to a network with DHCP and it will obtain one.
(2) Using the Express Setup and Express Security for Basic Configuration
These two quick-setup wizards on the AP allow rapid configuration.
(3) Converting to LWAPP
APs default to autonomous mode. There are three ways to convert one to a thin AP: download the Windows upgrade application (the IOS-to-LWAPP conversion utility); change it from the WCS, which is the recommended way; or archive the IOS image to the AP.
To turn a thin AP back into a FAT AP, restore the AP to factory defaults from the CLI with:
Config ap tftp-downgrade tftp-server-ip-address filename apname
Then hold the Mode button until the LED turns red; after the AP reboots it looks on the TFTP server for a file named cplatform_name-k9w7-tar.default and restores from it.

6. Cisco Mobility Express
(1) Overview of the Small Business Communication System
Cisco's Mobility Express solution comprises the 526 controller and the 521 AP. The 500 series supports up to 48 users, with central management through the CCA (Cisco Configuration Assistant), which supports 2 controllers, each supporting 6 APs. Note that the Cisco 521 AP cannot talk to a CUWN controller, and the 526 controller cannot talk to CUWN APs.
(2) Configuring the 521 AP and 526 Controller
Connect to the controller over the console, set its IP, then continue over HTTPS.
You can also install the Windows-based CCA software, which is purpose-built for configuring the Mobility Express solution; after installation a desktop icon opens the application for connecting to and managing the controller, including a graphical topology view for easier management.

7. Wireless Clients
(1) Using Windows to Connect to a Wireless LAN
On Windows, the built-in "Windows Wireless Zero Configuration (WZC)" tool provides basic connectivity.
(2) Using a Mac to Connect to a Wireless LAN
On a Mac, AirPort or AirPort Extreme handles wireless connections.
(3) Using Linux to Connect to a Wireless LAN
On Linux, either the CLI or a GUI can be used: the CLI tool is iwconfig, and under a GUI, Network Manager.
(4) Using the ADU to Connect to a Wireless LAN
On Windows, besides WZC, a Cisco NIC can also be managed with the ADU (Aironet Desktop Utility), which unlocks the NIC's full feature set. During installation you can also choose the CSSU (Cisco Site Survey Utility), useful for understanding signal strength in the environment.
(5) The ACAU
The ACAU (Aironet Configuration Administration Utility) helps with client profile configuration and automated deployment of the ADU.
(6) The Cisco Secure Services Client
The SSC (Secure Services Client) provides 802.1X authentication for wired and wireless networks. SSC comprises three parts: the SSC software, the SSC Administration Utility, and the SSC Log Package. MFP v2 requires SSC with CCX5 support to work.
(7) The Cisco Client Extension Program
CCX (Cisco Client Extension) is a free extension program that carries Cisco's latest technology and product information; versions CCX5 and later can also act as Client MFP (Management Frame Protection), protecting AP transmissions.

Part III WLAN Maintenance and Management

1. Securing the Wireless Network
(1) Threats to Wireless Networks
Wireless threats fall roughly into four classes:
1. Ad hoc networks: the risk of ad hoc use is that data transfers may bypass company policy, making information leakage possible
2. Rogue APs:
Rogue APs are not all malicious; any AP outside the organization's architecture counts as rogue, for example an employee's own FAT AP placed inside the company
3. Client misassociation:
Clients can connect to the wrong network. For example, your operating system remembers recently used SSIDs and reconnects automatically at the next boot; if someone spoofs such an SSID, the operating system automatically joins that impostor's wireless environment.
One defense against this is "Management Frame Protection" (MFP), which has two modes. In the first, Infrastructure MFP, the AP inserts a MIC (Message Integrity Check) before each FCS (Frame Check Sequence) to judge whether a frame belongs to it. In the second, Client MFP, the client must install CCX5 (Cisco Compatible Extensions) and the SSC (Secure Services Client) to perform MIC verification and report back to the controller.
Besides protecting clients from misassociation, MFP also prevents DoS: frames without a MIC are simply dropped, removing that avenue of attack.
4. Wireless attacks, commonly of three kinds:
- Reconnaissance attacks: the attacker tries to learn about the wireless environment; even hidden SSIDs may be discovered
- Access attacks: the attacker tries to reach the wireless network's data, devices, and network. MAC filtering is a decent countermeasure, but MACs are easily spoofed; WEP-based authentication also works, but may be cracked within 4~7 minutes
- Denial-of-service attacks: they keep legitimate users from using the wireless network (since wireless is half-duplex, even P2P downloads over wireless can amount to a DoS); a wireless IPS/IDS can address this
(2) Simple Authentications
There are roughly three simple authentication schemes:
1. Open Authentication: the simplest authentication; the flow is completely open:
Code:
Client ---------------------------------- AP
Authentication Request -->
                          <-- Authentication Response
Association Request -->
                          <-- Association Confirmation
2. Preshared Key Authentication with Wired Equivalent Privacy: WEP authentication basically never looks at the person, only at the key:
Code:
Client ---------------------------------- AP
Authentication Request -->
                          <-- Authentication Response + cleartext challenge
Uses the challenge text to encrypt (Association Request) -->
                          <-- Compares static WEP keys; Association Confirmation
WEP encrypts with RC4, and precisely because it uses RC4 it is easy to crack. WEP keys are 40, 104, or 128 bits long, including the IV (Initialization Vector), a number used to produce a unique encryption key.
3. MAC Address Filtering: anyone who has configured an AP knows that unlisted MACs cannot associate; but because MACs are easily spoofed, this is not secure enough.

(3) Centralized Authentication
A representative of centralized authentication is PKI (Public Key Infrastructure). Its central concern is identity: the encrypted security certificates issued through PKI (usually involving a trusted third party, called a CA) establish the legitimacy of a person's identity. The certificate mechanism is also applied in 802.1X to authenticate clients, and combined with the various EAP methods it yields many authentication variants, described below.

802.1X: an authentication standard defined by the IEEE, applicable to wireless and wired networks. 802.1X is a framework; the actual authentication exchange is carried out through EAP (Extensible Authentication Protocol), and once it succeeds, the 802.1X framework decides which port to open or close. Three roles make up the 802.1X architecture:
Client ----------------- Switch ----------------- Auth Server
(the Supplicant)         (the Authenticator)      (the Authentication Server)
Before authentication succeeds, no frames may pass the authenticator. The flow:
Code:
Client )))))) AP -------------------------------- Auth Server
Association Request -->
                          <-- Association Response
(at this point the AP still passes no frames through the port)
Client sends credentials -->
AP forwards the authentication information in RADIUS packets -->
                          <-- RADIUS traffic returns; client and server each hold a unique session key
                          <-- Success message + session WEP key
The client and AP can then use the session key to decrypt the protected data.

EAP Process: EAP specifically governs how credentials are carried; whichever EAP method is used, the flow is the same:
Code:
Client )))))) AP -------------------------------- Auth Server
Request access -->
                          <-- Identity query
Proof of identity is transferred through the AP to the auth server -->
                          <-- Success/Fail

EAP-TLS: in full, Extensible Authentication Protocol - Transport Layer Security, the most widespread EAP mechanism, with encryption very close to SSL. In EAP-TLS, certificates must be installed on both the server and the client, so it is considered very secure. The flow:
Code:
Client )))))) AP -------------------------------- Auth Server
EAP Start -->
                          <-- Request Identity
Identity -->
                          <-- Server sends its certificate
Client sends its certificate -->
                          <-- Both sides derive symmetric session keys (the master session key)
                          <-- Master key delivered to the AP or controller
                          <-- Client and AP encrypt using WEP or WPA/WPA2
In the EAP exchange the conversation runs mainly between the client and the auth server; the AP in the middle simply operates at Layer 2, relaying frames and switching the port, so configuring it is as simple as selecting 802.1X.

EAP-FAST: in full, Extensible Authentication Protocol - Flexible Authentication via Secure Tunneling, a protocol Cisco developed to harden another Cisco protocol, LEAP (Lightweight Extensible Authentication Protocol). EAP-FAST does not use PKI; instead each client holds a unique shared secret, the PAC (Protected Access Credential). The exchange has three phases. Phase 0: the PAC is distributed to the auth server and the client. Phase 1: the auth server and client build a TLS tunnel. Phase 2: the user's identity is authenticated. Flow:
Code:
Client )))))) AP -------------------------------- Auth Server
EAP Start -->
                          <-- EAP Request Identity
EAP Response Identity -->
                          <-- EAP-FAST start (Authority-ID) + EAP Request Challenge
PAC Opaque -->
                          <-- Cipher and trust protocol set
Confirm cipher and trust protocol set -->
=================== inside the tunnel below ===================
                          <-- Identity Request
Authentication Response (EAP-GTC) -->
                          <-- Success/Fail
PEAP: in full, Protected EAP; a certificate is installed only on the server, to build the tunnel and authenticate. It was co-developed by Cisco, Microsoft, and RSA, so for integration with AD you can authenticate with MS-CHAPv2; otherwise GTC (Generic Token Card) can be used. The flow:
Code:
Client )))))) AP -------------------------------- Auth Server
EAP Start -->
                          <-- EAP Request Identity
EAP Response Identity -->
                          <-- Server certificate (EAP-TLS)
Pre-master secret -->
=================== inside the tunnel below ===================
                          <-- Identity Request
Identity Response -->
                          <-- EAP MS-CHAPv2 Challenge
EAP MS-CHAPv2 Response -->
                          <-- EAP Success/Fail
LEAP: Lightweight Extensible Authentication Protocol, also developed by Cisco and still seen on some 802.11b networks. Because it can be attacked offline (dictionary attacks), use it with care.

(4) Authentication and Encryption
Because WEP encrypts with RC4 it is easily cracked, so stronger schemes followed.

WPA: Wi-Fi Protected Access, the Wi-Fi Alliance's replacement for WEP authentication and encryption. WPA uses TKIP (Temporal Key Integrity Protocol) to change the encryption key dynamically, but since the cipher is still RC4 some risk remains. Two workarounds exist: enlarge the IV to make the encryption more complex, or upgrade hardware so the AES cipher can be used. WPA has two operating modes: Enterprise Mode, which goes through an auth server, and Personal Mode, which distributes a pre-shared key directly. The key exchange:

Client )))))) (((((((( AP ——————————– Auth Server
<–<– Security capability discovery
<–<– 802.1x authentication
<–<– Pairwise Master Key (PMK) delivered to client and AP

(( 4-Way Handshake for keys ))
<–<– Random number
(client derives PTK, the Pairwise Transient Key)
Random number –>–>
(AP derives PTK)
<–<– Random number resent (client installs PTK)
PTK done –>–>
(AP installs PTK)

(( 2-Way Group Key Handshake ))
–>–> <–<– (group key exchanged)
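The PTK derivation behind the 4-way handshake can be sketched in Python (a minimal sketch of the SHA-1 based PRF from IEEE 802.11i; the PMK, MAC addresses, and nonces below are made-up sample values):

```python
import hmac, hashlib

def prf_512(pmk: bytes, label: bytes, data: bytes) -> bytes:
    """IEEE 802.11i PRF: HMAC-SHA1 iterated and truncated to 512 bits (64 bytes)."""
    out = b""
    i = 0
    while len(out) < 64:
        out += hmac.new(pmk, label + b"\x00" + data + bytes([i]), hashlib.sha1).digest()
        i += 1
    return out[:64]

def derive_ptk(pmk: bytes, aa: bytes, spa: bytes, anonce: bytes, snonce: bytes) -> bytes:
    # Addresses and nonces are combined as min||max so both sides derive the same PTK.
    data = min(aa, spa) + max(aa, spa) + min(anonce, snonce) + max(anonce, snonce)
    return prf_512(pmk, b"Pairwise key expansion", data)

ptk = derive_ptk(b"\x11" * 32, bytes(6), b"\x22" * 6, b"\x33" * 32, b"\x44" * 32)
# The 64-byte PTK splits into KCK (16), KEK (16), and temporal key material (32).
kck, kek, tk = ptk[:16], ptk[16:32], ptk[32:]
```

Note how the min/max ordering makes the derivation symmetric: client and AP plug in the same inputs in either role and obtain the same PTK.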
WPA2: operates the same way as WPA; the difference is the cipher, which moves from RC4 to AES/CCMP (Advanced Encryption Standard – Cipher Block Chaining Message Authentication Code Protocol), so older devices need a hardware upgrade or replacement to support it.

2. Enterprise Wireless Management with the WCS and Location Appliance
WCS
The Wireless Control System is software that can be installed on Windows or Linux; its job is to assist with wireless network deployment, design, and management. For larger deployments Cisco recommends installing WCS on a Linux-based system.
For deployment, consider the port connections it needs open (it is, in fact, an Apache web server):
Java: 1299, 8009, 8456, 8457
HTTP: 80
HTTPS: 443
FTP: 21, TFTP: 69
SNMP: 162
For the rest, working hands-on through the WCS screens is recommended.
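A quick way to sanity-check these ports from a management station is a TCP connect probe (a sketch only: the hostname `wcs.example.local` is a placeholder, and TFTP 69 / SNMP 162 actually run over UDP, so a TCP probe merely illustrates the idea):

```python
import socket

# Ports WCS needs reachable, per the list above.
WCS_PORTS = {
    "Java": [1299, 8009, 8456, 8457],
    "HTTP": [80],
    "HTTPS": [443],
    "FTP": [21],
    "TFTP": [69],   # UDP in practice; listed for completeness
    "SNMP": [162],  # UDP trap port in practice
}

def tcp_port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for service, ports in WCS_PORTS.items():
    for port in ports:
        print(service, port, tcp_port_open("wcs.example.local", port, 0.5))
```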
3. Maintaining Wireless Networks
This part covers upgrading the WCS, controllers, and APs; a general read-through is enough.
四、Troubleshooting Wireless Networks
從網路管理的角度來看,大部份的問題都發生在OSILayer1~Layer3,也建議從Layer1往上檢查問題,可以幫助節省許多時間,在實體的檢查順序如下:
AP
Switch , Switch Switch , Switch Controller ,
Controller
Distribution
檢查實體線路與燈號,通常燈號紅色代表有問題、燈色代表不好、綠色代表正常。再來檢查Client端的問題,因為無線網路通常都是由Client來告訴你,我不能上網,可以檢查這幾個項目:
Client Card (網卡) : 看看有沒有啟動
SSID : 看看有沒有設對、或抓錯台
。頻率使用: 看看電腦有沒有用到同頻率的其它服務
MAC: 看看有沒有加入AP的黑名單或白名單
。如使用802.1x 驗證,要確認一下使用的EAP方式有沒有支援
。看看有沒有被ACL擋住
。看看Client電腦的FirewallAnti-Virus是不是有擋到啥咪
。環境有NAC(Network Access Control)設備的話,確認有沒有擋到
。如果用Pre-ShareKey,確認Client的設定
Beyond these, some common wireless conditions and their fixes:
1. Hidden Node Issue:
Under the same AP, two clients that cannot hear each other transmit on the same channel at once, and the half-duplex AP cannot process both signals. Fixes: shrink the AP's coverage, reduce the maximum frame size, or force RTS/CTS control frames for every transmission.
2. Exposed Node Issue:
Two APs placed too close together, operating on the same frequency and channel (802.11b/g), interfere with each other. Fixes: reposition the APs, or move one of them to 802.11a.
3. Near/Far Issue:
The AP's signal is strong and its cell is large; a client inside the cell cannot transmit far enough to reach the AP, so it cannot communicate or connect. The fix is to reduce the AP's power; remarkably, WCS can detect the client's signal automatically and adjust the AP's power level.
4. Degraded Data Rate:
A common condition, e.g. an 802.11b client joining a g AP, or a/b/g clients joining an n AP, which keeps the cell from using its maximum rate. The fix is to restrict the cell to a single standard/rate.
Finally, memorize a few debug commands:
show client summary
show client detail
debug lwapp events enable
debug dot11

Cisco LWAPP

Layer 2 LWAPP is in an Ethernet frame (Ethertype 0xBBBB)

Cisco WLAN Controller and AP must be connected to the same VLAN/subnet


[Figure: Cisco WLAN Controller <—LWAPP-L2—> Lightweight Access Points]

LWAPP-L2 : Data Message
MAC Header | LWAPP Header (C=0) | Data …

LWAPP-L2 : Control Message
MAC Header | LWAPP Header (C=1) | Control Msg | Control Elts …

How a Cisco LAP uses L2 LWAPP to register to the WLC

1.    Power on the AP

2.    If a static IP address has not been previously configured, the AP issues a DHCP DISCOVER to get an IP address

3.    If L2 mode is supported, attempt an L2 LWAPP WLAN Controller Discovery (Ethernet broadcast)

4.    If L2 mode is not supported or step 3 fails to find a WLAN controller, attempt an L3 LWAPP WLAN Controller Discovery

5.    If step 4 fails, reboot and return to step 1

Cisco Layer 3 LWAPP is in a UDP / IP frame

•    Data traffic uses source port 1024 and destination 12222

•    Control traffic uses source port 1024 and destination port 12223
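The two destination ports make classifying captured L3 LWAPP packets trivial; a sketch:

```python
# Classify Layer 3 LWAPP traffic by UDP destination port, per the values above.
LWAPP_DATA_PORT = 12222
LWAPP_CONTROL_PORT = 12223

def classify_lwapp(dst_port: int) -> str:
    """Return 'data', 'control', or 'not-lwapp' for a UDP destination port."""
    if dst_port == LWAPP_DATA_PORT:
        return "data"
    if dst_port == LWAPP_CONTROL_PORT:
        return "control"
    return "not-lwapp"

print(classify_lwapp(12223))  # control
```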

Cisco Controller and AP can be connected to the same VLAN/subnet or to different VLANs/subnets; this requires IP addressing on the Cisco Lightweight AP

[Figure: Cisco WLAN Controller <—LWAPP-L3—> Lightweight Access Points]

LWAPP-L3 : Data Message
MAC Header | IP | UDP=12222 | LWAPP Header (C=0) | Data …

LWAPP-L3 : Control Message
MAC Header | IP | UDP=12223 | LWAPP Header (C=1) | Control Msg | Control Elts …

How a Cisco LAP gets an L3 address and uses it to register to the WLC

AP goes through the following steps to compile a LIST OF WLAN CONTROLLERS:

  1. LWAPP Discovery broadcast on local subnet
  2. Over-the-Air Provisioning (OTAP)
  3. Locally stored controller IP addresses
  4. DHCP vendor-specific option 43 (the IP address should be the "Management Interface" IP)
  5. DNS resolution of "CISCO-LWAPP-CONTROLLER.localdomain"
    (should resolve to the "Management Interface" IP)
  6. If no controller found, start over…
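For step 4, Cisco documents option 43 for LWAPP as a vendor-specific TLV: sub-type 0xf1, a length of 4 bytes per controller, then the management-interface IPs. A sketch of building that value (the controller addresses below are made-up examples):

```python
import ipaddress

def lwapp_option43(controller_ips):
    """Build the DHCP option 43 TLV (sub-type 0xf1) carrying WLC management IPs."""
    payload = b"".join(ipaddress.IPv4Address(ip).packed for ip in controller_ips)
    return bytes([0xF1, len(payload)]) + payload

# Two controllers -> f1, length 08, then the two packed IPv4 addresses.
print(lwapp_option43(["192.168.10.5", "192.168.10.20"]).hex())  # f108c0a80a05c0a80a14
```

This hex string is what you would paste into a DHCP server's raw option 43 field for the AP's scope.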
Bandwidth greatly expanded: 802.11ac pushes data rates past 1 Gbit/s
Where will that bandwidth come from? The 802.11ac wireless LAN standard, now in development and regarded as the next revolutionary step beyond 802.11n, provides the answer. Treated as an amendment to IEEE 802.11n, it is expected to obtain Wi-Fi certification in the first quarter of 2013; it extends coverage performance while delivering video-grade throughput on increasingly crowded networks.

802.11ac, representing the next wave of the wireless revolution, is the first Wi-Fi standard to exceed gigabit performance. Users who rely on their phones not only for calls and text messages but also for e-mail, web browsing, and streaming media-intensive content such as music and video will benefit from the convenience this standard brings.

With the tablet boom still growing, removing potential obstacles to reliable Wi-Fi access is also a key concern for the regional and international manufacturers competing fiercely for market leadership.

Avoiding interference: 802.11ac operates in the 5 GHz band

One advantage the 802.11ac standard brings is operation in the 5 GHz band rather than the 2.4 GHz band currently used by 802.11n devices. Because 2.4 GHz is shared with household products and with Wi-Fi and Bluetooth devices, it is increasingly crowded and prone to electromagnetic interference. The 5 GHz band offers the reduced interference the market needs and has more non-overlapping channels available. Use of 5 GHz also enables beamforming, letting the beams from transmitter and receiver be focused directly on the device.

New 80 MHz and 160 MHz channel bandwidths

Besides reducing annoying interference, the new 802.11ac standard adds 80 MHz and 160 MHz channel bandwidths (802.11n supports only 20 MHz and 40 MHz), providing wider bandwidth for higher data rates and advanced multimedia capability. For example, 802.11ac solutions expected within 2012 will use 80 MHz bandwidth and three spatial streams to reach a maximum PHY data rate of 1.3 Gbit/s, nearly double the 600 Mbit/s maximum that 802.11n delivers over three spatial streams (Table 1).

Beyond wider channel bandwidths, 802.11ac also raises the constellation from 64-QAM (quadrature amplitude modulation) to 256-QAM, a 33% data-rate increase over 802.11n.
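The 33% figure follows from bits per symbol: log2(256) = 8 versus log2(64) = 6. A one-line check:

```python
import math

# Bits per symbol scale with log2 of the constellation size.
bits_64qam = math.log2(64)    # 6 bits/symbol (802.11n)
bits_256qam = math.log2(256)  # 8 bits/symbol (802.11ac)
increase = bits_256qam / bits_64qam - 1
print(f"{increase:.0%}")  # 33%
```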

Dynamic bandwidth management is another important advance in 802.11ac: it lets request-to-send/clear-to-send (RTS/CTS) exchanges take place on a 20 MHz "primary" channel free of interference from other networks. 802.11ac networks also improve the sensitivity threshold from -62 dBm in 802.11n to -72 dBm, reducing overlap and collisions with other Wi-Fi access points.

Introducing MIMO raises 802.11ac data rates

Today's 802.11n networks allow at most four multiple-input multiple-output (MIMO) streams to be sent to a single device. An 802.11ac station (STA), however, can receive up to eight spatial streams, effectively doubling overall network throughput.

802.11ac is also the first Wi-Fi standard to adopt multi-user MIMO (MU-MIMO), which can distribute eight spatial streams across up to four STAs. This high throughput and low power consumption for fast data transfer give demanding applications longer battery life and better performance. The feature particularly benefits the growing number of users streaming high-bandwidth content to multiple devices at once (Figures 1 and 2).

Figure 1: 802.11ac supports video streaming to multiple devices
Figure 2: Adding MIMO raises data rates

802.11ac devices arriving in force

802.11ac products will reach the market this year as pre-certified devices in four distinct segments: mobile (smartphones), computing (notebooks and tablets), consumer electronics (TVs, game consoles, and digital monitors), and home and entertainment networking (routers and access points).

For the mobile segment, combining new 802.11ac chip solutions with advanced processors will deliver high throughput for fast data transfer at low power consumption, giving demanding applications longer battery life and better performance.

The industry's first mobile 802.11ac Wi-Fi/Bluetooth/FM combo chip will target smartphones, tablets, and other consumer electronic devices. For those cautious about moving to this next revolutionary stage of Wi-Fi technology, the new standard is designed to coexist with 802.11n/a components, ensuring a smooth transition to gigabit-capable 802.11ac Wi-Fi networks.

For emerging home-entertainment networks, 802.11ac will deliver faster, smoother video through smart TVs, game consoles, and streaming media players (smart DMAs), and lets such a device effectively become the connection hub and larger display for nearby mobile products such as smartphones and tablets.

HD video streamed over a wireless link without the necessary bandwidth stutters, sharply degrading the user experience. 802.11ac handles subscription services in the style of Netflix, Hulu, or Vudu and improves playback quality throughout the house on every device, from TVs and Blu-ray players to game consoles and tablets.

With the added connectivity of 802.11ac, the connected home gains more advanced smart-TV solutions, further enabling thin TV designs. Gamers will enjoy high-performance peer-to-peer gaming anywhere and lag-free play on home consoles.

Finally, for home and enterprise networking equipment, customer products can reach wireless data rates of up to 1.3 Gbit/s, targeting next-generation dual-band dual-concurrent (DBDC) wireless access points, routers, and gateways. With a high level of integration, 802.11ac system-on-chip (SoC) designs deliver higher reliability and simplify design and production.

Wider bandwidth, more spatial streams, multi-user MIMO, higher-order modulation, and dynamic bandwidth management all lift the performance of the 802.11ac standard. Gigabit-capable Wi-Fi networks will open more headroom for smartphones, tablets, and notebooks. As noted earlier, the first pre-certified 802.11ac devices are expected on the market before Wi-Fi certification completes in the first quarter of 2013. Notably, users of 802.11ac-capable smartphones will be the first to enjoy higher data throughput, without sacrificing compact size, space, low cost, or low battery drain (Table 3).

Table 3: 802.11ac technology advantages at a glance

IPV6 Interface ID (OLD but Useful)


IPv6 interface identifiers

The last 64 bits of an IPv6 address are the interface identifier, unique within the address's 64-bit prefix. The interface identifier is determined in one of the following ways:

  • RFC 2373 states that all unicast addresses whose prefix begins 001 through 111 must use a 64-bit interface ID derived from an Extended Unique Identifier (EUI)-64 address.
  • RFC 3041 describes randomly generated interface identifiers that change over time to provide a level of anonymity.
  • An interface identifier assigned during stateful address configuration (for example, via DHCPv6). The DHCPv6 standard is still being defined; the IPv6 protocol in the Windows Server 2003 family and Windows XP does not support stateful address configuration or DHCPv6.
  • A manually configured interface identifier.

EUI-64 address-based interface identifiers

The 64-bit EUI-64 address is defined by the Institute of Electrical and Electronics Engineers (IEEE). EUI-64 addresses are either assigned to a network adapter or derived from IEEE 802 addresses.

IEEE 802 MAC addresses

The traditional interface identifier for a network adapter is a 48-bit address called the IEEE 802 address. It consists of a 24-bit company ID (also called the manufacturer ID) and a 24-bit extension identifier (also called the board ID). The company ID, uniquely assigned to each manufacturer, combined with a board ID uniquely assigned to each adapter at assembly time, forms a globally unique 48-bit address, also known as the physical, hardware, or media access control (MAC) address.


The following bits are defined in an IEEE 802 address:

  • Universal/Local (U/L)
    The U/L bit is the seventh bit of the first byte and determines whether the address is universally or locally administered. If the U/L bit is 0, the IEEE administers the address through the assigned company ID. If it is 1, the address is locally administered: a network administrator has overridden the manufactured address and assigned a different one.
  • Individual/Group (I/G)
    The I/G bit is the low-order bit of the first byte and determines whether the address is an individual (unicast) or group (multicast) address. If set to 0, the address is unicast; if set to 1, it is multicast.

The U/L and I/G bits of a typical 802.x network adapter address are both set to 0, corresponding to a universally administered unicast MAC address.
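Extracting the two flag bits from the first byte of a MAC can be sketched as follows (note the U/L bit is, numerically, the 0x02 bit of the first byte):

```python
def mac_flags(mac: str):
    """Return (u_l, i_g) flag bits from the first byte of a MAC like '00-AA-00-3F-2A-1C'."""
    first = int(mac.split("-")[0], 16)
    u_l = (first >> 1) & 1   # 0x02 bit: 0 = universally administered, 1 = locally administered
    i_g = first & 1          # 0x01 bit: 0 = individual/unicast, 1 = group/multicast
    return u_l, i_g

print(mac_flags("00-AA-00-3F-2A-1C"))  # (0, 0): universally administered unicast
```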

 IEEE EUI-64 addresses

IEEE EUI-64 addresses represent a newer standard for network interface addressing. The company ID is still 24 bits long, but the extension identifier is 40 bits, creating a much larger address space for network adapter manufacturers. EUI-64 addresses use the U/L and I/G bits in the same way as IEEE 802 addresses.


Mapping IEEE 802 addresses to EUI-64 addresses

To create an EUI-64 address from an IEEE 802 address, the 16 bits 11111111 11111110 (0xFFFE) are inserted into the IEEE 802 address between the company ID and the extension identifier.


Mapping EUI-64 addresses to IPv6 interface identifiers

To obtain the 64-bit interface identifier for an IPv6 unicast address, the U/L bit in the EUI-64 address is complemented (if it is 1, set it to 0; if it is 0, set it to 1).


To obtain an IPv6 interface identifier from an IEEE 802 address, first map the IEEE 802 address to an EUI-64 address, then complement the U/L bit.


IEEE 802 address conversion example

A host has the Ethernet MAC address 00-AA-00-3F-2A-1C. First, FF-FE is inserted between the third and fourth bytes, producing 00-AA-00-FF-FE-3F-2A-1C in EUI-64 format. Next, the U/L bit, the seventh bit of the first byte, is complemented. The first byte in binary is 00000000; with the seventh bit complemented it becomes 00000010 (0x02). The result, 02-AA-00-FF-FE-3F-2A-1C, converted to colon-hexadecimal notation, yields the interface identifier 2AA:FF:FE3F:2A1C. Finally, the link-local address corresponding to the network adapter with MAC address 00-AA-00-3F-2A-1C is FE80::2AA:FF:FE3F:2A1C.
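The whole conversion can be reproduced in a few lines (a sketch using Python's standard `ipaddress` module):

```python
import ipaddress

def mac_to_link_local(mac: str) -> str:
    """Derive the IPv6 link-local address from a 48-bit MAC via modified EUI-64."""
    octets = bytes.fromhex(mac.replace("-", ""))
    # Insert ff:fe in the middle and complement the U/L bit (0x02) of the first byte.
    eui64 = bytes([octets[0] ^ 0x02]) + octets[1:3] + b"\xff\xfe" + octets[3:]
    return str(ipaddress.IPv6Address(b"\xfe\x80" + b"\x00" * 6 + eui64))

print(mac_to_link_local("00-AA-00-3F-2A-1C"))  # fe80::2aa:ff:fe3f:2a1c
```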

Notes

  • When complementing the U/L bit of a universally administered EUI-64 address, the operation amounts to adding 0x2 to the first byte.

Some unofficial notes on IPv6 prefixes and address

Updated: 2006-06-18
Created: 2003-12-18

Addressing information from many sources, like:

General note on unicast addresses: the first 16 bits are the TLA, the next 8 bits must be zero, then 24 bits of NLA (48 bits total), 16 bits of SLA (64 bits total), and 64 bits for the interface.

Also, there are issues with internet-wide routes and the length of prefixes. Recommendations have been written that illustrate what is expected to be routable.

The table below is exhaustive.

IPv6 unicast and anycast address ranges

Prefix

Up to

Notes

0000::/8 00ff::/8 Reserved
0000::0/128 0000::0/128 node local: unspecified address
0000::1/128 0000::1/128 node local: localhost
0000::0000:0000:0000/96 0000::0000:ffff:ffff/96 IPv4 compatible
0000::ffff:0000:0000/96 0000::ffff:ffff:ffff/96 IPv4 mapped
0100::/8 01ff::/8 unassigned
0200::/7 03ff::/7 NSAP
0400::/7 05ff::/7 IPX (obsolete)
0600::/7 06ff::/7 unassigned
0800::/5 0fff::/5 unassigned
1000::/4 1fff::/4 unassigned
2000::/3 3fff::/3 aggregatable global
2000:0000::/16 2000:ffff::/16 reserved
2001:0000::/32 2001:0000::/32 Teredo, non Microsoft (official)
2001:0001::/32 2001:ffff::/32 sub TLAs
2001:0000::/23 2001:01ff::/23 sub TLAs: IANA
2001:0200::/23 2001:03ff::/23 sub TLAs: APNIC
2001:0400::/23 2001:04ff::/23 sub TLAs: ARIN
2001:0600::/23 2001:07ff::/23 sub TLAs: RIPE NCC
2001:0800::/23 2001:09ff::/23 sub TLAs: RIPE NCC
2001:0a00::/23 2001:0bff::/23 sub TLAs: RIPE NCC
2001:0c00::/23 2001:0dff::/23 sub TLAs: APNIC. Among these 2001:0db8::/32 is reserved for example addresses, and will not be allocated.
2001:0e00::/23 2001:0fff::/23 sub TLAs: APNIC
2001:1200::/23 2001:13ff::/23 sub TLAs: LACNIC
2001:1400::/23 2001:15ff::/23 sub TLAs: RIPE NCC
2001:1600::/23 2001:17ff::/23 sub TLAs: RIPE NCC
2001:1800::/23 2001:19ff::/23 sub TLAs: ARIN
2002:0000::/16 2002:ffff::/16 6to4
3ffe:0000::/16 3ffe:ffff::/16 6bone pseudo TLAs
3ffe:831f::/32 3ffe:831f::/32 Teredo, Microsoft (experimental)
3fff:0000::/16 3fff:ffff::/16 reserved
4000::/3 5fff::/3 unassigned
6000::/3 7fff::/3 unassigned
8000::/3 9fff::/3 unassigned
a000::/3 bfff::/3 unassigned
c000::/3 dfff::/3 unassigned
e000::/4 efff::/4 unassigned
f000::/5 f7ff::/5 unassigned
f800::/6 f9ff::/6 unassigned
fa00::/7 fbff::/7 unassigned
fc00::/7 fdff::/7 unique local
fe00::/9 fe7f::/9 unassigned
fe80::/9 feff::/9 local
fe80:0000::/10 febf:0000::/10 link local
fec0:0000::/10 feff:0000::/10 site local (deprecated)

The multicast table below is exhaustive, except that any ranges not described in the fourth hex digit are reserved.

IPv6 multicast address ranges

Prefix

Up to

Notes

ff00::/8 ffff::/8 multicast
ff00::/12 ff0f::/12 well known
ff01::/16 ff01::/16 well known link local
ff02::/16 ff02::/16 well known site local
ff02:0:0:0:0:1:ff00::/104 ff02:0:0:0:0:1:ffff::/104 well known solicited-node
ff05::/16 ff05::/16 well known site local
ff08::/16 ff08::/16 well known organization local
ff0e::/16 ff0e::/16 well known global
ff10::/12 ff1f::/12 transient
ff11::/16 ff11::/16 transient link local
ff12::/16 ff12::/16 transient site local
ff15::/16 ff15::/16 transient site local
ff18::/16 ff18::/16 transient organization local
ff1e::/16 ff1e::/16 transient global
ff20::/12 ff2f::/12 unassigned
ff30::/12 ff3f::/12 RFC 3306 prefix based multicast
ff30:0000:0000:0000:0000:0000::/96 ff30:0000:0000:0000:0000:0000::/96 RFC 3306 source-specific multicast
ff70::/12 ff7f::/12 RFC 3956 embedded-RP multicast
ff40::/10 ffff::/10 unassigned
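The well-known solicited-node range listed above (ff02:0:0:0:0:1:ff00::/104) is derived mechanically from a unicast address by keeping its low 24 bits; a sketch:

```python
import ipaddress

def solicited_node(addr: str) -> str:
    """Map a unicast IPv6 address to its solicited-node multicast address
    (ff02::1:ff00:0/104 plus the low 24 bits of the unicast address)."""
    low24 = int(ipaddress.IPv6Address(addr)) & 0xFFFFFF
    base = int(ipaddress.IPv6Address("ff02::1:ff00:0"))
    return str(ipaddress.IPv6Address(base | low24))

print(solicited_node("fe80::2aa:ff:fe3f:2a1c"))  # ff02::1:ff3f:2a1c
```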

Special interface numbers in anycast addresses.

IPv6 unicast/anycast interfaces

Suffix

Notes

*:0000 Nearest router
*:fff7:ffff:ffff:ff80-ffff Reserved anycast addresses (EUI-64)
*:fff7:ffff:ffff:fffe Mobile IPv6 home-agents (EUI-64)
*:ffff:ffff:ffff:ff80-ffff Reserved anycast addresses (non EUI-64)
*:ffff:ffff:ffff:fffe Mobile IPv6 home-agents (non EUI-64)
fe80::c171:3a50 Nearest 6to4 gateway (193.113.58.80)

IPv6 multicast interfaces

Info from RFC 2375.

IPv6 multicast interfaces (group ids)

Suffix

Notes

ff0?::0000 reserved
ff0?::0100 VMTP Managers Group
ff0?::0101 Network Time Protocol (NTP)
ff0?::0102 SGI-Dogfight
ff0?::0103 Rwhod
ff0?::0104 VNP
ff0?::0105 Artificial Horizons – Aviator
ff0?::0106 NSS – Name Service Server
ff0?::0107 AUDIONEWS – Audio News Multicast
ff0?::0108 SUN NIS+ Information Service
ff0?::0109 MTP Multicast Transport Protocol
ff0?::010A IETF-1-LOW-AUDIO
ff0?::010B IETF-1-AUDIO
ff0?::010C IETF-1-VIDEO
ff0?::010D IETF-2-LOW-AUDIO
ff0?::010E IETF-2-AUDIO
ff0?::010F IETF-2-VIDEO
ff0?::0110 MUSIC-SERVICE
ff0?::0111 SEANET-TELEMETRY
ff0?::0112 SEANET-IMAGE
ff0?::0113 MLOADD
ff0?::0114 any private experiment
ff0?::0115 DVMRP on MOSPF
ff0?::0116 SVRLOC
ff0?::0117 XINGTV
ff0?::0118 microsoft-ds
ff0?::0119 nbc-pro
ff0?::011a nbc-pfn
ff0?::011b lmsc-calren-1
ff0?::011c lmsc-calren-2
ff0?::011d lmsc-calren-3
ff0?::011e lmsc-calren-4
ff0?::011f ampr-info
ff0?::0120 mtrace
ff0?::0121 RSVP-encap-1
ff0?::0122 RSVP-encap-2
ff0?::0123 SVRLOC-DA
ff0?::0124 rln-server
ff0?::0125 proshare-mc
ff0?::0126 dantz
ff0?::0127 cisco-rp-announce
ff0?::0128 cisco-rp-discovery
ff0?::0129 gatekeeper
ff0?::012a iberiagames
ff0?::0201 “rwho" Group (BSD) (unofficial)
ff0?::0202 SUN RPC PMAPPROC_CALLIT
ff0?::0002:0000-7ffd Multimedia Conference Calls
ff0?::0002:7ffe SAPv1 Announcements
ff0?::0002:7fff SAPv0 Announcements (deprecated)
ff0?::0002:8000-8fff SAP Dynamic Assignments
ff02::0001 all nodes
ff02::0002 all routers
ff02::0004 DVMRP routers
ff02::0005 OSPFIGP routers
ff02::0006 OSPFIGP designated routers
ff02::0007 ST routers
ff02::0008 ST hosts
ff02::0009 RIP routers
ff02::000a EIGRP routers
ff02::000b mobile agents
ff02::000d PIM routers
ff02::000e RSVP encapsulation
ff02::0001:0001 Link name
ff05::0002 all routers
ff05::0001:0003 all DHCP servers
ff05::0001:0004 all DHCP relays
ff05::0001:1000-13ff service location


Cisco SONA

Introduction to Cisco SONA (1)

Understanding the Cisco Service-Oriented Network Architecture (SONA)

The challenge enterprises face
Despite heavy IT investment, many enterprises find that most of their critical network resources and information assets remain isolated. In fact, hundreds of "siloed" applications and databases that cannot communicate with each other is a common enterprise condition.

The cause of this condition is the constantly growing yet unpredictable demand from internal and external customers. Many enterprises are forced to deploy new technologies quickly, often resulting in multiple discrete systems that cannot share information effectively across the organization.
For example, without building various overlay networks that tie applications and information together, sales staff, customer-service staff, or purchasing departments cannot easily access customer records. Many enterprises find that this unplanned sprawl leaves them with multiple under-utilized, uncoordinated, separate systems and resources, which are also hard and costly to manage.

Cisco's SONA and IIN advantages

The Intelligent Information Network advantage

Through its Intelligent Information Network (IIN) initiative, Cisco Systems® is helping IT organizations worldwide solve these problems and meet new challenges such as deploying service-oriented architectures, web services, and virtualization. IIN describes the evolution of the network in facilitating hardware/software integration, allowing organizations to better align IT resources with business priorities. By building intelligence into the existing network infrastructure, IIN helps organizations realize benefits such as reduced infrastructure complexity and cost.

The importance of the network
IT innovation has focused mainly on delivering new business applications through traditional server-based systems. Yet the network remains the platform that transparently connects and supports every component of the IT infrastructure.
Using the Cisco® Service-Oriented Network Architecture (SONA), enterprises can optimize applications, processes, and resources for greater business benefit. By providing better network capability and intelligence, enterprises raise the efficiency of all network-related activities while freeing more funds for new strategic investment and innovation.
Standardization lowers the operating cost of supporting the same amount of assets, improving asset efficiency. Virtualization optimizes asset use by logically partitioning physical resources for use across all distributed departments. Network-wide efficiency gains increase flexibility and scalability, with significant impact on business growth, customer loyalty, and profit, and therefore on competitive advantage.
Succeeding with an architecture

The Cisco SONA architecture describes how an enterprise should evolve toward an Intelligent Information Network in order to accelerate applications, business processes, and resource usage, enabling IT to serve the business better.
Cisco SONA draws on the solutions, services, and experience of Cisco and its partners across industries to deliver proven, scalable business solutions.

The Cisco SONA architecture describes how to build integrated systems on a fully converged intelligent network to greatly improve flexibility and efficiency.

Enterprises can deploy this integrated intelligence throughout the network, including data center, branch, and campus environments.

Figure 1. The Cisco Service-Oriented Network Architecture

[Figure: Application layer — business applications, collaboration applications. Interactive services layer — infrastructure services, network infrastructure virtualization, adaptive management. Networked infrastructure layer — campus, branch, data center, teleworkers; servers, storage, clients. All layers rest on the Intelligent Information Network (IIN).]

The three layers of Cisco SONA:

1. The networked infrastructure layer, where all IT resources sit on a converged network platform.

2. The interactive services layer, which efficiently allocates resources to the applications and business processes that use the network infrastructure.

3. The application layer, which contains business and collaboration applications that take full advantage of the efficiency of the interactive services.

At the networked infrastructure layer, the proven Cisco enterprise architecture provides comprehensive, integrated end-to-end system design guidance for your entire network. At the interactive services layer, Cisco integrates a full set of services into intelligent systems to optimize the delivery of business and collaboration applications, providing more predictable, more reliable performance while lowering operating costs. At the application layer, through deep integration with the network fabric, Cisco application network solutions require no client installation or application changes while preserving application visibility and security throughout application delivery.

The business advantage of building on Cisco SONA: a simpler, more flexible integrated infrastructure provides greater agility and adaptability, delivering more business benefit at lower cost. With Cisco SONA you can raise overall IT efficiency and utilization, thereby increasing IT effectiveness — what we call the network multiplier effect.

The network multiplier effect
The network multiplier effect refers to how Cisco SONA helps enterprise IT increase its contribution to the whole business. Optimal efficiency and use of IT resources produce more impact at lower cost, making your network a profitable, value-adding resource.
The network multiplier effect is computed as follows:

Efficiency = cost of IT assets ÷ (cost of IT assets + operating cost)
Utilization = percentage of total assets in use (e.g., the percentage of available storage in use)
Effectiveness = efficiency × utilization
Network multiplier effect = asset effectiveness with Cisco SONA ÷ asset effectiveness without Cisco SONA
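Plugging hypothetical numbers into these formulas (all figures below are made up purely for illustration):

```python
def effectiveness(asset_cost: float, operating_cost: float, utilization: float) -> float:
    """Effectiveness = efficiency x utilization, per the formulas above."""
    efficiency = asset_cost / (asset_cost + operating_cost)
    return efficiency * utilization

# Hypothetical scenario: SONA lowers operating cost and raises utilization.
before = effectiveness(100.0, 100.0, 0.40)  # 0.5   * 0.40 = 0.20
after = effectiveness(100.0, 60.0, 0.60)    # 0.625 * 0.60 = 0.375
multiplier = after / before                 # 1.875
print(multiplier)
```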
Return on investment
The benefits of Cisco intelligent systems within Cisco SONA go beyond higher efficiency and lower cost. Through Cisco SONA, the power of the network can deliver:

- Increased revenue and opportunity
- Improved customer relationships
- Better business continuity and agility
- Higher productivity and efficiency at lower cost

Evolving now
By evolving toward a more intelligent integrated network through Cisco SONA, enterprises can proceed in phases: convergence, standardization, virtualization, and automation. Working with a Cisco channel partner or account team, you can use the Cisco SONA framework to chart a roadmap for your enterprise's evolution. With Cisco lifecycle management services, leadership in standardization, a mature enterprise architecture, and deep experience creating targeted industry solutions, Cisco account teams can help you meet business requirements promptly.

Evolving to the Intelligent Information Network
The role of the network keeps evolving. Tomorrow's intelligent network will provide more than basic connectivity, user bandwidth, and application access; it will deliver end-to-end functionality and centralized, unified control, enabling true enterprise transparency and agility. Cisco SONA lets enterprises extend their existing infrastructure toward an intelligent network that accelerates applications and improves business processes. Cisco offers design, support, and financing services to maximize your return on investment.

Cisco lifecycle services and support
Cisco and its partners offer a lifecycle-services approach that aligns a customer's business goals with the technology implementation. Based on the complexity of the technology and the network, this approach defines the actions needed to deploy and operate Cisco technologies successfully and to optimize their performance throughout the network's entire lifecycle.
Cisco and its partners jointly build an appropriate deployment model for the customer's environment and capabilities, including skills identification, capability assessment, and methodology gap analysis, to create a customized solution that helps the customer migrate to Cisco SONA.
Cisco Capital financing
Cisco Capital® plays a key role in delivering this complete solution to enterprises, meeting enterprise requirements promptly. Cisco Capital not only assists with transactions but also ensures that financial factors do not affect when an enterprise implements the technology.

CISCO PPDIOO

The six phases of the Cisco lifecycle services are:
1. Prepare

2. Plan

3. Design

4. Implement

5. Operate

6. Optimize
The process is known as PPDIOO for short.
Case study

1. Prepare phase: the prepare phase establishes the business goals: improve the customer experience, reduce cost, add more services, and support company expansion. These goals form the basis of the business case, which justifies the investment required to implement the technology improvements.
The business case covers:
1) Project purpose: how the project meets company business goals; key benefits and risks; success metrics
2) Cost/benefit analysis: alternative ways to achieve the business goals; non-financial benefits
3) Resource options: resources needed for the service (external vendors, network installers, etc.); procurement process
4) Budget affordability and funding sources for the whole project (internal and external); one-time or phased investment
5) Project management: project plan and owner; schedule; key risks and risk-mitigation plans; contingency plan if the project is not completed; skill and staffing requirements

2. Plan phase: in the plan phase, the network designer performs a comprehensive site and operations assessment, evaluating the current network, operations, and network-management infrastructure. NetworkingCompany staff identify all required physical, environmental, and electrical modifications and assess whether current operations and the network-management infrastructure can support the new technology solution. All changes to infrastructure, staffing, processes, and tools must be completed before the new solution is implemented. Custom applications that give the new network more capability are also identified in this phase. Finally, NetworkingCompany staff produce a document containing all design requirements.

3. Design phase: in the design phase, the design firm's staff work from the preliminary requirements set in the plan phase. The design requirements document supports the technical requirements identified in the prepare and plan phases in these areas: availability, scalability, security, and manageability. The design must be flexible enough to accommodate changes or additions as new goals or requirements emerge, and the technology must be integrated into the current operations and network-management infrastructure.

4. Implement phase: once the network design is complete and approved by the customer, the implement phase begins. In this phase the network is built to the approved design specification and the design is validated.

5. Operate phase: the operate and optimize phases are ongoing processes that reflect the day-to-day running of the network. Staff monitor the network and establish a baseline. Network monitoring helps the company maximize scalability, availability, security, and manageability.

6. Optimize phase: network optimization is a continuous process. Its goal is to identify and resolve potential issues before network problems occur, improving performance and reliability and ensuring that the company's business goals and requirements continue to be met. Typical problems uncovered in the optimize phase include incompatible features, links with insufficient capacity, device performance problems when multiple features are enabled, and protocol scalability issues.

HSRP

Part 2: HSRP, the Hot Standby Router Protocol. Multiple routers form a "hot standby group" that emulates a single virtual router with a virtual IP address and a virtual MAC address.
In a standby group only one router, the active router, forwards packets. Only when the active router fails does the previously elected standby router become the active router and take over forwarding; to hosts on the network, the virtual router appears unchanged.
HSRP uses three kinds of packets:
1. Hello: announces the router's HSRP priority and state to the other routers. By default an HSRP router sends a hello every 3 seconds; this interval is configurable.
2. Coup: sent when a standby router becomes the active router.
3. Resign: sent by the active router when it is shutting down, or when it receives a hello from a router with a higher priority.

HSRP states

1. Initial: the state when HSRP starts; HSRP is not yet running. This state is typically entered after a configuration change or when an interface first comes up.
2. Learn: the router has not yet determined the virtual IP address; it is neither the active nor the standby router and keeps listening for hello messages from the active and standby routers.
3. Listen: the router knows the virtual IP address and is listening for hello messages.
4. Speak: the router sends periodic hello messages and actively participates in the election of the active or standby router.
5. Standby: the router is ready to take over packet forwarding if the active router fails.
6. Active: the router is performing packet forwarding.

HSRP router roles
1. Active router: forwards the traffic sent to the virtual router. It announces its active state with hello messages sent over UDP port 1985.
2. Standby router: monitors the state of the HSRP group and quickly takes over forwarding when the current active router becomes unavailable. The standby router also sends hello messages to announce its standby role to the other routers in the group.
3. Virtual router: represents a continuously available router to end users. It has its own MAC and IP address, but it does not actually forward packets; its role is merely to represent an available routing device. The virtual MAC address is composed of: a vendor ID (the first 3 bytes of the MAC address), the HSRP code 07.ac indicating that the address belongs to an HSRP virtual router, and the group ID as the last byte.

On the active router you can verify this with the command:
show ip arp
00000c07ac01

4. Other routers: listen to hello messages but do not reply; they hold no role in the standby group and do not forward packets addressed to the virtual router, though they still forward packets sent by the other routers.
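The virtual MAC construction described above can be sketched as follows (the v2 format 0000.0c9f.fxxx matches the group 4095 output shown later in these notes):

```python
def hsrp_v1_virtual_mac(group: int) -> str:
    """HSRPv1 virtual MAC: Cisco OUI 0000.0c + HSRP code 07.ac + 1-byte group ID."""
    if not 0 <= group <= 255:
        raise ValueError("HSRPv1 group must be 0-255")
    return f"0000.0c07.ac{group:02x}"

def hsrp_v2_virtual_mac(group: int) -> str:
    """HSRPv2 virtual MAC: 0000.0c9f.f000 + 12-bit group ID."""
    if not 0 <= group <= 4095:
        raise ValueError("HSRPv2 group must be 0-4095")
    return f"0000.0c9f.f{group:03x}"

print(hsrp_v1_virtual_mac(1))     # 0000.0c07.ac01
print(hsrp_v2_virtual_mac(4095))  # 0000.0c9f.ffff
```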

Election process

1. By default every priority is 100; in that case the router with the highest IP address becomes the active router.
2. When the active router fails, the standby router replaces it as the active router. When both the active and standby routers fail, the remaining routers take part in electing the active and standby routers:
a) The router with the highest priority becomes active (default 100, range 0–255); when priorities are equal, the router with the higher interface IP address becomes active.
HSRP supports preemption. With preempt enabled, a higher-priority router that recovers from a failure always returns to the active state.
Without preempt, when the original active router recovers it can only sit in standby; the former standby router keeps the active role in its place,
until the next election occurs.
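The election rule (highest priority wins, then highest interface IP breaks ties) can be sketched as:

```python
import ipaddress

def elect_active(routers):
    """routers: list of (priority, interface_ip) tuples; returns the election winner.
    Highest priority wins; on a tie, the numerically highest interface IP wins."""
    return max(routers, key=lambda r: (r[0], int(ipaddress.IPv4Address(r[1]))))

group = [(100, "192.168.1.1"), (110, "192.168.1.2"), (100, "192.168.1.3")]
print(elect_active(group))  # (110, '192.168.1.2')
```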

HSRP interface tracking (track)
If a monitored interface fails, a router switchover can also be triggered. If multiple lines on the primary router are tracked, it can be configured to switch over to the standby router when certain lines fail, even while other lines still work, and only switch back once the active router's tracked lines are working again.
Interface tracking automatically adjusts a router's HSRP priority according to whether its interfaces are available: when a tracked interface goes down, the router's HSRP priority is lowered.
The HSRP tracking feature ensures that when an important inbound interface of the active router becomes unavailable, that router stops being the active router.
Multi-group HSRP: to facilitate load balancing, the same router can be a member of multiple HSRP standby groups on the same segment or VLAN. Configuring multiple standby groups further improves redundancy, balances load across the network, and raises utilization of the redundant routers.
A router can forward traffic for one HSRP group while being in the standby or listen state in another. Each standby group emulates one virtual router, and up to 255 standby groups can exist per VLAN or interface. For example: routers A and B are both members of HSRP groups 1 and 2, where A is the active router for group 1 and standby for group 2, while B is active for group 2 and standby for group 1.
Configuration defaults
——————————–
HSRP group: none defined
Standby group number: 0
Standby group MAC address: 0000.0c07.acXX (XX is the HSRP group number)
Priority: 100
Delay: 0 (no delay)
Tracked-interface priority decrement: 10
Hello time: 3 seconds
Hold time: 10 seconds
Configuration requirements
1. HSRP can be configured on at most 32 VLANs or routed interfaces.
2. The configured interface must be a Layer 3 interface: a routed interface
(using no switchport, with an IP address configured),
an SVI (switch virtual interface, using the command interface vlan <vlan-id>),
or a Layer 3 EtherChannel.
3. All Layer 3 interfaces must have IP addresses configured.
Configuring HSRP:
1. Enter global configuration mode: configure terminal
2. Enter interface mode:
interface <interface>
3. Enable HSRP, specifying the standby group number and virtual IP address: standby <group> ip <address>. The group number ranges from 0 to 255; with a single standby group it may be omitted (default 0). The IP address is the virtual address and must not be an address already configured on the interface.

4. Set the standby group priority:
standby <group> priority <priority>
The priority ranges from 1 to 255, default 100; higher values win.
5. Enable preemption, optionally with the delay the standby router
waits before replacing a failed active router:
standby <group> preempt [delay <seconds>]
The delay ranges from 0 to 3600 seconds (1 hour).
6. Configure interface tracking (optional):
standby <group> track <interface> <priority-decrement>
The priority decrement defaults to 10.
7. Manually set the virtual MAC address (optional):
standby <group> mac-address <mac>
8. Use the interface's burned-in MAC address (optional; only 2500-series routers use
this):
standby <group> use-bia
9. Configure authentication (optional):
standby <group> authentication text <password>
The password is sent in clear text; the default is cisco.
Note that on IOS 12.3 and later
HSRP supports MD5 authentication, configured with:
standby <group> authentication md5 key-string <password>
10. Configure HSRP timers (optional):
standby <group> timers <hello-seconds> <hold-seconds>
The hello time ranges from 1 to 255 seconds, default 3;
the hold time ranges from 1 to 255 seconds, default 10.
Hold time should be about 3 × hello time. Verification and debugging follow.

Verification
show standby
[interface [group]] [active|init|listen|standby] [brief]
e.g.
show standby Ethernet0 init brief
Debugging
debug standby: shows all HSRP errors, events, and packets
debug standby terse: shows all HSRP errors, events, and packets except hellos and advertisements
debug standby events detail: shows HSRP events
debug standby errors: shows HSRP errors
debug standby packets [advertise|coup|hello|resign] [detail]

Example
sw1(config)#int vlan 10
sw1(config-if)#ip address 192.168.1.1 255.255.255.0

sw1(config-if)#standby 1 ip 192.168.1.254

sw1(config-if)#standby 1 priority 110

————————————————-

sw2(config)#int vlan 10
sw2(config-if)#ip address 192.168.1.2 255.255.255.0

sw2(config-if)#standby 1 ip 192.168.1.254

sw2(config-if)#standby 1 priority 100

————————————————-

SW1#show standby

Vlan10 – Group 1 State is Active 2 state changes, last state change 00:02:38

Virtual IP address is 192.168.1.254

Active virtual MAC address is 0000.0c07.ac01 Local virtual MAC address is 0000.0c07.ac01 (v1 default)

Hello time 3 sec, hold time 10 sec Next hello sent in 0.124 secs
Preemption enabled

Active router is local Standby router is 192.168.1.2, priority 100 (expires in 7.992 sec)
Priority 110 (configured 110)
IP redundancy name is "hsrp-Vl10-1" (default)

—————————————————

SW2#show standby

Vlan10 – Group 1 State is Standby 1 state change, last state change 00:02:32
Virtual IP address is 192.168.1.254
Active virtual MAC address is 0000.0c07.ac01 Local virtual MAC address is 0000.0c07.ac01 (v1 default)

Hello time 3 sec, hold time 10 sec Next hello sent in 0.328 secs
Preemption enabled
Active router is 192.168.1.1, priority 110 (expires in 9.456 sec)
Standby router is local Priority 100 (default 100)

IP redundancy name is "hsrp-Vl10-1" (default)

SW1#show standby brief
P indicates configured to preempt.

Interface Grp Prio P State Active Standby Virtual IP
V10 1 110 P Active local 192.168.1.2 192.168.1.254
Multiple HSRP standby groups

Router A Configuration
Switch# configure terminal
Switch(config)# interface gigabitethernet0/1

Switch(config-if)# no switchport
Switch(config-if)# ip address 10.0.0.1 255.255.255.0

Switch(config-if)# standby 1 ip 10.0.0.3

Switch(config-if)# standby 1 priority 110

Switch(config-if)# standby 1 preempt

Switch(config-if)# standby 2 ip 10.0.0.4

Switch(config-if)# standby 2 preempt

Switch(config-if)# end

Router B Configuration

Switch# configure terminal

Switch(config)# interface gigabitethernet0/1

Switch(config-if)# no switchport

Switch(config-if)# ip address 10.0.0.2 255.255.255.0

Switch(config-if)# standby 1 ip 10.0.0.3

Switch(config-if)# standby 1 preempt

Switch(config-if)# standby 2 ip 10.0.0.4

Switch(config-if)# standby 2 priority 110

Switch(config-if)# standby 2 preempt

Switch(config-if)# end

案例 CCIE-LAB(V160) 題目要求

Configure HSRP on R3 and R4 for hosts on VLAN ACarefully read the requirements,There are 10 hosts on VLAN A。

5 Windows PCs and 5 Unix workstations The Windows PCs are configured to use 160.YY.10.50 as their default gateway
The UNIX workstations are configured to use 160.YY.10.100 as their default gateway
R3 is the preferred router for Windows users.
R4 is the prefered router for Unix users.
If the frame relay connection on R3 goes down then R4 becomes the preferred router for all users。
If the frame relay connection on R4 goes downthen R3 becomes the preferred router for all users。
Once either R3 or R4 recover from a failure then the network must operate as outined above
I.e Each router must resume their original role。
配置 R3 config termi
interface f0/0
standby 1 ip 160.11.10.50
standby 1 priority 110
standby 1 track s0/0 20
standby 1 preempt
standby 2 ip 160.11.10.100
standby 2 preempt

R4 configuration:
configure terminal

interface f0/0

standby 1 ip 160.11.10.50

standby 1 preempt

standby 2 ip 160.11.10.100

standby 2 priority 110
standby 2 track s0/0 20
standby 2 preempt

 

HSRP on trunk links: this feature improves overall network resilience by providing load balancing and redundancy across subnets and VLANs. Two routers can serve as each other's active/standby routers on the trunk; all that is needed is to set their HSRP priorities. Configuration steps:
1. Define the encapsulation type and VLAN, e.g.
interface f0/0.10
encapsulation isl 10
interface f0/0.20
encapsulation isl 20
2. Assign IP addresses

如 interface f0/0.10
ip address 10.10.10.2 255.255.255.0
interface f0/0.20
ip address 10.10.20.2 255.255.255.0
3. Enable HSRP
e.g. interface f0/0.10
standby 1 ip 10.10.10.254
standby 1 priority 105
standby 1 preempt
interface f0/0.20
standby 2 ip 10.10.20.254
standby 2 priority 90

新HSRPv2 在CISCO IOS 12.3以上的版本中,HSRP支援V2的版本。HSRP V2和HSRP V1命令上沒有什麼不同只是HSRP V2所支持的組號進行了擴展HSRP V1只支援0-255個組而HSRP V2可以支援0-4095個組主要是應用在它可以支援匹配在子介面上的VLAN號,擴展的VLAN可以支援到4094個,HSRP V1和HSRP V2不能同時在一個介面上啟用,但是在同一台路由器上的不同介面上可以應用不同版本的HSRP 配置過程
1 進入全域模式 configure terminal
2進入介面模式 interface 介面
3 啟用HSRP版本2
standby version 2
4 配置HSRP standby 組號 ip ip位元址 組號取值範圍0-4095

案例 RouterA#configure terminal

RouterA(config)#interface f0/0

RouterA(config-if)#standby version 2

RouterA(config-if)#standby 4095 ip 10.1.1.1

RouterA(config-if)#standby 4095 timers msec 15 msec 50

RouterA(config-if)#standby 4095 priority 200

RouterA(config-if)#standby 4095 preempt

—————————————————

RouterB#configure terminal

RouterB(config)#interface f0/0

RouterB(config-if)#standby version 2

RouterB(config-if)#standby 4095 ip 10.1.1.1

RouterB(config-if)#standby 4095 timers msec 15 msec 50

RouterB(config-if)#standby 4095 preempt

RouterA#show standby
FastEthernet0/0 – Group 4095 (version 2)
State is Active 2 state changes, last state change 00:11:47
Virtual IP address is 10.1.1.1
Active virtual MAC address is 0000.0c9f.ffff

Local virtual MAC address is 0000.0c9f.ffff (v2 default)

Hello time 15 msec, hold time 50 msec Next hello sent in 0.007 secs Preemption enabled
Active router is local Standby router is 10.1.1.3, priority 100 (expires in 0.030 sec)
Priority 200 (configured 200)
IP redundancy name is "hsrp-Fa0/0-4095" (default)

CISCO FLEXLINK

FlexLinks Access Model

FlexLinks are an alternative to the looped access layer topology. FlexLinks provide an active-standby
pair of uplinks defined on a common access layer switch. After an interface is configured to be part of an active-standby FlexLink pair, spanning tree is turned off on both links and the secondary link is placed in a standby state, which prevents it from being available for packet forwarding. FlexLinks operate in single pairs only, an interface participates in only one pair at a time, and a pair can consist of mixed interface types with mixed bandwidth. FlexLinks are configured with local significance only, because the opposite end of a FlexLink is not aware of its configuration or operation. FlexLinks also have no support for preemption, that is, no ability to return to the primary state automatically after a failure condition is cleared.

The main advantage of using FlexLinks is that there is no loop in the design and spanning tree is not enabled. Although this can have advantages in reducing complexity and reliance on STP, there is the drawback of possible loop conditions that can exist, which is covered in more detail later in this chapter. Other disadvantages are a slightly longer convergence time than R-PVST+, and the inability to balance traffic across both uplinks. Failover times measured using FlexLinks were usually under two seconds.



Note:
When FlexLinks are enabled on the access layer switch, it is locally significant only. The aggregation switch ports to which FlexLinks are connected do not have any knowledge of this state, and the link state appears as up and active on both the active and standby links. CDP and UDLD packets still traverse and operate as normal. Spanning tree is disabled (no BPDUs flow) on the access layer ports configured for FlexLink operation, but spanning tree logical and virtual ports are still allocated on the aggregation switch line card. VLANs are in the forwarding state as type P2P on the aggregation switch ports.


Figure 6 shows the FlexLinks access topology.

Figure 6 FlexLinks Access Topology



The configuration steps for FlexLinks are as follows. FlexLinks are configured only on the primary interface.

ACCESS1#conf t

ACCESS1(config-if)#interface tenGigabitEthernet 1/1

ACCESS1(config-if)#switchport backup interface tenGigabitEthernet 1/2

ACCESS1(config-if)#

    May 2 09:04:14: %SPANTREE-SP-6-PORTDEL_ALL_VLANS: TenGigabitEthernet1/2 deleted from all Vlans

    May 2 09:04:14: %SPANTREE-SP-6-PORTDEL_ALL_VLANS: TenGigabitEthernet1/1 deleted from all Vlans

ACCESS1(config-if)#end


To view the current status of interfaces configured as FlexLinks:

ACCESS1#show interfaces switchport backup


Switch Backup Interface Pairs:


Active Interface Backup Interface State

————————————————————————

TenGigabitEthernet1/1 TenGigabitEthernet1/2 Active Up/Backup Standby


Note that both the active and backup interfaces are in the up/up state when doing a "show interface" command:

ACCESS1#sh interfaces tenGigabitEthernet 1/1

TenGigabitEthernet1/1 is up, line protocol is up (connected)

 Hardware is C6k 10000Mb 802.3, address is 000e.83ea.b0e8 (bia 000e.83ea.b0e8)

Description: to_AGG1

 MTU 1500 bytes, BW 10000000 Kbit, DLY 10 usec,

 reliability 255/255, txload 1/255, rxload 1/255

Encapsulation ARPA, loopback not set

 Keepalive set (10 sec)

 Full-duplex, 10Gb/s

input flow-control is off, output flow-control is off

ARP type: ARPA, ARP Timeout 04:00:00

Last input 00:00:09, output 00:00:09, output hang never

Last clearing of “show interface" counters 00:00:30

Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 0

 Queueing strategy: fifo

Output queue: 0/40 (size/max)

5 minute input rate 32000 bits/sec, 56 packets/sec

5 minute output rate 0 bits/sec, 0 packets/sec

1150 packets input, 83152 bytes, 0 no buffer

 Received 1137 broadcasts (1133 multicasts)

0 runts, 0 giants, 0 throttles

0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored

0 watchdog, 0 multicast, 0 pause input

0 input packets with dribble condition detected

 26 packets output, 2405 bytes, 0 underruns

0 output errors, 0 collisions, 0 interface resets

 0 babbles, 0 late collision, 0 deferred

0 lost carrier, 0 no carrier, 0 PAUSE output

0 output buffer failures, 0 output buffers swapped out


ACCESS1#sh interfaces tenGigabitEthernet 1/2

TenGigabitEthernet1/2 is up, line protocol is up (connected)

 Hardware is C6k 10000Mb 802.3, address is 000e.83ea.b0e9 (bia 000e.83ea.b0e9)

Description: to_AGG2

 MTU 1500 bytes, BW 10000000 Kbit, DLY 10 usec,

 reliability 255/255, txload 1/255, rxload 1/255

Encapsulation ARPA, loopback not set

 Keepalive set (10 sec)

 Full-duplex, 10Gb/s

input flow-control is off, output flow-control is off

ARP type: ARPA, ARP Timeout 04:00:00

Last input 00:00:51, output 00:00:03, output hang never

Last clearing of "show interface" counters 00:00:33

Input queue: 0/2000/0/0 (size/max/drops/flushes); Total output drops: 0

 Queueing strategy: fifo

Output queue: 0/40 (size/max)

5 minute input rate 32000 bits/sec, 55 packets/sec

5 minute output rate 0 bits/sec, 0 packets/sec

1719 packets input, 123791 bytes, 0 no buffer

 Received 1704 broadcasts (1696 multicasts)

0 runts, 0 giants, 0 throttles

0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored

0 watchdog, 0 multicast, 0 pause input

0 input packets with dribble condition detected

 7 packets output, 1171 bytes, 0 underruns

0 output errors, 0 collisions, 0 interface resets

 0 babbles, 0 late collision, 0 deferred

0 lost carrier, 0 no carrier, 0 PAUSE output

0 output buffer failures, 0 output buffers swapped out

ACCESS1#


Note that spanning tree is no longer sending BPDUs to detect loops on the interfaces in the FlexLink pair:

ACCESS1#sh spanning-tree interface tenGigabitEthernet 1/1

no spanning tree info available for TenGigabitEthernet1/1

ACCESS1#sh spanning-tree interface tenGigabitEthernet 1/2

no spanning tree info available for TenGigabitEthernet1/2


CDP and UDLD packets are still transmitted across FlexLinks, as shown below:

ACCESS1#show cdp neighbor

Capability Codes: R - Router, T - Trans Bridge, B - Source Route Bridge

 S - Switch, H - Host, I - IGMP, r - Repeater, P - Phone


Device ID Local Intrfce Holdtme Capability Platform Port ID

Aggregation-1.cisco.com

 Ten 1/1 156 R S WS-C6509 Ten 7/4

Aggregation-2.cisco.com

 Ten 1/2 178 R S WS-C6509 Ten 7/4


ACCESS1#show udld neighbor

Port Device Name Device ID Port ID Neighbor State

---- ----------- --------- ------- --------------

Te1/1 TBM06108988 1 Te7/4 Bidirectional

Te1/2 SCA0332000T 1 Te7/4 Bidirectional

ACCESS1#

FlexLink

The following discusses the operation and use of Cisco FlexLink.

Q: What is FlexLink?
A: FlexLink provides Layer 2 resiliency and typically runs between an access switch and a distribution switch. Its convergence time is better than that of STP/RSTP/IEEE 802.1w. FlexLink is implemented on the Cisco Catalyst 3000 and Cisco Catalyst 6000 series switches with convergence below 100 ms; in other words, from detecting the failure of the primary link to forwarding traffic over the backup link, total convergence takes less than 100 ms. FlexLinks are deployed in pairs, so two ports are required: one primary port and one backup port. The two ports can be access ports, EtherChannels, or trunks.


Q: Does FlexLink turn off spanning tree on the Cisco Catalyst 3560-E?

A: No. FlexLink turns off spanning tree only on the FlexLink pair; in other words, spanning tree is disabled only on the uplink ports configured for FlexLink (primary and backup). To avoid network loops, it is recommended not to turn off spanning tree on the remaining ports.

Q: Does the backup port block in the same way as it would under spanning tree?
A: Not necessarily. A recent FlexLink enhancement allows the backup port to carry certain VLANs. Similar to Multiple Spanning Tree, those VLANs are backed up by the primary port. This approach is called load balancing: it lets users run both uplinks rather than keeping one active and one standby. Some VLANs can use a given link as their primary link while other VLANs use that same link as their backup.

Q: Does FlexLink support a load-balancing mode?
A: Yes, FlexLink supports VLAN load balancing. In a dual-homed configuration, some VLANs use one link (link A) as primary and the other (link B) as backup, while the remaining VLANs use link B as primary and link A as backup.

Q: Can FlexLink be used to build a ring topology?
A: No. FlexLink is intended to replace spanning tree on uplinks, so it does not support ring topologies.

Q: Can FlexLink be configured on an EtherChannel, a trunk port, or an access port?
A: Yes, FlexLink can be configured on all port types, including EtherChannels, trunks, and access ports.

Q: Must the primary and backup ports be of the same type?
A: No. Within a pair, an EtherChannel can serve as the primary port and an access port as the backup, and the primary and backup ports can run at different speeds.

Q: After a failover, does the higher-bandwidth primary link preempt once it is restored?
A: Yes, preemption can be turned on; it is off by default.
Preemption can be configured based on port bandwidth. To avoid frequent
switchover between primary and backup, the switchover can also be delayed after the primary link recovers. To force immediate preemption, the delay timer can be bypassed.

Q: Does FlexLink provide a damping mechanism so users can avoid frequent link flapping?
A: Yes. To avoid frequent switchover between primary and backup, the switchover can be delayed after the primary link recovers.

Q: Is the FlexLink MAC Address-Table Move Update (MMU) feature required?
A: No. However, to speed up bidirectional convergence, using MMU is recommended. MMU notifies the distribution-layer switches of MAC table changes.
Note: The Cisco Catalyst 6000 product line does not currently support MMU, but it is on the roadmap.

Q: Must spanning tree be turned off to run FlexLink?
A: No. By default, spanning tree is turned off only on the FlexLink pair; the switch's other ports and VLANs continue to run the protocol.

System Log

Sending Cisco device logs to a syslog server

Cisco switch configuration example

(1) The following configuration shows how to send a Cisco device's logs to a syslog server.
Cisco router configuration:

cisco#conf t
cisco(config)#service timestamps log datetime localtime //Timestamp format for log entries; adjust as needed

cisco(config)#logging on //Enable logging
cisco(config)#logging a.b.c.d //IP address of the syslog server
cisco(config)#logging facility local1 //Facility identifier; RFC 3164 defines the local-use facilities local0 - local7
cisco(config)#logging trap errors //Logging severity level (errors = level 3); use "?" to list the options

Severity Level
Eight levels, 0 through 7, are available; 0 is the most severe and 7 the least:
emergencies System is unusable (severity=0)
alerts Immediate action needed (severity=1)
critical Critical conditions (severity=2)
errors Error conditions (severity=3)
warnings Warning conditions (severity=4)
notifications Normal but significant conditions (severity=5)
informational Informational messages (severity=6)
debugging Debugging messages (severity=7)
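The severity keywords above map to RFC 3164 numeric values, and a message reaches the syslog server only when its severity number is at or below the configured trap level. A minimal illustrative sketch (the function name is mine, not from any library):

```python
# Syslog severity levels as used by Cisco IOS "logging trap" (RFC 3164 numbering).
SEVERITIES = {
    "emergencies": 0,    # system is unusable
    "alerts": 1,         # immediate action needed
    "critical": 2,       # critical conditions
    "errors": 3,         # error conditions
    "warnings": 4,       # warning conditions
    "notifications": 5,  # normal but significant conditions
    "informational": 6,  # informational messages
    "debugging": 7,      # debugging messages
}

def logged_at(trap_level_name: str, msg_severity: int) -> bool:
    """A message is sent when its severity is numerically <= the
    configured trap level (lower number = more severe)."""
    return msg_severity <= SEVERITIES[trap_level_name]

# With "logging trap errors" (level 3), an error passes but a warning is suppressed:
print(logged_at("errors", 3), logged_at("errors", 4))  # True False
```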

cisco(config)#logging source-interface e0 //Source IP address used for log messages

cisco#sh logging
Selecting the lowest level, debugging, records everything that can be logged, including system messages, error messages, and debug output. Even so, syslog alone cannot satisfy every monitoring requirement.

 

Note: Use NTP to keep device clocks accurate.

SNMP V1,V2C,V3

This section focuses mainly on the SNMPv3 configuration covered later.

For SNMP testing, the iReasoning MIB browser can be downloaded from http://www.ireasoning.com/.

Technical summary:

SNMP is short for Simple Network Management Protocol. Using SNMP, a network administrator can query information from network nodes, configure the network, locate faults, and plan capacity; network monitoring and management are SNMP's basic functions.

SNMP is an application-layer protocol in a client/server model and consists of three parts:

1. SNMP manager (NMS) //Typically network management software running on a host, such as HP OpenView or SolarWinds; most large-scale network management today uses this protocol. The manager listens on UDP port 162 for trap messages, which the agent sends on its own initiative.

2. SNMP agent //The SNMP process running on routers, switches, and other network devices; it handles requests and responses on UDP port 161.

3. MIB (Management Information Base) //A predefined tree-structured database in which each node represents one piece of information.

MIB concepts:

Below is a MIB example. It is abstract, but anyone who has studied data structures will find it familiar.

In the example, each leaf node represents an attribute; together they can describe every property of a network product, such as device model, power-supply status, interface speed, and traffic type.

1.3.6.1.2.1.5
is the ICMP node; retrieving this node and its children in the management software yields all ICMP-related information and operations.
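As an illustration of how a manager selects a subtree such as the ICMP node, OID containment is just an arc-prefix test. This is a hypothetical helper, not part of any SNMP library:

```python
# OID containment: an OID belongs to a MIB subtree when the subtree's
# arcs form a prefix of the OID's arcs. (Illustrative sketch, not pysnmp.)
def parse_oid(oid: str) -> tuple:
    return tuple(int(arc) for arc in oid.split("."))

def in_subtree(oid: str, subtree: str) -> bool:
    o, s = parse_oid(oid), parse_oid(subtree)
    return o[:len(s)] == s

ICMP = "1.3.6.1.2.1.5"                          # the mib-2 icmp node
print(in_subtree("1.3.6.1.2.1.5.8.0", ICMP))    # icmpInEchos.0 -> True
print(in_subtree("1.3.6.1.2.1.4.3.0", ICMP))    # an ip-node OID -> False
```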


Structure of Management Information (SMI):

SMI specifies the rules used to define managed objects in the SNMP MIB.

SMI is a language defined to make the syntax and semantics of network management data explicit and unambiguous. It covers types such as integers, floating-point numbers, binary strings, IP addresses, and structured data.

It is the language used to define the specific data in a managed network entity.

It defines the data types, the object model, and the rules for writing and modifying management information, including read/write permissions.


Current SNMP versions:

1. SNMPv1: the first official version of the Simple Network Management Protocol, defined in RFC 1157.

2. SNMPv2C: the community-based SNMPv2 management framework, defined in RFC 1901 as an experimental
protocol.

3. SNMPv3: provides the following security features by authenticating and encrypting data:

1) ensures data is not tampered with in transit;

2) ensures data originates from a legitimate source;

3) encrypts messages to keep data confidential.

SNMPv2C improves on SNMPv2 by adding the Get-bulk operation and by returning more detailed error information to the management station. Get-bulk can retrieve all the information in a table, or large amounts of data, in a single operation (note the difference from a walk, which retrieves the information step by step), reducing the number of request/response exchanges and avoiding repeated Get-next operations.


SNMP management operations:

The SNMP protocol defines six operation types for the exchanges between the NMS and the agent:

1) Get-request: the NMS retrieves one or more parameter values from the agent.

2) Get-next-request: the NMS retrieves the parameter value following one or more given parameters from the agent.

3) Get-bulk: the NMS retrieves parameter values from the agent in bulk, such as an entire table; not supported in SNMPv1.

4) Set-request: the NMS sets one or more parameter values on the agent; the node must be writable.

5) Get-response: the agent returns one or more parameter values; this is the agent's response to the first three NMS operations.

6) Trap: a message the agent sends on its own initiative to notify the NMS that an event has occurred.

Users are associated with a group, and the group is in turn associated with a view.


snmp-server view
view-name oid-tree {include | exclude} //Define the MIB subtrees that can be accessed or are excluded

snmp-server group
groupname {v1 | v2c | v3 {auth | noauth | priv}} [read readview] [write
writeview] [access {[ipv6 ipv6_aclname] [aclnum | aclname]
}] //Create a group and associate it with views

Ruijie(config)# snmp-server user
username groupname {v1 | v2 | v3 [encrypted] [auth {md5 | sha} auth-password] [priv
des56
priv-password]} [access {[ipv6
ipv6_aclname] [aclnum | aclname]
}] //Associate a user with a group


Configuration examples:

1. SNMPv1/v2c mode requires only one command each:

snmp-server community gaodong rw //gaodong acts as the shared secret and is sent in cleartext

snmp-server enable traps //Enable trap notifications

snmp-server host 192.168.225.147 traps gaodong //Host address that traps are sent to

2. SNMPv3 configuration

Authentication without encryption:

snmp-server user gduser gdgroup v3 encrypted auth md5 gdong //Username, group, authentication type, and authentication password

snmp-server group gdgroup v3 auth read default write default //Grant the group its permissions; auth means authentication without encryption. Here default is the default view; a custom view can be defined, but not all devices support that

snmp-server host 192.168.45.96 traps version 3 auth gduser //Trap destination address, with the username specified

snmp-server enable traps

snmp-server community gd rw

(The authentication fields above all use the MD5 authentication mode.)

Authentication with encryption:

snmp-server user gduser gdgroup v3 encrypted auth md5 gdong priv des56 gaod //Specify the authentication and encryption types and passwords, and associate the user with the group

snmp-server group gdgroup v3 priv read default write default //Define the group's permissions; priv means authentication plus encryption

snmp-server host 192.168.45.96 traps version 3 priv gduser

snmp-server enable traps

snmp-server community gd rw

No authentication, no encryption:

snmp-server user gduser gdgroup v3 //Associate the user with the group and specify the version

snmp-server group gdgroup v3 noauth read default write default //Define the group's permissions; noauth means neither authentication nor encryption

snmp-server host 192.168.45.96 traps version 3 noauth gduser

snmp-server enable traps

snmp-server community gd rw

3. Configuration example including a view:

The following configuration allows an SNMPv3 manager, using authentication-plus-encryption mode with the username v3user, to set and view the managed variables under the MIB-2 (1.3.6.1.2.1) node. The authentication mode is MD5 with the password MD5-Auth; encryption is DES with the key DES-Priv. The configuration also allows SNMPv3-format traps to be sent to 192.168.65.199 under the same username v3user, in authentication-plus-encryption mode, again using MD5 authentication with the password MD5-Auth and DES encryption with the key DES-Priv.

(config)# snmp-server view v3userview 1.3.6.1.2.1 include

(config)# snmp-server group v3usergroup v3 priv read v3userview write v3userview

(config)# snmp-server user v3user v3usergroup v3 auth md5 md5-auth priv des56 des-priv

(config)# snmp-server host 192.168.65.199 traps version 3 priv v3user

Switch 1.0 Ref 2: 802.1X and VACL

Configuring AAA, 802.1X, and VACLs on Cisco Switches

Part 1: Enable AAA, disable Telnet, and enable SSH

1. Enable AAA authentication for SSH access:
Switch# conf t
Switch(config)# aaa new-model

2. Configure the hostname:
Switch(config)# hostname sw1

3. Configure a local username and password so the switch remains accessible when the out-of-band server is unavailable:
sw1(config)# username cisco password cisco

4. Configure SSH:
sw1(config)# ip domain-name cisco.com
sw1(config)# crypto key generate rsa
! To support SSH v2, Cisco devices require a key length greater than 768 bits
5. Configure the switch so that in-band access is possible only over SSH:
sw1(config)# line vty 0 15
sw1(config-line)# transport input ssh
sw1(config-line)# exit
sw1(config)# exit

Part 2: Configure AAA authentication for the vty lines, using the RADIUS server first and falling back to the line password if the server is unavailable
sw1(config)# aaa authentication login TEST group radius line
sw1(config)# line vty 0 15
sw1(config-line)# login authentication TEST
sw1(config-line)# exit

  • Part 3: Configure 802.1X on interfaces

    1. Enable 802.1X for RADIUS authentication:
       sw1(config)# aaa authentication dot1x default group radius

    2. Enable 802.1X globally:
       sw1(config)# dot1x system-auth-control

    3. Configure 802.1X on the interfaces:
       sw1(config)# int range fa0/2 - 10
       sw1(config-if-range)# switchport access vlan 10
       sw1(config-if-range)# dot1x port-control auto

    Part 4: Configure a VACL to drop all frames arriving on TCP port 8889

    1. Configure an ACL to match packets arriving on TCP port 8889:
       sw1(config)# access-list 100 permit tcp any any eq 8889

    2. Configure the VLAN access map:
       sw1(config)# vlan access-map DROP_WORM 100
       sw1(config-access-map)# match ip address 100
       sw1(config-access-map)# action drop
       sw1(config-access-map)# exit

    3. Apply the VLAN access map to the appropriate VLANs:
        (config)# vlan filter DROP_WORM vlan-list 10-20

    802.1X notes

      Notes taken while testing on a network.

    1. The 802.1X protocol originated from 802.11, the standard wireless LAN protocol; 802.1X's main purpose was to solve the access authentication problem for wireless LAN users.
    It is now also being applied to ordinary wired LAN access, controlling ports so that access control can be enforced per user.
    802.1X is the standard IEEE defined to address port-based access control.
    1. 802.1X is first of all an authentication protocol: a method and policy for authenticating users.
    2. 802.1X is a port-based authentication policy (the port can be a real physical port or a logical port, much like a VLAN;
    for a wireless LAN, a "port" is a channel).
    3. The ultimate goal of 802.1X authentication is to determine whether a port is usable. If authentication succeeds, the port is "opened"
    and all traffic is allowed through; if it fails, the port stays "closed", and only 802.1X authentication traffic, EAPOL
    (Extensible Authentication Protocol over LAN), is allowed through.

    2. The 802.1X authentication architecture has three parts:
    Supplicant system: the client (PC or network device)
    Authenticator system: currently a switch, autonomous AP, or WLAN controller
    Authentication server system: a RADIUS server with EAP support

    3. Authentication process

    1. Before authentication succeeds, the port state is unauthorized, and only 802.1X EAPOL authentication traffic can pass.
    2. When authentication succeeds, the port state switches to authorized, and the remote authentication server can pass down user attributes
    such as VLAN, CAR parameters, priority, and the user's access control list.
    3. After authentication, the user's traffic is governed by those parameters, and the port can carry any traffic; note that DHCP and similar exchanges happen only after authentication succeeds.
    4. The supplicant system (client) is the device (such as a PC) that needs to join the LAN/WLAN and use the services the switch/AP provides.
    The client must support EAPOL and run 802.1X client software, such as an 802.1X-compliant release of Windows XP.

    4. Configuration
    1. First configure communication between the switch and the RADIUS server.
    2. Enable 802.1X authentication globally:
    Switch# configure terminal
    Switch(config)# aaa new-model
    Switch(config)# aaa authentication dot1x {default} method1 [method2...]
    3. Specify the RADIUS server and key:
    switch(config)# radius-server host ip_add key string
    4. Enable 802.1X on the port:
    Switch# configure terminal
    Switch(config)# interface fastethernet0/1
    Switch(config-if)# switchport mode access
    Switch(config-if)# dot1x port-control auto
    Switch(config-if)# end

    5. 802.1X security mechanisms
    What are the principles, components, operation, and functions of IEEE 802.1X, a relatively new link-layer authentication protocol?
    The discussion below looks at how the protocol works for network security and where it stands out.
    Although most organizations have hardened the perimeter of their IT infrastructure, the interior remains vulnerable. This vulnerability shows up as internal users gaining unauthorized access to,
    or maliciously damaging, critical IT resources. As part of a comprehensive "defense in depth" security architecture, access to critical internal infrastructure should be granted only to authorized users.
    The local-area network (LAN) has historically been wide open. IEEE 802.1X, the latest standard, defines a mechanism for enforcing access control on the LAN, letting authorized users in and keeping unauthorized users out. With
    802.1X, the LAN is no longer an open door; it becomes an important part of the security architecture and of policy enforcement, and it helps eliminate security threats from inside the organization.

    802.1X authentication and its advantages

    IEEE 802.1X is a link-layer authentication protocol that controls access to a network access port, that is, a network attachment point, such as a physical switch port or a logical port implemented on a wireless access point.
    By controlling network access, it gives users the first line of defense in a multilayer security architecture. Until the connecting device has been authenticated, network access is blocked entirely.
    After authentication, the user can be offered services beyond those a Layer 2 switch normally provides, including Layer 3 filtering, rate limiting, and Layer 4 filtering,
    rather than a simple on/off service.

    One advantage of a link-layer authentication scheme is that it requires only link-layer connectivity; the client (called the supplicant in 802.1X) needs no Layer 3 address for authentication,
    which reduces risk. In addition, link-layer authentication covers every protocol that can run over the link, so network-layer authentication does not have to be provided per protocol.
    802.1X also lets the enforcement point sit as close as possible to the network edge, so fine-grained access rules can be tailored to the specific needs of the connecting device.

     

  • IEEE 802.1X components

    IEEE 802.1X defines the following logical elements:
    Network access port:
    the attachment point on the network; it may be a physical port or a logical port.

    Supplicant:
    the device that connects to the network and requests its services, usually an end station. The supplicant is the device requesting authentication, although it can also be a network device.

    Authenticator:
    the network element that facilitates the authentication service. The authenticator does not actually provide authentication; it acts as a proxy between the supplicant and the authentication server and serves as the policy enforcement point.
    Port access entity (PAE): the hardware taking part in the authentication process. Both the supplicant and the authenticator have a PAE. The supplicant's PAE responds to requests for authentication information;
    the authenticator's PAE communicates with the supplicant, relays the credentials to be verified by the authentication server, and controls the authorization state of the port.

    Authentication server:
    the entity that provides the authentication service the authenticator requests. It grants authorization based on credentials and can be colocated with the authenticator or placed remotely.

    Extensible Authentication Protocol (EAP)
    Because it uses the Extensible Authentication Protocol (EAP), 802.1X is quite flexible about the actual authentication mechanism run between the supplicant and the authenticator.
    EAP was originally defined for use over point-to-point (PPP) links and is specified in IETF RFC 2284.
    Building on that, the IEEE defined an encapsulation that lets EAP be carried over a LAN, and EAP over LAN (EAPoL) was born.
    All kinds of authentication services can run over this protocol, including username/password, Kerberos, digital certificates, one-time passwords, and biometrics.
    EAP packets travel over the link layer between the supplicant and the authenticator, and over the IP/RADIUS connection between the authenticator and the authentication server.

    EAP itself specifies no protection mechanisms, such as encryption, for the traffic it carries. Instead, the authentication protocol running inside EAP (defined in RFC 2284 as the authentication type)
    provides data confidentiality and integrity for secure operation. One example of an authentication type is EAP Transport Layer Security (EAP-TLS), which uses the Transport Layer Security (TLS) protocol defined
    in RFC 2246.

    Controlled and uncontrolled ports

    The 802.1X standard defines two logical abstractions: the controlled port and the uncontrolled port, both attached to the network access port. The controlled port provides access to the network services the supplicant uses and opens or closes
    depending on whether authentication succeeds. The uncontrolled port is always in the connected state, providing an always-available channel over which authentication can proceed. The controlled port enters the connected state only after authentication succeeds,
    and only then does it provide the supplicant with network services.

    How 802.1X works

    The operation of 802.1X is quite simple. Note that traffic between the supplicant and the authenticator is carried over EAPoL, while traffic between the authenticator and the authentication server is carried over RADIUS.
    Initially, the controlled port is in the disconnected state, so the supplicant cannot access the LAN itself. The supplicant sends the authenticator an EAPoL-Start message, or the authenticator sends the supplicant an EAP Request/Identity message. On receiving the identity request, the supplicant hands its credentials to the authenticator, which encapsulates them and forwards the message inside a RADIUS packet to the authentication server.

    The RADIUS server then issues a challenge to the supplicant, with the authenticator acting as proxy. After the supplicant answers the challenge, the authenticator once again proxies the data to the authentication server. Once the server has validated the credentials, it decides whether the supplicant may use the LAN's services. That decision takes effect simply by moving the controlled port from the disconnected state to the connected state.

    Enhanced authorization

    802.1X specifies only a basic on/off capability and little else. Most security professionals speak of AAA, that is, authentication, authorization, and accounting, while 802.1X provides only the most basic authentication: on/off. Some vendors therefore offer another layer of protection, because many organizations need finer granularity than on/off can deliver. Vendors add logic to the controlled port to raise its fidelity, greatly enhancing the authorization function. These so-called authorization services can provide high-fidelity Layer 2, Layer 3, and Layer 4 filtering and can be extended into a dynamically self-configuring packet-filtering firewall. Authorization services can include the following:

    Layer 2 protocol filtering, which removes Layer 2 protocols the network does not accept.
    Layer 3 filtering, which enforces a logical view of the network that grants access to specific elements.
    Layer 4 filtering, which keeps disallowed protocols out of the network.
    Rate limiting, which controls the rate at which potentially harmful traffic enters the network.

    With authorization services programmed per port, fine-grained security policies can be tailored to a particular user or class of users, granting them access only to the services they need
    and denying everything else.

    For example, even though network administrators can access SNMP, someone in the accounting department has no reason to do the same. If it is possible to determine which group a user belongs to, a specific authorization policy can be applied during authentication. In this example, SNMP access would be granted to members of the network management group and withheld from everyone else, preventing anyone other than the network administrators from accidentally or maliciously misconfiguring network devices.

Switch 1.0 Ref 1: PortFast, UplinkFast, BackboneFast, BPDU Guard, BPDU Filter


 

 

PortFast (802.1D fast-forwarding port):
Used mainly on access ports; the port bypasses the listening and learning states and goes directly to forwarding, saving 30 seconds.
Interface command: spanning-tree portfast (Cisco proprietary) {disable | trunk}
Because an access port normally never receives BPDUs, if one does arrive, STP must find a way to move the port into the blocking state; that is the job of "portfast bpduguard",
a global command.

 

spanning-tree portfast bpduguard: when a BPDU is received, STP shuts the port down (errdisable), protecting it more strictly.
spanning-tree portfast bpdufilter default: default means all interfaces; when a BPDU is received, STP turns the port back into an ordinary (non-PortFast) port.

Verify with show spanning-tree summary totals.
UplinkFast (802.1D fast uplink; the switch must have its own root port and a blocked port):
Used where links are redundant: when the active link fails, the backup link bypasses the listening and learning states and goes directly to forwarding, saving 30 seconds. If the original link recovers, the switch waits twice the forward delay plus 5 seconds before moving that port back to forwarding.
This gives the neighboring ports time to transition through listening and learning into forwarding.
Global command: spanning-tree uplinkfast (Cisco proprietary)


BackboneFast (802.1D fast backbone; the blocked port and the root port are not on the same switch):
Used on every switch in an STP instance. When a switch receives an inferior BPDU on a blocked port, it queries the root switch, discovers that the network has changed, and immediately moves the blocked port to the listening state
instead of waiting out the Max Age timer, saving 20 seconds (via Root Link Query).
Global command: spanning-tree backbonefast (Cisco proprietary)

Root guard:
(config-if)#spanning-tree guard root enables root guard: the port may forward BPDUs but will not accept a superior one, the port is barred from becoming a root port, and when a superior BPDU is received the port enters the blocking state.

Root guard prevents a newly added switch with a better (numerically smaller) priority from becoming the root bridge. Apply spanning-tree guard root on the ports facing the newly added switch to protect the existing root switch; such a port is blocked automatically.
Use show spanning-tree inconsistentports to see ports blocked by root guard.

BPDU guard:
Global commands:
spanning-tree portfast bpduguard: when a BPDU is received, STP shuts the port down, protecting it more strictly.
spanning-tree portfast bpdufilter default: default means all interfaces; when a BPDU is received, STP turns the port back into an ordinary (non-PortFast) port.

(config)#spanning-tree portfast {bpduguard | bpdufilter} default: a port with PortFast enabled automatically gets BPDU guard; once enabled, a port that receives a BPDU enters the errdisable state and is shut down, so this should not be enabled on uplink ports.
Interface commands: spanning-tree bpduguard [enable | disable]
spanning-tree bpdufilter [enable | disable] (the interface neither sends nor receives BPDUs, which can create loops)

For any feature that can be configured both globally and per interface, the interface configuration overrides the global configuration.

Types of Ethernet


As mentioned earlier, Ethernet provides various data rates with different physical layouts. A variety of Ethernet types have come and gone over the years, such as the following:

  • 10BASE5 (Thicknet)
  • 10BASE2 (Thinnet)
  • 10BASE-FL
  • 10BASE-T

In the mid-1990s, 100BASE-T (unshielded twisted-pair [UTP]) and 100BASE-FX (fiber) became ubiquitous in the enterprise network, and they still are. Since the start of the millennium, enterprise networks have actively deployed Gigabit Ethernet, 1000BASE-T. Today the push is for 10 Gbps in the core of the enterprise network.

Transmission Media

The more common transmission media are twisted pair and fiber optics. Coaxial cable is mentioned in this section for historical purposes. The categories defined for twisted pair support transmission over various distances and data rates. The most common UTP cable in the enterprise network is Category 5, which supports 100-Mbps and 1000-Mbps rates.

Ethernet over Twisted-Pair Cabling

Ethernet technology standards are the responsibility of the IEEE 802.3 working group. This group is responsible for evaluating and eventually approving Ethernet specifications as new Ethernet technologies, such as Gigabit and 10-Gigabit Ethernet, are developed. Although this group defines the standards for Ethernet, it looks to other established standards organizations to define the specifications for physical cabling and connectors. These organizations include the American National Standards Institute (ANSI), the Electronic Industries Alliance (EIA), and the Telecommunications Industry Association (TIA). The TIA/EIA specifications for twisted-pair cabling are published in the TIA/EIA-568-B specification document.

The more common forms of cabling are unshielded twisted-pair (UTP) and optical fiber. Twisted pair cable comes in a variety of forms. The most common categories in today’s networks are the following:

  • Category 3
  • Category 5
  • Category 5E
  • Category 6

The categories represent the certification of the radio frequency capability of the cabling.

Category 3 was initially designed as voice grade cable and is capable of handling transmissions using up to 16 MHz. Category 5 is capable of handling transmissions up to 100 MHz. Category 5E is an improved version of Category 5; while still limited to 100 MHz, Category 5E defines performance parameters sufficient to support 1000BASE-T operation.

Category 6 provides the best possible performance specification for UTP cabling. Category 6 specifies much stricter requirements for cabling than Category 5 and 5E. The frequency range of Category 6 extends to 250 MHz, in contrast to Category 5 and 5E’s 100 MHz. While new cabling installations typically install Category 5E or 6 cabling, Category 5 cabling can be utilized for 1000BASE-T applications. With few exceptions, if 100 Mbps Ethernet is operating without issues up to 100 meters on a Category 5 cable plant, 1000BASE-T will operate as well.

Although 10 Mbps and 100 Mbps Ethernet often use two pairs (pins 1, 2, 3, and 6) of twisted-pair cabling, Gigabit Ethernet over twisted pair uses all four pairs of wiring in the twisted-pair cable.

Even if the actual twisted pair is rated a specific category, it does not imply that a cabling infrastructure properly supports the category specification end-to-end. Installation and accessories (such as patch panels and wall plates) must meet the standard as well. Cable plants should be certified from end-to-end. When installing a cabling infrastructure, the installer should be able to use specialized equipment to verify the specifications of the cabling system from end-to-end.

Ethernet over Fiber Optics

Two major types of fiber used in Ethernet networks are multimode and single mode. Multimode fiber (MMF) is used for short haul applications (up to 2000 m). Examples include campus or building networks. MMF is usually driven by LED or low-power laser-based equipment. Single mode fiber (SMF) is used for longer haul applications (up to 10 km) and the equipment is laser based. SMF is generally used in metropolitan-area networks or carrier networks.

Table 1-1 compares Ethernet types over different transmission media.

Table 1-1. Comparisons of Ethernet over Various Transmission Media

Ethernet Type   Media Type                 Distance Limit (m)  Speed (Mbps)  Data Encoding
10BASE-T        UTP Category 3 or better   100                 10            Manchester
10BASE-FL       MMF                        2000                10            Manchester
100BASE-TX      UTP Category 5 or better   100                 100           4B/5B
100BASE-FX      MMF                        2000                100           4B/5B
100BASE-FX      SMF                        10000               100           4B/5B
1000BASE-SX     MMF                        2000                1000          8B/10B
1000BASE-LX     SMF                        5000[*]             1000          8B/10B
1000BASE-T      UTP Category 5 or better   100                 1000          PAM 5×5

 

[*] The Cisco implementation of 1000BASE-LX doubles this distance over the standard to 10,000 meters

Ethernet over Coax Cabling

The use of coax cable for LANs is virtually nonexistent; one might still run into it in an old, abandoned building. Ethernet's eventual support of twisted-pair cabling in a star topology virtually ended the use of coaxial cabling for Ethernet. Keep in mind that coax cable was not cheap, either. Two major types of coax were used: thinnet (also called cheapernet) and thicknet. Thinnet uses 50-ohm coaxial cable (RG-58 A/U) with a maximum length of 185 meters when used for Ethernet; it is thinner and more flexible than thicknet. Thicknet is also 50-ohm coax cable, but it is packaged and insulated differently than thinnet, requires a specialized tool, the vampire tap, to pierce into, and has a maximum length of 500 meters for Ethernet. The vampire tap pierced the outer shielding of the cable, creating an electrical connection between the device and the shared media. Traditionally, thicknet was used as a backbone technology because of its additional shielding. Both thinnet and thicknet are virtually extinct in production networks today.

Ethernet Cross-Over Cabling

Network devices can be categorized as either data circuit equipment (DCE) or data terminating equipment (DTE). DCE equipment connects to DTE equipment, similar to the male and female end of a garden hose. DCE equipment usually is a type of concentrator or repeater, like a hub. DTE equipment is usually equipment that generates traffic, like a workstation or host.

Sometimes, it is necessary to connect like equipment. Connecting like devices can be accomplished by altering the twisted-pair media, and taking transmit and receive wires and reversing them. This is commonly called a “cross-over" cable. Figure 1-2 shows an RJ-45 connector with its pinouts. Pins 4, 5, 7, and 8 are not used.

Figure 1-2. Crossover Pinouts


 

The pinouts are a bit different in a Gigabit scenario because all the pins are used. In addition to the 10-Mbps/100-Mbps pinouts mentioned previously, two additional swaps are necessary: pin 4 to pin 7, and pin 5 to pin 8.

A crossover cable can link DCE to DCE, and DTE to DTE. The exception to connecting like devices is that some devices are manufactured to be connected together. An example would be that some hubs and switches have an uplink or Media Dependent Interface (MDI) port. There is typically a selector that allows the user to toggle between MDI and MDI-X (X for crossover), with MDI-X intentionally reversing the pin out of transmit and receive similar to a crossover cable. A setting of MDI-X allows two DCE devices, such as two hubs or switches, to connect to each other using a typical straight through wired twisted-pair cable.
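The crossover wiring described above can be sketched as a pin lookup (the helper names are illustrative; 10/100 Mbps crosses pins 1-3 and 2-6, and Gigabit additionally crosses 4-7 and 5-8):

```python
# Crossover pin map: straight-through pin -> pin it lands on at the far end.
FAST_ETH_CROSS = {1: 3, 2: 6, 3: 1, 6: 2}               # 10/100 Mbps: two pairs
GIG_CROSS = {**FAST_ETH_CROSS, 4: 7, 5: 8, 7: 4, 8: 5}  # Gigabit: all four pairs

def crossed(pin: int, gigabit: bool = False) -> int:
    """Return the far-end pin for a conductor in a crossover cable.
    Pins not in the map (4, 5, 7, 8 at 10/100) pass straight through."""
    table = GIG_CROSS if gigabit else FAST_ETH_CROSS
    return table.get(pin, pin)

print(crossed(1))                # 3 (transmit lands on the peer's receive)
print(crossed(4))                # 4 (unused pair passes straight at 10/100)
print(crossed(4, gigabit=True))  # 7
```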

Ethernet Topology

Ethernet is defined at the data link layer of the OSI model and uses what is commonly referred to as a bus topology. A bus topology consists of devices strung together in series with each device connecting to a long cable or bus. Many devices can tap into the bus and begin communication with all other devices on that cable segment. This means that all the network devices are attached to a single wire and are all peers, sharing the same media.

Bus topology has two glaring faults. First, a break in the main cable brings down the entire network. Second, it is hard to troubleshoot: finding where the cable break occurred takes time. The star topology has been deployed for a long time now and is the standard in the LAN environment. Star topologies link nodes directly to a central point. The central point is either a hub or a LAN switch. Ethernet hubs are multiport repeaters, meaning they repeat the signal out each port except the source port.

The advantages of a physical star topology network are reliability and serviceability. If a point-to-point segment has a break, in the star topology, it will affect only the node on that link. Other nodes on the network continue to operate as if that connection were nonexistent. Ethernet hubs and LAN switches act as the repeaters that centralize the twisted-pair media. Twisted-pair media can also be used to join like devices. Following the OSI model and the concept of interchangeable parts, even Token Ring, which is a logical ring, can use a physical star topology with twisted pair.

Ethernet Logical Addressing

In Ethernet, LAN devices must have a unique identifier on that specific domain. LAN devices use a Media Access Control (MAC) address for such purpose. MAC addresses are also referred to as hardware addresses or burned-in addresses because they are usually programmed into the Ethernet adapter by the manufacturer of the hardware.

The format of a MAC address is a 48-bit hexadecimal address. Because hexadecimal uses the digits 0-9 and the letters a-f (for numbers 10-15), this yields a 12-digit address. MAC addresses are represented in any one of four formats. All the formats properly identify a MAC address and differ only in the field separators, as follows:

  • Dashes between each two characters: 00-01-03-23-31-DD
  • Colons instead of dashes between each two characters: 00:01:03:23:31:DD
  • Periods between each fourth character: 0001.0323.31DD
  • The digits without dashes, periods, or colons: 0001032331DD

Cisco routers typically use the 0001.0323.31DD formatting, while Cisco switches running Catalyst Operation System (Catalyst OS) images use 00:01:03:23:31:DD to represent the same address.
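Since all four notations encode the same 48 bits, converting between them is just a matter of stripping separators; a small sketch (the helper names are mine):

```python
import re

def normalize_mac(mac: str) -> str:
    """Strip dashes, colons, and periods and lowercase the result, so any
    of the four common formats collapses to a bare 12-digit hex string."""
    digits = re.sub(r"[-:.]", "", mac).lower()
    if not re.fullmatch(r"[0-9a-f]{12}", digits):
        raise ValueError(f"not a MAC address: {mac!r}")
    return digits

def to_cisco_dotted(mac: str) -> str:
    """Render in the 0001.0323.31dd style used on Cisco routers."""
    d = normalize_mac(mac)
    return f"{d[0:4]}.{d[4:8]}.{d[8:12]}"

print(to_cisco_dotted("00-01-03-23-31-DD"))  # 0001.0323.31dd
print(normalize_mac("00:01:03:23:31:DD") == normalize_mac("0001.0323.31DD"))  # True
```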

CSMA/CD Operation

Ethernet operates using CSMA/CD. By definition, CSMA/CD is half-duplex communication. Half duplex implies that only one device on the Ethernet LAN “talks" at a time, and devices connected to the same Ethernet network are considered to be part of the same collision domain. Devices sending traffic in the same collision domain have the potential of their packets colliding with each other when two devices attempt to transmit at the same time. The logical definition of this range of devices is called a domain, hence the term collision domain.

The old style telephone party line example best illustrates the concept of a collision domain, as shown in Figure 1-3.

Figure 1-3. Telephone Party Line


 

Table 1-2 lists each party line operation and compares it to Ethernet.

Table 1-2. Comparing Party Line and Ethernet Operations

Step 1
  Party line: I pick up the phone. Is anyone talking?
  Ethernet:   The LAN device listens to the Ethernet network to sense the carrier signal on the network.

Step 2
  Party line: If no one is speaking, I can start talking. I'll keep listening to make sure no one speaks at the same time as me.
  Ethernet:   If the LAN device does not detect a carrier signal on the network, it can begin transmitting. The LAN device listens to the carrier signal on the network and matches it to its output.

Step 3
  Party line: If I can't hear myself speak, I'll assume someone is trying to speak at the same time.
  Ethernet:   If there is a discrepancy between input and output, another LAN device has transmitted. This is a collision.

Step 4
  Party line: I'll then yell out to tell the other person to stop talking.
  Ethernet:   The LAN device sends out a jamming signal to alert the other LAN devices that there has been a collision.

Step 5
  Party line: I will then wait a random amount of time to start my conversation again.
  Ethernet:   The LAN device waits a random amount of time to start transmitting again. This is called the backoff algorithm. If multiple attempts to transmit fail, the backoff algorithm increases the amount of time waited.

 

In a party line, people occasionally speak over each other. The more callers on the party line, the more often people attempt to speak at the same time. It is the same with Ethernet collisions. Because users share Ethernet bandwidth and are part of the same collision domain, it is often referred to as shared media or shared Ethernet. (See Figure 1-4.) The efficiency of shared Ethernet is inversely related to the number of devices attempting to communicate at the same time: as more devices are added, the efficiency decreases.

Figure 1-4. Shared Ethernet Segment


 

The algorithm CSMA/CD uses after a collision is the truncated binary exponential backoff algorithm. When a collision occurs, the device must wait a random number of slot times before attempting to retransmit the packet. The slot time is contingent on the speed of the link; for instance, the slot time differs between 10-Mbps and 100-Mbps Ethernet. Table 1-3 shows an example for a 10-Mbps Ethernet link. Cisco switches use a more aggressive maximum wait time than what is illustrated in this example; the purpose of the example is to give you a feel for how truncated binary exponential backoff works.

Table 1-3. CSMA/CD Collision Backoff Ranges

Retry       Range    Max Number  Max Wait Time
1st         0-1      (2^1)-1     51.2 us
2nd         0-3      (2^2)-1     153.6 us
3rd         0-7      (2^3)-1     358.4 us
4th         0-15     (2^4)-1     768.0 us
5th         0-31     (2^5)-1     1587.2 us
6th         0-63     (2^6)-1     3225.6 us
7th         0-127    (2^7)-1     6502.4 us
8th         0-255    (2^8)-1     13056.0 us
9th         0-511    (2^9)-1     26163.2 us
10th-15th   0-1023   (2^10)-1    52377.6 us
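The ranges in Table 1-3 follow directly from the truncated binary exponential backoff rule; a sketch for a 10-Mbps link (51.2-microsecond slot time, exponent capped at 10):

```python
SLOT_TIME_US = 51.2  # 10-Mbps Ethernet slot time, in microseconds

def backoff_range(retry: int) -> int:
    """Maximum number of slot times a station may wait after its Nth
    consecutive collision; the exponent is truncated at 10, so retries
    10 through 15 all draw from the 0-1023 range."""
    return (2 ** min(retry, 10)) - 1

def max_wait_us(retry: int) -> float:
    """Worst-case backoff delay in microseconds for the Nth retry."""
    return backoff_range(retry) * SLOT_TIME_US

print(backoff_range(3), round(max_wait_us(3), 1))    # 7 358.4
print(backoff_range(12), round(max_wait_us(12), 1))  # 1023 52377.6
```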

 

Cisco switches monitor various collision counters, as follows:

  • Single
  • Multiple
  • Late
  • Excessive

Of the four types, be wary of late and excessive collisions. A late collision is one detected after the first 64 bytes of the frame have already been transmitted. Unlike single and multiple collisions, late collisions cause packets to be lost. Late collisions usually indicate that the cable length exceeds IEEE specifications. Cascading hubs (connecting two or more hubs to each other) can also stretch the collision domain beyond specification. You can use a time-domain reflectometer (TDR) to detect a cable fault and verify that the cable is within the IEEE standard. Other factors that cause late collisions include mismatched duplex settings and bad transceivers. Example 1-1 shows the output from a switch that has detected a late collision on one of its ports.

Example 1-1. Late Collision Error Messages

 

%LANCE-5-LATECOLL: Unit [DEC], late collision error

 

%PQUICC-5-LATECOLL: Unit [DEC], late collision error

 

 

The slot time of 51.2 microseconds used to detect and report collisions is based on the round-trip time between the farthest points on the Ethernet link. The value is calculated by taking the smallest Ethernet frame size of 64 bytes and multiplying it by 8 bits, which gives 512 bits. This number is then multiplied by 0.1 microseconds, the duration of one bit at 10 Mbps. A signal must be able to reach the farthest point of the cable within half of this slot time, 25.6 microseconds.
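The arithmetic in the preceding paragraph can be checked directly:

```python
MIN_FRAME_BYTES = 64   # smallest legal Ethernet frame
BIT_TIME_US = 0.1      # duration of one bit at 10 Mbps, in microseconds

slot_bits = MIN_FRAME_BYTES * 8           # 512 bits
slot_time_us = slot_bits * BIT_TIME_US    # 51.2 us round-trip budget
one_way_us = slot_time_us / 2             # 25.6 us to reach the far end

print(slot_bits, round(slot_time_us, 1), round(one_way_us, 1))  # 512 51.2 25.6
```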

Excessive collisions typically occur when too much traffic is on the wire or too many devices are in the collision domain. After 16 consecutive attempts (the original transmission plus 15 retries) fail, the excessive collisions counter increments and the packet is dropped. In this case, too many devices are competing for the wire. Duplex mismatches can also cause the problem. The switch generates a syslog message, as depicted in Example 1-2, when excessive collisions occur on a port.

Example 1-2. Excessive Collisions Error Message

 

%PQUICC-5-COLL: Unit [DEC], excessive collisions. Retry limit [DEC] exceeded

 

 

On the switch, the show port mod/port command provides information about single collisions, multiple collisions, and so on. Example 1-3 shows a useful excerpt from the show port output, taken from a switch running Catalyst OS software.

Example 1-3. Sample of show port Command

 

Switch1 (enable) show port 10/3

Port  Single-Col Multi-Coll Late-Coll Excess-Col Carri-Sen Runts Giants
----- ---------- ---------- --------- ---------- --------- ----- ------
10/3          37          3        24          0         0     0      0

 

 

Full-Duplex Ethernet

In the party line scenario, congestion occurs when more than two people attempt to talk at the same time. When only two people, or only two devices, are talking, virtually all the bandwidth is available. In cases where only two devices need to communicate, Ethernet can be configured to operate in full-duplex mode as opposed to the normal half-duplex operation. Full-duplex operation allows a network device to "talk" (transmit) and "listen" (receive) at the same time. (See Figure 1-5.)

Figure 1-5. Full-Duplex Directly Connected Hosts


 

Because Ethernet is based on CSMA/CD, full-duplex devices either need to be directly connected to each other or be connected to a device that allows full-duplex operation (such as a LAN switch). Ethernet hubs do not allow full-duplex operation, as they are only physical layer (Layer 1) signal repeaters for the logical bus (Layer 2). Ethernet still operates as a logical bus under full duplex.

Autonegotiation

Autonegotiation is a mechanism that allows two devices at either end to negotiate speed and duplex settings at physical layer. The benefits of autonegotiation include minimal configuration and operability between dissimilar Ethernet technologies.

In today's networks, 10BASE-T and 100BASE-T are ubiquitous. Newer Cisco modules such as the WS-X6548-GE-TX have ports capable of 10/100/1000BASE-T operation. Most existing network interface cards (NICs) operate at 10/100 speeds, with newer NICs offering 10/100/1000BASE-T. NICs capable of autonegotiating speed and duplex are beneficial because more and more users are becoming mobile. One day, a user might be connected to the office Catalyst switch at 100 Mbps; the next day, to a remote site that supports only 10 Mbps. The objective is to give the user easy access to the network without sacrificing reliability. If the user's laptop NIC is hard coded to 100BASE-T full duplex, connectivity can break because different sites may have modules that operate at different speeds. For instance, suppose the module in the office building is a WS-X5225 (24-port 10/100BASE-TX) and the remote site has a WS-X5013 (24-port 10BASE-T). Because the switches are set by default to autonegotiate, a user with a NIC hard coded to 100BASE-T full duplex will not get any connectivity at the 10BASE-T site. Enabling autonegotiation on both the switch and the laptop eliminates this problem: the NIC automatically negotiates the proper physical layer configuration with whatever device it connects to.

The actual mechanics behind autonegotiation are straightforward, as depicted in Figure 1-6. The two link partners each advertise their capabilities and settle on the highest-priority mode they have in common. Since the introduction of 1000BASE-T, the priorities have been readjusted. Table 1-4 lists each priority level.

Figure 1-6. Ethernet Autonegotiation


 

Table 1-4. Autonegotiation Priority Levels

Priority  Ethernet Specification  Type of Duplex
1         1000BASE-T              Full duplex
2         1000BASE-T              Half duplex
3         100BASE-T2              Full duplex
4         100BASE-TX              Full duplex
5         100BASE-T2              Half duplex
6         100BASE-T4              Half duplex (T4 supports half duplex only)
7         100BASE-TX              Half duplex
8         10BASE-T                Full duplex
9         10BASE-T                Half duplex
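Conceptually, negotiation is a walk down this priority list until a mode both partners advertise is found. The sketch below illustrates that idea; the mode tuples and helper function are our own shorthand, not a real implementation.

```python
# Priority order from Table 1-4, highest first.
# 100BASE-T4 supports half duplex only.
PRIORITY = [
    ("1000BASE-T", "full"), ("1000BASE-T", "half"),
    ("100BASE-T2", "full"), ("100BASE-TX", "full"),
    ("100BASE-T2", "half"), ("100BASE-T4", "half"),
    ("100BASE-TX", "half"),
    ("10BASE-T", "full"), ("10BASE-T", "half"),
]

def negotiate(local_modes, remote_modes):
    """Return the highest-priority mode advertised by both partners."""
    common = set(local_modes) & set(remote_modes)
    for mode in PRIORITY:
        if mode in common:
            return mode
    return None  # no common mode: link does not come up
```

For example, a 10/100 NIC facing a 10BASE-T-only partner settles on 10BASE-T full duplex if both sides advertise it.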

 

The 10BASE-T specification does not include autonegotiation between devices; autonegotiation was first introduced as an optional feature in the IEEE 802.3u Fast Ethernet specification. In a 10BASE-T environment, a single pulse, called the Normal Link Pulse (NLP), is sent every 16 ms (±8 ms) on an idle link. The NLP performs a link integrity test for 10BASE-T. When no traffic is on the link, the 10BASE-T device generates an NLP on the wire to keep the link from going down, and it stops generating pulses while data packets are on the link. A link failure occurs when the 10BASE-T device receives neither NLPs nor a data packet within a specified time window.

As mentioned earlier, the IEEE 802.3u specification defines autonegotiation as optional. Within autonegotiation, there are further optional functions, such as Remote Fault Indication and the Next Page Function. Remote Fault Indication detects physical layer errors and informs the link partner of them. The Next Page Function provides more verbose information about the negotiation process. One of the more appealing features of autonegotiation is compatibility with dissimilar Ethernet technologies. For example, Fast Ethernet is backward compatible with 10BASE-T through a Parallel Detection mechanism, which is used when only one of the two link partners is capable of autonegotiation: essentially, the Fast Ethernet device falls back to NLPs to communicate with a 10BASE-T device.

Fast Ethernet uses the same pulse structure as 10BASE-T. In 10BASE-T there is only a single pulse every 16 ms, whereas in Fast Ethernet there are bursts of pulses at 16 ms (±8 ms) intervals. Each burst, known as a Fast Link Pulse (FLP) burst, encodes the capabilities of the device in a 16-bit word called the Link Code Word (LCW). The length of a burst is approximately 2 ms.

NOTE

Fast Ethernet vendors used their discretion whether to add autonegotiation capabilities to their devices. As a result, Fast Ethernet NICs without autonegotiation capabilities were once found in the marketplace.

 

Gigabit Ethernet implementation requires that all IEEE 802.3z compliant devices have autonegotiation capability. Autonegotiation can, however, be disabled through a software feature. From the actual hardware perspective, the 802.3z specification requires autonegotiation capabilities on the device. On Cisco Catalyst switches, autonegotiation can be disabled with the following command. Note that this command must be configured on both link partners:

 

 

 

 

 

set port negotiation <mod/port> enable | disable

 

 

The parameters that 802.3z devices negotiate are

  • Duplex setting
  • Flow control
  • Remote fault information

Although duplex setting can be negotiated, Cisco switches operate Gigabit Ethernet in full-duplex mode only. With the introduction of the newer 1000/100/10 blades, a port can operate at various speeds and duplex settings. However, it is unlikely that Cisco will support Gigabit half duplex in any point-to-point configurations with even the aforementioned blades. Use the show port capabilities command that is available in Catalyst OS to view the features supported by the line module, as shown in Example 1-4.

Example 1-4. Output from show port capabilities Command

 

Switch1 (enable) show port capabilities 1/1

 

Model WS-X6K-SUP2-2GE

 

Port 1/1

 

Type 1000BaseSX

 

Speed 1000

 

Duplex full

 

 

Flow control is an optional feature that is part of the 802.3x specification. The concept behind flow control is to reduce the burden on a port that is overwhelmed with traffic by creating back-pressure on the network. If the volume of traffic is such that a port runs out of buffers, it drops subsequent packets. The flow control mechanism instead tells the transmitter to back off for a period of time by sending it an Ethernet pause frame (destination MAC address 01-80-c2-00-00-01). The transmitter receives this frame and holds outgoing packets in its output buffer queue, giving the receiver time to clear the packets in its input queue. The obvious advantage is that packets are not dropped; the drawback is added latency, to which certain multicast, voice, and video traffic is sensitive. Flow control should therefore be implemented with care; it is typically used as a quick fix, and not all Cisco switches support it. On switches that do, it is configured with the following Catalyst OS command:
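For illustration, the on-the-wire shape of an 802.3x pause frame can be sketched as follows. This builds raw bytes only (no FCS) from the fields named above; it is a teaching sketch, not a switch feature.

```python
import struct

def build_pause_frame(src_mac: bytes, pause_quanta: int) -> bytes:
    """Sketch of an 802.3x PAUSE frame, FCS excluded. Each pause
    quantum is 512 bit times at the current link speed."""
    dst = bytes.fromhex("0180c2000001")        # reserved MAC-control multicast
    ethertype = struct.pack("!H", 0x8808)      # MAC Control EtherType
    opcode = struct.pack("!H", 0x0001)         # PAUSE opcode
    payload = opcode + struct.pack("!H", pause_quanta)
    payload += b"\x00" * (46 - len(payload))   # pad to minimum payload size
    return dst + src_mac + ethertype + payload
```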

 

 

 

 

 

set port flowcontrol <mod/port>

 

 

Remote fault information detects physical layer problems such as excessive noise, wrong cable types, or bad hardware, and advertises them to the remote peer. The switch takes a proactive approach when excessive physical layer problems exist: a port that is generating errors can disrupt the network by causing spanning-tree problems, black-holing traffic, and draining system resources, so the switch error-disables the port.

Looking at some examples solidifies the concept and function of autonegotiation. In Figure 1-7, Host1 and the hub are link partners over a 10BASE-T connection. 10BASE-T has no knowledge of autonegotiation, so both devices must be statically configured. NLPs are sent by both devices when they come online. In this example, the devices operate over a 10BASE-T half-duplex connection.

Figure 1-7. 10BASE-T Autonegotiation


 

Figure 1-8 shows a straight 100BASE-T connection with both devices enabled for autonegotiation. FLP bursts are sent to advertise each device's capabilities and negotiate the highest-bandwidth connection available. The highest mode negotiated here is priority 4, which is 100BASE-TX full duplex.

Figure 1-8. 100BASE-T Autonegotiation


 

The following is the command that configures a switch to autonegotiate a port:

 

 

 

 

 

set port speed <mod/port> auto

 

 

In Figure 1-9, Host1 has a 10BASE-T card. The switch can operate in both 10BASE-T and 100BASE-T modes; such 10/100 modules are common in a switching environment, and Cisco offers many of them with various features. In this example, there is a mismatch between the pulses sent by Host1 and the switch. Because Host1 has a 10BASE-T card, it can send only NLPs. When the switch comes online, it initially generates FLP bursts; when it detects NLPs from its link partner, it stops generating FLPs and switches to NLPs. The switch then adopts whatever mode is statically configured on Host1. In this instance, the connection operates as 10BASE-T at half duplex.

Figure 1-9. 10/100BASE-T Autonegotiation


 

The finer points of autonegotiation have been covered; however, the feature also has some drawbacks. Numerous network problems resulted when autonegotiation was first deployed, ranging from degraded performance to loss of connectivity. Causes included advanced software features that shipped with NICs, vendors not fully conforming to the 802.3u standard, and buggy code. Now that manufacturers have resolved these issues, misconfiguration is the biggest remaining problem. Table 1-5 and Table 1-6 show the consequences of various misconfigurations. For instance, a duplex mismatch can degrade performance on the wire and potentially cause packet loss.

Table 1-5. Autonegotiation Configurations for 10/100 Ethernet

NIC (Speed/Duplex)     Switch (Speed/Duplex)  Resulting NIC          Resulting Catalyst     Comments
AUTO                   AUTO                   100 Mbps, Full duplex  100 Mbps, Full duplex  Assuming the maximum capability of both the Catalyst switch and the NIC is 100 Mbps full duplex.
100 Mbps, Full duplex  AUTO                   100 Mbps, Full duplex  100 Mbps, Half duplex  Duplex mismatch.
AUTO                   100 Mbps, Full duplex  100 Mbps, Half duplex  100 Mbps, Full duplex  Duplex mismatch.
100 Mbps, Full duplex  100 Mbps, Full duplex  100 Mbps, Full duplex  100 Mbps, Full duplex  Correct manual configuration.
100 Mbps, Half duplex  AUTO                   100 Mbps, Half duplex  100 Mbps, Half duplex  Link is established, but the switch sees no autonegotiation information from the NIC and defaults to half duplex.
10 Mbps, Half duplex   AUTO                   10 Mbps, Half duplex   10 Mbps, Half duplex   Link is established, but the switch does not see FLPs and defaults to 10 Mbps half duplex.
10 Mbps, Half duplex   100 Mbps, Half duplex  No link                No link                Neither side establishes link because of the speed mismatch.
AUTO                   100 Mbps, Half duplex  100 Mbps, Half duplex  100 Mbps, Half duplex  Link is established; the NIC does not see FLPs, parallel-detects 100 Mbps, and defaults to half duplex.

 

Table 1-6. Autonegotiation Configurations for Gigabit Ethernet

Switch Port Autonegotiation  NIC Autonegotiation  Switch Link  NIC Link
Enabled                      Enabled              Up           Up
Disabled                     Disabled             Up           Up
Enabled                      Disabled             Down         Up
Disabled                     Enabled              Up           Down
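The 10/100 rows of Table 1-5 follow a simple rule: a hard-coded partner sends no FLPs, so the autonegotiating side parallel-detects the speed but must default to half duplex. A rough sketch of that rule, with hypothetical helpers of our own naming:

```python
AUTO = "auto"

def auto_side_result(partner):
    """What an autonegotiating port settles on when its link partner
    is hard-coded as (speed_mbps, duplex): parallel detection senses
    the speed, but duplex defaults to half."""
    speed, _duplex = partner
    return (speed, "half")

def duplex_mismatch(nic, switch):
    """True when one side is hard-coded full duplex and the other
    autonegotiates (and therefore falls back to half duplex)."""
    if nic == AUTO and switch != AUTO:
        return switch[1] == "full"
    if switch == AUTO and nic != AUTO:
        return nic[1] == "full"
    return False
```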

 

Network engineers still have heated discussions about whether to enable autonegotiation in the network. As mentioned earlier, autonegotiation is a big advantage for mobile users. A user should not have to worry about configuring his laptop every time he goes to a different location.

The rule of thumb is to enable autonegotiation on access ports that connect to users. Mission-critical devices should be statically configured to protect the network from possible outages and performance hits. Therefore, connections between routers and switches, or servers and switches should be hard coded with the appropriate speed and duplex settings.

Switch Congestion and Head-of-Line Blocking

Head-of-line blocking occurs whenever traffic waiting to be transmitted prevents or blocks traffic destined elsewhere from being transmitted. Head-of-line blocking occurs most often when multiple high-speed data sources are sending to the same destination. In the earlier shared bus example, the central arbiter used the round-robin service approach to moving traffic from one line card to another. Ports on each line card request access to transmit via a local arbiter. In turn, each line card’s local arbiter waits its turn for the central arbiter to grant access to the switching bus. Once access is granted to the transmitting line card, the central arbiter has to wait for the receiving line card to fully receive the frames before servicing the next request in line. The situation is not much different than needing to make a simple deposit at a bank having one teller and many lines, while the person being helped is conducting a complex transaction.

In Figure 2-7, a congestion scenario is created using a traffic generator. Port 1 on the traffic generator is connected to Port 1 on the switch, generating traffic at a 50 percent rate, destined for both Ports 3 and 4. Port 2 on the traffic generator is connected to Port 2 on the switch, generating traffic at a 100 percent rate, destined for only Port 4. This situation creates congestion for traffic destined to be forwarded by Port 4 on the switch because traffic equal to 150 percent of the forwarding capabilities of that port is being sent. Without proper buffering and forwarding algorithms, traffic destined to be transmitted by Port 3 on the switch may have to wait until the congestion on Port 4 clears.

Figure 2-7. Head-of-Line Blocking


 

Head-of-line blocking can also be experienced with crossbar switch fabrics because many, if not all, line cards have high-speed connections into the switch fabric. Multiple line cards may attempt to create a connection to a line card that is already busy and must wait for the receiving line card to become free before transmitting. In this case, data destined for a different line card that is not busy is blocked by the frames at the head of the line.
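A single ingress FIFO makes the problem easy to see: the frame at the head of the queue stalls everything behind it, even frames bound for idle ports. This is a toy sketch of the effect, not a Catalyst algorithm.

```python
from collections import deque

def drain_fifo(frames, busy_egress):
    """frames: the egress port of each queued frame, head first.
    Transmission stops the moment the head frame targets a busy
    egress port, blocking later frames bound for idle ports."""
    q = deque(frames)
    sent = []
    while q and q[0] not in busy_egress:
        sent.append(q.popleft())
    return sent, list(q)
```

With port 4 congested, a queue holding frames for ports 3, 4, 3 transmits only the first frame; the second frame for idle port 3 is blocked behind the frame for port 4.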

Catalyst switches use a number of techniques to prevent head-of-line blocking; one important example is per-port buffering. Each port maintains a small ingress buffer and a larger egress buffer. Larger output buffers (64 KB to 512 KB, shared) allow frames to be queued for transmission during periods of congestion. During normal operation, only a small input queue is necessary because the switching bus services frames at very high speed. In addition to queuing during congestion, many models of Catalyst switches can separate frames into different input and output queues, providing preferential treatment or priority queuing for sensitive traffic such as voice. Chapter 8 discusses queuing in greater detail.

Switch Fabrics

Although a variety of techniques have been used to implement switching fabrics on Cisco Catalyst platforms, two major architectures of switch fabrics are common:

  • Shared bus
  • Crossbar

Shared Bus Switching

In a shared bus architecture, all line modules in the switch share one data path. A central arbiter determines how and when to grant requests for access to the bus from each line card. Various methods of achieving fairness can be used by the arbiter depending on the configuration of the switch. A shared bus architecture is much like multiple lines at an airport ticket counter, with only one ticketing agent processing customers at any given time.

Figure 2-2 illustrates a round-robin servicing of frames as they enter a switch. Round-robin is the simplest method of servicing frames in the order in which they are received. Current Catalyst switching platforms such as the Catalyst 6500 support a variety of quality of service (QoS) features to provide priority service to specified traffic flows. Chapter 8, “Understanding Quality of Service on Catalyst 6500," will provide more information on this topic.

Figure 2-2. Round-Robin Service Order


 

The following list and Figure 2-3 illustrate the basic concept of moving frames from the received port or ingress, to the transmit port(s) or egress using a shared bus architecture:

1. Frame received from Host1: The ingress port on the switch receives the entire frame from Host1 and stores it in a receive buffer. The port checks the frame's Frame Check Sequence (FCS) for errors. If the frame is defective (a runt, a fragment, a giant, or an invalid CRC), the port discards the frame and increments the appropriate counter.

2. Requesting access to the data bus: A header containing the information necessary to make a forwarding decision is added to the frame. The line card then requests permission to transmit the frame onto the data bus.

3. Frame transmitted onto the data bus: After the central arbiter grants access, the frame is transmitted onto the data bus.

4. Frame is received by all ports: In a shared bus architecture, every frame transmitted is received by all ports simultaneously. The frame is also received by the hardware that makes the forwarding decision.

5. Switch determines which port(s) should transmit the frame: The information added to the frame in step 2 is used to determine which ports should transmit it. For frames with an unknown destination MAC address and for broadcast frames, the switch transmits the frame out all ports except the one on which it was received.

6. Port(s) instructed to transmit; remaining ports discard the frame: Based on the decision in step 5, a certain port or ports are told to transmit the frame while the rest are told to discard or flush it.

7. Egress port transmits the frame to Host2: In this example, it is assumed that the location of Host2 is known to the switch and only the port connecting to Host2 transmits the frame.

 

Figure 2-3. Frame Flow in a Shared Bus



 

One advantage of a shared bus architecture is that every port except the ingress port receives a copy of the frame automatically, which makes multicast and broadcast traffic easy to support without replicating frames for each port. This example is greatly simplified; Chapter 3, "Catalyst Switching Architecture," discusses the Catalyst platforms that use a shared bus architecture in detail.
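The seven steps can be condensed into a forwarding sketch. The data shapes (frames as dicts, a MAC table mapping destination address to port) are assumptions of this sketch; real Catalysts make this decision in hardware.

```python
BROADCAST = "ff:ff:ff:ff:ff:ff"

def shared_bus_forward(frame, ingress_port, mac_table, all_ports):
    """Return the ports that should transmit the frame; the rest flush it."""
    if frame.get("bad_fcs"):
        return []                          # step 1: defective frame dropped
    # Steps 2-4: the arbiter grants the bus; every port sees the frame.
    egress = mac_table.get(frame["dst"])   # step 5: forwarding decision
    if egress is None or frame["dst"] == BROADCAST:
        # Unknown destination or broadcast: flood out all other ports.
        return [p for p in all_ports if p != ingress_port]
    return [] if egress == ingress_port else [egress]   # steps 6-7
```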

Crossbar Switching

In the shared bus architecture example, the speed of the shared data bus determines much of the overall traffic handling capacity of the switch. Because the bus is shared, line cards must wait their turns to communicate, and this limits overall bandwidth.

A solution to the limitations imposed by the shared bus architecture is the implementation of a crossbar switch fabric, as shown in Figure 2-4. The term crossbar means different things on different switch platforms, but essentially indicates multiple data channels or paths between line cards that can be used simultaneously.

Figure 2-4. Crossbar Switch Fabric


 

In the case of the Cisco Catalyst 5500 series, one of the first crossbar architectures advertised by Cisco, three individual 1.2-Gbps data buses are implemented. Newer Catalyst 5500 series line cards have the necessary connector pins to connect to all three buses simultaneously, taking advantage of 3.6 Gbps of aggregate bandwidth. Legacy line cards from the Catalyst 5000 are still compatible with the Catalyst 5500 series by connecting to only one of the three data buses. Access to all three buses is required by Gigabit Ethernet cards on the Catalyst 5500 platform.

A crossbar fabric on the Catalyst 6500 series is enabled with the Switch Fabric Module (SFM) and Switch Fabric Module 2 (SFM2). The SFM provides 128 Gbps of bandwidth (256 Gbps full duplex) to line cards via 16 individual 8-Gbps connections to the crossbar switch fabric. The SFM2 was introduced to support the Catalyst 6513 13-slot chassis and includes architecture optimizations over the SFM.

LAN Switch Receive Data Modes

Cut-Through Mode

Switches operating in cut-through mode receive and examine only the first 6 bytes of a frame. These first 6 bytes represent the destination MAC address of the frame, which is sufficient information to make a forwarding decision. Although cut-through switching offers the least latency when transmitting frames, it is susceptible to transmitting fragments created via Ethernet collisions, runts (frames less than 64 bytes), or damaged frames.

Fragment-Free Mode

Switches operating in fragment-free mode receive and examine the first 64 bytes of a frame. (Cut-through mode, by contrast, is the mode referred to as "fast forward" in some Cisco Catalyst documentation.) Why examine 64 bytes? In a properly designed Ethernet network, collision fragments must be detected within the first 64 bytes.

Store-and-Forward Mode

Switches operating in store-and-forward mode receive and examine the entire frame, resulting in the most error-free type of switching.

As switches utilizing faster processors and application-specific integrated circuits (ASICs) were introduced, cut-through and fragment-free switching were no longer necessary. As a result, all new Cisco Catalyst switches utilize store-and-forward switching.
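The three modes differ only in how much of the frame is received before forwarding begins, which this small sketch of the preceding sections summarizes:

```python
def bytes_before_forwarding(mode, frame_len):
    """How many bytes the switch receives before it starts forwarding."""
    if mode == "cut-through":
        return 6                    # destination MAC address only
    if mode == "fragment-free":
        return min(64, frame_len)   # collision fragments fit in 64 bytes
    if mode == "store-and-forward":
        return frame_len            # entire frame, so the FCS can be checked
    raise ValueError(f"unknown mode: {mode}")
```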

Figure 2-1 compares each of the switching modes.

Figure 2-1. Switching Modes


802.1D Topology Change (from Cisco)



Contents

Introduction

Prerequisites

      Requirements

      Components Used

      Conventions

Purpose of the Topology Change Mechanism

Principle of Operation

      Notify the Root Bridge

      Broadcast the Event to the Network

What to Do When There are Many Topology Changes in the Network

      Flooded Traffic

      Problem in ATM LANE Bridged Environments

      Avoid TCN Generation with the portfast Command
      Track the Source of a TCN

Conclusion


Related Information


Introduction

When you monitor Spanning-Tree Protocol (STP) operations, you may be concerned when you see topology change counters incrementing in the statistics log. Topology changes are normal in STP, but too many of them can have an impact on network performance. This document explains:

  • The topology change mechanism in per-VLAN spanning tree (PVST) and PVST+ environments.
  • What triggers a topology change event.
  • Issues related to the topology change mechanism.

Prerequisites

Requirements

There are no specific requirements for this document.

Components Used

This document is not restricted to specific software and hardware versions.

The information in this document was created from the devices in a specific lab environment. All of the devices used in this document started with a cleared (default) configuration. If your network is live, make sure that you understand the potential impact of any command.

Conventions

For more information on document conventions, refer to the Cisco Technical Tips Conventions.

Purpose of the Topology Change Mechanism

By learning from the frames it receives, a bridge builds a table that associates the Media Access Control (MAC) addresses of the hosts reachable through each port with that port. This table is used to forward frames directly to their destination port, so flooding is avoided.

The default aging time for this table is 300 seconds (five minutes): only after a host has been silent for five minutes does its entry disappear from the bridge's table. Here is an example that shows why you might want this aging to be faster.

In this network, assume that bridge B1 is blocking its link to B4. A and B are two stations that have an established connection. Traffic from A to B goes to B1, B2, B3, and then B4. The scheme shows the MAC addresses table learned by the four bridges in this situation:


Now, assume the link between B2 and B3 fails. Communication between A and B is interrupted at least until B1 puts its port to B4 into forwarding mode (up to 50 seconds with default parameters). Even then, when A sends a frame to B, B1 still has an entry pointing toward B2, so the packet is sent into a black hole; the same applies when B tries to reach A. Communication is lost for five minutes, until the entries for the A and B MAC addresses age out.


The forwarding databases implemented by bridges are very efficient in a stable network. However, there are many situations where the five minute aging time is a problem after the topology of the network has changed. The topology change mechanism is a workaround for that kind of problem. As soon as a bridge detects a change in the topology of the network (a link that goes down or goes to forwarding), it advertises the event to the whole bridged network.

The Principle of Operation section explains how this is implemented in practice. Every bridge is notified and reduces its aging time to forward_delay (15 seconds by default) for a certain period (max_age + forward_delay). Reducing the aging time is better than clearing the table outright, because hosts that are actively transmitting traffic are not flushed from the table.

In this example, as soon as bridge B2 or B3 detects the link going down, it sends topology change notifications. All bridges become aware of the event and reduce their aging time to 15 seconds. Because B1 does not receive any packet from B on its port leading to B2 within fifteen seconds, it ages out the entry for B on this port. The same happens on B4 to the entry for A on the port that leads to B3. Later, when the link between B1 and B4 goes to forwarding, traffic is immediately flooded and re-learned on this link.
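The effect of the reduced aging time can be sketched with a toy MAC table (timestamps in seconds; class and method names are illustrative, not bridge code):

```python
class MacTable:
    DEFAULT_AGE = 300    # normal aging time, seconds
    FORWARD_DELAY = 15   # aging time while a topology change is active

    def __init__(self):
        self.entries = {}        # mac -> (port, last_seen)
        self.tc_active = False   # set while TC-flagged BPDUs are received

    def learn(self, mac, port, now):
        self.entries[mac] = (port, now)

    def lookup(self, mac, now):
        """Return the port for mac, or None if unknown or aged out
        (which means the frame is flooded)."""
        limit = self.FORWARD_DELAY if self.tc_active else self.DEFAULT_AGE
        entry = self.entries.get(mac)
        if entry and now - entry[1] <= limit:
            return entry[0]
        return None
```

A host last seen 100 seconds ago is still reachable through its learned port in normal operation, but during a topology change the same entry is treated as stale and traffic to it is flooded and re-learned.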

Principle of Operation

This section explains how a bridge advertises a topology change at the Bridge Protocol Data Unit (BPDU) level.

It has already been briefly explained when a bridge considers that it has detected a topology change. The exact definition is:

  • When a port that was forwarding is going down (blocking for instance).
  • When a port transitions to forwarding and the bridge has a designated port. (This means that the bridge is not standalone.)

The process to send a notification to all bridges in the network involves two steps:

  • The bridge notifies the root bridge of the spanning tree.
  • The root bridge “broadcasts" the information into the whole network.

Notify the Root Bridge

In normal STP operation, a bridge keeps receiving configuration BPDUs from the root bridge on its root port, but it never sends a BPDU toward the root bridge. To make such signaling possible, a special BPDU called the topology change notification (TCN) BPDU was introduced. When a bridge needs to signal a topology change, it starts sending TCNs on its root port. The designated bridge receives the TCN, acknowledges it, and generates another one on its own root port. The process continues until the TCN reaches the root bridge.


The TCN is a very simple BPDU that carries no information; a bridge sends it out every hello_time seconds (the locally configured hello_time, not the hello_time specified in configuration BPDUs). The designated bridge acknowledges the TCN by immediately sending back a normal configuration BPDU with the topology change acknowledgement (TCA) bit set. The notifying bridge keeps sending its TCN until the designated bridge acknowledges it; therefore, the designated bridge answers the TCN even though it does not receive configuration BPDUs from its root.
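The hop-by-hop relay toward the root can be sketched as a walk up root ports. Representing each bridge as a dict with a `parent` link is an assumption of this toy model.

```python
def tcn_path(bridge):
    """Follow root ports upstream: each designated bridge acks the TCN
    (TCA bit) and originates its own TCN on its root port, until the
    notification reaches the root bridge."""
    path = []
    while bridge is not None:
        path.append(bridge["name"])
        bridge = bridge.get("parent")
    return path  # last element is the root bridge
```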

Broadcast the Event to the Network

Once the root is aware that there has been a topology change event in the network, it starts to send out its configuration BPDUs with the topology change (TC) bit set. These BPDUs are relayed by every bridge in the network with the bit set. As a result, all bridges become aware of the topology change and reduce their aging time to forward_delay. Bridges receive topology change BPDUs on both forwarding and blocking ports.

The TC bit is set by the root for a period of max_age + forward_delay seconds, which is 20+15=35 seconds by default.


What to Do When There are Many Topology Changes in the Network

Here are some of the problems that TCNs can generate, followed by information on how to limit topology changes and track down where they come from.


Flooded Traffic

The more hosts there are in the network, the higher the probability of a topology change; for instance, a directly attached host triggers a topology change when it is power cycled. In very large, flat networks, a point can be reached where the network is perpetually in a topology change state, which is as if the aging time were configured to fifteen seconds and leads to a high level of flooding. Here is a worst-case scenario that happened to a customer who was doing server backups.


The aging out of the entry for the device receiving the backup was a disaster: it caused very heavy traffic to hit all users. See the Avoid TCN Generation with the portfast Command section for how to avoid TCN generation.

Problem in ATM LANE Bridged Environments

This case is more critical than the normal flooding of traffic implied by fast aging. On receipt of a topology change for a VLAN, a Catalyst switch has its LAN emulation (LANE) blades reconfirm their LE-ARP tables for the corresponding emulated LAN (ELAN). Because every LANE blade in the ELAN issues the same request at the same time, this can put high stress on the LAN Emulation Server (LES) if there are many entries to reconfirm, and connectivity issues have been seen in this scenario. If the network is sensitive to topology changes, the real problem is not the topology change itself but the design of the network. It is recommended that you limit TCN generation as much as possible, if only to save the CPU of the LES. See the Avoid TCN Generation with the portfast Command section to limit TCN generation.

Avoid TCN Generation with the portfast Command

The portfast feature is a Cisco proprietary change in the STP implementation. The command is applied to specific ports and has two effects:

  • Ports that come up are put directly in the forwarding STP mode, instead of going through the listening and learning states. STP still runs on ports with portfast.
  • The switch never generates a TCN when a port configured for portfast is going up or down.

Enable portfast on ports where the connected hosts are likely to bring their link up and down (typically end stations that users frequently power cycle). This feature should not be necessary for server ports, and it should definitely be avoided on ports that lead to hubs or other bridges: a port that transitions directly to the forwarding state on a redundant link can cause temporary bridging loops.

Topology changes can be useful, so do not enable portfast on a port for which a link that goes up or down is a significant event for the network.
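
As a sketch, this is how portfast is typically enabled per interface in Cisco IOS (the interface number is only an example; on CatOS the equivalent command is set spantree portfast):

```
! Enable portfast on an access port that connects an end station.
! Do not apply this to ports that lead to hubs or other bridges.
interface FastEthernet0/1
 spanning-tree portfast
```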

Track the Source of a TCN

A topology change notification is not a bad thing in itself, but a good network administrator will want to know its origin in order to be sure it is not related to a real problem. Identifying the bridge that issued the topology change is not an easy task, although it is not technically complex.

Most bridges only count the number of TCNs they have issued or received. The Catalyst 4500/4000, 5500/5000, and 6500/6000 are able to show the port and the ID of the bridge that sent the last topology change they received. Starting from the root, it is then possible to go downstream to the initiator bridge. Refer to the show spantree statistics command for more information.

If you have the output of a show spantree statistics command from your Cisco device, use Output Interpreter (registered customers only) to display potential issues and fixes. To use Output Interpreter, you must log in as a registered customer and have JavaScript enabled.

IEEE 802.1


Contents

0. Introduction to IEEE 802.1

1. IEEE 802.1D

2. IEEE 802.1p

3. IEEE 802.1q

4. IEEE 802.1w

5. IEEE 802.1s

6. IEEE 802.1x


0. Introduction to IEEE 802.1

  IEEE 802.1 is a collection of protocols, such as the Spanning Tree Protocol and the VLAN protocols. To tell the individual protocols apart, the IEEE appends a different lowercase letter to "IEEE 802.1" when it defines each one. For example, IEEE 802.1a defines the LAN architecture; IEEE 802.1b defines internetworking, network management, and addressing; IEEE 802.1d defines the Spanning Tree Protocol; IEEE 802.1p defines priority queuing; IEEE 802.1q defines the VLAN tagging protocol; IEEE 802.1s defines the Multiple Spanning Tree Protocol; IEEE 802.1w defines the Rapid Spanning Tree Protocol; and IEEE 802.1x defines LAN security authentication.

1. IEEE 802.1D

1. Overview of IEEE 802.1D

  To address the "broadcast storm" problem inherent in Layer 2 data networks, the IEEE (Institute of Electrical and Electronics Engineers) defined the Spanning Tree Protocol in IEEE 802.1d. The Spanning Tree Protocol is a link management protocol that provides path redundancy for the network while preventing loops. For Ethernet to work properly, only one active path can exist between two stations.

  STP (the Spanning Tree Protocol) allows bridges to communicate with each other to discover physical loops in the network. The protocol defines an algorithm that bridges can use to create a loop-free logical topology. In other words, STP creates a tree structure of loop-free leaves and branches that spans the entire Layer 2 network.


 

2. How It Works

  The STP protocol defines concepts such as the root bridge (Root Bridge), root port (Root Port), designated port (Designated Port), and path cost (Path Cost). Its goal is to prune redundant loops by constructing a natural tree, while also achieving link backup and path optimization. The algorithm used to construct this tree is called the Spanning Tree Algorithm (STA). To carry this out, the bridges must exchange some information; these units of information exchange are called configuration messages, or BPDUs. An STP BPDU is a Layer 2 message whose destination MAC is the multicast address 01-80-C2-00-00-00; all bridges that support the STP protocol receive and process incoming BPDUs. The data field of these messages carries all the information needed for the spanning tree computation.

  The main shortcomings of STP are: convergence of a Layer 2 data network takes too long; the network topology is prone to network-wide disturbance; and there is no support for modern multi-VLAN environments.

2. IEEE 802.1p

  IEEE 802.1p is a traffic priority control standard that works at the Media Access Control (MAC) sublayer, enabling Layer 2 switches to provide traffic prioritization and dynamic multicast filtering. The IEEE 802.1p standard also provides multicast traffic filtering to ensure that this traffic does not spread beyond the Layer 2 switched network. The IEEE 802.1p header includes a 3-bit priority field that supports grouping packets into different traffic classes. It is an extension of the IEEE 802.1q (VLAN tagging) standard, and the two work together. The IEEE 802.1q standard defines the tag that is added to Ethernet MAC frames. A VLAN tag has two parts: the VLAN ID (12 bits) and the priority (3 bits). The IEEE 802.1q VLAN standard does not define or use the priority field; IEEE 802.1p defines it.

  IEEE 802.1p defines eight priority levels. The highest priority, 7, is applied to critical network traffic; priorities 6 and 5 are mainly used for delay-sensitive applications; priorities 4 through 1 are mainly used for controlled-load applications; and priority 0, the default, applies automatically when no other priority has been set.
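
Because the tag is just fixed-width bit fields, the 16-bit Tag Control Information word that carries the 802.1p priority and the 802.1q VLAN ID can be sketched in a few lines of Python; this is a simplified illustration of the field packing, not a full frame builder:

```python
import struct

def build_tci(pcp: int, dei: int, vlan_id: int) -> bytes:
    """Pack the 16-bit 802.1Q Tag Control Information field:
    3-bit priority (PCP), 1-bit DEI, 12-bit VLAN ID."""
    if not (0 <= pcp <= 7 and 0 <= dei <= 1 and 0 <= vlan_id <= 4095):
        raise ValueError("field out of range")
    return struct.pack("!H", (pcp << 13) | (dei << 12) | vlan_id)

def parse_tci(raw: bytes) -> tuple:
    """Return (pcp, dei, vlan_id) from a packed TCI."""
    (tci,) = struct.unpack("!H", raw)
    return tci >> 13, (tci >> 12) & 0x1, tci & 0x0FFF
```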

3. IEEE 802.1q

  The IEEE 802.1q protocol, "Virtual Bridged Local Area Networks," mainly specifies how VLANs are implemented.

1. Overview of VLANs

  Virtual LANs are developing rapidly, and the major network vendors have implemented the VLAN protocol in their switch products. In a switch that supports VLAN technology, the Ethernet ports can be divided into several groups, such as a production group, an engineering group, and a marketing group. The users in each group then behave as if they were on the same LAN (even though the members of a group may be spread across many switches rather than one), while users outside a group cannot access its members, which improves each group's network security to some extent.

  VLAN membership can be defined in four ways: by port, by MAC address, by network layer, and by IP multicast group.

2. Overview of the Protocol

  The IEEE 802.1q protocol establishes a standard method for identifying Ethernet frames that carry VLAN membership information. The IEEE 802.1q standard defines VLAN bridge operation, which allows VLAN topologies to be defined, operated, and managed within a bridged LAN infrastructure. The standard is mainly intended to solve the problem of dividing a large network into several smaller ones so that broadcast and multicast traffic does not consume excessive bandwidth; it also provides greater security between network segments. The key to how IEEE 802.1q accomplishes these functions is the tag. A switch port that supports IEEE 802.1q can be configured to transmit tagged or untagged Ethernet frames; a tag field containing VLAN information can be inserted into an Ethernet frame. If a port is connected to a device that supports IEEE 802.1q (such as another switch), these tagged frames carry VLAN membership information between switches, which allows a VLAN to span multiple switches. However, ports connected to devices that do not support IEEE 802.1q must be configured to transmit untagged frames.

  In IEEE 802.1q, the maximum legal Ethernet frame size for tagged frames is increased from 1518 bytes to 1522 bytes, which can cause NICs and older switches to drop tagged Ethernet frames as "oversized".

4. IEEE 802.1w

  To address the STP shortcomings described above, the IEEE released the 802.1w standard at the beginning of the 21st century. It is also a spanning tree protocol, called the "Rapid Spanning Tree Protocol," and serves as a supplement to the 802.1D standard. IEEE 802.1w was defined because, although the IEEE 802.1d protocol solved the loop problem caused by closed links, convergence of the spanning tree still took a fairly long time.

1. How the Protocol Works

  IEEE 802.1w RSTP is characterized by incorporating many of Cisco's value-added spanning tree extensions, such as PortFast, UplinkFast, and BackboneFast, into the original IEEE 802.1d. By using a proactive bridge-to-bridge handshake mechanism in place of the timer functions defined for the root bridge in IEEE 802.1d, IEEE 802.1w provides rapid failure recovery for a switch (bridge), a switch port (bridge port), or an entire LAN.

  RSTP makes the following two important improvements over STP, which make convergence much faster (under one second at best).

  IEEE 802.1w defines two port roles for fast switchover: the alternate port (Alternate Port) and the backup port (Backup Port). When the root port or a designated port fails, the alternate or backup port enters the forwarding state without delay.

  Reduced forwarding delay

  With the RSTP protocol, on a point-to-point link that connects only two switch ports, a designated port needs only one handshake with the downstream bridge to enter the forwarding state without delay.

2. Shortcomings of the IEEE 802.1w Standard

  The main shortcomings of RSTP show up in three areas:

  Because the entire switched network has only one spanning tree, convergence takes longer when the network is large, and topology changes affect a wider area.

  When the network structure is asymmetric, RSTP's single spanning tree can impair network connectivity.

  Once a link is blocked it carries no traffic at all, which wastes a great deal of bandwidth; this is especially noticeable in ring metropolitan area networks.

5. IEEE 802.1s

  The Multiple Spanning Tree (MSTP) technology in the IEEE 802.1s standard extends the IEEE 802.1w rapid single spanning tree (RSTP) algorithm to multiple spanning trees. This provides rapid convergence and load balancing in virtual LAN (VLAN) environments, and it is an extension of the IEEE 802.1 VLAN tagging protocol.

1. How MSTP Works

  IEEE 802.1s introduces the IST (Internal Spanning Tree) concept and MST instances. The IST is an RSTP instance that extends the 802.1D single spanning tree inside an MST region. The IST connects all MST bridges and emanates from the boundary ports, acting as a virtual bridge that spans the entire bridged domain. An MST instance (MSTI) is an RSTP instance that exists only inside a region. It runs RSTP by default and needs no extra configuration. Unlike the IST, an MSTI neither receives nor sends BPDUs outside its region. MSTP can interoperate with legacy and PVST+ switches.

  With MSTP, multiple spanning trees can be established over trunks, VLANs can be associated with the appropriate spanning tree instance, and each instance has a topology independent of the others. MSTP also provides multiple data forwarding paths and load balancing, which improves the network's fault tolerance, because a failure in one instance (forwarding path) does not affect the other instances (forwarding paths).

  Every switch running MST has a single configuration, consisting of an alphanumeric configuration name, a configuration revision number, and a 4096-element table that associates each of the 4096 potentially supported VLANs with an instance. To be part of a common MSTP region, a group of switches must share the same configuration attributes. It is important to remember that switches with different configuration attributes are considered to be in different regions.

2. Main Features of MSTP

  MSTP has the following features:

  MSTP runs an IST instance within each MST region;

  A bridge running MSTP interoperates with single spanning tree bridges;

  MSTP establishes and maintains additional spanning trees within each region.

6. IEEE 802.1x

  IEEE 802.1x is also known as the port-based network access control protocol. Its architecture includes three important parts: the Supplicant System (client system), the Authenticator System (authentication system), and the Authentication Server System.

  The client system is generally a user terminal system. The terminal usually has a piece of client software installed, and the user launches this client software to initiate the IEEE 802.1x authentication process.

  The authentication system is usually a network device that supports the IEEE 802.1x protocol. For each user-facing port, the device maintains two logical ports: a controlled port (controlled Port) and an uncontrolled port (uncontrolled Port).

  The authentication server system is usually a RADIUS server, which can store information about users, such as the user's VLAN, CAR parameters, priority, and access control lists.

IEEE 802.1S and Cisco MSTI

IEEE 802.1s
The Multiple Spanning Tree (MST) technology in the IEEE 802.1s standard extends the IEEE 802.1w rapid single spanning tree (RST) algorithm to multiple spanning trees, providing rapid convergence and load balancing in virtual LAN (VLAN) environments; it is an extension of the IEEE 802.1 VLAN tagging protocol.
1. How MST Works

IEEE 802.1s introduces the IST (Internal Spanning Tree) concept and MST instances. The IST is an RSTP instance that extends the 802.1D single spanning tree inside an MST region. The IST connects all MST bridges and emanates from the boundary ports, acting as a virtual bridge that spans the entire bridged domain. An MST instance (MSTI) is an RSTP instance that exists only inside a region. It runs RSTP by default and needs no extra configuration. Unlike the IST, an MSTI neither receives nor sends BPDUs outside its region. MST can interoperate with legacy PVST+ switches. The Cisco implementation defines 16 instances: one IST (instance 0) and 15 MSTIs, while IEEE 802.1s supports one IST and 63 MSTIs.
Both RSTP and MSTP can interoperate with legacy spanning tree protocols. However, the fast convergence benefit of IEEE 802.1w is lost when they interact with legacy bridges. To preserve backward compatibility with IEEE 802.1d-based bridges, an IEEE 802.1s bridge listens on its ports for BPDUs (bridge protocol data units) in IEEE 802.1d format. If an IEEE 802.1d BPDU is received, the port adopts standard IEEE 802.1d behavior to ensure compatibility.
With MST, multiple spanning trees can be established over trunks, VLANs can be associated with the appropriate spanning tree instance, and each instance has a topology independent of the others. MST also provides multiple data forwarding paths and load balancing, which improves the network's fault tolerance, because a failure in one instance (forwarding path) does not affect the other instances (forwarding paths).
Every switch running MST has a single configuration, consisting of an alphanumeric configuration name, a configuration revision number, and a 4096-element table that associates each of the 4096 potentially supported VLANs with an instance. To be part of a common MST region, a group of switches must share the same configuration attributes. It is important to remember that switches with different configuration attributes are considered to be in different regions.
In different parts of a large network, using MST to localize the assignment of VLANs and spanning tree instances makes the network easier to manage and makes better use of redundant paths. A spanning tree instance can only exist among bridges that have a consistent VLAN-to-instance assignment, so a group of bridges must be configured with the same MST configuration information for them to participate in a given set of spanning tree instances; interconnected bridges with the same MST configuration information form an MST region.
To ensure a consistent VLAN-to-instance mapping, the protocol must be able to identify the boundaries of a region, so the characteristics of the region are included in BPDUs. Switches need to know whether they are in the same region as their neighbors, so each sends a digest of its VLAN-to-instance mapping table together with a revision number and a name. When a switch receives a BPDU, it extracts the digest and compares it with its own computed value. To avoid spanning tree loops, if the parameters two switches receive in their BPDUs do not match, the port that received the BPDU is declared a boundary port.

2. Main Features of MST
Multiple Spanning Tree (MST) uses a modified Rapid Spanning Tree Protocol (RSTP), the Multiple Spanning Tree Protocol (MSTP). MST has the following features.
(1) MST runs an IST instance (instance 0) within each MST region
The MST protocol runs a spanning tree instance called the Internal Spanning Tree (IST). The IST consolidates the internal information of the MST region and adds Common Spanning Tree (CST) information in order to communicate with other MST regions or the CST; to neighboring single spanning tree (SST) bridges and other MST regions, the whole MST region appears as a single bridge.

(2) A bridge running MST interoperates with single spanning tree bridges
In the MST protocol, the Internal Spanning Tree (IST) connects all the MST bridges in a region and is a subtree of the Common Spanning Tree (CST). The common spanning tree encompasses the entire bridged domain, and an MST region appears as a virtual bridge to neighboring single spanning tree (SST) bridges and other MST regions. The Common and Internal Spanning Tree (CIST) is the collection of each MST region's IST, the common spanning tree that interconnects the MST regions, and the single spanning tree bridges; it is identical to an IST inside an MST region and to the CST outside one. STP, RSTP, and MSTP together elect a single bridge as the root of the Common and Internal Spanning Tree (CIST).

(3) MST establishes and maintains additional spanning trees within each region
These additional spanning trees are the MST instances (MSTIs). The IST is instance number 0, and the MSTIs are numbered 1, 2, 3, and so on. Even when MST regions are interconnected, every MSTI is local to its MST region and independent of the MSTIs of any other region. The MST instances and the IST combine at the boundary of an MST region to form the CST. The spanning tree information of an MSTI is carried in MSTP records (M-records), which are always encapsulated in MST BPDUs. The original spanning trees computed by MSTP are called M-trees; an M-tree is active only within its MST region, and the M-trees merge with the IST at the boundary of the MST region to form the CST.


 

Cisco Reference

MST Region

As previously mentioned, the main enhancement introduced by MST is that several VLANs can be mapped to a single spanning tree instance. This raises the problem of how to determine which VLAN is to be associated with which instance; more precisely, how to tag BPDUs so that the receiving devices can identify the instances and the VLANs to which each BPDU applies.

The issue is irrelevant in the case of the 802.1q standard, where all VLANs are mapped to a unique instance. In the PVST+ implementation, the association is as follows:

  • Different VLANs carry the BPDUs for their respective instance (one BPDU per VLAN).

To solve this problem, the Cisco MISTP sent a BPDU for each instance that included the list of VLANs the BPDU was responsible for. If two switches were misconfigured by error and had different ranges of VLANs associated with the same instance, it was difficult for the protocol to recover properly from this situation.

The IEEE 802.1s committee adopted a much easier and simpler approach that introduced MST regions. Think of a region as the equivalent of Border Gateway Protocol (BGP) Autonomous Systems, which is a group of switches placed under a common administration.

MST Configuration and MST Region

Each switch running MST in the network has a single MST configuration that consists of these three attributes:

  1. An alphanumeric configuration name (32 bytes)
  2. A configuration revision number (two bytes)
  3. A 4096-element table that associates each of the potential 4096 VLANs supported on the chassis to a given instance

In order to be part of a common MST region, a group of switches must share the same configuration attributes. It is up to the network administrator to properly propagate the configuration throughout the region. Currently, this step is only possible by the means of the command line interface (CLI) or through Simple Network Management Protocol (SNMP). Other methods can be envisioned, as the IEEE specification does not explicitly mention how to accomplish that step.
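
On Cisco IOS, for instance, the three attributes are set in the MST configuration submode; the name, revision number, and VLAN ranges below are purely illustrative:

```
spanning-tree mst configuration
 name REGION1
 revision 10
 instance 1 vlan 10, 20
 instance 2 vlan 30, 40
```

Every switch that should belong to the same region must carry an identical name, revision number, and VLAN-to-instance table.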

Note: If for any reason two switches differ on one or more configuration attribute, the switches are part of different regions. For more information refer to the Region Boundary section of this document.

Region Boundary

In order to ensure consistent VLAN-to-instance mapping, it is necessary for the protocol to be able to exactly identify the boundaries of the regions. For that purpose, the characteristics of the region are included in the BPDUs. The exact VLANs-to-instance mapping is not propagated in the BPDU, because the switches only need to know whether they are in the same region as a neighbor. Therefore, only a digest of the VLANs-to-instance mapping table is sent, along with the revision number and the name. Once a switch receives a BPDU, the switch extracts the digest (a numerical value derived from the VLAN-to-instance mapping table through a mathematical function) and compares this digest with its own computed digest. If the digests differ, the port on which the BPDU was received is at the boundary of a region.
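
The comparison logic can be sketched in Python. The standard specifies an HMAC-MD5 digest of the 4096-entry mapping table computed with a key defined in 802.1s; the key and table encoding used here are simplified placeholders to illustrate the idea, not the normative computation:

```python
import hmac
import hashlib

# Placeholder key: the real key is fixed by the 802.1s standard.
DIGEST_KEY = b"\x00" * 16

def region_digest(vlan_to_instance: dict) -> bytes:
    """Digest a VLAN-to-instance table: serialize the instance number
    of all 4096 VLANs (0 when unmapped) and HMAC-MD5 the result."""
    table = b"".join(
        vlan_to_instance.get(v, 0).to_bytes(2, "big") for v in range(4096)
    )
    return hmac.new(DIGEST_KEY, table, hashlib.md5).digest()

def same_region(cfg_a: tuple, cfg_b: tuple) -> bool:
    """Each cfg is (name, revision, digest); all three must match."""
    return cfg_a == cfg_b
```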

In generic terms, a port is at the boundary of a region if the designated bridge on its segment is in a different region or if it receives legacy 802.1d BPDUs. In this diagram, the port on B1 is at the boundary of region A, whereas the ports on B2 and B3 are internal to region B:


MST Instances

According to the IEEE 802.1s specification, an MST bridge must be able to handle at least these two instances:

  • One Internal Spanning Tree (IST)
  • One or more Multiple Spanning Tree Instance(s) (MSTIs)

The terminology continues to evolve, as 802.1s is actually in a pre-standard phase. It is likely these names will change in the final release of 802.1s. The Cisco implementation supports 16 instances: one IST (instance 0) and 15 MSTIs.

IST Instances

In order to clearly understand the role of the IST instance, remember that MST originates from the IEEE. Therefore, MST must be able to interact with 802.1d-based networks, because 802.1d is another IEEE standard. For 802.1d, a bridged network only implements a single spanning tree (CST). The IST instance is simply an RSTP instance that extends the CST inside the MST region.

The IST instance receives and sends BPDUs to the CST. The IST can represent the entire MST region as a CST virtual bridge to the outside world.

These are two functionally equivalent diagrams. Notice the location of the different blocked ports. In a typical bridged network, you expect to see a blocked port between Switches M and B. Instead of blocking on D, you expect to have the second loop broken by a blocked port somewhere in the middle of the MST region. However, due to the IST, the entire region appears as one virtual bridge that runs a single spanning tree (CST). This makes it possible to understand that the virtual bridge blocks an alternate port on B, and that the virtual bridge is also on the C to D segment, which leads Switch D to block its port.


The exact mechanism that makes the region appear as one virtual CST bridge is beyond the scope of this document, but is amply described in the IEEE 802.1s specification. However, if you keep this virtual bridge property of the MST region in mind, the interaction with the outside world is much easier to understand.

MSTIs

The MSTIs are simple RSTP instances that only exist inside a region. These instances run the RSTP automatically by default, without any extra configuration work. Unlike the IST, MSTIs never interact with the outside of the region. Remember that MST only runs one spanning tree outside of the region, so except for the IST instance, regular instances inside of the region have no outside counterpart. Additionally, MSTIs do not send BPDUs outside a region, only the IST does.

MSTIs do not send independent individual BPDUs. Inside the MST region, bridges exchange MST BPDUs that can be seen as normal RSTP BPDUs for the IST while containing additional information for each MSTI. This diagram shows a BPDU exchange between Switches A and B inside an MST region. Each switch only sends one BPDU, but each includes one MRecord per MSTI present on the ports.


Note: In this diagram, notice that the first information field carried by an MST BPDU contains data about the IST. This implies that the IST (instance 0) is always present everywhere inside an MST region. However, the network administrator does not have to map VLANs onto instance 0, and therefore this is not a source of concern.

Unlike a regular converged spanning tree topology, both ends of a link can send and receive BPDUs simultaneously. This is because, as shown in this diagram, each bridge can be designated for one or more instances and needs to transmit BPDUs. As soon as a single MST instance is designated on a port, a BPDU that contains the information for all instances (IST + MSTIs) is sent. The diagram shown here demonstrates MST BPDUs sent inside and outside of an MST region:


The MRecord contains enough information (mostly root bridge and sender bridge priority parameters) for the corresponding instance to calculate its final topology. The MRecord does not need any timer-related parameters such as hello time, forward delay, and max age that are typically found in a regular IEEE 802.1d or 802.1q CST BPDU. The only instance in the MST region to use these parameters is the IST; the hello time determines how frequently BPDUs are sent, and the forward delay parameter is mainly used when rapid transition is not possible (remember that rapid transitions do not occur on shared links). As MSTIs depend on the IST to transmit their information, MSTIs do not need those timers.

Common Misconfigurations (note these common mistakes when you deploy MST)

The independence between instance and VLAN is a new concept that implies you must carefully plan your configuration. The IST Instance is Active on All Ports, Whether Trunk or Access section illustrates some common pitfalls and how to avoid them.

IST Instance is Active on All Ports, Whether Trunk or Access

This diagram shows Switches A and B connected with access ports each located in different VLANs. VLAN 10 and VLAN 20 are mapped to different instances. VLAN 10 is mapped to instance 0, while VLAN 20 is mapped to instance 1.


This configuration results in pcA's inability to send frames to pcB. The show command reveals that Switch B is blocking the link to Switch A in VLAN 10, as shown in this diagram:


How is that possible in such a simple topology, with no apparent loop?

This issue is explained by the fact that MST information is conveyed with only one BPDU (IST BPDU), regardless of the number of internal instances. Individual instances do not send individual BPDUs. When Switch A and Switch B exchange STP information for VLAN 20, the switches send an IST BPDU with an MRecord for instance 1 because that is where VLAN 20 is mapped. However, because it is an IST BPDU, this BPDU also contains information for instance 0. This means that the IST instance is active on all ports inside an MST region, whether these ports carry VLANs mapped to the IST instance or not.

This diagram shows the logical topology of the IST instance:


Switch B receives two BPDUs for instance 0 from Switch A (one on each port). It is clear that Switch B has to block one of its ports in order to avoid a loop.

The preferred solution is to use one instance for VLAN 10 and another instance for VLAN 20 to avoid mapping VLANs to the IST instance.

An alternative is to carry those VLANs mapped to the IST on all links (allow VLAN 10 on both ports, as in this diagram).

Two VLANs Mapped to the Same Instance Block the Same Ports

Remember that VLAN no longer means spanning tree instance. The topology is determined by the instance, regardless of the VLANs mapped to it. This diagram shows a problem that is a variant of the one discussed in the IST Instance is Active on All Ports, Whether Trunk or Access section:


Suppose that VLANs 10 and 20 are both mapped to the same instance (instance 1). The network administrator wants to manually prune VLAN 10 on one Uplink and VLAN 20 on the other in order to restrict traffic on the Uplink trunks from Switch A to distribution Switches D1 and D2 (an attempt to achieve a topology as described in the previous diagram). Shortly after this is completed, the network administrator notices that users in VLAN 20 have lost connectivity to the network.

This is a typical misconfiguration problem. VLANs 10 and 20 are both mapped to instance 1, which means there is only one logical topology for both VLANs. Load-sharing cannot be achieved, as shown here:


Because of the manual pruning, VLAN 20 is only allowed on the blocked port, which explains the loss of connectivity. In order to achieve load balancing, the network administrator must map VLAN 10 and 20 to two different instances.

A simple rule to follow to steer clear of this problem is to never manually prune VLANs off a trunk. If you decide to remove some VLANs off a trunk, remove all the VLANs mapped to a given instance together; never remove an individual VLAN from a trunk without also removing every other VLAN mapped to the same instance.
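
A hedged IOS sketch of the fix: map each VLAN to its own instance, then make each distribution switch the preferred root for one of the instances (instance numbers and priority values are illustrative):

```
! On both switches: give VLAN 10 and VLAN 20 separate instances.
spanning-tree mst configuration
 instance 1 vlan 10
 instance 2 vlan 20
exit
! On distribution switch D1: prefer it as root for instance 1.
spanning-tree mst 1 priority 4096
! On distribution switch D2 you would instead use:
! spanning-tree mst 2 priority 4096
```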

Interaction Between the MST Region and the Outside World

With a migration to an MST network, the administrator is likely to have to deal with interoperability issues between MST and legacy protocols. MST seamlessly interoperates with standard 802.1q CST networks; however, only a handful of networks are based on the 802.1q standard because of its single spanning tree restriction. Cisco released PVST+ at the same time as support for 802.1q was announced. Cisco also provides an efficient yet simple compatibility mechanism between MST and PVST+. This mechanism is explained later in this document.

The first property of an MST region is that at the boundary ports no MSTI BPDUs are sent out, only IST BPDUs are. Internal instances (MSTIs) always automatically follow the IST topology at boundary ports, as shown in this diagram:


In this diagram, assume VLANs 10 through 50 are mapped to the green instance, which is an internal instance (MSTI) only. The red links represent the IST, and therefore also represent the CST. VLANs 10 through 50 are allowed everywhere in the topology. BPDUs for the green instance are not sent out of the MST region. This does not mean that there is a loop in VLANs 10 through 50. MSTIs follow the IST at the boundary ports, and the boundary port on Switch B also blocks traffic for the green instance.

Switches that run MST are able to automatically detect PVST+ neighbors at boundaries. These switches are able to detect that multiple BPDUs are received on different VLANs of a trunk port for the instance.

This diagram shows an interoperability issue. An MST region only interacts with one spanning tree (the CST) outside of the region. However, PVST+ bridges run one Spanning Tree Algorithm (STA) per VLAN, and as a result, send one BPDU on each VLAN every two seconds. The boundary MST bridge does not expect to receive that many BPDUs. The MST bridge either expects to receive one or to send one, depending on whether the bridge is the root of the CST or not.


Cisco developed a mechanism to address the problem shown in this diagram. A possibility could have consisted of tunneling the extra BPDUs sent by the PVST+ bridges across the MST region. However, this solution has proven to be too complex and potentially dangerous when first implemented in the MISTP. A simpler approach was created. The MST region replicates the IST BPDU on all the VLANs to simulate a PVST+ neighbor. This solution implies a few constraints that are discussed in this document.

Recommended Configuration

As the MST region now replicates the IST BPDUs on every VLAN at the boundary, each PVST+ instance hears a BPDU from the IST root (this implies the root is located inside the MST region). It is recommended that the IST root have a higher priority than any other bridge in the network so that the IST root becomes the root for all of the different PVST+ instances, as shown in this diagram:


In this diagram, Switch C is a PVST+ bridge redundantly connected to an MST region. The IST root is the root for all PVST+ instances that exist on Switch C. As a result, Switch C blocks one of its Uplinks in order to prevent loops. In this particular case, interaction between PVST+ and the MST region is optimal because:

  • Switch C’s Uplink ports’ costs can be tuned to achieve load balancing of the different VLANs across the Uplink ports (because Switch C runs one spanning tree per VLAN, it is able to choose which Uplink port blocks on a per-VLAN basis).
  • UplinkFast can be used on Switch C to achieve fast convergence in case of an Uplink failure.

Alternate Configuration (Not Recommended)

Another possibility is to have the MST region be the root for no PVST+ instance at all. This means that all PVST+ instances have a better root than the IST instance, as shown in this diagram:


This case corresponds to a PVST+ core and an MST access or distribution layer, a rather infrequent scenario. If you establish the root bridge outside the region, there are these drawbacks as compared to the previously recommended configuration:

  • An MST region only runs one spanning tree instance that interacts with the outside world. This basically means that a boundary port can only be blocking or forwarding for all VLANs. In other terms, there is no load balancing possible between the region’s two Uplinks that lead to Switch C. The Uplink on Switch B for the instance will be blocking for all VLANs while Switch A will be forwarding for all VLANs.
  • This configuration also has implications for fast convergence inside the region. If the Uplink on Switch A fails, a fast switchover to an Uplink on a different switch needs to be achieved. While the way the IST behaves inside the region in order to make the whole MST region resemble a CST bridge is not discussed in detail here, you can imagine that a switchover across a region is never as efficient as a switchover on a single bridge.

Invalid Configuration

While the PVST+ emulation mechanism provides easy and seamless interoperability between MST and PVST+, this mechanism implies that any configuration other than the two previously mentioned is invalid. These are the basic rules that must be followed to get a successful MST and PVST+ interaction:

  1. If the MST bridge is the root, this bridge must be the root for all VLANs.
  2. If the PVST+ bridge is the root, this bridge must be the root for all VLANs (including the CST, which always runs on VLAN 1, regardless of the native VLAN, when the CST runs PVST+).
  3. The simulation fails and produces an error message if the MST bridge is the root for the CST, while the PVST+ bridge is the root for one or more other VLANs. A failed simulation puts the boundary port in root inconsistent mode.


In this diagram, Bridge A in the MST region is the root for all three PVST+ instances except one (the red VLAN). Bridge C is the root of the red VLAN. Suppose that the loop created on the red VLAN, where Bridge C is the root, becomes blocked by Bridge B. This means that Bridge B is designated for all VLANs except the red one. An MST region is not able to do that. A boundary port can only be blocking or forwarding for all VLANs because the MST region is only running one spanning tree with the outside world. Thus, when Bridge B detects a better BPDU on its boundary port, the bridge invokes the BPDU guard to block this port. The port is placed in the root inconsistent mode. The exact same mechanism also leads Bridge A to block its boundary port. Connectivity is lost; however, a loop-free topology is preserved even in the presence of such a misconfiguration.

Note: As soon as a boundary port produces a root inconsistent error, investigate whether a PVST+ bridge has attempted to become the root for some VLANs.

Migration Strategy

The first step in the migration to 802.1s/w is to properly identify point-to-point and edge ports. Ensure all switch-to-switch links on which a rapid transition is desired are full-duplex. Edge ports are defined through the PortFast feature. Then:

  • Carefully decide how many instances are needed in the switched network, and keep in mind that an instance translates to a logical topology.
  • Decide what VLANs to map onto those instances, and carefully select a root and a back-up root for each instance.
  • Choose a configuration name and a revision number that are common to all switches in the network.
  • Cisco recommends that you place as many switches as possible into a single region; it is not advantageous to segment a network into separate regions. Avoid mapping any VLANs onto instance 0.
  • Migrate the core first: change the STP type to MST, and work your way down to the access switches. MST can interact with legacy bridges that run PVST+ on a per-port basis, so it is not a problem to mix both types of bridges if the interactions are clearly understood.
  • Always try to keep the root of the CST and IST inside the region. If you interact with a PVST+ bridge through a trunk, ensure the MST bridge is the root for all VLANs allowed on that trunk.
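
A minimal IOS sketch of these migration steps; the region name, revision, and VLAN ranges are examples only, and the syntax should be verified against your platform's documentation:

```
! Switch the STP type to MST and define the region attributes.
spanning-tree mode mst
spanning-tree mst configuration
 name REGION1
 revision 10
 instance 1 vlan 10-500
 instance 2 vlan 501-1000
exit
! Keep the CST/IST root inside the region (instance 0 is the IST).
spanning-tree mst 0 root primary
```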


802.1D vs 802.1W

New Port States and Port Roles

The 802.1D standard defines these four different port states:

  • Listening (exchanges BPDUs to determine the role the port will play)
  • Learning (learns the source MAC addresses of received frames to build the CAM table)
  • Blocking (in normal operation, a port that does not forward data frames)
  • Forwarding (in normal operation, the state in which data frames are forwarded)

See the table in the Port States section of this document for more information.

The 802.1D port state mixes two things: whether the port blocks or forwards traffic, and the role it plays in the active topology (root port, designated port, and so on). For example, from an operational point of view, there is no difference between a port in the blocking state and a port in the listening state: both discard frames and do not learn MAC addresses. The real difference lies in the role the spanning tree assigns to the port. It can safely be assumed that a listening port is either designated or root and is on its way to the forwarding state. Unfortunately, once in the forwarding state, there is no way to infer from the port state whether the port is root or designated. This demonstrates the shortcoming of this state-based terminology. RSTP addresses this issue by decoupling the role and the state of a port.

Port States

There are only three port states left in RSTP that correspond to the three possible operational states. The 802.1D disabled, blocking, and listening states are merged into a unique 802.1w discarding state.

STP (802.1D) Port State | RSTP (802.1w) Port State | Is Port Included in Active Topology? | Is Port Learning MAC Addresses?
Disabled                | Discarding               | No                                   | No
Blocking                | Discarding               | No                                   | No
Listening               | Discarding               | Yes                                  | No
Learning                | Learning                 | Yes                                  | Yes
Forwarding              | Forwarding               | Yes                                  | Yes

Port Roles

The role is now a variable assigned to a given port. The root port and designated port roles remain, while the blocking port role is split into the backup and alternate port roles. The Spanning Tree Algorithm (STA) determines the role of a port based on Bridge Protocol Data Units (BPDUs). In order to simplify matters, the thing to remember about a BPDU is that there is always a method to compare any two of them and decide whether one is more useful than the other. This comparison is based on the value stored in the BPDU and occasionally on the port on which the BPDU is received. With this in mind, the information in this section explains practical approaches to port roles.
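
This ordering can be sketched directly in Python, since the comparison is lexicographic over (root bridge ID, root path cost, sender bridge ID, sender port ID), with lower values winning at each step; the numeric IDs below are illustrative:

```python
def better_bpdu(a: tuple, b: tuple) -> tuple:
    """Return the more useful of two BPDUs, each represented as
    (root_bridge_id, root_path_cost, sender_bridge_id, sender_port_id).
    Python tuple ordering models the lexicographic comparison directly."""
    return a if a < b else b

def elect_root_port(bpdus_by_port: dict) -> int:
    """The root port is the port that received the best BPDU."""
    return min(bpdus_by_port, key=bpdus_by_port.get)
```

A port whose best received BPDU is superior to the one it would send stays blocked, which is exactly the condition behind the alternate and backup roles.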

Root Port Roles

  • The port that receives the best BPDU on a bridge is the root port. This is the port that is the closest to the root bridge in terms of path cost. The STA elects a single root bridge in the whole bridged network (per-VLAN). The root bridge sends BPDUs that are more useful than the ones any other bridge sends. The root bridge is the only bridge in the network that does not have a root port. All other bridges receive BPDUs on at least one port.


Designated Port Role

  • A port is designated if it can send the best BPDU on the segment to which it is connected. 802.1D bridges link together different segments, such as Ethernet segments, to create a bridged domain. On a given segment, there can only be one path toward the root bridge; if there were two, there would be a bridging loop in the network. All bridges connected to a given segment listen to one another's BPDUs and agree that the bridge that sends the best BPDU is the designated bridge for the segment. The corresponding port on that bridge is the designated port for that segment.


Alternate and Backup Port Roles

  • These two port roles correspond to the blocking state of 802.1D. A blocked port is defined as a port that is neither the designated nor the root port. A blocked port receives a more useful BPDU than the one it sends out on its segment. Remember that a port absolutely needs to receive BPDUs in order to stay blocked. RSTP introduces these two roles for this purpose.
  • An alternate port is a blocked port that receives more useful BPDUs from another bridge. This is shown in this diagram:


  • A backup port is a blocked port that receives more useful BPDUs from the same bridge it is on. This is shown in this diagram:


This distinction is already made internally within 802.1D. This is essentially how Cisco UplinkFast functions. The rationale is that an alternate port provides an alternate path to the root bridge and therefore can replace the root port if it fails. Of course, a backup port provides redundant connectivity to the same segment and cannot guarantee an alternate connectivity to the root bridge. Therefore, it is excluded from the uplink group.
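The classification above can be sketched as a simple rule (an illustrative sketch, not the standard's state machine): a blocked port whose best received BPDU comes from another bridge is an alternate port, while one whose best received BPDU comes from the same bridge it is on is a backup port:

```python
# Hypothetical sketch: classify a blocked RSTP port by comparing the
# sender bridge ID of the best BPDU received on it with our own bridge ID.
def blocked_port_role(received_sender_id: int, my_bridge_id: int) -> str:
    if received_sender_id == my_bridge_id:
        return "backup"      # better BPDU came from another port on this same bridge
    return "alternate"       # better BPDU came from a different bridge
```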

As a result, RSTP calculates the final topology for the spanning tree with the same criteria as 802.1D. There is absolutely no change in the way the different bridge and port priorities are used. The name blocking is used for the discarding state in the Cisco implementation. CatOS releases 7.1 and later still display the listening and learning states, which gives even more information about a port than the IEEE standard requires. However, there is now a distinction between the role the protocol determines for a port and its current state. For example, it is now perfectly valid for a port to be designated and blocking at the same time. While this typically occurs for very short periods of time, it simply means that this port is in a transitory state towards the designated forwarding state.

New BPDU Format

Few changes have been introduced by RSTP to the BPDU format. Only two flags, Topology Change (TC) and TC Acknowledgment (TCA), are defined in 802.1D. However, RSTP now uses all six remaining bits of the flag byte in order to:

  • Encode the role and state of the port that originates the BPDU
  • Handle the proposal/agreement mechanism
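
Assuming the bit layout defined by 802.1w (TC in the lowest bit, Proposal next, the port role encoded in the following two bits, then Learning, Forwarding, Agreement, and TCA), a sketch of how the flag byte could be built looks like this (the function and dictionary names are illustrative):

```python
# Sketch of the RSTP flag-byte layout: TC=0x01, Proposal=0x02,
# port role in bits 2-3 (0 unknown, 1 alternate/backup, 2 root,
# 3 designated), Learning=0x10, Forwarding=0x20, Agreement=0x40, TCA=0x80.
ROLE_CODES = {"unknown": 0, "alternate": 1, "backup": 1, "root": 2, "designated": 3}

def encode_flags(role: str, learning: bool, forwarding: bool,
                 tc: bool = False, proposal: bool = False,
                 agreement: bool = False, tca: bool = False) -> int:
    flags = ROLE_CODES[role] << 2   # port role occupies bits 2-3
    flags |= 0x01 if tc else 0
    flags |= 0x02 if proposal else 0
    flags |= 0x10 if learning else 0
    flags |= 0x20 if forwarding else 0
    flags |= 0x40 if agreement else 0
    flags |= 0x80 if tca else 0
    return flags
```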


Another important change is that the RSTP BPDU is now of type 2, version 2. The implication is that legacy bridges must drop this new BPDU. This property makes it easy for an 802.1w bridge to detect legacy bridges connected to it.

New BPDU Handling

BPDUs Are Sent Every Hello-Time

BPDUs are sent every hello-time, and are not simply relayed anymore. With 802.1D, a non-root bridge only generates BPDUs when it receives one on the root port. In fact, a bridge relays BPDUs more than it actually generates them. This is not the case with 802.1w. A bridge now sends a BPDU with its current information every <hello-time> seconds (2 by default), even if it does not receive any from the root bridge.

Faster Aging of Information

On a given port, if hellos are not received three consecutive times, protocol information can be immediately aged out (or if max_age expires). Because of the previously mentioned protocol modification, BPDUs are now used as a keep-alive mechanism between bridges. A bridge considers that it loses connectivity to its direct neighbor root or designated bridge if it misses three BPDUs in a row. This fast aging of the information allows quick failure detection. If a bridge fails to receive BPDUs from a neighbor, it is certain that the connection to that neighbor is lost. This is opposed to 802.1D where the problem might have been anywhere on the path to the root.
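A minimal sketch of this keep-alive behavior (the class and method names are hypothetical) might look like:

```python
# Hypothetical sketch: age out stored protocol information after three
# consecutive missed hellos (hello time is 2 seconds by default).
HELLO_TIME = 2.0
MAX_MISSED = 3

class PortInfo:
    def __init__(self):
        self.missed = 0
        self.valid = True

    def on_bpdu_received(self):
        self.missed = 0          # any BPDU resets the keep-alive counter
        self.valid = True

    def on_hello_timer(self):
        # called every HELLO_TIME seconds when no BPDU arrived in the interval
        self.missed += 1
        if self.missed >= MAX_MISSED:
            self.valid = False   # neighbor presumed lost: age the info out
```

With these defaults, a failed neighbor is detected in roughly 3 × 2 = 6 seconds, instead of waiting for the 20-second max_age of 802.1D.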

Note: Failures are detected even faster in the case of physical link failures.

Accepts Inferior BPDUs

This concept is what makes up the core of the BackboneFast engine. The IEEE 802.1w committee decided to incorporate a similar mechanism into RSTP. When a bridge receives inferior information from its designated or root bridge, it immediately accepts it and replaces the information previously stored.


Because Bridge C still knows the root is alive and well, it immediately sends a BPDU to Bridge B that contains information about the root bridge. As a result, Bridge B does not send its own BPDUs and accepts the port that leads to Bridge C as the new root port.

Rapid Transition to Forwarding State

Rapid transition is the most important feature introduced by 802.1w. The legacy STA passively waited for the network to converge before it turned a port into the forwarding state. The achievement of faster convergence was a matter of tuning the conservative default parameters (forward delay and max_age timers) and often put the stability of the network at stake. The new rapid STP is able to actively confirm that a port can safely transition to the forwarding state without having to rely on any timer configuration. There is now a real feedback mechanism that takes place between RSTP-compliant bridges. In order to achieve fast convergence on a port, the protocol relies upon two new variables: edge ports and link type.

Edge Ports

The edge port concept is already well known to Cisco spanning tree users, as it basically corresponds to the PortFast feature. All ports directly connected to end stations cannot create bridging loops in the network. Therefore, the edge port directly transitions to the forwarding state, and skips the listening and learning stages. Neither edge ports nor PortFast-enabled ports generate topology changes when the link toggles. Unlike PortFast, an edge port that receives a BPDU immediately loses edge port status and becomes a normal spanning tree port. At this point, there is a user-configured value and an operational value for the edge port state. The Cisco implementation keeps the PortFast keyword for edge port configuration, which makes the transition to RSTP simpler.

Link Type

RSTP can only achieve rapid transition to the forwarding state on edge ports and on point-to-point links. The link type is automatically derived from the duplex mode of a port. A port that operates in full-duplex is assumed to be point-to-point, while a half-duplex port is considered as a shared port by default. This automatic link type setting can be overridden by explicit configuration. In switched networks today, most links operate in full-duplex mode and are treated as point-to-point links by RSTP. This makes them candidates for rapid transition to the forwarding state.
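On Cisco IOS switches, for example, both parameters can be set explicitly with the following interface-level commands (the interface names are illustrative):

```
interface GigabitEthernet0/1
 ! force point-to-point even if the duplex-based default says otherwise
 spanning-tree link-type point-to-point
!
interface GigabitEthernet0/2
 ! edge port: RSTP reuses the PortFast keyword
 spanning-tree portfast
```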

Convergence with 802.1D

This diagram illustrates how 802.1D deals with a new link that is added to a bridged network:


In this scenario, a link between the root bridge and Bridge A is added. Suppose there already is an indirect connection between Bridge A and the root bridge (via C – D in the diagram). The STA blocks a port and disables the bridging loop. First, as they come up, both ports on the link between the root and Bridge A are put in the listening state. Bridge A is now able to hear the root directly. It immediately propagates its BPDUs on the designated ports, toward the leaves of the tree. As soon as Bridges B and C receive this new superior information from Bridge A, they immediately relay the information towards the leaves. In a few seconds, Bridge D receives a BPDU from the root and instantly blocks port P1.


Spanning tree is very efficient in how it calculates the new topology of the network. The only problem now is that twice the forward delay must elapse before the link between the root and Bridge A eventually ends up in the forwarding state. This means 30 seconds of traffic disruption (the entire A, B, and C part of the network is isolated) because the 802.1D algorithm lacks a feedback mechanism to clearly advertise that the network has converged in a matter of seconds.

Convergence with 802.1w

Now, you can see how RSTP deals with a similar situation. Remember that the final topology is exactly the same as the one calculated by 802.1D (that is, one blocked port at the same place as before). Only the steps used to reach this topology have changed.

Both ports on the link between A and the root are put in designated blocking as soon as they come up. Thus far, everything behaves as in a pure 802.1D environment. However, at this stage, a negotiation takes place between Switch A and the root. As soon as A receives the BPDU of the root, it blocks the non-edge designated ports. This operation is called sync. Once this is done, Bridge A explicitly authorizes the root bridge to put its port in the forwarding state. This diagram illustrates the result of this process on the network. The link between Switch A and the root bridge is blocked, and both bridges exchange BPDUs.


Once Switch A blocks its non-edge designated ports, the link between Switch A and the root is put in the forwarding state and you reach this situation:


There still cannot be a loop. Instead of blocking above Switch A, the network now blocks below Switch A. However, the potential bridging loop is cut at a different location. This cut travels down the tree along with the new BPDUs originated by the root through Switch A. At this stage, the newly blocked ports on Switch A also negotiate a quick transition to the forwarding state with their neighbor ports on Switch B and Switch C that both initiate a sync operation. Other than the root port towards A, Switch B only has edge designated ports. Therefore, it has no port to block in order to authorize Switch A to go to the forwarding state. Similarly, Switch C only has to block its designated port to D. The state shown in this diagram is now reached:


Remember that the final topology is exactly the same as the 802.1D example, which means that port P1 on D ends up blocking. This means that the final network topology is reached, just in the time necessary for the new BPDUs to travel down the tree. No timer is involved in this quick convergence. The only new mechanism introduced by RSTP is the acknowledgment that a switch can send on its new root port in order to authorize immediate transition to the forwarding state, and bypass the twice-the-forward-delay long listening and learning stages. The administrator only needs to remember these points to benefit from fast convergence:

  • This negotiation between bridges is only possible when bridges are connected by point-to-point links (that is, full-duplex links, unless the ports are explicitly configured otherwise).
  • Edge ports play an even more important role now than PortFast-enabled ports do in 802.1D. For instance, if the network administrator fails to properly configure the edge ports on B, their connectivity is impacted each time the link between A and the root comes up.

Proposal/Agreement Sequence

When a port is selected by the STA to become a designated port, 802.1D still waits twice <forward delay> seconds (2×15 by default) before it transitions it to the forwarding state. In RSTP, this condition corresponds to a port with a designated role but a blocking state. These diagrams illustrate how fast transition is achieved step-by-step. Suppose a new link is created between the root and Switch A. Both ports on this link are put in a designated blocking state until they receive a BPDU from their counterpart.


When a designated port is in a discarding or learning state (and only in this case), it sets the proposal bit on the BPDUs it sends out. This is what occurs for port p0 of the root bridge, as shown in step 1 of the preceding diagram. Because Switch A receives superior information, it immediately knows that p1 is the new root port. Switch A then starts a sync to verify that all of its ports are in-sync with this new information. A port is in sync if it meets either of these criteria:

  • The port is in the blocking state, which means discarding in a stable topology.
  • The port is an edge port.

In order to illustrate the effect of the sync mechanism on different kinds of ports, suppose there exists an alternate port p2, a designated forwarding port p3, and an edge port p4 on Switch A. Notice that p2 and p4 already meet one of the criteria. In order to be in sync (see step 2 of the preceding diagram), Switch A just needs to block port p3 and assign it the discarding state. Now that all of its ports are in sync, Switch A can unblock its newly selected root port p1 and send an agreement message to reply to the root (see step 3). This message is a copy of the proposal BPDU, with the agreement bit set instead of the proposal bit. This ensures that port p0 knows exactly which proposal the agreement it receives corresponds to.


Once p0 receives that agreement, it can immediately transition to the forwarding state. This is step 4 of the preceding figure. Notice that port p3 is left in a designated discarding state after the sync. In step 4, that port is in the exact same situation as port p0 is in step 1. It then starts to propose to its neighbor, and attempts to quickly transition to the forwarding state.
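The proposal/agreement sequence in steps 1 through 4 can be sketched as follows (a hypothetical model, not an implementation; simple dictionaries stand in for real port state machines):

```python
# Illustrative sketch of the RSTP sync operation: on receiving a superior
# proposal on a new root port, a bridge blocks all non-edge designated
# ports that are not already in sync, then replies with an agreement.
def sync_and_agree(ports, new_root_port):
    """`ports` maps port name -> {"role": ..., "state": ..., "edge": bool}."""
    for name, p in ports.items():
        if name == new_root_port:
            continue
        # a port is already in sync if it is discarding or is an edge port
        in_sync = p["state"] == "discarding" or p["edge"]
        if p["role"] == "designated" and not in_sync:
            p["state"] = "discarding"       # block it to cut any possible loop
    ports[new_root_port]["state"] = "forwarding"  # safe: everything below is cut
    return "agreement"                      # reply sent to the proposing bridge
```

Note how p4-style edge ports keep forwarding throughout the sync, which is why misconfigured edge ports cause unnecessary disruption.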


UplinkFast

Another form of immediate transition to the forwarding state included in RSTP is similar to the Cisco UplinkFast proprietary spanning tree extension. Basically, when a bridge loses its root port, it is able to put its best alternate port directly into the forwarding mode (the appearance of a new root port is also handled by RSTP). The selection of an alternate port as the new root port generates a topology change. The 802.1w topology change mechanism clears the appropriate entries in the Content Addressable Memory (CAM) tables of the upstream bridge. This removes the need for the dummy multicast generation process of UplinkFast.

UplinkFast does not need to be configured further because the mechanism is included natively and enabled in RSTP automatically.

New Topology Change Mechanisms

When an 802.1D bridge detects a topology change, it uses a reliable mechanism to first notify the root bridge. This is shown in this diagram:


Once the root bridge is aware of a change in the topology of the network, it sets the TC flag on the BPDUs it sends out, which are then relayed to all the bridges in the network. When a bridge receives a BPDU with the TC flag bit set, it reduces its bridging-table aging time to forward delay seconds. This ensures a relatively quick flush of stale information. Refer to Understanding Spanning-Tree Protocol Topology Changes for more information on this process. This topology change mechanism is deeply remodeled in RSTP. Both the detection of a topology change and its propagation through the network evolve.