
Frequently Asked Questions About Cisco Routers

I have four equal cost parallel paths to the same destination. I am doing fast switching on two links and process switching on the other two. How will the packets be routed in this situation?

Assume that we have four equal cost paths to some set of IP networks. Interfaces 1 and 2 fast switch (ip route-cache enabled on the interface); interfaces 3 and 4 do not (no ip route-cache). The router first establishes the four equal cost paths in a list (paths 1, 2, 3, and 4). When you do a show ip route x.x.x.x, the four next hops to x.x.x.x are displayed.

A pointer, called interface_pointer here, starts on interface 1. Interface_pointer cycles through the interfaces and routes in an orderly, deterministic fashion such as 1-2-3-4-1-2-3-4-1 and so on. The output of show ip route x.x.x.x has a "*" to the left of the next hop that interface_pointer uses for a destination address not found in the cache. Each time interface_pointer is used, it advances to the next interface or route.
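As a rough illustration of that deterministic cycling (not IOS code, just a sketch), interface_pointer can be pictured as a round-robin iterator over the four paths:

```python
from itertools import cycle

# Minimal sketch: interface_pointer modeled as a round-robin iterator over the
# four equal cost paths, advancing by one each time it is used.
interface_pointer = cycle([1, 2, 3, 4])

print([next(interface_pointer) for _ in range(9)])  # [1, 2, 3, 4, 1, 2, 3, 4, 1]
```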

To illustrate the point better, consider this repeating loop:

A packet comes in, destined for a network serviced by the four parallel paths.

The router checks to see if it is in the cache. (The cache starts off empty.)

If it is in the cache, the router sends it to the interface stored in the cache. Otherwise, the router sends it to the interface where the interface_pointer is and moves interface_pointer to the next interface in the list.

If the interface over which the router just sent the packet is running route-cache, the router populates the cache with that interface ID and the destination IP address. All subsequent packets to the same destination are then switched using the route-cache entry (thus they are fast-switched).
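To make the loop concrete, here is a minimal Python sketch of the behavior described above. It is only an illustration, not Cisco code; the names forward, route_cache, and CACHING_INTERFACES are invented, and interfaces 1 and 2 are assumed to cache while 3 and 4 do not.

```python
import random
from itertools import cycle

# Sketch of the forwarding loop described above (illustrative only).
# Interfaces 1 and 2 run "ip route-cache" (fast switching); 3 and 4 do not.
CACHING_INTERFACES = {1, 2}
interface_pointer = cycle([1, 2, 3, 4])
route_cache = {}  # destination IP -> interface that fast switches it

def forward(destination):
    """Return the interface a packet to `destination` is sent out of."""
    if destination in route_cache:
        return route_cache[destination]       # cache hit: fast switched
    chosen = next(interface_pointer)          # cache miss: use interface_pointer
    if chosen in CACHING_INTERFACES:
        route_cache[destination] = chosen     # only route-cache interfaces populate the cache
    return chosen

# Count how much traffic each interface carries for a handful of destinations.
load = {1: 0, 2: 0, 3: 0, 4: 0}
destinations = ["10.0.%d.1" % i for i in range(20)]
for _ in range(10_000):
    load[forward(random.choice(destinations))] += 1
print(load)  # interfaces 1 and 2 end up carrying almost all of the traffic
```

Setting CACHING_INTERFACES to an empty set makes every packet a cache miss, which reproduces the packet-by-packet round robin described below for the case where no interface runs route-cache.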

If there are two route-cache and two non-route-cache interfaces, there is a 50 percent probability that an uncached entry will hit an interface that caches entries, caching that destination to that interface. Over time, the interfaces running fast switching (route-cache) carry all the traffic except for destinations not in the cache. This happens because once a packet to a destination is process-switched over an interface, interface_pointer moves on and points to the next interface in the list. If this interface is also process-switched, the second packet is process-switched over it and interface_pointer again moves on to the next interface. Since there are only two process-switched interfaces, the third packet is routed to a fast-switched interface, which caches the destination. Once the destination is in the IP route cache, all subsequent packets to it are fast-switched.

In case of a failure of a process-switched interface, the routing table is updated and you would have three equal cost paths (two fast-switched and one process-switched). Over time, the interfaces running fast switching (route-cache) carry all the traffic except destinations not in the cache. With two route-cache and one non-route-cache interface, there is a 66 percent probability that an uncached entry will hit an interface that caches entries, caching that destination to that interface. You can expect the two fast-switched interfaces to carry all the traffic over time.

Similarly, when a fast-switched interface fails, you would have three equal cost paths: one fast-switched and two process-switched. Over time, the interface running fast switching (route-cache) carries all the traffic except destinations not in the cache. There is a 33 percent probability that an uncached entry will hit an interface that caches entries, caching that destination to that interface. You can expect the single interface with caching enabled to carry all of the traffic over time in this case.
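The percentages quoted above are simply the share of caching interfaces among the remaining equal cost paths, as this small calculation (figures taken from the scenarios in the text; the 66 percent is 2/3 rounded down) shows:

```python
# Chance that a cache miss lands on an interface that populates the route cache:
# caching interfaces / total equal cost paths.
scenarios = {
    "2 fast-switched + 2 process-switched": (2, 4),
    "2 fast-switched + 1 process-switched": (2, 3),
    "1 fast-switched + 2 process-switched": (1, 3),
}
for name, (caching, total) in scenarios.items():
    print(f"{name}: {caching / total:.0%}")  # 50%, 67%, 33%
```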

If no interface is running route-cache, the router round-robins the traffic on a packet-by-packet basis.

In conclusion, if multiple equal cost paths to a destination exist and some are process-switched while others are fast-switched, then over time most of the traffic is carried by the fast-switched interfaces only. The load balancing thus attained is not optimal and might in some cases lower performance. Therefore, it is recommended that you do one of the following:

Either have route-cache enabled on all interfaces in the parallel paths, or on none of them.

Or

Expect that the interfaces with caching enabled will carry all of the traffic over time.
