Thursday, February 6, 2014

NTS: Multicast VPN

Multicast VPN




Multicast VPN is defined in RFC 6513 and RFC 6514.
Cisco's Multicast VPN is defined in RFC 6037.



Two solutions:
  • PIM/GRE mVPN or draft-rosen (RFC 6037)
    • PIM adjacencies between PEs to exchange mVPN routing information
    • unique multicast address per VPN
    • per-VPN PIM adjacencies between PEs and CEs
    • per-VPN MDT (GRE) tunnels between PEs
    • data MDT tunnels for optimization
  • BGP/MPLS mVPN or NG mVPN
    • BGP peerings between PEs to exchange mVPN routing information
    • PIM messages are carried in BGP
    • BGP autodiscovery for inter-PE tunnels
    • MPLS P2MP inclusive tunnels between PEs
    • selective tunnels for optimization

Only the PIM/GRE mVPN model (Cisco's original implementation) is described below.



MVPN

MVPN combines multicast with MPLS VPN. PE routers establish virtual PIM neighborships with other PE routers that are connected to the same VPN.

The VPN-specific multicast routing and forwarding database is referred to as MVRF.

An MDT (multicast distribution tree) tunnel interface is the interface that an MVRF uses to access the multicast domain. MDT tunnels are point-to-multipoint.
 
Multicast packets are sent from the CE to the ingress PE and then encapsulated and transmitted across the core (over the MDT tunnel). At the egress PE, the encapsulated packets are decapsulated and then sent to the receiving CE.

When sending customer VRF traffic, PEs encapsulate the traffic in their own (S,G) state, where the G is the MDT group address, and the S is the MDT source for the PE. By joining the (S,G) MDT of its PE neighbors, a PE router is able to receive the encapsulated multicast traffic for that VRF.

All VPN packets passing through the provider network are viewed as native multicast packets and are routed based on the routing information in the core network.

To support MVPN, the P routers in the core only need to support native multicast routing; no VPN awareness is required in the core.
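
As a minimal sketch (interface name assumed), a P router needs nothing more than its regular native multicast configuration:

IOS
! native multicast routing in the core, SSM assumed
ip multicast-routing
!
ip pim ssm default
!
interface GigabitEthernet0/0
 ip pim sparse-mode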

RTs should be configured so that the receiver VRF has unicast reachability to prefixes in the source VRF.
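
An illustrative sketch (VRF name and RT values assumed), where the receiver VRF imports the source VRF's RT so that the sources are reachable via unicast:

IOS
vrf definition RECEIVER
 rd 100:2
 address-family ipv4
  route-target export 100:2
  route-target import 100:2
  ! import the source VRF's RT (100:1 assumed) for unicast reachability
  route-target import 100:1
 exit-address-family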


Data MDT

MVPN also supports optimized VPN traffic forwarding for high-bandwidth applications that have sparsely distributed receivers.

A dedicated multicast group can be used to encapsulate packets from a specific source and an optimized MDT can be created to send traffic only to PE routers connected to interested receivers.

A unique group (or group range) per VRF should be used across the PEs.
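
A minimal sketch, assuming the data MDT group range 232.0.1.0/24 and a 1 kbps threshold; customer streams exceeding the threshold are switched from the default MDT to a dedicated data MDT:

IOS
vrf definition VPN
 address-family ipv4
  mdt default 232.0.0.1
  ! pool of 256 data MDT groups; streams above 1 kbps move to a data MDT
  mdt data 232.0.1.0 0.0.0.255 threshold 1
 exit-address-family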



Configuration

IOS
ip multicast-routing
!
ip pim ssm default
!
interface Loopback0
 ip pim sparse-mode
!
interface X
 ip pim sparse-mode
!
ip multicast-routing vrf VPN
!
vrf definition VPN
 address-family ipv4
  mdt default x.x.x.x
  mdt data x.x.x.x y.y.y.y
 exit-address-family
!
router bgp 100
 address-family ipv4 mdt
  neighbor x.x.x.x activate
 exit-address-family


IOS-XR
multicast-routing
 address-family ipv4
  interface Loopback0
   enable
  !
  mdt source Loopback0
 !
 vrf VPN
  address-family ipv4
   mdt default ipv4 x.x.x.x
   mdt data y.y.y.y/24
   interface all enable
!
router bgp 100
 address-family ipv4 mdt
 !
 neighbor x.x.x.x
  remote-as 100
  address-family ipv4 mdt



"mdt source" is required in IOS-XR (it can be configured under the VRF if it's specific for it).

Sparse mode must be enabled on all physical interfaces that multicast traffic will pass through (global or VRF ones) and on the loopback interface used for the BGP VPNv4 peerings.
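
For instance (interface number and addressing assumed, mirroring the PIM neighbor output further below), a VRF-facing PE interface also needs sparse mode:

IOS
interface FastEthernet0/0.59
 encapsulation dot1Q 59
 vrf forwarding VPN
 ip address 192.168.59.5 255.255.255.0
 ip pim sparse-mode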

The RP setup of the CEs must agree with the VRF RP setup on the PEs. If you manually define a static RP on the CEs, the same RP must also be configured on the PEs (inside the VRF).
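
A hedged sketch, assuming a static RP at 192.168.1.1 in the customer address space; the same statement must exist on the CEs (globally) and on the PEs (inside the VRF):

IOS
! on each CE
ip pim rp-address 192.168.1.1
!
! on each PE, inside the VRF
ip pim vrf VPN rp-address 192.168.1.1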



Verification
  • There should be (S,G) entries for each BGP neighbor, where S=BGP loopback and G=MDT default address
  • There should be a bidirectional PIM adjacency across a tunnel between the PEs, but inside each PE's VRF
  • If an RP is used on a CE, then each remote CE should know this RP 
  • Sources/Receivers from any site should be viewable on the RP
  • There should be a data MDT (S,G) entry in the core for each customer (S,G) stream that has been moved onto a data MDT


Verification (using only a default mdt)


MDT default (S,G) entries

IOS
R5#sh ip mroute sum
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry, E - Extranet,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group,
       V - RD & Vector, v - Vector
Outgoing interface flags: H - Hardware switched, A - Assert winner
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 239.255.255.1), 00:34:36/stopped, RP 10.0.0.1, OIF count: 1, flags: SJCFZ
  (10.0.0.6, 239.255.255.1), 00:24:11/00:02:18, OIF count: 1, flags: JTZ
  (10.0.0.5, 239.255.255.1), 00:34:35/00:02:54, OIF count: 1, flags: FT


R5#sh ip mroute 239.255.255.1
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry, E - Extranet,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group,
       V - RD & Vector, v - Vector
Outgoing interface flags: H - Hardware switched, A - Assert winner
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 239.255.255.1), 00:46:12/stopped, RP 10.0.0.1, flags: SJCFZ
  Incoming interface: FastEthernet0/0.15, RPF nbr 10.1.5.1
  Outgoing interface list:
    MVRF VPN, Forward/Sparse, 00:46:12/00:01:46

(10.0.0.6, 239.255.255.1), 00:35:47/00:02:28, flags: JTZ
  Incoming interface: FastEthernet0/0.57, RPF nbr 10.5.7.7
  Outgoing interface list:
    MVRF VPN, Forward/Sparse, 00:35:47/00:01:46

(10.0.0.5, 239.255.255.1), 00:46:12/00:03:19, flags: FT
  Incoming interface: Loopback0, RPF nbr 0.0.0.0
  Outgoing interface list:
    FastEthernet0/0.57, Forward/Sparse, 00:35:46/00:03:11



 
R5#sh bgp ipv4 mdt all 10.0.0.6/32
BGP routing table entry for 100:1:10.0.0.6/32        version 2
Paths: (1 available, best #1, table IPv4-MDT-BGP-Table)
  Not advertised to any peer
  Local
    10.0.0.6 from 10.0.0.1 (10.0.0.1)
      Origin incomplete, metric 0, localpref 100, valid, internal, best
      Originator: 10.0.0.6, Cluster list: 10.0.0.1, 10.0.0.20,
      MDT group address: 239.255.255.1



R5#sh ip pim mdt
  * implies mdt is the default MDT
  MDT Group/Num   Interface   Source                   VRF
* 239.255.255.1   Tunnel1     Loopback0                VPN
 


R5#sh ip pim mdt bgp
MDT (Route Distinguisher + IPv4)               Router ID         Next Hop
  MDT group 239.255.255.1
   100:1:10.0.0.6                              10.0.0.1          10.0.0.6





Verification (using a default and a data mdt)

MDT default (S,G) entries
MDT data (S,G) entries

IOS
R2#sh ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry, E - Extranet,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group,
       V - RD & Vector, v - Vector
Outgoing interface flags: H - Hardware switched, A - Assert winner
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode

(2.2.2.2, 232.0.0.1), 00:08:53/00:03:27, flags: sT
  Incoming interface: Loopback0, RPF nbr 0.0.0.0
  Outgoing interface list:
    FastEthernet0/0.24, Forward/Sparse, 00:08:53/00:03:27

(19.19.19.19, 232.0.0.1), 00:50:48/stopped, flags: sTIZ
  Incoming interface: FastEthernet0/0.24, RPF nbr 20.2.4.4
  Outgoing interface list:
    MVRF VPN, Forward/Sparse, 00:50:48/00:00:11

(19.19.19.19, 232.0.1.0), 00:08:23/00:00:12, flags: sTIZ
  Incoming interface: FastEthernet0/0.24, RPF nbr 20.2.4.4
  Outgoing interface list:
    MVRF VPN, Forward/Sparse, 00:02:47/00:00:12

(19.19.19.19, 232.0.1.1), 00:01:59/00:01:00, flags: sTIZ
  Incoming interface: FastEthernet0/0.24, RPF nbr 20.2.4.4
  Outgoing interface list:
    MVRF VPN, Forward/Sparse, 00:01:59/00:01:00

R2#sh ip pim mdt
  * implies mdt is the default MDT
  MDT Group/Num   Interface   Source                   VRF
* 232.0.0.1       Tunnel0     Loopback0                VPN
  232.0.1.0       Tunnel0     Loopback0                VPN
  232.0.1.1       Tunnel0     Loopback0                VPN




R2#sh ip pim mdt bgp
MDT (Route Distinguisher + IPv4)               Router ID         Next Hop
  MDT group 232.0.0.1
   100:1:19.19.19.19                           19.19.19.19       19.19.19.19




In both scenarios, you can also verify the mGRE tunnels by looking at the tunnel interface itself.

IOS
R5#sh int tun1 | i protocol/transport
  Tunnel protocol/transport multi-GRE/IP


When all PIM adjacencies come up, as PIM neighbors in a VRF you should see all the other MDT PEs through a tunnel interface and all the locally connected CEs through physical interfaces.

IOS
R5#sh ip pim vrf VPN nei
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
      P - Proxy Capable, S - State Refresh Capable, G - GenID Capable
Neighbor          Interface                Uptime/Expires    Ver   DR
Address                                                            Prio/Mode
192.168.59.9      FastEthernet0/0.59       00:00:22/00:01:22 v2    1 / DR S G
10.0.0.6          Tunnel1                  00:25:52/00:01:27 v2    1 / DR S P G





PIM inside a VRF Tunnel

IOS
interface Tunnel1
 ip vrf forwarding VPN-A
 ip address 99.99.99.1 255.255.255.0
 ip pim sparse-mode
 tunnel source 10.0.0.1
 tunnel destination 10.0.0.2
 tunnel vrf VPN-B
!
interface Tunnel1
 ip vrf forwarding VPN-A
 ip address 99.99.99.2 255.255.255.0
 ip pim sparse-mode
 tunnel source 10.0.0.2
 tunnel destination 10.0.0.1
 tunnel vrf VPN-B


"ip vrf forwarding" defines the vrf under which the tunnel (99.99.99.0/24) operates; above it's VPN-A.

"tunnel vrf" defines the vrf which is used to build the tunnel (from 10.0.0.1 to 10.0.0.2); above it's VPN-B. If the tunnel source and destination are in the global routing table, then you don't need to define their vrf with the "tunnel vrf X" command.




Extranet

In an extranet, the multicast source and the receivers reside in different VPNs: a site contains either the source or the receivers (otherwise everything stays within a single VPN and it's plain intranet MVPN).

The Source PE has the multicast source behind a directly connected CE, reachable through the Source MVRF.

The Receiver PE has one or more receivers behind a directly connected CE, reachable through the Receiver MVRF.

In order to achieve multicast connectivity between the Source and Receiver PEs, you must have the same default MDT group in the source and receiver MVRF.

Two solutions:
  • Configure the Receiver MVRF on the Source PE router
    • you need each receiver MVRF copied on the Source PE router
  • Configure the Source MVRF on the Receiver PE routers
    • you need the Source MVRF copied on all interested Receiver PE routers
In both cases, the receiver MVRF (wherever placed) must import the source MVRF's RT.

Only PIM-SM and PIM-SSM are supported.

The multicast source and the RP must reside in the same site of the MVPN, behind the same PE router.


Receiver MVRF on the Source PE

Source PE (IOS)
ip vrf VPN1-S-MVRF
 rd 100:1
 route-target export 100:1
 route-target import 100:1
 mdt default 232.1.1.1
!
ip vrf VPN2-R-MVRF
 rd 100:2
 route-target export 100:2
 route-target import 100:2
 route-target import 100:1
 mdt default 232.2.2.2
!
ip multicast-routing
ip multicast-routing vrf VPN1-S-MVRF
ip multicast-routing vrf VPN2-R-MVRF


Receiver PE (IOS)
ip vrf VPN2-R-MVRF
 rd 100:2
 route-target export 100:2
 route-target import 100:2
 route-target import 100:1
 mdt default 232.2.2.2
!
ip multicast-routing
ip multicast-routing vrf VPN2-R-MVRF



Source MVRF on the Receiver PE

Source PE (IOS)
ip vrf VPN1-S-MVRF
 rd 100:1
 route-target export 100:1
 route-target import 100:1
 mdt default 232.1.1.1
!
ip multicast-routing
ip multicast-routing vrf VPN1-S-MVRF


Receiver PE (IOS)
ip vrf VPN1-S-MVRF
 rd 100:1
 route-target export 100:1
 route-target import 100:1
 mdt default 232.1.1.1
!

ip vrf VPN2-R-MVRF
 rd 100:2
 route-target export 100:2
 route-target import 100:2
 route-target import 100:1
 mdt default 232.2.2.2
!
ip multicast-routing 

ip multicast-routing vrf VPN1-S-MVRF
ip multicast-routing vrf VPN2-R-MVRF

What matters most in both cases of MVRF replication is that the original MVRF and its replicated copy share the same default MDT group, so that the Source PE and the Receiver PEs join a common MDT.


Fixing RPF

There are two options:

static mroute between VRFs

Receiver PE (IOS)
ip mroute vrf VPN2-R-MVRF 192.168.1.1 255.255.255.255 fallback-lookup vrf VPN1-S-MVRF


group-based VRF selection

Receiver PE (IOS)
ip multicast vrf VPN2-R-MVRF rpf select vrf VPN1-S-MVRF group-list 1
ip multicast vrf VPN2-R-MVRF rpf select vrf VPN3-S-MVRF group-list 3
!
access-list 1 permit 231.0.0.0 0.255.255.255
access-list 3 permit 233.0.0.0 0.255.255.255



Inter-AS MVPN

To establish a Multicast VPN between two ASes, an MDT-default tunnel must be set up between the involved PE routers. The appropriate MDT-default group is configured on the PE routers and is unique for each VPN.

All three (A, B, C) inter-AS options are supported. For option A nothing extra is required, since every AS is completely isolated from the others.

In order to solve the various RPF issues imposed by the limited visibility of PEs between different ASes, each VPNv4 route carries a new transitive attribute (the BGP connector attribute) that defines the route's originator.

Inside a common AS, the BGP connector attribute is the same as the next hop. Between ASes, the BGP connector attribute stores (in the case of IPv4 MDT) the IP address of the PE router that originated the VPNv4 prefix, and it is preserved even after the next-hop attribute is rewritten by ASBRs.

The BGP connector attribute also helps ASBRs and receiver PEs insert the RPF vector needed to build the inter-AS MDT for source PEs in remote ASes.

The RPF proxy vector is a PIM TLV that contains the IP address of the router that will be used as a proxy for RPF checks (helping in the forwarding of PIM Joins between ASes).

A new PIM hello option has also been introduced along with the PIM RPF Vector extension to determine if the upstream router is capable of parsing the new TLV. An RPF Vector is included in PIM messages only when all PIM neighbors on an RPF interface support it.

The RPF proxy (usually the ASBR) removes the vector from the PIM Join message when it sees itself in it.

  • BGP connector attribute
    • used in RPF checks inside a VRF
  • RPF proxy
    • used in RPF checks in the core

Configuration Steps
  • Option A
    • no MDT sessions between ASes are required
    • intra-AS MDT sessions are configured as usual
  • Option B
    • intra-AS MDT sessions between PEs, ASBRs and RRs
    • inter-AS MDT session between ASBRs
    • RPF proxy vector on all PEs for their VRFs (see the sketch after this list)
    • RPF proxy vector on all Ps and ASBRs
    • next-hop-self on the MDT ASBRs
  • Option C
    • intra-AS MDT sessions between PEs and RRs
    • inter-AS MDT sessions between RRs
    • RPF proxy vector on all PEs for their VRFs
    • RPF proxy vector on all Ps and ASBRs
    • next-hop-unchanged on the MDT RRs
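
A hedged sketch of the corresponding IOS knobs (addresses assumed; verify the exact RPF vector syntax against your release):

IOS
! on each PE, for its VRF
ip multicast vrf VPN rpf proxy rd vector
!
! on all Ps and ASBRs (global table)
ip multicast rpf proxy vector
!
! option B: next-hop-self on the MDT session of the ASBR
router bgp 100
 address-family ipv4 mdt
  neighbor x.x.x.x activate
  neighbor x.x.x.x next-hop-self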

MSDP will be required if an RP is used in both ASes. Prefer to use SSM in the core of both ASes.
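
If RPs are used in both cores, a minimal MSDP sketch between the two RPs (peer address and AS numbers assumed):

IOS
! on the RP of AS 100, peering with the RP of AS 200
ip msdp peer 10.200.0.1 connect-source Loopback0 remote-as 200
ip msdp originator-id Loopback0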

