I have been reading up on MPLS and have been doing some experiments.
Topology
I have the following topology, where the PEs are members of a network interconnected with tunnels (i.e. ssh -oTunnel=Ethernet -w any:any <host>) on the 192.168.2.0/24 network, with OSPF and LDP enabled. The CEs connect to the cloud of PEs and are terminated on TAP adapters with IP addresses in the 172.20.2.0/24 domain. The PEs use OSPF to distribute the routes on both domains, and each TAP interface is capable of forwarding MPLS datagrams. In addition, each of the PEs has a dummy0 interface configured with the IP address of the router-id (i.e. PE1: 10.0.0.1, PE2: 10.0.0.2). All CEs/PEs are Ubuntu Server 21.10 VMs.
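OSPF configuration on PE1 (for PE2, change the router-id to 10.0.0.2). The original config block didn't survive, so this is a minimal FRR sketch reconstructed from the description above; the area and network statements are assumptions:

    router ospf
     ospf router-id 10.0.0.1
     network 192.168.2.0/24 area 0
     network 172.20.2.0/24 area 0
     redistribute connected

Then the ldpd configuration on PE1 (for PE2, change the router-id and transport-address to 10.0.0.2); again a sketch, with the interface list assumed:

    mpls ldp
     router-id 10.0.0.1
     address-family ipv4
      discovery transport-address 10.0.0.1
      interface tap0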
I am able to create MPLS routes and forward these between the nodes. However, when adding the address to e.g. tap2 on PE2 with "ip addr add 172.20.2.5/32 peer 172.20.2.6 dev tap2", you'll get the following routes installed on PE1:
172.20.2.5 nhid 20 via 192.168.2.2 dev tap0 proto ospf metric 20 onlink
172.20.2.6 nhid 20 via 192.168.2.2 dev tap0 proto ospf metric 20 onlink
Note that the entries for 172.20.2.4 and .6 are not there if you remove "redistribute connected" from the OSPF configuration, or if you avoid the "peer" clause when adding the address to the adapter (e.g. "ip addr add 172.20.2.2/32 peer 172.20.2.1 dev tap1"). What I want to be able to do here is emulate an MPLS network where a datagram is decapsulated and forwarded on a particular TAP device. Say I add a route on PE1 like this: "ip route add 172.20.2.6/32 encap mpls 33 via inet 192.168.2.2 dev tap0", and then on PE2 add an MPLS route: "ip -f mpls route add 33 via inet 172.20.2.6 dev tap2". I can now see ICMP echo requests from CE1 get encapsulated and forwarded on PE1/tap0, and correctly decapsulated on PE2 before being forwarded on PE2/tap2.
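For reference, here is that static LSP as a complete sketch, including the MPLS sysctls Linux needs before it will accept labelled packets (the module names and sysctls are standard, but which interfaces need input enabled depends on the topology):

    # PE1: push label 33 on traffic to 172.20.2.6 and send it over the tunnel
    ip route add 172.20.2.6/32 encap mpls 33 via inet 192.168.2.2 dev tap0

    # PE2: load MPLS support, accept labels on the tunnel interface,
    # and pop label 33 towards the CE
    modprobe mpls_router
    modprobe mpls_iptunnel
    sysctl -w net.mpls.platform_labels=10000   # must exceed any label in use
    sysctl -w net.mpls.conf.tap0.input=1
    ip -f mpls route add 33 via inet 172.20.2.6 dev tap2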
As I understand MPLS, the label is popped at the host prior to the final destination (penultimate hop popping, unless explicit-null is set). Typically, PE1 would decapsulate the packet before forwarding it to PE2 (i.e. 172.20.2.6 is available at the next hop 10.0.0.2; "ip nexthop ls" reports "id 20 via 192.168.2.2 dev tap0 scope link proto zebra onlink"). This is generally fine, but with the mesh-of-tunnels approach the final destination is at the peer end of the tunnel, i.e. PE2 needs to decapsulate and forward onto the link between 172.20.2.5 and 172.20.2.6, rather than PE1 decapsulating the datagram before sending it to PE2. It's as if 172.20.2.4 and .6 should belong to a different next-hop group than 172.20.2.3 and .5, and thus require MPLS encapsulation before being forwarded to PE2. If I omit the "peer" clause when assigning the address to the adapter (and later add the peer route manually), the address isn't redistributed to the other nodes.
The problem I am trying to solve here is that CE2 and CE3 provide access to two possibly overlapping customer networks. PE1 will have custom software managing tap1 that tags arriving packets with the appropriate MPLS header for CE2 or CE3 before injecting them into the local networking stack.
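As an aside: if the per-packet tagging can be expressed as flow matches, tc can push the label without custom software, at the cost of redirecting straight to the tunnel rather than injecting into the local stack. A sketch assuming kernel 5.1+ for the tc mpls action; the label and addresses are placeholders:

    tc qdisc add dev tap1 clsact
    # Push label 33 onto IP packets for 172.20.2.6 and hand them to the tunnel
    tc filter add dev tap1 ingress protocol ip flower dst_ip 172.20.2.6 \
        action mpls push protocol mpls_uc label 33 \
        action mirred egress redirect dev tap0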
Is there any way I can change my setup/configuration to make this work (dummy interfaces, whatever else Linux is capable of)?
I can tag an IP datagram on PE1/tap1, but PE1 needs an incoming label / swap route defined for it, e.g. "ip -f mpls route add 16 as 16 via inet 192.168.2.2 dev tap0". Can the need for a forwarding / swap route be avoided?
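A minimal sketch of that swap entry, assuming the tagged packet arrives on tap1 (the labels are placeholders):

    # PE1 must accept MPLS on the interface the tagged packet arrives on
    sysctl -w net.mpls.conf.tap1.input=1
    # Swap incoming label 16 for outgoing label 16 towards PE2
    ip -f mpls route add 16 as 16 via inet 192.168.2.2 dev tap0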
What is the best way to assign and distribute labels for a new CE attaching to this network? Can I create a static entry on the PE it is attached to, and have all other PEs learn it?
The documentation / Google results regarding LSPs / static label binding are a bit thin on this. And if a static label needs to be created and assigned, should there be a convention to avoid conflicts with labels assigned via ldpd?
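For what it's worth, FRR can install static label operations from its own config; a sketch with placeholder labels, and note that, as far as I can tell, keeping static labels out of the range ldpd hands out is a convention you have to enforce yourself:

    mpls lsp 100 192.168.2.2 200                   # swap in-label 100 for out-label 200
    ip route 172.20.2.6/32 192.168.2.2 label 33    # push label 33 towards the prefix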
I haven't played too much with the tc/flower configuration, but I could match all incoming MPLS packets and mirror them to an egress interface. However, the PEs will likely be configured in a full mesh, so determining which egress interface to use when dealing with a 1:1 mapping is likely going to be hairy. Any better idea?
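For completeness, the kind of tc/flower rule I had in mind, assuming a kernel/iproute2 recent enough for flower's MPLS fields (the label and devices are placeholders):

    tc qdisc add dev tap0 clsact
    # Mirror incoming packets carrying label 33 to tap2
    tc filter add dev tap0 ingress protocol mpls_uc flower mpls_label 33 \
        action mirred egress mirror dev tap2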
Cheers,