diff --git a/.github/workflows/cicd.yml b/.github/workflows/cicd.yml
index 24b9e633..ccdc0fce 100644
--- a/.github/workflows/cicd.yml
+++ b/.github/workflows/cicd.yml
@@ -13,9 +13,9 @@ env:
jobs:
docs-test:
- runs-on: ubuntu-20.04
+ runs-on: ubuntu-22.04
steps:
- - uses: actions/checkout@v2
+ - uses: actions/checkout@v4
- name: Login to GitHub Container Registry
uses: docker/login-action@v1
@@ -28,7 +28,7 @@ jobs:
- name: Cache htmltest external links
id: cache-htmltest
- uses: actions/cache@v2
+ uses: actions/cache@v4
with:
path: tmp/.htmltest
# key will contain hash of all md files to check if files have changed
@@ -71,13 +71,13 @@ jobs:
# - run: docker run -v $(pwd):/docs --entrypoint ash ghcr.io/srl-labs/mkdocs-material-insiders:$MKDOCS_MATERIAL_VER -c 'git config --global --add safe.directory /docs; mkdocs gh-deploy --force'
publish-docs:
- runs-on: ubuntu-20.04
+ runs-on: ubuntu-22.04
needs: docs-test
steps:
- name: Checkout
- uses: actions/checkout@v3
- with:
- fetch-depth: 0
+ uses: actions/checkout@v4
+ # with:
+ # fetch-depth: 0
- name: Login to GitHub Container Registry
uses: docker/login-action@v1
diff --git a/docs/blog/posts/2024/codespaces.md b/docs/blog/posts/2024/codespaces.md
index 1af45e1d..508f6195 100644
--- a/docs/blog/posts/2024/codespaces.md
+++ b/docs/blog/posts/2024/codespaces.md
@@ -1,7 +1,9 @@
---
date: 2024-07-04
tags:
- - codespaces
+ - codespaces
+authors:
+ - rdodin
---
# SR Linux labs in GitHub Codespaces
diff --git a/docs/blog/posts/2024/rt5-l3evpn.md b/docs/blog/posts/2024/rt5-l3evpn.md
new file mode 100644
index 00000000..1c02c082
--- /dev/null
+++ b/docs/blog/posts/2024/rt5-l3evpn.md
@@ -0,0 +1,34 @@
+---
+date: 2024-07-23
+tags:
+ - evpn
+authors:
+ - rdodin
+---
+
+# Route Type 5 L3 EVPN Tutorial
+
+Since the inception of our Data Center Fabric program in 2019 we have been focusing on EVPN-based deployments as the preferred choice for data centers of all sizes. Historically, EVPN has been associated with Layer 2 services such as VPLS, VPWS, and E-LAN. However, network engineers know all too well that BGP can take it all, and over time EVPN grew to support inter-subnet routing and, subsequently, Layer 3 VPNs.
+
+Now you can deploy L3 VPN services with EVPN, both inside and outside of the data center. Yes, a single control-plane EVPN umbrella can cover all your needs, or at least most of them.
+
+It was important for us to start with [L2 EVPN basics](../../../tutorials/l2evpn/intro.md) and cover the EVPN origins first, but now more and more workloads are ditching the arcane requirement for Layer 2 connectivity, and more and more data centers can be built with pure Layer 3 services.
+
+But Layer 3 EVPN services have many flavors... Some, such as RT5-only EVPN, are quite simple, while others offer more advanced features and require symmetric IRBs, SBDs, the Interface-full mode of operation, and ESI support. To ease into L3 EVPN, we chose to start with its simplest form - RT5-only EVPN.
+
+To introduce you to the concept of L3 EVPN we prepared a comprehensive tutorial - **[:material-book: RT5-only L3 EVPN Tutorial](../../../tutorials/l3evpn/rt5-only/index.md)** - that takes you through a fun lab exercise where you will configure a small but representative multitenant L3 EVPN network:
+
+
+
+You'll get exposed to many interesting concepts, such as:
+
+* eBGP Unnumbered underlay to support the overlay services
+* iBGP overlay with EVPN address family
+* RT5-only EVPN service configuration for L3 workloads
+* EVPN service with BGP PE-CE routing protocol to support clients with routing on the host
+
+So, have your favorite drink ready, and let's have [our first dive](../../../tutorials/l3evpn/rt5-only/index.md) into the world of L3 EVPN!
+
+--8<-- "docs/tutorials/l3evpn/rt5-only/summary.md:linkedin-question"
+
+
diff --git a/docs/stylesheets/nokia.css b/docs/stylesheets/nokia.css
index 93135a57..87321c6c 100644
--- a/docs/stylesheets/nokia.css
+++ b/docs/stylesheets/nokia.css
@@ -265,4 +265,17 @@ https://github.com/squidfunk/mkdocs-material/discussions/4157#discussioncomment-
padding-top: 0.5rem;
}
-/* END border for content tabs */
\ No newline at end of file
+/* END border for content tabs */
+
+/* START hide code copy and selection icons until on hover */
+/* Hide the nav element by default */
+div.highlight .md-code__nav {
+ display: none;
+}
+
+/* Show the nav element when the div is hovered over */
+div.highlight:hover .md-code__nav {
+ display: flex;
+}
+
+/* END hide code copy and selection icons until on hover */
\ No newline at end of file
diff --git a/docs/tutorials/l3evpn/rt5-only/index.md b/docs/tutorials/l3evpn/rt5-only/index.md
new file mode 100644
index 00000000..364c846b
--- /dev/null
+++ b/docs/tutorials/l3evpn/rt5-only/index.md
@@ -0,0 +1,111 @@
+---
+comments: true
+tags:
+ - evpn
+---
+# RT5-only L3 EVPN Tutorial
+
+| | |
+| --------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| **Tutorial name** | RT5-only (aka Interface-less) L3 EVPN-VXLAN with SR Linux |
+| **Lab components** | 3 SR Linux nodes, 2 [FRR](https://frrouting.org), 2 Alpine nodes |
+| **Resource requirements** | :fontawesome-solid-microchip: 2vCPU :fontawesome-solid-memory: 8 GB |
+| **Lab Repo** | [srl-rt5-l3evpn-basics-lab][lab-repo] |
+| **Packet captures** | [EVPN IP Prefix routes exchange][capture-evpn-rt5] |
+| **Main ref documents** | [RFC 7432 - BGP MPLS-Based Ethernet VPN](https://datatracker.ietf.org/doc/html/rfc7432) [RFC 8365 - A Network Virtualization Overlay Solution Using Ethernet VPN (EVPN)](https://datatracker.ietf.org/doc/html/rfc8365) [RFC 9136 - IP Prefix Advertisement in Ethernet VPN (EVPN)](https://datatracker.ietf.org/doc/html/rfc9136) [Nokia 7220 SR Linux Advanced Solutions Guide][adv-sol-guide-evpn-l3] [Nokia 7220 SR Linux EVPN-VXLAN Guide][evpn-vxlan-guide] |
+| **Version information**[^1] | [`containerlab:v0.56.0`][clab-install], [`srlinux:24.3.3`][srlinux-container], [`frr:9.0.2`][frr-container] [`docker-ce:26.1.4`][docker-install] |
+| **Authors** | Korhan Kayhan [:material-linkedin:][kkayhan-linkedin] Roman Dodin [:material-linkedin:][rd-linkedin] [:material-twitter:][rd-twitter] and reviewers[^3] |
+
+While EVPN originally emerged as a Layer 2 VPN technology to overcome VPLS limitations, it has since evolved to become a unified control plane for many services, Layer 3 VPN included. Founded upon the BGP protocol, EVPN has [lots of flexibility and features](https://www.nokia.com/networks/ethernet-vpn/) to become a one-stop-shop for all VPN services in various network deployments, but especially fit for the IP fabrics.
+
+In the [Layer 2 EVPN Basics Tutorial][evpn-basics-tutorial] we discussed how to configure EVPN to provide a Layer 2 service across an IP fabric. Today's focus will be on deploying a **Layer 3 Ethernet VPN (EVPN)** in the SR Linux-powered DC fabric. We will be working with an _interface-less_[^2] flavor of an L3 EVPN service that does not require the use of Integrated Routing and Bridging (IRB) interfaces, and as such has no need for MAC-VRF instances, ARP/ND entry synchronization, MAC/IP (RT2) routes or IMET routes.
+
+As you might expect, Layer 3 EVPN is designed to provide Layer 3 services across the fabric. As such, there are no _stretched_ broadcast domains across the fabric; the customer equipment is directly connected via L3 interfaces to the leafs and often runs a PE-CE routing protocol to exchange IP prefixes.
+
+To explain the Layer 3 EVPN configuration and concepts we will use a lab representing a tiny fabric built with two leafs, one spine and two pairs of client devices connected to the leafs; one pair per tenant. The first pair of clients represents L3 servers connected to leaf ports directly, while the second pair is represented by [FRRouting](https://frrouting.org) routers that act as CE routers and announce routes.
+
+
+
+As part of this tutorial we will go over two L3 EVPN scenarios. First, we will demonstrate how to provide connectivity for the directly attached L3 clients of Tenant 1. These are the clients that are addressed with L3 interfaces and connected to the leaf devices directly.
+
+
+
+The second scenario will demonstrate how to connect the CE devices of Tenant 2 that establish a BGP session with the leaf devices to exchange IP prefixes. BGP EVPN will make sure that the client prefixes are distributed to the participants of the same L3 EVPN service of this tenant.
+
+
+
+From the data plane perspective, we will be using VXLAN tunnels to transport the encapsulated tenant packets through the IP fabric.
+
+As part of this tutorial we will configure the SR Linux-based DC fabric underlay with BGP Unnumbered. Then we will set up the overlay routing using iBGP with the EVPN address family and proceed with the creation of an L3 EVPN service for the two tenants of our fabric.
+
+## Lab deployment
+
+To let you follow along with the configuration steps of this tutorial we created [a lab][lab-repo] that you can deploy on any Linux VM with [containerlab][clab-install] or run in the cloud with [Codespaces](../../../blog/posts/2024/codespaces.md):
+
+/// tab | Locally
+
+```bash
+sudo containerlab deploy -c -t srl-labs/srl-l3evpn-basics-lab
+```
+
+Containerlab will pull the git repo to your current working directory and start deploying the lab.
+///
+/// tab | With Codespaces
+
+If you want to run the lab in a free cloud instance, click the button below to open the lab in GitHub Codespaces:
+
+
+
+
+
+**[Run](https://codespaces.new/srl-labs/srlinux-vlan-handling-lab?quickstart=1) this lab in GitHub Codespaces for free**.
+[Learn more](https://containerlab.dev/manual/codespaces) about Containerlab for Codespaces.
+Machine type: 2 vCPU · 8 GB RAM
+
+///
+
+The lab comes up online with the FRR nodes configured, and no configuration is present on the SR Linux nodes besides the basic setup. During the course of this tutorial we will configure the SR Linux nodes and explain the FRR config bits.
+
+If you want to deploy the lab with all configs already applied, just uncomment the `startup-config` knobs in the topology file.
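+
+For reference, below is a minimal sketch of how such a node definition typically looks in a containerlab topology file; the node name, image tag and config path are illustrative and may differ from the actual lab repository:
+
+```yaml
+topology:
+  nodes:
+    leaf1:
+      kind: nokia_srlinux
+      image: ghcr.io/nokia/srlinux:24.3.3
+      # uncomment the next line to boot the node with the full tutorial config applied
+      # startup-config: configs/leaf1.cfg
+```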
+
+Once the deployment process is finished you'll see a table with the deployed nodes.
+Using the names provided in the table you can SSH into the nodes to start the configuration process. For example, to connect to the `l3evpn-leaf1` node you can use the following command:
+
+```bash
+ssh l3evpn-leaf1 #(1)!
+```
+
+1. If you happen to have an SSH key the login will be passwordless. If not, `admin:NokiaSrl1!` is the default username and password.
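+
+Should you need to re-print the table with the deployed nodes later, containerlab can list the running labs at any time:
+
+```bash
+# list all containerlab-managed nodes running on this host
+sudo containerlab inspect --all
+```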
+
+With the lab deployed we are ready to embark on our [learn-by-doing EVPN configuration journey](underlay.md)!
+
+/// note | Are you new to SR Linux?
+We advise newcomers not to skip the [Configuration Basics Guide][conf-basics-guide] as it provides just enough details to survive in the configuration waters we are about to get into.
+///
+
+[lab-repo]: https://github.com/srl-labs/srl-l3evpn-tutorial-lab/
+[clab-install]: https://containerlab.dev/install/
+[srlinux-container]: https://github.com/orgs/nokia/packages/container/package/srlinux
+[frr-container]: https://quay.io/repository/frrouting/frr?tab=tags
+[docker-install]: https://docs.docker.com/engine/install/
+[capture-evpn-rt5]: https://gitlab.com/rdodin/pics/-/wikis/uploads/e0d9687ad72413769e4407eb4e498f71/bgp-underlay-overlay-ex1.pcapng
+[adv-sol-guide-evpn-l3]: https://documentation.nokia.com/srlinux/24-3/books/advanced-solutions/evpn-vxlan-layer-3.html#evpn-vxlan-layer-3
+[evpn-vxlan-guide]: https://documentation.nokia.com/srlinux/24-3/books/evpn-vxlan/evpn-vxlan-tunnels-layer-3.html#evpn-vxlan-tunnels-layer-3
+[conf-basics-guide]: https://documentation.nokia.com/srlinux/24-3/title/basics.html
+[evpn-basics-tutorial]: ../../l2evpn/intro.md
+[rd-linkedin]: https://linkedin.com/in/rdodin
+[rd-twitter]: https://twitter.com/ntdvps
+[kkayhan-linkedin]: https://www.linkedin.com/in/korhan-kayhan-b6b45065/
+[mr-linkedin]: https://www.linkedin.com/in/michelredondo/
+
+[^1]: The above versions have been used to create this tutorial. Newer versions might work, but if they don't, please pin the versions to the mentioned ones.
+[^2]: Two L3 EVPN service models are defined in [RFC 9136](https://datatracker.ietf.org/doc/html/rfc9136#name-ip-vrf-to-ip-vrf-model) - namely Interface-less and Interface-full. The focus of this tutorial is on the Interface-less model.
+[^3]: [Michel Redondo](https://learn.srlinux.dev/blog/author/michelredondo), [Sergey Fomin](https://learn.srlinux.dev/blog/author/sfomin), [Anton Zyablov](https://learn.srlinux.dev/blog/author/azyablov), [Jeroen van Bemmel](https://learn.srlinux.dev/blog/author/jbemmel), [Jorge Rabadan](https://datatracker.ietf.org/person/jorge.rabadan@nokia.com).
+
+
diff --git a/docs/tutorials/l3evpn/rt5-only/l3evpn-bgp-pe-ce.md b/docs/tutorials/l3evpn/rt5-only/l3evpn-bgp-pe-ce.md
new file mode 100644
index 00000000..30ed56b2
--- /dev/null
+++ b/docs/tutorials/l3evpn/rt5-only/l3evpn-bgp-pe-ce.md
@@ -0,0 +1,380 @@
+---
+comments: true
+---
+
+# L3 EVPN Instance with BGP PE-CE
+
+Now, off to a more elaborate example where a workload connected to the leaf talks BGP to it. Maybe it is a Kubernetes node that implements a LoadBalancer service with MetalLB or KubeVIP and wants to expose services to the outside world.
+Or maybe it is a fleet of hypervisors with virtual machines that don't need a stretched L2 network; in that case, a BGP speaker on the hypervisor could announce the subnets to the fabric.
+
+There are deployment scenarios where the BGP on the host model works great, and we will show you how it can be implemented within our lab.
+
+
+
+In this chapter we will work with the `ce1` and `ce2` nodes that belong to `tenant-2` and are connected to the same leaf pair.
+
+## BGP on the Host
+
+Let's first start with the BGP configuration on the workload (aka host) connected to the leaf. The idea behind running a BGP speaker on the host is to have a dynamic routing protocol that can advertise prefixes of the tenant systems running on the host to the fabric.
+
+Instead of having a single IP address assigned to the whole host as in the previous chapter, a single host will announce multiple prefixes, as many as it has tenant networks running on it. In the lab environment, we will simply use a loopback interface on the host to simulate the tenant network. In reality, the BGP speaker will have its client networks programmed by other processes, such as Kubernetes, or configured according to the hypervisor's network configuration.
+
+As per the startup configuration of our CE routers, both have a loopback IP that needs to be advertised to the L3 EVPN Network Instance (ip-vrf). This requires setting up a routing protocol between the CE devices (frr) and the switches they're connected to (Leaf1 & Leaf2).
+Here are the config snippets for both CE nodes:
+
+///tab | ce1
+
+```
+--8<-- "https://raw.githubusercontent.com/srl-labs/srl-l3evpn-basics-lab/main/startup_configs/frr1.conf:lo-interface"
+--8<-- "https://raw.githubusercontent.com/srl-labs/srl-l3evpn-basics-lab/main/startup_configs/frr1.conf:bgp"
+```
+
+///
+///tab | ce2
+
+```
+--8<-- "https://raw.githubusercontent.com/srl-labs/srl-l3evpn-basics-lab/main/startup_configs/frr2.conf:lo-interface"
+--8<-- "https://raw.githubusercontent.com/srl-labs/srl-l3evpn-basics-lab/main/startup_configs/frr2.conf:bgp"
+```
+
+///
+
+As you can see, both routers come preconfigured with the respective loopbacks to simulate a client prefix that is to be advertised to other clients of the same EVPN service.
+
+Another peculiar thing is that the BGP configuration is identical on both CE1 and CE2 - they use the same AS number, peer IP and router ID. We achieve this by using the same configuration on each CE-Leaf pair, which simplifies configuration management and troubleshooting. Here is how the BGP configuration looks in our mini fabric:
+
+
+
+## BGP on the Leaf
+
+In the previous chapter, we created the `tenant-1` IP VRF, to which servers `srv1` and `srv2` were connected. For Tenant 2 we will configure the `tenant-2` IP VRF and the associated interfaces.
+
+A notable difference in the `tenant-2` IP VRF configuration is that we will configure a routing protocol within this VRF to establish peering with the CE devices.
+SR Linux supports OSPF, ISIS, and BGP as a PE-CE protocol. This time around we choose eBGP as our PE-CE protocol.
+
+The BGP configuration in the IP VRF is exactly the same as the global BGP configuration; we just use `tenant-2` as the network instance name. And remember, the configuration is identical on all leaf switches.
+
+1. **AS Number and Router ID**
+The initial step involves creating the `tenant-2` network instance and specifying the autonomous system number and router-id for this ip-vrf.
+
+ ``` srl
+ set / network-instance tenant-2 protocols bgp autonomous-system 65001
+ set / network-instance tenant-2 protocols bgp router-id 10.0.0.1
+ ```
+
+1. **BGP Address Family**
+Since our clients use IPv4 addresses, we activate the `ipv4-unicast` address family to facilitate route exchange with the client. Although we could have enabled the IPv6 family as well, we chose not to, as our clients do not have IPv6 routes to announce.
+
+ ``` srl
+ set / network-instance tenant-2 protocols bgp afi-safi ipv4-unicast admin-state enable
+ ```
+
+1. **Configure the Neighbor Parameters**
+We configure the BGP peer/neighbor IP and its corresponding autonomous system number, then assign the BGP neighbor to a peer group.
+
+ ``` srl
+ set / network-instance tenant-2 protocols bgp group client
+ set / network-instance tenant-2 protocols bgp neighbor 192.168.99.2 peer-as 65002
+ set / network-instance tenant-2 protocols bgp neighbor 192.168.99.2 peer-group client
+ ```
+
+1. **Allow BGP to exchange routes by default**
+By default, all incoming and outgoing eBGP routes are blocked. We will disable this default setting to permit all incoming and outgoing routes.
+
+ ``` srl
+ set / network-instance tenant-2 protocols bgp ebgp-default-policy import-reject-all false
+ set / network-instance tenant-2 protocols bgp ebgp-default-policy export-reject-all false
+ ```
+
+1. **Send Default Route to the Client**
+In the previous step, we disabled eBGP's default route import/export blocking. However, eBGP doesn't automatically announce routes to the client since it treats the peer as an external system and only announces selected routes through a policy. To share overlay routes with the client, we must either configure an export route policy or advertise a default route to the client.
+
+ ``` srl
+ set / network-instance tenant-2 protocols bgp group client send-default-route ipv4-unicast true
+ ```
+
+1. **Customer-facing and VXLAN interfaces**
+    Since we created a new IP VRF, we need to add a customer-facing interface to it. Our CE devices are connected to the `ethernet-1/2` interfaces on the leaf switches; we configure these interfaces with IPv4 addresses and attach them to the `tenant-2` IP VRF.
+
+ ``` srl
+ set / interface ethernet-1/2 subinterface 1 admin-state enable
+ set / interface ethernet-1/2 subinterface 1 ipv4 admin-state enable
+    set / interface ethernet-1/2 subinterface 1 ipv4 address 192.168.99.1/30
+ ```
+
+    We must not forget to create a tunnel interface for this tenant. It needs to be configured with a new VNI value so that our tenants don't mix up their traffic.
+    Since tenant 1 used VNI 100, we will configure a tunnel interface with subinterface index 200 and a matching VNI value of 200:
+
+ ``` srl
+ set / tunnel-interface vxlan1 vxlan-interface 200 type routed
+    set / tunnel-interface vxlan1 vxlan-interface 200 ingress vni 200
+ ```
+
+ And add these interfaces to the network instance:
+
+ ``` srl
+ set / network-instance tenant-2 interface ethernet-1/2.1
+ set / network-instance tenant-2 vxlan-interface vxlan1.200
+ ```
+
+1. **EVPN configuration**
+ And the last bit is to add the EVPN bgp instance to the `tenant-2` VRF.
+
+ ``` srl
+ set / network-instance tenant-2 protocols bgp-evpn bgp-instance 1 admin-state enable
+ set / network-instance tenant-2 protocols bgp-evpn bgp-instance 1 vxlan-interface vxlan1.200
+ set / network-instance tenant-2 protocols bgp-evpn bgp-instance 1 evi 2
+ set / network-instance tenant-2 protocols bgp-vpn bgp-instance 1 route-target export-rt target:65001:2
+ set / network-instance tenant-2 protocols bgp-vpn bgp-instance 1 route-target import-rt target:65001:2
+
+ set / network-instance tenant-2 protocols bgp-vpn bgp-instance 1
+ set / network-instance tenant-2 protocols bgp-evpn bgp-instance 1 ecmp 8
+ ```
+
+The resulting configuration for the leaf routers is as follows:
+
+/// tab | leaf1
+
+```srl
+enter candidate
+
+--8<-- "https://raw.githubusercontent.com/srl-labs/srl-l3evpn-basics-lab/main/startup_configs/leaf1.conf:pece"
+
+commit now
+```
+
+///
+/// tab | leaf2
+
+```srl
+enter candidate
+
+--8<-- "https://raw.githubusercontent.com/srl-labs/srl-l3evpn-basics-lab/main/startup_configs/leaf2.conf:pece"
+
+commit now
+```
+
+///
+
+## Verification
+
+To ensure that each leaf has successfully established an eBGP session with the CE device and started to receive and advertise IPv4 prefixes, issue the following command:
+
+/// tab | leaf1
+
+```srl
+A:leaf1# / show network-instance tenant-2 protocols bgp neighbor
+------------------------------------------------------------------------------------------------------------------------------------------------
+BGP neighbor summary for network-instance "tenant-2"
+Flags: S static, D dynamic, L discovered by LLDP, B BFD enabled, - disabled, * slow
+------------------------------------------------------------------------------------------------------------------------------------------------
+------------------------------------------------------------------------------------------------------------------------------------------------
++----------------+-----------------------+----------------+------+---------+-------------+-------------+------------+-----------------------+
+| Net-Inst | Peer | Group | Flag | Peer-AS | State | Uptime | AFI/SAFI | [Rx/Active/Tx] |
+| | | | s | | | | | |
++================+=======================+================+======+=========+=============+=============+============+=======================+
+| tenant-2 | 192.168.99.2 | client | S | 65002 | established | 0d:0h:22m:4 | ipv4- | [2/1/1] |
+| | | | | | | 5s | unicast | |
++----------------+-----------------------+----------------+------+---------+-------------+-------------+------------+-----------------------+
+------------------------------------------------------------------------------------------------------------------------------------------------
+Summary:
+1 configured neighbors, 1 configured sessions are established,0 disabled peers
+0 dynamic peers
+```
+
+///
+
+/// tab | leaf2
+
+```srl
+A:leaf2# / show network-instance tenant-2 protocols bgp neighbor
+-----------------------------------------------------------------------------------------------------------------------------------------------
+BGP neighbor summary for network-instance "tenant-2"
+Flags: S static, D dynamic, L discovered by LLDP, B BFD enabled, - disabled, * slow
+-----------------------------------------------------------------------------------------------------------------------------------------------
+-----------------------------------------------------------------------------------------------------------------------------------------------
++----------------+-----------------------+----------------+------+---------+-------------+-------------+------------+-----------------------+
+| Net-Inst | Peer | Group | Flag | Peer-AS | State | Uptime | AFI/SAFI | [Rx/Active/Tx] |
+| | | | s | | | | | |
++================+=======================+================+======+=========+=============+=============+============+=======================+
+| tenant-2 | 192.168.99.2 | client | S | 65002 | established | 0d:0h:24m:5 | ipv4- | [2/1/1] |
+| | | | | | | 4s | unicast | |
++----------------+-----------------------+----------------+------+---------+-------------+-------------+------------+-----------------------+
+-----------------------------------------------------------------------------------------------------------------------------------------------
+Summary:
+1 configured neighbors, 1 configured sessions are established,0 disabled peers
+0 dynamic peers
+```
+
+///
+
+Each leaf has announced a default route to its clients and receives the client's loopback IP. We can verify that by checking the advertised and received routes.
+
+/// tab | leaf1 - received
+
+```srl hl_lines="14"
+A:leaf1# / show network-instance tenant-2 protocols bgp neighbor 192.168.99.2 received-routes ipv4
+------------------------------------------------------------------------------------------------------------------------------------------------
+Peer : 192.168.99.2, remote AS: 65002, local AS: 65001
+Type : static
+Description : None
+Group : client
+------------------------------------------------------------------------------------------------------------------------------------------------
+Status codes: u=used, *=valid, >=best, x=stale
+Origin codes: i=IGP, e=EGP, ?=incomplete
++---------------------------------------------------------------------------------------------------------------------------------------+
+| Status Network Path-id Next Hop MED LocPref AsPath Origin |
++=======================================================================================================================================+
+| 0.0.0.0/0 0 192.168.99.2 - 100 [65002, 65001] ? |
+| u*> 10.91.91.91/32 0 192.168.99.2 - 100 [65002] i |
++---------------------------------------------------------------------------------------------------------------------------------------+
+------------------------------------------------------------------------------------------------------------------------------------------------
+2 received BGP routes : 1 used 1 valid
+------------------------------------------------------------------------------------------------------------------------------------------------
+```
+
+///
+
+/// tab | leaf1 - advertised
+
+```srl
+A:leaf1# / show network-instance tenant-2 protocols bgp neighbor 192.168.99.2 advertised-routes ipv4
+------------------------------------------------------------------------------------------------------------------------------------------------
+Peer : 192.168.99.2, remote AS: 65002, local AS: 65001
+Type : static
+Description : None
+Group : client
+------------------------------------------------------------------------------------------------------------------------------------------------
+Origin codes: i=IGP, e=EGP, ?=incomplete
++-------------------------------------------------------------------------------------------------------------------------------------------+
+| Network Path-id Next Hop MED LocPref AsPath Origin |
++===========================================================================================================================================+
+| 0.0.0.0/0 0 192.168.99.1 - 100 [65001] ? |
++-------------------------------------------------------------------------------------------------------------------------------------------+
+------------------------------------------------------------------------------------------------------------------------------------------------
+1 advertised BGP routes
+------------------------------------------------------------------------------------------------------------------------------------------------
+```
+
+///
+
+Let's examine the routing table of the VRF on each leaf. Both leafs share the same list of routes, with different next hops. Local routes resolve to a local interface, while remote routes learned from the other leaf resolve to a VXLAN tunnel.
+
+The loopback route of the remote client is highlighted.
+
+/// tab | leaf1
+
+```srl hl_lines="15-18"
+A:leaf1# / show network-instance tenant-2 route-table
+--------------------------------------------------------------------------------------------------------------------------------------------------------------
+IPv4 unicast route table of network instance tenant-2
+--------------------------------------------------------------------------------------------------------------------------------------------------------------
++------------------+------+-----------+--------------------+---------+---------+--------+-----------+-----------+-----------+-----------+--------------+
+| Prefix | ID | Route | Route Owner | Active | Origin | Metric | Pref | Next-hop | Next-hop | Backup | Backup Next- |
+| | | Type | | | Network | | | (Type) | Interface | Next-hop | hop |
+| | | | | | Instanc | | | | | (Type) | Interface |
+| | | | | | e | | | | | | |
++==================+======+===========+====================+=========+=========+========+===========+===========+===========+===========+==============+
+| 10.91.91.91/32 | 0 | bgp | bgp_mgr | True | tenant- | 0 | 170 | 192.168.9 | ethernet- | | |
+| | | | | | 2 | | | 9.0/30 (i | 1/2.1 | | |
+| | | | | | | | | ndirect/l | | | |
+| | | | | | | | | ocal) | | | |
+| 10.92.92.92/32 | 0 | bgp-evpn | bgp_evpn_mgr | True | tenant- | 0 | 170 | 10.0.0.2/ | | | |
+| | | | | | 2 | | | 32 (indir | | | |
+| | | | | | | | | ect/vxlan | | | |
+| | | | | | | | | ) | | | |
+| 192.168.99.0/30 | 0 | bgp-evpn | bgp_evpn_mgr | False | tenant- | 0 | 170 | 10.0.0.2/ | | | |
+| | | | | | 2 | | | 32 (indir | | | |
+| | | | | | | | | ect/vxlan | | | |
+| | | | | | | | | ) | | | |
+| 192.168.99.0/30 | 3 | local | net_inst_mgr | True | tenant- | 0 | 0 | 192.168.9 | ethernet- | | |
+| | | | | | 2 | | | 9.1 | 1/2.1 | | |
+| | | | | | | | | (direct) | | | |
+| 192.168.99.1/32 | 3 | host | net_inst_mgr | True | tenant- | 0 | 0 | None | None | | |
+| | | | | | 2 | | | (extract) | | | |
+| 192.168.99.3/32 | 3 | host | net_inst_mgr | True | tenant- | 0 | 0 | None (bro | | | |
+| | | | | | 2 | | | adcast) | | | |
++------------------+------+-----------+--------------------+---------+---------+--------+-----------+-----------+-----------+-----------+--------------+
+--------------------------------------------------------------------------------------------------------------------------------------------------------------
+IPv4 routes total : 6
+IPv4 prefixes with active routes : 5
+IPv4 prefixes with active ECMP routes: 0
+--------------------------------------------------------------------------------------------------------------------------------------------------------------
+```
+
+///
+
+/// tab | leaf2
+
+```srl hl_lines="11-14"
+A:leaf2# / show network-instance tenant-2 route-table
+--------------------------------------------------------------------------------------------------------------------------------------------------------------
+IPv4 unicast route table of network instance tenant-2
+--------------------------------------------------------------------------------------------------------------------------------------------------------------
++------------------+------+-----------+--------------------+---------+---------+--------+-----------+-----------+-----------+-----------+--------------+
+| Prefix | ID | Route | Route Owner | Active | Origin | Metric | Pref | Next-hop | Next-hop | Backup | Backup Next- |
+| | | Type | | | Network | | | (Type) | Interface | Next-hop | hop |
+| | | | | | Instanc | | | | | (Type) | Interface |
+| | | | | | e | | | | | | |
++==================+======+===========+====================+=========+=========+========+===========+===========+===========+===========+==============+
+| 10.91.91.91/32 | 0 | bgp-evpn | bgp_evpn_mgr | True | tenant- | 0 | 170 | 10.0.0.1/ | | | |
+| | | | | | 2 | | | 32 (indir | | | |
+| | | | | | | | | ect/vxlan | | | |
+| | | | | | | | | ) | | | |
+| 10.92.92.92/32 | 0 | bgp | bgp_mgr | True | tenant- | 0 | 170 | 192.168.9 | ethernet- | | |
+| | | | | | 2 | | | 9.0/30 (i | 1/2.1 | | |
+| | | | | | | | | ndirect/l | | | |
+| | | | | | | | | ocal) | | | |
+| 192.168.99.0/30 | 0 | bgp-evpn | bgp_evpn_mgr | False | tenant- | 0 | 170 | 10.0.0.1/ | | | |
+| | | | | | 2 | | | 32 (indir | | | |
+| | | | | | | | | ect/vxlan | | | |
+| | | | | | | | | ) | | | |
+| 192.168.99.0/30 | 3 | local | net_inst_mgr | True | tenant- | 0 | 0 | 192.168.9 | ethernet- | | |
+| | | | | | 2 | | | 9.1 | 1/2.1 | | |
+| | | | | | | | | (direct) | | | |
+| 192.168.99.1/32 | 3 | host | net_inst_mgr | True | tenant- | 0 | 0 | None | None | | |
+| | | | | | 2 | | | (extract) | | | |
+| 192.168.99.3/32 | 3 | host | net_inst_mgr | True | tenant- | 0 | 0 | None (bro | | | |
+| | | | | | 2 | | | adcast) | | | |
++------------------+------+-----------+--------------------+---------+---------+--------+-----------+-----------+-----------+-----------+--------------+
+--------------------------------------------------------------------------------------------------------------------------------------------------------------
+IPv4 routes total : 6
+IPv4 prefixes with active routes : 5
+IPv4 prefixes with active ECMP routes: 0
+--------------------------------------------------------------------------------------------------------------------------------------------------------------
+```
+
+///
+
+Then let's send a ping between `ce1` and `ce2` loopbacks to ensure that the datapath works.
+
+```bash
+sudo docker exec -i -t l3evpn-ce1 bash
+```
+
+```srl
+ce1:/# ping 10.92.92.92 -I 10.91.91.91 -c 2
+PING 10.92.92.92 (10.92.92.92) from 10.91.91.91: 56 data bytes
+64 bytes from 10.92.92.92: seq=0 ttl=63 time=1.456 ms
+64 bytes from 10.92.92.92: seq=1 ttl=63 time=0.845 ms
+
+--- 10.92.92.92 ping statistics ---
+2 packets transmitted, 2 packets received, 0% packet loss
+round-trip min/avg/max = 0.845/1.150/1.456 ms
+```
+
+Great, the datapath works!
+
+Control-plane things work exactly the same way as in the previous chapter. We just announce more prefixes via RT5 NLRI, and that's it.
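+
+If you want to see those prefixes for yourself, you can list the EVPN IP Prefix (RT5) routes known in the default network instance; the command below matches the SR Linux release used in this tutorial and may differ slightly on other releases:
+
+```srl
+A:leaf1# / show network-instance default protocols bgp routes evpn route-type 5 summary
+```
+
+With `tenant-2` configured, the output should now also include the CE loopbacks and the PE-CE link subnet alongside the tenant-1 prefixes.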
+
+## Pros and Cons?
+
+The BGP on the Host model allows a host to advertise a range of prefixes using a dynamic routing protocol. Keeping the same configuration on all hosts and leafs simplifies management and troubleshooting, and allows for easy migration of hosts, as the BGP config on the host doesn't need to change when the host is moved to another leaf.
+
+At the same time, it requires a BGP speaker on the host, which may not be feasible in all environments and introduces another routing protocol and stack to the host. So, as always, evaluate the trade-offs and choose the model that fits your environment best.
+
+With a PE-CE protocol configured, it is possible to achieve multihoming and load balancing of the traffic between CE devices. The load balancing is done purely at the L3 level using ECMP: the CE devices advertise the same prefixes to different leafs, and therefore the remote CE devices have multiple paths to reach the advertised prefixes.
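+
+A quick way to tell whether ECMP kicks in is the route table command we already used in the verification section; in an ECMP scenario (which our lab, with its single-homed CEs, does not create) the remote prefix would typically resolve via multiple next hops and the `IPv4 prefixes with active ECMP routes` counter would be non-zero:
+
+```srl
+A:leaf1# / show network-instance tenant-2 route-table
+```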
+
+We hope this was a fun configuration marathon and that you enjoyed working through this lab. Let's wrap it up with a quick [summary](summary.md).
+
+
diff --git a/docs/tutorials/l3evpn/rt5-only/l3evpn.md b/docs/tutorials/l3evpn/rt5-only/l3evpn.md
new file mode 100644
index 00000000..d5f67b1e
--- /dev/null
+++ b/docs/tutorials/l3evpn/rt5-only/l3evpn.md
@@ -0,0 +1,489 @@
+---
+comments: true
+---
+
+# L3 EVPN Instance
+
+In the prior chapters, we have been busy laying out the infrastructure foundation for the L3 overlay service. First we configured the IP fabric underlay routing, making sure that all leaf devices can reach spines and each other. Then, we established an iBGP peering between the leaf and spine devices with `evpn` address family to exchange overlay routing information.
+
+All this has been leading up to the creation of the L3 EVPN instances that will allow our clients (Tenant Systems in RFC terms) to have private L3 connectivity between them.
+
+
+
+We have two L3 EVPN use cases to cover. The focus of this chapter is on creating an L3 EVPN instance for **Tenant 1**, whose devices (for example, servers, named `srv1` and `srv2` in the diagram) are directly connected to the fabric switches with L3 interfaces.
+In the next chapter we will build a VPN instance for Tenant 2, where the tenant devices are routers that run BGP and exchange routes with the leaf switches.
+
+As mentioned already, in this chapter the clients are directly connected to the leaf switches over L3 interfaces. Our clients are represented by the `srv1` and `srv2` nodes connected to the leaf switches. You can imagine that these nodes are servers or any other workload that requires L3 connectivity and is addressed with L3 interfaces.
+
+
+
+The server nodes have their `eth1` interfaces configured with IPv4 addresses, and our goal is to build L3 connectivity between them such that `srv1` can ping `srv2` using their `eth1` interfaces.
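+
+If you want to double-check the server addressing before touching the leafs, you can peek at `srv1`'s interface from the containerlab host; the node name follows the `l3evpn-` prefix used in this lab, and we assume the Alpine image ships the `ip` utility:
+
+```bash
+# show the IPv4 address configured on srv1's fabric-facing interface
+sudo docker exec -it l3evpn-srv1 ip addr show eth1
+```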
+
+On a logical level, the nodes should appear to be connected to a virtual router that will enable inter-subnet connectivity for them. This virtual router is represented by the L3 EVPN instance that we are about to create.
+
+
+
+## Client-facing interface on leaf
+
+First we configure the client-facing interface on the leaf switches. As per our lab, the srv device is connected to the leaf's `ethernet-1/1` port, so we enable this interface with a logical routed subinterface and assign an IP address.
+
+On each leaf we select an IP address from the same subnet that the client is using. For example, if `srv1` has IP `192.168.1.100/24`, then we address the leaf interface with `192.168.1.1/24`:
+
+```srl
+set / interface ethernet-1/1 subinterface 1 admin-state enable
+set / interface ethernet-1/1 subinterface 1 ipv4 admin-state enable
+set / interface ethernet-1/1 subinterface 1 ipv4 address 192.168.1.1/24
+```
+
+## VXLAN interface
+
+We also need to create a VXLAN Tunnel End Point (VTEP) that will be used to encap/decap VXLAN traffic. On SR Linux this is done by creating a logical tunnel interface defined by a virtual network identifier (VNI) and an overlay network type. Type **routed** is chosen for Layer 3 routing, while **bridged** is used for Layer 2 switching[^1].
+
+```srl
+set / tunnel-interface vxlan1 vxlan-interface 100 type routed
+set / tunnel-interface vxlan1 vxlan-interface 100 ingress vni 100
+```
+
+The VNI value plays a crucial role in the VXLAN encapsulation and decapsulation process: it identifies the VXLAN tunnel and is used to map the VXLAN traffic to the correct VRF instance. For tenant 1 we chose VNI 100.
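+
+Once the configuration is committed (the complete config snippets with `commit now` follow below), a quick sanity check is to inspect the state of the freshly created tunnel interface from the SR Linux CLI to confirm the type and ingress VNI; the exact output is omitted here:
+
+```srl
+info from state tunnel-interface vxlan1 vxlan-interface 100
+```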
+
+## L3 Network Instance (IP-VRF)
+
+The next step is to create an L3 Network Instance (IP-VRF) on our leaf switches; this is the virtual routing instance that contains the routing table for the L3 EVPN service.
+
+1. **Create Network Instance**
+
+ We create a network instance named `tenant-1` that will be of type `ip-vrf` to denote that it is an L3 VRF:
+
+ ```srl
+ set / network-instance tenant-1 type ip-vrf
+ set / network-instance tenant-1 admin-state enable
+ ```
+
+2. **Attach interfaces to the network instance**
+    Associate the previously configured client-facing subinterface and the tunnel interface with the `tenant-1` VRF so that they become part of it:
+
+ ```srl
+ set / network-instance tenant-1 interface ethernet-1/1.1
+ set / network-instance tenant-1 vxlan-interface vxlan1.100
+ ```
+
+3. **Configure EVPN Parameters**
+ At this step we configure the BGP EVPN parameters of this IP VRF by creating a `bgp-instance` and adding the vxlan interface under it.
+
+ ```srl
+ set / network-instance tenant-1 protocols bgp-evpn bgp-instance 1 admin-state enable
+ set / network-instance tenant-1 protocols bgp-evpn bgp-instance 1 vxlan-interface vxlan1.100
+ ```
+
+    Define an EVPN Virtual Identifier (EVI) under the bgp-evpn instance; it will be used as a service identifier and to auto-derive the route distinguisher value.
+
+ ```srl
+ set / network-instance tenant-1 protocols bgp-evpn bgp-instance 1 evi 1
+ ```
+
+    We also create the `bgp-vpn` context under the IP VRF to enable multi-protocol BGP operation to support the EVPN route exchange.
+    Since we are going to exchange VPN routes (EVPN in this case), we need to provide Route Target values for import and export so that the routes marked with this RT value are imported into the target VRF.
+    We set the RT manually, because otherwise the auto-derivation process would use the AS number specified under the global BGP process, and we have different AS numbers per leaf.
+
+ ```srl
+ set / network-instance tenant-1 protocols bgp-vpn bgp-instance 1
+ set / network-instance tenant-1 protocols bgp-vpn bgp-instance 1 route-target export-rt target:65535:1
+ set / network-instance tenant-1 protocols bgp-vpn bgp-instance 1 route-target import-rt target:65535:1
+ ```
+
+ Optionally configure ECMP to enable load balancing in the overlay network.
+
+ ```srl
+ set / network-instance tenant-1 protocols bgp-evpn bgp-instance 1 ecmp 8
+ ```
+
+The resulting configuration will look like this:
+
+/// tab | leaf1
+
+```srl
+enter candidate
+
+--8<-- "https://raw.githubusercontent.com/srl-labs/srl-l3evpn-basics-lab/main/startup_configs/leaf1.conf:client-interface"
+
+--8<-- "https://raw.githubusercontent.com/srl-labs/srl-l3evpn-basics-lab/main/startup_configs/leaf1.conf:tunnel-interface"
+
+--8<-- "https://raw.githubusercontent.com/srl-labs/srl-l3evpn-basics-lab/main/startup_configs/leaf1.conf:ipvrf"
+
+commit now
+```
+
+///
+/// tab | leaf2
+
+```srl
+enter candidate
+
+--8<-- "https://raw.githubusercontent.com/srl-labs/srl-l3evpn-basics-lab/main/startup_configs/leaf2.conf:client-interface"
+
+--8<-- "https://raw.githubusercontent.com/srl-labs/srl-l3evpn-basics-lab/main/startup_configs/leaf2.conf:tunnel-interface"
+
+--8<-- "https://raw.githubusercontent.com/srl-labs/srl-l3evpn-basics-lab/main/startup_configs/leaf2.conf:ipvrf"
+
+commit now
+```
+
+///
+
+With this configuration in place we've built the following layout of basic L3 EVPN constructs on our leaf switches:
+
+
+
+## Verification
+
+To verify the L3 EVPN configuration we can start by checking the BGP VPN status: the RD value is auto-derived from the EVI we set, while the RT value is manually set to the same value on both leafs.
+
+```srl
+A:leaf1# show network-instance tenant-1 protocols bgp-vpn bgp-instance 1
+==================================================================================================
+Net Instance : tenant-1
+ bgp Instance 1
+--------------------------------------------------------------------------------------------------
+ route-distinguisher: 10.0.0.1:1, auto-derived-from-evi
+ export-route-target: target:65535:1, manual
+ import-route-target: target:65535:1, manual
+==================================================================================================
+```
+
+Next we can check the overlay BGP neighbor status:
+
+```srl
+A:leaf1# / show network-instance default protocols bgp neighbor 10.*
+----------------------------------------------------------------------------------------------------------------------------------------------------------
+BGP neighbor summary for network-instance "default"
+Flags: S static, D dynamic, L discovered by LLDP, B BFD enabled, - disabled, * slow
+----------------------------------------------------------------------------------------------------------------------------------------------------------
+----------------------------------------------------------------------------------------------------------------------------------------------------------
++-----------------+-------------------------+-----------------+------+---------+--------------+--------------+------------+-------------------------+
+| Net-Inst | Peer | Group | Flag | Peer-AS | State | Uptime | AFI/SAFI | [Rx/Active/Tx] |
+| | | | s | | | | | |
++=================+=========================+=================+======+=========+==============+==============+============+=========================+
+| default | 10.10.10.10 | overlay | S | 65535 | established | 0d:0h:7m:7s | evpn | [1/1/1] |
++-----------------+-------------------------+-----------------+------+---------+--------------+--------------+------------+-------------------------+
+----------------------------------------------------------------------------------------------------------------------------------------------------------
+Summary:
+1 configured neighbors, 1 configured sessions are established,0 disabled peers
+1 dynamic peers
+```
+
+Now we see that a single route has been sent and received by `leaf1` to/from the `spine` switch acting as a Route Reflector. Let's check what has been received and sent:
+
+/// tab | received
+
+```srl
+A:leaf1# / show network-instance default protocols bgp neighbor 10.* received-routes evpn
+--------------------------------------------------------------------------------------------------
+Peer : 10.10.10.10, remote AS: 65535, local AS: 65535
+Type : static
+Description : None
+Group : overlay
+--------------------------------------------------------------------------------------------------
+Status codes: u=used, *=valid, >=best, x=stale
+Origin codes: i=IGP, e=EGP, ?=incomplete
+--------------------------------------------------------------------------------------------------
+Type 5 IP Prefix Routes
++--------+---------------------+--------+----------------+----------+-----+---------+------+
+| Status | Route-distinguisher | Tag-ID | IP-address | Next-Hop | MED | LocPref | Path |
++========+=====================+========+================+==========+=====+=========+======+
+| u*> | 10.0.0.2:100 | 0 | 192.168.2.0/24 | 10.0.0.2 | - | 100 | |
++--------+---------------------+--------+----------------+----------+-----+---------+------+
+--------------------------------------------------------------------------------------------------
+0 Ethernet Auto-Discovery routes 0 used, 0 valid
+0 MAC-IP Advertisement routes 0 used, 0 valid
+0 Inclusive Multicast Ethernet Tag routes 0 used, 0 valid
+0 Ethernet Segment routes 0 used, 0 valid
+1 IP Prefix routes 1 used, 1 valid
+--------------------------------------------------------------------------------------------------
+```
+
+///
+/// tab | sent
+
+```srl
+A:leaf1# / show network-instance default protocols bgp neighbor 10.* advertised-routes evpn
+--------------------------------------------------------------------------------------------------
+Peer : 10.10.10.10, remote AS: 65535, local AS: 65535
+Type : static
+Description : None
+Group : overlay
+--------------------------------------------------------------------------------------------------
+Origin codes: i=IGP, e=EGP, ?=incomplete
+--------------------------------------------------------------------------------------------------
+Type 5 IP Prefix Routes
++---------------------+--------+----------------+----------+-----+---------+------+
+| Route-distinguisher | Tag-ID | IP-address | Next-Hop | MED | LocPref | Path |
++=====================+========+================+==========+=====+=========+======+
+| 10.0.0.1:100 | 0 | 192.168.1.0/24 | 10.0.0.1 | - | 100 | |
++---------------------+--------+----------------+----------+-----+---------+------+
+--------------------------------------------------------------------------------------------------
+--------------------------------------------------------------------------------------------------
+0 advertised Ethernet Auto-Discovery routes
+0 advertised MAC-IP Advertisement routes
+0 advertised Inclusive Multicast Ethernet Tag routes
+0 advertised Ethernet Segment routes
+1 advertised IP Prefix routes
+--------------------------------------------------------------------------------------------------
+```
+
+///
+
+Brilliant, we received the remote IP prefix `192.168.2.0/24` and sent the local IP prefix `192.168.1.0/24` to the other leaf.
+
+/// details | Route Summarization
+In a real-world scenario, you would see more routes being exchanged, especially if you have multiple clients connected to the leaf switches. A good design practice is to summarize the routes on the leaf switches to reduce the number of routes exchanged between the leafs and the spine and to minimize the control plane churn when host routes are added or removed.
+
+Route summarization is not covered in this tutorial, but it should not be that complicated to add!
+///
+
+Let's have a look at the routing table of the `tenant-1` IP-VRF on both leafs:
+
+/// tab | leaf1
+
+```srl hl_lines="17-19"
+A:leaf1# / show network-instance tenant-1 route-table
+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+IPv4 unicast route table of network instance tenant-1
+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
++----------------------+------+-----------+---------------------+---------+---------+--------+-----------+-------------+-------------+-------------+----------------+
+| Prefix | ID | Route | Route Owner | Active | Origin | Metric | Pref | Next-hop | Next-hop | Backup | Backup Next- |
+| | | Type | | | Network | | | (Type) | Interface | Next-hop | hop Interface |
+| | | | | | Instanc | | | | | (Type) | |
+| | | | | | e | | | | | | |
++======================+======+===========+=====================+=========+=========+========+===========+=============+=============+=============+================+
+| 192.168.1.0/24 | 4 | local | net_inst_mgr | True | tenant- | 0 | 0 | 192.168.1.1 | ethernet- | | |
+| | | | | | 1 | | | (direct) | 1/1.1 | | |
+| 192.168.1.1/32 | 4 | host | net_inst_mgr | True | tenant- | 0 | 0 | None | None | | |
+| | | | | | 1 | | | (extract) | | | |
+| 192.168.1.255/32 | 4 | host | net_inst_mgr | True | tenant- | 0 | 0 | None | | | |
+| | | | | | 1 | | | (broadcast) | | | |
+| 192.168.2.0/24 | 0 | bgp-evpn | bgp_evpn_mgr | True | tenant- | 0 | 170 | 10.0.0.2/32 | | | |
+| | | | | | 1 | | | (indirect/v | | | |
+| | | | | | | | | xlan) | | | |
++----------------------+------+-----------+---------------------+---------+---------+--------+-----------+-------------+-------------+-------------+----------------+
+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+IPv4 routes total : 4
+IPv4 prefixes with active routes : 4
+IPv4 prefixes with active ECMP routes: 0
+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+```
+
+///
+
+/// tab | leaf2
+
+```srl hl_lines="10"
+A:leaf2# / show network-instance tenant-1 route-table
+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+IPv4 unicast route table of network instance tenant-1
+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
++----------------------+------+-----------+---------------------+---------+---------+--------+-----------+-------------+-------------+-------------+----------------+
+| Prefix | ID | Route | Route Owner | Active | Origin | Metric | Pref | Next-hop | Next-hop | Backup | Backup Next- |
+| | | Type | | | Network | | | (Type) | Interface | Next-hop | hop Interface |
+| | | | | | Instanc | | | | | (Type) | |
+| | | | | | e | | | | | | |
++======================+======+===========+=====================+=========+=========+========+===========+=============+=============+=============+================+
+| 192.168.1.0/24 | 0 | bgp-evpn | bgp_evpn_mgr | True | tenant- | 0 | 170 | 10.0.0.1/32 | | | |
+| | | | | | 1 | | | (indirect/v | | | |
+| | | | | | | | | xlan) | | | |
+| 192.168.2.0/24 | 4 | local | net_inst_mgr | True | tenant- | 0 | 0 | 192.168.2.1 | ethernet- | | |
+| | | | | | 1 | | | (direct) | 1/1.1 | | |
+| 192.168.2.1/32 | 4 | host | net_inst_mgr | True | tenant- | 0 | 0 | None | None | | |
+| | | | | | 1 | | | (extract) | | | |
+| 192.168.2.255/32 | 4 | host | net_inst_mgr | True | tenant- | 0 | 0 | None | | | |
+| | | | | | 1 | | | (broadcast) | | | |
++----------------------+------+-----------+---------------------+---------+---------+--------+-----------+-------------+-------------+-------------+----------------+
+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+IPv4 routes total : 4
+IPv4 prefixes with active routes : 4
+IPv4 prefixes with active ECMP routes: 0
+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+```
+
+///
+
+The routing table contains local and remote prefixes. Local prefixes, as expected, resolve via the local interface pointing towards the client device, while the remote prefix is resolved via the VXLAN tunnel interface.
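+
+If you are curious how that VXLAN resolution looks from `leaf1`'s side, the tunnel table can be queried for the remote VTEP (leaf2's `10.0.0.2` address); we will run the same kind of check against `leaf2` in the control plane section below:
+
+```srl
+A:leaf1# / show tunnel vxlan-tunnel vtep 10.0.0.2
+```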
+
+The last check is to verify that the datapath is working correctly. Let's connect to `srv1`:
+
+```bash
+sudo docker exec -i -t l3evpn-srv1 ash
+```
+
+```srl
+/ # ping 192.168.2.100 -c 2
+PING 192.168.2.100 (192.168.2.100): 56 data bytes
+64 bytes from 192.168.2.100: seq=0 ttl=63 time=1.205 ms
+64 bytes from 192.168.2.100: seq=1 ttl=63 time=0.841 ms
+
+--- 192.168.2.100 ping statistics ---
+2 packets transmitted, 2 packets received, 0% packet loss
+round-trip min/avg/max = 0.841/1.023/1.205 ms
+```
+
+Sweet, `srv1` can ping `srv2` over the IP fabric using the L3 EVPN service that we've just configured. Now let's dig deeper into the protocol details and explore the EVPN route types that made this datapath connectivity possible.
+
+## Control plane details
+
+It is quite important to understand how much simpler the control plane operations are in the case of a pure L3 EVPN service with no bridge domains involved. Here is what happens when we commit the configuration of the `tenant-1` L3 EVPN service on the leaf switches.
+
+/// note | packetcapture or it didn't happen
+The following explanation is based on the packet capture fetched with Edgeshark from the `leaf1`'s `e1-49` interface. You can [download the pcap][capture-evpn-rt5].
+
+///
+
+`leaf1` establishes a BGP session with `spine` (spine acts as a Route Reflector) and signals the multiprotocol capability AFI/SAFI=L2VPN/EVPN.
+
+```title="packet #9" linenums="1"
+Internet Protocol Version 4, Src: 10.0.0.1, Dst: 10.10.10.10
+Transmission Control Protocol, Src Port: 40979, Dst Port: 179, Seq: 1, Ack: 1, Len: 49
+Border Gateway Protocol - OPEN Message
+ Marker: ffffffffffffffffffffffffffffffff
+ Length: 49
+ Type: OPEN Message (1)
+ Version: 4
+ My AS: 65535
+ Hold Time: 90
+ BGP Identifier: 10.0.0.1
+ Optional Parameters Length: 20
+ Optional Parameters
+ Optional Parameter: Capability
+ Parameter Type: Capability (2)
+ Parameter Length: 18
+ Capability: Graceful Restart capability
+ Capability: Multiprotocol extensions capability
+ Type: Multiprotocol extensions capability (1)
+ Length: 4
+ AFI: Layer-2 VPN (25)
+ Reserved: 00
+ SAFI: EVPN (70)
+ Capability: Route refresh capability
+ Capability: Support for 4-octet AS number capability
+```
+
+Since both leafs have L3 interfaces in the `tenant-1` IP-VRF and EVPN is configured in this network instance, the BGP process starts exchanging EVPN routes.
+
+First we have `leaf1` sending an update with the following contents:
+
+```title="packet #15" linenums="1" hl_lines="16 20 25-28 39-42"
+Internet Protocol Version 4, Src: 10.0.0.1, Dst: 10.10.10.10
+Transmission Control Protocol, Src Port: 40979, Dst Port: 179, Seq: 88, Ack: 88, Len: 143
+Border Gateway Protocol - UPDATE Message
+ Marker: ffffffffffffffffffffffffffffffff
+ Length: 113
+ Type: UPDATE Message (2)
+ Withdrawn Routes Length: 0
+ Total Path Attribute Length: 90
+ Path attributes
+ Path Attribute - MP_REACH_NLRI
+ Flags: 0x90, Optional, Extended-Length, Non-transitive, Complete
+ Type Code: MP_REACH_NLRI (14)
+ Length: 45
+ Address family identifier (AFI): Layer-2 VPN (25)
+ Subsequent address family identifier (SAFI): EVPN (70)
+ Next hop: 10.0.0.1
+ Number of Subnetwork points of attachment (SNPA): 0
+ Network Layer Reachability Information (NLRI)
+ EVPN NLRI: IP Prefix route
+ Route Type: IP Prefix route (5)
+ Length: 34
+ Route Distinguisher: 00010a0000010064 (10.0.0.1:1)
+ ESI: 00:00:00:00:00:00:00:00:00:00
+ Ethernet Tag ID: 0
+ IP prefix length: 24
+ IPv4 address: 192.168.1.0
+ IPv4 Gateway address: 0.0.0.0
+ VNI: 100
+ Path Attribute - ORIGIN: IGP
+ Path Attribute - AS_PATH: empty
+ Path Attribute - LOCAL_PREF: 100
+ Path Attribute - EXTENDED_COMMUNITIES
+ Flags: 0xc0, Optional, Transitive, Complete
+ Type Code: EXTENDED_COMMUNITIES (16)
+ Length: 24
+ Carried extended communities: (3 communities)
+ Route Target: 65535:100 [Transitive 2-Octet AS-Specific]
+ EVPN Router's MAC: Router's MAC: 1a:d3:02:ff:00:00 [Transitive EVPN]
+ Encapsulation: VXLAN Encapsulation [Transitive Opaque]
+ Type: Transitive Opaque (0x03)
+ Subtype (Opaque): Encapsulation (0x0c)
+ Tunnel type: VXLAN Encapsulation (8)
+```
+
+There is quite a lot of information in this Route Type 5 (RT5), but the most important part is the EVPN NLRI that contains the IP Prefix route with the `192.168.1.0` address, a `/24` prefix length, and `VNI=100`.
+This prefix route is derived from the IP address of the `ethernet-1/1.1` subinterface attached to the `ip-vrf-1` network instance, and the VNI value is the same as the one used in the VXLAN tunnel interface attached to the `tenant-1` network instance.
+
+At the very end of this update message we see the extended community that indicates that VXLAN encapsulation is used for this route. This information is crucial for the receiving leaf to know how to encapsulate the traffic towards the destination. We can confirm that this information was received by looking at the tunnel table on `leaf2`:
+
+```srl
+A:leaf2# /show tunnel vxlan-tunnel vtep 10.0.0.1
+--------------------------------------------------------------------
+Show report for vxlan-tunnels vtep
+--------------------------------------------------------------------
+VTEP Address: 10.0.0.1
+Index : 320047052051
+Last Change : 2024-07-22T12:57:11.000Z
+--------------------------------------------------------------------
+Destinations
+--------------------------------------------------------------------
++------------------+-----------------+------------+----------------+
+| Tunnel Interface | VXLAN Interface | Egress VNI | Type |
++==================+=================+============+================+
+| vxlan1 | 100 | 100 | ip-destination |
++------------------+-----------------+------------+----------------+
+--------------------------------------------------------------------
+0 bridged destinations, 0 multicast, 0 unicast, 0 es
+1 routed destinations
+```
+
+The VXLAN tunnel towards `leaf1` is set up thanks to the extended community information in the EVPN route.
+
+And, quite frankly, this is it. A single RT5 route is all it takes to set up the non-IRB-based L3 EVPN service. Much simpler than the L2 EVPN service, isn't it?
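+
+If you prefer the CLI over packet captures, you can also list the RT5 routes straight from the BGP RIB. Below is a minimal sketch of such a check on `leaf2` (output omitted; the exact command tree may differ slightly between SR Linux releases):
+
+```srl
+A:leaf2# / show network-instance default protocols bgp routes evpn route-type 5 summary
+```
+
+If everything is in place, the summary should include the `192.168.1.0/24` prefix received from `leaf1` with VNI `100`, matching the capture above.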
+
+## Dataplane details
+
+Just to make sure that the control plane is not lying to us, let's have a look at the packet capture from the `e1-49` interface of `leaf1` when we have pings running from `srv1` to `srv2`:
+
+/// note | Dataplane packet capture
+[Here you can download][capture-icmp] the dataplane pcap for the encapsulated ICMP packets.
+///
+
+```hl_lines="5-9"
+Frame 11: 148 bytes on wire (1184 bits), 148 bytes captured (1184 bits) on interface e1-49, id 0
+Ethernet II, Src: 1a:d3:02:ff:00:31 (1a:d3:02:ff:00:31), Dst: 1a:80:04:ff:00:01 (1a:80:04:ff:00:01)
+Internet Protocol Version 4, Src: 10.0.0.1, Dst: 10.0.0.2
+User Datagram Protocol, Src Port: 50963, Dst Port: 4789
+Virtual eXtensible Local Area Network
+ Flags: 0x0800, VXLAN Network ID (VNI)
+ Group Policy ID: 0
+ VXLAN Network Identifier (VNI): 100
+ Reserved: 0
+Ethernet II, Src: 1a:d3:02:ff:00:00 (1a:d3:02:ff:00:00), Dst: 1a:1f:03:ff:00:00 (1a:1f:03:ff:00:00)
+Internet Protocol Version 4, Src: 192.168.1.100, Dst: 192.168.2.100
+Internet Control Message Protocol
+```
+
+Good news, the ICMP packets are encapsulated in VXLAN frames and sent over the IP fabric towards the destination. The destination leaf will decapsulate the packet and forward it towards the `ce2` device.
+
+## Pros and Cons?
+
+If the dataplane is simpler and there are fewer things to configure, then why not use L3 EVPN with L3 interfaces all the time? Well, the answer is simple: it is not always feasible.
+
+To start with, you may have workloads that still require L2 connectivity. In this case you would need to use an L2 EVPN service.
+
+Multihoming requires your server to be connected to multiple leaf switches and to use ECMP to load-balance the traffic. This means the server must be able to handle routing and perform ECMP hashing, which is another configuration step that may not be feasible in some cases.
+
+Besides multihoming, workload migration may be a challenge, since moving a workload from one leaf to another would require changing the IP address on the server.
+
+Some of these limitations may be lifted when [a more dynamic L3 EVPN service](l3evpn-bgp-pe-ce.md) is used, with the CE devices being actual routers that exchange prefixes with the L3 EVPN instance running on the leaf switches. Let's check it out!
+
+[^1]: Like it is in the [L2 EVPN tutorial](../../l2evpn/evpn.md#tunnelvxlan-interface).
+
+[capture-evpn-rt5]: https://gitlab.com/rdodin/pics/-/wikis/uploads/e0d9687ad72413769e4407eb4e498f71/bgp-underlay-overlay-ex1.pcapng
+[capture-icmp]: https://gitlab.com/rdodin/pics/-/wikis/uploads/580114f029cd12ef3c459f84b07e2963/icmp-vxlan.pcapng
+
+
diff --git a/docs/tutorials/l3evpn/rt5-only/overlay.md b/docs/tutorials/l3evpn/rt5-only/overlay.md
new file mode 100644
index 00000000..e4ec3a4f
--- /dev/null
+++ b/docs/tutorials/l3evpn/rt5-only/overlay.md
@@ -0,0 +1,129 @@
+---
+comments: true
+---
+
+# Overlay Routing
+
+With the IP underlay configured, we have prepared the ground for the EVPN overlay services. In order to create an EVPN service on top of an IP fabric, our leaf devices should be able to exchange overlay routing information. And you guessed it, there is no better protocol for this job than BGP with the EVPN address family. At least that's what the industry has agreed upon.
+
+Since all our leaf switches can reach each other via loopbacks, we can establish BGP peerings between them with the `evpn` address family enabled. Operators can choose to use iBGP or eBGP for this purpose. In this tutorial, we will use iBGP for the overlay routing, with the spine acting as the Route Reflector (RR).
+Utilizing RRs reduces the number of BGP sessions: leaf switches peer only with the RRs and still receive the routes from all other leafs. This approach minimizes configuration effort and allows for centralized application of routing policies, but, at the same time, it adds another role to the spine.
+
+
+
+In this section our goal is to set up iBGP sessions with the `evpn` address family between our leaf switches so that when we configure the L3 EVPN instance in the next chapters, the overlay EVPN routes will be exchanged between the leafs using these sessions.
+
+Let's have a look at the configuration steps required to set up overlay routing on our leaf switches:
+
+1. **BGP peer-group**
+ Just like with the underlay, creating a BGP peer group simplifies configuring multiple BGP peers with similar requirements by grouping them together. We will call this group `overlay`.
+
+ ```{.srl .no-select}
+ set / network-instance default protocols bgp group overlay
+ ```
+
+2. **Autonomous System Number**
+    Since we are configuring a new iBGP peering, all routers should share the same AS number. We will use AS 65535; note that we have to set both the peer-as and the local-as, since otherwise the globally configured underlay AS number would be used.
+
+ ```{.srl .no-select}
+ set / network-instance default protocols bgp group overlay peer-as 65535
+ set / network-instance default protocols bgp group overlay local-as as-number 65535
+ ```
+
+3. **Address Family**
+ In the overlay, we only care about the EVPN routes, hence we are enabling the EVPN address family for the overlay BGP group and disabling the `ipv4-unicast` family that was enabled globally for the BGP process for the underlay routing.
+
+ ```{.srl .no-select}
+ set / network-instance default protocols bgp group overlay afi-safi evpn admin-state enable
+ set / network-instance default protocols bgp group overlay afi-safi ipv4-unicast admin-state disable
+ ```
+
+4. **Neighbors**
+
+ /// tab | leaf1 & leaf2
+    Leaf devices use the spine's system IP for iBGP peering.
+
+ ```{.srl .no-select}
+ set / network-instance default protocols bgp neighbor 10.10.10.10 admin-state enable
+ set / network-instance default protocols bgp neighbor 10.10.10.10 peer-group overlay
+ ```
+
+ ///
+
+ /// tab | spine ( RR )
+    On the spine we configure dynamic peering that accepts peers with any IP address. This simplifies the configuration, as we don't have to specify each leaf's IP address.
+
+ ```{.srl .no-select}
+ set / network-instance default protocols bgp dynamic-neighbors accept match 0.0.0.0/0 peer-group overlay
+ ```
+
+ ///
+
+5. **EVPN Route Reflector**
+
+    The command below enables the route reflector functionality and only needs to be applied on the spine.
+
+ ```{.srl .no-select}
+ set / network-instance default protocols bgp group overlay route-reflector client true
+ ```
+
+## Resulting configs
+
+Here are the config snippets for the leaf and spine devices covering everything we discussed above.
+
+/// tab | leaf1 & leaf2
+
+```srl
+enter candidate
+
+--8<-- "https://raw.githubusercontent.com/srl-labs/srl-l3evpn-basics-lab/main/startup_configs/leaf1.conf:ibgp-overlay"
+
+commit now
+
+```
+
+///
+/// tab | spine
+
+```srl
+enter candidate
+
+--8<-- "https://raw.githubusercontent.com/srl-labs/srl-l3evpn-basics-lab/main/startup_configs/spine.conf:ibgp-overlay"
+
+commit now
+```
+
+///
+
+## Verification
+
+Similar to the verifications we did for the underlay, we can check the BGP neighbor status to ensure that the overlay iBGP peering is up and running. Since all leafs establish their iBGP session with the spine, we can list the sessions on the spine to ensure that all leafs are connected.
+
+```{.srl .no-select}
+--{ + running }--[ ]--
+A:spine# / show network-instance default protocols bgp neighbor 10.*
+----------------------------------------------------------------------------------------------------
+BGP neighbor summary for network-instance "default"
+Flags: S static, D dynamic, L discovered by LLDP, B BFD enabled, - disabled, * slow
+----------------------------------------------------------------------------------------------------
+----------------------------------------------------------------------------------------------------
++----------+----------+----------+----------+----------+----------+----------+----------+----------+
+| Net-Inst | Peer | Group | Flags | Peer-AS | State | Uptime | AFI/SAFI | [Rx/Acti |
+| | | | | | | | | ve/Tx] |
++==========+==========+==========+==========+==========+==========+==========+==========+==========+
+| default | 10.0.0.1 | overlay | D | 65535 | establis | 0d:0h:3m | evpn | [0/0/0] |
+| | | | | | hed | :35s | | |
+| default | 10.0.0.2 | overlay | D | 65535 | establis | 0d:0h:3m | evpn | [0/0/0] |
+| | | | | | hed | :27s | | |
++----------+----------+----------+----------+----------+----------+----------+----------+----------+
+----------------------------------------------------------------------------------------------------
+Summary:
+0 configured neighbors, 0 configured sessions are established,0 disabled peers
+4 dynamic peers
+```
+
+Both iBGP sessions from the spine towards the leafs are established. It is also perfectly fine to see no prefixes exchanged at this point, as we have not yet configured any EVPN services that would create the EVPN routes.
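+
+Once the EVPN services from the next chapters are in place, you can revisit this peering and list the EVPN routes received from a leaf. A sketch of such a check on the spine is shown below (assuming `10.0.0.1` is `leaf1`'s system address, as in this lab; output omitted):
+
+```srl
+A:spine# / show network-instance default protocols bgp neighbor 10.0.0.1 received-routes evpn
+```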
+
+This is what we are going to do next in the [L3 EVPN section](l3evpn.md).
+
+
diff --git a/docs/tutorials/l3evpn/rt5-only/summary.md b/docs/tutorials/l3evpn/rt5-only/summary.md
new file mode 100644
index 00000000..defc2861
--- /dev/null
+++ b/docs/tutorials/l3evpn/rt5-only/summary.md
@@ -0,0 +1,43 @@
+---
+comments: true
+---
+
+# Summary
+
+While originally designed for layer 2 VPNs, EVPN has been extended to support inter-subnet routing, and subsequently, layer 3 VPNs. This tutorial walked you through the configuration of **a simple, interface-less, RT5-only layer 3 EVPN service**[^1] deployed on top of an IP fabric.
+
+The two scenarios covered in this tutorial included a Layer 3 CE end device connected to a leaf switch and a Layer 3 CE router device that utilized a PE-CE routing protocol to exchange prefixes. In both scenarios, the EVPN service was configured to provide end-to-end Layer 3 reachability between the CE prefixes.
+
+Since no IRB interfaces were used in this tutorial, the EVPN control plane was extremely simple, with only EVPN RT5 routes being exchanged between the leaf switches. No ARP/ND synchronization, no IMET routes, no MAC tables. This is a significant simplification compared to the state required to support Layer 2-based services.
+
+However, there are, as always, some considerations to keep in mind:
+
+1. When connecting servers to the fabric using L3 routed interfaces (as opposed to L2 interfaces), the servers must be reconfigured to use the leaf switch as the default gateway. You will have to configure routed interfaces on the leaf switches for each server, which may become a challenge in certain environments.
+    All-active load balancing must be done with ECMP and may require a routing protocol that supports ECMP. This, again, may or may not be feasible.
+2. When a PE-CE protocol is used, the configuration tasks are more complex on the CE side compared to a simple LAG configuration in the case of an L2 EVPN service or L3 EVPN with IRB.
+3. And lastly, another consideration to keep in mind when opting for pure Layer 3 services is the legacy workloads that may _require_ Layer 2 connectivity. In such cases, a Layer 2 EVPN is a must.
+
+In a nutshell, network designers and operators should carefully consider the trade-offs between the simplicity of the EVPN control plane and the additional tasks required on the server and CE device side when deciding on the type of EVPN service to deploy.
+
+
+/// admonition | Pure L3 EVPN fabrics in the wild?
+ type: quote
+We call on the community to share their experiences with pure L3 EVPN fabrics. Have you deployed one? What were the challenges? What were the benefits?
+
+Here is a [LinkedIn post with some pretty interesting comments](https://www.linkedin.com/feed/update/urn:li:activity:7221449552220823552/) on the topic by Pavel Lunin from Scaleway.
+///
+
+
+We are going to cover more advanced L3 EVPN scenarios with symmetric IRB interfaces, Interface-full mode of operation, and ESI support in the upcoming tutorials. Stay tuned!
+
+/// details | Resulting configs
+If you wish to start a lab with the resulting configurations from this tutorial already in place, you need to uncomment the `startup-config` knobs in the [topology file][lab-topo] prior to the lab deployment.
+
+The repository also contains the full startup configs for each device in the [`startup_configs`][startup-configs-dir] directory.
+
+///
+
+[lab-topo]: https://github.com/srl-labs/srl-l3evpn-tutorial-lab/tree/main/l3evpn-tutorial.clab.yml
+[startup-configs-dir]: https://github.com/srl-labs/srl-l3evpn-tutorial-lab/tree/main/startup_configs
+
+[^1]: A more advanced, feature-rich, and therefore more complex L3 EVPN service introduces a combination of MAC and IP VRFs with IRB interfaces and ESI support. This tutorial does not cover these advanced topics.
diff --git a/docs/tutorials/l3evpn/rt5-only/underlay.md b/docs/tutorials/l3evpn/rt5-only/underlay.md
new file mode 100644
index 00000000..1dcd0739
--- /dev/null
+++ b/docs/tutorials/l3evpn/rt5-only/underlay.md
@@ -0,0 +1,825 @@
+---
+comments: true
+---
+
+
+
+# Underlay Routing
+
+Prior to configuring the EVPN-based overlay and services, underlay routing should be set up. The underlay routing ensures that all leaf VXLAN Tunnel End Points (VTEPs) can reach each other via the IP fabric. This is typically done by leveraging a routing protocol to exchange the loopback addresses of the leaf devices.
+
+SR Linux supports the following routing protocols for the underlay network:
+
+* ISIS
+* OSPF
+* BGP
+
+BGP as a routing protocol for large IP fabrics was well defined in [RFC7938](https://datatracker.ietf.org/doc/html/rfc7938) and can offer the following:
+
+* **Scalability:** BGP is known to scale well in very large networks, making it a good choice for scaled-out data center fabrics.
+* **Flexible Policy Engine:** BGP provides numerous attributes for policy matching, offering extensive options for traffic steering.
+* **Smaller Failure Impact Radius with BGP compared to IGP:**
+ * In case of a link failure in an ISIS/OSPF network, all devices need to run SPF on the entire link state database. The blast radius is effectively the whole network.
+    * In case of a link failure in an eBGP network, only devices one hop away need to recalculate the best path. This is because eBGP announces all routes with next-hop self, so the next hop remains unchanged. The failure impact radius is only one hop.
+
+Using eBGP as the underlay routing protocol, our lab can be depicted as follows:
+
+
+
+Leaf devices will peer with the spine device over eBGP and exchange IPv4 loopback prefixes. The loopback prefixes will later be used for iBGP peering with the EVPN address family; we will get to that in the [overlay section](overlay.md) of this tutorial.
+
+## BGP Unnumbered
+
+One of the infamous BGP disadvantages was that BGP did not have a neighbor discovery feature like IGP protocols do. Without it, operators had to configure addresses on every BGP link, which was mundane and error-prone.
+
+However, the popularity of BGP in the data center moved the needle in the right direction, and today certain Network OSes, SR Linux included, can set up BGP peering sessions with minimal effort using an [IPv6 Link Local Address (LLA)](https://en.wikipedia.org/wiki/Link-local_address). And with the [RFC 8950][RFC 8950] capability we can exchange IPv4 prefixes over the peering link with IPv6 next hops.
+
+/// admonition | BGP IPv6 Unnumbered
+ type: quote
+The dynamic setup of one or more single-hop BGP sessions over a network segment that has no globally-unique IPv4 or IPv6 addresses is often called **BGP IPv6 Unnumbered**.
+
+Read more about it in the [SR Linux documentation][srl-unnumbered-docs].
+///
+
+BGP IPv6 Unnumbered utilizes:
+
+* **IPv6 Link-Local Addresses (IPv6 LLA):** Employed for communication on the same network segment, these addresses aren't routed outside their segment. In unnumbered BGP configurations, interfaces use IPv6 link-local addresses to form BGP sessions without requiring a unique global IP address per interface.
+* **Router Advertisements (RA):** As part of the Neighbor Discovery Protocol, Router Advertisements enable routers to broadcast their presence and share various information about the link and the Internet Layer on an IPv6 subnet. In BGP unnumbered, RA messages are used to announce/learn the peer’s link-local address.
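+
+In SR Linux, these two mechanisms roughly map onto a handful of configuration statements that we will apply step by step later in this section. As a preview, here is a sketch of the relevant `leaf1` statements in `set` syntax, derived from the snippets used further below:
+
+```srl
+set / interface ethernet-1/49 subinterface 1 ipv6 admin-state enable
+set / interface ethernet-1/49 subinterface 1 ipv6 router-advertisement router-role admin-state enable
+set / network-instance default protocols bgp dynamic-neighbors interface ethernet-1/49.1 peer-group underlay
+```
+
+The first line enables IPv6 on the fabric-facing subinterface (an LLA is auto-assigned to it), the second makes the router advertise that LLA in RA messages, and the third tells BGP to form a session with whatever peer address is learned on that interface.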
+
+## Physical Interfaces
+
+The first thing we need to configure is the interfaces between the leaf and spine devices. According to the declarative definition of the lab topology file, our physical connections are as follows:
+
+
+
+The examples will target the highlighted interfaces between `leaf1` and the spine device, but at the end of this section, you will find the configuration snippets for all devices.
+
+We begin with connecting to the CLI of our nodes via SSH[^1]:
+
+```bash
+ssh l3evpn-leaf1
+```
+
+Let's go through the step-by-step process of configuring an interface on the `leaf1` switch:
+
+1. Enter the `candidate` configuration mode to make edits to the configuration
+
+ ```srl
+ Welcome to the srlinux CLI.
+    Type 'help' (and press <ENTER>) if you need any help using this.
+
+
+ --{ running }--[ ]--
+ A:leaf1# enter candidate
+
+ --{ candidate shared default }--[ ]--
+ A:leaf1#
+ ```
+
+ The prompt will indicate we entered the candidate configuration mode. In the following steps we will enter the commands to make changes to the candidate config and at the end we will commit.
+
+2. As a next step, we create a subinterface with index 1 under the physical `ethernet-1/49` interface that connects leaf1 to the spine.
+    In contrast with the L2 EVPN tutorial, we will not configure an explicit IP address, but enable IPv6 with Router Advertisement messages on it. An IPv6 Link Local Address will be automatically configured for this interface.
+
+    Enabling `router-advertisement` on the IPv6 subinterface results in the router sending RA messages to directly connected peers, informing them of the interface's IP address. This facilitates ARP/ND cache population.
+
+ ```srl
+ / interface ethernet-1/49
+ admin-state enable
+ subinterface 1 {
+ ipv6 {
+ admin-state enable
+ router-advertisement {
+ router-role {
+ admin-state enable
+ }
+ }
+ }
+ }
+ ```
+
+3. Attach the configured subinterfaces to the default network instance (aka GRT).
+
+ ```srl
+ / network-instance default interface ethernet-1/49.1
+ ```
+
+4. Apply the configuration changes by issuing a `commit now` command. The changes will be written to the running configuration.
+
+ ```srl
+ commit now
+ ```
+
+Below you will find the relevant configuration snippets for the leaf and spine devices, which you can paste into the terminal while in the `running` mode.
+
+/// tab | leaf1 and leaf2
+
+```srl
+enter candidate
+
+--8<-- "https://raw.githubusercontent.com/srl-labs/srl-l3evpn-basics-lab/main/startup_configs/leaf1.conf:physical-interfaces"
+
+commit now
+```
+
+///
+
+/// tab | spine
+
+```srl
+enter candidate
+
+--8<-- "https://raw.githubusercontent.com/srl-labs/srl-l3evpn-basics-lab/main/startup_configs/spine.conf:physical-interfaces"
+
+commit now
+```
+
+1. Cool trick with using [configuration ranges](../../../blog/posts/2023/cli-ranges.md), yeah!
+
+///
+
+Once those snippets are committed to the running configuration, we can ensure that the changes have been successfully applied by displaying the interface status.
+
+In the highlighted lines below, you will see that an IPv6 link-local address is auto-assigned to each interface. This address is not routable and is not announced to other peers by default.
+
+/// tab | leaf1
+
+```srl hl_lines="10"
+--{ + running }--[ network-instance default interface ethernet-1/49.1 ]--
+A:leaf1# show / interface ethernet-1/49
+=========================================================================
+ethernet-1/49 is up, speed 100G, type None
+ ethernet-1/49.1 is up
+ Network-instances:
+ * Name: default (default)
+ Encapsulation : null
+ Type : routed
+ IPv6 addr : fe80::1835:2ff:feff:31/64 (link-layer, preferred)
+```
+
+///
+
+/// tab | leaf2
+
+```srl hl_lines="10"
+--{ + running }--[ network-instance default interface ethernet-1/49.1 ]--
+A:leaf2# show / interface ethernet-1/49
+=========================================================================
+ethernet-1/49 is up, speed 100G, type None
+ ethernet-1/49.1 is up
+ Network-instances:
+ * Name: default (default)
+ Encapsulation : null
+ Type : routed
+ IPv6 addr : fe80::18f3:3ff:feff:31/64 (link-layer, preferred)
+```
+
+///
+
+/// tab | spine
+
+```srl hl_lines="10 18"
+--{ + running }--[ network-instance default interface ethernet-1/{1..2}.1 ]--
+A:spine# show / interface ethernet-1/{1..2}
+=============================================================================
+ethernet-1/1 is up, speed 100G, type None
+ ethernet-1/1.1 is up
+ Network-instances:
+ * Name: default (default)
+ Encapsulation : null
+ Type : routed
+ IPv6 addr : fe80::183d:4ff:feff:1/64 (link-layer, preferred)
+-----------------------------------------------------------------------------
+ethernet-1/2 is up, speed 100G, type None
+ ethernet-1/2.1 is up
+ Network-instances:
+ * Name: default (default)
+ Encapsulation : null
+ Type : routed
+ IPv6 addr : fe80::183d:4ff:feff:2/64 (link-layer, preferred)
+-----------------------------------------------------------------------------
+=============================================================================
+Summary
+ 0 loopback interfaces configured
+ 2 ethernet interfaces are up
+ 0 management interfaces are up
+ 2 subinterfaces are up
+```
+
+///
+
+If we have a look at the ARP/ND neighbor list constructed from the received Router Advertisement messages, we can see the IPv6 LLA of the neighboring node. For example, on the `leaf1` and `spine` devices:
+
+/// tab | leaf1
+
+```srl
+--{ + running }--[ network-instance default interface ethernet-1/49.1 ]--
+A:leaf1# show / arpnd neighbors interface ethernet-1/49
++-----------+-----------+--------------------------------------+-----------+---------------------+-----------+---------------------+-----------+
+| Interface | Subinterf | Neighbor | Origin | Link layer address | Current | Next state change | Is Router |
+| | ace | | | | state | | |
++===========+===========+======================================+===========+=====================+===========+=====================+===========+
+| ethernet- | 1 | fe80::183d:4ff:feff:1 | dynamic | 1A:3D:04:FF:00:01 | stale | 3 hours from now | false |
+| 1/49 | | | | | | | |
++-----------+-----------+--------------------------------------+-----------+---------------------+-----------+---------------------+-----------+
+------------------------------------------------------------------------------------------------------------------------------------------------
+ Total entries : 1 (0 static, 1 dynamic)
+------------------------------------------------------------------------------------------------------------------------------------------------
+```
+
+///
+
+/// tab | spine
+
+```srl
+--{ + running }--[ ]--
+A:spine# show / arpnd neighbors interface ethernet-1/{1..2}
++-----------+-----------+--------------------------------------+-----------+---------------------+-----------+---------------------+-----------+
+| Interface | Subinterf | Neighbor | Origin | Link layer address | Current | Next state change | Is Router |
+| | ace | | | | state | | |
++===========+===========+======================================+===========+=====================+===========+=====================+===========+
+| ethernet- | 1 | fe80::1835:2ff:feff:31 | dynamic | 1A:35:02:FF:00:31 | stale | 3 hours from now | false |
+| 1/1 | | | | | | | |
+| ethernet- | 1 | fe80::18f3:3ff:feff:31 | dynamic | 1A:F3:03:FF:00:31 | stale | 3 hours from now | false |
+| 1/2 | | | | | | | |
++-----------+-----------+--------------------------------------+-----------+---------------------+-----------+---------------------+-----------+
+------------------------------------------------------------------------------------------------------------------------------------------------
+ Total entries : 2 (0 static, 2 dynamic)
+------------------------------------------------------------------------------------------------------------------------------------------------
+```
+
+///
+
+As the table above shows, the IPv6 link-local addresses of the neighboring nodes are detected using the ARP/ND protocol, which is a precursor to BGP peering establishment.
+
+## Loopback Interfaces
+
+In addition to the physical interfaces in our fabric, we need to configure loopback interfaces on our leaf devices so that they can build iBGP peerings with the EVPN address family over those interfaces. This will be covered in the [Overlay Routing section](overlay.md) of this tutorial.
+
+Besides iBGP peering, the loopback interfaces will be used to originate and terminate VXLAN packets. In the context of the VXLAN data plane, a special kind of loopback needs to be created: the `system0` interface.
+
+/// note | `system0`
+The `system0.0` interface hosts the loopback address used to originate and typically
+terminate VXLAN packets. This address is also used by default as the next-hop of all
+EVPN routes.
+///
+
+Configuration of the `system0` interface/subinterface is exactly the same as for the regular interfaces, with the exception that the `system0` interface name bears a special meaning and can only have one subinterface with index `0`. Assuming you are in the running configuration mode, paste the following snippets on each device:
+
+/// tab | leaf1
+
+```srl
+enter candidate
+
+--8<-- "https://raw.githubusercontent.com/srl-labs/srl-l3evpn-basics-lab/main/startup_configs/leaf1.conf:loopback-interfaces"
+
+commit now
+```
+
+///
+
+/// tab | leaf2
+
+```srl
+enter candidate
+
+--8<-- "https://raw.githubusercontent.com/srl-labs/srl-l3evpn-basics-lab/main/startup_configs/leaf2.conf:loopback-interfaces"
+
+commit now
+
+```
+
+///
+
+/// tab | spine
+
+```srl
+enter candidate
+
+--8<-- "https://raw.githubusercontent.com/srl-labs/srl-l3evpn-basics-lab/main/startup_configs/spine.conf:loopback-interfaces"
+
+commit now
+```
+
+///
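+
+If you are curious what the included snippets boil down to, here is a minimal sketch of the `system0` configuration on `leaf1` in `set` syntax, assuming the `10.0.0.1/32` loopback address used throughout this tutorial (the authoritative version lives in the startup configs referenced above):
+
+```srl
+set / interface system0 admin-state enable
+set / interface system0 subinterface 0 ipv4 admin-state enable
+set / interface system0 subinterface 0 ipv4 address 10.0.0.1/32
+set / network-instance default interface system0.0
+```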
+
+## eBGP Unnumbered for Underlay Routing
+
+Now we will set up the eBGP routing protocol that will be used for exchanging loopback addresses throughout the fabric. These loopbacks will later be used to set up iBGP EVPN peerings, which we will cover in the following chapter.
+
+The eBGP setup is done according to the following diagram:
+
+
+
+Private 32-bit AS numbers are used on all devices, and the Router ID is set to match the IPv4 address of the `system0` loopback interface.
+
+/// admonition | SR Linux and BGP Unnumbered for EVPN
+ type: warning
+SR Linux supports EVPN-VXLAN with BGP Unnumbered starting with the 24.3.1 release.
+///
+
+Here is a breakdown of the configuration steps performed on `leaf1`; you will find the configuration for the other devices at the end of this section.
+
+In this case we show the `set`-based configuration syntax:
+
+1. **Assign Autonomous System Number**
+    Since we are using eBGP, we have to configure an AS number for every BGP speaker.
+
+    Most commonly, data center designs use a shared ASN between the spines to prevent traffic from transiting via the spines (valley-free routing), and a unique ASN per leaf to simplify BGP configuration and troubleshooting.
+
+ ```{.srl .no-select}
+ set / network-instance default protocols bgp autonomous-system 4200000001
+ ```
+
+2. **Assign a unique Router ID**
+ This is the BGP identifier reported to peers when a BGP session undergoes the establishment process.
+ As a best practice, we will configure Router ID to match the IPv4 address of the loopback (`system0`) interface.
+
+ ```srl
+ set / network-instance default protocols bgp router-id 10.0.0.1
+ ```
+
+3. **Create Routing Policy**
+
+    Recall that our goal is to announce the loopback addresses of the leaf devices via eBGP so that we can establish iBGP peerings over them later on.
+    In accordance with best security practices and [RFC 8212](https://datatracker.ietf.org/doc/html/rfc8212), SR Linux does not announce anything via eBGP unless an explicit export policy exists. Let's configure one.
+
+ First, we will create a prefix set that matches the range of loopback addresses we want to send and receive.
+
+ ```{.srl .no-select}
+ set / routing-policy prefix-set system-loopbacks prefix 10.0.0.0/8 mask-length-range 32..32
+ ```
+
+ Next, we will create a routing policy that matches on the prefix set we just created and accepts them.
+
+ ```{.srl .no-select}
+ set / routing-policy policy system-loopbacks-policy statement 1 match prefix-set system-loopbacks
+ set / routing-policy policy system-loopbacks-policy statement 1 action policy-result accept
+ ```
+
+4. **Create BGP peer-group**
+ A BGP peer group simplifies configuring multiple BGP peers with similar requirements by grouping them together, allowing the same policies and attributes to be applied to all peers in the group. Here we create a group named `underlay` to be used for the eBGP peerings and set the created import/export policies to it.
+
+ ```{.srl .no-select}
+ set / network-instance default protocols bgp group underlay
+ set / network-instance default protocols bgp group underlay export-policy system-loopbacks-policy
+ set / network-instance default protocols bgp group underlay import-policy system-loopbacks-policy
+ ```
+
+5. **Enable `ipv4-unicast` Address Family**
+    In order to exchange IPv4 loopback addresses we need to enable the `ipv4-unicast` address family; we put this under the global BGP configuration, since at least one address family must be enabled for the BGP process.
+
+ ```{.srl .no-select}
+ set / network-instance default protocols bgp afi-safi ipv4-unicast admin-state enable
+ ```
+
+6. **Configure dynamic BGP neighbors**
+ Here is the beauty of BGP IPv6 Unnumbered. We can configure dynamic BGP neighbors on the interfaces without specifying the neighbor's IP address. The BGP session will be established using the link-local address of the interface.
+
+ ```srl
+ set / network-instance default protocols bgp dynamic-neighbors interface ethernet-1/49.1 peer-group underlay
+ ```
+
+    To control which peers are allowed to form a BGP session with the `leaf1` device, we can use the `allowed-peer-as` knob. It limits the AS numbers of the peers that can establish a BGP session with the device.
+
+ ```srl
+ set / network-instance default protocols bgp dynamic-neighbors interface ethernet-1/49.1 allowed-peer-as [ 4200000001..4200000010 ]
+ ```
+
+ /// details | want to have more control over the allowed peers?
+ It is also possible to only allow peers that match a certain prefix.
+
+ ```srl
+ set / network-instance default protocols bgp dynamic-neighbors accept match fe80::/10 peer-group underlay
+ ```
+
+ ///
+
+7. **Allow IPv4 Packets on IPv6-only Interfaces**
+
+ You may have noticed that our fabric now has a peculiar configuration of interfaces. The physical interfaces between leaf and spine devices are IPv6-only, whereas our `system0` loopback interfaces are addressed with IPv4.
+
+ Essentially we will have VXLANv4 packets traversing the IPv6-only interfaces and, by default, SR Linux drops IPv4 packets if the receiving interface lacks an operational IPv4 subinterface. To change this and allow IPv4 packets on IPv6-only interfaces, use the following system-wide config knob.
+
+ ```srl
+ set / network-instance default ip-forwarding receive-ipv4-check false
+ ```
+
+8. **Commit configuration**
+
+ Once we apply the config above (whole snippet below), we should have BGP peerings automatically established.
+
+ ```srl
+ --{ +* candidate shared default }--[ network-instance default protocols bgp ]--
+ A:leaf1# commit now
+ ```
+
+Here are the eBGP config snippets per device for an easy copy-paste experience. Note that the snippets already include the `enter candidate` step and the `commit now` command at the end.
+
+/// tab | leaf1
+
+```srl
+enter candidate
+
+--8<-- "https://raw.githubusercontent.com/srl-labs/srl-l3evpn-basics-lab/main/startup_configs/leaf1.conf:ebgp-underlay"
+
+
+commit now
+```
+
+///
+
+/// tab | leaf2
+
+```srl
+enter candidate
+
+--8<-- "https://raw.githubusercontent.com/srl-labs/srl-l3evpn-basics-lab/main/startup_configs/leaf2.conf:ebgp-underlay"
+
+commit now
+```
+
+///
+
+/// tab | spine
+
+```srl
+enter candidate
+
+--8<-- "https://raw.githubusercontent.com/srl-labs/srl-l3evpn-basics-lab/main/startup_configs/spine.conf:ebgp-underlay"
+
+commit now
+```
+
+///
+
+## Verification
+
+Congratulations, we just configured the underlay routing using eBGP with IPv6 Unnumbered. Let's run some verification commands to ensure that we achieved the desired end state, which is to have the leafs' loopback prefixes exchanged over the eBGP sessions.
+
+### BGP neighbor status
+
+First, verify that the eBGP peerings are in the established state with the `ipv4-unicast` address family. Note that all peerings are dynamic, automatically configured using the dynamic-neighbors feature.
+
+/// tab | leaf1
+
+```srl
+--{ + running }--[ network-instance default interface system0.0 ]--
+A:leaf1# / show network-instance default protocols bgp neighbor
+-------------------------------------------------------------------------------------------------
+BGP neighbor summary for network-instance "default"
+Flags: S static, D dynamic, L discovered by LLDP, B BFD enabled, - disabled, * slow
+-------------------------------------------------------------------------------------------------
+-------------------------------------------------------------------------------------------------
++---------+---------+---------+---------+---------+---------+---------+---------+---------+
+| Net- | Peer | Group | Flags | Peer-AS | State | Uptime | AFI/SAF | [Rx/Act |
+| Inst | | | | | | | I | ive/Tx] |
++=========+=========+=========+=========+=========+=========+=========+=========+=========+
+| default | fe80::1 | underla | D | 4200000 | establi | 0d:0h:2 | ipv4- | [2/2/1] |
+| | 83d:4ff | y | | 010 | shed | 8m:42s | unicast | |
+| | :feff:1 | | | | | | | |
+| | %ethern | | | | | | | |
+| | et- | | | | | | | |
+| | 1/49.1 | | | | | | | |
++---------+---------+---------+---------+---------+---------+---------+---------+---------+
+-------------------------------------------------------------------------------------------------
+Summary:
+0 configured neighbors, 0 configured sessions are established,0 disabled peers
+1 dynamic peers
+```
+
+///
+
+/// tab | leaf2
+
+```srl
+--{ + running }--[ ]--
+A:leaf2# / show network-instance default protocols bgp neighbor
+-------------------------------------------------------------------------------------------------
+BGP neighbor summary for network-instance "default"
+Flags: S static, D dynamic, L discovered by LLDP, B BFD enabled, - disabled, * slow
+-------------------------------------------------------------------------------------------------
+-------------------------------------------------------------------------------------------------
++---------+---------+---------+---------+---------+---------+---------+---------+---------+
+| Net- | Peer | Group | Flags | Peer-AS | State | Uptime | AFI/SAF | [Rx/Act |
+| Inst | | | | | | | I | ive/Tx] |
++=========+=========+=========+=========+=========+=========+=========+=========+=========+
+| default | fe80::1 | underla | D | 4200000 | establi | 0d:0h:2 | ipv4- | [2/2/1] |
+| | 83d:4ff | y | | 010 | shed | 6m:40s | unicast | |
+| | :feff:2 | | | | | | | |
+| | %ethern | | | | | | | |
+| | et- | | | | | | | |
+| | 1/49.1 | | | | | | | |
++---------+---------+---------+---------+---------+---------+---------+---------+---------+
+-------------------------------------------------------------------------------------------------
+Summary:
+0 configured neighbors, 0 configured sessions are established,0 disabled peers
+1 dynamic peers
+```
+
+///
+
+/// tab | spine
+
+```srl
+A:spine# / show network-instance default protocols bgp neighbor
+---------------------------------------------------------------------------------------------
+BGP neighbor summary for network-instance "default"
+Flags: S static, D dynamic, L discovered by LLDP, B BFD enabled, - disabled, * slow
+---------------------------------------------------------------------------------------------
+---------------------------------------------------------------------------------------------
++---------+---------+---------+---------+---------+---------+---------+---------+---------+
+| Net- | Peer | Group | Flags | Peer-AS | State | Uptime | AFI/SAF | [Rx/Act |
+| Inst | | | | | | | I | ive/Tx] |
++=========+=========+=========+=========+=========+=========+=========+=========+=========+
+| default | fe80::1 | underla | D | 4200000 | establi | 0d:0h:3 | ipv4- | [1/1/1] |
+| | 835:2ff | y | | 001 | shed | 0m:49s | unicast | |
+| | :feff:3 | | | | | | | |
+| | 1%ether | | | | | | | |
+| | net- | | | | | | | |
+| | 1/1.1 | | | | | | | |
+| default | fe80::1 | underla | D | 4200000 | establi | 0d:0h:2 | ipv4- | [1/1/1] |
+| | 8f3:3ff | y | | 002 | shed | 7m:20s | unicast | |
+| | :feff:3 | | | | | | | |
+| | 1%ether | | | | | | | |
+| | net- | | | | | | | |
+| | 1/2.1 | | | | | | | |
++---------+---------+---------+---------+---------+---------+---------+---------+---------+
+---------------------------------------------------------------------------------------------
+Summary:
+0 configured neighbors, 0 configured sessions are established,0 disabled peers
+2 dynamic peers
+```
+
+///
+
+All good, we see that both leafs have established an eBGP session with the spine using the ipv4-unicast address family.
+
+### Advertised routes
+
+We configured eBGP in the fabric's underlay to advertise the VXLAN tunnel endpoints (our `system0` interfaces). The output below verifies that the leafs advertise their `system0` prefixes to the spine and that the spine advertises them to the respective leafs.
+
+Note that the neighbor address in the case of IPv6 Unnumbered is composed of a link-local address (`fe80:...`) and the interface name. You can use CLI autosuggestion to complete the interface name.
+
+/// tab | leaf1
+
+```srl hl_lines="13-14"
+--{ + running }--[ ]--
+A:leaf1# / show network-instance default protocols bgp neighbor fe80::183d:4ff:feff:1%ethernet-1/49.1 advertised-routes ipv4
+---------------------------------------------------------------------------------------------------------------
+Peer : fe80::183d:4ff:feff:1%ethernet-1/49.1, remote AS: 4200000010, local AS: 4200000001
+Type : static
+Description : None
+Group : underlay
+---------------------------------------------------------------------------------------------------------------
+Origin codes: i=IGP, e=EGP, ?=incomplete
++--------------------------------------------------------------------------------------------------------+
+| Network Path-id Next Hop MED LocPref AsPath Origin |
++========================================================================================================+
+| 10.0.0.1/32 0 fe80::1835:2 - 100 [4200000001] i |
+| ff:feff:31 |
++--------------------------------------------------------------------------------------------------------+
+---------------------------------------------------------------------------------------------------------------
+1 advertised BGP routes
+---------------------------------------------------------------------------------------------------------------
+```
+
+///
+
+/// tab | leaf2
+
+```srl hl_lines="13-14"
+--{ + running }--[ ]--
+A:leaf2# / show network-instance default protocols bgp neighbor fe80::183d:4ff:feff:2%ethernet-1/49.1 advertised-routes ipv4
+--------------------------------------------------------------------------------------------------------------
+Peer : fe80::183d:4ff:feff:2%ethernet-1/49.1, remote AS: 4200000010, local AS: 4200000002
+Type : static
+Description : None
+Group : underlay
+--------------------------------------------------------------------------------------------------------------
+Origin codes: i=IGP, e=EGP, ?=incomplete
++--------------------------------------------------------------------------------------------------------+
+| Network Path-id Next Hop MED LocPref AsPath Origin |
++========================================================================================================+
+| 10.0.0.2/32 0 fe80::18f3:3 - 100 [4200000002] i |
+| ff:feff:31 |
++--------------------------------------------------------------------------------------------------------+
+--------------------------------------------------------------------------------------------------------------
+1 advertised BGP routes
+--------------------------------------------------------------------------------------------------------------
+```
+
+///
+
+/// tab | spine
+
+Towards `leaf1`:
+
+```srl hl_lines="13-16"
+--{ + running }--[ ]--
+A:spine# / show network-instance default protocols bgp neighbor fe80::1835:2ff:feff:31%ethernet-1/1.1 advertised-routes ipv4
+-----------------------------------------------------------------------------------------------------------------------------
+Peer : fe80::1835:2ff:feff:31%ethernet-1/1.1, remote AS: 4200000001, local AS: 4200000010
+Type : static
+Description : None
+Group : underlay
+-----------------------------------------------------------------------------------------------------------------------------
+Origin codes: i=IGP, e=EGP, ?=incomplete
++----------------------------------------------------------------------------------------------------------------------+
+| Network Path-id Next Hop MED LocPref AsPath Origin |
++======================================================================================================================+
+| 10.0.0.2/32 0 fe80::183d:4ff - 100 [4200000010, i |
+| :feff:1 4200000002] |
+| 10.10.10.10/32 0 fe80::183d:4ff - 100 [4200000010] i |
+| :feff:1 |
++----------------------------------------------------------------------------------------------------------------------+
+-----------------------------------------------------------------------------------------------------------------------------
+2 advertised BGP routes
+-----------------------------------------------------------------------------------------------------------------------------
+```
+
+Towards `leaf2`:
+
+```srl hl_lines="13-16"
+--{ + running }--[ ]--
+A:spine# / show network-instance default protocols bgp neighbor fe80::18f3:3ff:feff:31%ethernet-1/2.1 advertised-routes ipv4
+-----------------------------------------------------------------------------------------------------------------------------
+Peer : fe80::18f3:3ff:feff:31%ethernet-1/2.1, remote AS: 4200000002, local AS: 4200000010
+Type : static
+Description : None
+Group : underlay
+-----------------------------------------------------------------------------------------------------------------------------
+Origin codes: i=IGP, e=EGP, ?=incomplete
++----------------------------------------------------------------------------------------------------------------------+
+| Network Path-id Next Hop MED LocPref AsPath Origin |
++======================================================================================================================+
+| 10.0.0.1/32 0 fe80::183d:4ff - 100 [4200000010, i |
+| :feff:2 4200000001] |
+| 10.10.10.10/32 0 fe80::183d:4ff - 100 [4200000010] i |
+| :feff:2 |
++----------------------------------------------------------------------------------------------------------------------+
+-----------------------------------------------------------------------------------------------------------------------------
+2 advertised BGP routes
+-----------------------------------------------------------------------------------------------------------------------------
+```
+
+///
+
+### Route table
+
+The last stop in the control plane verification process is to check if the remote loopback prefixes were installed in the `default` network-instance where we expect them to be:
+
+/// tab | leaf1
+
+```srl hl_lines="14"
+--{ + running }--[ ]--
+A:leaf1# / show network-instance default route-table
+--------------------------------------------------------------------------------------------------------------------------------------------------------
+IPv4 unicast route table of network instance default
+--------------------------------------------------------------------------------------------------------------------------------------------------------
++----------------+------+-----------+--------------------+---------+---------+--------+-----------+-----------+-----------+-----------+-------------+
+| Prefix | ID | Route | Route Owner | Active | Origin | Metric | Pref | Next-hop | Next-hop | Backup | Backup |
+| | | Type | | | Network | | | (Type) | Interface | Next-hop | Next-hop |
+| | | | | | Instanc | | | | | (Type) | Interface |
+| | | | | | e | | | | | | |
++================+======+===========+====================+=========+=========+========+===========+===========+===========+===========+=============+
+| 10.0.0.1/32 | 3 | host | net_inst_mgr | True | default | 0 | 0 | None | None | | |
+| | | | | | | | | (extract) | | | |
+| 10.0.0.2/32 | 0 | bgp | bgp_mgr | True | default | 0 | 170 | fe80::183 | ethernet- | | |
+| | | | | | | | | d:4ff:fef | 1/49.1 | | |
+| | | | | | | | | f:1 | | | |
+| | | | | | | | | (direct) | | | |
+| 10.10.10.10/32 | 0 | bgp | bgp_mgr | True | default | 0 | 170 | fe80::183 | ethernet- | | |
+| | | | | | | | | d:4ff:fef | 1/49.1 | | |
+| | | | | | | | | f:1 | | | |
+| | | | | | | | | (direct) | | | |
++----------------+------+-----------+--------------------+---------+---------+--------+-----------+-----------+-----------+-----------+-------------+
+--------------------------------------------------------------------------------------------------------------------------------------------------------
+IPv4 routes total : 3
+IPv4 prefixes with active routes : 3
+IPv4 prefixes with active ECMP routes: 0
+--------------------------------------------------------------------------------------------------------------------------------------------------------
+```
+
+///
+
+/// tab | leaf2
+
+```srl hl_lines="12"
+--{ + running }--[ ]--
+A:leaf2# / show network-instance default route-table
+--------------------------------------------------------------------------------------------------------------------------------------------------------
+IPv4 unicast route table of network instance default
+--------------------------------------------------------------------------------------------------------------------------------------------------------
++----------------+------+-----------+--------------------+---------+---------+--------+-----------+-----------+-----------+-----------+-------------+
+| Prefix | ID | Route | Route Owner | Active | Origin | Metric | Pref | Next-hop | Next-hop | Backup | Backup |
+| | | Type | | | Network | | | (Type) | Interface | Next-hop | Next-hop |
+| | | | | | Instanc | | | | | (Type) | Interface |
+| | | | | | e | | | | | | |
++================+======+===========+====================+=========+=========+========+===========+===========+===========+===========+=============+
+| 10.0.0.1/32 | 0 | bgp | bgp_mgr | True | default | 0 | 170 | fe80::183 | ethernet- | | |
+| | | | | | | | | d:4ff:fef | 1/49.1 | | |
+| | | | | | | | | f:2 | | | |
+| | | | | | | | | (direct) | | | |
+| 10.0.0.2/32 | 3 | host | net_inst_mgr | True | default | 0 | 0 | None | None | | |
+| | | | | | | | | (extract) | | | |
+| 10.10.10.10/32 | 0 | bgp | bgp_mgr | True | default | 0 | 170 | fe80::183 | ethernet- | | |
+| | | | | | | | | d:4ff:fef | 1/49.1 | | |
+| | | | | | | | | f:2 | | | |
+| | | | | | | | | (direct) | | | |
++----------------+------+-----------+--------------------+---------+---------+--------+-----------+-----------+-----------+-----------+-------------+
+--------------------------------------------------------------------------------------------------------------------------------------------------------
+IPv4 routes total : 3
+IPv4 prefixes with active routes : 3
+IPv4 prefixes with active ECMP routes: 0
+--------------------------------------------------------------------------------------------------------------------------------------------------------
+```
+
+///
+
+Both leafs have a route to the other leaf's loopback in their routing table, and therefore the underlay routing is working as expected.
+
+### Dataplane
+
+To finish the verification process, let's ensure that the datapath is working and that the VTEPs on both leafs can reach each other via the routed underlay.
+
+For that we will use the `ping` command with src/dst set to loopback addresses:
+
+```srl title="leaf1 loopback pings leaf2 loopback"
+A:leaf1# ping network-instance default 10.0.0.2 -I 10.0.0.1 -c 3
+Using network instance default
+PING 10.0.0.2 (10.0.0.2) from 10.0.0.1 : 56(84) bytes of data.
+64 bytes from 10.0.0.2: icmp_seq=1 ttl=63 time=9.93 ms
+64 bytes from 10.0.0.2: icmp_seq=2 ttl=63 time=16.2 ms
+64 bytes from 10.0.0.2: icmp_seq=3 ttl=63 time=15.2 ms
+
+--- 10.0.0.2 ping statistics ---
+3 packets transmitted, 3 received, 0% packet loss, time 2002ms
+rtt min/avg/max/mdev = 9.926/13.776/16.178/2.750 ms
+
+```
+
+Perfect, the loopbacks are reachable and the fabric underlay is properly configured. We can proceed with EVPN service configuration!
+
+## Resulting configs
+
+Below you will find aggregated configuration snippets that contain the entire fabric configuration we did in the steps above. Those snippets are in the CLI format and were extracted with the `info` command.
+
+/// note
+`enter candidate` and `commit now` commands are part of the snippets, so it is possible to paste them right after you log into the devices.
+///
+
+/// tab | leaf1
+
+```{.srl .code-scroll-lg}
+enter candidate
+
+--8<-- "https://raw.githubusercontent.com/srl-labs/srl-l3evpn-basics-lab/main/startup_configs/leaf1.conf:physical-interfaces"
+
+--8<-- "https://raw.githubusercontent.com/srl-labs/srl-l3evpn-basics-lab/main/startup_configs/leaf1.conf:loopback-interfaces"
+
+--8<-- "https://raw.githubusercontent.com/srl-labs/srl-l3evpn-basics-lab/main/startup_configs/leaf1.conf:ebgp-underlay"
+
+commit now
+```
+
+///
+
+/// tab | leaf2
+
+```{.srl .code-scroll-lg}
+enter candidate
+
+--8<-- "https://raw.githubusercontent.com/srl-labs/srl-l3evpn-basics-lab/main/startup_configs/leaf2.conf:physical-interfaces"
+
+--8<-- "https://raw.githubusercontent.com/srl-labs/srl-l3evpn-basics-lab/main/startup_configs/leaf2.conf:loopback-interfaces"
+
+--8<-- "https://raw.githubusercontent.com/srl-labs/srl-l3evpn-basics-lab/main/startup_configs/leaf2.conf:ebgp-underlay"
+
+commit now
+```
+
+///
+
+/// tab | spine
+
+```srl
+enter candidate
+
+--8<-- "https://raw.githubusercontent.com/srl-labs/srl-l3evpn-basics-lab/main/startup_configs/spine.conf:physical-interfaces"
+
+--8<-- "https://raw.githubusercontent.com/srl-labs/srl-l3evpn-basics-lab/main/startup_configs/spine.conf:loopback-interfaces"
+
+--8<-- "https://raw.githubusercontent.com/srl-labs/srl-l3evpn-basics-lab/main/startup_configs/spine.conf:ebgp-underlay"
+
+commit now
+```
+
+///
+
+Great stuff, now we are ready to move on to the [Overlay Routing configuration](overlay.md).
+
+[RFC 8950]: https://datatracker.ietf.org/doc/html/rfc8950
+[srl-unnumbered-docs]: https://documentation.nokia.com/srlinux/24-3/books/routing-protocols/bgp.html#bgp-unnumbered-peer
+
+[^1]: The default SR Linux credentials are `admin:NokiaSrl1!`.
diff --git a/mkdocs.yml b/mkdocs.yml
index 5e6b54c6..2e2ace4f 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -87,6 +87,14 @@ nav:
- tutorials/evpn-mh/basics/index.md
- Configuration: tutorials/evpn-mh/basics/conf.md
- Verification: tutorials/evpn-mh/basics/verify.md
+ - Layer 3 EVPN:
+ - RT5-only L3 EVPN:
+ - tutorials/l3evpn/rt5-only/index.md
+ - Underlay Routing: tutorials/l3evpn/rt5-only/underlay.md
+ - Overlay Routing: tutorials/l3evpn/rt5-only/overlay.md
+ - L3 EVPN Instance: tutorials/l3evpn/rt5-only/l3evpn.md
+ - L3 EVPN with BGP PE-CE: tutorials/l3evpn/rt5-only/l3evpn-bgp-pe-ce.md
+ - Summary: tutorials/l3evpn/rt5-only/summary.md
- Infrastructure:
- KNE:
- tutorials/infrastructure/kne/index.md