Commit 46d5024: Cleanup and minor edits

ccascone committed Oct 31, 2019
1 parent dd9f962 commit 46d5024

Showing 6 changed files with 49 additions and 54 deletions.
19 changes: 8 additions & 11 deletions EXERCISE-3.md
@@ -121,7 +121,7 @@ ONOS app that includes a pipeconf. The pipeconf-related files are the following:
* [PipelinerImpl.java][PipelinerImpl.java]: An implementation of the `Pipeliner`
driver behavior;

-To build the ONOS app (including the pipeconf), In the second terminal window,
+To build the ONOS app (including the pipeconf), in the second terminal window,
use the command:

```
@@ -164,8 +164,7 @@ information such as:
* The ONOS driver to use for each device, `stratum-bmv2` in this case;
* The pipeconf to use for each device, `org.onosproject.ngsdn-tutorial` in this
case, as defined in [PipeconfLoader.java][PipeconfLoader.java];
-* Configuration specific to our custom app, such as the `myStationMac` or a flag
-  to signal if a switch has to be considered a spine or not.
+* Configuration specific to our custom app (`fabricDeviceConfig`)

This file also contains information related to the IPv6 configuration
associated with each switch interface. We will discuss this information in
more detail in
@@ -180,8 +179,6 @@ $ make netcfg
This command will push the `netcfg.json` to ONOS, triggering discovery and
configuration of the 4 switches.

-FIXME: do log later, or deactivate lldp app for now to avoid clogging with error messages

Check the ONOS log (`make onos-log`); you should see messages like:

```
@@ -222,8 +219,8 @@ id=device:spine1, available=true, role=MASTER, type=SWITCH, driver=stratum-bmv2:
id=device:spine2, available=true, role=MASTER, type=SWITCH, driver=stratum-bmv2:org.onosproject.ngsdn-tutorial
```

-Make sure you see `available=true` for all devices. That means ONOS is connected
-to the device and the pipeline configuration has been pushed.
+Make sure you see `available=true` for all devices. That means ONOS has a gRPC
+channel open to the device and the pipeline configuration has been pushed.


#### Ports
@@ -292,10 +289,10 @@ deviceId=device:leaf1, groupCount=1
"Group" is an ONOS northbound abstraction that is mapped internally to different
types of P4Runtime entities. In this case you should see 1 group of type `CLONE`.

-`CLONE` groups are mapped to a P4Runtime `CloneSessionEntry`, here used to clone
-packets to the controller via packet-in. Note that the `id=0x63` is the same as
-`#define CPU_CLONE_SESSION_ID 99` in the P4 program. This ID is hardcoded in the
-pipeconf code, as the group is created by
+`CLONE` groups are mapped to P4Runtime `CloneSessionEntry` entities, here used
+to clone packets to the controller via packet-in. Note that the `id=0x63` is the
+same as `#define CPU_CLONE_SESSION_ID 99` in the P4 program. This ID is
+hardcoded in the pipeconf code. The group is created by
[PipelinerImpl.java][PipelinerImpl.java] in response to flow objectives mapped
to the ACL table and requesting to clone packets such as NDP and ARP.
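
For reference, here is a minimal sketch of how these pieces might fit together
on the P4 side under v1model. The `CPU_CLONE_SESSION_ID` constant matches the
one mentioned above, but the action and table names are illustrative
assumptions, not necessarily those used in `main.p4`:

```
#define CPU_CLONE_SESSION_ID 99

// Sketch: clone the packet from ingress to egress (I2E) into clone
// session 99 (0x63). The session itself must be created by the control
// plane via a P4Runtime CloneSessionEntry, as described above.
action clone_to_cpu() {
    clone3(CloneType.I2E, CPU_CLONE_SESSION_ID,
           { standard_metadata.ingress_port });
}

// Sketch of an ACL-style table that can invoke the clone action for
// protocols such as NDP and ARP.
table acl_table {
    key = {
        hdr.ethernet.ether_type: ternary;
    }
    actions = { clone_to_cpu; NoAction; }
    const default_action = NoAction();
}
```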

23 changes: 11 additions & 12 deletions EXERCISE-4.md
@@ -1,11 +1,12 @@
# Exercise 4: Enabling link discovery via P4Runtime packet I/O

-In this exercise, you will be asked to integrate the ONOS built-in link discovery
-service with your P4 program. ONOS performs link discovery by using controller
-packet-in/out. To make this work, you will need to apply simple changes to the
-starter P4 code, validate the P4 changes using PTF-based data plane unit tests,
-and finally, apply changes to the pipeconf Java implementation to enable ONOS's
-built-in apps use the packet-in/out support provided by your P4 implementation.
+In this exercise, you will be asked to integrate the ONOS built-in link
+discovery service with your P4 program. ONOS performs link discovery by using
+controller packet-in/out. To make this work, you will need to apply simple
+changes to the starter P4 code, validate the P4 changes using PTF-based data
+plane unit tests, and finally, apply changes to the pipeconf Java implementation
+to enable ONOS's built-in apps to use the packet-in/out support provided by your
+P4 implementation.

## Controller packet I/O with P4Runtime

@@ -50,13 +51,11 @@ The P4 starter code already provides support for the following capabilities:

* Parse the `cpu_out` header (if the ingress port is the CPU one)
* Emit the `cpu_in` header as the first one in the deparser
-* Skip ingress pipeline processing for packet-outs and set the egress port to
-  the one specified in the `cpu_out` header
* Provide an ACL table with ternary match fields and an action to clone
packets to the CPU port (used to generate packet-ins)
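
As a rough sketch, such controller packet I/O headers are declared in P4 with
the `@controller_header` annotation, which is what allows P4Runtime to map them
to packet-in/out metadata. Names and field sizes below are assumptions for
illustration; check the starter code for the real declarations:

```
// Sketch: prepended to packets cloned to the controller (packet-in).
@controller_header("packet_in")
header cpu_in_header_t {
    bit<9> ingress_port;  // port the packet was originally received on
    bit<7> _pad;          // pad to a byte boundary
}

// Sketch: prepended by the controller to injected packets (packet-out).
@controller_header("packet_out")
header cpu_out_header_t {
    bit<9> egress_port;   // port the switch should emit the packet on
    bit<7> _pad;
}
```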

-One piece is missing to provide complete packet-in support, and you have to modify
-the P4 program to implement it:
+Something is missing to provide complete packet-in/out support, and you have to
+modify the P4 program to implement it:

1. Open `p4src/main.p4`;
2. Look for the implementation of the egress pipeline (`control EgressPipeImpl`);
@@ -230,8 +229,8 @@ Link stats are derived by ONOS by periodically obtaining the port counters for
each device. ONOS internally uses gNMI to read port information, including
counters (for example, OpenConfig paths such as
`/interfaces/interface[name=...]/state/counters/in-unicast-pkts`).

-In this case, you should see ~1 packet/s, as that's the rate of packet-outs
-generated by the `lldpprovider` app.
+In this case, you should see approx 1 packet/s, as that's the rate of
+packet-outs generated by the `lldpprovider` app.

## Congratulations!

21 changes: 10 additions & 11 deletions EXERCISE-5.md
@@ -27,7 +27,7 @@ are used to provide interface configuration to ONOS.
### Our P4 implementation of L2 bridging

The starter P4 code already defines tables to forward packets based on the
-Ethernet address, precisely, two distinct tables to handle two different types
+Ethernet address, precisely, two distinct tables, to handle two different types
of L2 entries:

1. Unicast entries: which will be filled in by the control plane when the
@@ -36,30 +36,29 @@ of L2 entries:
(NS) messages to all host-facing ports;

For (2), unlike ARP messages in IPv4, which are broadcasted to Ethernet
-destination address FF:FF:FF:FF:FF:FF, NDP messages are sent to special
-Ethernet addresses specified by RFC2464. These addresses are prefixed
-with 33:33 and the last four octets are the last four octets of the IPv6
-destination multicast address. The most straightforward way of matching
-on such IPv6 broadcast/multicast packets, without digging in the details
-of RFC2464, is to use a ternary match on `33:33:**:**:**:**`, where `*` means
-"don't care".
+destination address FF:FF:FF:FF:FF:FF, NDP messages are sent to special Ethernet
+addresses specified by RFC2464. These addresses are prefixed with 33:33 and the
+last four octets are the last four octets of the IPv6 destination multicast
+address. The most straightforward way of matching on such IPv6
+broadcast/multicast packets, without digging into the details of RFC2464, is to
+use a ternary match on `33:33:**:**:**:**`, where `*` means "don't care".

For this reason, our solution defines two tables: one that matches in an exact
fashion, `l2_exact_table` (easier to scale on switch ASIC memory), and one that
uses ternary matching, `l2_ternary_table` (which requires more expensive TCAM
memory, usually much smaller).
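
A sketch of what these two table declarations can look like (the action names
here are illustrative assumptions; see `p4src/main.p4` for the real
definitions):

```
// Sketch: one entry per known unicast MAC, e.g. installed by ONOS when
// it learns the location of a host.
table l2_exact_table {
    key = { hdr.ethernet.dst_addr: exact; }
    actions = { set_egress_port; NoAction; }
    const default_action = NoAction();
}

// Sketch: a few wildcard entries, e.g. value 33:33:00:00:00:00 with
// mask FF:FF:00:00:00:00 to catch all NDP multicast destinations.
table l2_ternary_table {
    key = { hdr.ethernet.dst_addr: ternary; }
    actions = { set_multicast_group; NoAction; }
    const default_action = NoAction();
}
```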

These tables are applied to packets in an order defined in the `apply` block
-area of the ingress pipeline (`IngressPipeImpl`):
+of the ingress pipeline (`IngressPipeImpl`):

```
if (!l2_exact_table.apply().hit) {
l2_ternary_table.apply();
}
```

-The ternary table has lower priority and is applied only if a matching entry is
-not found in the exact one.
+The ternary table has lower priority and it's applied only if a matching entry
+is not found in the exact one.

**Note**: To keep things simple, we won't be using VLANs to segment our L2
domains. As such, when matching packets in the `l2_ternary_table`, these will be
18 changes: 9 additions & 9 deletions EXERCISE-6.md
@@ -90,7 +90,7 @@ The first step will be to add new tables to `main.p4`.
We already provide ways to handle NDP NS and NA exchanged by hosts connected to
the same subnet (see `l2_ternary_table`). However, for hosts, the Linux
networking stack takes care of generating an NDP NA reply. For the switches in
-our fabric, there's no Linux networking stack associated to it.
+our fabric, there's no traditional networking stack associated with them.

There are multiple solutions to this problem:

@@ -106,7 +106,7 @@ There are multiple solutions to this problem:
option. You can decide to go with a different one, but you should keep in mind
that there will be less starter code for you to re-use.

-The idea is simple, NDP NA packets have the same header structure as NDP NS
+The idea is simple: NDP NA packets have the same header structure as NDP NS
ones. They are both ICMPv6 packets with different header field values, such as
different ICMPv6 type, different Ethernet addresses, etc. A switch that knows the
MAC address of a given IPv6 target address found in an NDP NS request, can
@@ -117,13 +117,13 @@ To implement P4-based generation of NDP NA messages, look in
`ndp_ns_to_na` to transform an NDP NS packet into an NDP NA one. Your task is to
implement a table that uses such action.

-This table should define a mapping between the interface IPv6 addresses
-provided in [netcfg.json](mininet/netcfg.json) and the `myStationMac` associated
-to each switch (also defined in netcfg.json). When an NDP
-NS packet is received, asking to resolve one of such IPv6 addresses, the
-`ndp_ns_to_na` action should be invoked with the given `myStationMac` as
-parameter. The ONOS app will be responsible of inserting entries in this table
-according to the content of netcfg.json.
+This table should define a mapping between the interface IPv6 addresses provided
+in [netcfg.json](mininet/netcfg.json) and the `myStationMac` associated to each
+switch (also defined in netcfg.json). When an NDP NS packet is received, asking
+to resolve one of such IPv6 addresses, the `ndp_ns_to_na` action should be
+invoked with the given `myStationMac` as parameter. The ONOS app will be
+responsible for inserting entries in this table according to the content of
+netcfg.json.
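
A sketch of the kind of table this calls for (the table and field names below
are assumptions for illustration; `ndp_ns_to_na` is the action provided by the
starter code):

```
// Sketch: map each interface IPv6 address to the corresponding
// myStationMac. Entries are installed by the ONOS app from netcfg.json.
table ndp_reply_table {
    key = {
        hdr.ndp.target_ipv6_addr: exact;  // field name assumed
    }
    actions = { ndp_ns_to_na; NoAction; }
    const default_action = NoAction();
}
```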

The ONOS app already provides a component
[NdpReplyComponent.java](app/src/main/java/org/p4/p4d2/tutorial/NdpReplyComponent.java)
@@ -180,7 +180,7 @@ private void insertMulticastGroup(DeviceId deviceId) {
}

log.info("Adding L2 multicast group with {} ports on {}...",
-ports.size(), deviceId);
+ports.size(), deviceId);

// Forge group object.
final GroupDescription multicastGroup = Utils.buildMulticastGroup(
@@ -191,7 +191,7 @@ private void insertMulticastGroup(DeviceId deviceId) {
}

/**
-* Insert flow rules matching matching ethernet destination
+* Insert flow rules matching ethernet destination
* broadcast/multicast addresses (e.g. ARP requests, NDP Neighbor
* Solicitation, etc.). Such packets should be processed by the multicast
* group created before.
@@ -313,7 +313,7 @@ private void insertUnmatchedBridgingFlowRule(DeviceId deviceId) {
private void learnHost(Host host, DeviceId deviceId, PortNumber port) {

log.info("Adding L2 unicast rule on {} for host {} (port {})...",
-deviceId, host.id(), port);
+deviceId, host.id(), port);

// *** TODO EXERCISE 5
// Modify P4Runtime entity names to match content of P4Info file (look
@@ -324,7 +324,7 @@ private void learnHost(Host host, DeviceId deviceId, PortNumber port) {
final MacAddress hostMac = host.mac();
final PiCriterion hostMacCriterion = PiCriterion.builder()
.matchExact(PiMatchFieldId.of("MODIFY ME"),
-hostMac.toBytes())
+hostMac.toBytes())
.build();

// Action: set output port
@@ -425,7 +425,7 @@ public void event(HostEvent event) {

mainComponent.getExecutorService().execute(() -> {
log.info("{} event! host={}, deviceId={}, port={}",
-event.type(), host.id(), deviceId, port);
+event.type(), host.id(), deviceId, port);

learnHost(host, deviceId, port);
});
@@ -500,7 +500,7 @@ private void setUpAllDevices() {
// For all hosts connected to this device...
hostService.getConnectedHosts(device.id()).forEach(
host -> learnHost(host, host.location().deviceId(),
-host.location().port()));
+host.location().port()));
}
});
}
@@ -191,7 +191,7 @@ private void insertMulticastGroup(DeviceId deviceId) {
}

/**
-* Insert flow rules matching matching ethernet destination
+* Insert flow rules matching ethernet destination
* broadcast/multicast addresses (e.g. ARP requests, NDP Neighbor
* Solicitation, etc.). Such packets should be processed by the multicast
* group created before.
@@ -273,10 +273,10 @@ private void insertUnmatchedBridgingFlowRule(DeviceId deviceId) {
// Match unmatched traffic - Match ternary **:**:**:**:**:**
final PiCriterion unmatchedTrafficCriterion = PiCriterion.builder()
.matchTernary(
PiMatchFieldId.of("hdr.ethernet.dst_addr"),
MacAddress.valueOf("00:00:00:00:00:00").toBytes(),
MacAddress.valueOf("00:00:00:00:00:00").toBytes())
.build();
PiMatchFieldId.of("hdr.ethernet.dst_addr"),
MacAddress.valueOf("00:00:00:00:00:00").toBytes(),
MacAddress.valueOf("00:00:00:00:00:00").toBytes())
.build();

// Action: set multicast group id
final PiAction setMcastGroupAction = PiAction.builder()
Expand Down
