YANG Network Instances

   L. Berger, LabN Consulting, L.L.C. (lberger@labn.net)
   C. Hopps, Deutsche Telekom (chopps@chopps.org)
   A. Lindem, Cisco Systems, 301 Midenhall Way, Cary, NC 27513, USA (acee@cisco.com)
   D. Bogdanovic (ivandean@gmail.com)
   X. Liu, Jabil (Xufeng_Liu@jabil.com)
This document defines a network instance module. This module
can be used to manage the
virtual resource partitioning that may be present on a
network device. Examples of
common industry terms for virtual resource partitioning are Virtual
Routing and Forwarding (VRF) instances and Virtual Switch Instances
(VSIs).
This document defines the second of two new modules that support the
configuration and operation of network devices that allow the
partitioning of resources from either or both management and
networking perspectives. Both leverage the YANG functionality enabled
by YANG Schema Mount.
The first form of resource partitioning
provides a logical partitioning of a network device where each
partition is separately managed as essentially an independent
network element which is 'hosted' by the base network device.
These hosted network elements are referred to as logical
network elements, or LNEs, and are supported by the
logical-network-element module defined in the companion LNE document.
That module is used to identify LNEs and associate resources from the
network-device with each LNE. LNEs themselves are represented
in YANG as independent network devices; each accessed
independently.
Examples of vendor terminology for an LNE include logical
system or logical router, and virtual switch, chassis, or fabric.
The second form, which is defined in this document, provides
support for what is commonly referred to as Virtual Routing and
Forwarding (VRF) instances as well as Virtual Switch Instances
(VSIs). In this form of resource
partitioning, multiple control plane and forwarding/bridging
instances are provided by and managed via a single (physical or
logical) network device. This form of resource partitioning is
referred to as a Network Instance and is supported by the
network-instance module defined below. Configuration and
operation of each network-instance is always via the network
device and the network-instance module.
One notable difference between the LNE model and the NI
model is that the NI model provides a framework for VRF and VSI
management. This document envisions the separate definition of VRF
and VSI, i.e., L3 and L2 VPN, technology-specific models. An example
of such a model can be found in the emerging L3VPN model and in the
examples discussed below.
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL
NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and
"OPTIONAL" in this document are to be interpreted as described
in .
Readers are expected to be familiar with terms and concepts of
YANG and YANG Schema Mount.
This document uses the graphical tree diagram representation of data
models.
The top open issues are:
Schema mount currently doesn't allow parent-reference filtering on
the instance of the mount point, but rather just the schema. This
means it is not possible to filter based on actual data, e.g.,
bind-network-instance-name="green". In the schema mount
definition, the text and examples should be updated to cover this
case.
In this document, we consider network devices that support protocols
and functions defined within the IETF Routing Area, e.g., routers,
firewalls, and hosts. Such devices may be physical or virtual, e.g., a
classic router with custom hardware or one residing within a
server-based virtual machine implementing a virtual network function
(VNF). Each device may sub-divide their resources into logical
network elements (LNEs) each of which provides a managed logical
device. Examples of vendor terminology for an LNE include logical
system or logical router, and virtual switch, chassis, or fabric. Each
LNE may also support virtual routing and forwarding (VRF) and virtual
switching instance (VSI) functions, which are referred to below as
network instances (NIs). This breakdown is represented in
Figure 1.
Figure 1: Module Element Relationships
A model for LNEs is described in the companion LNE document, and the
model for NIs is covered in this document.
The current interface management model
is impacted by the definition of LNEs and
NIs. This document and the companion LNE document define
augmentations to the interface module to support LNEs and NIs.
The network instance model supports the configuration of VRFs and
VSIs. Each instance is supported by information that relates to the
device, for example the route target used when advertising VRF
routes, and information that relates to the internal operation of the
NI, for example for routing protocols such as OSPF. This document
defines the
network-instance module that provides a basis for the management of
both types of information.
NI information that relates to the device, including the assignment of
interfaces to NIs, is defined as part of this document. The defined
module also provides a placeholder for the definition of NI-technology
specific information both at the device level and for NI internal
operation. Information related to NI internal operation is supported
via schema mount, by mounting appropriate modules under a mount
point. Well-known mount
points are defined for L3VPN, L2VPN, and L2+L3VPN NI types.
The network instance container is used to represent virtual routing
and forwarding instances (VRFs) and virtual switching instances
(VSIs). VRFs and VSIs are commonly used to isolate
routing and switching domains, for example to create virtual private
networks, each with their own active protocols and routing/switching
policies. The model supports both core/provider and virtual
instances. Core/provider instance information is accessible at the
top level of the server, while virtual instance information is
accessible under the root schema mount points.
The NI model can be represented using the YANG tree diagram format as follows:
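An abridged sketch of the tree is shown below. It is reconstructed from the module structure described in this document; node names and details may differ from the full published module:

```
module: ietf-network-instance
  +--rw network-instances
     +--rw network-instance* [name]
        +--rw name           string
        +--rw enabled?       boolean
        +--rw description?   string
        +--rw (ni-type)?
        +--rw (root-type)
           +--:(vrf-root)
           |  +--mp vrf-root
           +--:(vsi-root)
           |  +--mp vsi-root
           +--:(vv-root)
              +--mp vv-root
```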
A network instance is identified by a
'name' string. This string is used both as
an index within the network-instance module and to associate
resources with a network instance as shown above in the
interface augmentation. The ni-type and root-type choice statements are used to
support different types of L2 and L3 VPN technologies.
The bind-ni-name-failed notification is used in certain failure cases.
The network-instance module is structured to facilitate the
definition of information models for specific types of VRFs and VSIs
using augmentations. For example, the information needed to support
VPLS-, VXLAN-, and EVPN-based L2VPNs is likely to be quite different.
Example models under development that could be restructured to take
advantage of NIs include emerging models for L3VPNs and for L2VPNs.
Documents defining new YANG models for the support of specific types
of network instances should augment the network instance module.
The basic structure that should be used for such augmentations
includes a case statement, with containers for configuration and
state data and, when needed, a type-specific mount point. Generally,
ni-types are not expected to need type-specific mount points, but
rather to reuse one of the well-known mount points defined in the
next section. The following is an example type-specific augmentation:
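A minimal sketch of such an augmentation is shown below. The module name, prefix, and leaf are hypothetical; an actual model would define its own technology-specific nodes:

```yang
module example-l3vpn-ni {
  yang-version 1.1;
  namespace "urn:example:l3vpn-ni";
  prefix ex-l3vpn;

  import ietf-network-instance {
    prefix ni;
  }

  // Add a new case to the ni-type choice of the NI list entry.
  augment "/ni:network-instances/ni:network-instance/ni:ni-type" {
    case l3vpn {
      container l3vpn {
        description
          "Device-level configuration for an L3VPN-type NI.";
        leaf route-distinguisher {
          type string;
          description
            "Example technology-specific node (hypothetical).";
        }
        // Per-NI internal information is expected to be made
        // available under the well-known vrf-root mount point,
        // so no type-specific mount point is defined here.
      }
    }
  }
}
```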
YANG Schema Mount identifies mount points by name within a module. This definition
allows for the definition of mount points whose schema can be
shared across ni-types. As discussed above, ni-types largely
differ in the configuration information needed in the core/top
level instance to support the NI, rather than in the information
represented within an NI. This allows the use of shared mount
points across certain NI types.
The expectation is that very few different schemas actually need to
be defined to support NIs on an implementation. In particular, it is
expected that the following three forms of NI schema are needed, and
each can be defined with a well-known mount point that can be reused
by future modules defining ni-types.
The three well known mount points are:
vrf-root is intended for use with L3VPN type ni-types.
vsi-root is intended for use with L2VPN type ni-types.
vv-root is intended for use with ni-types that simultaneously
support L2VPN bridging and L3VPN routing capabilities.
Future model definitions should use the above mount points
whenever possible. When a well known mount point isn't
appropriate, a model may define a type specific mount point via
augmentation.
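Using the mount-point extension of the schema mount module, the three well-known mount points could be declared roughly as in the following fragment (a sketch; the exact descriptions and placement within the module follow the full module in this document):

```yang
import ietf-yang-schema-mount {
  prefix yangmnt;
}

choice root-type {
  mandatory true;
  description
    "Well-known NI root mount points.";
  container vrf-root {
    description
      "Root for L3VPN-type NI data.";
    yangmnt:mount-point "vrf-root";
  }
  container vsi-root {
    description
      "Root for L2VPN-type NI data.";
    yangmnt:mount-point "vsi-root";
  }
  container vv-root {
    description
      "Root for NIs that simultaneously support L2VPN bridging
       and L3VPN routing.";
    yangmnt:mount-point "vv-root";
  }
}
```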
The following is an example of an L3VPN VRF using a hypothetical
augmentation to the network instance schema defined in this document.
More detailed examples are provided below.
This shows YANG Routing Management and
YANG OSPF as mounted modules.
The mounted modules can reference interface information via a
parent-reference to the containers defined in the interface management model.
Interfaces are a crucial part of any network device's
configuration and operational state. They generally include a
combination of raw physical interfaces, link-layer interfaces,
addressing configuration, and logical interfaces that may not
be tied to any physical interface. Several system services,
and layer 2 and layer 3 protocols may also associate
configuration or operational state data with different types of
interfaces (these relationships are not shown for simplicity).
The interface management model is defined by the ietf-interfaces module.
As shown below, the network-instance module augments the existing
interface management model by adding a name which is used on
interface or sub-interface types to identify an associated network
instance. Similarly, this name is also added for the IPv4 and IPv6
types defined in the IP management model.
The following is an example of envisioned usage. The
interfaces container includes a number of commonly used
components as examples:
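For instance, binding an Ethernet interface to an NI named "vrf-red" might look like the following (the interface name and NI name are illustrative):

```xml
<interfaces xmlns="urn:ietf:params:xml:ns:yang:ietf-interfaces"
            xmlns:ianaift="urn:ietf:params:xml:ns:yang:iana-if-type">
  <interface>
    <name>eth0</name>
    <type>ianaift:ethernetCsmacd</type>
    <!-- Leaf added by the ietf-network-instance augmentation -->
    <bind-ni-name
        xmlns="urn:ietf:params:xml:ns:yang:ietf-network-instance"
        >vrf-red</bind-ni-name>
  </interface>
</interfaces>
```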
The defined interface model is
structured to include all interfaces in a flat list, without
regard to virtual instances (e.g., VRFs) supported
on the device. The
bind-network-instance-name leaf provides the association
between an interface and its associated NI (e.g., VRF
or VSI). Note that, as currently defined, to assign an interface to
both an LNE and an NI, the interface would first be assigned to the
LNE using the mechanisms defined in the LNE model, and then, within
that LNE's interface module, the LNE's representation of that
interface would be assigned to an NI.
Modules that may be used to
represent network instance
information will be available under the ni-type specific
'root' mount point. The
use-schema mechanism defined as part of the Schema Mount module
MUST be used with the module defined in this document to identify
accessible modules.
A future version of this document could relax this requirement.
Mounted modules in the non-inline case SHOULD be defined with
access, via the appropriate schema mount parent-references, to
device resources such as interfaces. An implementation MAY choose to
restrict parent referenced information to information related to a
specific instance, e.g., only allowing references to interfaces that
have a "bind-network-instance-name" which is identical to the
instance's "name".
All modules that represent control-plane and data-plane
information may be present at the 'root' mount point,
and be accessible via paths modified per the schema mount
specification. The list of available
modules is expected to be implementation dependent, as is the
method used by an implementation to support NIs.
For example, the following could be used to define the data
organization of the example NI shown above:
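A sketch of such instance data follows, using the use-schema mechanism of the schema mount draft referenced by this document. Element names and nesting may differ across schema mount draft revisions, and the schema name "ni-schema" is illustrative:

```xml
<schema-mounts
    xmlns="urn:ietf:params:xml:ns:yang:ietf-yang-schema-mount">
  <mount-point>
    <module>ietf-network-instance</module>
    <label>vrf-root</label>
    <use-schema>
      <name>ni-schema</name>
      <!-- Expose device interface data to mounted modules -->
      <parent-reference>/if:interfaces</parent-reference>
    </use-schema>
  </mount-point>
  <schema>
    <name>ni-schema</name>
    <module>
      <name>ietf-routing</name>
      <namespace
        >urn:ietf:params:xml:ns:yang:ietf-routing</namespace>
      <conformance-type>implement</conformance-type>
    </module>
  </schema>
</schema-mounts>
```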
Module data identified under "schema" will be instantiated under the
mount point identified under "mount-point". These modules will be
able to reference information for nodes belonging to top-level modules
that are identified under "parent-reference". Parent referenced
information is available to clients via their top level paths only,
and not under the associated mount point.
To allow a client to understand the previously mentioned instance
restrictions on parent referenced information, an implementation MAY
represent such restrictions in the "parent-reference" leaf-list. For
example:
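A hypothetical parent-reference that exposes only the interfaces bound to the instance named "vrf-red" might be expressed as an XPath filter such as:

```
/if:interfaces/if:interface[ni:bind-ni-name = 'vrf-red']
```

With such a filter in place, mounted modules within the "vrf-red" instance would see only their own interfaces via the parent reference, rather than the device's full interface list.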
Network instances may be controlled by clients using existing list
operations. When a list entry is created, a new instance is
instantiated. The models mounted under an NI root are expected to be
dependent on the server implementation. When a list entry is
deleted, an existing network instance is destroyed. For more
information, see Section 7.8.6.
Once instantiated, host network device resources can be
associated with the new NI. As previously mentioned, this
document augments ietf-interfaces with the bind-ni-name leaf
to support such associations for interfaces. When a
bind-ni-name is set to a valid NI name, an implementation
MUST take whatever steps are internally necessary to assign
the interface to the NI or provide an error message (defined
below) with an indication of why the assignment failed. It is
possible for the assignment to fail while processing the
set operation, or after asynchronous processing. Error
notification in the latter case is supported via a notification.
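An asynchronous failure might be reported along the following lines; the exact placement and leaf names of the bind-ni-name-failed notification follow the module defined below, and this instance is only illustrative:

```xml
<notification
    xmlns="urn:ietf:params:xml:ns:netconf:notification:1.0">
  <eventTime>2018-03-01T00:00:14Z</eventTime>
  <bind-ni-name-failed
      xmlns="urn:ietf:params:xml:ns:yang:ietf-network-instance">
    <!-- Interface whose NI assignment failed -->
    <name>eth0</name>
    <error-info>
      Assignment failed: interface type not supported by NI.
    </error-info>
  </bind-ni-name-failed>
</notification>
```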
There are two different sets of security considerations to consider
in the context of this document. One set relates to information
contained within mounted modules. The security
considerations for mounted modules are not substantively changed
based on the information being accessible within the context of an
NI. For example, when considering mounted modules such as those for
routing management, the security considerations identified in their
defining documents are equally applicable, whether those modules are
accessed
at a server's root or under an NI instance's root node.
The second area for consideration is information contained in the NI
module itself. NI information represents network configuration and
route distribution policy information. As such, the security of this
information is important, but it is fundamentally no different than
any other interface or routing configuration information that has
already been covered in the interface and routing management documents.
The vulnerable "config true" parameters and subtrees are the
following:
/ni:network-instances/ni:network-instance: This list specifies the
network instances and the related control plane protocols configured
on a device.

bind-ni-name: This leaf indicates the NI to which an interface is
assigned.
Unauthorized access to any of these lists can adversely affect the
routing subsystem of both the local device and the network. This
may lead to network malfunctions, delivery of packets to
inappropriate destinations and other problems.
This document registers a URI in the IETF XML Registry. Following the
format in RFC 3688, the following registration is requested:
This document registers a YANG module in the YANG Module Names
registry .
The structure of the model defined in this document is described
by the YANG module below.
The Routing Area Yang Architecture design team members included Acee
Lindem, Anees Shaikh, Christian Hopps, Dean Bogdanovic, Lou Berger,
Qin Wu, Rob Shakir, Stephane Litkowski, and Yan Gang. Useful review
comments were also received from Martin Bjorklund and John Scudder.
This document was motivated by, and derived from, earlier work of the
Routing Area YANG Architecture design team.
The RFC text was produced using Marshall Rose's xml2rfc tool.
The following subsections provide example uses of NIs.
The following shows an example where two customer-specific network
instances are configured:
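A sketch of such a configuration, with two L3VPN-type instances named "vrf-red" and "vrf-blue" (the names and descriptions are illustrative):

```xml
<network-instances
    xmlns="urn:ietf:params:xml:ns:yang:ietf-network-instance">
  <network-instance>
    <name>vrf-red</name>
    <enabled>true</enabled>
    <description>Customer red VRF</description>
    <!-- Well-known mount point for L3VPN-type NI data -->
    <vrf-root/>
  </network-instance>
  <network-instance>
    <name>vrf-blue</name>
    <enabled>true</enabled>
    <description>Customer blue VRF</description>
    <vrf-root/>
  </network-instance>
</network-instances>
```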
The following shows state data for the example above.