INTERNET-DRAFT Yong Xue Document: draft-ietf-ipo-carrier-requirements-02.txt Worldcom Inc. Category: Informational (Editor) Expiration Date: September, 2002 Monica Lazer Jennifer Yates Dongmei Wang AT&T Ananth Nagarajan Sprint Hirokazu Ishimatsu Japan Telecom Co., LTD Steven Wright Bellsouth Olga Aparicio Cable & Wireless Global March, 2002. Carrier Optical Services Requirements Status of this Memo This document is an Internet-Draft and is in full conformance with all provisions of Section 10 of RFC2026. Internet-Drafts are working documents of the Internet Engineering Task Force (IETF), its areas, and its working groups. Note that other groups may also distribute working documents as Internet-Drafts. Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or rendered obsolete by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress." The list of current Internet-Drafts can be accessed at http://www.ietf.org/ietf/1id-abstracts.txt. The list of Internet-Draft Shadow Directories can be accessed at Y. Xue et al [Page 1] Internet Draft draft-ietf-ipo-carrier-requirements-02.txt March, 2002 http://www.ietf.org/shadow.html. Abstract This Internet Draft describes the major carrier's service requirements for the automatic switched optical networks (ASON) from both an end-user's as well as an operator's perspectives. Its focus is on the description of the service building blocks and service-related control plane functional requirements. The management functions for the optical services and their underlying networks are beyond the scope of this document and will be addressed in a separate document. Table of Contents 1. Introduction 3 1.1 Justification 4 1.2 Conventions used in this document 4 1.3 Value Statement 4 1.4 Scope of This Document 5 2. Abbreviations 7 3. General Requirements 7 3.1 Separation of Networking Functions 7 3.2 Separation of Call and Connection Control 8 3.3 Network and Service Scalability 9 3.4 Transport Network Technology 10 3.5 Service Building Blocks 11 4. Service Models and Applications 11 4.1 Service and Connection Types 11 4.2 Examples of Common Service Models 12 5. Network Reference Model 13 5.1 Optical Networks and Subnetworks 13 5.2 Network Interfaces 14 5.3 Intra-Carrier Network Model 17 5.4 Inter-Carrier Network Model 18 6. Optical Service User Requirements 19 6.1 Common Optical Services 19 6.2 Bearer Interface Types 20 6.3 Optical Service Invocation 20 6.4 Optical Connection Granularity 22 6.5 Other Service Parameters and Requirements 23 7. Optical Service Provider Requirements 24 7.1 Access Methods to Optical Networks 24 7.2 Dual Homing and Network Interconnections 24 7.3 Inter-domain connectivity 25 Y. Xue et al [Page 2] Internet Draft draft-ietf-ipo-carrier-requirements-02.txt March, 2002 7.4 Names and Address Management 26 7.5 Policy-Based Service Management Framework 26 8. Control Plane Functional Requirements for Optical Services 27 8.1 Control Plane Capabilities and Functions 27 8.2 Control Message Transport Network 29 8.3 Control Plane Interface to Data Plane 31 8.4 Management Plane Interface to Data Plane 31 8.5 Control Plane Interface to Management Plane 31 8.6 Control Plane Interconnection 32 9. 
Requirements for Signaling, Routing and Discovery 33 9.1 Requirements for information sharing over UNI, I-NNI and E-NNI 33 9.2 Signaling Functions 33 9.3 Routing Functions 34 9.4 Requirements for path selection 35 9.5 Automatic Discovery Functions 36 10. Requirements for service and control plane resiliency 37 10.1 Service resiliency 38 10.2 Control plane resiliency 40 11. Security Considerations 41 11.1 Optical Network Security Concerns 41 11.2 Service Access Control 42 12. Acknowledgements 43 13. References 43 Authors' Addresses 45 Appendix: Interconnection of Control Planes 47 1. Introduction Optical transport networks are evolving from the current TDM-based SONET/SDH optical networks as defined by ITU Rec. G.803 [ITU-G803] to the emerging WDM-based optical transport networks (OTN) as defined by the ITU Rec. G.872 in [ITU-G872]. Therefore in the near future, carrier optical transport networks will consist of a mixture of the SONET/SDH-based sub-networks and the WDM-based wavelength or fiber switched OTN sub-networks. The OTN networks can be either transparent or opaque depending upon if O-E-O functions are utilized within the sub-networks. Optical networking encompasses the functionalities for the establishment, transmission, multiplexing, switching of optical connections carrying a wide range of user signals of varying formats and bit rate. Some of the biggest challenges for the carriers are bandwidth Y. Xue et al [Page 3] Internet Draft draft-ietf-ipo-carrier-requirements-02.txt March, 2002 management and fast service provisioning in such a multi-technology networking environment. The emerging and rapidly evolving automatic switched optical networks or ASON technology [ITU-G8080, ITU-G807] is aimed at providing optical networks with intelligent networking functions and capabilities in its control plane to enable rapid optical connection provisioning, dynamic rerouting as well as multiplexing and switching at different granularity level, including fiber, wavelength and TDM time slots. The ASON control plane should not only enable the new networking functions and capabilities for the emerging OTN networks, but significantly enhance the service provisioning capabilities for the existing SONET/SDH networks as well. The ultimate goals should be to allow the carriers to quickly and dynamically provision network resources and to enhance network survivability using ring and mesh-based protection and restoration techniques. The carriers see that this new networking platform will create tremendous business opportunities for the network operators and service providers to offer new services to the market, reduce their network Capital and Operational expenses (CAPEX and OPEX), and improve their network efficiency. 1.1. Justification The charter of the IPO WG calls for a document on "Carrier Optical Services Requirements" for IP/Optical networks. This document addresses that aspect of the IPO WG charter. Furthermore, this document was accepted as an IPO WG document by unanimous agreement at the IPO WG meeting held on March 19, 2001, in Minneapolis, MN, USA. It presents a carrier and end-user perspective on optical network services and requirements. 1.2. Conventions used in this document The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119. 1.3. 
Value Statement By deploying ASON technology, a carrier expects to achieve the following benefits from both technical and business perspectives: - Rapid Circuit Provisioning: ASON technology will enable the dynamic end-to-end provisioning of the optical connections across the optical network by using standard routing and signaling protocols. Y. Xue et al [Page 4] Internet Draft draft-ietf-ipo-carrier-requirements-02.txt March, 2002 - Enhanced Survivability: ASON technology will enable the network to dynamically reroute an optical connection in case of a failure using mesh-based network protection and restoration techniques, which greatly improves the cost-effectiveness compared to the current line and ring protection schemes in the SONET/SDH network. - Cost-Reduction: ASON networks will enable the carrier to better utilize the optical network , thus achieving significant unit cost reduction per Megabit due to the cost-effective nature of the optical transmission technology, simplified network architecture and reduced operation cost. - Service Flexibility: ASON technology will support provisioning of an assortment of existing and new services such as protocol and bit- rate independent transparent network services, and bandwidth-on- demand services. - Enhanced Interoperability: ASON technology will use a control plane utilizing industry and international standards architecture and protocols, which facilitate the interoperability of the optical network equipment from different vendors. In addition, the introduction of a standards-based control plane offers the following potential benefits: - Reactive traffic engineering at optical layer that allows network resources to be dynamically allocated to traffic flow. - Reduce the need for service providers to develop new operational support systems software for the network control and new service provisioning on the optical network, thus speeding up the deployment of the optical network technology and reducing the software development and maintenance cost. - Potential development of a unified control plane that can be used for different transport technologies including OTN, SONET/SDH, ATM and PDH. 1.4. Scope of this document This document is intended to provide, from the carriers perspective, a service framework and some associated requirements in relation to the optical services to be offered in the next generation optical transport networking environment and their service control and management functions. As such, this document concentrates on the Y. Xue et al [Page 5] Internet Draft draft-ietf-ipo-carrier-requirements-02.txt March, 2002 requirements driving the work towards realization of the automatic switched optical networks. This document is intended to be protocol- neutral, but the specific goals include providing the requirements to guide the control protocol development and enhancement within IETF in terms of reuse of IP-centric control protocols in the optical transport network. Every carrier's needs are different. The objective of this document is NOT to define some specific service models. Instead, some major service building blocks are identified that will enable the carriers to use them in order to create the best service platform most suitable to their business model. These building blocks include generic service types, service enabling control mechanisms and service control and management functions. 
The fundamental principles and basic set of requirements for the control plane of the automatic switched optical networks have been provided in a series of ITU Recommendations under the umbrella of the ITU ASTN/ASON architectural and functional requirements as listed below: Architecture: - ITU-T Rec. G.8070/Y.1301 (2001), Requirements for the Automatic Switched Transport Network (ASTN)[ASTN] - ITU-T Rec. G.8080/Y.1304 (2001), Architecture of the Automatic Switched Optical Network (ASON)[ASON] Signaling: - ITU-T Rec. G.7713/Y.1704 (2001), Distributed Call and Connection Management (DCM)[DCM] Routing: - ITU-T Draft Rec. G.7715/Y.1706 (2002), Routing Architecture and requirements for ASON Networks (work in progress)[ASONROUTING] Discovery: - ITU-T Rec. G.7714/Y.1705 (2001), Generalized Automatic Discovery [DISC] Control Transport Network: - ITU-T Rec. G.7712/Y.1703 (2001), Architecture and Specification of Data Communication Network[DCN] Y. Xue et al [Page 6] Internet Draft draft-ietf-ipo-carrier-requirements-02.txt March, 2002 This document provides further detailed requirements based on this ASTN/ASON framework. In addition, even though we consider IP a major client to the optical network in this document, the same requirements and principles should be equally applicable to non-IP clients such as SONET/SDH, ATM, ITU G.709, etc. 2. Abbreviations ASON Automatic Switched Optical Networking ASTN Automatic Switched Transport Network CAC Connection Admission Control NNI Node-to-Node Interface UNI User-to-Network Interface IWF Inter-Working Function I-NNI Interior NNI E-NNI Exterior NNI NE Network Element OTN Optical Transport Network OLS Optical Line System PI Physical Interface SLA Service Level Agreement 3. General Requirements In this section, a number of generic requirements related to the service control and management functions are discussed. 3.1. Separation of Networking Functions It makes logical sense to segregate the networking functions within each layer network into three logical functional planes: control plane, data plane and management plane. They are responsible for providing network control functions, data transmission functions and network management functions respectively. The crux of the ASON network is the networking intelligence that contains automatic routing, signaling and discovery functions to automate the network control functions. Control Plane: includes the functions related to networking control capabilities such as routing, signaling, and policy control, as well as resource and service discovery. These functions are automated. Y. Xue et al [Page 7] Internet Draft draft-ietf-ipo-carrier-requirements-02.txt March, 2002 Data Plane (transport plane): includes the functions related to bearer channels and signal transmission. Management Plane: includes the functions related to the management functions of network element, networks and network resources and services. These functions are less automated as compared to control plane functions. Each plane consists of a set of interconnected functional or control entities, physical or logical, responsible for providing the networking or control functions defined for that network layer. The separation of the control plane from both the data and management plane is beneficial to the carriers in that it: - Allows equipment vendors to have a modular system design that will be more reliable and maintainable thus reducing the overall systems ownership and operation cost. 
- Allows carriers the flexibility to choose third-party vendor control plane software as the control plane solution for their switched optical networks.

- Allows carriers to deploy a unified control plane and OSS/management systems to manage and control the different types of transport networks they own.

- Allows carriers to use a separate control network specially designed and engineered for control plane communications.

The separation of control, management and transport functions is required, and it shall accommodate both logical and physical separation. Note that this is in contrast to the IP network, where control messages and user traffic are routed and switched based on the same network topology due to the associated in-band signaling nature of the IP network.

3.2. Separation of Call and Connection Control

To support many enhanced optical services, such as scheduled bandwidth on demand and bundled connections, a call model based on the separation of call control and connection control is essential.

Call control is responsible for end-to-end session negotiation, call admission control and call state maintenance, while connection control is responsible for setting up the connections associated with a call across the network. A call can correspond to zero, one or more connections, depending upon the number of connections needed to support the call.

The existence of a connection depends upon the existence of its associated call session; a connection can be deleted and re-established while the call session is kept up. Call control shall be provided at an ingress or gateway port to the network, such as the UNI and E-NNI.

The control plane shall support the separation of call control from connection control. The control plane shall support call admission control on call setup and connection admission control on connection setup.

3.3. Network and Service Scalability

Although some specific applications or networks may be on a small scale, the control plane protocol and functional capabilities shall support large-scale networks. In terms of the scale and complexity of the future optical network, the following assumptions can be made when considering the scalability and performance that are required of the optical control and management functions:

- There may be up to thousands of OXC nodes and the same or a higher order of magnitude of OADMs per carrier network.

- There may be up to thousands of terminating ports/wavelengths per OXC node.

- There may be up to hundreds of parallel fibers between a pair of OXC nodes.

- There may be up to hundreds of wavelength channels transmitted on each fiber.

In relation to the frequency and duration of the optical connections:

- The expected end-to-end connection setup/teardown time should be in the order of seconds, preferably less.

- The expected connection holding times should be in the order of minutes or greater.

- There may be up to millions of simultaneous optical connections switched across a single carrier network.

Note that even though automated rapid optical connection provisioning is required, the carriers expect the majority of provisioned circuits, at least in the short term, to have a long lifespan ranging from months to years.
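The call and connection separation described in Section 3.2 can be made concrete with a small data model. The following is a minimal, non-normative Python sketch; the class and field names are illustrative assumptions and are not drawn from any ITU or IETF specification.

   from dataclasses import dataclass, field
   from typing import List

   @dataclass
   class Connection:
       connection_id: str
       ingress: str
       egress: str
       state: str = "UP"

   @dataclass
   class Call:
       """An end-to-end call session; it may own zero, one or more
       connections, and it outlives any individual connection."""
       call_id: str
       a_end: str
       z_end: str
       connections: List[Connection] = field(default_factory=list)

       def add_connection(self, conn: Connection) -> None:
           # Connection admission control would be applied here,
           # separately from the call admission control performed at
           # call set-up time.
           self.connections.append(conn)

       def release_connection(self, connection_id: str) -> None:
           # Deleting a connection does not delete the call; the call
           # session stays up and a new connection can be established
           # under it later (e.g., for scheduled bandwidth on demand).
           self.connections = [c for c in self.connections
                               if c.connection_id != connection_id]

   if __name__ == "__main__":
       call = Call("call-42", "client-A", "client-B")
       call.add_connection(Connection("conn-1", "TNA-A/1", "TNA-B/7"))
       call.release_connection("conn-1")
       assert call.connections == []   # the call still exists with no connection

The point of the sketch is only that call state and connection state are held and managed independently, matching the requirement above.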
In terms of service provisioning, some carriers may choose to perform testing prior to turning the service over to the customer.

3.4. Transport Network Technology

Optical services can be offered over different types of underlying optical transport technologies, including both TDM-based SONET/SDH networks and WDM-based OTN networks. For this document, the standards-based transport technologies SONET/SDH, as defined in ITU Rec. G.803, and OTN, with framing as defined in ITU Rec. G.709, shall be supported.

Note that service characteristics such as bandwidth granularity and signal framing hierarchy will to a large degree be determined by the capabilities and constraints of the server layer network.

3.5. Service Building Blocks

The primary goal of this document is to identify a set of basic service building blocks that carriers can use to create the service models best suited to their business needs. The service building blocks comprise a well-defined set of capabilities and a basic set of control and management functions. These capabilities and functions should support a basic set of services and enable a carrier to build enhanced services through extensions and customizations. Examples of building blocks include connection types, provisioning methods, control interfaces, policy control functions, domain internetworking mechanisms, etc.

4. Service Models and Applications

A carrier's optical network supports multiple types of service models. Each service model may have its own service operations, target markets, and service management requirements.

4.1. Service and Connection Types

The optical network primarily offers high-bandwidth connectivity in the form of connections, where a connection is defined as a fixed-bandwidth connection between two client network elements, such as IP routers or ATM switches, established across the optical network. A connection is also defined by its demarcation from the ingress access point, across the optical network, to the egress access point of the optical network.

The following connection capability topologies must be supported:

- Bi-directional point-to-point connection
- Uni-directional point-to-point connection
- Uni-directional point-to-multipoint connection

For point-to-point connections, the following three types of network connections, based on different connection set-up control methods, shall be supported:

- Permanent connection (PC): Established hop-by-hop directly on each ONE along a specified path, without relying on the network routing and signaling capability. The connection has two fixed end-points and a fixed cross-connect configuration along the path, and it stays in place permanently until it is deleted. This is similar to the concept of a PVC in ATM.

- Switched connection (SC): Established through the UNI signaling interface; the connection is dynamically established by the network using the network routing and signaling functions. This is similar to the concept of an SVC in ATM.

- Soft permanent connection (SPC): Established by provisioning PCs at the two end-points and letting the network dynamically establish an SC connection in between. This is similar to the SPVC concept in ATM.

The PC and SPC connections should be provisioned via the management plane to control plane interface, and the SC connection should be provisioned via the signaled UNI interface, as summarized in the sketch below.
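The following non-normative Python sketch summarizes the three point-to-point connection types and the interface through which each is assumed to be provisioned; all names are illustrative and not drawn from any signaling specification.

   from dataclasses import dataclass
   from enum import Enum, auto

   class ConnectionType(Enum):
       PC = auto()    # Permanent connection (cf. ATM PVC)
       SC = auto()    # Switched connection (cf. ATM SVC)
       SPC = auto()   # Soft permanent connection (cf. ATM SPVC)

   # Interface through which each connection type is provisioned,
   # following the description in Section 4.1 (assumed mapping).
   PROVISIONING_INTERFACE = {
       ConnectionType.PC:  "management plane",
       ConnectionType.SC:  "UNI signaling",
       ConnectionType.SPC: "management plane to control plane interface",
   }

   @dataclass
   class ConnectionRequest:
       conn_type: ConnectionType
       ingress_access_point: str   # demarcation at network ingress
       egress_access_point: str    # demarcation at network egress
       bidirectional: bool = True  # uni-directional p2p is also allowed

       def provisioning_interface(self) -> str:
           return PROVISIONING_INTERFACE[self.conn_type]

   if __name__ == "__main__":
       req = ConnectionRequest(ConnectionType.SPC, "TNA-A/port-1", "TNA-B/port-7")
       print(req.conn_type.name, "provisioned via", req.provisioning_interface())

The mapping table is the part a carrier would adapt to its own provisioning model.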
4.2. Examples of Common Service Models

Each carrier may define its own service model based on its business strategy and environment. The following are three example service models that carriers may use.

4.2.1. Provisioned Bandwidth Service (PBS)

The PBS model provides enhanced leased/private line services provisioned via a service management interface (MI) using either the PC or SPC type of connection. The provisioning can be real-time or near real-time. It has the following characteristics:

- Connection requests go through a well-defined management interface.
- Client/server relationship between the clients and the optical network.
- Clients have no optical network visibility and depend on network intelligence or the operator for optical connection setup.

4.2.2. Bandwidth on Demand Service (BDS)

The BDS model provides bandwidth-on-demand dynamic connection services via a signaled user-network interface (UNI). The provisioning is real-time and uses the SC type of optical connection. It has the following characteristics:

- Signaled connection requests via the UNI directly from the user or its proxy.
- The customer has no or limited network visibility, depending upon the control interconnection model used and the network administrative policy.
- Relies on network or client intelligence for connection set-up, depending upon the control plane interconnection model used.

4.2.3. Optical Virtual Private Network (OVPN)

The OVPN model provides a virtual private network at the optical layer between a specified set of user sites. It has the following characteristics:

- Customers contract for a specific set of network resources such as optical connection ports, wavelengths, etc.
- The Closed User Group (CUG) concept is supported as in a normal VPN.
- Optical connections can be of the PC, SPC or SC type, depending upon the provisioning method used.
- An OVPN site can request dynamic reconfiguration of the connections between sites within the same CUG.
- A customer may have visibility and control of network resources up to the extent allowed by the customer service contract.

At a minimum, the PBS, BDS and OVPN service models described above shall be supported by the control functions.

5. Network Reference Model

This section discusses major architectural and functional components of a generic carrier optical network, which provide a reference model for describing the requirements for the control and management of carrier optical services.

5.1. Optical Networks and Subnetworks

As mentioned before, there are two main types of optical networks currently under consideration: the SDH/SONET network as defined in ITU Rec. G.803, and the OTN as defined in ITU Rec. G.872. We assume an OTN is composed of a set of optical cross-connects (OXCs) and optical add-drop multiplexers (OADMs) which are interconnected in a general mesh topology using DWDM optical line systems (OLSs).

For ease of discussion and description, it is often convenient to treat an optical network as a subnetwork cloud, in which the details of the network become less important and the focus is instead on the functions and interfaces the optical network provides. In general, a subnetwork can be defined as a set of access points on the network boundary and a set of point-to-point optical connections between those access points, as illustrated by the sketch below.
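The subnetwork abstraction above can be captured in a few lines of code. The following non-normative Python sketch models a subnetwork purely by its boundary access points and the connections between them; class and field names are illustrative assumptions.

   from dataclasses import dataclass, field

   @dataclass(frozen=True)
   class AccessPoint:
       subnetwork: str
       name: str            # e.g., a TNA address or port identifier

   @dataclass
   class Subnetwork:
       """A subnetwork viewed as a cloud: only its boundary access
       points and the point-to-point connections between them are
       visible; the internal topology is hidden."""
       name: str
       access_points: set[AccessPoint] = field(default_factory=set)
       connections: list[tuple[AccessPoint, AccessPoint]] = field(default_factory=list)

       def add_access_point(self, ap_name: str) -> AccessPoint:
           ap = AccessPoint(self.name, ap_name)
           self.access_points.add(ap)
           return ap

       def connect(self, a: AccessPoint, b: AccessPoint) -> None:
           # A connection is only defined between boundary access points.
           if a not in self.access_points or b not in self.access_points:
               raise ValueError("connection endpoints must be access points "
                                "of this subnetwork")
           self.connections.append((a, b))

   if __name__ == "__main__":
       domain_a = Subnetwork("Domain A")
       ap1 = domain_a.add_access_point("UNI-1")
       ap2 = domain_a.add_access_point("E-NNI-1")
       domain_a.connect(ap1, ap2)
       print(len(domain_a.connections), "connection(s) across Domain A")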
5.2. Network Interfaces

A generic carrier network reference model describes a multi-carrier network environment. Each individual carrier network can be further partitioned into domains or sub-networks for administrative, technological or architectural reasons. The demarcation between (sub)networks can be either logical or physical and consists of a set of reference points identifiable in the optical network. From the control plane perspective, these reference points define a set of control interfaces in terms of optical control and management functionality. Figure 5.1 below illustrates this.

                  +---------------------------------------+
                  |        single carrier network         |
+--------------+  |                                       |
|              |  |  +------------+       +------------+  |
|      IP      |  |  |            |       |            |  |
|   Network    +--UNI+  Optical   +--UNI--+ Carrier IP |  |
|              |  |  | Subnetwork +--+    |  network   |  |
+--------------+  |  | (Domain A) |  |    |            |  |
                  |  +------+-----+  |    +------+-----+  |
                  |         |        |           |        |
                  |       I-NNI    E-NNI        UNI       |
+--------------+  |         |        |           |        |
|              |  |  +------+-----+  |    +------+-----+  |
|      IP      +--UNI+            |  +----+            |  |
|   Network    |  |  |  Optical   |       |  Optical   |  |
|              |  |  | Subnetwork +-E-NNI-+ Subnetwork |  |
+--------------+  |  | (Domain A) |       | (Domain B) |  |
                  |  +------+-----+       +------+-----+  |
                  |         |                    |        |
                  +---------+--------------------+--------+
                            |                    |
                           UNI                 E-NNI
                            |                    |
                  +---------+----+       +-------+-------+
                  | Other Client |       | Other Carrier |
                  |   Network    |       |    Network    |
                  | (ATM/SONET)  |       |               |
                  +--------------+       +---------------+

          Figure 5.1  Generic Carrier Network Reference Model

The network interfaces encompass two aspects of the networking functions: the user data plane interface and the control plane interface. The former concerns user data transmission across the physical network interface, and the latter concerns the control message exchange across the network interface, such as signaling and routing. We call the former the physical interface (PI) and the latter the control plane interface. Unless otherwise stated, the control interface is assumed in the remainder of this document.

5.2.1. Control Plane Interfaces

A control interface defines a relationship between the two connected network entities on either side of the interface. For each control interface, we need to define the architectural function each side plays and a controlled set of information that can be exchanged across the interface. The information flowing over this logical interface may include, but is not limited to:

- Endpoint name and address
- Reachability/summarized network address information
- Topology/routing information
- Authentication and connection admission control information
- Connection management signaling messages
- Network resource control information

Different types of interfaces can be defined for network control and architectural purposes and can be used as network reference points in the control plane. In this document, the following set of interfaces is defined, as shown in Figure 5.1.

The User-Network Interface (UNI) is a bi-directional signaling interface between the service requester and service provider control entities. The service requester control entity resides outside the carrier network control domain.

The Network-Network Interface (NNI) is a bi-directional signaling interface between two optical network elements or sub-networks.
We differentiate between interior (I-NNI) and exterior (E-NNI) NNIs as follows:

- E-NNI: An NNI interface between two control plane entities belonging to different control domains.

- I-NNI: An NNI interface between two control plane entities within the same control domain in the carrier network.

It should be noted that it is quite common to use an E-NNI between two sub-networks within the same carrier network if they belong to different control domains.

Different types of interfaces, interior vs. exterior, have different implied trust relationships for security and access control purposes. The trust relationship is not binary; instead, a policy-based control mechanism needs to be in place to restrict the type and amount of information that can flow across each type of interface, depending on the carrier's service and business requirements. Generally, two networks have a trust relationship if they belong to the same administrative domain.

An example of an interior interface is an I-NNI between two optical network elements in a single control domain. Exterior interface examples include an E-NNI between two different carriers or a UNI interface between a carrier optical network and its customers.

The control plane shall support the UNI and NNI interfaces described above. The interfaces shall be configurable in terms of the type and amount of control information exchanged, and their behavior shall be consistent with the configuration (i.e., exterior versus interior interfaces).

5.3. Intra-Carrier Network Model

The intra-carrier network model concerns the network service control and management issues within the networks owned by a single carrier.

5.3.1. Multiple Sub-networks

Without loss of generality, the optical network owned by a carrier service operator can be depicted as consisting of one or more optical sub-networks interconnected by direct optical links. There may be many different reasons for having more than one optical sub-network. It may be the result of hierarchical layering, of different technologies across access, metro and long-haul networks (as discussed below), or of business mergers and acquisitions or incremental optical network technology deployment by the carrier using different vendors or technologies.

A sub-network may be a single-vendor and single-technology network, but in general the carrier's optical network is heterogeneous in terms of the equipment vendors and the technologies utilized in each sub-network.

5.3.2. Access, Metro and Long-haul Networks

Few carriers have end-to-end ownership of the optical networks. Even if they do, the access, metro and long-haul networks often belong to different administrative divisions as separate optical sub-networks. Therefore, inter-(sub)network interconnection is essential for supporting end-to-end optical service provisioning and management. The access, metro and long-haul networks may use different technologies and architectures, and as such may have different network properties.

In general, end-to-end optical connectivity may easily cross multiple sub-networks, with the following possible scenarios:

Access -- Metro -- Access

Access -- Metro -- Long Haul -- Metro -- Access

5.4. Inter-Carrier Network Model

The inter-carrier model focuses on the service and control aspects between different carrier networks and describes the internetworking relationship between them.
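The interior/exterior trust distinction of Section 5.2.1, and the constraint (restated in Sections 5.4.2 and 9.1) that topology information does not cross the UNI or E-NNI, amount to a per-interface information-flow policy. The following non-normative Python sketch shows one way such a policy might be expressed; the interface and information categories, and the default policy itself, are illustrative assumptions.

   from enum import Enum, auto

   class InterfaceType(Enum):
       UNI = auto()     # exterior: user to network
       E_NNI = auto()   # exterior: between control domains
       I_NNI = auto()   # interior: within one control domain

   class InfoType(Enum):
       ENDPOINT_REACHABILITY = auto()
       SUMMARIZED_ADDRESSES = auto()
       TOPOLOGY = auto()
       SIGNALING = auto()

   # Illustrative default policy: topology information stays inside a
   # control domain; signaling and reachability may cross exterior
   # interfaces subject to further carrier policy.
   DEFAULT_POLICY = {
       InterfaceType.UNI:   {InfoType.SIGNALING, InfoType.ENDPOINT_REACHABILITY},
       InterfaceType.E_NNI: {InfoType.SIGNALING, InfoType.ENDPOINT_REACHABILITY,
                             InfoType.SUMMARIZED_ADDRESSES},
       InterfaceType.I_NNI: set(InfoType),   # full sharing inside the domain
   }

   def may_advertise(info: InfoType, interface: InterfaceType,
                     policy=DEFAULT_POLICY) -> bool:
       """Return True if this information type may cross the interface
       under the configured policy."""
       return info in policy[interface]

   if __name__ == "__main__":
       assert not may_advertise(InfoType.TOPOLOGY, InterfaceType.E_NNI)
       assert may_advertise(InfoType.TOPOLOGY, InterfaceType.I_NNI)
       print("policy checks passed")

A carrier would configure the policy per interface rather than hard-code it, consistent with the configurability requirement in Section 5.2.1.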
5.4.1. Carrier Network Interconnection

Inter-carrier interconnection provides for connectivity between optical network operators. To provide global-reach end-to-end optical services, optical service control and management between different carrier networks become essential. It is possible to support distributed peering within the IP client layer network, where the connectivity between two distant IP routers can be achieved via an optical transport network.

5.4.2. Implied Control Constraints

In the inter-carrier network model, each carrier's optical network is a separate administrative domain. Both the UNI interface between the user and the carrier network and the NNI interface between two carriers' networks cross the carrier's administrative boundary and are therefore, by definition, exterior interfaces. In terms of control information exchange, topology information shall not be allowed to cross either the E-NNI or the UNI interface.

6. Optical Service User Requirements

This section describes the user requirements for optical services, which in turn impose requirements on service control and management for the network operators. The user requirements reflect the perception of the optical service from a user's point of view.

6.1. Common Optical Services

The basic unit of an optical transport service is fixed-bandwidth optical connectivity between parties. However, different services are created based on the supported signal characteristics (format, bit rate, etc.), the service invocation methods and, possibly, the associated Service Level Agreement (SLA) provided by the service provider.

At present, the following are the major optical services provided in the industry:

- SONET/SDH, with different degrees of transparency
- Optical wavelength services
- Ethernet at 1 Gb/s and 10 Gb/s
- Storage Area Networks (SANs) based on FICON, ESCON and Fiber Channel

Optical wavelength service refers to transport services where signal framing is negotiated between the client and the network operator (framing and bit-rate dependent), and only the payload is carried transparently. SONET/SDH transport is most widely used for network-wide transport. Different levels of transparency can be achieved in the SONET/SDH transmission.

Ethernet services, specifically 1 Gb/s and 10 Gb/s Ethernet services, are gaining popularity due to the lower cost of the customer premises equipment and their simplified management requirements (compared to SONET or SDH). Ethernet services may be carried over either SONET/SDH (GFP mapping) or WDM networks. Ethernet service requests will require some service-specific parameters: priority class, VLAN Id/Tag and traffic aggregation parameters.

Storage Area Network (SAN) services: ESCON and FICON are proprietary versions of the service, while Fiber Channel is the standard alternative. As is the case with Ethernet services, SAN services may be carried over either SONET/SDH (using GFP mapping) or WDM networks.

The control plane shall provide the carrier with the capability to provision, control and manage all the services listed above.
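As an illustration of the service-specific parameters mentioned above, the following non-normative Python sketch shows how an Ethernet service request might be represented alongside the generic attributes of an optical service request. All field names are illustrative assumptions and are not taken from any signaling specification.

   from dataclasses import dataclass, field
   from typing import Optional

   @dataclass
   class OpticalServiceRequest:
       # Common attributes of an optical connection request.
       service_type: str            # e.g., "SONET/SDH", "Ethernet", "SAN"
       bit_rate_gbps: float         # e.g., 1 or 10 for Ethernet
       ingress_tna: str             # Transport Network Assigned address
       egress_tna: str
       service_level: str = "best-effort"   # carrier-defined class

   @dataclass
   class EthernetServiceRequest(OpticalServiceRequest):
       # Ethernet-specific parameters (Section 6.1): priority class,
       # VLAN Id/Tag and traffic aggregation parameters.
       priority_class: int = 0
       vlan_id: Optional[int] = None
       aggregation: dict = field(default_factory=dict)

   if __name__ == "__main__":
       req = EthernetServiceRequest(
           service_type="Ethernet", bit_rate_gbps=1.0,
           ingress_tna="TNA-A/ge-1", egress_tna="TNA-B/ge-3",
           priority_class=5, vlan_id=100)
       print(req)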
6.2. Bearer Interface Types

All the bearer interfaces implemented in the ONE shall be supported by the control plane and the associated signaling protocols. The following interface types shall be supported by the signaling protocol:

- SDH/SONET
- 1 Gb Ethernet, 10 Gb Ethernet (WAN mode)
- 10 Gb Ethernet (LAN mode)
- FC-N (N = 12, 50, 100, or 200) for Fiber Channel services
- OTN (G.709)
- PDH

6.3. Optical Service Invocation

As mentioned earlier, the methods of service invocation play an important role in defining different services.

6.3.1. Provider-Controlled Service Provisioning

In this scenario, users forward their service requests to the provider via a well-defined service management interface. All connection management operations, including set-up, release, query, or modification, shall be invoked from the management plane.

6.3.2. User-Controlled Service Provisioning

In this scenario, users forward their service requests to the provider via a well-defined UNI interface in the control plane (including proxy signaling). All connection management operation requests, including set-up, release, query, or modification, shall be invoked from directly connected user devices or their signaling representative (such as a signaling proxy).

6.3.3. Call Set-up Requirements

In summary, the following requirements for the control plane have been identified (see the sketch after this list for a non-normative illustration of a call set-up exchange):

- The control plane shall support action result codes as responses to any requests over the control interfaces.

- The control plane shall support requests for call set-up, subject to policies in effect between the user and the network.

- The control plane shall support the destination client device's decision to accept or reject call set-up requests from the source client's device.

- The control plane shall support requests for call set-up and deletion across multiple (sub)networks.

- NNI signaling shall support requests for call set-up, subject to policies in effect between the (sub)networks.

- Call set-up shall be supported for both uni-directional and bi-directional connections.

- Upon call request initiation, the control plane shall generate a network-unique Call-ID associated with the connection, to be used for information retrieval or other activities related to that connection.

- CAC shall be provided as part of the call control functionality. It is the role of the CAC function to determine whether the call can be allowed to proceed, based on resource availability and authentication.

- Negotiation of multiple service level options at call set-up shall be supported.

- The policy management system must determine what kinds of calls can be set up.

- The control plane elements need the ability to rate limit (or pace) call set-up attempts into the network.

- The control plane shall report the success or failure of a call request to the management plane.

- Upon a connection request failure, the control plane shall report to the management plane a cause code identifying the reason for the failure, and all allocated resources shall be released. A negative acknowledgment shall be returned to the source.

- Upon a connection request success, a positive acknowledgment shall be returned to the source when the connection has been successfully established, and the management plane shall be notified.

- The control plane shall support requests for call release by Call-ID.

- The control plane shall allow any end point or any intermediate node to initiate call release procedures.

- Upon call release completion, all resources associated with the call shall become available for new requests.

- The management plane shall be able to release calls or connections established by the control plane, both gracefully and forcibly, on demand.

- Partially deleted calls or connections shall not remain within the network.

- End-to-end acknowledgments shall be used for connection deletion requests.

- Connection deletion shall not result in either restoration or protection being initiated.

- The control plane shall support management plane and neighboring device requests for status query.

- The UNI shall support initial registration and updates of the UNI-C with the network via the control plane.
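The sketch referred to above is a minimal, non-normative Python rendering of the call set-up behavior described in this section: call admission control is applied first, a network-unique Call-ID is generated on acceptance, and a negative acknowledgment with a cause code is returned on failure. All names and the CAC rule are illustrative assumptions.

   import itertools
   from dataclasses import dataclass
   from typing import Optional

   _call_ids = itertools.count(1)

   @dataclass
   class CallRequest:
       source: str
       destination: str
       service_level: str
       bidirectional: bool = True

   @dataclass
   class CallResult:
       accepted: bool
       call_id: Optional[int] = None
       cause_code: Optional[str] = None   # reported to the management plane on failure

   def cac_allows(request: CallRequest) -> bool:
       """Placeholder call admission control decision: resource
       availability, authentication and policy checks would go here."""
       return request.service_level in {"gold", "silver", "bronze"}

   def handle_call_setup(request: CallRequest) -> CallResult:
       # Call admission control is applied before any connection set-up.
       if not cac_allows(request):
           # Negative acknowledgment with a cause code; any resources
           # allocated so far would be released at this point.
           return CallResult(accepted=False, cause_code="CAC_REJECTED")
       # A network-unique Call-ID is generated for later retrieval,
       # status query and release of the call.
       return CallResult(accepted=True, call_id=next(_call_ids))

   if __name__ == "__main__":
       print(handle_call_setup(CallRequest("client-A", "client-B", "gold")))
       print(handle_call_setup(CallRequest("client-A", "client-B", "unknown")))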
6.4. Optical Connection Granularity

The service granularity is determined by the specific technology, framing and bit rate of the physical interface between the ONE and the client at the edge, and by the capabilities of the ONE. The control plane needs to support signaling and routing for all the services supported by the ONE. In general, there should not be a one-to-one correspondence imposed between the granularity of the service provided and the maximum capacity of the interface to the user.

The control plane shall support the ITU Rec. G.709 connection granularity for the OTN network. The control plane shall support the SDH/SONET connection granularity. Sub-rate interfaces, such as VT/TU granularity (as low as 1.5 Mb/s), shall be supported by the optical control plane. In addition, 1 Gb and 10 Gb granularity shall be supported for 1 Gb/s and 10 Gb/s (WAN mode) Ethernet framing types, if implemented in the hardware.

The following Fiber Channel interfaces shall be supported by the control plane if the given interfaces are available on the equipment:

- FC-12
- FC-50
- FC-100
- FC-200

Encoding of service types in the protocols used shall be such that new service types can be added by adding new code point values or objects.

6.5. Other Service Parameters and Requirements

6.5.1. Classes of Service

We use "service level" to describe priority-related characteristics of connections, such as holding priority, set-up priority, or restoration priority. The intent currently is to allow each carrier to define the actual service level in terms of priority, protection, and restoration options. Therefore, individual carriers will determine the mapping of individual service levels to a specific set of quality features. The control plane shall be capable of mapping individual service classes into specific protection and/or restoration options.

6.5.2. Diverse Routing Attributes

The ability to route service paths diversely is a highly desirable feature. Diverse routing is one of the connection parameters and is specified at the time of connection creation. The following provides a basic set of requirements for diverse routing support.

The control plane routing algorithms shall be able to route a single demand diversely from N previously routed demands in terms of link-disjoint, node-disjoint and SRLG-disjoint paths. A non-normative sketch of SRLG-diverse path selection is given below.
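The following non-normative Python sketch illustrates one simple way to compute a path that is SRLG-diverse from a previously routed path: links sharing any SRLG with the first path are pruned before running a hop-count shortest-path search. The topology, SRLG identifiers and the pruning heuristic are illustrative assumptions; a real implementation would use constraint-based routing over the carrier's actual resource database.

   from collections import deque

   # Topology: each undirected link is keyed by a (node, node) pair and
   # carries the set of Shared Risk Link Groups (SRLGs) it belongs to.
   LINKS = {
       ("A", "B"): {"srlg-1"},
       ("B", "Z"): {"srlg-2"},
       ("A", "C"): {"srlg-1"},          # shares a risk group with A-B
       ("C", "Z"): {"srlg-3"},
       ("A", "D"): {"srlg-4"},
       ("D", "Z"): {"srlg-5"},
   }

   def neighbors(node, excluded_srlgs):
       for (a, b), srlgs in LINKS.items():
           if srlgs & excluded_srlgs:
               continue                  # link shares a risk with the first path
           if a == node:
               yield b
           elif b == node:
               yield a

   def shortest_path(src, dst, excluded_srlgs=frozenset()):
       """Hop-count shortest path avoiding links in the excluded SRLGs."""
       queue, seen = deque([[src]]), {src}
       while queue:
           path = queue.popleft()
           if path[-1] == dst:
               return path
           for nxt in neighbors(path[-1], excluded_srlgs):
               if nxt not in seen:
                   seen.add(nxt)
                   queue.append(path + [nxt])
       return None

   def srlgs_of(path):
       return set().union(*(LINKS[tuple(sorted(pair))]
                            for pair in zip(path, path[1:])))

   if __name__ == "__main__":
       first = shortest_path("A", "Z")
       diverse = shortest_path("A", "Z", excluded_srlgs=srlgs_of(first))
       print("first path:  ", first)
       print("SRLG-diverse:", diverse)

Link- and node-disjoint variants follow the same pattern, pruning the links or nodes of the previously routed demands instead of their SRLGs.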
7. Optical Service Provider Requirements

This section discusses specific service control and management requirements from the service provider's point of view.

7.1. Access Methods to Optical Networks

Multiple access methods shall be supported:

- Cross-office access (user NE co-located with the ONE)
- Direct remote access (dedicated links to the user)
- Remote access via an access sub-network (via a multiplexing/distribution sub-network)

All of the above access methods must be supported.

7.2. Dual Homing and Network Interconnections

Dual homing is a special case of the access network. Client devices can be dual-homed to the same or different hubs, the same or different access networks, the same or different core networks, or the same or different carriers. The different levels of dual-homing connectivity result in many different combinations of configurations. The main objective of dual homing is enhanced survivability.

Dual homing must be supported. Dual homing shall not require the use of multiple addresses for the same client device.

7.3. Inter-domain Connectivity

A domain is a portion of a network, or an entire network, that is controlled by a single control plane entity. This section discusses the various requirements for connecting domains.

7.3.1. Multi-Level Hierarchy

Traditionally, transport networks are divided into core inter-city long-haul networks, regional intra-city metro networks and access networks. Due to differences in transmission technologies, services and multiplexing needs, the three types of networks are served by different types of network elements and often have different capabilities. The diagram below shows an example three-level hierarchical network.

                   +--------------+
                   |  Core Long   |
         +---------+     Haul     +---------+
         |         |  Subnetwork  |         |
         |         +--------------+         |
         |                                  |
 +-------+------+                   +-------+------+
 |              |                   |              |
 |   Regional   |                   |   Regional   |
 |  Subnetwork  |                   |  Subnetwork  |
 +-------+------+                   +-------+------+
         |                                  |
 +-------+------+                   +-------+------+
 |              |                   |              |
 | Metro/Access |                   | Metro/Access |
 |  Subnetwork  |                   |  Subnetwork  |
 +--------------+                   +--------------+

          Figure 2  Multi-level hierarchy example

Routing and signaling for multi-level hierarchies shall be supported to allow carriers to configure their networks as needed.

7.3.2. Network Interconnections

Subnetworks may have multiple points of inter-connection. All relevant NNI functions, such as routing, reachability information exchange and inter-connection topology discovery, must recognize and support multiple points of inter-connection between subnetworks. Dual inter-connection is often used as a survivable architecture.

The control plane shall provide support for routing and signaling for subnetworks having multiple points of interconnection.

7.4. Names and Address Management

7.4.1. Address Space Separation

To ensure the scalability of, and a smooth migration toward, the switched optical network, the separation of three address spaces is required:

- Internal transport network addresses: These are used for routing control plane messages within the transport network.

- Transport Network Assigned (TNA) addresses: These are routable addresses in the optical transport network.

- Client addresses: These addresses have significance in the client layer.

7.4.2. Directory Services

Directory services shall support address resolution and translation between the various user edge device names and the corresponding optical network addresses. The UNI shall use the user naming schemes for connection requests. A non-normative sketch of such a resolution step is given below.
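The following non-normative Python sketch shows the kind of name-to-address resolution a directory service might perform: client-layer names used over the UNI are mapped to Transport Network Assigned (TNA) addresses, keeping the address spaces of Section 7.4.1 separate. The directory entries and address formats are illustrative assumptions.

   from dataclasses import dataclass

   @dataclass(frozen=True)
   class TNAAddress:
       """Transport Network Assigned address: routable within the
       optical transport network (Section 7.4.1)."""
       value: str

   # Directory mapping client-layer names (as used over the UNI) to
   # TNA addresses.  The entries are illustrative only.
   DIRECTORY = {
       "router-nyc-01": TNAAddress("tna://carrier-x/nyc/oxc-3/port-12"),
       "router-sfo-02": TNAAddress("tna://carrier-x/sfo/oxc-7/port-4"),
   }

   def resolve(client_name: str) -> TNAAddress:
       """Resolve a user edge device name to its optical network address,
       as a directory service supporting the UNI might do."""
       try:
           return DIRECTORY[client_name]
       except KeyError:
           raise LookupError(f"no optical network address registered "
                             f"for client name {client_name!r}") from None

   if __name__ == "__main__":
       print(resolve("router-nyc-01"))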
7.4.3. Network Element Identification

Each control domain and each network element within it shall be uniquely identifiable.

7.5. Policy-Based Service Management Framework

The IPO service must be supported by a robust policy-based management system in order to be able to make important decisions. Examples of policy decisions include:

- What types of connections can be set up for a given UNI?

- What information can be shared and what information must be restricted in automatic discovery functions?

- What are the security policies over signaling interfaces?

- Which border nodes should be used when routing? The decision may depend on factors including, but not limited to, the source and destination addresses, border node loading, and the time of the connection request.

Requirements:

- Service and network policies related to configuration and provisioning, admission control, and support of Service Level Agreements (SLAs) must be flexible, and at the same time simple and scalable.

- The policy-based management framework must be based on standards-based policy systems (e.g., IETF COPS).

- In addition, the IPO service management system must support and be backwards compatible with legacy service management systems.

8. Control Plane Functional Requirements for Optical Services

This section addresses the requirements for the optical control plane in support of service provisioning. The scope of the control plane includes the control of the interfaces and network resources within an optical network and of the interfaces between the optical network and its client networks. In other words, it should include both the NNI and UNI aspects.

8.1. Control Plane Capabilities and Functions

The control capabilities are supported by the underlying control functions and protocols built into the control plane.

8.1.1. Network Control Capabilities

The following capabilities are required in the network control plane to successfully deliver automated provisioning for optical services:

- Network resource discovery
- Address assignment and resolution
- Routing information propagation and dissemination
- Path calculation and selection
- Connection management

These capabilities may be supported by a combination of functions across the control and management planes.

8.1.2. Control Plane Functions for Network Control

The following are essential functions needed to support network control capabilities:

- Signaling
- Routing
- Automatic resource, service and neighbor discovery

Specific requirements for signaling, routing and discovery are addressed in Section 9.

The general requirements for the control plane functions to support optical networking and service functions include:

- The control plane must have the capability to establish, tear down and maintain end-to-end connections, and the hop-by-hop connection segments between any two end-points.

- The control plane must have the capability to support traffic-engineering requirements, including resource discovery and dissemination, constraint-based routing and path computation.

- The control plane shall support network status or action result code responses to any requests over the control interfaces.

- The control plane shall support call admission control on the UNI and connection admission control on the NNI.
- The control plane shall support graceful release of the network resources associated with a connection upon successful connection teardown or upon connection failure.

- The control plane shall support management plane requests for connection attribute/status queries.

- The control plane must have the capability to support various protection and restoration schemes.

- Control plane failures shall not affect active connections and shall not adversely impact the transport and data planes.

- The control plane should allow separation of the major control function entities, including routing, signaling and discovery, and should allow different distributions of those functions, including centralized, distributed or hybrid.

- The control plane should allow physical separation of the control plane from the transport plane, to support either tightly coupled or loosely coupled control plane solutions.

- The control plane should support routing and signaling proxies that participate in the normal routing and signaling message exchange and processing.

- Security and resilience are crucial issues for the control plane and will be addressed in Sections 10 and 11 of this document.

8.2. Control Message Transport Network

The control message transport network is a transport network for control plane messages; it consists of a set of control channels that interconnect the nodes within the control plane. Therefore, the control message transport network must be accessible by each of the communicating nodes (e.g., OXCs). If an out-of-band IP-based control message transport network is an overlay network built on top of the IP data network using some tunneling technology, these tunnels must be standards-based, such as IPSec, GRE, etc.

- The control message transport network must terminate at each of the nodes in the transport plane.

- The control message transport network shall not be assumed to have the same topology as the data plane, nor shall the data plane and control plane traffic be assumed to be congruently routed.

A control channel is the communication path for transporting control messages between network nodes, and over the UNI (i.e., between the UNI entity on the user side (UNI-C) and the UNI entity on the network side (UNI-N)). The control messages include signaling messages, routing information messages, and other control maintenance protocol messages, such as those for neighbor and service discovery.

The following three types of signaling in the control channel shall be supported:

- In-band signaling: The signaling messages are carried over a logical communication channel embedded in the data-carrying optical link or channel. For example, using the overhead bytes in SONET data framing as a logical communication channel is an in-band signaling method.

- In-fiber, out-of-band signaling: The signaling messages are carried over a dedicated communication channel separate from the optical data-bearing channels, but within the same fiber. For example, a dedicated wavelength or TDM channel may be used within the same fiber as the data channels.

- Out-of-fiber signaling: The signaling messages are carried over a dedicated communication channel or path within fibers different from those used by the optical data-bearing channels.
For example, dedicated optical fiber links or a communication path via a separate and independent IP-based network infrastructure are both classified as out-of-fiber signaling.

The UNI control channel and proxy signaling defined in the OIF UNI 1.0 [OIFUNI] shall be supported.

The control message transport network provides communication mechanisms between entities in the control plane.

- The control message transport network shall support reliable message transfer.

- The control message transport network shall have its own OAM mechanisms.

- The control message transport network shall use protocols that support congestion control mechanisms.

In addition, the control message transport network should support message priorities. Message prioritization allows time-critical messages, such as those used for restoration, to have priority over other messages, such as other connection signaling messages and topology and resource discovery messages.

The control message transport network shall be highly reliable and implement failure recovery.

8.3. Control Plane Interface to Data Plane

In the situation where the control plane and data plane are provided by different suppliers, this interface needs to be standardized. Requirements for a standard control-data plane interface are under study. The specification of a control plane interface to the data plane is outside the scope of this document.

The control plane should support a standards-based interface for configuring and controlling switching fabrics and port functions.

The data plane shall monitor and detect signal failures (LOL, LOS, etc.) and signal quality degradation (high BER, etc.) and shall be able to provide signal-failure and signal-degrade alarms to the control plane accordingly, to trigger proper mitigation actions in the control plane.

8.4. Management Plane Interface to Data Plane

The management plane shall be responsible for network resource management in the data plane. It should be able to partition the network resources and control the allocation and deallocation of those resources for use by the control plane.

The data plane shall monitor and detect signal failures and signal quality degradation and shall be able to provide signal-failure and signal-degrade alarms, plus associated detailed fault information, to the management plane in order to trigger and enable management actions for fault location and repair.

Management plane failures shall not affect the normal operation of a configured and operational control plane or data plane.

8.5. Control Plane Interface to Management Plane

The control plane is considered a managed entity within a network. Therefore, it is subject to management requirements just as other managed entities in the network are subject to such requirements.

The control plane should be able to service requests from the management plane for end-to-end connection provisioning (e.g., SPC connections) and for control plane database information queries (e.g., of the topology database).

The control plane shall report all control plane faults to the management plane with detailed fault information.

In general, the management plane shall have authority over the control plane. The management plane should be able to configure routing, signaling and discovery control parameters, such as hold-down timers, hello intervals, etc., to affect the behavior of the control plane.

In the case of a network failure, both the management plane and the control plane need fault information at the same priority.

The control plane shall be responsible for providing necessary statistical data, such as call counts and traffic counts, to the management plane. These data should be available upon query from the management plane.

The management plane shall be able to tear down connections established by the control plane, both gracefully and forcibly, on demand.
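The plane-to-plane interactions of Sections 8.3 through 8.5 can be summarized as two sets of operations crossing the management/control boundary. The following non-normative Python sketch lists them as abstract interfaces; the method names and signatures are illustrative assumptions and do not correspond to any standardized API.

   from abc import ABC, abstractmethod

   class ManagementToControlPlane(ABC):
       """Requests the management plane may issue to the control plane
       (Section 8.5), sketched as an abstract interface."""

       @abstractmethod
       def provision_spc(self, ingress_tna: str, egress_tna: str) -> str:
           """Set up a soft permanent connection; returns a connection id."""

       @abstractmethod
       def query_database(self, name: str) -> dict:
           """Return control plane database information, e.g. the
           topology database, upon management plane query."""

       @abstractmethod
       def get_statistics(self) -> dict:
           """Provide statistical data such as call counts and traffic
           counts upon query from the management plane."""

       @abstractmethod
       def configure(self, parameter: str, value: float) -> None:
           """Set control parameters such as hold-down timers or hello
           intervals to affect control plane behavior."""

       @abstractmethod
       def release_connection(self, connection_id: str, forced: bool = False) -> None:
           """Tear down a control-plane-established connection,
           gracefully by default or forcibly on demand."""

   class ControlToManagementPlane(ABC):
       """Notifications the control plane sends to the management plane."""

       @abstractmethod
       def report_fault(self, fault_info: dict) -> None:
           """Report a control plane fault with detailed fault information."""

       @abstractmethod
       def report_call_result(self, call_id: str, success: bool,
                              cause_code: str = "") -> None:
           """Report the success or failure of a call request (Section 6.3.3)."""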
8.6. Control Plane Interconnection

When two (sub)networks are interconnected at the transport plane level, the two corresponding control networks should also be interconnected at the control plane level. The control plane interconnection model defines the way in which two control networks can be interconnected, in terms of the controlling relationship and the control information flow allowed between them.

8.6.1. Interconnection Models

There are three basic types of control plane network interconnection models: overlay, peer and hybrid, which are defined by the IETF IPO WG document [IPO_frame], as discussed in the Appendix. Choosing the level of coupling depends upon a number of different factors, some of which are:

- The variety of clients using the optical network
- The relationship between the client and the optical network
- The operating model of the carrier

The overlay model (UNI-like model) shall be supported for client-to-optical control plane interconnection. Other models are optional for client-to-optical control plane interconnection.

For optical-to-optical control plane interconnection, all three models shall be supported. In general, the priority for support of the interconnection models should be overlay, hybrid and peer, in decreasing order.

9. Requirements for Signaling, Routing and Discovery

9.1. Requirements for Information Sharing over UNI, I-NNI and E-NNI

Different types of interfaces impose different requirements and functionality due to their different trust relationships. Specifically:

- Topology information shall not be exchanged across the E-NNI and UNI.

- The control plane shall allow the carrier to configure the type and extent of control information exchange across the various interfaces.

- Address resolution exchange over the UNI is needed if an addressing directory service is not available.

9.2. Signaling Functions

Call and connection control and management signaling messages are used for the establishment, modification, status query and release of an end-to-end optical connection. Unless otherwise specified, the word "signaling" refers to both inter-domain and intra-domain signaling.

- The inter-domain signaling protocol shall be agnostic to the intra-domain signaling protocol for all the domains within the network.

- Signaling shall support both strict and loose routing.

- Signaling shall support individual as well as groups of connection requests.

- Signaling shall support fault notifications.

- Inter-domain signaling shall support per-connection, globally unique identifiers for all connection management primitives, based on a well-defined naming scheme.

- Inter-domain signaling shall support crank-back and rerouting.

9.3. Routing Functions

Routing includes reachability information propagation, network topology/resource information dissemination and path computation.
9.3. Routing Functions

Routing includes reachability information propagation, network topology/resource information dissemination and path computation. Network topology/resource information dissemination provides each node in the network with information about the carrier network such that a single node is able to perform constraint-based path selection.

A mixture of hop-by-hop routing, explicit/source routing and hierarchical routing will likely be used within future transport networks. All three mechanisms must be supported.

Messages crossing untrusted boundaries must not contain information regarding the details of an internal network topology.

Requirements for routing information dissemination:

- The inter-domain routing protocol shall be agnostic to the intra-domain routing protocol within any of the domains in the network.

- The exchange of the following types of information shall be supported by inter-domain routing protocols:

  - Inter-domain topology

  - Per-domain topology abstraction

  - Per-domain reachability information

  - Metrics for routing decisions supporting load sharing, a range of service granularities and service types, restoration capabilities, diversity, and policy

Major concerns for routing protocol performance are scalability and stability, which impose the following requirement on the routing protocols:

- The routing protocol shall scale with the size of the network.

The routing protocols shall satisfy the following requirements:

1. The routing protocol shall support hierarchical routing information dissemination, including topology information aggregation and summarization.

2. The routing protocol(s) shall minimize global information and keep information locally significant as much as possible (e.g., information local to a node, a sub-network or a domain). Over external interfaces, only reachability, next routing hop and service capability information should be exchanged; other network-related information shall not leak out to other networks. For example, a single optical node may have thousands of ports; ports with common characteristics need not be advertised individually.

3. The routing protocol shall distinguish between static and dynamic routing information and shall update them differently; only dynamic routing information shall be updated in real time.

4. The routing protocol shall be able to control the frequency of dynamic information updates through different types of thresholds. Two types of thresholds could be defined: absolute thresholds and relative thresholds (a non-normative illustration appears at the end of this section).

5. The routing protocol shall support trigger-based and timeout-based information updates.

6. The inter-domain routing protocol shall support policy-based routing information exchange.

7. The routing protocol shall be able to support different levels of protection/restoration and other resiliency requirements. These are discussed in Section 10.

All of these scalability techniques impact the accuracy of the network resource representation. The tradeoff between routing information accuracy and routing protocol scalability is an important consideration for network operators.
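The sketch below, which is non-normative, shows one plausible interpretation of threshold-controlled, trigger-based and timeout-based dissemination of dynamic routing information (for example, available bandwidth on a link). The absolute/relative threshold semantics and all names are assumptions made for illustration, not definitions from this document.

   # Non-normative sketch: dampening dynamic routing information updates
   # with absolute and relative thresholds plus a periodic refresh.

   class LinkStateAdvertiser:
       def __init__(self, abs_threshold: float, rel_threshold: float,
                    timeout_s: float):
           self.abs_threshold = abs_threshold   # advertise if change >= amount
           self.rel_threshold = rel_threshold   # ...or >= fraction of last value
           self.timeout_s = timeout_s           # periodic (timeout-based) refresh
           self.last_advertised = None
           self.last_advertised_time = 0.0

       def should_advertise(self, value: float, now: float) -> bool:
           if self.last_advertised is None:
               return True                               # first advertisement
           if now - self.last_advertised_time >= self.timeout_s:
               return True                               # timeout-based update
           delta = abs(value - self.last_advertised)
           if delta >= self.abs_threshold:
               return True                               # absolute threshold
           if self.last_advertised and \
                   delta / abs(self.last_advertised) >= self.rel_threshold:
               return True                               # relative threshold
           return False

       def advertise(self, value: float, now: float) -> None:
           self.last_advertised = value
           self.last_advertised_time = now

   # Example: advertise available link bandwidth only on significant change.
   adv = LinkStateAdvertiser(abs_threshold=10.0, rel_threshold=0.2,
                             timeout_s=300.0)
   if adv.should_advertise(value=80.0, now=0.0):      # first sample: advertised
       adv.advertise(80.0, 0.0)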
9.4. Requirements for path selection

The following are functional requirements for path selection:

- Path selection shall support shortest-path routing.

- Path selection shall also support constraint-based routing. At least the following constraints shall be supported:

  - Cost

  - Link utilization

  - Diversity

  - Service class

- Path selection shall be able to include or exclude specific network resources, based on policy.

- Path selection shall be able to support different levels of diversity, including node, link, SRLG and SRG.

- Path selection algorithms shall provide carriers with the ability to support a wide range of services and multiple levels of service classes. Parameters such as service type, transparency, bandwidth, latency, bit error rate, etc. may be relevant.

A non-normative sketch of constraint-based path selection is given at the end of Section 9.

9.5. Automatic Discovery Functions

Automatic discovery functions include neighbor, resource and service discovery.

9.5.1. Neighbor discovery

Neighbor discovery can be described as an instance of auto-discovery that is used for associating two network entities within a layer network based on a specified adjacency relation. The control plane shall support the following neighbor discovery capabilities, as described in [ITU-g7714]:

- Physical media adjacency, which detects and verifies the physical-layer connectivity between two connected network element ports.

- Logical network adjacency, which detects and verifies the logical network-layer connection, above the physical layer, between network-layer-specific ports.

- Control adjacency, which detects and verifies the logical neighboring relation between two control entities associated with data plane network elements that form either a physical or a logical adjacency.

The control plane shall support manual neighbor adjacency configuration to either override or supplement the automatic neighbor discovery function.

9.5.2. Resource Discovery

Resource discovery is concerned with the ability to verify physical connectivity between two ports on adjacent network elements, improve inventory management of network resources, detect configuration mismatches between adjacent ports, associate port characteristics of adjacent network elements, etc. Resource discovery shall be supported.

Resource discovery can be achieved through either manual provisioning or automated procedures. The procedures are generic, while the specific mechanisms and control information can be technology dependent.

After neighbor discovery, resource verification and monitoring must be performed periodically to verify physical attributes and ensure compatibility.

9.5.3. Service Discovery

Service discovery can be described as an instance of auto-discovery that is used for verifying and exchanging the service capabilities of a network. Service discovery can only take place after neighbor discovery. Since the service capabilities of a network can change dynamically, service discovery may need to be repeated. Service discovery is required for all the optical services supported.
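As referenced in Section 9.4, the following non-normative sketch illustrates constraint-based path selection: a shortest-path computation that excludes specific resources by policy and avoids the SRLGs of an existing path to obtain diversity. The graph model, cost metric and all identifiers are assumptions made for illustration only.

   # Non-normative sketch of constraint-based path selection.

   import heapq
   from typing import Dict, List, Set, Tuple

   # Topology model: node -> list of (neighbor, cost, link_id, srlgs).
   Topology = Dict[str, List[Tuple[str, float, str, Set[int]]]]

   def constrained_shortest_path(topo: Topology, src: str, dst: str,
                                 excluded_links: Set[str] = frozenset(),
                                 excluded_srlgs: Set[int] = frozenset()) -> List[str]:
       """Dijkstra over the links that survive the exclusion constraints."""
       pq = [(0.0, src, [src])]
       visited = set()
       while pq:
           cost, node, path = heapq.heappop(pq)
           if node == dst:
               return path
           if node in visited:
               continue
           visited.add(node)
           for nbr, link_cost, link_id, srlgs in topo.get(node, []):
               if link_id in excluded_links or srlgs & excluded_srlgs:
                   continue                    # policy exclusion / SRLG diversity
               if nbr not in visited:
                   heapq.heappush(pq, (cost + link_cost, nbr, path + [nbr]))
       return []                               # no path satisfies the constraints

   # Example: compute a working path, then an SRLG-diverse protection path.
   topo: Topology = {
       "A": [("B", 1.0, "A-B", {10}), ("C", 2.0, "A-C", {11})],
       "B": [("Z", 1.0, "B-Z", {10})],
       "C": [("Z", 2.0, "C-Z", {12})],
   }
   working = constrained_shortest_path(topo, "A", "Z")            # A-B-Z
   protection = constrained_shortest_path(topo, "A", "Z",
                                          excluded_srlgs={10})    # A-C-Z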
10. Requirements for service and control plane resiliency

Resiliency is the capability of a network to continue its operations under the condition of failures within the network. The automatic switched optical network assumes the separation of the control plane and data plane; therefore, failures in the network can be divided into those affecting the data plane and those affecting the control plane. To provide enhanced optical services, resiliency measures in both the data plane and the control plane should be implemented.

The following failure handling principles shall be supported.

The control plane shall provide optical service failure detection and recovery functions such that failures in the data plane within the control plane's coverage can be quickly mitigated.

A failure of the control plane shall not in any way adversely affect the normal functioning of existing optical connections in the data plane.

In general, there shall be no single point of failure for any major control plane function, including signaling and routing.

The control plane shall provide reliable transfer of signaling messages and flow control mechanisms for easing any congestion within the control plane.

10.1. Service resiliency

In circuit-switched transport networks, the quality and reliability of established optical connections in the transport plane can be enhanced by the protection and restoration mechanisms provided by the control plane functions. Rapid recovery is required by transport network providers to protect services and to support stringent Service Level Agreements (SLAs) that dictate high reliability and availability for customer connectivity.

Protection and restoration are closely related techniques for repairing network node and link failures. Protection is a collection of failure recovery techniques that rehabilitate failed connections by pre-provisioning dedicated protection connections and switching to the protection circuit once a failure is detected. Restoration is a collection of reactive techniques that rehabilitate failed connections by dynamically rerouting them around the network failures using shared network resources. Protection switching is characterized by shorter recovery times at the cost of dedicated network resources, while dynamic restoration is characterized by longer recovery times with more efficient resource sharing.

Furthermore, protection and restoration can be performed either on a per-link/span basis or on an end-to-end connection path basis. The former is called local repair, initiated by a node closest to the failure, and the latter is called global repair, initiated from the ingress node.

Failures and signal degradation in the transport plane are usually technology specific and therefore shall be monitored and detected by the transport plane. The transport plane shall report both physical-level failures and signal degradation to the control plane in the form of signal-failure and signal-degrade alarms. The control plane shall support both alarm-triggered and hold-down-timer-based protection switching and dynamic restoration for failure recovery.

Clients will have different requirements for connection availability. These requirements can be expressed in terms of the "service level", which can be mapped to different restoration and protection options and to priority-related connection characteristics, such as holding priority (e.g., pre-emptable or not), set-up priority, or restoration priority. A non-normative example of such a mapping is sketched below.
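The following sketch is purely illustrative, not a recommended mapping: the service level names and the particular combinations of protection scheme and priorities are assumptions chosen for illustration.

   # Non-normative sketch: mapping carrier-defined service levels to
   # protection/restoration options and connection priorities.

   from dataclasses import dataclass

   @dataclass(frozen=True)
   class ResiliencyProfile:
       protection: str          # "1+1", "1:1", "1:N", "M:N" or "unprotected"
       restorable: bool
       setup_priority: int      # lower value = higher priority (assumption)
       holding_priority: int
       restoration_priority: int
       preemptable: bool

   # Hypothetical carrier policy table: service level -> resiliency profile.
   SERVICE_LEVEL_MAP = {
       "platinum": ResiliencyProfile("1+1", True, 0, 0, 0, False),
       "gold":     ResiliencyProfile("1:N", True, 1, 1, 1, False),
       "silver":   ResiliencyProfile("unprotected", True, 2, 2, 2, False),
       "bronze":   ResiliencyProfile("unprotected", False, 3, 3, 3, True),
   }

   def resolve_service_level(level: str) -> ResiliencyProfile:
       """Map a requested service level to protection/restoration options."""
       return SERVICE_LEVEL_MAP[level]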
However, the mapping of individual service levels to a specific set of protection/restoration options and connection priorities will be determined by individual carriers.

In order for the network to support multiple grades of service, the control plane must support differing protection and restoration options, as well as setup priority, restoration priority and holding priority, on a per-connection basis.

In general, the following protection schemes shall be considered for all protection cases within the network:

- Dedicated protection: 1+1 and 1:1

- Shared protection: 1:N and M:N

- Unprotected

The control plane shall support an "extra-traffic" capability, which allows unprotected traffic to be transmitted on the protection circuit.

The control plane shall support both trunk-side and drop-side protection switching.

The following restoration schemes should be supported:

- Restorable

- Un-restorable

Protection and restoration can be performed on an end-to-end basis per connection. They can also be performed on a per-span or per-link basis between two adjacent network nodes. Both schemes should be supported.

Protection and restoration actions are usually triggered by failures in the network. However, during network maintenance affecting protected connections, a network operator needs to proactively force the traffic on the protected connections to switch to their protection connections. Therefore, in order to support easy network maintenance, management-initiated protection and restoration shall be supported.

Protection and restoration configuration should be performed through software only.

The control plane shall allow the modification of protection and restoration attributes on a per-connection basis.

The control plane shall support mechanisms for reserving bandwidth resources for restoration.

The control plane shall support mechanisms for normalizing connection routing (reversion) after failure repair.

Normal connection management operations (e.g., connection deletion) shall not result in protection/restoration being initiated.

10.2. Control plane resiliency

The control plane may be affected by failures in signaling network connectivity and by software failures (e.g., in the signaling, topology and resource discovery modules).

The signaling control plane should implement signaling message priorities to ensure that restoration messages receive preferential treatment, resulting in faster restoration.

The optical control plane signaling network shall support protection and restoration options that enable it to self-heal in case of failures within the control plane.

Control network failure detection mechanisms shall distinguish between control channel failures and software process failures.

A control plane failure shall only impact the capability to provision new services.

Fault localization techniques for the isolation of failed control resources shall be supported.

Recovery from control plane failures shall result in complete recovery and re-synchronization of the network.
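As a non-normative illustration of the preceding requirements, the sketch below shows one way control channel failures might be distinguished from control software process failures, using hello messages for channel liveness and heartbeats for local processes. All names and timer values are assumptions.

   # Non-normative sketch: separating control channel failures from
   # software process failures so that distinct recovery actions apply.

   import time

   class ControlPlaneHealthMonitor:
       """Illustrative monitor separating channel and process failures."""

       def __init__(self, liveness_timeout_s: float = 15.0):
           self.liveness_timeout_s = liveness_timeout_s
           self.last_hello_rx = {}      # neighbor id -> last hello timestamp
           self.last_heartbeat = {}     # process name -> last heartbeat timestamp

       def on_hello(self, neighbor: str) -> None:
           self.last_hello_rx[neighbor] = time.monotonic()

       def on_process_heartbeat(self, process: str) -> None:
           self.last_heartbeat[process] = time.monotonic()

       def classify_failures(self) -> dict:
           now = time.monotonic()
           channel_down = [n for n, t in self.last_hello_rx.items()
                           if now - t > self.liveness_timeout_s]
           process_down = [p for p, t in self.last_heartbeat.items()
                           if now - t > self.liveness_timeout_s]
           # Distinct failure classes allow distinct recovery actions, e.g.,
           # rerouting the control channel vs. restarting a software process.
           return {"control_channel": channel_down,
                   "software_process": process_down}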
11. Security Considerations

In this section, security considerations and requirements for optical services and the associated control plane are described.

11.1. Optical Network Security Concerns

Since optical services are directly related to the physical network, which is fundamental to a telecommunications infrastructure, stringent security assurance mechanisms should be implemented in optical networks. In terms of security, an optical connection has two aspects: one is the security of the data plane to which the optical connection itself belongs, and the other is the security of the control plane.

11.1.1. Data Plane Security

- Misconnection shall be avoided in order to keep the user's data confidential. For enhancing the integrity and confidentiality of data, it may be helpful to support scrambling of data at layer 2 or encryption of data at a higher layer.

11.1.2. Control Plane Security

It is desirable to physically decouple the control plane from the data plane.

Restoration shall not result in misconnections (connections established to a destination other than the one intended), even for short periods of time (e.g., during contention resolution). For example, signaling messages used to restore connectivity after a failure should not be forwarded by a node before contention has been resolved.

Additional security mechanisms should be provided to guard against intrusions on the signaling network. Some of these may be provided with the help of the management plane:

- Network information shall not be advertised across exterior interfaces (UNI or E-NNI). The advertisement of network information across the E-NNI shall be controlled and limited in a configurable, policy-based fashion. The advertisement of network information shall be isolated and managed separately by each administration.

- The signaling network itself shall be secure, blocking all unauthorized access. The signaling network topology and addresses shall not be advertised outside a carrier's domain of trust.

- Identification, authentication and access control shall be rigorously used by network operators for providing access to the control plane.

- Discovery information, including neighbor discovery, service discovery, resource discovery and reachability information, should be exchanged in a secure way.

- Information on security-relevant events occurring in the control plane, or security-relevant operations performed or attempted in the control plane, shall be logged in the management plane.

- The management plane shall be able to analyze and exploit the logged data in order to determine whether it indicates a violation or threat to the security of the control plane.

- The control plane shall be able to generate alarm notifications about security-related events to the management plane in an adjustable and selectable fashion.

- The control plane shall support recovery from successful and attempted intrusion attacks.

11.2. Service Access Control

From a security perspective, network resources should be protected from unauthorized access and should not be used by unauthorized entities. Service access control is the mechanism that limits and controls the access of entities attempting to use network resources. Especially on the UNI and E-NNI, Connection Admission Control (CAC) functions should also support the following security features:

- CAC should be applied to any entity that tries to access network resources through the UNI (or E-NNI). CAC should include an authentication function for such entities in order to prevent masquerade (spoofing). Masquerade is the fraudulent use of network resources by pretending to be a different entity.
An authenticated entity should be given a service access level on a configurable, policy-determined basis.

- The UNI and NNI should provide optional mechanisms to ensure origin authentication and message integrity for connection management requests (such as set-up, tear-down and modify) and for connection signaling messages. This is important in order to prevent Denial of Service attacks. The UNI and E-NNI should also include mechanisms, such as usage-based billing based on CAC, to ensure non-repudiation of connection management messages.

- Each entity should be authorized to use network resources according to the service level given.

12. Acknowledgements

The authors of this document would like to acknowledge the valuable inputs from John Strand, Yangguang Xu, Deborah Brunhard, Daniel Awduche, Jim Luciani, Lynn Neir, Wesam Alanqar, Tammy Ferris, Mark Jones and Jerry Ash.

13. References

[carrier-framework] Y. Xue et al., "Carrier Optical Services Framework and Associated UNI Requirements", draft-many-carrier-framework-uni-00.txt, IETF, Nov. 2001.

[oif2001.196.0] M. Lazer, "High Level Requirements on Optical Network Addressing", oif2001.196.0.

[oif2001.046.2] J. Strand and Y. Xue, "Routing For Optical Networks With Multiple Routing Domains", oif2001.046.2.

[ipo-impairements] J. Strand et al., "Impairments and Other Constraints on Optical Layer Routing", Work in Progress, IETF.

[ccamp-gmpls] Y. Xu et al., "A Framework for Generalized Multi-Protocol Label Switching (GMPLS)", Work in Progress, IETF.

[mesh-restoration] G. Li et al., "RSVP-TE Extensions for Shared Mesh Restoration in Transport Networks", Work in Progress, IETF.

[sls-framework] Y. T'Joens et al., "Service Level Specification and Usage Framework", Work in Progress, IETF.

[control-frmwrk] G. Bernstein et al., "Framework for MPLS-based Control of Optical SDH/SONET Networks", Work in Progress, IETF.

[ccamp-req] J. Jiang et al., "Common Control and Measurement Plane Framework and Requirements", Work in Progress, IETF.

[tewg-measure] W. S. Lai et al., "A Framework for Internet Traffic Engineering Measurement", Work in Progress, IETF.

[ccamp-g.709] A. Bellato, "G.709 Optical Transport Networks GMPLS Control Framework", Work in Progress, IETF.

[onni-frame] D. Papadimitriou, "Optical Network-to-Network Interface Framework and Signaling Requirements", Work in Progress, IETF.

[oif2001.188.0] R. Graveman et al., "OIF Security Requirements", oif2001.188.0.a.

[ASTN] ITU-T Rec. G.8070/Y.1301 (2001), Requirements for the Automatic Switched Transport Network (ASTN).

[ASON] ITU-T Rec. G.8080/Y.1304 (2001), Architecture of the Automatic Switched Optical Network (ASON).

[DCM] ITU-T Rec. G.7713/Y.1704 (2001), Distributed Call and Connection Management (DCM).

[ASONROUTING] ITU-T Draft Rec. G.7715/Y.1706 (2002), Routing Architecture and Requirements for ASON Networks, work in progress.

[DISC] ITU-T Rec. G.7714/Y.1705 (2001), Generalized Automatic Discovery.

[DCN] ITU-T Rec. G.7712/Y.1703 (2001), Architecture and Specification of Data Communication Network.
Authors' Addresses

Yong Xue
UUNET/WorldCom
22001 Loudoun County Parkway
Ashburn, VA 20147
Email: yong.xue@wcom.com

Monica Lazer
AT&T
900 Route 202/206N, PO Box 752
Bedminster, NJ 07921-0000
Email: mlazer@att.com

Jennifer Yates
AT&T Labs
180 Park Ave, P.O. Box 971
Florham Park, NJ 07932-0000
Email: jyates@research.att.com

Dongmei Wang
AT&T Labs
Room B180, Building 103
180 Park Avenue
Florham Park, NJ 07932
Email: mei@research.att.com

Ananth Nagarajan
Sprint
9300 Metcalf Ave
Overland Park, KS 66212, USA
Email: ananth.nagarajan@mail.sprint.com

Hirokazu Ishimatsu
Japan Telecom Co., LTD
2-9-1 Hatchobori, Chuo-ku,
Tokyo 104-0032 Japan
Phone: +81 3 5540 8493
Fax: +81 3 5540 8485
Email: hirokazu@japan-telecom.co.jp

Olga Aparicio
Cable & Wireless Global
11700 Plaza America Drive
Reston, VA 20191
Phone: 703-292-2022
Email: olga.aparicio@cwusa.com

Steven Wright
Science & Technology
BellSouth Telecommunications
41G70 BSC
675 West Peachtree St. NE
Atlanta, GA 30375
Phone: +1 (404) 332-2194
Email: steven.wright@snt.bellsouth.com

Appendix: Interconnection of Control Planes

The interconnection of the IP router (client) and optical control planes can be realized in a number of ways depending on the required level of coupling. The control planes can be loosely or tightly coupled. Loose coupling is generally referred to as the overlay model, and tight coupling is referred to as the peer model. Additionally, there is the augmented model, which lies somewhere between the other two but is more akin to the peer model. The model selected determines the following:

- The details of the topology, resource and reachability information advertised between the client and optical networks

- The level of control IP routers can exercise in selecting paths across the optical network

The next three sections discuss these models in more detail, and the last section describes the coupling requirements from a carrier's perspective.

Peer Model (I-NNI-like model)

Under the peer model, the IP router clients act as peers of the optical transport network, such that a single routing protocol instance runs over both the IP and optical domains. In this regard, the optical network elements are treated just like any other router as far as the control plane is concerned. The peer model, although not strictly an internal NNI, behaves like an I-NNI in the sense that there is sharing of resource and topology information. Presumably a common IGP such as OSPF or IS-IS, with appropriate extensions, will be used to distribute topology information. One tacit assumption here is that a common addressing scheme will also be used for the optical and IP networks. A common address space can be trivially realized by using IP addresses in both the IP and optical domains; thus, the optical network elements become IP-addressable entities.

The obvious advantage of the peer model is the seamless interconnection between the client and optical transport networks. The tradeoff is the tight integration and the optical-specific routing information that must be known to the IP clients.

The discussion above has focused on the interconnection of the client and optical control planes. It applies equally well to the interconnection of two optical control planes.
Overlay Model (UNI-like model)

Under the overlay model, the IP client routing, topology distribution, and signaling protocols are independent of the routing, topology distribution, and signaling protocols at the optical layer. This model is conceptually similar to the classical IP-over-ATM model, but applied to an optical sub-network directly. Though the overlay model dictates that the client and optical networks are independent, this still allows the optical network to re-use IP layer protocols to perform the routing and signaling functions.

In addition to the protocols being independent, the addressing schemes used by the client and the optical network must be independent in the overlay model. That is, the use of IP-layer addressing in the clients must not place any specific requirement upon the addressing used within the optical control plane.

The overlay model would provide a UNI to the client networks through which the clients could request the addition, deletion or modification of optical connections. The optical network would additionally provide reachability information to the clients, but no topology information would be provided across the UNI.

Augmented Model (E-NNI-like model)

Under the augmented model, there are actually separate routing instances in the IP and optical domains, but information from one routing instance is passed through the other routing instance. For example, external IP addresses could be carried within the optical routing protocols to allow reachability information to be passed to IP clients. A typical implementation would use BGP between the IP client and the optical network. The augmented model, although not strictly an external NNI, behaves like an E-NNI in that there is limited sharing of information.

Generally, in a carrier environment there will be more than just IP routers connected to the optical network. Other examples of clients could be ATM switches or SONET ADM equipment. This may drive the decision towards loose coupling to prevent undue burdens upon non-IP router clients. Also, loose coupling would ensure that future clients are not hampered by legacy technologies.

Additionally, a carrier may, for business reasons, want a separation between the client and optical networks. For example, the ISP business unit may not want to be tightly coupled with the optical network business unit. Another reason for separation might simply be politics within a large carrier. That is, it would seem unlikely that the optical transport network could be forced to run the same set of protocols as the IP router networks. Also, by forcing the same set of protocols in both networks, the evolution of the two networks is directly tied together: the optical transport network protocols could not be upgraded without considering the impact on the IP router network (and vice versa).
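To summarize the three models discussed above, the following non-normative sketch contrasts the information assumed to cross the client/optical boundary in each case. The categorization is a simplification for illustration only.

   # Non-normative summary of the information assumed to cross the
   # client/optical boundary under each interconnection model.

   INTERCONNECTION_MODELS = {
       "overlay":   {"interface": "UNI-like",
                     "topology_shared": False,
                     "reachability_shared": True,
                     "routing_instances": "separate"},
       "augmented": {"interface": "E-NNI-like",
                     "topology_shared": False,
                     "reachability_shared": True,   # e.g., passed via BGP
                     "routing_instances": "separate, exchanging information"},
       "peer":      {"interface": "I-NNI-like",
                     "topology_shared": True,
                     "reachability_shared": True,
                     "routing_instances": "single instance (common IGP)"},
   }

   def information_shared(model: str) -> dict:
       """Return what a given model is assumed to expose across the boundary."""
       return INTERCONNECTION_MODELS[model]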
Operating models also play a role in deciding the level of coupling. [Freeland] gives four main operating models envisioned for an optical transport network:

- An ISP owning all of its own infrastructure (i.e., including fiber and ducts to the customer premises)

- An ISP leasing some or all of its capacity from a third party

- A carrier's carrier providing layer 1 services

- A service provider offering multiple layer 1, 2 and 3 services over a common infrastructure

Although relatively few, if any, ISPs fall into category 1, such ISPs would seem the most likely of the four to use the peer model. The other operating models would more likely lead to the choice of an overlay model. Most carriers fall into category 4 and thus would most likely choose an overlay model architecture.

Full Copyright Statement

Copyright (C) The Internet Society (2002). All Rights Reserved.

This document and translations of it may be copied and furnished to others, and derivative works that comment on or otherwise explain it or assist in its implementation may be prepared, copied, published and distributed, in whole or in part, without restriction of any kind, provided that the above copyright notice and this paragraph are included on all such copies and derivative works. However, this document itself may not be modified in any way, such as by removing the copyright notice or references to the Internet Society or other Internet organizations, except as needed for the purpose of developing Internet standards in which case the procedures for copyrights defined in the Internet Standards process must be followed, or as required to translate it into languages other than English.

The limited permissions granted above are perpetual and will not be revoked by the Internet Society or its successors or assigns.

This document and the information contained herein is provided on an "AS IS" basis and THE INTERNET SOCIETY AND THE INTERNET ENGINEERING TASK FORCE DISCLAIMS ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.