Internet Draft                                               Ajay Bakre
File: draft-bakre-mcast-atm-00.txt                      Takeshi Nishida
Expiration: May 1998                                  C&C Research Labs
                                                          NEC USA, Inc.
                                                           November 1997


      IP Multicast over ATM Networks with Cut-through Forwarding
                        for Inter LIS Traffic


Status of this Memo

   This document is an Internet Draft.  Internet Drafts are working
   documents of the Internet Engineering Task Force (IETF), its Areas,
   and its Working Groups.  Note that other groups may also distribute
   working documents as Internet Drafts.

   Internet Drafts are draft documents valid for a maximum of six
   months.  Internet Drafts may be updated, replaced, or obsoleted by
   other documents at any time.  It is not appropriate to use Internet
   Drafts as reference material or to cite them other than as a
   "working draft" or "work in progress."

   To learn the current status of any Internet-Draft, please check the
   "1id-abstracts.txt" listing contained in the Internet-Drafts Shadow
   Directories on ftp.is.co.za (Africa), nic.nordu.net (Europe),
   munnari.oz.au (Pacific Rim), ds.internic.net (US East Coast), or
   ftp.isi.edu (US West Coast).

Abstract

   This document proposes a scheme for IP multicasting in ATM networks
   that can achieve cut-through forwarding for inter LIS multicast
   traffic using ATM protocols.

1. Introduction

   The emergence of NHRP [8] as an alternative to hop-by-hop routing
   for IP unicast traffic has raised hopes that a similar solution can
   be developed for IP multicast as well.  The problems associated
   with extending multicast address resolution to the inter LIS case
   have been well documented in [3] and [4].  In particular,
   scalability and VC management issues make it impractical to extend
   multicast address resolution to inter LIS multicast routing.  This
   document proposes an alternative scheme based on ATM protocols that
   can provide true shortcut paths for inter LIS multicast traffic in
   a multi-LIS ATM cloud.  By true shortcut paths we mean paths that
   result from the use of ATM signaling without regard to the way LISs
   are interconnected.  The proposed scheme is described informally.
   The primary goal of this document is to encourage a renewed
   discussion on the feasibility of cut-through forwarding for inter
   LIS multicast traffic.

2. Proposed Scheme

   The proposed IP multicasting scheme is based on multicast switches.
   A multicast switch is similar to a switch-router in that it is
   capable of packet level forwarding as well as cell level
   forwarding.  However, a multicast switch differs from a
   switch-router in two important ways.  First, unlike a router, which
   can be part of multiple LISs, a multicast switch is part of exactly
   one LIS.  Thus, instead of multiple IP interfaces, a multicast
   switch essentially has only two "sides": on one side are the IP/ATM
   hosts within its LIS, and on the other are all other multicast
   switches and IP/ATM hosts in the ATM cloud.  Second, instead of
   using one of the IP based multicast routing protocols for
   hop-by-hop forwarding of multicast traffic, multicast switches
   treat the ATM cloud as a shared network in which each multicast
   switch is just one hop away from all other multicast switches.

   These features make it possible to achieve several important
   objectives.  First, receivers inside an LIS can be aggregated, with
   the multicast switch of that LIS representing them in the ATM
   cloud.  Second, for inter LIS multicast forwarding, VCs are
   established among multicast switches using ATM signaling, giving
   true shortcuts regardless of how individual LISs are
   interconnected.  Third, multicast switches can concatenate intra
   LIS VCs with inter LIS VCs to achieve cut-through forwarding
   through those switches.  Fourth, multicast forwarding within the
   ATM cloud can be separated from the traditional routing
   considerations of "reverse shortest path", which greatly simplifies
   inter LIS multicast forwarding.
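   As a purely illustrative, non-normative aid, the following Python
   sketch shows one possible in-memory representation of the two
   "sides" of a multicast switch: the hosts of its own LIS on one side
   and its peer multicast switches in the ATM cloud on the other.  All
   class, field, and method names are hypothetical and are not defined
   by this document.

      # Non-normative sketch: a hypothetical state model for a
      # multicast switch.  Nothing here is prescribed by this draft.
      from dataclasses import dataclass, field
      from typing import Dict, Set

      @dataclass
      class MulticastSwitch:
          lis_id: str            # the single LIS this switch serves
          # Side 1: IP/ATM hosts inside the local LIS, learned from
          # the local MARS, keyed by multicast group address.
          local_receivers: Dict[str, Set[str]] = field(default_factory=dict)
          local_senders: Dict[str, Set[str]] = field(default_factory=dict)
          # Side 2: ATM addresses of all other multicast switches in
          # the cloud, each reachable in a single ATM-level hop.
          peer_switches: Set[str] = field(default_factory=set)

          def add_local_receiver(self, group: str, atm_addr: str) -> None:
              # The switch represents this receiver toward the rest of
              # the ATM cloud (receiver aggregation per LIS).
              self.local_receivers.setdefault(group, set()).add(atm_addr)

          def advertised_groups(self) -> Set[str]:
              # Aggregated membership advertised to peer switches.
              return {g for g, h in self.local_receivers.items() if h}
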
3. Intra LIS Multicasting

   For multicasting within an LIS, the designated multicast switch in
   the LIS can also function as a multicast server (MCS), albeit with
   an important difference.  Unlike the MCS proposed in [9], which
   uses only packet level forwarding, a multicast switch can support
   both cell level and packet level forwarding.  Which method is used
   for a given multicast group depends on considerations such as the
   number of senders and receivers for that group within the LIS and
   the need for QoS based multicast using RSVP.  A multicast switch
   thus takes over the VC management function from individual
   senders, but leaves open the option of either concentrating the
   traffic from multiple senders on a single point to multipoint intra
   LIS data VC or establishing a separate VC for each sender.  For
   example, if the number of senders within the LIS is small, the
   multicast switch may establish a separate VC tree for each sender
   (see also the section on QoS considerations below).
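   Which forwarding method a switch uses for a group is a local
   decision.  The following non-normative sketch shows one conceivable
   policy; the threshold value and all identifiers are assumptions
   made for this example only and are not part of the proposed scheme.

      # Non-normative sketch of a per-group policy choice between a
      # shared point to multipoint data VC (packet level forwarding,
      # as with a conventional MCS) and separate per-sender VC trees
      # (cell level forwarding).  The threshold is an arbitrary
      # example value.
      def choose_intra_lis_forwarding(num_senders: int,
                                      qos_requested: bool,
                                      per_sender_limit: int = 4) -> str:
          if qos_requested:
              # Reservations are simpler to honour on dedicated
              # per-sender VC trees with their own traffic parameters.
              return "per-sender VC trees (cell level)"
          if num_senders <= per_sender_limit:
              # Few senders: separate trees keep cut-through
              # forwarding possible from each sender to the receivers.
              return "per-sender VC trees (cell level)"
          # Many senders: concentrate them on one intra LIS data VC;
          # the switch must then reassemble packets to avoid cell
          # interleaving, i.e. packet level forwarding.
          return "shared intra LIS data VC (packet level)"
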
4. Inter LIS Multicasting

   For multicasting across LIS boundaries, the multicast switches in
   individual LISs form a control tree among themselves.  This control
   tree may consist of point to multipoint VCs, one rooted at each
   multicast switch with all other multicast switches added as leaves.
   Another possibility is a mesh of point to point VCs interconnecting
   pairs of multicast switches, which may be the method of choice for
   QoS multicast (see the section on QoS considerations).  Scalability
   issues are considered in a later section.  A multicast switch can
   learn about other multicast switches in the ATM cloud through
   various mechanisms, e.g., by tagging and propagating such
   information via PNNI updates.

   The control tree is used by multicast switches to exchange group
   membership information about their respective LISs.  Multicast
   switches can learn about the existence of senders and receivers
   within their LISs from their local MARSs.  On discovering a sender
   within its LIS, a multicast switch can learn about the existence of
   receivers in other LISs from the information exchanged with other
   multicast switches in the ATM cloud.

   To form an inter LIS multicast tree, a multicast switch that has a
   sender within its LIS establishes a point to multipoint data VC to
   all other multicast switches that have receivers within their
   LISs.  ATM signaling is used to establish the inter LIS VC,
   ensuring true shortcut paths to all downstream multicast switches.
   Multicast switches added as leaves to this inter LIS VC in turn
   form intra LIS data VCs within their respective LISs for
   distribution of the multicast traffic originating at senders both
   within and outside their LISs.

   Whether a multicast switch aggregates traffic from multiple senders
   within its LIS on a single outgoing inter LIS VC, or forms a
   different inter LIS VC for each local sender, can be decided by
   individual switches.  The latter method allows cell level
   forwarding and may be suitable for QoS traffic, but can be wasteful
   if the number of senders in an LIS is large.  Downstream
   (receiving) multicast switches in turn determine how to distribute
   traffic from multiple senders (both local and external).  The
   alternatives range from aggregating all traffic for local receivers
   on a single intra LIS VC to having a separate VC for each sender.
   If separate intra and inter LIS VCs are established by each
   multicast switch for each sender, individual multicast switches can
   concatenate incoming and outgoing VCs to form a complete multicast
   tree for each sender.  This allows cell level (cut-through)
   forwarding from the sender to all the receivers.  Any aggregation
   of senders, on the other hand, requires packet level forwarding at
   the point of aggregation to prevent cell interleaving, although no
   routing lookup is needed at the multicast switches once the VCs are
   established.
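   The following non-normative sketch outlines how a multicast switch
   that discovers a local sender might select the leaves of its inter
   LIS point to multipoint data VC from the membership information
   exchanged over the control tree.  The two signalling callbacks
   stand in for the UNI point to multipoint setup and add-party
   procedures; they and all other names are hypothetical.

      # Non-normative sketch: building the inter LIS data VC for one
      # group.  "setup_p2mp" and "add_party" are placeholders for ATM
      # signalling procedures; they are not a real API.
      from typing import Callable, Dict, Set

      def build_inter_lis_tree(group: str,
                               remote_membership: Dict[str, Set[str]],
                               setup_p2mp: Callable[[str], object],
                               add_party: Callable[[object, str], None]):
          # Peer multicast switches that reported at least one
          # receiver for this group within their LIS.
          leaves = [peer for peer, groups in remote_membership.items()
                    if group in groups]
          if not leaves:
              return None, []
          # ATM signalling selects the path, so the resulting VC is a
          # true shortcut regardless of how LISs are interconnected.
          vc = setup_p2mp(leaves[0])
          for peer in leaves[1:]:
              add_party(vc, peer)
          return vc, leaves
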
5. Interoperation with External Mrouters

   Interoperation with Mrouters outside the ATM cloud is achieved by
   edge multicast switches.  A multicast switch configured as an edge
   switch participates in one or more IDMR protocols that may be in
   use on its non-ATM interfaces.  In addition, all edge switches in
   an ATM cloud cooperate to partition the external networks so that
   one of the edge switches becomes the designated forwarder for each
   external subnet (or IP network).  As a result, if traffic from an
   outside sender arrives at one or more edge switches, only one of
   them (the designated forwarder for the sender's subnet) will
   establish an inter LIS data tree within the ATM cloud.  All edge
   switches, however, can forward traffic originating inside the ATM
   cloud to outside Mrouters.  This includes multicast traffic that
   uses the ATM cloud as a transit network (possibly with some
   receivers in the ATM cloud as well).

6. Scalability Issues for Multicast Switches

   If the number of LISs (and multicast switches) in an ATM cloud is
   large, the requirement in the proposed scheme that each multicast
   switch exchange multicast group membership information with all
   other multicast switches in the ATM cloud may be hard to meet.  In
   such a case, a hierarchy of multicast switches analogous to the
   PNNI hierarchy may be used, such that multicast switches in a PNNI
   domain exchange complete group membership information with each
   other, while only summarized information is exchanged at the higher
   levels.  An inter LIS multicast tree will then consist of
   individual inter LIS trees in each PNNI domain together with any
   VCs required to interconnect such trees across domain boundaries.
   One proposal for using the PNNI hierarchy for multicasting in ATM
   networks can be found in [10].

7. QoS Considerations

   To support IP Integrated Services over ATM networks using RSVP [5],
   multicast switches also provide termination points for RSVP
   messages originating in their respective LISs.  Aggregate QoS
   requests based on RESV messages from individual LISs can be
   forwarded to one or more multicast switches that have local
   senders.  These multicast switches can then establish inter LIS
   data VCs with QoS parameters large enough to satisfy all downstream
   reservations.  Additional control VCs are needed within each LIS
   for propagating RSVP control messages.  A single point to
   multipoint control VC from a multicast switch to all registered
   multicast receivers in an LIS can be shared by all multicast groups
   for the distribution of PATH messages from local and external
   senders.  QoS receivers need additional point to point control VCs
   to send RESV messages back to the local multicast switch.
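   As a non-normative illustration only, the sketch below shows one
   way a multicast switch might merge the RESV requests of its local
   QoS receivers into a single aggregate request for a group, in the
   style of RSVP reservation merging.  The bandwidth-only model and
   all names are assumptions of this example, not requirements of the
   proposed scheme.

      # Non-normative sketch: merging local RESV state into one
      # aggregate request per group.  Real RSVP flowspecs carry more
      # than a single bandwidth figure; this is deliberately
      # simplified.
      from typing import Dict

      def aggregate_local_resv(resv_bandwidth: Dict[str, float]) -> float:
          # resv_bandwidth maps a local receiver to its requested
          # bandwidth.  Since all receivers listen to the same
          # traffic, the largest single request suffices for this LIS
          # (RSVP-style merging); the sending switch must then size
          # the inter LIS data VC to cover the largest aggregate
          # reported by any downstream LIS.
          return max(resv_bandwidth.values(), default=0.0)
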
8. Related Work

   Cell switch routers (CSRs) [7] can provide shortcut paths through
   IP routers.  One problem associated with CSRs is that simple
   concatenation of intra LIS VCs does not necessarily yield a true
   shortcut path from a sender to one or more receivers, because these
   VCs are formed individually using IP routing information.  Another
   scheme proposed for shortcut multicast routing is the IMSS proposal
   [1], which uses two different protocols, CONGRESS and IP-SENATE, to
   resolve IP multicast addresses to the ATM addresses of downstream
   routers and to establish inter LIS shortcut paths to such routers.
   That scheme consists of fairly complex protocols and attempts to
   solve the very general problem of IP multicasting in ATM using a
   mix of routers capable of shortcut forwarding as well as hop-by-hop
   forwarding.  By contrast, the scheme proposed in this document
   focuses on providing a simple shortcut forwarding solution for
   inter LIS multicast traffic in an ATM cloud that uses IDMR
   protocols only at the edge switches.

9. Conclusion

   This document proposed a scheme for IP multicasting that allows
   cut-through forwarding of inter LIS multicast traffic.  The scheme
   is based on multicast switches and includes methods of establishing
   intra and inter LIS VCs for data and control traffic.

10. Security Considerations

   Security considerations are not addressed in this document.

11. Intellectual Property Considerations

   NEC may seek patent or other intellectual property protection for
   some aspects discussed in this document.

12. Acknowledgments

   The authors have benefited from discussions with the following
   persons in preparing this document: Dipankar Raychaudhuri, Arup
   Acharya, Rajiv Dighe, Kunihiro Taniguchi, Hirohito Sakamoto and
   Bala Rajagopalan.

13. References

   [1]  Anker, T., et al., "IMSS: IP Multicast Shortcut Service", Work
        in Progress, July 1997.

   [2]  Armitage, G., "Support for Multicast over UNI 3.0/3.1 based
        ATM Networks", RFC 2022, November 1996.

   [3]  Armitage, G., "Issues affecting MARS Cluster Size", RFC 2121,
        March 1997.

   [4]  Armitage, G., "VENUS - Very Extensive Non-Unicast Service",
        RFC 2191, September 1997.

   [5]  Braden, R., et al., "Resource ReSerVation Protocol (RSVP) --
        Version 1 Functional Specification", RFC 2205, September 1997.

   [6]  Crawley, E., et al., "A Framework for Integrated Services and
        RSVP over ATM", Work in Progress, November 1997.

   [7]  Katsube, Y., et al., "Toshiba's Router Architecture Extensions
        for ATM: Overview", RFC 2098, February 1997.

   [8]  Luciani, J., et al., "NBMA Next Hop Resolution Protocol
        (NHRP)", Work in Progress, September 1997.

   [9]  Talpade, R. and Ammar, M., "Multicast Server Architectures for
        MARS-based ATM multicasting", RFC 2149, May 1997.

   [10] Venkateswaran, R. and Raghavendra, C.S., "Hierarchical
        Multicast Routing in Wide-Area ATM Networks", Proc. ICC '96,
        June 1996.

   [11] ATM Forum, "ATM User-Network Interface (UNI) Signalling
        Specification Version 4.0", af-sig-0061.000, July 1996.

   [12] ATM Forum PNNI Subworking Group, "Private Network-Network
        Interface Specification Version 1.0 (PNNI 1.0)",
        af-pnni-0055.000, March 1996.

14. Authors' Addresses

   Ajay Bakre
   C&C Research Labs, NEC USA, Inc.
   110 Rio Robles, San Jose CA 95132, USA.
   Phone: +1-408-943-3034
   Email: bakre@ccrl.sj.nec.com

   Takeshi Nishida
   C&C Research Labs, NEC USA, Inc.
   110 Rio Robles, San Jose CA 95132, USA.
   Phone: +1-408-943-3030
   Email: nishida@ccrl.sj.nec.com