draft-deoliveira-diff-te-preemption-01.txt                    March, 2003

IETF Internet Draft                              Jaudelice C. de Oliveira
Expires: September, 2003                                 Leonardo C. Chen
                                                         Caterina Scoglio
                                                          Ian F. Akyildiz
                                          Georgia Institute of Technology

   LSP Preemption Policies for Diff-Serv-aware MPLS Traffic Engineering

Status of this Memo

   This document is an Internet-Draft and is in full conformance with
   all provisions of Section 10 of RFC2026.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as
   Internet-Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

Abstract

   When the establishment of a higher priority LSP requires the
   preemption of a set of lower priority LSPs, a node has to make a
   local decision on the set of preemptable LSPs and select which LSPs
   will be preempted, based on a certain objective, in order to
   accommodate the newly signaled high priority LSP.  The preempted
   LSPs are then rerouted.  This draft documents a preemption policy
   that can be adjusted to stress different objectives: preempt the
   lowest priority LSPs, preempt the minimum number of LSPs, or preempt
   the exact bandwidth required to fit the new LSP.  Simulation results
   are given, and a comparison among several different policies, with
   respect to preemption cascading, number of preempted LSPs, priority,
   and wasted bandwidth, is also included.

1. Motivation

   Work is currently ongoing in the IETF Traffic Engineering Working
   Group to define the requirements and protocol extensions for
   DiffServ-aware MPLS Traffic Engineering (DS-TE) [DSTE-REQ,
   DSTE-PROTO].  Several Bandwidth Constraint models for use with DS-TE
   have been proposed [BC-RD, BC-MAM, BC-MAR], and their performance
   has been analyzed with respect to the use of preemption.  Recently,
   a non-disruptive rerouting mechanism for preempted TE LSPs was
   proposed in [SOFT-PREPT].

   Preemption can be used to ensure that high priority LSPs can always
   be routed through relatively favorable paths within a differentiated
   services environment.  In the same context, preemption can be used
   to implement various prioritized access policies as well as
   restoration policies following fault events [TE-REQ].

   Although not a mandatory attribute in the traditional IP world,
   preemption becomes a more attractive strategy in a Differentiated
   Services scenario [DEC-PREP, ATM-PREP].  Nevertheless, in the DS-TE
   approach, whose issues and requirements are discussed in [DSTE-REQ],
   the preemption policy is considered an important piece of the
   bandwidth reservation and management puzzle, but no preemption
   strategy is defined.

   This draft proposes a flexible preemption policy that can be
   adjusted to stress the desired preemption criteria: the priority of
   the LSPs to be preempted, the number of LSPs to be preempted, and
   the amount of bandwidth preempted.  The implications (cascading
   effect, bandwidth wastage, priority of preempted LSPs) of selecting
   a certain order of importance for the criteria are discussed for the
   examples given.

2. Introduction

   In [TE-REQ], issues and requirements for Traffic Engineering in an
   MPLS network are highlighted.  In order to address both
   traffic-oriented and resource-oriented performance objectives, the
   authors point out the need for *priority* and *preemption*
   parameters as Traffic Engineering attributes of traffic trunks.
   The notion of preemption and preemption priority is defined in
   [TEWG-FW], and preemption attributes are defined in [TE-REQ].  A
   *traffic trunk* is defined as an aggregate of traffic flows
   belonging to the same class which are placed inside an LSP
   [DSTE-REQ].  In this context, preemption is the act of selecting an
   LSP that will be removed from a given path in order to make room
   for another LSP with a higher priority.  More specifically, the
   *preemption attributes* determine whether an LSP with a certain
   *setup preemption priority* can preempt another LSP with a lower
   *holding preemption priority* from a given path, when there is
   competition for available resources.  The preempted LSP may then be
   rerouted, and soft preemption [SOFT-PREPT] can be used to avoid
   service disruption.

   For readability, a number of definitions from [DSTE-REQ] are
   repeated here:

   Class-Type (CT): the set of Traffic Trunks crossing a link that is
   governed by a specific set of bandwidth constraints.  CT is used
   for the purposes of link bandwidth allocation, constraint-based
   routing, and admission control.  A given Traffic Trunk belongs to
   the same CT on all links.

   TE-Class: a pair of:

      i.  a Class-Type, and

      ii. a preemption priority allowed for that Class-Type.  This
          means that an LSP transporting a Traffic Trunk from that
          Class-Type can use that preemption priority as the setup
          priority, as the holding priority, or both.

   By definition, there may be more than one TE-Class using the same
   CT, as long as each TE-Class uses a different preemption priority.
   Also, there may be more than one TE-Class with the same preemption
   priority, provided that each TE-Class uses a different CT.  The
   network administrator may define the TE-Classes in order to support
   preemption across CTs, to avoid preemption within a certain CT, or
   to avoid preemption completely, when so desired.
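As an aside, the TE-Class rules above (the same CT or the same priority may appear in several TE-Classes, but never the same (CT, priority) pair twice) can be illustrated with a small sketch. The class and method names here are illustrative assumptions, not taken from any specification.

```python
# Illustrative sketch of the TE-Class rules from [DSTE-REQ]: a TE-Class
# pairs a Class-Type (CT) with a preemption priority, and no two
# TE-Classes may share the same (CT, priority) pair.  DS-TE uses
# CTs 0..7 and preemption priorities 0..7.

class TEClassRegistry:
    def __init__(self):
        self._classes = set()  # set of (class_type, priority) pairs

    def add(self, class_type: int, priority: int) -> None:
        if not (0 <= class_type <= 7 and 0 <= priority <= 7):
            raise ValueError("CT and priority must be in 0..7")
        if (class_type, priority) in self._classes:
            raise ValueError("duplicate TE-Class")
        self._classes.add((class_type, priority))

reg = TEClassRegistry()
reg.add(0, 7)   # one CT may appear with several priorities...
reg.add(0, 6)
reg.add(1, 7)   # ...and one priority with several CTs,
try:
    reg.add(0, 7)   # ...but the same (CT, priority) pair is rejected
except ValueError:
    pass
```

The same registry would have to be configured identically on every LSR in the DS-TE domain, as required above.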
   To ensure coherent operation, the same TE-Classes must be configured
   in every Label Switched Router (LSR) in the DS-TE domain.

   As a consequence of the per-TE-Class treatment, the Interior Gateway
   Protocol (IGP) needs to advertise separate Traffic Engineering
   information for each TE-Class, which consists of the *Unreserved
   Bandwidth* (UB) information [DSTE-PROTO].  The UB information is
   checked by the routers against the parameters of the bandwidth
   constraint model in order to decide whether preemption is needed.
   Details on how to calculate the UB are given in [DSTE-PROTO].

3. LSP Setup Procedure and Preemption

   A new LSP setup request has two important parameters: bandwidth and
   preemption priority.  In order to minimize wastage, the set of LSPs
   to be preempted can be selected by optimizing an objective function
   that represents these two parameters and the number of LSPs to be
   preempted.  More specifically, the objective function could be any
   one, or a combination, of the following [DEC-PREP, ATM-PREP]:

   * Preempt the connections that have the least priority (preemption
     priority).  The QoS requirements of high priority traffic would
     be better satisfied.

   * Preempt the least number of LSPs.  The number of LSPs that need
     to be rerouted would be lower.

   * Preempt the least amount of bandwidth that still satisfies the
     request.  Resource utilization would be better.

   After the preemption selection phase is finished, the selected LSPs
   must be torn down (and possibly rerouted), releasing the reserved
   bandwidth.  The new LSP is established using the currently available
   bandwidth, and the UB information is then updated.  Figure 1 shows a
   flowchart that summarizes how each LSP setup request is treated in a
   preemption-enabled scenario.

              LSP Setup Request
             (TE-Class i, bw=r)
                     |
                     v
                                   NO
          UB[TE-Class i] >= r ?  ------>  Reject LSP Setup
                     |
                     | YES
                     v
                                   NO     Setup LSP /
            Preemption Needed ?  ------>  Update UB
                     |                        ^
                     | YES                    | Reroute LSPs
                     v                        |
                Preemption                    |
                Algorithm   ------------------

          Fig. 1: Flowchart for the LSP setup procedure.

   In [DEC-PREP], the authors propose connection preemption policies
   that optimize the discussed criteria in a given order of importance:
   number of connections, bandwidth, and priority; or bandwidth,
   priority, and number of connections.  The novelty in this draft's
   approach is an objective function that can be adjusted by the
   service provider in order to stress the desired criteria.  No
   particular order of criteria is enforced.

4. Preemption Cascading

   The decision to preempt an LSP may cause further preemptions in the
   network.  This is called the preemption cascading effect, and
   different cascading levels may be reached by the preemption of a
   single LSP.  The cascading levels are defined in the following
   manner: when an LSP is preempted and rerouted without causing any
   further preemption, the cascading is said to be of level 0; when a
   preempted LSP is rerouted and, in order to be established on the
   new route, it in turn causes the preemption of other LSPs, the
   cascading is said to be of level 1, and so on.

   Preemption cascading is not a desirable feature, and therefore
   policies that minimize it are of interest.  In the following, a new,
   versatile preemption heuristic is presented.  In the next section,
   preemption simulation results are discussed and the cascading
   effect is analyzed.

5. Preemption Heuristic

5.1. Preempting Resources on a Path

   It is important to note that once a request for an LSP setup
   arrives, the routers on the path to be taken by the new LSP need to
   check for bandwidth availability on all links that compose the
   path.  For the links on which the available bandwidth is not
   sufficient, the preemption policy needs to be activated in order to
   guarantee the end-to-end bandwidth reservation for the new LSP.
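A minimal sketch of the per-link decision logic of Fig. 1 is given below, under assumed bookkeeping: the function name, the `ub` mapping, and the `preemption_algorithm` callback are all illustrative, standing in for the heuristic of Section 5.

```python
# Sketch of the Fig. 1 setup procedure on one link.  ub[i] is the
# Unreserved Bandwidth advertised for TE-Class i, and avail is the
# bandwidth currently free on the link.  All names are illustrative.

def handle_setup_request(te_class, r, ub, avail, preemption_algorithm):
    """Return 'rejected' or 'accepted' for an LSP of bandwidth r."""
    if ub[te_class] < r:          # not admissible for this TE-Class
        return "rejected"
    if avail < r:                 # admissible, but resources are held
        victims = preemption_algorithm(r - avail)  # LSPs to preempt
        for lsp in victims:       # tear down / reroute the victims
            lsp.reroute()
    # establish the LSP and update the advertised UB values (not shown)
    return "accepted"
```

In the decentralized approach discussed next, every node along the path would run this check independently for its own outgoing link.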
   This is a decentralized approach, in which every node on the path is
   responsible for running the preemption algorithm and determining
   which LSPs would be preempted in order to fit the new request.  A
   decentralized approach may sometimes not lead to an optimal
   solution.

   In another approach, a *manager entity* runs the preemption policy
   and determines the best LSPs to be preempted in order to free the
   required bandwidth on all the links that compose the path.  A single
   LSP may already be established across several of the nodes on that
   path, and the preemption of that one LSP would free the required
   bandwidth on many of the links that compose the path.

   Both the centralized and the decentralized approach have their
   advantages and drawbacks.  A centralized approach would be more
   precise, but requires that the whole network state be stored and
   updated accordingly, which raises scalability issues.  In a network
   where LSPs are mostly static, an off-line decision can be made to
   reroute LSPs, and the centralized approach could be appropriate.
   However, in a dynamic network in which LSPs are frequently set up
   and torn down, the correctness of the stored network state could be
   questionable.  In this scenario, the decentralized approach would
   bring more benefits, even when resulting in a non-optimal solution.
   A distributed approach is also easier to implement due to the
   distributed nature of the current Internet protocols.

   Since the current Internet routing protocols are essentially
   distributed, a decentralized approach was selected for the LSP
   preemption policy.  The parameters required by the new preemption
   policy are either already available in protocols such as OSPF or
   easily determined.

5.2. Preemption Heuristic Algorithm

   Consider a request for a new LSP setup with bandwidth b and setup
   preemption priority p.
   When preemption is needed, due to lack of available resources, the
   preemptable LSPs are chosen among the ones with a lower holding
   preemption priority (higher numerical value) in order to fit
   r = b - Abw(l), where r represents the actual bandwidth that needs
   to be preempted: the requested bandwidth b minus the bandwidth
   available on link l, Abw(l).

   L is the set of active LSPs having a holding preemption priority
   lower (numerically higher) than p.  b(l) is the bandwidth reserved
   by LSP l in L, expressed in bandwidth modules, and p(l) is the
   holding preemption priority of LSP l.

   In order to represent a cost for each preemption priority, an
   associated cost y(l), inversely related to the holding preemption
   priority p(l), is defined.  For simplicity, the linear relation
   y(l) = 8 - p(l) is chosen.  y is a cost vector with L components
   y(l), and b is a reserved bandwidth vector with dimension L and
   components b(l).

   Concerning the objective function, as reported in the previous
   section, three main objectives can be pursued in the selection of
   preempted LSPs:

   * minimize the priority of preempted LSPs,
   * minimize the number of preempted LSPs,
   * minimize the preempted bandwidth.

   To give each service provider the widest choice over the overall
   objective, we define the following cost, which for simplicity is
   chosen as a weighted sum of the above mentioned criteria:

      H(l) = alpha y(l) + beta + gamma (b(l) - r)^2

   In this equation, alpha y(l) represents the cost of preempting LSP
   l; beta represents the preference for a minimum number of LSPs to
   be preempted in order to fit the request r; and gamma (b(l) - r)^2
   penalizes the choice of an LSP whose preemption would result in
   high bandwidth wastage.  The coefficients alpha, beta, and gamma
   are suitable weights that can be configured in order to stress the
   importance of each component of H.  H is calculated for each LSP in
   L.
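As a sketch, the cost H and a simplified smallest-H-first selection might look as follows. This greedy reading is an assumption on our part: it omits the tie-breaking refinements described next and the full algorithm of [PREEMPTION], so it need not reproduce the exact selections of the examples below.

```python
# H(l) = alpha*y(l) + beta + gamma*(b(l) - r)^2, with y(l) = 8 - p(l).
# Simplified greedy sketch: preempt the smallest-H LSPs (ties broken by
# reserved bandwidth, increasing) until the freed bandwidth covers r.

def cost(b_l, p_l, r, alpha, beta, gamma):
    y_l = 8 - p_l                       # cost associated with priority p(l)
    return alpha * y_l + beta + gamma * (b_l - r) ** 2

def select_preempted(lsps, r, alpha, beta, gamma):
    """lsps: iterable of (name, bandwidth b(l), holding priority p(l))."""
    ranked = sorted(
        lsps, key=lambda t: (cost(t[1], t[2], r, alpha, beta, gamma), t[1])
    )
    chosen, freed = [], 0
    for name, b_l, _p_l in ranked:
        if freed >= r:
            break
        chosen.append(name)
        freed += b_l
    return chosen

# Example: for LSP l8 of Table 1 (b=45, p=5) with alpha=1, beta=0,
# gamma=1 and r=155, H = 3 + (45-155)^2 = 12103.
```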
   The LSPs to be preempted are chosen as the ones with the smallest H
   that together add enough bandwidth to accommodate r.

   If H contains only repeated values (alpha=0, beta=1, gamma=0), the
   LSPs are not reordered, and the selection of LSPs for preemption
   follows the original sequence of LSPs.  In other cases, when H
   contains some repeated values, the order of choice among them
   follows the bandwidth b reserved by each of the LSPs concerned, in
   increasing order.  For each LSP with a repeated H value, the
   algorithm checks whether the bandwidth b assigned to that LSP alone
   is enough to satisfy r.  If there is no such LSP, it checks whether
   the bandwidth of each of those LSPs, added to the bandwidth of the
   previously preempted LSPs, is enough to satisfy r.  If that is not
   true for any LSP in the sequence of repeated H values, the
   algorithm preempts the LSP with the largest amount of bandwidth in
   the sequence, and keeps preempting in decreasing order of b until r
   is satisfied or the sequence is exhausted.  If the sequence is
   exhausted and r is not satisfied, the algorithm again selects LSPs
   to be preempted based on increasing order of H.  More details on
   the algorithm are given in [PREEMPTION].

   The algorithm's output contains the information about which LSPs
   are to be preempted and the amount of bandwidth preempted.

6. Examples

6.1. Static Case

   Consider a link with 16 LSPs with reserved bandwidth b in Mbps,
   preemption holding priority p, and cost y, as shown in Table 1.  In
   this example, 8 TE-Classes are active.  The preemption here is
   performed on a single link, as an illustrative example.
   -------------------------------------------------------
   LSP            l1   l2   l3   l4   l5   l6   l7   l8
   -------------------------------------------------------
   Bandwidth (b)  20   10   60   25   20    1   75   45
   Priority (p)    1    2    3    4    5    6    7    5
   Cost (y)        7    6    5    4    3    2    1    3
   -------------------------------------------------------
   LSP            l9  l10  l11  l12  l13  l14  l15  l16
   -------------------------------------------------------
   Bandwidth (b) 100    5   40   85   50   20   70   25
   Priority (p)    3    6    4    5    2    3    4    7
   Cost (y)        5    2    4    3    6    5    4    1
   -------------------------------------------------------

            Table 1: LSPs for the static case.

   Suppose the network operator configures beta=0, indicating that the
   number of LSPs preempted is not important (rerouting is allowed and
   not expensive: small topology), and alpha=1 and gamma=1, indicating
   that preemption priority and preempted bandwidth are more
   important.  A request for an LSP establishment arrives with r=155
   Mbps and p=0 (the highest possible priority, which implies that all
   LSPs with p>0 in Table 1 will be considered when running the
   algorithm).  LSPs l8, l12, and l16 are selected for preemption.

   Suppose the network operator instead decides that it is more
   appropriate to configure alpha=1, beta=1, and gamma=0, because in
   this network rerouting is now cheaper, LSP priority is again very
   important, but bandwidth is not a critical issue.  In this case,
   LSPs l7 and l12 are selected for preemption.

   To take into account the number of LSPs preempted, the preemption
   priority, and the amount of bandwidth preempted, the network
   operator may set alpha=beta=gamma=1 (these numbers could also be
   normalized in order to balance the importance of each component in
   the cost function).  In that case, LSPs l12 and l15 are selected.

   From the above example, it can be observed that when the number of
   preempted LSPs was not an issue, 3 LSPs adding up to exactly the
   requested bandwidth, and with the lowest priority, were selected.
   When a possible waste of bandwidth was not an issue, 2 LSPs were
   selected, adding more bandwidth than requested but with lower
   preemption priority.  Considering the three factors as equally
   crucial, 2 LSPs were preempted, in this case adding up to exactly
   155 Mbps with the lowest possible preemption priorities.

   If a balance amongst the three objectives is sought, the
   coefficients alpha, beta, and gamma need to be configured
   carefully.  In the example, alpha multiplies a term that could take
   any value between 1 and sum(y) (1 and 60), beta multiplies a number
   between 1 and L (the total number of LSPs in the link: 1 and 16),
   and gamma multiplies a number between min(b) and sum(b) (1 and
   651).  It is very likely that neither the number multiplied by
   alpha nor the number multiplied by beta will be large when compared
   to the number multiplied by gamma, which will be of the order of r.
   Depending on the value of r, the gamma term in the objective
   function can be quite large when compared to the other two terms.

   As an example, assume a request arrives for an LSP with r=90.  If
   priority alone is selected as the most important criterion
   (alpha=1, beta=gamma=0), LSPs l7 and l16 would be selected for
   preemption.  When the number of preempted LSPs is the criterion of
   interest (beta=1, alpha=gamma=0), LSP l9 would be selected,
   releasing 100 Mbps.  If bandwidth is the only important criterion,
   LSPs l5 and l15 could be selected, adding up to exactly 90 Mbps.
   When a balance is sought, the coefficients could be selected as
   follows: alpha=1, beta=1, and gamma=0.01.  In that case, two LSPs
   would be selected for preemption, LSPs l7 and l16, adding 100 Mbps
   but both with the least priority.
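The bandwidth totals quoted for these selections can be checked directly against the Table 1 values (a quick verification sketch):

```python
# Quick check of the bandwidth totals quoted for the Table 1 example.
b = {"l5": 20, "l7": 75, "l8": 45, "l9": 100, "l12": 85,
     "l15": 70, "l16": 25}

# r = 155 Mbps:
assert b["l8"] + b["l12"] + b["l16"] == 155  # alpha=gamma=1, beta=0: exact fit
assert b["l7"] + b["l12"] == 160             # alpha=beta=1, gamma=0: 5 Mbps over
assert b["l12"] + b["l15"] == 155            # alpha=beta=gamma=1: exact fit

# r = 90 Mbps:
assert b["l7"] + b["l16"] == 100             # priority only: 10 Mbps over
assert b["l9"] == 100                        # number only: one LSP, 10 Mbps over
assert b["l5"] + b["l15"] == 90              # bandwidth only: exact fit
```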
   The sensitivity of the objective function to the coefficients was
   analyzed; in this case, the same LSPs were selected for preemption
   whenever alpha >= 0.35, beta <= 3, and gamma <= 0.3.

6.2. Dynamic Cases

6.2.1 Dynamic Case I

      R04--------R05----------R01
       |          |           \ |
       |          |           \ |
       |          |           \ |
       |          |           \ |
      R08----R09---R06----R02
       |      |     |     \    |
       |      |     |     \    |
       |      |     |     \    |
       |      |     |     \    |
      R11--------R10---R07----R03

      Fig. 2. Network topology for dynamic case 1.

   In the network depicted in Figure 2, suppose:

   * Each link has 100 Mbps of bandwidth.

   * The source and destination of each LSP setup request are selected
     randomly (uniform distribution).

   * The bandwidth request of each LSP is randomly selected from 2, 4,
     6, 8, or 10 Mbps.

   * LSP setup requests have a 50% chance of being of priority 7 (the
     lowest), a 20% chance of being of priority 6, and a 6% chance
     each of being of priorities 1-5.

   * LSP setup requests arrive according to a Poisson process with an
     average interarrival time of 2 seconds, and holding times are
     exponentially distributed with a mean of 500 seconds.

   Experiments were run for the preemption policy with different
   values of alpha, beta, and gamma, for a total of 3980 LSP setup
   requests.

   Considering the priority of LSPs as the most important parameter
   (alpha=1, beta=gamma=0), 7% of the LSP setup requests were
   immediately rejected.  8% of the LSPs were preempted, and 74% of
   those were successfully rerouted.  Cascading did not exceed level
   1.  83.8% of the preemptions affected only a single LSP, 12.5%
   affected 2 LSPs, 3.4% affected 3 LSPs, and 0.3% affected 4 LSPs.

   When wasted bandwidth was the considered criterion (alpha=beta=0,
   gamma=1), 7% of the LSP setup requests were immediately rejected.
   10% of the LSPs were preempted, and 78% of those were successfully
   rerouted.  Cascading did not exceed level 2.  84.1% of the
   preemptions affected only a single LSP, 14.1% affected 2 LSPs, and
   1.8% affected 3 LSPs.
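For concreteness, the request generator implied by the distributions listed above can be sketched as follows; the function and variable names are illustrative, not taken from the simulator used for these results.

```python
import random

# Sketch of an LSP request generator matching the stated distributions:
# bandwidth uniform over {2,4,6,8,10} Mbps; priority 7 with prob. 0.5,
# 6 with prob. 0.2, and 1..5 with prob. 0.06 each; Poisson arrivals
# with mean interarrival 2 s; holding time exponential with mean 500 s.
def generate_request(rng: random.Random):
    bw = rng.choice([2, 4, 6, 8, 10])
    u = rng.random()
    if u < 0.50:
        prio = 7
    elif u < 0.70:
        prio = 6
    else:
        prio = rng.randint(1, 5)
    interarrival = rng.expovariate(1 / 2.0)      # mean 2 seconds
    holding = rng.expovariate(1 / 500.0)         # mean 500 seconds
    return bw, prio, interarrival, holding
```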
   Combining the parameters to build more complex policies is also
   being studied.  When priority and the number of LSPs are considered
   (alpha=beta=1, gamma=0), the same results as in the first case
   (alpha=1, beta=gamma=0) were obtained.  If priority and wasted
   bandwidth were considered the most important (alpha=1, beta=0,
   gamma=1), 8% of the LSP setup requests were immediately rejected.
   9% of the LSPs were preempted, and 78% of those were successfully
   rerouted.  Cascading did not exceed level 2.  84.1% of the
   preemptions affected only a single LSP, 14.0% affected 2 LSPs, 1.6%
   affected 3 LSPs, and 0.3% affected 5 LSPs.

   Finally, when the number of LSPs and the wasted bandwidth were the
   criteria considered (alpha=0, beta=gamma=1), 7% of the LSP setup
   requests were immediately rejected.  10% of the LSPs were
   preempted, and 78% of those were successfully rerouted.  Cascading
   did not exceed level 2.  84.1% of the preemptions affected only a
   single LSP, 14.1% affected 2 LSPs, and 1.8% affected 3 LSPs.

6.2.2 Dynamic Case II

      R01        R06        R12----R15
       |          |         /      \ |
       |          |         /      \ |
       |          |         /      \ |
       |          |         /      \ |
      R03----R02----R07----R09----R13----R16
       |      |     /  |      |      |      \
       |      |     /  |      |      |      \
       |      |     /  |      |      |      \
       |      |     /  |      |      |      \
      R04----R08-----------R10----R14    |     R18
       \      |      |     /      |      |     /  |  \
       \      |      |     /      |      |     /  |  \
       \      |      |     /      |      |     /  |  \
       \      |      |     /      |      |     /  |  \
      R05    R11----R17----R19          R22
       |      |      |      \
       |      |      |      \
       |      |      |      \
       |      |      |      \
      R20----R23----R25----R24----R28
       |  \    |  \    |      |      |
       |  \    |  \    |      |      |
       |  \    |  \    |      |      |
       |  \    |  \    |      |      |
      R21    R26    R29----R30----R33----R36
       |      |      \  \     |     /     |
       |      |      \  \     |     /     |
       |      |      \  \     |     /     |
       |      |      \  \     |     /     |
      R27           R32    R31----R34----R39
       |      \           /      |
       |      \           /      |
       |      \           /      |
       |      \           /      |
      R35----R37----R38----R40

      Fig. 3. Network topology for dynamic case 2.

   A larger topology was considered, as shown in Figure 3.  The same
   link capacity and dynamic distribution of LSPs as in dynamic case I
   were used.  The simulation results are shown in Table 2.
   The following different policies were studied:

   * PB : Priority is the most important factor; among the
     lowest-priority LSPs, those that minimize the waste of bandwidth
     are selected.

   * BN : Bandwidth is the most important factor, and the minimum
     number of LSPs is selected to fit the new request.

   * RND: LSPs are randomly selected for preemption.

   * P  : Only LSP priority is considered in the selection.

   * PN : Sorts the preemptable LSPs in increasing priority
     (decreasing numerical value) and selects the minimum number of
     LSPs.

   ---------------------------------------------------------------------
                       |Heuristic|        Other Policies       |   No
                       |   PB    |  BN  | RND  |  P   |  PN  |Preemption
   ---------------------------------------------------------------------
   Total               |  7867   | 7867 | 7867 | 7867 | 7867 |  7867
   Accepted            |  87.2%  |87.2% |87.8% |87.8% |88.4% |  82.9%
   Rejected            |  12.8%  |12.8% |12.2% |12.2% |11.6% |  17.1%
   ---------------------------------------------------------------------
   Preempted           |  12.0%  |15.0% |19.5% |14.2% |11.2% |
   Rerouted            |  95.4%  |96.3% |96.9% |94.9% |94.0% |
   Destroyed           |   4.6%  | 3.7% | 3.1% | 5.1% | 6.0% |
   ---------------------------------------------------------------------
   Maximum
   Cascading Level     |    1    |  2   |  5   |  1   |  1   |
   ---------------------------------------------------------------------
   Number of Preempted
     Average           |  1.14   | 1.8  | 1.3  | 1.45 | 1.2  |
     Worst Case        |    4    |  3   |  4   |  5   |  5   |
   ---------------------------------------------------------------------
   Wasted Bandwidth
     Average (Mbps)    |  0.82   | 0.76 | 2.89 | 2.9  | 3.6  |
     Worst Case (Mbps) |    8    |  8   |  8   |  8   |  8   |
   ---------------------------------------------------------------------
   Preempted Priority
     Average           |   6.9   | 6.61 | 6.48 | 6.9  | 6.92 |
     Worst Case        |    5    |  2   |  2   |  5   |  5   |
   ---------------------------------------------------------------------

              Table 2: Results for dynamic case II.
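One possible reading of policy PN is sketched below: preempt the lowest-priority LSPs first and, within a priority level, the largest LSPs first, so that fewer LSPs are needed. This interpretation of the tie-breaking is an assumption on our part; the draft does not spell it out.

```python
# Possible reading of policy PN: among preemptable LSPs, prefer the
# lowest-priority ones (highest numerical value), and within one
# priority level take the largest LSPs first so that fewer LSPs are
# preempted.  This interpretation is an assumption.
def pn_select(lsps, r):
    """lsps: (name, bandwidth, holding priority); r: bandwidth to free."""
    ranked = sorted(lsps, key=lambda t: (-t[2], -t[1]))
    chosen, freed = [], 0
    for name, b_l, _p_l in ranked:
        if freed >= r:
            break
        chosen.append(name)
        freed += b_l
    return chosen
```

Such a rule is consistent with the behavior reported for PN below: few preempted LSPs, but larger ones, hence more wasted bandwidth.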
   When the preemption heuristic PB was in use, fewer LSPs were
   preempted and less cascading was caused.  This was expected,
   because the algorithm preempts LSPs with low priority, which do not
   propagate the preemption to further levels.  Consequently, more of
   the preempted LSPs were destroyed (could not be rerouted).

   When BN was used, cascading was higher than in the first case, due
   to the fact that LSPs with higher priority could be preempted.  The
   number of preempted LSPs was also higher as a consequence.
   However, the wasted bandwidth showed the best result.

   With the RND (random) policy, the cascading effect was much
   stronger due to the preemption of LSPs with higher priority (worst
   case 2), which could in turn preempt other LSPs.  The wasted
   bandwidth was also much higher.

   Compared to PB, policy P resulted in a higher number of preempted
   LSPs and also a higher rate of LSPs destroyed due to preemption.
   The cascading level was the same, but the wasted bandwidth was much
   higher.

   Policy PN led to a smaller number of preempted LSPs.  This policy
   preempts larger LSPs (higher wasted bandwidth) with low priority.
   It therefore makes room for more connections to be set up
   (improving the acceptance rate) and minimizes the number of
   preempted LSPs.

   The last column shows the results when preemption is not allowed:
   fewer LSPs are set up, and consequently more LSPs are rejected.

   The simulation results show that when preemption is based on
   priority, cascading is not critical, since the preempted LSPs are
   not able to propagate preemption much further.  When bandwidth is
   considered, fewer LSPs are preempted on each link and the wasted
   bandwidth is low.  The heuristic PB combines all these features,
   yielding the best overall results.

9. Security Considerations

   The practice described in this draft does not raise specific
   security issues beyond those of existing TE.

10. Acknowledgment

   We would like to acknowledge input and helpful comments from
   Jean-Philippe Vasseur and Francois Le Faucheur (Cisco Systems,
   Inc.), George Uhl (Swales Aerospace), and Jeff Smith (NASA
   Goddard).

References

   [DSTE-REQ]   F. Le Faucheur and W. Lai, "Requirements for Support
                of Diff-Serv-aware MPLS Traffic Engineering,"
                draft-ietf-tewg-diff-te-reqts-06.txt, September 2002.

   [DSTE-PROTO] F. Le Faucheur, "Protocol Extensions for Support of
                Diff-Serv-aware MPLS Traffic Engineering,"
                draft-ietf-tewg-diff-te-proto-02.txt, October 2002.

   [BC-RD]      F. Le Faucheur, "Russian Dolls Bandwidth Constraints
                Model for Diff-Serv-aware MPLS Traffic Engineering,"
                draft-ietf-tewg-diff-te-russian-00.txt, October 2002.

   [BC-MAM]     W. Lai, "Bandwidth Constraint Models for
                Diffserv-aware MPLS Traffic Engineering,"
                draft-wlai-tewg-bcmodel-00.txt, June 2002.

   [BC-MAR]     J. Ash, "Max Allocation with Reservation BW Constraint
                Model for MPLS/DiffServ TE,"
                draft-ash-mpls-dste-bcmodel-max-alloc-resv-00.txt,
                November 2002.

   [SOFT-PREPT] M. R. Meyer, D. Maddux, and J.-P. Vasseur, "MPLS
                Traffic Engineering Soft Preemption,"
                draft-meyer-mpls-soft-preemption-00.txt, March 2003.

   [TEWG-FW]    D. Awduche et al., "Overview and Principles of
                Internet Traffic Engineering," RFC 3272, May 2002.

   [TE-REQ]     D. Awduche et al., "Requirements for Traffic
                Engineering over MPLS," RFC 2702, September 1999.

   [DEC-PREP]   M. Peyravian and A. D. Kshemkalyani, "Decentralized
                Network Connection Preemption Algorithms," Computer
                Networks and ISDN Systems, vol. 30, no. 11,
                pp. 1029-1043, June 1998.

   [ATM-PREP]   S. Poretsky and T. Gannon, "An Algorithm for
                Connection Precedence and Preemption in Asynchronous
                Transfer Mode (ATM) Networks," Proceedings of IEEE
                ICC 1998.

   [PREEMPTION] J. C. de Oliveira et al., "A New Preemption Policy for
                DiffServ-Aware Traffic Engineering to Minimize
                Rerouting," Proceedings of IEEE INFOCOM 2002.
Authors' Addresses

   Jaudelice C. de Oliveira
   Broadband & Wireless Networking Laboratory
   Georgia Institute of Technology
   250 14th St. NW, Room 556
   Atlanta, GA 30318, USA
   Email: jau@ece.gatech.edu

   Leonardo C. Chen
   Broadband & Wireless Networking Laboratory
   Georgia Institute of Technology
   250 14th St. NW, Room 556
   Atlanta, GA 30318, USA
   Email: leochen@ece.gatech.edu

   Caterina Scoglio
   Broadband & Wireless Networking Laboratory
   Georgia Institute of Technology
   250 14th St. NW, Room 556
   Atlanta, GA 30318, USA
   Email: caterina@ece.gatech.edu

   Ian F. Akyildiz
   Broadband & Wireless Networking Laboratory
   Georgia Institute of Technology
   250 14th St. NW, Room 556
   Atlanta, GA 30318, USA
   Email: ian@ece.gatech.edu