Abdelwahed, S. and Jia Bai and Rong Su and Kandasamy, N. On the application of predictive control techniques for adaptive performance management of computing systems 2009 IEEE Transactions on Network and Service Management
Vol. 6(4), pp. 212-225
Keywords: self-management of computing systems, autonomic computing, model-based management techniques, power management of computing systems. INSPEC: adaptive control, control engineering computing, fault tolerant computing, predictive control, real-time systems
Abstract: This paper addresses adaptive performance management of real-time computing systems. We consider a generic model-based predictive control approach that can be applied to a variety of computing applications in which the system performance must be tuned using a finite set of control inputs. The paper focuses on several key aspects affecting the application of this control technique to practical systems. In particular, we present techniques to enhance the speed of the control algorithm for real-time systems. Next we study the feasibility of the predictive control policy for a given system model and performance specification under uncertain operating conditions. The paper then introduces several measures to characterize the performance of the controller, and presents a generic tool for system modeling and automatic control synthesis. Finally, we present a case study involving a real-time computing system to demonstrate the applicability of the predictive control framework.
BibTeX:

@article{5374030,

  author = {Abdelwahed, S. and Jia Bai and Rong Su and Kandasamy, N.},

  title = {On the application of predictive control techniques for adaptive performance management of computing systems},

  journal = {IEEE Transactions on Network and Service Management},

  year = {2009},

  volume = {6},

  number = {4},

  pages = {212--225},

  doi = {10.1109/TNSM.2009.04.090402}

}

Aceto, G. and Botta, A. and Pescape, A. and Westphal, C. Efficient Storage and Processing of High-Volume Network Monitoring Data 2013 IEEE Transactions on Network and Service Management
Vol. 10(2), pp. 162-175
Keywords: network monitoring; monitoring data compression; network measurements; traffic analysis
Abstract: Monitoring modern networks involves storing and transferring huge amounts of data. To cope with this problem, in this paper we propose a technique that transforms the measurement data into a representation format meeting two main objectives at the same time. Firstly, it allows a number of operations to be performed directly on the transformed data with a controlled loss of accuracy, thanks to the mathematical framework it is based on. Secondly, the new representation has a small memory footprint, reducing the space needed for data storage and the time needed for data transfer. To validate our technique, we analyze its performance in terms of accuracy and memory footprint. The results show that the transformed data closely approximates the original data (within 5% relative error) while achieving a compression ratio of 20%; the storage footprint can also be gradually reduced towards that of state-of-the-art compression tools, such as bzip2, if higher approximation is allowed. Finally, a sensitivity analysis shows that the technique allows trading off accuracy across different input fields to accommodate specific application needs, while a scalability analysis indicates that the technique scales with input sizes spanning up to three orders of magnitude.
BibTeX:

@article{6415957,

  author = {Aceto, G. and Botta, A. and Pescape, A. and Westphal, C.},

  title = {Efficient Storage and Processing of High-Volume Network Monitoring Data},

  journal = {IEEE Transactions on Network and Service Management},

  year = {2013},

  volume = {10},

  number = {2},

  pages = {162--175},

  doi = {10.1109/TNSM.2013.011713.110215}

}

Adam, C. and Stadler, R. Service Middleware for Self-Managing Large-Scale Systems 2007 IEEE Transactions on Network and Service Management
Vol. 4(3), pp. 50-64
Keywords: control systems, distributed algorithms, large-scale systems, middleware, peer to peer computing, resource management, routing, scalability, size control, system testing. INSPEC: client-server systems, computational complexity, distributed algorithms, middleware, peer-to-peer computing, resource allocation
Abstract: Resource management poses particular challenges in large-scale systems, such as server clusters that simultaneously process requests from a large number of clients. A resource management scheme for such systems must scale both in the number of cluster nodes and in the number of applications the cluster supports. Current solutions do not exhibit both of these properties at the same time. Many are centralized, which limits their scalability in terms of the number of nodes, or they are decentralized but rely on replicated directories, which also reduces their ability to scale. In this paper, we propose novel solutions to request routing and application placement, two key mechanisms in a scalable resource management scheme. Our solution to request routing is based on selective update propagation, which ensures that the control load on a cluster node is independent of the system size. Application placement is approached in a decentralized manner, using a distributed algorithm that maximizes resource utilization and allows for service differentiation under overload. The paper demonstrates how the above solutions can be integrated into an overall design for a peer-to-peer management middleware that exhibits properties of self-organization. Through complexity analysis and simulation, we show to what extent the system design is scalable. We have built a prototype using accepted technologies and have evaluated it using a standard benchmark. The testbed measurements show that the implementation, within the parameter range tested, operates efficiently, quickly adapts to a changing environment, and allows for effective service differentiation by a system administrator.
BibTeX:

@article{4489641,

  author = {Adam, C. and Stadler, R.},

  title = {Service Middleware for Self-Managing Large-Scale Systems},

  journal = {IEEE Transactions on Network and Service Management},

  year = {2007},

  volume = {4},

  number = {3},

  pages = {50--64},

  doi = {10.1109/TNSM.2007.021103}

}

Adam, Constantin and Stadler, Rolf A middleware design for large-scale clusters offering multiple services 2006 IEEE Transactions on Network and Service Management
Vol. 3(1), pp. 1-12
Keywords: autonomic computing, decentralized control, quality of service, self-organization, web services
Abstract: We present a decentralized design that dynamically allocates resources to multiple services inside a global server cluster. The design supports QoS objectives (maximum response time and maximum loss rate) for each service. A system administrator can modify policies that assign relative importance to services and, in this way, control the resource allocation process. Distinctive features of our design are the use of an epidemic protocol to disseminate state and control information, as well as the decentralized evaluation of utility functions to control resource partitioning among services. Simulation results show that the system operates both effectively and efficiently; it meets the QoS objectives and dynamically adapts to load changes and to failures. In case of overload, the service quality degrades gracefully, controlled by the cluster policies.
BibTeX:

@article{4798302,

  author = {Adam, Constantin and Stadler, Rolf},

  title = {A middleware design for large-scale clusters offering multiple services},

  journal = {IEEE Transactions on Network and Service Management},

  year = {2006},

  volume = {3},

  number = {1},

  pages = {1--12},

  doi = {10.1109/TNSM.2006.4798302}

}

Agarwal, M. and Gautam Kar and Mahindru, R. and Neogi, A. and Sailer, A. Performance problem prediction in transaction-based e-business systems 2008 IEEE Transactions on Network and Service Management
Vol. 5(1), pp. 1-10
Keywords: costs, data mining, delay, monitoring, network servers, performance analysis, resource management, runtime environment, throughput, web server. INSPEC: business data processing, electronic data interchange
Abstract: Key areas in managing e-commerce systems are problem prediction, root cause analysis, and automated problem remediation. Anticipating SLO violations by proactive problem determination (PD) is particularly important since it can significantly lower the business impact of application performance problems. The main contribution of this paper is to investigate proactive PD based on two important concepts: dependency graphs and dynamic runtime performance characteristics of resources that comprise an I/T environment. The authors show how one can calculate and use the contribution of all supporting resources for a transaction to the end-to-end SLO for that transaction. Higher order moments of these components' contributions are further tracked for proactive alerting. An important aspect of this process is the classification of user transactions based on the profile of their resource usage, enabling one to set appropriate thresholds for the different classes only. Combined with the complete or semi-complete dependency information, our approach confines the scope of potential root causes to a small set of components, thus enabling efficient performance problem anticipation and quick remediation.
BibTeX:

@article{4570775,

  author = {Agarwal, M. and Gautam Kar and Mahindru, R. and Neogi, A. and Sailer, A.},

  title = {Performance problem prediction in transaction-based e-business systems},

  journal = {IEEE Transactions on Network and Service Management},

  year = {2008},

  volume = {5},

  number = {1},

  pages = {1--10},

  doi = {10.1109/TNSM.2008.080101}

}

Agarwal, Manoj K. and Gupta, Manish and Kar, Gautam and Neogi, Anindya and Sailer, Anca Mining activity data for dynamic dependency discovery in e-business systems 2004 IEEE Transactions on Network and Service Management
Vol. 1(2), pp. 49-58
Keywords: resource management, computer network management, dependency graph, event correlation, monitoring
Abstract: The growing popularity of e-business has stimulated web sites to evolve from static content servers to complex multi-tier systems built from heterogeneous server platforms. E-businesses now spend a large fraction of their IT budgets maintaining, troubleshooting, and optimizing these web sites. It has been shown that such system management activities may be simplified or automated to various extents if a dynamic dependency graph of the system were available. Currently, all known solutions to the dynamic dependency graph extraction problem are intrusive in nature, i.e., they require modifications at the application or middleware level. In this paper, we describe non-intrusive techniques based on data mining, which process existing monitoring data generated by server platforms to automatically extract the system component dependency graphs in multi-tier e-business platforms, without any additional application or system modification.
BibTeX:

@article{4798290,

  author = {Agarwal, Manoj K. and Gupta, Manish and Kar, Gautam and Neogi, Anindya and Sailer, Anca},

  title = {Mining activity data for dynamic dependency discovery in e-business systems},

  journal = {IEEE Transactions on Network and Service Management},

  year = {2004},

  volume = {1},

  number = {2},

  pages = {49--58},

  doi = {10.1109/TNSM.2004.4798290}

}

Aib, I. and Boutaba, R. On Leveraging Policy-Based Management for Maximizing Business Profit 2007 IEEE Transactions on Network and Service Management
Vol. 4(3), pp. 25-39
Keywords: cost function, error correction, heart, laser sintering, phase detection, quality management, quality of service, runtime, scheduling algorithm, testing. INSPEC: approximation theory, optimisation, profitability, scheduling
Abstract: This paper presents a systematic approach to business and policy driven refinement. It also discusses an implementation of an application-hosting service level agreement (SLA) use case. We make use of a simple application hosting SLA template, for which we derive a low-level policy-based service level specification (SLS). The SLS policy set is then analyzed for static consistency and runtime efficiency. The Static Analysis phase involves several consistency tests introduced to detect and correct errors in the original SLS. The Dynamic analysis phase considers the runtime dynamics of policy execution as part of the policy refinement process. This latter phase aims at optimizing the business profit of the service provider. Through mathematical approximation, we derive three policy scheduling algorithms. The algorithms are then implemented and compared against random and first come first served (FCFS) scheduling. This paper shows, in addition to the systematic refinement process, the importance of analyzing the dynamics of a policy management solution before it is actually implemented. The simulations have been performed using the VS Policy Simulator tool.
BibTeX:

@article{4489642,

  author = {Aib, I. and Boutaba, R.},

  title = {On Leveraging Policy-Based Management for Maximizing Business Profit},

  journal = {IEEE Transactions on Network and Service Management},

  year = {2007},

  volume = {4},

  number = {3},

  pages = {25--39},

  doi = {10.1109/TNSM.2007.021104}

}

Al-Hamadi, H. and Chen, Ing-Ray Adaptive Network Defense Management for Countering Smart Attack and Selective Capture in Wireless Sensor Networks 2015 IEEE Transactions on Network and Service Management
Vol. 12(3), pp. 451-466
Keywords: adaptive systems; analytical models; intrusion detection; redundancy; routing; tin; wireless sensor networks; MTTF; intrusion tolerance; multipath routing; selective capture; smart attack
Abstract: We propose and analyze adaptive network defense management for countering smart attack and selective capture, which aim to cripple the basic data delivery functionality of a base station based wireless sensor network. With selective capture, the adversaries strategically capture sensors and turn them into inside attackers. With smart attack, an inside attacker is capable of performing random, opportunistic and insidious attacks to evade detection and maximize their chance of success. We develop a model-based analysis methodology with simulation validation to identify the best defense protocol settings under which the sensor network lifetime is maximized against selective capture and smart attack.
BibTeX:

@article{7117417,

  author = {Al-Hamadi, H. and Chen, Ing-Ray},

  title = {Adaptive Network Defense Management for Countering Smart Attack and Selective Capture in Wireless Sensor Networks},

  journal = {IEEE Transactions on Network and Service Management},

  year = {2015},

  volume = {12},

  number = {3},

  pages = {451--466},

  doi = {10.1109/TNSM.2015.2441059}

}

Al-Hamadi, Hamid and Chen, Ing-Ray Redundancy Management of Multipath Routing for Intrusion Tolerance in Heterogeneous Wireless Sensor Networks 2013 IEEE Transactions on Network and Service Management
Vol. 10(2), pp. 189-203
Keywords: heterogeneous wireless sensor networks; energy conservation; intrusion detection; multipath routing; reliability; security
Abstract: In this paper we propose redundancy management of heterogeneous wireless sensor networks (HWSNs), utilizing multipath routing to answer user queries in the presence of unreliable and malicious nodes. The key concept of our redundancy management is to exploit the tradeoff between energy consumption vs. the gain in reliability, timeliness, and security to maximize the system useful lifetime. We formulate the tradeoff as an optimization problem for dynamically determining the best redundancy level to apply to multipath routing for intrusion tolerance so that the query response success probability is maximized while prolonging the useful lifetime. Furthermore, we consider this optimization problem for the case in which a voting-based distributed intrusion detection algorithm is applied to detect and evict malicious nodes in a HWSN. We develop a novel probability model to analyze the best redundancy level in terms of path redundancy and source redundancy, as well as the best intrusion detection settings in terms of the number of voters and the intrusion invocation interval under which the lifetime of a HWSN is maximized. We then apply the analysis results obtained to the design of a dynamic redundancy management algorithm to identify and apply the best design parameter settings at runtime in response to environment changes, to maximize the HWSN lifetime.
BibTeX:

@article{6514999,

  author = {Al-Hamadi, Hamid and Chen, Ing-Ray},

  title = {Redundancy Management of Multipath Routing for Intrusion Tolerance in Heterogeneous Wireless Sensor Networks},

  journal = {IEEE Transactions on Network and Service Management},

  year = {2013},

  volume = {10},

  number = {2},

  pages = {189--203},

  doi = {10.1109/TNSM.2013.043013.120282}

}

Al-Shaer, E.S. and Hamed, H.H. Modeling and Management of Firewall Policies 2004 IEEE Transactions on Network and Service Management
Vol. 1(1), pp. 2-10
Keywords: defense industry, high level languages, home automation, IP networks, information filtering, information filters, matched filters, national security, technology management, telecommunication traffic
Abstract: Firewalls are core elements in network security. However, managing firewall rules, especially for enterprise networks, has become complex and error-prone. Firewall filtering rules have to be carefully written and organized in order to correctly implement the security policy. In addition, inserting or modifying a filtering rule requires thorough analysis of the relationship between this rule and other rules in order to determine the proper order of this rule and commit the updates. In this paper we present a set of techniques and algorithms that provide automatic discovery of firewall policy anomalies to reveal rule conflicts and potential problems in legacy firewalls, and anomaly-free policy editing for rule insertion, removal, and modification. This is implemented in a user-friendly tool called "Firewall Policy Advisor." The Firewall Policy Advisor significantly simplifies the management of any generic firewall policy written as filtering rules, while minimizing network vulnerability due to firewall rule misconfiguration.
BibTeX:

@article{4623689,

  author = {Al-Shaer, E.S. and Hamed, H.H.},

  title = {Modeling and Management of Firewall Policies},

  journal = {IEEE Transactions on Network and Service Management},

  year = {2004},

  volume = {1},

  number = {1},

  pages = {2--10},

  doi = {10.1109/TNSM.2004.4623689}

}

Alshaer, H. and Elmirghani, J. Multilayer Dynamic Traffic Grooming with Constrained Differentiated Resilience in IP/MPLS-over-WDM Networks 2012 IEEE Transactions on Network and Service Management
Vol. 9(1), pp. 60-72
Keywords: constraints-based differentiated resilience, IP/MPLS-over-WDM networks, differentiated traffic grooming, dynamic connections provisioning, heuristic scheme, multilayer scheme
Abstract: Our research study in this paper focuses on supporting differentiated resilience services in IP/MPLS-over-WDM networks with minimum resources while guaranteeing the quality of the provisioned services (QoS) for subscribers. This has been achieved through our multilayer scheme, which supports dynamic traffic grooming associated with constrained differentiated resilience. This scheme incorporates an intelligent adaptive heuristic approach and other traffic management mechanisms that solve multiple challenging problems: differentiated multilayer dynamic traffic grooming based on connection granularity and priority, and connection admission control and wavelength assignment subject to multiple QoS and resilience constraints. We have implemented this scheme and evaluated its performance through simulation experiments. The results demonstrate that our multilayer scheme can enable a network operator to significantly improve the utilization of resources in WDM networks as well as reduce the connection and bandwidth blocking probabilities of all supported traffic classes while guaranteeing their requirements in terms of QoS, resilience and optical physical impairments.
BibTeX:

@article{6092406,

  author = {Alshaer, H. and Elmirghani, J.},

  title = {Multilayer Dynamic Traffic Grooming with Constrained Differentiated Resilience in IP/MPLS-over-WDM Networks},

  journal = {IEEE Transactions on Network and Service Management},

  year = {2012},

  volume = {9},

  number = {1},

  pages = {60--72},

  doi = {10.1109/TNSM.2011.110911.110115}

}

Amannejad, Y. and Krishnamurthy, D. and Far, B. Managing Performance Interference in Cloud-Based Web Services 2015 IEEE Transactions on Network and Service Management
Vol. 12(3), pp. 320-333
Keywords: cloud computing; estimation; interference; measurement; monitoring; time factors; machine learning; software performance; virtualization
Abstract: Web services have increasingly begun to rely on public cloud platforms. The virtualization technologies employed by public clouds can, however, trigger contention between virtual machines (VMs) for shared physical machine resources, thereby leading to performance problems for Web services. Past studies have exploited physical-machine-level performance metrics such as clock cycles per instruction to detect such platform-induced performance interference. Unfortunately, public cloud customers do not have access to such metrics. They can only typically access VM-level metrics and application-level metrics such as transaction response times, and such metrics alone are often not useful for detecting inter-VM contention. This poses a difficult challenge to Web service operators for detecting and mitigating platform-induced performance interference issues inside the cloud. We propose a machine-learning-based interference detection technique to address this problem. The technique applies collaborative filtering to predict whether a given transaction being processed by a Web service is adversely suffering from interference. The results can be then used by a management controller to trigger remedial actions, e.g., reporting problems to the system manager or switching cloud providers. Results using a realistic Web benchmark show that the approach is effective. The most effective variant of our approach is able to detect about 96% of performance interference events with almost no false alarms. Furthermore, we show that a load redistribution technique that exploits the information from our detection technique is able to more effectively mitigate the interference than techniques that are interference agnostic.
BibTeX:

@article{7156122,

  author = {Amannejad, Y. and Krishnamurthy, D. and Far, B.},

  title = {Managing Performance Interference in Cloud-Based Web Services},

  journal = {IEEE Transactions on Network and Service Management},

  year = {2015},

  volume = {12},

  number = {3},

  pages = {320--333},

  doi = {10.1109/TNSM.2015.2456172}

}

Amini, R. and Dziong, Z. An Economic Framework for Routing and Channel Allocation in Cognitive Wireless Mesh Networks 2013 IEEE Transactions on Network and Service Management
Vol. 11(2), pp. 188-203
Keywords: cognitive radio; Markov decision process; channel allocation; channel reuse; economic model; routing; wireless mesh network
BibTeX:

@article{6679373,

  author = {Amini, R. and Dziong, Z.},

  title = {An Economic Framework for Routing and Channel Allocation in Cognitive Wireless Mesh Networks},

  journal = {IEEE Transactions on Network and Service Management},

  year = {2013},

  volume = {11},

  number = {2},

  pages = {188--203},

  doi = {10.1109/TNSM.2013.120413.120533}

}

Amokrane, A. and Langar, R. and Zhani, M.F. and Boutaba, R. and Pujolle, G. Greenslater: On Satisfying Green SLAs in Distributed Clouds 2015 IEEE Transactions on Network and Service Management
Vol. 12(3), pp. 363-376
Keywords: bandwidth; carbon; carbon dioxide; distributed databases; energy consumption; green products; resource management; distributed cloud; energy efficiency; green SLA; virtual data center
Abstract: With the massive adoption of cloud-based services, high energy consumption and carbon footprint of cloud infrastructures have become a major concern in the IT industry. Consequently, many governments and IT advisory organizations have urged IT stakeholders (i.e., cloud provider and cloud customers) to embrace green IT and regularly monitor and report their carbon emissions and put in place efficient strategies and techniques to control the environmental impact of their infrastructures and/or applications. Motivated by this growing trend, we investigate, in this paper, how cloud providers can meet Service Level Agreements (SLAs) with green requirements. In such SLAs, a cloud customer requires from cloud providers that carbon emissions generated by the leased resources should not exceed a fixed bound. We hence propose a resource management framework allowing cloud providers to provision resources in the form of Virtual Data Centers (VDCs) (i.e., a set of virtual machines and virtual links with guaranteed bandwidth) across a geo-distributed infrastructure with the aim of reducing operational costs and green SLA violation penalties. Extensive simulations show that the proposed solution maximizes the cloud provider's profit and minimizes the violation of green SLAs.
BibTeX:

@article{7118746,

  author = {Amokrane, A. and Langar, R. and Zhani, M.F. and Boutaba, R. and Pujolle, G.},

  title = {Greenslater: On Satisfying Green SLAs in Distributed Clouds},

  journal = {IEEE Transactions on Network and Service Management},

  year = {2015},

  volume = {12},

  number = {3},

  pages = {363--376},

  doi = {10.1109/TNSM.2015.2440423}

}

Ariba, Y. and Gouaisbaut, F. and Labit, Y. Feedback control for router management and TCP/IP network stability 2009 IEEE Transactions on Network and Service Management
Vol. 6(4), pp. 255-266
Keywords: active queue management, TCP network model, congestion control, control theory, multiple time delay system, stability. INSPEC: computer network management, delays, quality of service, queueing theory, stability, state feedback, telecommunication congestion control, telecommunication network topology, time-varying systems, transport protocols
Abstract: Several works have established links between congestion control in communication networks and feedback control theory. In this paper, following this paradigm, we propose the design of an AQM (active queue management) mechanism ensuring the stability of the congestion phenomenon at a router. To this end, a modified fluid flow model of TCP (transmission control protocol) that takes into account all delays of the topology is introduced. Then, appropriate tools from control theory are used to address the stability issue and to cope with the time-varying nature of the multiple delays. More precisely, the design of the AQM is formulated as a structured state feedback for multiple time delay systems through the quadratic separation framework. The objective of this mechanism is to ensure the regulation of the queue size of the congested router, as well as of the flow rates, to a prescribed level. Furthermore, the proposed methodology makes it possible to set arbitrarily the QoS (quality of service) of the communications flowing through the controlled router. Finally, a numerical example and some simulations support the presented theory.
BibTeX:

@article{5374033,

  author = {Ariba, Y. and Gouaisbaut, F. and Labit, Y.},

  title = {Feedback control for router management and TCP/IP network stability},

  journal = {IEEE Transactions on Network and Service Management},

  year = {2009},

  volume = {6},

  number = {4},

  pages = {255--266},

  doi = {10.1109/TNSM.2009.04.090405}

}

Bacon, Jean and Eyers, David and Pasquier, Thomas F.J.-M. and Singh, Jatinder and Papagiannis, Ioannis and Pietzuch, Peter Information Flow Control for Secure Cloud Computing 2014 IEEE Transactions on Network and Service Management
Vol. 11(1), pp. 76-89
Keywords: access control; cloud computing; data models; runtime; software as a service; cloud; data security; information flow; information flow control (IFC)
Abstract: Security concerns are widely seen as an obstacle to the adoption of cloud computing solutions. Information Flow Control (IFC) is a well understood Mandatory Access Control methodology. The earliest IFC models targeted security in a centralised environment, but decentralised forms of IFC have been designed and implemented, often within academic research projects. As a result, there is potential for decentralised IFC to achieve better cloud security than is available today. In this paper we describe the properties of cloud computing (Platform-as-a-Service clouds in particular) and review a range of IFC models and implementations to identify opportunities for using IFC within a cloud computing context. Since IFC security is linked to the data that it protects, both tenants and providers of cloud services can agree on security policy, in a manner that does not require them to understand and rely on the particulars of the cloud software stack in order to effect enforcement.
BibTeX:

@article{6701293,

  author = {Bacon, Jean and Eyers, David and Pasquier, Thomas F.J.-M. and Singh, Jatinder and Papagiannis, Ioannis and Pietzuch, Peter},

  title = {Information Flow Control for Secure Cloud Computing},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2014},

  volume = {11},

  number = {1},

  pages = {76-89},

  doi = {http://dx.doi.org/10.1109/TNSM.2013.122313.130423}

}

Badonnel, R. and State, R. and Festor, O. A Probabilistic Approach for Managing Mobile Ad-Hoc Networks 2007 Network and Service Management, IEEE Transactions on
Vol. 4(1), pp. 39-50 
ad hoc networks , disaster management , fault detection , information management , monitoring , nominations and elections , organizing , quality management , quality of service ad hoc networks , mobile computing , mobile radio , telecommunication network management DOI  
Abstract: A pure management approach where all the nodes are managed at any time is too strict for mobile ad-hoc networks. Instead of addressing the management of the whole network, we propose a probabilistic scheme where only a subset of nodes is managed in order to provide lightweight and efficient management. These nodes are determined based on their network behavior to favor subsets of well connected and network participating nodes. With respect to such a selective management scheme, we derive probabilistic guarantees on the percentage of nodes to be managed. Our contribution is centered on a distributed self-organizing management algorithm at the application layer, on its efficient deployment into a management architecture, and on a comprehensive simulation study. We will show how to organize the management plane by extracting spatio-temporal components and by selecting manager nodes with several election mechanisms based on degree centrality, eigenvector centrality and the K-means paradigm.
BibTeX:

@article{4275033,

  author = {Badonnel, R. and State, R. and Festor, O.},

  title = {A Probabilistic Approach for Managing Mobile Ad-Hoc Networks},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2007},

  volume = {4},

  number = {1},

  pages = {39-50},

  doi = {http://dx.doi.org/10.1109/TNSM.2007.030104}

}

Baliosian, J. and Stadler, R. Distributed auto-configuration of neighboring cell graphs in radio access networks 2010 Network and Service Management, IEEE Transactions on
Vol. 7(3), pp. 145-157 
self-management, distributed management , auto-configuration, radio access networks, umts 3g mobile communication , mobility management (mobile radio) , radio access networks , radio links , routing protocols , telecommunication network planning , telecommunication network reliability DOI  
Abstract: In order to execute a handover process in a GSM or UMTS Radio Access Network, each cell has a list of neighbors to which such handovers may be made. Today, these lists are statically configured during network planning, which does not allow for dynamic adaptation of the network to changes and unexpected events such as a cell failure. This paper advocates an autonomic, decentralized approach to dynamically configure neighboring cell lists. The main contribution of this work is a novel protocol, called DOC, which detects and continuously tracks the coverage overlaps among cells. The protocol executes on a spanning tree where the nodes are radio base stations and the links represent communication channels. Over this tree, nodes periodically exchange information about terminals that are in their respective coverage area. Bloom filters are used for efficient representations of terminal sets and efficient set operations. The protocol aggregates Bloom filters to reduce the communication overhead and also for routing messages along the tree. Using simulation, we study the system in steady state, when a base station is added or a base station fails, and also during the initialization phase where the system self-configures.
BibTeX:

@article{5560570,

  author = {Baliosian, J. and Stadler, R.},

  title = {Distributed auto-configuration of neighboring cell graphs in radio access networks},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2010},

  volume = {7},

  number = {3},

  pages = {145-157},

  doi = {http://dx.doi.org/10.1109/TNSM.2010.1009.I9P0330}

}

Balon, S. and Lepropre, J. and Delcourt, O. and Skivee, F. and Leduc, G. Traffic Engineering an Operational Network with the TOTEM Toolbox 2007 Network and Service Management, IEEE Transactions on
Vol. 4(1), pp. 51-61 
algorithm design and analysis , failure analysis , multiprotocol label switching , power engineering and energy , protocols , routing , telecommunication traffic , tellurium , testing , traffic control multiprotocol label switching , telecommunication networks , traffic engineering computing DOI  
Abstract: We explain how the TOTEM toolbox can be used to engineer an operational network. TOTEM is an open source TOolbox for Traffic Engineering Methods which covers IP-based and MPLS-based intradomain traffic engineering (TE) algorithms, but also interdomain TE. In this paper, we use the toolbox as an off-line simulator to optimise the traffic of an operational network. To help an operator choose between an IP-based or MPLS-based solution, or find the best way to load-balance a network for a given traffic, our case study compares several IP and MPLS routing algorithms, evaluates the impact of hot-potato routing on the intradomain traffic matrix, and analyses the worst-case link failure. This study reveals the power of a toolbox that federates many traffic engineering algorithms.
BibTeX:

@article{4275034,

  author = {Balon, S. and Lepropre, J. and Delcourt, O. and Skivee, F. and Leduc, G.},

  title = {Traffic Engineering an Operational Network with the TOTEM Toolbox},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2007},

  volume = {4},

  number = {1},

  pages = {51-61},

  doi = {http://dx.doi.org/10.1109/TNSM.2007.030105}

}

Bandara, Arosha K. and Lupu, Emil C. and Russo, Alessandra and Dulay, Naranker and Sloman, Morris and Flegkas, Paris and Charalambides, Marinos and Pavlou, George Policy refinement for IP differentiated services Quality of Service management 2006 Network and Service Management, IEEE Transactions on
Vol. 3(2), pp. 2-13 
goal refinement , policy refinement , refinement patterns DOI  
Abstract: Policy-based management provides the ability to dynamically re-configure DiffServ networks such that desired Quality of Service (QoS) goals are achieved. This includes network provisioning decisions, performing admission control, and adapting bandwidth allocation dynamically. QoS management aims to satisfy the Service Level Agreements (SLAs) contracted by the provider and therefore QoS policies are derived from SLA specifications and the provider's business goals. This policy refinement is usually performed manually with no means of verifying that the policies written are supported by the network devices and actually achieve the desired QoS goals. Tool support is lacking and policy refinement has rarely been addressed in the literature. This paper extends our previous approach to policy refinement and shows how to apply it to the domain of DiffServ QoS management. We make use of goal elaboration and abductive reasoning to derive strategies that will achieve a given high-level goal. By combining these strategies with events and constraints, we show how policies can be refined, and what tool support can be provided for the refinement process using examples from the QoS management domain. The approach presented here can be used in other application domains such as storage area networks or security management.
BibTeX:

@article{4798308,

  author = {Bandara, Arosha K. and Lupu, Emil C. and Russo, Alessandra and Dulay, Naranker and Sloman, Morris and Flegkas, Paris and Charalambides, Marinos and Pavlou, George},

  title = {Policy refinement for IP differentiated services Quality of Service management},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2006},

  volume = {3},

  number = {2},

  pages = {2-13},

  doi = {http://dx.doi.org/10.1109/TNSM.2006.4798308}

}

Bannazadeh, H. and Leon-Garcia, A. A Distributed Probabilistic Commitment Control Algorithm for Service-Oriented Systems 2010 Network and Service Management, IEEE Transactions on
Vol. 7(4), pp. 204-217 
admission control , finite capacity queuing networks , qos guarantee , queuing networks , service-oriented architecture distributed algorithms , quality of service , queueing theory , service-oriented architecture DOI  
Abstract: Application creation through service composition is a cornerstone for several architectures including Service-Oriented Architecture. As the number and diversity of applications created based on this paradigm increase, the need for guaranteeing quality of service becomes more important. In this paper, we present a distributed algorithm for guaranteeing a specified level of application completion probability. The algorithm is designed to control service commitments in both queue-less and queue-enabled service-oriented systems. The algorithm does not assume a specific distribution type for service execution times and application request inter-arrival times, and hence is suitable for systems with stationary or non-stationary request arrivals. We show that the proposed distributed algorithm achieves its performance objectives for both queue-less and queue-enabled service oriented systems.
BibTeX:

@article{5668977,

  author = {Bannazadeh, H. and Leon-Garcia, A.},

  title = {A Distributed Probabilistic Commitment Control Algorithm for Service-Oriented Systems},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2010},

  volume = {7},

  number = {4},

  pages = {204-217},

  doi = {http://dx.doi.org/10.1109/TNSM.2010.1012.I9P0338}

}

Bao, F. and Chen, I. and Chang, M. and Cho, J. Hierarchical Trust Management for Wireless Sensor Networks and its Applications to Trust-Based Routing and Intrusion Detection 2012 Network and Service Management, IEEE Transactions on
Vol. 9(2), pp. 169-183 
trust management , intrusion detection , performance analysis , routing , security , wireless sensor networks DOI  
Abstract: We propose a highly scalable cluster-based hierarchical trust management protocol for wireless sensor networks (WSNs) to effectively deal with selfish or malicious nodes. Unlike prior work, we consider multidimensional trust attributes derived from communication and social networks to evaluate the overall trust of a sensor node. By means of a novel probability model, we describe a heterogeneous WSN comprising a large number of sensor nodes with vastly different social and quality of service (QoS) behaviors with the objective to yield "ground truth" node status. This serves as a basis for validating our protocol design by comparing subjective trust generated as a result of protocol execution at runtime against objective trust obtained from actual node status. To demonstrate the utility of our hierarchical trust management protocol, we apply it to trust-based geographic routing and trust-based intrusion detection. For each application, we identify the best trust composition and formation to maximize application performance. Our results indicate that trust-based geographic routing approaches the ideal performance level achievable by flooding-based routing in message delivery ratio and message delay without incurring substantial message overhead. For trust-based intrusion detection, we discover that there exists an optimal trust threshold for minimizing false positives and false negatives. Furthermore, trust-based intrusion detection outperforms traditional anomaly-based intrusion detection approaches in both the detection probability and the false positive probability.
BibTeX:

@article{6174485,

  author = {Bao, F. and Chen, I. and Chang, M. and Cho, J.},

  title = {Hierarchical Trust Management for Wireless Sensor Networks and its Applications to Trust-Based Routing and Intrusion Detection},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2012},

  volume = {9},

  number = {2},

  pages = {169-183},

  doi = {http://dx.doi.org/10.1109/TCOMM.2012.031912.110179}

}

Bartolini, C. and Stefanelli, C. and Tortonesi, M. SYMIAN: Analysis and performance improvement of the IT incident management process 2010 Network and Service Management, IEEE Transactions on
Vol. 7(3), pp. 132-144 
business-driven it management (bdim) , it service management, incident management , decision support, information technology infrastructure library (itil) decision support systems , information management , organisational aspects DOI  
Abstract: Incident Management is the process through which IT support organizations manage to restore normal service operation after a service disruption. The complexity of real-life enterprise-class IT support organizations makes it extremely hard to understand the impact of organizational, structural and behavioral components on the performance of the currently adopted incident management strategy and, consequently, which actions could improve it. This paper presents SYMIAN, a decision support tool for the performance improvement of the incident management function in IT support organizations. SYMIAN simulates the effect of corrective measures before their actual implementation, enabling time, effort, and cost savings. To this end, SYMIAN models the IT support organization as an open queuing network, thereby enabling the evaluation of both the system-wide dynamics as well as the behavior of the individual organization components and their interactions. Experimental results show SYMIAN's effectiveness in the performance analysis and tuning of the incident management process for real-life IT support organizations.
BibTeX:

@article{5560569,

  author = {Bartolini, C. and Stefanelli, C. and Tortonesi, M.},

  title = {SYMIAN: Analysis and performance improvement of the IT incident management process},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2010},

  volume = {7},

  number = {3},

  pages = {132-144},

  doi = {http://dx.doi.org/10.1109/TNSM.2010.1009.I9P0321}

}

Batista, Daniel M. and da Fonseca, Nelson L. S. Scheduling Grid Tasks in Face of Uncertain Communication Demands 2011 Network and Service Management, IEEE Transactions on
Vol. 8(2), pp. 92-103 
grid networks , resource management , task scheduling , uncertainty fuzzy set theory , grid computing , optimisation , quality of service , scheduling DOI  
Abstract: Grid scheduling is essential to Quality of Service provisioning as well as to efficient management of grid resources. Grid scheduling usually considers the state of the grid resources as well as application demands. However, such demands are generally unknown for highly demanding applications, since these often generate data which will be transferred during their execution. Without appropriate assessment of these demands, scheduling decisions can lead to poor performance. Thus, it is of paramount importance to consider uncertainties in the formulation of a grid scheduling problem. This paper introduces the IPDT-FUZZY scheduler, a scheduler which considers the demands of grid applications with such uncertainties. The scheduler uses fuzzy optimization, and both computational and communication demands are expressed as fuzzy numbers. Its performance was evaluated, and it was shown to be attractive when communication requirements are uncertain. Its efficacy is compared, via simulation, to that of a deterministic counterpart scheduler and the results reinforce its adequacy for dealing with the lack of accuracy in the estimation of communication demands.
BibTeX:

@article{5871351,

  author = {Batista, Daniel M. and da Fonseca, Nelson L. S.},

  title = {Scheduling Grid Tasks in Face of Uncertain Communication Demands},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2011},

  volume = {8},

  number = {2},

  pages = {92-103},

  doi = {http://dx.doi.org/10.1109/TNSM.2011.050311.100060}

}

Battisha, M. and Elmaghraby, A. and Meleis, H. and Samineni, S. Adaptive tracking of network behavioral signals for real time forensic analysis of service quality degradation 2008 Network and Service Management, IEEE Transactions on
Vol. 5(2), pp. 105-117 
adaptive control , adaptive systems , computer crime , degradation , forensics , jitter , programmable control , quality management , signal analysis , web and internet services ip networks , adaptive signal detection , computer crime , moving average processes , quality of service , telecommunication security DOI  
Abstract: The current shift from the static access based service model to the dynamic application based service model introduced major challenges for effective forensics of any quality degradation of the provided service. In addition, about 55 percent of the Tier 1 and Tier 2 providers are planning to offer managed security services to guarantee an attack free IP service. In this article, we propose a novel approach to modeling the network behavior in order to select meaningful metrics to be used in tracking the network behavior changes. Based on the deftly selected metrics, we utilize an adaptive exponentially weighted moving average (EWMA) with a moving centerline control chart to monitor the changes of the network behavior. Signaling the network behavior changes in association with the service objective based network behavioral model should provide the required information for effective forensics of service quality degradation. Our methodology is applied on both simulated and real traces of network behavioral metrics. We illustrate the effectiveness of the forensic analysis model for the selection of relevant behavioral metrics. We also show how the adaptive EWMA can be used for tracking the changes in the network behavior from normal to abnormal and vice versa.
BibTeX:

@article{4694135,

  author = {Battisha, M. and Elmaghraby, A. and Meleis, H. and Samineni, S.},

  title = {Adaptive tracking of network behavioral signals for real time forensic analysis of service quality degradation},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2008},

  volume = {5},

  number = {2},

  pages = {105-117},

  doi = {http://dx.doi.org/10.1109/TNSM.2008.021104}

}

Begnum, Kyrre and Burgess, Mark and Jonassen, Tore M. and Fagernes, Siri On the stability of adaptive Service Level Agreements 2006 Network and Service Management, IEEE Transactions on
Vol. 3(1), pp. 13-21 
service level agreements , adaptive policy , dynamical systems , feedback DOI  
Abstract: We consider some implications of non-linear feedback, due to policy combinatorics, on policy-based management of networked services. We pay special attention to the case where the monitoring of certain aspects of Service Level Agreements is used to alter future policy dynamically, according to a control feedback scheme. Using two simple models, we show that nonlinear policies are generally unstable to service provision, i.e. provide no reliable service levels (QoS). Hence we conclude that automated control by policy-rule combinatorics can damage quality of service goals.
BibTeX:

@article{4798303,

  author = {Begnum, Kyrre and Burgess, Mark and Jonassen, Tore M. and Fagernes, Siri},

  title = {On the stability of adaptive Service Level Agreements},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2006},

  volume = {3},

  number = {1},

  pages = {13-21},

  doi = {http://dx.doi.org/10.1109/TNSM.2006.4798303}

}

Belabed, D. and Secci, S. and Pujolle, G. and Medhi, D. Striking a Balance Between Traffic Engineering and Energy Efficiency in Virtual Machine Placement 2015 Network and Service Management, IEEE Transactions on
Vol. 12(2), pp. 202-216 
containers;optimization;ports (computers);servers;switches;topology;virtualization;data center networking;multipath forwarding;traffic engineering;vm placement;vm placement;virtual bridging;virtual bridging;data center networking;multipath forwarding;traffic engineering DOI  
Abstract: The increasing adoption of server virtualization has recently favored three key technology advances in data-center networking: the emergence at the hypervisor software level of virtual bridging functions between virtual machines and the physical network; the possibility to dynamically migrate virtual machines across virtualization servers in the data-center network (DCN); a more efficient exploitation of the large path diversity by means of multipath forwarding protocols. In this paper, we investigate the impact of these novel features in DCN optimization by providing a comprehensive mathematical formulation and a repeated matching heuristic for its resolution. We show, in particular, how virtual bridging and multipath forwarding impact common DCN optimization goals, traffic engineering (TE) and energy efficiency (EE), and assess their utility in the various cases of four different DCN topologies. We show that virtual bridging brings a high performance gain when TE is the primary goal and should be deactivated when EE becomes important. Moreover, we show that multipath forwarding can bring relevant gains only when EE is the primary goal and virtual bridging is not enabled.
BibTeX:

@article{7061534,

  author = {Belabed, D. and Secci, S. and Pujolle, G. and Medhi, D.},

  title = {Striking a Balance Between Traffic Engineering and Energy Efficiency in Virtual Machine Placement},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2015},

  volume = {12},

  number = {2},

  pages = {202-216},

  doi = {http://dx.doi.org/10.1109/TNSM.2015.2413755}

}

Bellavista, P. and Cinque, M. and Cotroneo, D. and Foschini, L. Self-adaptive handoff management for mobile streaming continuity 2009 Network and Service Management, IEEE Transactions on
Vol. 6(2), pp. 80-94 
resource management, handoff management, quality of service, wireless networks, multimedia streaming internet , media streaming , mobile computing , mobility management (mobile radio) , quality of service , radio access networks DOI  
Abstract: Self-adaptive management and quality adaptation of multimedia services are open challenges in the heterogeneous wireless Internet, where different wireless access points potentially enable anywhere anytime Internet connectivity. One of the most challenging issues is to guarantee streaming continuity with maximum quality, despite possible handoffs at multimedia provisioning time. To enable handoff management to self-adapt to specific application requirements with minimum resource consumption, this paper offers three main contributions. First, it proposes a simple way to specify handoff-related service-level objectives that are focused on quality metrics and tolerable delay. Second, it presents how to automatically derive from these objectives a set of parameters to guide system-level configuration about handoff strategies and dynamic buffer tuning. Third, it describes the design and implementation of a novel handoff management infrastructure for maximizing streaming quality while minimizing resource consumption. Our infrastructure exploits i) experimentally evaluated tuning diagrams for resource management and ii) handoff prediction/awareness. The reported results show the effectiveness of our approach, which permits achieving the desired quality-delay tradeoff in common Internet deployment environments, even in the presence of vertical handoffs.
BibTeX:

@article{5374829,

  author = {Bellavista, P. and Cinque, M. and Cotroneo, D. and Foschini, L.},

  title = {Self-adaptive handoff management for mobile streaming continuity},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2009},

  volume = {6},

  number = {2},

  pages = {80-94},

  doi = {http://dx.doi.org/10.1109/TNSM.2009.090602}

}

Bellavista, P. and Corradi, A. and Giannelli, C. Differentiated Management Strategies for Multi-Hop Multi-Path Heterogeneous Connectivity in Mobile Environments 2011 Network and Service Management, IEEE Transactions on
Vol. 8(3), pp. 190-204 
wireless computing , connectivity management , context awareness , cooperative networking , network management middleware internet , middleware , mobile computing , mobility management (mobile radio) DOI  
Abstract: The widespread availability of mobile devices with multiple wireless interfaces, such as UMTS/GPRS, IEEE 802.11a/b/g and Bluetooth, is pushing for the support of multi-homing and multi-channel connectivity, also enabled by multi-hop cooperative paths to the Internet. The goal is to transparently allow the synergic exploitation of "best" connectivity opportunities available at runtime, by enabling cooperative connectivity, extended wireless coverage, and effective load balancing (for both energy and bandwidth consumption). To this purpose, we claim the need for innovative, lightweight, and proactive evaluation metrics for connectivity management by exploiting application-level awareness of expected node mobility, path throughput, and energy availability. To demonstrate the effectiveness of these solution guidelines for Multi-hop Multi-path Heterogeneous Connectivity (MMHC), we have designed, implemented, and thoroughly evaluated our evaluation metrics on top of the MMHC middleware, which are original because they i) enable the management of multiple multi-hop paths, also made up by heterogeneous wireless links, ii) support connectivity management decisions depending on dynamically gathered context indicators, and iii) can proactively trigger management operations with limited overhead. The extensive set of reported results, from both simulations and real testbed, provides a useful guide for the full understanding of how, to what extent, and which context-based evaluation metrics can enable effective MMHC management in differentiated application/deployment scenarios.
BibTeX:

@article{6009141,

  author = {Bellavista, P. and Corradi, A. and Giannelli, C.},

  title = {Differentiated Management Strategies for Multi-Hop Multi-Path Heterogeneous Connectivity in Mobile Environments},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2011},

  volume = {8},

  number = {3},

  pages = {190-204},

  doi = {http://dx.doi.org/10.1109/TCOMM.2011.072611.100066}

}

Bera, P. and Ghosh, S.K. and Dasgupta, P. Policy Based Security Analysis in Enterprise Networks: A Formal Approach 2010 Network and Service Management, IEEE Transactions on
Vol. 7(4), pp. 231-243 
network security , sat based verification , access control policies (acl) authorisation , business data processing , computability , computer network security , formal verification , network routing DOI  
Abstract: In a typical enterprise network, there are several sub-networks or network zones corresponding to different departments or sections of the organization. These zones are interconnected through a set of Layer-3 network devices (or routers). The service accesses within the zones and also with the external network (e.g., Internet) are usually governed by an enterprise-wide security policy. This policy is implemented through an appropriate set of access control lists (ACL rules) distributed across various network interfaces of the enterprise network. Such networks face two major security challenges: (i) conflict-free representation of the security policy, and (ii) correct implementation of the policy through distributed ACL rules. This work presents a formal verification framework to analyze the security implementations in an enterprise network with respect to the organizational security policy. It generates a conflict-free policy model from the enterprise-wide security policy and then formally verifies the distributed ACL implementations with respect to the conflict-free policy model. The complexity in the verification process arises from extensive use of temporal service access rules and presence of hidden service access paths in the networks. The proposed framework incorporates formal modeling of conflict-free policy specification and distributed ACL implementation in the network and finally deploys a Boolean satisfiability (SAT) based verification procedure to check the conformation between the policy and implementation models.
BibTeX:

@article{5668979,

  author = {Bera, P. and Ghosh, S.K. and Dasgupta, P.},

  title = {Policy Based Security Analysis in Enterprise Networks: A Formal Approach},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2010},

  volume = {7},

  number = {4},

  pages = {231-243},

  doi = {http://dx.doi.org/10.1109/TNSM.2010.1012.0365}

}

Bermudez, I. and Traverso, S. and Munafo, M. and Mellia, M. A Distributed Architecture for the Monitoring of Clouds and CDNs: Applications to Amazon AWS 2014 Network and Service Management, IEEE Transactions on
Vol. 11(4), pp. 516-529 
cloud computing;geology;ip networks;measurement;monitoring;probes;servers;amazon web services;network monitoring;cloud computing;content delivery networks DOI  
Abstract: Clouds and CDNs are systems that tend to separate the content being requested by users from the physical servers capable of serving it. From the network point of view, monitoring and optimizing performance for the traffic they generate are challenging tasks, given that the same resource can be located in multiple places, which can, in turn, change at any time. The first step in understanding cloud and CDN systems is thus the engineering of a monitoring platform. In this paper, we propose a novel solution that combines passive and active measurements and whose workflow has been tailored to specifically characterize the traffic generated by cloud and CDN infrastructures. We validate our platform by performing a longitudinal characterization of the very well known cloud and CDN infrastructure provider Amazon Web Services (AWS). By observing the traffic generated by more than 50,000 Internet users of an Italian Internet Service Provider, we explore the EC2, S3, and CloudFront AWS services, unveiling their infrastructure, the pervasiveness of web services they host, and their traffic allocation policies as seen from our vantage points. Most importantly, we observe their evolution over a two-year-long period. The solution provided in this paper can be of interest for the following: 1) developers aiming at building measurement tools for cloud infrastructure providers; 2) developers interested in failure and anomaly detection systems; and 3) third-party service-level agreement certificators who can design systems to independently monitor performance. Finally, we believe that the results about AWS presented in this paper are interesting as they are among the first to unveil properties of AWS as seen from the operator point of view.
BibTeX:

@article{6920067,

  author = {Bermudez, I. and Traverso, S. and Munafo, M. and Mellia, M.},

  title = {A Distributed Architecture for the Monitoring of Clouds and CDNs: Applications to Amazon AWS},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2014},

  volume = {11},

  number = {4},

  pages = {516-529},

  doi = {http://dx.doi.org/10.1109/TNSM.2014.2362357}

}

Bhamare, D. and Gumaste, A. and Krishnamoorthy, M. and Dayama, N.R. On the Backbone VLAN Identifier (BVID) Allocation in 802.1Qay Provider Backbone Bridged - Traffic Engineered Networks 2014 Network and Service Management, IEEE Transactions on
Vol. 11(2), pp. 172-187 
access protocols;graph theory;optimisation;telecommunication traffic;wireless lan;48-bit backbone mac address;bvid allocation problem;ieee 802.1qay pbb-te;vlan tag;backbone vlan identifier allocation;backbone media access control address;carrier ethernet;constrained optimization problem;network graph;provider backbone bridging-traffic engineering standard;virtual local area network tag;multiprotocol label switching;ports (computers);resource management;sonet;switches;synchronous digital hierarchy;provider-backbone-bridging - traffic engineering;backbone virtual network identifier (bvid);carrier ethernet;label assignment in provider networks DOI  
Abstract: Carrier Ethernet is rapidly being deployed in the metropolitan and core segments of the transport network. One of the emerging flavors of Carrier Ethernet is the IEEE 802.1Qay PBB-TE or Provider Backbone Bridging-Traffic Engineering standard. PBB-TE relies on the assignment of a network-specific Virtual Local Area Network (VLAN) tag, called the Backbone VLAN ID or BVID, that is used in conjunction with a backbone Media Access Control (MAC) address for forwarding. The 12-bit BVID along with the 48-bit Backbone MAC address are used to forward an Ethernet frame. The assignment of BVIDs in a network is critical, given that there are only 4094 possible assignments, especially for those paths that are overlapping in the network graph and incident at the same destination. While the only way to scale is to reuse BVIDs, this method can lead to a complication if the same BVID is allocated to an overlapping path. To the best of our knowledge, this is the first instance of isolating this problem of limited BVID availability, which arises only due to graphical overlap between services. We formulate and solve this as a constrained optimization problem. We present optimal and heuristic algorithms to solve the BVID problem. The optimal approach solves the `static' case, while the heuristic can solve both the `static' and the `dynamic' cases of the BVID allocation problem. Results show that the developed heuristics perform close to the optimal and can be used in commercial settings for both the static and dynamic cases.
BibTeX:

@article{6750687,

  author = {Bhamare, D. and Gumaste, A. and Krishnamoorthy, M. and Dayama, N.R.},

  title = {On the Backbone VLAN Identifier (BVID) Allocation in 802.1Qay Provider Backbone Bridged - Traffic Engineered Networks},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2014},

  volume = {11},

  number = {2},

  pages = {172-187},

  doi = {http://dx.doi.org/10.1109/TNSM.2014.022614.120358}

}

BK, P. and Kuri, J. An Estimated Delay Based Association Policy for Web Browsing in a Multirate WLAN 2013 Network and Service Management, IEEE Transactions on
Vol. 10(1), pp. 70-83 
wlan , access points , association , infrastructure mode , web browsing DOI  
Abstract: Station-Access Point (STA-AP) association is an important function in IEEE 802.11 Wireless LAN (WLAN) management. We obtain an association policy that can be implemented in centralized WLAN management devices, or in STAs, by taking into account explicitly two aspects of practical importance: (a) TCP-controlled short file downloads interspersed with read times (motivated by web browsing), and (b) different STAs associated with an Access Point (AP) at possibly different rates (depending on distance from the AP). Our approach is based on two steps. First, we consider an analytical model to obtain the aggregate AP throughput for long TCP-controlled file downloads when STAs are associated at k different rates r_1, r_2, ..., r_k; this extends earlier work in the literature. Second, we present a 2-node closed queueing network model to approximate the expected average-sized file download time for a user who shares the AP with other users associated at a multiplicity of rates. These analytical results motivate the proposed association policy, called the Estimated Delay based Association (EDA) policy: Associate with the AP at which the expected file download time is the least. Simulations indicate that for a web-browsing type traffic scenario, EDA performs substantially better than other policies that have been proposed earlier. Crucially, the improved performance is sustained even in realistic evaluation scenarios, where the assumptions underpinning the analytical model do not hold. To the best of our knowledge, this is the first work that proposes an association policy tailored specifically for web browsing. Apart from this, our analytical results could be of independent interest.
BibTeX:

@article{6220829,

  author = {BK, P. and Kuri, J.},

  title = {An Estimated Delay Based Association Policy for Web Browsing in a Multirate WLAN},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2013},

  volume = {10},

  number = {1},

  pages = {70-83},

  doi = {http://dx.doi.org/10.1109/TNSM.2012.061212.100090}

}

Bolla, R. and Bruschi, R. and Davoli, F. and Lombardo, C. Fine-Grained Energy-Efficient Consolidation in SDN Networks and Devices 2015 Network and Service Management, IEEE Transactions on
Vol. 12(2), pp. 132-145 
computer architecture;energy consumption;green products;hardware;protocols;standards;switches;network management;consolidation;green abstraction layer;network management;orchestration;software defined networks DOI  
Abstract: The constant evolution and expansion of the Internet and Internet-related technologies has exposed the limitations of the current networking infrastructures, which are represented by the unsustainable power consumption and low level of scalability. In fact, these infrastructures are still based on the typical, ossified architecture of the TCP/IP paradigm. In order to cope with the Future Internet requirements, recent contributions envisage an evolution towards more programmable and efficient paradigms. In this respect, this paper describes the basic issues, the technical approaches, and the methodologies for the implementation of power management primitives in the context of the emerging Software Defined Networking. In detail, we propose to extend one of the most prominent solutions aimed at increasing networking flexibility, the OpenFlow Protocol, to integrate the energy-aware capabilities offered by the Green Abstraction Layer (GAL). However, the mere introduction of node-level solutions would be of little or no use in the absence of a network-wide management scheme to guarantee inter-operability and effectiveness of the proposed architecture. In this respect, this work also proposes an analytical model for the management of a network with these capabilities. The results will show how our solutions are well suited to provide a scalable and efficient network architecture able to manage the orchestration and consolidation of the available resources.
BibTeX:

@article{7103350,

  author = {Bolla, R. and Bruschi, R. and Davoli, F. and Lombardo, C.},

  title = {Fine-Grained Energy-Efficient Consolidation in SDN Networks and Devices},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2015},

  volume = {12},

  number = {2},

  pages = {132-145},

  doi = {http://dx.doi.org/10.1109/TNSM.2015.2431074}

}

Bolla, R. and Bruschi, R. and Lombardo, C. and Podda, F. OpenFlow in the Small: A Flexible and Efficient Network Acceleration Framework for Multi-Core Systems 2014 Network and Service Management, IEEE Transactions on
Vol. 11(3), pp. 390-404 
acceleration;hardware;multicore processing;network interfaces;operating systems;ports (computers);standards;network processors;openflow;network programmability DOI  
Abstract: Multi-core processors optimized for networking applications typically combine general-purpose cores with offloading engines to relieve the processor cores of specialized packet processing tasks, such as parsing, classification, and security. Unfortunately, modern embedded operating systems still lack an effective and advanced hardware abstraction to exploit these aspects optimally. Based on these considerations, this paper proposes a novel framework, OpenFlow in the Small (OFiS), specifically designed to provide a flexible hardware abstraction layer for heterogeneous multi-core systems with advanced hardware accelerators for network offloading. OFiS represents such accelerators as standard OpenFlow switches inside the processor, moving the edge of the OpenFlow network management to the computational resources inside the end-boxes. As indicated in the experimental evaluation, OFiS exploits hardware parallelism and consolidates the software tasks at finer granularities.
BibTeX:

@article{6873740,

  author = {Bolla, R. and Bruschi, R. and Lombardo, C. and Podda, F.},

  title = {OpenFlow in the Small: A Flexible and Efficient Network Acceleration Framework for Multi-Core Systems},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2014},

  volume = {11},

  number = {3},

  pages = {390-404},

  doi = {http://dx.doi.org/10.1109/TNSM.2014.2346078}

}

Breitgand, David and Goldstein, Maayan and Henis, Ealan and Shehory, Onn Efficient Control of False Negative and False Positive Errors with Separate Adaptive Thresholds 2011 Network and Service Management, IEEE Transactions on
Vol. 8(2), pp. 128 -140 
system performance , adaptive algorithm , adaptive control , performance analysis adaptive control , error statistics , performance evaluation , systems analysis DOI  
Abstract: Component level performance thresholds are widely used as a basic means for performance management. As the complexity of managed applications increases, manual threshold maintenance becomes a difficult task. Complexity arises from having a large number of application components and their operational metrics, dynamically changing workloads, and compound relationships between application components. To alleviate this problem, we advocate that component level thresholds should be computed, managed and optimized automatically and autonomously. To this end, we have designed and implemented a performance threshold management application that automatically and dynamically computes two separate component level thresholds: one for controlling Type I errors and another for controlling Type II errors. Our solution additionally facilitates metric selection thus minimizing management overheads. We present the theoretical foundation for this autonomic threshold management application, describe a specific algorithm and its implementation, and evaluate it using real-life scenarios and production data sets. As our present study shows, with proper parameter tuning, our on-line dynamic solution is capable of nearly optimal performance thresholds calculation.
BibTeX:

@article{5708247,

  author = {Breitgand, David and Goldstein, Maayan and Henis, Ealan and Shehory, Onn},

  title = {Efficient Control of False Negative and False Positive Errors with Separate Adaptive Thresholds},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2011},

  volume = {8},

  number = {2},

  pages = {128 -140},

  doi = {http://dx.doi.org/10.1109/TNSM.2011.020111.00055}

}

Brunner, Marcus and Nunzi, Giorgio and Dietz, Thomas and Kazuhiko, Isoyama Customer-oriented GMPLS service management and resilience differentiation 2004 Network and Service Management, IEEE Transactions on
Vol. 1(2), pp. 92 -102 
internet , marketing and sales , multiprotocol label switching , packet switching , resilience , routing , sonet , service oriented architecture , standardization , wavelength division multiplexing DOI  
Abstract: Generalized Multi-Protocol Label Switching (GMPLS) is currently under standardization at the Internet Engineering Task Force (IETF). It basically reuses the MPLS control plane (IP routing and signaling) for various technologies such as fiber switching, DWDM, SONET, and packet MPLS. In this article, we propose a management architecture, which allows a service provider to offer customers various services based on a GMPLS infrastructure. The business model behind the architecture is comparable to well-known online flight ticket sales systems.
BibTeX:

@article{4798294,

  author = {Brunner, Marcus and Nunzi, Giorgio and Dietz, Thomas and Kazuhiko, Isoyama},

  title = {Customer-oriented GMPLS service management and resilience differentiation},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2004},

  volume = {1},

  number = {2},

  pages = {92 -102},

  doi = {http://dx.doi.org/10.1109/TNSM.2004.4798294}

}

Burgess, M. and Canright, G. Scalability of Peer Configuration Management in Logically Ad Hoc Networks 2004 Network and Service Management, IEEE Transactions on
Vol. 1(1), pp. 21 -29 
ad hoc networks , communication channels , databases , mobile computing , open systems , peer to peer computing , protocols , scalability , technology management , telecommunication network management DOI  
Abstract: Current interest in ad hoc and peer-to-peer networking technologies prompts a re-examination of models for configuration management within these frameworks. In the future, network management methods may have to scale to millions of nodes within a single organization, with complex social constraints. In this paper, we discuss whether it is possible to manage the configuration of large numbers of network devices using well known and not so well known configuration models, and we discuss how the special characteristics of ad hoc and peer-to-peer networks are reflected in this problem.
BibTeX:

@article{4623691,

  author = {Burgess, M. and Canright, G.},

  title = {Scalability of Peer Configuration Management in Logically Ad Hoc Networks},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2004},

  volume = {1},

  number = {1},

  pages = {21 -29},

  doi = {http://dx.doi.org/10.1109/TNSM.2004.4623691}

}

Burst, Ken and Joiner, Laurie and Grimes, Gary Delay Based Congestion Detection and Admission Control for Voice quality in enterprise or carrier controlled IP Networks 2005 Network and Service Management, IEEE Transactions on
Vol. 2(1), pp. 1 -8 
admission control , delay , diffserv networks , ip networks , internet telephony , multiprotocol label switching , probes , protocols , quality of service , waste materials DOI  
Abstract: Reservations based admission control, using Multi-protocol Label Switching (MPLS) is the leading method being considered by traditional carriers for maintaining Quality of Service (QoS) when deploying Voice over Internet Protocol (VoIP). In this research we explore an alternative to reservations based admission control called Delay Based Congestion Detection and Admission Control (DBCD/AC), a form of Endpoint Admission Control. DBCD/AC is a method for edge devices, such as media gateways, to detect impending congestion in the core based on delay measurements and analysis. When impending congestion is detected, the edge devices refuse new incoming connections to the media gateways to mitigate the congestion. This research examines the characteristics of DBCD/AC and finds that DBCD/AC is a promising alternative to a reservations based admission control approach for enterprise or carrier controlled IP Networks.
BibTeX:

@article{4798296,

  author = {Burst, Ken and Joiner, Laurie and Grimes, Gary},

  title = {Delay Based Congestion Detection and Admission Control for Voice quality in enterprise or carrier controlled IP Networks},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2005},

  volume = {2},

  number = {1},

  pages = {1 -8},

  doi = {http://dx.doi.org/10.1109/TNSM.2005.4798296}

}

Capone, A. and Elias, J. and Martignon, F. Models and Algorithms for the Design of Service Overlay Networks 2008 Network and Service Management, IEEE Transactions on
Vol. 5(3), pp. 143 -156 
service deployment, network planning, overlay networks, service-level agreements, optimization, heuristics internet , quality of service , telecommunication network planning DOI  
Abstract: Service overlay networks (SONs) can provide end-to-end quality of service guarantees in the Internet without requiring significant changes to the underlying network infrastructure. A SON is an application-layer network operated by a third-party Internet service provider (ISP) that owns a set of overlay nodes, residing in the underlying ISP domains, interconnected by overlay links. The deployment of a SON can be a capital-intensive investment, and hence its planning requires careful decisions, including the overlay nodes' placement, the capacity provisioning of overlay links as well as of access links that connect the end-users to the SON infrastructure. In this paper, we propose two novel optimization models for the planning of SONs. The first model minimizes the SON installation cost while providing full coverage to all network's users. The second model maximizes the SON operator's profit by further choosing which users to serve, based on the expected gain, and taking into consideration budget constraints. We also introduce two efficient heuristics to get near-optimal solutions for large-scale instances in a reasonable computation time. We provide numerical results of the proposed models and heuristics on a set of realistic-size instances, and discuss the effect of different parameters on the characteristics of the planned networks. We show that in the considered network scenarios the proposed heuristics perform close to the optimum with a short computing time.
BibTeX:

@article{4805131,

  author = {Capone, A. and Elias, J. and Martignon, F.},

  title = {Models and Algorithms for the Design of Service Overlay Networks},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2008},

  volume = {5},

  number = {3},

  pages = {143 -156},

  doi = {http://dx.doi.org/10.1109/TNSM.2009.031102}

}

Casas, P. and D'Alconzo, A. and Fiadino, P. and Bar, A. and Finamore, A. and Zseby, T. When YouTube Does not Work: Analysis of QoE-Relevant Degradation in Google CDN Traffic 2014 Network and Service Management, IEEE Transactions on
Vol. 11(4), pp. 441-457 
degradation;google;ip networks;monitoring;servers;videos;youtube;content delivery networks;quality of experience;statistical data analysis;traffic monitoring;youtube;content delivery networks;quality of experience;statistical data analysis;traffic monitoring DOI  
Abstract: YouTube is the most popular service in today's Internet. Google relies on its massive content delivery network (CDN) to push YouTube videos as close as possible to the end-users, both to improve their watching experience as well as to reduce the load on the core of the network, using dynamic server selection strategies. However, we show that such a dynamic approach can actually have negative effects on the end-user quality of experience (QoE). Through the comprehensive analysis of one month of YouTube flow traces collected at the network of a large European ISP, we report a real case study in which YouTube QoE-relevant degradation affecting a large number of users occurs as a result of Google's server selection strategies. We present an iterative and structured process to detect, characterize, and diagnose QoE-relevant anomalies in CDN distributed services such as YouTube. The overall process uses statistical analysis methodologies to unveil the root causes behind automatically detected problems linked to the dynamics of CDNs' server selection strategies.
BibTeX:

@article{6975242,

  author = {Casas, P. and D'Alconzo, A. and Fiadino, P. and Bar, A. and Finamore, A. and Zseby, T.},

  title = {When YouTube Does not Work: Analysis of QoE-Relevant Degradation in Google CDN Traffic},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2014},

  volume = {11},

  number = {4},

  pages = {441-457},

  doi = {http://dx.doi.org/10.1109/TNSM.2014.2377691}

}

Castro, Alfonso and Villagra, Victor A. and Fuentes, Beatriz and Costales, Begona A Flexible Architecture for Service Management in the Cloud 2014 Network and Service Management, IEEE Transactions on
Vol. 11(1), pp. 116-125 
cloud computing;cognition;computer architecture;ontologies;semantics;service management;cloud computing;multi-domain;multi-provider;semantic;shared knowledge plane DOI  
Abstract: Cloud computing is a style of computing where different capabilities are provided as a service to customers using Internet technologies. The most common offered services are Infrastructure (IaaS), Software (SaaS) and Platform (PaaS). This work integrates the service management into the cloud computing concept and shows how management can be provided as a service in the cloud. Nowadays, services need to adapt their functionalities across heterogeneous environments with different technological and administrative domains. The implied complexity of this situation can be simplified by a service management architecture in the cloud. This paper focuses on this architecture, taking into account specific service management functionalities, like incident management or KPI/SLA management, and provides a complete solution. The proposed architecture is based on a distributed set of agents, using semantic-based techniques: a Shared Knowledge Plane, instantiated in the cloud, has been introduced to ensure communication between agents.
BibTeX:

@article{6750690,

  author = {Castro, Alfonso and Villagra, Victor A. and Fuentes, Beatriz and Costales, Begona},

  title = {A Flexible Architecture for Service Management in the Cloud},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2014},

  volume = {11},

  number = {1},

  pages = {116-125},

  doi = {http://dx.doi.org/10.1109/TNSM.2014.022614.1300421}

}

Chia-Wei Chang and Seungjoon Lee and Lin, B. and Jia Wang The taming of the shrew: mitigating low-rate TCP-targeted attack 2010 Network and Service Management, IEEE Transactions on
Vol. 7(1), pp. 1 -13 
shrew attack, differential tagging, fair drop rate telecommunication network routing , telecommunication security , transport protocols DOI  
Abstract: A Shrew attack, which uses a low-rate burst carefully designed to exploit TCP's retransmission timeout mechanism, can throttle the bandwidth of a TCP flow in a stealthy manner. While such an attack can significantly degrade the performance of all TCP-based protocols and services including Internet routing (e.g., BGP), no existing scheme clearly solves the problem in real network scenarios. In this paper, we propose a simple protection mechanism, called SAP (Shrew Attack Protection), for defending against a Shrew attack. Rather than attempting to track and isolate Shrew attackers, SAP identifies TCP victims by monitoring their drop rates and preferentially admits those packets from the victims with high drop rates to the output queue. This is to ensure that well-behaved TCP sessions can retain their bandwidth shares. Our simulation results indicate that under a Shrew attack, SAP can prevent TCP sessions from closing, and effectively enable TCP flows to maintain high throughput. SAP is a destination-port-based mechanism and requires only a small number of counters to find potential victims, which makes SAP readily implementable on top of existing router mechanisms.
BibTeX:

@article{5412869,

  author = {Chia-Wei Chang and Seungjoon Lee and Lin, B. and Jia Wang},

  title = {The taming of the shrew: mitigating low-rate TCP-targeted attack},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2010},

  volume = {7},

  number = {1},

  pages = {1 -13},

  doi = {http://dx.doi.org/10.1109/TNSM.2010.I8P0308}

}

Charalambides, M. and Flegkas, P. and Pavlou, G. and Rubio-Loyola, J. and Bandara, A.K. and Lupu, E.C. and Russo, A. and Dulay, N. and Sloman, M. Policy conflict analysis for diffserv quality of service management 2009 Network and Service Management, IEEE Transactions on
Vol. 6(1), pp. 15 -30 
qos management policies, conflict detection, dynamic conflict resolution. diffserv networks , bandwidth allocation , computer network management , formal specification , process algebra , program diagnostics , quality of service , reasoning about programs , telecommunication computing , telecommunication congestion control , telecommunication traffic DOI  
Abstract: Policy-based management provides the ability to (re-)configure differentiated services networks so that desired Quality of Service (QoS) goals are achieved. This requires implementing network provisioning decisions, performing admission control, and adapting bandwidth allocation to emerging traffic demands. A policy-based approach facilitates flexibility and adaptability as policies can be dynamically changed without modifying the underlying implementation. However, inconsistencies may arise in the policy specification. In this paper we provide a comprehensive set of QoS policies for managing Differentiated Services (DiffServ) networks, and classify the possible conflicts that can arise between them. We demonstrate the use of Event Calculus and formal reasoning for the analysis of both static and dynamic conflicts in a semi-automated fashion. In addition, we present a conflict analysis tool that provides network administrators with a user-friendly environment for determining and resolving potential inconsistencies. The tool has been extensively tested with large numbers of policies over a range of conflict types.
BibTeX:

@article{5331278,

  author = {Charalambides, M. and Flegkas, P. and Pavlou, G. and Rubio-Loyola, J. and Bandara, A.K. and Lupu, E.C. and Russo, A. and Dulay, N. and Sloman, M.},

  title = {Policy conflict analysis for diffserv quality of service management},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2009},

  volume = {6},

  number = {1},

  pages = {15 -30},

  doi = {http://dx.doi.org/10.1109/TNSM.2009.090302}

}

Ming Chen and Xiaorui Wang and Taylor, B. Achieving Bounded Matching Delay and Maximized Throughput in Information Dissemination Management 2011 Network and Service Management, IEEE Transactions on
Vol. 8(1), pp. 26 -38 
feedback control real-time scheduling , distributed model predictive control , distributed systems , end-to-end task , quality of service , real-time and embedded systems information dissemination , information management , information systems , optimal control , predictive control DOI  
Abstract: The demand for high performance information dissemination is increasing in many applications, such as e-commerce and security alerting systems. These applications usually require that the desired information be matched between numerous sources and sinks based on established subscriptions in a timely manner while a maximized system throughput be achieved to find more matched results. Existing work primarily focuses on only one of the two requirements, either timeliness or throughput. This can lead to an unnecessarily underutilized system or poor guarantees on matching delays. In this paper, we propose an integrated solution that controls both the matching delay and CPU utilization in information dissemination systems to achieve bounded matching delay for high-priority information and maximized system throughput in an example information dissemination system. In addition, we design an admission control scheme to meet the timeliness requirements for selected low-priority information. Our solution is based on optimal control theory for guaranteed control accuracy and system stability. Empirical results on a hardware testbed demonstrate that our controllers can meet the timeliness requirements while achieving maximized system throughput.
BibTeX:

@article{5702355,

  author = {Ming Chen and Xiaorui Wang and Taylor, B.},

  title = {Achieving Bounded Matching Delay and Maximized Throughput in Information Dissemination Management},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2011},

  volume = {8},

  number = {1},

  pages = {26 -38},

  doi = {http://dx.doi.org/10.1109/TNSM.2011.012111.00004}

}

Chen, Yang and Wang, Xiao and Shi, Cong and Lua, Eng Keong and Fu, Xiaoming and Deng, Beixing and Li, Xing Phoenix: A Weight-Based Network Coordinate System Using Matrix Factorization 2011 Network and Service Management, IEEE Transactions on
Vol. 8(4), pp. 334 -347 
internet topology , peer to peer computing , network coordinate system , network monitoring , triangle inequality violation DOI  
Abstract: Network coordinate (NC) systems provide a lightweight and scalable way for predicting the distances, i.e., round-trip latencies among Internet hosts. Most existing NC systems embed hosts into a low dimensional Euclidean space. Unfortunately, the persistent occurrence of Triangle Inequality Violation (TIV) on the Internet largely limits the distance prediction accuracy of those NC systems. Some alternative systems aim at handling the persistent TIV, however, they only achieve comparable prediction accuracy with Euclidean distance based NC systems. In this paper, we propose an NC system, so-called Phoenix, which is based on the matrix factorization model. Phoenix introduces a weight to each reference NC and trusts the NCs with higher weight values more than the others. The weight-based mechanism can substantially reduce the impact of the error propagation. Using the representative aggregate data sets and the newly measured dynamic data set collected from the Internet, our simulations show that Phoenix achieves significantly higher prediction accuracy than other NC systems. We also show that Phoenix quickly converges to steady state, performs well under host churn, handles the drift of the NCs successfully by using regularization, and is robust against measurement anomalies. Phoenix achieves a scalable yet accurate end-to-end distances monitoring. In addition, we study how well an NC system can characterize the TIV property on the Internet by introducing two new quantitative metrics, so-called RE_RPL and AE_RPL. We show that Phoenix is able to characterize TIV better than other existing NC systems.
BibTeX:

@article{6092405,

  author = {Chen, Yang and Wang, Xiao and Shi, Cong and Lua, Eng Keong and Fu, Xiaoming and Deng, Beixing and Li, Xing},

  title = {Phoenix: A Weight-Based Network Coordinate System Using Matrix Factorization},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2011},

  volume = {8},

  number = {4},

  pages = {334 -347},

  doi = {http://dx.doi.org/10.1109/TNSM.2011.110911.100079}

}

Cheng, G. and Chen, H. and Wang, Z. and Yi, P. and Zhang, F. and Hu, H. Towards Adaptive Network Nodes via Service Chain Construction 2015 Network and Service Management, IEEE Transactions on
Vol. 12(2), pp. 248-262 
adaptive systems;architecture;atomic layer deposition;computer architecture;ip networks;internet;protocols;network architecture;function management;functional combination;network adaptability;reconstructive network DOI  
Abstract: Network functional combination is a promising direction in enhancing Internet adaptability. It decomposes the current layered network into fine-grained building blocks and combines them on demand. However, what legacy functions should be decomposed and how to combine them in an optimal way are unclear. We propose a novel adaptive architecture called reconstructive network architecture (RECON) based on the principles of the Complex Adaptive System. This study has three main contributions. First, RECON decomposes functions of the protocol stack at layers 3 and 4 into fine-grained building blocks, called atomic capabilities to open the network core functions unlike existing solutions. Second, RECON can customize different service chains on demand by combining atomic capabilities in an optimal way. We formulate the atomic capability combination into a nonlinear integer optimization problem with the proposed algorithm to reach an appropriate tradeoff between the optimal solution and computation cost. Finally, we implement a proof-of-concept for RECON in the network node. Results are corroborated by several numerical simulations.
BibTeX:

@article{7105934,

  author = {Cheng, G. and Chen, H. and Wang, Z. and Yi, P. and Zhang, F. and Hu, H.},

  title = {Towards Adaptive Network Nodes via Service Chain Construction},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2015},

  volume = {12},

  number = {2},

  pages = {248-262},

  doi = {http://dx.doi.org/10.1109/TNSM.2015.2432116}

}

Chieng, David and Marshall, Alan and Parr, Gerard SLA brokering and bandwidth reservation negotiation schemes for QoS-aware internet 2005 Network and Service Management, IEEE Transactions on
Vol. 2(1), pp. 39-49 
availability , bandwidth , channel allocation , diffserv networks , engineering management , ip networks , prototypes , quality of service , telecommunication traffic , web and internet services DOI  
Abstract: We present a novel Service Level Agreement (SLA)-driven service provisioning architecture, which enables dynamic and flexible bandwidth reservation schemes on a per-user or per-application basis. Various session level SLA negotiation schemes involving bandwidth allocation, service start time and service duration parameters are introduced and analyzed. The results show that these negotiation schemes can be utilized for the benefit of both end users and network providers in achieving the highest individual SLA optimization in terms of key Quality of Service (QoS) metrics and price. The inherent characteristics of software agents such as autonomy, adaptability and social abilities offer many advantages in this dynamic, complex, and distributed network environment especially when performing Service Level Agreements (SLA) definition negotiations and brokering tasks. This article also presents a service broker prototype based on Fujitsu's Phoenix Open Agent Mediator (OAM) agent technology, which was used to demonstrate a range of SLA brokering scenarios.
BibTeX:

@article{4798300,

  author = {Chieng, David and Marshall, Alan and Parr, Gerard},

  title = {SLA brokering and bandwidth reservation negotiation schemes for QoS-aware internet},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2005},

  volume = {2},

  number = {1},

  pages = {39-49},

  doi = {http://dx.doi.org/10.1109/TNSM.2005.4798300}

}

Cianfrani, A. and Eramo, V. and Listanti, M. and Polverini, M. and Vasilakos, A. An OSPF-Integrated Routing Strategy for QoS-Aware Energy Saving in IP Backbone Networks 2012 Network and Service Management, IEEE Transactions on
Vol. 9(3), pp. 254-267 
energy efficient networks , ip routing , performance evaluation DOI  
Abstract: This paper deals with an energy saving routing solution, called Energy Saving IP Routing (ESIR), to be applied in an IP network. ESIR operation is integrated with the Open Shortest Path First (OSPF) protocol and allows the selection of links to be switched off so that the negative effects of IP topology reconfiguration procedures are avoided. The basic mechanisms on which ESIR relies are the concepts of SPT exportation and move. These mechanisms allow a Shortest Path Tree (SPT) to be shared between neighbor routers, so that the overall set of active network links can be reduced. Properties of moves are defined, and the energy saving problem in an IP network is formulated as the problem of finding the Maximum Set of Compatible Moves (MSCM). The MSCM problem is investigated in two steps: first, a relaxed version of the problem, named the basic MSCM problem, is considered in which QoS requirements are neglected; second, the full problem, named the QoS-aware MSCM problem, is addressed. We prove that the basic MSCM problem can be formulated as the well-known Maximum Clique Problem in a graph, while the QoS-aware MSCM problem introduces a condition equivalent to the Knapsack problem. ILP formulations to solve both problems are given, and heuristics to solve them in practical cases are proposed. The performance evaluation shows that in a real ISP network scenario ESIR is able to switch off up to 30% of network links by exploiting the over-provisioning adopted by operators in the network resource planning phase and the typical daily traffic trend.
BibTeX:

@article{6172595,

  author = {Cianfrani, A. and Eramo, V. and Listanti, M. and Polverini, M. and Vasilakos, A.},

  title = {An OSPF-Integrated Routing Strategy for QoS-Aware Energy Saving in IP Backbone Networks},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2012},

  volume = {9},

  number = {3},

  pages = {254-267},

  doi = {http://dx.doi.org/10.1109/TNSM.2012.031512.110165}

}

Cicic, T. and Hansen, A.F. and Kvalbein, A. and Hartmann, M. and Martin, R. and Menth, M. and Gjessing, S. and Lysne, O. Relaxed multiple routing configurations: IP fast reroute for single and correlated failures 2009 Network and Service Management, IEEE Transactions on
Vol. 6(1), pp. 1-14 
ip fast reroute, multi-topology routing, network protection, network utilization, correlated failures, shared risk groups. ip networks , computer network management , fault tolerance , telecommunication network reliability , telecommunication network routing , telecommunication network topology , telecommunication traffic DOI  
Abstract: Multi-topology routing is an increasingly popular IP network management concept that allows transport of different traffic types over disjoint network paths. The concept is of particular interest for the implementation of IP fast reroute (IP FRR). The authors have previously proposed an IP FRR scheme based on multi-topology routing called multiple routing configurations (MRC). MRC supports guaranteed, instantaneous recovery from any single link or node failure in biconnected networks as well as from many combined failures, provided sufficient bandwidth on the surviving links. Furthermore, in MRC different failures result in routing over different network topologies, which gives good control of the traffic distribution in the networks after a failure. In this paper we present two contributions. First, we define an enhanced IP FRR scheme which we call "relaxed MRC" (rMRC). Through experiments we demonstrate that rMRC is an improvement over MRC in all important aspects. Resource utilization in the presence of failures is significantly better, both in terms of path lengths and in terms of load distribution between the links. The internal state required in the routers is reduced, as rMRC requires fewer backup topologies to provide the same degree of protection. In addition, the preprocessing needed to generate the backup topologies is simplified. The second contribution is an extension of rMRC that can provide fast reroute in the presence of multiple correlated failures. Our evaluations demonstrate only a small penalty in path lengths and in the number of backup topologies required.
BibTeX:

@article{5331277,

  author = {Cicic, T. and Hansen, A.F. and Kvalbein, A. and Hartmann, M. and Martin, R. and Menth, M. and Gjessing, S. and Lysne, O.},

  title = {Relaxed multiple routing configurations: IP fast reroute for single and correlated failures},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2009},

  volume = {6},

  number = {1},

  pages = {1-14},

  doi = {http://dx.doi.org/10.1109/TNSM.2009.090301}

}

Cittadini, L. and Rimondini, M. and Vissicchio, S. and Corea, M. and Di Battista, G. From Theory to Practice: Efficiently Checking BGP Configurations for Guaranteed Convergence 2011 Network and Service Management, IEEE Transactions on
Vol. 8(4), pp. 387-400 
algorithms , bgp , network management , routing convergence and stability DOI  
Abstract: Internet Service Providers can enforce a fine-grained control of Interdomain Routing by cleverly configuring the Border Gateway Protocol. However, the price to pay for the flexibility of BGP is the lack of convergence guarantees. The literature on network protocol design has introduced several sufficient conditions that routing policies should satisfy to guarantee convergence. However, a methodology to systematically check BGP policies for convergence is still missing. This paper presents two fundamental contributions. First, we describe a heuristic algorithm that statically checks BGP configurations for guaranteed routing convergence. Our algorithm has several highly desirable properties: i) it improves on state-of-the-art algorithms by correctly reporting more configurations as stable, ii) it can be implemented efficiently enough to analyze Internet-scale configurations, iii) it is free from false positives, namely it never reports a potentially oscillating configuration as stable, and iv) it can help spot troublesome points in a detected oscillation. Second, we propose an architecture for a modular tool that exploits our algorithm to process native router configurations and report the presence of potential oscillations. Such a tool can effectively integrate syntactic checkers and assist operators in verifying configurations. We validate our approach using a prototype implementation and show that it scales well enough to enable Internet-scale convergence checks.
BibTeX:

@article{6070518,

  author = {Cittadini, L. and Rimondini, M. and Vissicchio, S. and Corea, M. and Di Battista, G.},

  title = {From Theory to Practice: Efficiently Checking BGP Configurations for Guaranteed Convergence},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2011},

  volume = {8},

  number = {4},

  pages = {387-400},

  doi = {http://dx.doi.org/10.1109/TNSM.2011.110311.100109}

}

Colitti, L. and Di Battista, G. and Patrignani, M. IPv6-in-IPv4 Tunnel Discovery: Methods and Experimental Results 2004 Network and Service Management, IEEE Transactions on
Vol. 1(1), pp. 30-38 
data security , encapsulation , fets , ip networks , internet , network topology , routing protocols , testing , transport protocols , tunneling DOI  
Abstract: Tunnels are widely used to improve security and to expand networks without having to deploy native infrastructure. They play an important role in the migration to IPv6, which relies on IPv6-in-IPv4 tunnels where native connectivity is not available. However, tunnels offer lower performance and are less reliable than native links. In this paper we introduce a number of techniques to detect, and collect information about, IPv6-in-IPv4 tunnels, and show how a known tunnel can be used as a "vantage point" to launch third-party tunnel-discovery explorations, scaling up the discovery process. We describe our Tunneltrace tool, which implements the proposed techniques, and validate them through extensive experimentation on the 6bone tunneled network, on native networks in Italy, the Netherlands, and Japan, and through the test boxes deployed worldwide by the RIPE NCC as part of the Test Traffic Measurements Service. We assess to what extent 6bone registry information is consistent with the actual network topology, and we provide the first experimental results on the current distribution of IPv6-in-IPv4 tunnels in the Internet, showing that even "native" networks reach more than 60 percent of all IPv6 prefixes through tunnels. Furthermore, we provide historical data on the migration to native IPv6, showing that the impact of tunnels in the IPv6 Internet did not significantly decrease over a six-month period. Finally, we briefly touch on the security issues posed by IPv6-in-IPv4 tunnels, discussing possible threats and countermeasures.
BibTeX:

@article{4623692,

  author = {Colitti, L. and Di Battista, G. and Patrignani, M.},

  title = {IPv6-in-IPv4 Tunnel Discovery: Methods and Experimental Results},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2004},

  volume = {1},

  number = {1},

  pages = {30-38},

  doi = {http://dx.doi.org/10.1109/TNSM.2004.4623692}

}

Combes, R. and Altman, Z. and Altman, E. Self-Organizing Relays: Dimensioning, Self-Optimization, and Learning 2012 Network and Service Management, IEEE Transactions on
Vol. 9(4), pp. 487-500 
ofdma , relay , load balancing , queuing theory , reinforcement learning , self configuration , self optimization , stability , stochastic approximation DOI  
Abstract: Relay stations are an important component of heterogeneous networks introduced in the LTE-Advanced technology as a means to provide very high capacity and QoS all over the cell area. This paper develops a self-organizing network (SON) feature to optimally allocate resources between backhaul and station-to-mobile links. Static and dynamic resource sharing mechanisms are investigated. For stationary ergodic traffic we provide a queuing model to calculate the optimal resource sharing strategy and the maximal capacity of the network analytically. When traffic is not stationary, we propose a load balancing algorithm to adapt both the resource sharing and the zones covered by the relays based on measurements. Convergence to an optimal configuration is proven using stochastic approximation techniques. Self-optimizing dynamic resource allocation is tackled using a Markov Decision Process model. Stability in the infinite buffer case, and blocking rate and file transfer time in the finite buffer case, are considered. For a scalable solution with a large number of relays, a well-chosen parameterized family of policies is considered, to be used as expert knowledge. Finally, a model-free approach is shown in which the network can derive the optimal parameterized policy, and convergence to a local optimum is proven.
BibTeX:

@article{6287521,

  author = {Combes, R. and Altman, Z. and Altman, E.},

  title = {Self-Organizing Relays: Dimensioning, Self-Optimization, and Learning},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2012},

  volume = {9},

  number = {4},

  pages = {487-500},

  doi = {http://dx.doi.org/10.1109/TNSM.2012.082512.120252}

}

Coucheney, Pierre and Maille, Patrick and Tuffin, Bruno Impact of Competition Between ISPs on the Net Neutrality Debate 2013 Network and Service Management, IEEE Transactions on
Vol. 10(4), pp. 425-433 
competitive analysis;internet;internet service providers;investment;nash equilibrium;pricing;network neutrality;game theory;pricing DOI  
BibTeX:

@article{6589032,

  author = {Coucheney, Pierre and Maille, Patrick and Tuffin, Bruno},

  title = {Impact of Competition Between ISPs on the Net Neutrality Debate},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2013},

  volume = {10},

  number = {4},

  pages = {425-433},

  doi = {http://dx.doi.org/10.1109/TNSM.2013.090313.120326}

}

Cridlig, V. and State, R. and Festor, O. Role-Based Access Control for XML enabled multi-protocol management gateways 2006 Network and Service Management, IEEE Transactions on
Vol. 3(1), pp. 22 -32 
management gateways , snmp , xml-based management , key management , security DOI  
Abstract: While security is often supported in standard management frameworks, it is of major importance in the management plane. In this paper we address the provisioning of a security "continuum" for management frameworks based on multi-protocol gateways. We provide an in-depth security extension of such a gateway using the Role-Based Access Control paradigm and show how to integrate our approach within a broader XML-based management framework. Two case studies are investigated: the first maps an XML-based RBAC policy to the SNMP access control model, while the second maps the same policy to CLI security levels. The target objective is to provide consistent access control policies not only locally on each device, regardless of the network management framework, but also globally throughout the managed domain.
BibTeX:

@article{4798304,

  author = {Cridlig, V. and State, R. and Festor, O.},

  title = {Role-Based Access Control for XML enabled multi-protocol management gateways},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2006},

  volume = {3},

  number = {1},

  pages = {22 -32},

  doi = {http://dx.doi.org/10.1109/TNSM.2006.4798304}

}

Croce, D. and Leonardi, E. and Mellia, M. Large-Scale Available Bandwidth Measurements: Interference in Current Techniques 2011 Network and Service Management, IEEE Transactions on
Vol. 8(4), pp. 361-374 
available bandwidth , distributed systems , measurements , mutual interference , scalability DOI  
Abstract: The end-to-end available bandwidth of an Internet path is a desirable information that can be exploited to optimize system performance. Several tools have been proposed in the past to estimate it. However, existing measurement techniques were not designed for large-scale deployments. In this paper we show that current tools do not properly work where multiple probing processes share a portion of a path. We provide experimental evidence to quantify the impact of mutual interference between measurements. We further analyze the characteristics of popular tools, quantifying (i) the impact of mutual interference, (ii) the total overhead imposed to the network and (iii) the intrusiveness of the measurement process in a large-scale scenario. Our goal is to effectively quantify the impact of concurrent measurements on current estimation techniques and to offer some simple guidelines for dimensioning a large-scale measurement system.
BibTeX:

@article{6070521,

  author = {Croce, D. and Leonardi, E. and Mellia, M.},

  title = {Large-Scale Available Bandwidth Measurements: Interference in Current Techniques},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2011},

  volume = {8},

  number = {4},

  pages = {361-374},

  doi = {http://dx.doi.org/10.1109/TNSM.2011.110311.110110}

}

Dabbagh, M. and Hamdaoui, B. and Guizani, M. and Rayes, A. Energy-Efficient Resource Allocation and Provisioning Framework for Cloud Data Centers 2015 Network and Service Management, IEEE Transactions on
Vol. 12(3), pp. 377-391 
clustering algorithms;energy consumption;google;measurement;memory management;servers;switches;cloud computing;data clustering;energy efficiency;wiener filtering;workload prediction DOI  
Abstract: Energy efficiency has recently become a major issue in large data centers due to financial and environmental concerns. This paper proposes an integrated energy-aware resource provisioning framework for cloud data centers. The proposed framework: i) predicts the number of virtual machine (VM) requests arriving at cloud data centers in the near future, along with the amount of CPU and memory resources associated with each of these requests, ii) provides accurate estimations of the number of physical machines (PMs) that cloud data centers need in order to serve their clients, and iii) reduces energy consumption of cloud data centers by putting unneeded PMs to sleep. Our framework is evaluated using real Google traces collected over a 29-day period from a Google cluster containing over 12,500 PMs. These evaluations show that our proposed energy-aware resource provisioning framework achieves substantial energy savings.
BibTeX:

@article{7111351,

  author = {Dabbagh, M. and Hamdaoui, B. and Guizani, M. and Rayes, A.},

  title = {Energy-Efficient Resource Allocation and Provisioning Framework for Cloud Data Centers},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2015},

  volume = {12},

  number = {3},

  pages = {377-391},

  doi = {http://dx.doi.org/10.1109/TNSM.2015.2436408}

}

Dahiphale, Devendra and Karve, Rutvik and Vasilakos, Athanasios V. and Liu, Huan and Yu, Zhiwei and Chhajer, Amit and Wang, Jianmin and Wang, Chaokun An Advanced MapReduce: Cloud MapReduce, Enhancements and Applications 2014 Network and Service Management, IEEE Transactions on
Vol. 11(1), pp. 101-115 
cloud computing;mapreduce;pipelining;spot market;stream processing DOI  
Abstract: Cloud computing has recently attracted great attention due to its provision of configurable computing resources. MapReduce (MR) is a popular framework for data-intensive distributed computing of batch jobs. MapReduce suffers from the following drawbacks: 1. It is sequential in its processing of the Map and Reduce phases. 2. Being cluster based, its scalability is relatively limited. 3. It does not support flexible pricing. 4. It does not support stream data processing. We describe Cloud MapReduce (CMR), which overcomes these limitations. Our results show that CMR is more efficient and runs faster than other implementations of the MR framework. In addition, we showcase how CMR can be further enhanced to: 1. Support stream data processing in addition to batch data by parallelizing the Map and Reduce phases through a pipelining model. 2. Support flexible pricing using Amazon Cloud's spot instances and deal with massive machine terminations caused by spot price fluctuations. 3. Improve throughput and speed up processing over traditional MR by more than 30% for large data sets. 4. Provide added flexibility and scalability by leveraging features of the cloud computing model. Click-stream analysis, real-time multimedia processing, time-sensitive analysis, and other stream processing applications can also be supported.
BibTeX:

@article{6805345,

  author = {Dahiphale, Devendra and Karve, Rutvik and Vasilakos, Athanasios V. and Liu, Huan and Yu, Zhiwei and Chhajer, Amit and Wang, Jianmin and Wang, Chaokun},

  title = {An Advanced MapReduce: Cloud MapReduce, Enhancements and Applications},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2014},

  volume = {11},

  number = {1},

  pages = {101-115},

  doi = {http://dx.doi.org/10.1109/TNSM.2014.031714.130407}

}

Dahmouni, H. and Girard, A. and Ouzineb, M. and Sanso, B. The Impact of Jitter on Traffic Flow Optimization in Communication Networks 2012 Network and Service Management, IEEE Transactions on
Vol. 9(3), pp. 279-292 
delay , ip network planning , qos , design , jitter model , multimedia services , optimization , traffic engineering DOI  
Abstract: Current network planning and design methods use the average delay, packet loss, and throughput as metrics to optimize network cost and performance. New multimedia applications, on the other hand, also have critical jitter requirements that are not taken into account by these methods. Here, we explore the impact on network performance of adding these jitter constraints. We use a fast jitter calculation model to solve the optimal routing problem for flows subject to jitter or delay constraints. We find that the optimal routing is very different for the two kinds of flows: they should be routed on different paths, and jitter-constrained flows should not be split over multiple paths, while the opposite conclusion holds for delay-constrained flows.
BibTeX:

@article{6205095,

  author = {Dahmouni, H. and Girard, A. and Ouzineb, M. and Sanso, B.},

  title = {The Impact of Jitter on Traffic Flow Optimization in Communication Networks},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2012},

  volume = {9},

  number = {3},

  pages = {279-292},

  doi = {http://dx.doi.org/10.1109/TNSM.2012.051712.110148}

}

Dalvandi, A. and Gurusamy, M. and Chua, K.C. Time-aware VMFlow Placement, Routing and Migration for Power Efficiency in Data Centers 2015 Network and Service Management, IEEE Transactions on
Vol. 12(3), pp. 349-362 
bandwidth;ports (computers);power demand;resource management;routing;servers;switches;bandwidth guarantee;power efficiency;routing;time-aware tenant requests;vm-migration;vm-placement DOI  
Abstract: Increased power usage and network performance variation due to best-effort bandwidth sharing significantly affect tenancy cost, cloud adoption, and data center efficiencies. In this article, we propose a novel time-aware request model which enables tenants to specify an estimated required time-duration, in addition to their required server resources for Virtual Machines (VMs) and network bandwidth for their communication. We investigate the VM-placement and routing problem, which allocates both server and network resources for the specified time-duration, to provide resource guarantees. Further, we exploit VM-migration, while considering its power consumption overhead, to improve power saving and resource utilization. Using the multi-component utilization-based power model, we formulate the problem as an optimization problem that maximizes the acceptance rate while consuming as little power as possible. We develop fast online heuristics that allocate resources for requests, considering their duration and bandwidth demand. We also develop migration policies augmenting these heuristics. For migration heuristics, we propose server-migration and switch-migration approaches, which migrate the VMs between the powered-on servers only if their migrations result in turning off at least one server and switch, respectively. We demonstrate the effectiveness of the proposed heuristics in terms of power saving, acceptance ratio, and migration overhead using comprehensive simulation results.
BibTeX:

@article{7122351,

  author = {Dalvandi, A. and Gurusamy, M. and Chua, K.C.},

  title = {Time-aware VMFlow Placement, Routing and Migration for Power Efficiency in Data Centers},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2015},

  volume = {12},

  number = {3},

  pages = {349-362},

  doi = {http://dx.doi.org/10.1109/TNSM.2015.2443838}

}

Dandapat, S. and Mitra, B. and Choudhury, R. and Ganguly, N. Smart Association Control in Wireless Mobile Environment Using Max-Flow 2012 Network and Service Management, IEEE Transactions on
Vol. 9(1), pp. 73-86 
load balancing , association control , fairness , max-flow , wireless internet DOI  
Abstract: WiFi clients must associate to a specific Access Point (AP) to communicate over the Internet. Current association methods are based on maximum Received Signal Strength Index (RSSI), implying that a client associates to the strongest AP around it. This is a simple scheme that has performed well in purely distributed settings. Modern wireless networks, however, are increasingly being connected by a wired backbone. The backbone allows for out-of-band communication among APs, opening up opportunities for improved protocol design. This paper takes advantage of this opportunity through a coordinated client association scheme where APs consider a global view of the network and decide on the optimal client-AP association. We show that such an association outperforms RSSI-based schemes in several scenarios, while remaining practical and scalable for wide-scale deployment. We also show that optimal association is an NP-hard problem and that our max-flow based heuristic is a promising solution.
BibTeX:

@article{6094145,

  author = {Dandapat, S. and Mitra, B. and Choudhury, R. and Ganguly, N.},

  title = {Smart Association Control in Wireless Mobile Environment Using Max-Flow},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2012},

  volume = {9},

  number = {1},

  pages = {73-86},

  doi = {http://dx.doi.org/10.1109/TNSM.2011.113011.100098}

}

Deshpande, S. and Thottan, M. and Sikdar, B. An online scheme for the isolation of BGP misconfiguration errors 2008 Network and Service Management, IEEE Transactions on
Vol. 5(2), pp. 78-90 
bgp , anomaly detection , misconfiguration DOI  
Abstract: Being the primary interdomain routing protocol, the border gateway protocol (BGP) is the singular means of path establishment across the Internet. Therefore, misconfiguration errors in BGP routers result in failure to establish paths, which in turn can cause several networks to become unreachable. In this paper, we first analyze data from recent BGP tables to show that misconfiguration errors occur very frequently in the Internet today. We then show, theoretically and using real-world events, the impact of these errors on routing stability. A scheme for real-time isolation of large-scale BGP misconfiguration events is then proposed in this paper. Our methodology is based on statistical techniques and is evaluated using data from past well-known misconfiguration events. We show the effectiveness of our method as compared to the current state-of-the-art.
BibTeX:

@article{4694133,

  author = {Deshpande, S. and Thottan, M. and Sikdar, B.},

  title = {An online scheme for the isolation of BGP misconfiguration errors},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2008},

  volume = {5},

  number = {2},

  pages = {78-90},

  doi = {http://dx.doi.org/10.1109/TNSM.2008.021101}

}

Devi, UmaMaheswari C. and Kalle, Ritesh and Kalyanaraman, Shivkumar Multi-Tiered, Burstiness-Aware Bandwidth Estimation and Scheduling for VBR Video Flows 2013 Network and Service Management, IEEE Transactions on
Vol. 10(1), pp. 29-42 
qoe , qos , vbr , video streaming , burstiness , scheduling DOI  
Abstract: The increasing demand for high-quality streaming video delivered to mobile clients necessitates efficient bandwidth utilization and allocation at not only the wireless channel but also the wired backhaul of broadband cellular networks. In this context, we propose techniques for increasing the link utilization and enhancing the quality-of-experience (QoE) for end users while multiplexing video streams over a wired link. For increasing the link utilization, we present a generic multi-tiered bandwidth estimation and scheduling scheme that can guarantee lower bounds on loss for flows at lower tiers. This scheme can be used for supporting heterogeneous loss classes, providing differentiated losses for different layers of video streams, or providing per-flow guarantees using lower aggregate bandwidth than schemes proposed in the literature. For enhancing the end-user QoE, we present a scheme for minimizing correlated losses and improving the smoothness of video quality by minimizing the maximum loss suffered by any logical unit of a stream and also the variability in loss across the length of the stream. In extensive simulations performed using video sources encoded in various formats, our multi-tiered scheme could lower the estimated bandwidth and improve statistical multiplexing gains by up to 25% with two and three classes and over 30% in the context of providing per-flow guarantees and differentiated loss for different layers. Our loss-minimization approach could lower the maximum loss by a factor of five and the loss variance by more than an order of magnitude.
BibTeX:

@article{6407139,

  author = {Devi, UmaMaheswari C. and Kalle, Ritesh and Kalyanaraman, Shivkumar},

  title = {Multi-Tiered, Burstiness-Aware Bandwidth Estimation and Scheduling for VBR Video Flows},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2013},

  volume = {10},

  number = {1},

  pages = {29-42},

  doi = {http://dx.doi.org/10.1109/TNSM.2012.092712.120240}

}

Dharmaraja, S. and Jindal, V. and Varshney, U. Reliability and Survivability Analysis for UMTS Networks: An Analytical Approach 2008 Network and Service Management, IEEE Transactions on
Vol. 5(3), pp. 132-142 
reliability, survivability, markov chain, reliability block diagram, hierarchical modeling, umts networks. 3g mobile communication , markov processes , cellular radio , fault tolerance , telecommunication network reliability DOI  
Abstract: Reliability and survivability are two important attributes of cellular networks. In the existing literature, these measures have been studied through simulation. In this paper, we construct an analytical model to determine the reliability and survivability attributes of third-generation and beyond Universal Mobile Telecommunications System (UMTS) networks. The hierarchical architecture of UMTS networks is modeled using stochastic models such as Markov chains, semi-Markov processes, reliability block diagrams, and Markov reward models to obtain these attributes. The model can be tailored to evaluate the reliability and survivability attributes of other beyond-third-generation cellular networks such as All-IP UMTS networks and CDMA2000. Numerical results illustrate the applicability of the proposed analytical model. It is observed that incorporating fault tolerance increases network reliability and survivability. The results are useful for reliable topological design of UMTS networks; in addition, they can help guarantee network connectivity after any failure without over-dimensioning the networks, and they can inform the design and evaluation of UMTS infrastructures.
BibTeX:

@article{4805130,

  author = {Dharmaraja, S. and Jindal, V. and Varshney, U.},

  title = {Reliability and Survivability Analysis for UMTS Networks: An Analytical Approach},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2008},

  volume = {5},

  number = {3},

  pages = {132-142},

  doi = {http://dx.doi.org/10.1109/TNSM.2009.031101}

}

Diao, Yixin and Eskesen, Frank and Froehlich, Steven and Hellerstein, Joseph L. and Keller, Alexander and Spainhower, Lisa F. and Surendra, Maheswaran Service level management: A dynamic discovery and optimization approach 2004 Network and Service Management, IEEE Transactions on
Vol. 1(2), pp. 83-91 
dbms instrumentation , online optimization , common information model (cim) , service level management DOI  
Abstract: Optimizing configuration parameters for achieving service level objectives is time-consuming and skills-intensive. This paper proposes a generic approach to automating this task. By generic, we mean that the approach is relatively independent of the target system for which the optimization is done. Our approach uses online adjustment of configuration parameters to discover the system's performance characteristics. Doing so creates two challenges: (1) handling interdependencies between configuration parameters and (2) minimizing the deleterious effects on production workload while the optimization is underway. Our approach addresses (1) by including in the architecture a rule-based component that handles interdependencies between configuration parameters. For (2), we use a feedback mechanism for online optimization that searches the parameter space in a way that generally avoids poor performance at intermediate steps. Our studies of a DB2 Universal Database Server under an e-commerce workload indicate that our approach is effective in practice.
BibTeX:

@article{4798293,

  author = {Diao, Yixin and Eskesen, Frank and Froehlich, Steven and Hellerstein, Joseph L. and Keller, Alexander and Spainhower, Lisa F. and Surendra, Maheswaran},

  title = {Service level management: A dynamic discovery and optimization approach},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2004},

  volume = {1},

  number = {2},

  pages = {83-91},

  doi = {http://dx.doi.org/10.1109/TNSM.2004.4798293}

}

Diao, Y. and Lam, L. and Shwartz, L. and Northcutt, D. Modeling the Impact of Service Level Agreements During Service Engagement 2014 Network and Service Management, IEEE Transactions on
Vol. 11(4), pp. 431-440 
business;calendars;complexity theory;data models;optimization;predictive models;standards;it service management;queueing model;service delivery cost;service engagement;service level agreement DOI  
Abstract: One of the key promises of IT strategic outsourcing is to deliver greater IT service management through lower cost. However, this raises a critical question: How can one predict the service delivery cost that will deliver the promised service level agreements (SLAs)? This is particularly challenging since such prediction is mostly needed during the service engagement phase where the SLAs and the delivery cost are negotiated, and the detailed service modeling data are not available. In this paper, we propose a modeling framework that uses queueing-model-based approaches to estimate the impact of SLAs on the delivery cost. We further propose a set of approximation techniques to address the complexity of service delivery and an optimization model to predict the delivery cost subject to service-level constraints and service stability conditions. We demonstrate the applicability of the proposed methodology using data from a large IT service delivery environment.
BibTeX:

@article{6979254,

  author = {Diao, Y. and Lam, L. and Shwartz, L. and Northcutt, D.},

  title = {Modeling the Impact of Service Level Agreements During Service Engagement},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2014},

  volume = {11},

  number = {4},

  pages = {431-440},

  doi = {http://dx.doi.org/10.1109/TNSM.2014.2378779}

}

Dietrich, D. and Rizk, A. and Papadimitriou, P. Multi-Provider Virtual Network Embedding With Limited Information Disclosure 2015 Network and Service Management, IEEE Transactions on
Vol. 12(2), pp. 188-201 
availability;bandwidth;indium phosphide;network topology;peer-to-peer computing;substrates;topology;network virtualization;topology abstraction;virtual network embedding;virtualized infrastructures DOI  
Abstract: The ever-increasing need to diversify the Internet has recently revived the interest in network virtualization. Wide-area virtual network (VN) deployment raises the need for VN embedding (VNE) across multiple Infrastructure Providers (InPs), due to the InP's limited geographic footprint. Multi-provider VNE, in turn, requires a layer of indirection, interposed between the Service Providers and the InPs. Such brokers, usually known as VN Providers, are expected to have very limited knowledge of the physical infrastructure, since InPs will not be willing to disclose detailed information about their network topology and resource availability to third parties. Such information disclosure policies entail significant implications on resource discovery and allocation. In this paper, we study the challenging problem of multi-provider VNE with limited information disclosure (LID). In this context, we initially investigate the visibility of VN Providers on substrate network resources and question the suitability of topology-based requests for VNE. Subsequently, we present linear programming formulations for: (i) the partitioning of traffic matrix based VN requests into segments mappable to InPs, and (ii) the mapping of VN segments into substrate network topologies. VN request partitioning is carried out under LID, i.e., VN Providers access only information which is not deemed confidential by InPs. We further investigate the suboptimality of LID on VNE against a "best-case" scenario where the complete network topology and resource availability information is available to VN Providers.
BibTeX:

@article{7072477,

  author = {Dietrich, D. and Rizk, A. and Papadimitriou, P.},

  title = {Multi-Provider Virtual Network Embedding With Limited Information Disclosure},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2015},

  volume = {12},

  number = {2},

  pages = {188-201},

  doi = {http://dx.doi.org/10.1109/TNSM.2015.2417652}

}

Dimitriou, S. and Tsioliaridou, A. and Tsaoussidis, V. Introducing size-oriented dropping policies as QoS-supportive functions 2010 Network and Service Management, IEEE Transactions on
Vol. 7(1), pp. 14-27 
active queue management, fairness, service differentiation internet , bandwidth allocation , data communication , differentiation , probability , quality of service , queueing theory , telecommunication traffic DOI  
Abstract: The continuous increase of Internet users worldwide, as well as the extensive need to support real-time traffic and bulk data transfers simultaneously, has directed research towards service differentiation schemes. These schemes either propose techniques that provide users with the necessary quality guarantees or follow a "better-than-best-effort" approach to satisfy broadly the varying needs of different applications. We depart from our new service principle called Less Impact Better Service (LIBS) and propose a novel service differentiation method, namely size-oriented dropping policies, which uses packet size to categorize time-sensitive from delay-tolerant flows and prioritize packet dropping probability, accordingly. Unlike existing proposals, the distinction of flows is dynamic and the notion of packet size is abstract and comparative; a packet size is judged as a unit within a dynamic sample space, that is, current queue occupancy. We evaluate size-oriented dropping policies both analytically and experimentally; we observe a significant increase on the perceived quality of real-time applications. Delaysensitive flows increase their bandwidth share, to reach a state of system fairness, regulating the dominant behavior of bulk-data flows.
BibTeX:

@article{5412870,

  author = {Dimitriou, S. and Tsioliaridou, A. and Tsaoussidis, V.},

  title = {Introducing size-oriented dropping policies as QoS-supportive functions},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2010},

  volume = {7},

  number = {1},

  pages = {14-27},

  doi = {http://dx.doi.org/10.1109/TNSM.2010.I9P0313}

}

Dorling, K. and Messier, G.G. and Valentin, S. and Magierowski, S. Minimizing the Net Present Cost of Deploying and Operating Wireless Sensor Networks 2015 Network and Service Management, IEEE Transactions on
Vol. 12(3), pp. 511-525 
economic indicators;hardware;maintenance engineering;minimization;optimization;relays;wireless sensor networks;wireless sensor network (wsn);budget;cost;deployment;lifetime;net present cost (npc);net present value (npv) DOI  
Abstract: Minimizing the cost of deploying and operating a wireless sensor network (WSN) involves deciding how to partition a budget between competing expenses such as node hardware, energy, and labor. Most commercial network operators account for interest rates in their budgeting exercises, providing a financial incentive to defer some costs until a later time. In this paper, we propose a net present cost (NPC) model for WSN capital and operating expenses that accounts for interest rates. Our model optimizes the number, size, and spacing between expenditures in order to minimize the NPC required for the network to achieve a desired operational lifetime. In general, this optimization problem is non-convex, but if the spacing between expenditures is linearly proportional to the size of the expenditures, and the number of maintenance cycles is known in advance, the problem becomes convex and can be solved to global optimality. If non-deferrable recurring costs are low, then evenly spacing the expenditures can provide near-optimal results. With the provided models and methods, network operators can now derive a payment schedule to minimize NPC while accounting for various operational parameters. The numerical examples show substantial cost benefits under practical assumptions.
BibTeX:

@article{7175032,

  author = {Dorling, K. and Messier, G.G. and Valentin, S. and Magierowski, S.},

  title = {Minimizing the Net Present Cost of Deploying and Operating Wireless Sensor Networks},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2015},

  volume = {12},

  number = {3},

  pages = {511-525},

  doi = {http://dx.doi.org/10.1109/TNSM.2015.2464071}

}

Duan, Qiang and Yan, Yuhong and Vasilakos, Athanasios V. A Survey on Service-Oriented Network Virtualization Toward Convergence of Networking and Cloud Computing 2012 Network and Service Management, IEEE Transactions on
Vol. 9(4), pp. 373-392 
network virtualization , cloud computing , network-as-a-service (naas) , the service-oriented architecture DOI  
Abstract: The crucial role that networking plays in Cloud computing calls for a holistic vision that allows combined control, management, and optimization of both networking and computing resources in a Cloud environment, which leads to a convergence of networking and Cloud computing. Network virtualization is being adopted in both telecommunications and the Internet as a key attribute for the next generation networking. Virtualization, as a potential enabler of profound changes in both communications and computing domains, is expected to bridge the gap between these two fields. Service-Oriented Architecture (SOA), when applied in network virtualization, enables a Network-as-a-Service (NaaS) paradigm that may greatly facilitate the convergence of networking and Cloud computing. Recently the application of SOA in network virtualization has attracted extensive interest from both academia and industry. Although numerous relevant research works have been published, they are currently scattered across multiple fields in the literature, including telecommunications, computer networking, Web services, and Cloud computing. In this article we present a comprehensive survey on the latest developments in service-oriented network virtualization for supporting Cloud computing, particularly from a perspective of network and Cloud convergence through NaaS. Specifically, we first introduce the SOA principle and review recent research progress on applying SOA to support network virtualization in both telecommunications and the Internet. Then we present a framework of network-Cloud convergence based on service-oriented network virtualization and give a survey on key technologies for realizing NaaS, mainly focusing on state of the art of network service description, discovery, and composition. We also discuss the challenges brought in by network-Cloud convergence to these technologies and research opportunities available in these areas, with a hope to arouse the research community's interest in this emerging interdisciplinary field.
BibTeX:

@article{6375901,

  author = {Duan, Qiang and Yan, Yuhong and Vasilakos, Athanasios V.},

  title = {A Survey on Service-Oriented Network Virtualization Toward Convergence of Networking and Cloud Computing},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2012},

  volume = {9},

  number = {4},

  pages = {373-392},

  doi = {http://dx.doi.org/10.1109/TNSM.2012.113012.120310}

}

Eramo, V. and Listanti, M. and Cianfrani, A. Design and evaluation of a new multi-path incremental routing algorithm on software routers 2008 Network and Service Management, IEEE Transactions on
Vol. 5(4), pp. 188-203 
dijkstra algorithm, multi-path dynamic shortest path, open source code, quagga routing software, ospf web services , public domain software , routing protocols DOI  
Abstract: In this paper we analyze intra-domain routing protocols improvements to support new features required by realtime services. In particular we introduce OSPF fast convergence and highlight the advantage of using a dynamic algorithm instead of the Dijkstra one to compute the shortest paths. Then we propose a new multi-path dynamic algorithm which uses multipath information to make a fast determination about the new shortest paths when a link failure occurs, reducing this way the network re-convergence time. To evaluate the proposed algorithm performance we have implemented it in the OSPF code of the Quagga open-source routing software. We compare our own algorithm with three different dynamic algorithms, like the one implemented in Cisco routers and the two others, well known in literature, proposed by Narvaez and Ramalingam-Reps. We show how, by exploiting multi-path information, our algorithm performs, in many case studies, better than the above algorithms, especially in a link failure scenario.
BibTeX:

@article{5010443,

  author = {Eramo, V. and Listanti, M. and Cianfrani, A.},

  title = {Design and evaluation of a new multi-path incremental routing algorithm on software routers},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2008},

  volume = {5},

  number = {4},

  pages = {188-203},

  doi = {http://dx.doi.org/10.1109/TNSM.2009.041101}

}

Erjongmanee, S. and Ji, C. Large-Scale Network-Service Disruption: Dependencies and External Factors 2011 Network and Service Management, IEEE Transactions on
Vol. 8(4), pp. 375-386 
internet application , heterogeneous databases , human information processing DOI  
Abstract: Large-scale service disruptions in communication have been observed in the past but are not well-understood. The goal of this work is to gain a better understanding of disruptions in communication services in response to large-scale external disturbances such as hurricanes. In particular, Hurricane Ike is drawn as a case study, and heterogeneous data is obtained from networks, storm, and system administrators. Using the data, we first study network-wide disruptions and dependences among different unreachable subnets. Our findings show that 120 out of 230 subnets in our data set were unreachable, among which 88 subnets became unreachable dependently at a time scale of seconds or less than three minutes. We then study dependencies between communication service-disruptions and external factors such as weather and power. Unreachable subnets are found to be weakly correlated with the storm. Power outages and lack of spare power are reported to be certain causes of communication disruptions. New research issues emerge for information acquisition across communication and power infrastructures as well as weather, and information sharing among organizations.
BibTeX:

@article{6070520,

  author = {Erjongmanee, S. and Ji, C.},

  title = {Large-Scale Network-Service Disruption: Dependencies and External Factors},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2011},

  volume = {8},

  number = {4},

  pages = {375-386},

  doi = {http://dx.doi.org/10.1109/TNSM.2011.110311.110106}

}

Fallon, L. and O'Sullivan, D. The Aesop Approach for Semantic-Based End-User Service Optimization 2014 Network and Service Management, IEEE Transactions on
Vol. 11(2), pp. 220-234 
computer network management;home networks;knowledge based systems;mobility management (mobile radio);ontologies (artificial intelligence);quality of experience;aesop approach;aesop engine;autonomic end-user service optimization;compliance levels;end-user service analysis;end-user service delivery optimization;end-user service management domain;end-user service session context monitoring;holistic model;home area network test bed;knowledge base;management systems;optimization ontology;quality of experience;self-contained models;semantic algorithms;semantic-based end-user service optimization;semantic-based techniques;service delivery networks;context;knowledge based systems;monitoring;ontologies;optimization;semantics;unified modeling language;end-user services;autonomic;quality of experience;quality of service;semantic DOI  
Abstract: The need to autonomically optimize end-user service experience in near real time has been identified in the literature in recent years. Management systems that monitor end-user service session context exist but approaches that estimate end-user service experience from session context do not analyze the compliance of that experience with user expectations. Approaches that optimize end-user service delivery are not applicable to arbitrary services; they either optimize specific service types or use general mechanisms that do not consider service experience. The lack of a holistic model for end-user service management is a barrier to autonomic end-user service optimization. This paper presents Aesop, an approach addressing autonomic optimization of end-user service delivery using semantic-based techniques. Its knowledge base uses the End-User Service Analysis and Optimization ontology, which models the end-user service management domain and partitions knowledge that varies over time for efficient access. The Aesop Engine executes an autonomic loop in near real time, which runs semantic algorithms to monitor sessions, analyze their compliance with expectations, and plan and execute optimizations on service delivery networks. The algorithms are efficient because they operate on small partitioned subsets of the Knowledge Base held as separate self-contained models at run time. An Aesop implementation was evaluated on a home area network test bed where compliance of service sessions with expectations when optimization was active was compared with compliance of an identical set of sessions when optimization was inactive. Significant improvements were observed on compliance levels of high priority sessions in all experimental scenarios, with compliance levels more than doubled in some cases.
BibTeX:

@article{6817575,

  author = {Fallon, L. and O'Sullivan, D.},

  title = {The Aesop Approach for Semantic-Based End-User Service Optimization},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2014},

  volume = {11},

  number = {2},

  pages = {220-234},

  doi = {http://dx.doi.org/10.1109/TNSM.2014.2321784}

}

Fang, Shuo and Foh, Chuan Heng and Aung, Khin Mi Mi Differentiated Congestion Management of Data Traffic for Data Center Ethernet 2011 Network and Service Management, IEEE Transactions on
Vol. 8(4), pp. 322-333 
ethernet congestion management , computer network performance , storage area networks DOI  
Abstract: This paper aims at designing a congestion and priority solution for Ethernet congestion management. Following the popular approach that uses a cooperation of an Additive Increase and Multiplicative Decrease (AIMD) based rate limiter and Explicit Congestion Notification (ECN) active queue management to combat congestions in Ethernet, the proposal considers differentiated AIMD settings for rate limiters to achieve congestion control differentiation for traffic of different priorities. We illustrate that while the operations of AIMD and ECN are independent, by using different AIMD settings, we can achieve differentiated control of bandwidth utilization. We develop a control theoretic analytical model to study the effectiveness of our proposed method. Moreover, we implement our proposed method in OMNET++ simulator to conduct simulation experiments. Our analytical and simulation results both indicate the effectiveness of bandwidth ratio differentiation.
BibTeX:

@article{6092404,

  author = {Fang, Shuo and Foh, Chuan Heng and Aung, Khin Mi Mi},

  title = {Differentiated Congestion Management of Data Traffic for Data Center Ethernet},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2011},

  volume = {8},

  number = {4},

  pages = {322-333},

  doi = {http://dx.doi.org/10.1109/TNSM.2011.110911.100076}

}

Faraci, G. and Schembra, G. An Analytical Model to Design and Manage a Green SDN/NFV CPE Node 2015 Network and Service Management, IEEE Transactions on
Vol. 12(3), pp. 435-450 
analytical models;computer architecture;hardware;markov processes;servers;software;virtualization;markov modeling;nfv;network function allocation;qos;sdn;management costs;network function allocation DOI  
Abstract: In the last few years, SDN and NFV have been introduced with the potential to change the ossified Internet paradigm, with the final goal of creating a more agile and flexible network, at the same time reducing both CAPEX and OPEX costs. For this reason, a lot of research efforts have been devoted to optimize the implementation of these technologies, also inheriting experience from data center management. However, orchestration and management of SDN/NFV nodes present new challenges with respect to data center management, mainly due to the telecommunications context where NFV resides. With this in mind, the target of this paper is to define a management model for NFV customers and service providers, a green policy of the customer premises equipment (CPE) nodes, and an analytical model to support their design. The model is then applied to a case study to demonstrate how it can be used to optimize system performance and choose the most important parameters characterizing the design of a CPE node.
BibTeX:

@article{7160778,

  author = {Faraci, G. and Schembra, G.},

  title = {An Analytical Model to Design and Manage a Green SDN/NFV CPE Node},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2015},

  volume = {12},

  number = {3},

  pages = {435-450},

  doi = {http://dx.doi.org/10.1109/TNSM.2015.2454293}

}

Foteinos, V. and Tsagkaris, K. and Peloso, P. and Ciavaglia, L. and Demestichas, P. Operator-Friendly Traffic Engineering in IP/MPLS Core Networks 2014 Network and Service Management, IEEE Transactions on
Vol. 11(3), pp. 333-349 
algorithm design and analysis;energy consumption;ip networks;monitoring;multiprotocol label switching;nanoelectromechanical systems;routing;core networks;energy efficiency;load balancing;multipath;operator;traffic engineering DOI  
Abstract: Global Internet traffic has increased tremendously and this trend is anticipated to continue for the next years. Consequently, network's performance and users' satisfaction cannot be guaranteed. The necessity for adequate bandwidth to carry this increased traffic has led to the addition of resources to the current core network and service infrastructures, thus affecting the levels of the consumed energy and generally resulting in higher OPerational EXpenditures (OPEX). Obviously, tackling such growth requires sophisticated Traffic Engineering (TE) and associated management schemes. On the one hand, TE mechanisms should be intelligent and self-adaptive so that to take fast and reliable decisions with respect to traffic allocation into network paths. On the other hand, the management of this intelligence cannot rely on the traditional command and control paradigm. Contrarily, it needs to be based on systems that hide technology complexity from the operator and relax him from the rather slow and error prone task of manual configuration. Accordingly, in this work, we present an operator-friendly management framework that is used to drive the decisions of an autonomous algorithm for TE in IP/MPLS core networks. Through the framework, the operator is able to select from a set of high level policies, which the proposed TE algorithm needs to take into account while seeking for routing configurations during its autonomous operation. The behavior of the proposed TE algorithm under the operator choices is experimented through numerous simulations and extensive test cases. Results showcase the efficiency and optimal performance of the algorithm, compared to other TE solutions proposed in literature, while at the same time they validate the framework's friendliness towards operator.
BibTeX:

@article{6878416,

  author = {Foteinos, V. and Tsagkaris, K. and Peloso, P. and Ciavaglia, L. and Demestichas, P.},

  title = {Operator-Friendly Traffic Engineering in IP/MPLS Core Networks},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2014},

  volume = {11},

  number = {3},

  pages = {333-349},

  doi = {http://dx.doi.org/10.1109/TNSM.2014.2346085}

}

Francois, J. and Abdelnur, H. and State, R. and Festor, O. Machine Learning Techniques for Passive Network Inventory 2010 Network and Service Management, IEEE Transactions on
Vol. 7(4), pp. 244-257 
fingerprinting , svm , inventory management , syntactic tree fingerprint identification , learning (artificial intelligence) , pattern classification , security of data , signalling protocols , support vector machines DOI  
Abstract: Being able to fingerprint devices and services, i.e., remotely identify running code, is a powerful service for both security assessment and inventory management. This paper describes two novel fingerprinting techniques supported by isomorphic based distances which are adapted for measuring the similarity between two syntactic trees. The first method leverages the support vector machines paradigm and requires a learning stage. The second method operates in an unsupervised manner thanks to a new classification algorithm derived from the ROCK and QROCK algorithms. It provides an efficient and accurate classification. We highlight the use of such classification techniques for identifying the remote running applications. The approaches are validated through extensive experimentations on SIP (Session Initiation Protocol) for evaluating the impact of the different parameters and identifying the best configuration before applying the techniques to network traces collected by a real operator.
BibTeX:

@article{5668980,

  author = {Francois, J. and Abdelnur, H. and State, R. and Festor, O.},

  title = {Machine Learning Techniques for Passive Network Inventory},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2010},

  volume = {7},

  number = {4},

  pages = {244-257},

  doi = {http://dx.doi.org/10.1109/TNSM.2010.1012.0352}

}

Francois, F. and Wang, N. and Moessner, K. and Georgoulas, S. Optimizing Link Sleeping Reconfigurations in ISP Networks with Off-Peak Time Failure Protection 2013 Network and Service Management, IEEE Transactions on
Vol. 10(2), pp. 176-188 
energy-aware network management , green networks , link sleeping , network robustness , traffic engineering DOI  
Abstract: Energy consumption in ISP backbone networks has been rapidly increasing with the advent of increasingly bandwidth-hungry applications. Network resource optimization through sleeping reconfiguration and rate adaptation has been proposed for reducing energy consumption when the traffic demands are at their low levels. It has been observed that many operational backbone networks exhibit regular diurnal traffic patterns, which offers the opportunity to apply simple time-driven link sleeping reconfigurations for energy-saving purposes. In this work, an efficient optimization scheme called Time-driven Link Sleeping (TLS) is proposed for practical energy management which produces an optimized combination of the reduced network topology and its unified off-peak configuration duration in daily operations. Such a scheme significantly eases the operational complexity at the ISP side for energy saving, but without resorting to complicated online network adaptations. The GÉANT network and its real traffic matrices were used to evaluate the proposed TLS scheme. Simulation results show that up to 28.3% energy savings can be achieved during off-peak operation without network performance deterioration. In addition, considering the potential risk of traffic congestion caused by unexpected network failures based on the reduced topology during off-peak time, we further propose a robust TLS scheme with Single Link Failure Protection (TLS-SLFP) which aims to achieve an optimized trade-off between network robustness and energy efficiency performance.
BibTeX:

@article{6317103,

  author = {Francois, F. and Wang, N. and Moessner, K. and Georgoulas, S.},

  title = {Optimizing Link Sleeping Reconfigurations in ISP Networks with Off-Peak Time Failure Protection},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2013},

  volume = {10},

  number = {2},

  pages = {176-188},

  doi = {http://dx.doi.org/10.1109/TNSM.2012.092412.120295}

}

Francois, F. and Ning Wang and Moessner, K. and Georgoulas, S. and de Oliveira Schmidt, R. Leveraging MPLS Backup Paths for Distributed Energy-Aware Traffic Engineering 2014 Network and Service Management, IEEE Transactions on
Vol. 11(2), pp. 235-249 
multiprotocol label switching;quality of service;telecommunication links;telecommunication network reliability;telecommunication network routing;telecommunication network topology;telecommunication power management;telecommunication traffic;abilene;gbp;géant;mpls backup paths;backbone networks;energy efficiency;energy saving;energy-aware traffic engineering;green backup paths;link failure protection;multiprotocol label switching;network topologies;packet delays;point-of-presence representation;quality-of-service;router;delays;energy consumption;green products;multiprotocol label switching;network topology;optimization;quality of service;green networks;mpls;backup paths;distributed;energy efficiency;failure protection;online;traffic engineering DOI  
Abstract: Backup paths are usually pre-installed by network operators to protect against single link failures in backbone networks that use multi-protocol label switching. This paper introduces a new scheme called Green Backup Paths (GBP) that intelligently exploits these existing backup paths to perform energy-aware traffic engineering without adversely impacting the primary role of these backup paths of preventing traffic loss upon single link failures. This is in sharp contrast to most existing schemes that tackle energy efficiency and link failure protection separately, resulting in substantially high operational costs. GBP works in an online and distributed fashion, where each router periodically monitors its local traffic conditions and cooperatively determines how to reroute traffic so that the highest number of physical links can go to sleep for energy saving. Furthermore, our approach maintains quality-of-service by restricting the use of long backup paths for failure protection only, and therefore, GBP avoids substantially increased packet delays. GBP was evaluated on the point-of-presence representation of two publicly available network topologies, namely, GÉANT and Abilene, and their real traffic matrices. GBP was able to achieve significant energy saving gains, which are always within 15% of the theoretical upper bound.
BibTeX:

@article{6811161,

  author = {Francois, F. and Ning Wang and Moessner, K. and Georgoulas, S. and de Oliveira Schmidt, R.},

  title = {Leveraging MPLS Backup Paths for Distributed Energy-Aware Traffic Engineering},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2014},

  volume = {11},

  number = {2},

  pages = {235-249},

  doi = {http://dx.doi.org/10.1109/TNSM.2014.2321839}

}

Francois, P. and Bonaventure, O. and Decraene, B. and Coste, P.-A. Avoiding Disruptions During Maintenance Operations on BGP Sessions 2007 Network and Service Management, IEEE Transactions on
Vol. 4(3), pp. 1-11 
costs , frequency , internet , lab-on-a-chip , manuals , performance evaluation , routing , scheduling , telecommunication network topology , virtual private networks internet , routing protocols DOI  
Abstract: This paper presents a solution aimed at avoiding losses of connectivity when an eBGP peering link is shut down by an operator for maintenance. Currently, shutting down an eBGP session can lead to transient losses of connectivity even though alternate paths are available at the borders of the network. This is very unfortunate, as ISPs face increasingly stringent service level agreements, while maintenance operations are predictable, so there is time to adapt to the change and preserve compliance with the service level agreement.
BibTeX:

@article{4489643,

  author = {Francois, P. and Bonaventure, O. and Decraene, B. and Coste, P.-A.},

  title = {Avoiding Disruptions During Maintenance Operations on BGP Sessions},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2007},

  volume = {4},

  number = {3},

  pages = {1-11},

  doi = {http://dx.doi.org/10.1109/TNSM.2007.021102}

}

Franke, U. Optimal IT Service Availability: Shorter Outages, or Fewer? 2012 Network and Service Management, IEEE Transactions on
Vol. 9(1), pp. 22-33 
sla management , service level agreements , availability , fault management , optimization techniques , policy-based management DOI  
Abstract: High enterprise IT service availability is a key success factor throughout many industries. While understanding of the economic importance of availability management is becoming more widespread, the implications for management of Service Level Agreements (SLAs) and thinking about availability risk management are just beginning to unfold. This paper offers a framework within which to think about availability management, highlighting the importance of variance of outage costs. The importance of variance is demonstrated using simulations on existing data sets of revenue data. An important implication is that when outage costs are proportional to outage duration, more but shorter outages should be preferred to fewer but longer, in order to minimize variance. Furthermore, two archetypal cases where the cost of an outage depends non-linearly on its duration are considered. An optimal outage length is derived, and some guidance is also given for its application when the variance of hourly downtime costs is considered. The paper is concluded with a discussion about the feasibility of the method, its practitioner relevance and its implications for SLA management.
BibTeX:

@article{6092407,

  author = {Franke, U.},

  title = {Optimal IT Service Availability: Shorter Outages, or Fewer?},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2012},

  volume = {9},

  number = {1},

  pages = {22-33},

  doi = {http://dx.doi.org/10.1109/TNSM.2011.110811.110122}

}
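
The abstract's central claim, that when outage costs are proportional to duration, more but shorter outages minimize cost variance relative to fewer but longer ones, can be checked with a small Monte Carlo sketch. This is an editor's illustration with an invented lognormal hourly cost rate, not the paper's data or model:

```python
import random
import statistics

def outage_costs(n_outages, total_hours=100.0, trials=10000, seed=1):
    """Split a fixed amount of downtime into n_outages outages; each outage's
    cost is proportional to its duration, with a random per-hour cost rate."""
    rng = random.Random(seed)
    duration = total_hours / n_outages
    totals = []
    for _ in range(trials):
        totals.append(sum(rng.lognormvariate(0.0, 0.5) * duration
                          for _ in range(n_outages)))
    return statistics.mean(totals), statistics.stdev(totals)

mean_few, sd_few = outage_costs(n_outages=2)     # few long outages
mean_many, sd_many = outage_costs(n_outages=50)  # many short outages
# Expected total cost is the same; the spread shrinks roughly as 1/sqrt(n).
```

With independent hourly cost rates, the variance of the total scales as 1/n while the mean is unchanged, which is the paper's argument for preferring many short outages.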

Freire, E.P. and Ziviani, A. and Salles, R.M. Detecting VoIP calls hidden in web traffic 2008 Network and Service Management, IEEE Transactions on
Vol. 5(4), pp. 204-214 
network anomaly detection, skype, p2p voip systems, http traffic internet telephony , peer-to-peer computing , telecommunication traffic , transport protocols DOI  
Abstract: Peer-to-peer (P2P) voice over IP (VoIP) applications (e.g. Skype or Google Talk) commonly use Web TCP ports (80 or 443) as a fallback mechanism to elude restrictive firewalls. This strategy renders this kind of traffic quite difficult for network managers to detect. To deal with this issue, we propose and evaluate a method to detect VoIP calls hidden in Web traffic. We validate our proposal considering both Skype and Google Talk generated traffic by using real-world experimental data gathered at a commercial Internet Service Provider (ISP) and an academic institution. Our experimental results demonstrate that our proposed method achieves a performance of around 90% detection rate of VoIP calls hidden in Web traffic with a false positive rate of only 2%, whereas a 100% detection rate is achieved with a false positive rate limited to only 5%. We also evaluate the feasibility of applying our proposal in real-time detection scenarios.
BibTeX:

@article{5010444,

  author = {Freire, E.P. and Ziviani, A. and Salles, R.M.},

  title = {Detecting VoIP calls hidden in web traffic},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2008},

  volume = {5},

  number = {4},

  pages = {204-214},

  doi = {http://dx.doi.org/10.1109/TNSM.2009.041102}

}

Jing Fu and Sjodin, P. and Karlsson, G. Loop-free updates of forwarding tables 2008 Network and Service Management, IEEE Transactions on
Vol. 5(1), pp. 22-35 
computer science , computerized monitoring , condition monitoring , degradation , fault diagnosis , ip networks , information technology , management information systems , scalability , web and internet services internet , computer network management , computer network reliability , fault diagnosis DOI  
Abstract: When the forwarding paths in an IP network change due to a link failure or a link weight modification, the forwarding tables in the routers may need to be updated. Each of these updates may cause transient loops if they are not performed in an appropriate order. In this paper, we propose an order to update the forwarding tables that avoids transient loops for non-urgent changes. The order is obtained by studying the changes in the forwarding tables, therefore it can be used in networks running any routing protocols, and for any type of forwarding path changes. After presenting the order, we prove that it is correct, and present an efficient algorithm to compute the order. Thereafter, we present several algorithms for performing forwarding table updates in accordance with the order. We also discuss how the update algorithms can be applied to both networks with centralized control and decentralized routing protocols. Finally, we study the update algorithms' performance on several network topologies and with varying parameter settings and for several types of forwarding path changes.
BibTeX:

@article{4570773,

  author = {Jing Fu and Sjodin, P. and Karlsson, G.},

  title = {Loop-free updates of forwarding tables},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2008},

  volume = {5},

  number = {1},

  pages = {22-35},

  doi = {http://dx.doi.org/10.1109/TNSM.2008.080103}

}
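
The loop-freedom condition this paper enforces can be illustrated with a toy checker (a simplified sketch, not the paper's ordering algorithm): after each individual router update, the mixed old/new next-hop graph toward a destination must remain acyclic.

```python
def has_loop(next_hop, dest):
    """True if following next hops toward dest ever revisits a router."""
    for start in next_hop:
        node, seen = start, set()
        while node is not None and node != dest:
            if node in seen:
                return True
            seen.add(node)
            node = next_hop.get(node)
    return False

def order_is_loop_free(old, new, dest, order):
    """Apply per-router updates one at a time in `order`; every transient
    state (a mix of old and new entries) must stay loop-free."""
    state = dict(old)
    for router in order:
        state[router] = new[router]
        if has_loop(state, dest):
            return False
    return True

old = {"a": "b", "b": "d"}   # a -> b -> d
new = {"a": "d", "b": "a"}   # after a path change: a -> d, b -> a -> d
safe = order_is_loop_free(old, new, "d", ["a", "b"])    # a first: no loop
unsafe = order_is_loop_free(old, new, "d", ["b", "a"])  # b first: a<->b loop
```

The toy example shows why ordering matters: updating b before a creates a transient a-b forwarding loop even though both the old and new tables are individually loop-free.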

Fu, Y. and Bi, J. and Chen, Z. and Gao, K. and Zhang, B. and Chen, G. and Wu, J. A Hybrid Hierarchical Control Plane for Flow-Based Large-Scale Software-Defined Networks 2015 Network and Service Management, IEEE Transactions on
Vol. 12(2), pp. 117-131 
computational complexity;computer architecture;control systems;protocols;routing;scalability;topology;control plane architecture,;sdn;abstracted hierarchical routing;control plane architecture;fast reroute;hybrid hierarchical DOI  
Abstract: The decoupled architecture and the fine-grained flow-control feature limit the scalability of a flow-based software-defined network (SDN). In order to address this problem, some studies construct a flat control plane architecture; others build a hierarchical control plane architecture to improve the scalability of an SDN. However, the two kinds of structure still have unresolved issues: A flat control plane structure cannot solve the superlinear computational complexity growth of the control plane when the SDN scales to a large size, and the centralized abstracted hierarchical control plane structure brings a path stretch problem. To address these two issues, we propose Orion, a hybrid hierarchical control plane for large-scale networks. Orion can effectively reduce the computational complexity of an SDN control plane by several orders of magnitude. We also design an abstracted hierarchical routing method to solve the path stretch problem. Furthermore, we propose a hierarchical fast reroute method to illustrate how to achieve fast rerouting in the proposed hybrid hierarchical control plane. Orion is implemented to verify the feasibility of the hybrid hierarchical approach. Finally, we verify the effectiveness of Orion from both the theoretical and experimental aspects.
BibTeX:

@article{7109947,

  author = {Fu, Y. and Bi, J. and Chen, Z. and Gao, K. and Zhang, B. and Chen, G. and Wu, J.},

  title = {A Hybrid Hierarchical Control Plane for Flow-Based Large-Scale Software-Defined Networks},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2015},

  volume = {12},

  number = {2},

  pages = {117-131},

  doi = {http://dx.doi.org/10.1109/TNSM.2015.2434612}

}

Fung, C. and Zhang, J. and Boutaba, R. Effective Acquaintance Management based on Bayesian Learning for Distributed Intrusion Detection Networks 2012 Network and Service Management, IEEE Transactions on
Vol. 9(3), pp. 320-332 
host-based intrusion detection systems , acquaintance management , collaborative networks , computer security DOI  
Abstract: An effective Collaborative Intrusion Detection Network (CIDN) allows distributed Intrusion Detection Systems (IDSes) to collaborate and share their knowledge and opinions about intrusions, to enhance the overall accuracy of intrusion assessment as well as the ability of detecting new classes of intrusions. Toward this goal, we propose a distributed Host-based IDS (HIDS) collaboration system, particularly focusing on acquaintance management where each HIDS selects and maintains a list of collaborators from which they can consult about intrusions. Specifically, each HIDS evaluates both the false positive (FP) rate and false negative (FN) rate of its neighboring HIDSes' opinions about intrusions using Bayesian learning, and aggregates these opinions using a Bayesian decision model. Our dynamic acquaintance management algorithm allows each HIDS to effectively select a set of collaborators. We evaluate our system based on a simulated collaborative HIDS network. The experimental results demonstrate the convergence, stability, robustness, and incentive-compatibility of our system.
BibTeX:

@article{6205096,

  author = {Fung, C. and Zhang, J. and Boutaba, R.},

  title = {Effective Acquaintance Management based on Bayesian Learning for Distributed Intrusion Detection Networks},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2012},

  volume = {9},

  number = {3},

  pages = {320-332},

  doi = {http://dx.doi.org/10.1109/TNSM.2012.051712.110124}

}
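
A minimal sketch of the kind of Bayesian aggregation this abstract describes (hypothetical rates and priors; the paper's decision model is more elaborate): each neighbor's binary alert is weighted by its estimated FP and FN rates, under a naive assumption that neighbors err independently.

```python
def posterior_attack(prior, opinions):
    """Combine neighbors' binary alerts via Bayes' rule in odds form.
    opinions: (alert, fp_rate, fn_rate) triples; neighbors are assumed
    to err independently of one another."""
    odds = prior / (1.0 - prior)
    for alert, fp, fn in opinions:
        if alert:
            odds *= (1.0 - fn) / fp   # P(alert|attack) / P(alert|benign)
        else:
            odds *= fn / (1.0 - fp)   # P(quiet|attack) / P(quiet|benign)
    return odds / (1.0 + odds)

# Two fairly reliable neighbors both raise an alert on a rare event:
p = posterior_attack(0.01, [(True, 0.05, 0.10), (True, 0.02, 0.20)])
```

Even with a 1% prior, two concordant low-FP alerts push the posterior close to 0.9, which is the intuition behind consulting trustworthy acquaintances.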

Fung, Carol J and Zhang, Jie and Aib, Issam and Boutaba, Raouf Dirichlet-Based Trust Management for Effective Collaborative Intrusion Detection Networks 2011 Network and Service Management, IEEE Transactions on
Vol. 8(2), pp. 79-91 
collaborative intrusion detection system , admission control , computer security , security management , trust management computer network security , groupware , peer-to-peer computing DOI  
Abstract: The accuracy of detecting intrusions within a Collaborative Intrusion Detection Network (CIDN) depends on the efficiency of collaboration between peer Intrusion Detection Systems (IDSes) as well as the security itself of the CIDN. In this paper, we propose Dirichlet-based trust management to measure the level of trust among IDSes according to their mutual experience. An acquaintance management algorithm is also proposed to allow each IDS to manage its acquaintances according to their trustworthiness. Our approach achieves strong scalability properties and is robust against common insider threats, resulting in an effective CIDN. We evaluate our approach based on a simulated CIDN, demonstrating its improved robustness, efficiency and scalability for collaborative intrusion detection in comparison with other existing models.
BibTeX:

@article{5871350,

  author = {Fung, Carol J and Zhang, Jie and Aib, Issam and Boutaba, Raouf},

  title = {Dirichlet-Based Trust Management for Effective Collaborative Intrusion Detection Networks},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2011},

  volume = {8},

  number = {2},

  pages = {79-91},

  doi = {http://dx.doi.org/10.1109/TNSM.2011.050311.100028}

}
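
For binary (satisfying/unsatisfying) feedback, the Dirichlet-based trust estimate the abstract refers to reduces to a Beta posterior mean, the two-outcome special case of the Dirichlet. A sketch with invented prior parameters:

```python
def trust_score(satisfied, unsatisfied, prior_strength=2.0, prior_mean=0.5):
    """Posterior-mean trust in a peer after observing interaction counts,
    starting from a Beta prior with the given mean and strength."""
    alpha = prior_mean * prior_strength + satisfied
    beta = (1.0 - prior_mean) * prior_strength + unsatisfied
    return alpha / (alpha + beta)

# No history -> fall back to the prior mean; evidence pulls the score away.
fresh = trust_score(0, 0)    # 0.5
proven = trust_score(8, 2)   # 0.75
```

The prior strength controls how many observations it takes for mutual experience to dominate the default trust level.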

Garcia-Dorado, J. and Finamore, A. and Mellia, M. and Meo, M. and Munafo, M. Characterization of ISP Traffic: Trends, User Habits, and Access Technology Impact 2012 Network and Service Management, IEEE Transactions on
Vol. 9(2), pp. 142-155 
isp traffic , network monitoring , traffic analyzer DOI  
Abstract: In recent years, the research community has increased its focus on network monitoring, which is seen as a key tool to understand the Internet and Internet users. Several studies have presented a deep characterization of a particular application, or a particular network, considering the point of view of either the ISP or the Internet user. In this paper, we take a different perspective. We focus on three European countries where we have been collecting traffic for more than a year and a half through 5 vantage points with different access technologies. This humongous amount of information allows us not only to provide precise, multiple, and quantitative measurements of what users do with the Internet in each country, but also to identify common and uncommon patterns and habits across different countries. Considering different time scales, we start by presenting the trend of application popularity; then we focus our attention on a one-month-long period, and further drill into a typical daily characterization of user activity. Results depict an evolving scenario due to the consolidation of new services such as Video Streaming and File Hosting and to the adoption of new P2P technologies. Despite the heterogeneity of the users, some common tendencies emerge that can be leveraged by the ISPs to improve their service.
BibTeX:

@article{6158423,

  author = {Garcia-Dorado, J. and Finamore, A. and Mellia, M. and Meo, M. and Munafo, M.},

  title = {Characterization of ISP Traffic: Trends, User Habits, and Access Technology Impact},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2012},

  volume = {9},

  number = {2},

  pages = {142-155},

  doi = {http://dx.doi.org/10.1109/TNSM.2012.022412.110184}

}

Garroppo, R.G. and Giordano, S. and Niccolini, S. and Spagna, S. A Prediction-Based Overload Control Algorithm for SIP Servers 2011 Network and Service Management, IEEE Transactions on
Vol. 8(1), pp. 39-51 
dynamic load estimation , overload control , prediction , queueing management , session initiation protocol 3g mobile communication , feedback , multimedia communication , network servers , prediction theory , queueing theory , signalling protocols , telecommunication congestion control , telecommunication traffic , telecontrol DOI  
Abstract: Overload is a challenging problem for a SIP server because the built-in overload control mechanism based on generating rejection messages cannot prevent the server from collapsing due to congestion. In this scenario, the paper presents an overload mechanism combining a local and a remote solution. The local part of the overload control mechanism is based on an appropriate queueing structure and buffer management of the SIP proxy. The remote overload control mechanism is based on feedback reports provided by the SIP proxy to the upstream neighbors. These reports permit the traffic regulation necessary to avoid the critical condition of overload. The paper's main contributions are the design of key components of a remote control mechanism, the proposal of a new approach for dynamic load estimation, and the use of a prediction technique in the remote control loop.
BibTeX:

@article{5699971,

  author = {Garroppo, R.G. and Giordano, S. and Niccolini, S. and Spagna, S.},

  title = {A Prediction-Based Overload Control Algorithm for SIP Servers},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2011},

  volume = {8},

  number = {1},

  pages = {39-51},

  doi = {http://dx.doi.org/10.1109/TNSM.2011.011211.1000010}

}

Ge, C. and Sun, Z. and Wang, N. and Xu, K. and Wu, J. Energy Management in Cross-Domain Content Delivery Networks: A Theoretical Perspective 2014 Network and Service Management, IEEE Transactions on
Vol. 11(3), pp. 264-277 
cooling;delays;energy consumption;energy resolution;power demand;quality of service;servers;content delivery network;data center;energy management DOI  
Abstract: In a content delivery network (CDN), the energy cost is dominated by its geographically distributed data centers (DCs). Generally within a DC, the energy consumption is dominated by its server infrastructure and cooling system, with each contributing approximately half. However, existing research work has been addressing energy efficiency on these two sides separately. In this paper, we jointly optimize the energy consumption of both server infrastructures and cooling systems in a holistic manner. Such an objective is achieved through both strategies of: 1) putting idle servers to sleep within individual DCs; and 2) shutting down idle DCs entirely during off-peak hours. Based on these strategies, we develop a heuristic algorithm, which concentrates user request resolution to fewer DCs, so that some DCs may become completely idle and hence have the opportunity to be shut down to reduce their cooling energy consumption. Meanwhile, QoS constraints are respected in the algorithm to assure service availability and end-to-end delay. Through simulations under realistic scenarios, our algorithm is able to achieve an energy-saving gain of up to 62.1% over an existing CDN energy-saving scheme. This result is bound to be near-optimal by our theoretically-derived lower bound on energy-saving performance.
BibTeX:

@article{6877678,

  author = {Ge, C. and Sun, Z. and Wang, N. and Xu, K. and Wu, J.},

  title = {Energy Management in Cross-Domain Content Delivery Networks: A Theoretical Perspective},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2014},

  volume = {11},

  number = {3},

  pages = {264-277},

  doi = {http://dx.doi.org/10.1109/TNSM.2014.2346956}

}
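
The concentration strategy in this abstract, serving demand from as few data centers as possible so idle ones can be shut down entirely, can be sketched as a greedy assignment. This is illustrative only; the paper's heuristic also enforces QoS (availability and delay) constraints, which are omitted here:

```python
def consolidate(demand, capacities):
    """Greedily pack demand onto the largest data centers first, so the
    remaining DCs become fully idle and can be powered down
    (both servers and cooling)."""
    order = sorted(range(len(capacities)),
                   key=lambda i: capacities[i], reverse=True)
    plan, remaining = [], demand
    for i in order:
        if remaining <= 0:
            break
        load = min(capacities[i], remaining)
        plan.append((i, load))
        remaining -= load
    if remaining > 0:
        raise ValueError("demand exceeds total capacity")
    return plan

# Three DCs with capacities 4, 8, 5: a demand of 10 keeps only two on.
plan = consolidate(10, [4, 8, 5])   # DC 0 stays idle and can be shut down
```

Largest-first packing maximizes the chance that small DCs end up with zero load, which is the shutdown opportunity the paper exploits.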

Gencay, E. and Sinz, C. and Kuchlin, W. and Schafer, T. SANchk: SQL-based SAN configuration checking 2008 Network and Service Management, IEEE Transactions on
Vol. 5(2), pp. 91-104 
decision support systems , knowledge based systems , network fault diagnosis , query languages , relational databases , storage area networks java , sql , xml , formal specification , relational databases , storage area networks DOI  
Abstract: Storage Area Networks (SANs) connect groups of storage devices to servers over fast interconnects. An important challenge lies in managing the complexity of the resulting massive SAN configurations. Policy-based validation using new logical frameworks has been proposed earlier as a solution to this configuration problem. SANchk offers a new solution that uses standard technologies such as SQL, XML, and Java to implement a rule-based configuration checker. SANchk works as a lightweight extension to the relational databases of storage management systems; current support includes IBM's TPC and the open source Aperi storage manager. Some five dozen best-practice rules for SAN configuration are implemented in SANchk, many of them with configurable parameters. Empirical results with several commercial SANs show that the approach is viable in practice.
BibTeX:

@article{4694134,

  author = {Gencay, E. and Sinz, C. and Kuchlin, W. and Schafer, T.},

  title = {SANchk: SQL-based SAN configuration checking},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2008},

  volume = {5},

  number = {2},

  pages = {91 -104},

  doi = {http://dx.doi.org/10.1109/TNSM.2008.021102}

}
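
The paper's approach, best-practice rules expressed as SQL queries over a configuration database, can be illustrated with a toy rule against an invented mini-schema (not SANchk's actual schema or rule set): the query returns the violating rows, and an empty result means the check passes.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE port (id INTEGER, switch TEXT,
                       connected INTEGER, speed_mbps INTEGER);
    INSERT INTO port VALUES (1, 'sw1', 1, 8000),
                            (2, 'sw1', 1, NULL),
                            (3, 'sw2', 0, NULL);
""")
# Toy best-practice rule: every connected port must have a configured speed.
violations = conn.execute(
    "SELECT id, switch FROM port WHERE connected = 1 AND speed_mbps IS NULL"
).fetchall()
```

Encoding each rule as a query against the storage manager's existing relational database is what makes the checker a light-weight add-on rather than a separate modeling framework.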

Gharbaoui, Molka and Paolucci, Francesco and Giorgetti, Alessio and Martini, Barbara and Castoldi, Piero Effective Statistical Detection of Smart Confidentiality Attacks in Multi-Domain Networks 2013 Network and Service Management, IEEE Transactions on
Vol. 10(4), pp. 383-397 
computer architecture;network security;network topology;quality of service;statistical analysis;bgp;multi-domain networks;pce;control plane security;sequential hypothesis testing;traffic engineering DOI  
BibTeX:

@article{6662354,

  author = {Gharbaoui, Molka and Paolucci, Francesco and Giorgetti, Alessio and Martini, Barbara and Castoldi, Piero},

  title = {Effective Statistical Detection of Smart Confidentiality Attacks in Multi-Domain Networks},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2013},

  volume = {10},

  number = {4},

  pages = {383-397},

  doi = {http://dx.doi.org/10.1109/TNSM.2013.111113.130482}

}

Ghazar, Tay and Samaan, Nancy Pricing Utility-Based Virtual Networks 2013 Network and Service Management, IEEE Transactions on
Vol. 10(2), pp. 119-132 
virtualization;congestion pricing;subgraph matching;time-of-use pricing;virtual network embedding DOI  
Abstract: This paper presents a new pricing mechanism for virtual network (VN) services to regulate the demand for their shared substrate network (SN) resources. The contributions of this article are twofold: first, we introduce a new time-of-use pricing policy for the SN resources that reflects the effect of resource congestion introduced by VN users. The preferences of the VN users are first represented through corresponding demand-utility functions that quantify the sensitivity of the applications hosted by the VNs to resource consumption, time-of-use, and prices during peak-demand periods. We then introduce a novel model of time-varying VNs, where users are allowed to up- or down-scale the requested resources to continuously maximize their utility while minimizing the cost of embedding the VNs onto the SN. The second contribution is a novel hierarchical embedding management approach tailored to efficiently map these dynamic VNs. The proposed VN embedding scheme recasts the VN embedding problem as a subgraph matching one, and introduces a simple heuristics-based matching procedure to find a good VN embedding from a number of candidate solutions obtained in parallel. In contrast to existing solutions, the proposed scheme does not impose any limitations on the size or topology of the VN requests. Instead, the search is customized according to the VN size and the associated utility. Experimental results demonstrate the performance achieved by the proposed work in terms of the increased profit, resource utilization, and number of accepted requests.
BibTeX:

@article{6514998,

  author = {Ghazar, Tay and Samaan, Nancy},

  title = {Pricing Utility-Based Virtual Networks},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2013},

  volume = {10},

  number = {2},

  pages = {119-132},

  doi = {http://dx.doi.org/10.1109/TNSM.2013.043013.120304}

}

Ghosh, U. and Datta, R. A Secure Addressing Scheme for Large-Scale Managed MANETs 2015 Network and Service Management, IEEE Transactions on
Vol. 12(3), pp. 483-495 
ad hoc networks;artificial neural networks;authentication;ip networks;mobile computing;protocols;resource management;address allocation;authentication;autoconfiguration;manet;security;address allocation;authentication;autoconfiguration;security DOI  
Abstract: In this paper, we propose a low-overhead identity-based distributed dynamic address configuration scheme for secure allocation of IP addresses to authorized nodes of a managed mobile ad hoc network. A new node will receive an IP address from an existing neighbor node. Thereafter, each node in a network is able to generate a set of unique IP addresses from its own IP address, which it can further assign to more new nodes. Due to the lack of infrastructure, such networks pose, apart from security issues, several design challenges such as high packet error rates, network partitioning, and network merging. Our proposed protocol takes care of these issues while incurring less overhead, as it does not require any message flooding mechanism over the entire MANET. Performance analysis and simulation results show that even with added security mechanisms, our proposed protocol outperforms similar existing protocols.
BibTeX:

@article{7147819,

  author = {Ghosh, U. and Datta, R.},

  title = {A Secure Addressing Scheme for Large-Scale Managed MANETs},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2015},

  volume = {12},

  number = {3},

  pages = {483-495},

  doi = {http://dx.doi.org/10.1109/TNSM.2015.2452292}

}
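
The key idea, that each addressed node can derive further network-wide-unique addresses from its own without any flooding, can be sketched with a simple tree numbering. This is an assumed allocation function for illustration; the paper's actual derivation function differs:

```python
def children(node_id, branching=4, max_id=2 ** 16):
    """A node holding ID n may hand out IDs branching*n+1 .. branching*n+branching.
    Distinct parents get disjoint ranges, so IDs stay unique network-wide
    without any coordination or flooding."""
    ids = [branching * node_id + k for k in range(1, branching + 1)]
    return [i for i in ids if i < max_id]

root = children(0)                              # IDs the first node can assign
grand = [i for c in root for i in children(c)]  # next generation, all distinct
```

Because every allocation range is disjoint by construction, uniqueness holds even when the network partitions, although merging two independently rooted networks would still need a conflict check.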

Gillani, S.F. and Demirci, M. and Al-Shaer, E. and Ammar, M.H. Problem Localization and Quantification Using Formal Evidential Reasoning for Virtual Networks 2014 Network and Service Management, IEEE Transactions on
Vol. 11(3), pp. 307-320 
cognition;network topology;overlay networks;packet loss;probes;sensors;observations;constraint satisfaction;diagnosis;evidences;evidential theory;overlay network DOI  
Abstract: Overlay (virtual) networks are mainly used to improve Internet reliability and facilitate a rapid deployment of new services. However, in order for overlay services to adapt to dynamic network conditions in a timely manner, efficient diagnosis of performance problems is required. Existing overlay diagnosis approaches assume extensive knowledge about the network and require invasive monitoring sensors or active measurements. In this paper, we propose a novel diagnosis technique to localize performance anomalies and determine the packet loss in each network component. Our approach is purely based on packet loss observations at the end-points to reason about the loss location and severity in the network without any active probing or sensor deployment. We formulate the problem as a constraint-satisfaction problem using network loss properties and end-user observations. Our diagnosis is robust against insufficient observations or malicious end-user participation. We evaluate our approach extensively using simulation and experimentation and demonstrate the accuracy, effectiveness, and scalability of our approach under various network sizes, participation ratio, and malicious observation ratio.
BibTeX:

@article{6824188,

  author = {Gillani, S.F. and Demirci, M. and Al-Shaer, E. and Ammar, M.H.},

  title = {Problem Localization and Quantification Using Formal Evidential Reasoning for Virtual Networks},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2014},

  volume = {11},

  number = {3},

  pages = {307-320},

  doi = {http://dx.doi.org/10.1109/TNSM.2014.2326297}

}

Grieco, Luigi Alfredo and Barakat, Chadi and Marzulli, Michele Spectral Models for Bitrate Measurement from Packet Sampled Traffic 2011 Network and Service Management, IEEE Transactions on
Vol. 8(2), pp. 141-152 
packet sampling , internet measurements , spectral models internet , sampling methods , spectral analysis , telecommunication traffic DOI  
Abstract: In network measurement systems, packet sampling techniques are usually adopted to reduce the overall amount of data to collect and process. Being based on a subset of packets, they introduce estimation errors that have to be properly counteracted by a fine tuning of the sampling strategy and sophisticated inversion methods. This problem has been deeply investigated in the literature with particular attention to the statistical properties of packet sampling and to the recovery of the original network measurements. Herein, we propose a novel approach to predict the energy of the sampling error in the real-time estimation of traffic bitrate, based on spectral analysis in the frequency domain. We start by demonstrating that the error introduced by packet sampling can be modeled as an aliasing effect in the frequency domain. Then, we derive closed-form expressions for the Signal-to-Noise Ratio (SNR) to predict the distortion of traffic bitrate estimates over time. The accuracy of the proposed SNR metric is validated by means of real packet traces. Furthermore, a comparison with an analogous SNR expression derived using classic stochastic tools is proposed, showing that the frequency-domain approach yields higher accuracy when traffic rate measurements are carried out at fine time granularity.
BibTeX:

@article{5871354,

  author = {Grieco, Luigi Alfredo and Barakat, Chadi and Marzulli, Michele},

  title = {Spectral Models for Bitrate Measurement from Packet Sampled Traffic},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2011},

  volume = {8},

  number = {2},

  pages = {141-152},

  doi = {http://dx.doi.org/10.1109/TNSM.2011.050311.100035}

}
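
For context on the estimation problem this abstract analyzes, the standard 1/p inversion for packet-sampled bitrate looks as follows (a toy estimator; the paper's contribution is the frequency-domain model of this estimator's error, not the estimator itself):

```python
import random

def sampled_bitrate(packet_sizes_bytes, interval_s, p, seed=0):
    """Estimate bitrate (bit/s) from packets kept independently with
    probability p, scaling the sampled volume back up by 1/p."""
    rng = random.Random(seed)
    kept = sum(size for size in packet_sizes_bytes if rng.random() < p)
    return 8.0 * kept / (p * interval_s)

pkts = [1000] * 10000                       # 10 MB in 1 s -> 80 Mb/s true rate
exact = sampled_bitrate(pkts, 1.0, p=1.0)   # no sampling: exact
noisy = sampled_bitrate(pkts, 1.0, p=0.1)   # 1-in-10 sampling: noisy estimate
```

The residual noise in `noisy` is exactly the sampling error whose energy the paper predicts via spectral analysis.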

Griffin, Donna and Pesch, Dirk Service provision for next generation mobile communication systems - the Telecommunication Service Exchange 2006 Network and Service Management, IEEE Transactions on
Vol. 3(2), pp. 2-12 
agents , auctions , qos , sip , telecommunication service exchange (tse) , umts DOI  
Abstract: The Telecommunication Service Exchange is a communication service platform based on a digital marketplace concept that enables customers to purchase telecommunication services. Customers are able to buy products through this platform just as in a supermarket, without being compelled to buy services from a particular producer or service provider. To enable this type of service provision, the current subscription model in telecommunications needs to be modified, allowing customers to purchase telecommunication services on a per-call basis. Using SIP, electronic marketplaces, and agents, this paper outlines the architecture to achieve this as well as the possible benefits that this presents for both service providers and customers. An evaluation of the performance of the platform is also provided.
BibTeX:

@article{4798309,

  author = {Griffin, Donna and Pesch, Dirk},

  title = {Service provision for next generation mobile communication systems - the Telecommunication Service Exchange},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2006},

  volume = {3},

  number = {2},

  pages = {2-12},

  doi = {http://dx.doi.org/10.1109/TNSM.2006.4798309}

}

Grimaudo, L. and Mellia, M. and Baralis, E. and Keralapura, R. SeLeCT: Self-Learning Classifier for Internet Traffic 2014 Network and Service Management, IEEE Transactions on
Vol. 11(2), pp. 144-157 
accuracy;algorithm design and analysis;clustering algorithms;ports (computers);protocols;servers;training;traffic classification;clustering;self-seeding;unsupervised machine learning DOI  
Abstract: Network visibility is a critical part of traffic engineering, network management, and security. The most popular current solutions - Deep Packet Inspection (DPI) and statistical classification - deeply rely on the availability of a training set. Besides the cumbersome need to regularly update the signatures, their visibility is limited to classes the classifier has been trained for. Unsupervised algorithms have been envisioned as a viable alternative to automatically identify classes of traffic. However, the accuracy achieved so far does not allow them to be used for traffic classification in practical scenarios. To address the above issues, we propose SeLeCT, a Self-Learning Classifier for Internet Traffic. It uses unsupervised algorithms along with an adaptive seeding approach to automatically let classes of traffic emerge, be identified, and be labeled. Unlike traditional classifiers, it requires neither a-priori knowledge of signatures nor a training set to extract the signatures. Instead, SeLeCT automatically groups flows into pure (or homogeneous) clusters using simple statistical features. SeLeCT simplifies label assignment (which is still based on some manual intervention) so that proper class labels can be easily discovered. Furthermore, SeLeCT uses an iterative seeding approach to boost its ability to cope with new protocols and applications. We evaluate the performance of SeLeCT using traffic traces collected in different years from various ISPs located on 3 different continents. Our experiments show that SeLeCT achieves excellent precision and recall, with overall accuracy close to 98%. Unlike state-of-the-art classifiers, the biggest advantage of SeLeCT is its ability to discover new protocols and applications in an almost automated fashion.
BibTeX:

@article{6725830,

  author = {Grimaudo, L. and Mellia, M. and Baralis, E. and Keralapura, R.},

  title = {SeLeCT: Self-Learning Classifier for Internet Traffic},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2014},

  volume = {11},

  number = {2},

  pages = {144-157},

  doi = {http://dx.doi.org/10.1109/TNSM.2014.011714.130505}

}

Groleat, T. and Arzel, M. and Vaton, S. Stretching the Edges of SVM Traffic Classification With FPGA Acceleration 2014 Network and Service Management, IEEE Transactions on
Vol. 11(3), pp. 278-291 
acceleration;accuracy;classification algorithms;field programmable gate arrays;software;software algorithms;support vector machines;network and systems monitoring and measurements;design and simulation;machine learning DOI  
Abstract: Analyzing the composition of Internet traffic has many applications nowadays, like tracking bandwidth-consuming applications or QoS-based traffic engineering. Even though many classification methods, such as Support Vector Machines (SVMs), have demonstrated their accuracy, the ever-increasing data rates encountered in networks are higher than existing implementations can support. As SVM has been proven to provide a high level of accuracy, and is challenging to implement at high speeds, we consider in this paper the design of a real-time SVM traffic classifier at hundreds of Gb/s to allow online detection of categories of applications. We show the limits of software implementation and offer a solution based on the massive parallelism and low-level network interface access of FPGA boards. We also improve this solution by testing algorithmic changes that dramatically simplify hardware implementation. We then find theoretically supported bit rates of up to 473 Gb/s for the most challenging trace on a Virtex 5 FPGA, and confirm them through experimental performance results on a Combov2 board with a 10 Gb/s interface.
BibTeX:

@article{6873566,

  author = {Groleat, T. and Arzel, M. and Vaton, S.},

  title = {Stretching the Edges of SVM Traffic Classification With FPGA Acceleration},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2014},

  volume = {11},

  number = {3},

  pages = {278-291},

  doi = {http://dx.doi.org/10.1109/TNSM.2014.2346075}

}

Groleat, T. and Pouyllau, H. Distributed Learning Algorithms for Inter-NSP SLA Negotiation Management 2012 Network and Service Management, IEEE Transactions on
Vol. 9(4), pp. 433-445 
reinforcement learning , sla negotiation , distributed algorithm , inter-nsp DOI  
Abstract: To support real-time and security-demanding applications (e.g. telepresence, cloud computing) at a large scale, the Internet must evolve so that Network Service Providers (NSPs) provide end-to-end Quality of Service (QoS) across their networks. The delivery of such QoS-assured services requires the negotiation of end-to-end QoS contracts (Service Level Agreements, SLAs) among NSPs and the configuration of their networks accordingly. The management of inter-NSP SLA negotiation is usually treated as an optimization problem, assuming that NSPs cooperate and agree on a common system, providing a solution for each demand. This assumption is quite strong in a highly competitive context where NSPs are cautious about disclosing sensitive data such as topology or resource usage information, or even SLA descriptions and prices. Hence, to meet NSPs' requirements on confidentiality, we opt for a distributed framework. To avoid over-provisioning demands, we consider the problem in a wider scope: not only on the basis of instantaneous requests but also anticipating future ones. To enhance an NSP's chance of being selected for an end-to-end service, we aim to take into account the demander's likelihood of acceptance (i.e., customer utility). To this end, we opt for reinforcement learning techniques and propose three distributed algorithms, inspired by the Q-learning algorithm, with different cooperation levels.
BibTeX:

@article{6247444,

  author = {Groleat, T. and Pouyllau, H.},

  title = {Distributed Learning Algorithms for Inter-NSP SLA Negotiation Management},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2012},

  volume = {9},

  number = {4},

  pages = {433-445},

  doi = {http://dx.doi.org/10.1109/TNSM.2012.072012.110185}

}

Wenjun Gu and Dutta, N. and Chellappan, S. and Xiaole Bai Providing End-to-End Secure Communications in Wireless Sensor Networks 2011 Network and Service Management, IEEE Transactions on
Vol. 8(3), pp. 205-218 
sensor networks , key management , security protocols , telecommunication security , wireless sensor networks DOI  
Abstract: In many Wireless Sensor Networks (WSNs), providing end-to-end secure communications between sensors and the sink is important for secure network management. While there have been many works devoted to hop-by-hop secure communications, the issue of end-to-end secure communications is largely ignored. In this paper, we design an end-to-end secure communication protocol in randomly deployed WSNs. Specifically, our protocol is based on a methodology called differentiated key pre-distribution. The core idea is to distribute different numbers of keys to different sensors to enhance the resilience of certain links. This feature is leveraged during routing, where nodes route through those links with higher resilience. Using rigorous theoretical analysis, we derive an expression for the quality of end-to-end secure communications, and use it to determine optimum protocol parameters. Extensive performance evaluation illustrates that our solutions can provide highly secure communications between sensor nodes and the sink in randomly deployed WSNs. We also provide detailed discussion of a potential attack (i.e. biased node capturing attack) on our solutions, and propose several countermeasures to this attack.
BibTeX:

@article{5970247,

  author = {Wenjun Gu and Dutta, N. and Chellappan, S. and Xiaole Bai},

  title = {Providing End-to-End Secure Communications in Wireless Sensor Networks},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2011},

  volume = {8},

  number = {3},

  pages = {205-218},

  doi = {http://dx.doi.org/10.1109/TNSM.2011.072611.100080}

}

Guo, B. and Yu, F.R. and Jiang, S. and Ao, X. and Leung, V.C.M. Energy-Efficient Topology Management With Interference Cancellation in Cooperative Wireless Ad Hoc Networks 2014 Network and Service Management, IEEE Transactions on
Vol. 11(3), pp. 405-416 
cooperative communication;integrated circuits;network topology;power demand;relays;silicon;topology;energy efficiency;interference cancellation;topology control;wireless networks DOI  
Abstract: With recent advances in parallel cooperative transmissions between multiple source-destination pairs, interference cancellation (IC) can be achieved to improve the capacity performance of wireless networks. However, from an energy efficiency perspective, user cooperation may not always be appealing, since the increased data rate of one user comes at the price of the energy consumed by another user. In this paper, we study the potential benefits/drawbacks of cooperative communications on network-level issues, such as capacity and energy efficiency. We show that, in terms of network energy efficiency, cooperative communications do not always outperform non-cooperative communications, and cooperative communications should be dynamically applied in topology control to optimize the overall network energy efficiency. Specifically, we propose an energy-efficient topology control scheme that jointly considers the capacity and energy consumption of non-cooperative and cooperative communications. Simulation results are presented to show the effectiveness of the proposed scheme.
BibTeX:

@article{6877739,

  author = {Guo, B. and Yu, F.R. and Jiang, S. and Ao, X. and Leung, V.C.M.},

  title = {Energy-Efficient Topology Management With Interference Cancellation in Cooperative Wireless Ad Hoc Networks},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2014},

  volume = {11},

  number = {3},

  pages = {405-416},

  doi = {http://dx.doi.org/10.1109/TNSM.2014.2346155}

}

Hagen, S. and da Costa Cordeiro, W.L. and Gaspary, L.P. and Granville, L.Z. and Kemper, A Efficient Model Checking of IT Change Operations 2014 Network and Service Management, IEEE Transactions on
Vol. 11(3), pp. 292-306 
complexity theory;compounds;context;object oriented modeling;planning;safety;scalability;change management;model checker;partial order reduction;verification DOI  
Abstract: The success of businesses in modern organizations heavily depends on the high availability of information technology (IT) infrastructures. To prevent business disruption, IT operators have worked hard to ensure that any changes to this infrastructure are properly and efficiently deployed. Change management, a discipline of the Information Technology Infrastructure Library (ITIL), provides important guidance to help achieve this end. As IT infrastructures grow larger, however, ensuring that changes are harmless to business continuity becomes increasingly complex. In fact, previous research has shown that existing approaches for verifying changes suffer from severe scalability issues. This problem can become a serious threat to most organizations, as it can lead, for example, to customer dissatisfaction due to missed deadlines in service change deployment. To bridge this gap, we propose a partial-order reduction model checking paradigm and algorithm for efficiently detecting harmful change operations. Our model improves the complexity of verifying a set of concurrent change activities against safety constraints by reducing, without losing effectiveness, the verification scope. To prove concept and technical feasibility, we carried out an extensive performance evaluation of our algorithm considering a variety of change activities, safety constraints, and configuration scenarios. The results obtained from 32 benchmarks have shown that our algorithm significantly outperformed state-of-the-art, general purpose model checkers, improving the runtime complexity from polynomial/exponential to linear. In summary, the results evidenced that change verification finally became feasible and efficient for larger IT infrastructures.
BibTeX:

@article{6873326,

  author = {Hagen, S. and da Costa Cordeiro, W.L. and Gaspary, L.P. and Granville, L.Z. and Kemper, A},

  title = {Efficient Model Checking of IT Change Operations},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2014},

  volume = {11},

  number = {3},

  pages = {292-306},

  doi = {http://dx.doi.org/10.1109/TNSM.2014.2346074}

}

Halder, S. and DasBit, S. Design of a Probability Density Function Targeting Energy-Efficient Node Deployment in Wireless Sensor Networks 2014 Network and Service Management, IEEE Transactions on
Vol. 11(2), pp. 204-219 
algorithm design and analysis;corona;energy consumption;probability density function;routing;sensors;wireless sensor networks;energy balance;network lifetime;node deployment;probability density function;wireless sensor network DOI  
Abstract: In wireless sensor networks the issue of preserving energy requires utmost attention. One primary way of conserving energy is judicious deployment of sensor nodes within the network area so that the energy flow remains balanced throughout the network, preventing the occurrence of energy holes. Firstly, we analyze network lifetime, identify node density as the parameter with significant influence on network lifetime, and derive the desired parameter values for balanced energy consumption. Then, to meet the requirement of energy balancing, we propose a probability density function (PDF), derive the PDF's intrinsic characteristics, and show its suitability to model the network architecture considered in this work. A node deployment algorithm is also developed based on this PDF. Performance of the deployment scheme is evaluated in terms of coverage-connectivity, energy balance, and network lifetime. In qualitative analysis, we show the extent to which the proposed PDF provides the desired node density derived from the analysis of network lifetime. Finally, the scheme is compared with three existing deployment schemes based on various distributions. Simulation results confirm our scheme's superiority over all the existing schemes in terms of all three performance metrics.
BibTeX:

@article{6775069,

  author = {Halder, S. and DasBit, S.},

  title = {Design of a Probability Density Function Targeting Energy-Efficient Node Deployment in Wireless Sensor Networks},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2014},

  volume = {11},

  number = {2},

  pages = {204-219},

  doi = {http://dx.doi.org/10.1109/TNSM.2014.031714.130583}

}

Hamdaoui, B. and NoroozOliaee, M. and Tumer, K. and Rayes, A. Coordinating Secondary-User Behaviors for Inelastic Traffic Reward Maximization in Large-Scale DSA Networks 2012 Network and Service Management, IEEE Transactions on
Vol. 9(4), pp. 501-513 
distributed resource allocation and management , cooperative and coordinated learning , dynamic and opportunistic spectrum access DOI  
Abstract: We develop efficient coordination techniques that support inelastic traffic in large-scale distributed dynamic spectrum access (DSA) networks. By means of any learning algorithm, the proposed techniques enable DSA users to locate and exploit spectrum opportunities effectively, thereby increasing their achieved throughput (or "rewards" to be more general). Basically, learning algorithms allow DSA users to learn by interacting with the environment, and use their acquired knowledge to select the proper actions that maximize their own objectives, thereby "hopefully" maximizing their long-term cumulative received reward. However, when DSA users' objectives are not carefully coordinated, learning algorithms can lead to poor overall system performance, resulting in lower per-user average achieved rewards. In this paper, we derive efficient objective functions that DSA users can aim to maximize, such that users' collective behavior also leads to good overall system performance, thus maximizing each user's long-term cumulative received rewards. We show that the proposed techniques are: (i) efficient by enabling users to achieve high rewards, (ii) scalable by performing well in systems with a small as well as a large number of users, (iii) learnable by allowing users to reach high rewards very quickly, and (iv) distributive by being implementable in a decentralized manner.
BibTeX:

@article{6264039,

  author = {Hamdaoui, B. and NoroozOliaee, M. and Tumer, K. and Rayes, A.},

  title = {Coordinating Secondary-User Behaviors for Inelastic Traffic Reward Maximization in Large-Scale DSA Networks},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2012},

  volume = {9},

  number = {4},

  pages = {501-513},

  doi = {http://dx.doi.org/10.1109/TNSM.2012.080812.110174}

}

Hasan and Stiller, B. SLO Auditing Task Analysis, Decomposition, and Specification 2011 Network and Service Management, IEEE Transactions on
Vol. 8(1), pp. 15-25 
audit task decomposition , qos monitoring , dynamic configuration , service level objective auditing internet , quality of service , task analysis DOI  
Abstract: Service Level Objectives (SLOs) - the core of a Service Level Agreement (SLA) - reflect major Quality-of-Service (QoS) requirements of customers on a service for a given price. SLOs need to be updated if those requirements change, which in turn requires an update of the SLO auditing implementation. However, many existing implementations require considerable effort to adapt to SLO changes, and even more effort for dynamic adaptations. A new SLO auditing design is therefore needed to reduce such efforts to the bare minimum, especially when the service landscape and relevant QoS parameters change frequently. To meet this core functional requirement of automated auditing, this paper presents a generic auditing framework applicable to any SLO, comprising the analysis of a general audit task, the identification of its sequence of subtasks (functional decomposition), and the development of a respective audit specification for each subtask. A use case and examples are presented to describe and apply the concept in detail. The prototyped SLO auditing application is not restricted to a certain set of QoS parameters, but is dynamically reconfigurable and extensible according to changing demands. The work shows that it has become quite easy to instantiate an auditing application for new SLOs. Additionally, third parties would be able to offer SLO auditing services to a service provider separately.
BibTeX:

@article{5741010,

  author = {Hasan and Stiller, B.},

  title = {SLO Auditing Task Analysis, Decomposition, and Specification},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2011},

  volume = {8},

  number = {1},

  pages = {15-25},

  doi = {http://dx.doi.org/10.1109/TNSM.2011.032311.100312}

}

Hashim, F. and Munasinghe, K.S. and Jamalipour, A. Biologically Inspired Anomaly Detection and Security Control Frameworks for Complex Heterogeneous Networks 2010 Network and Service Management, IEEE Transactions on
Vol. 7(4), pp. 268-281 
heterogeneous network security , biologically inspired security , danger theory , epidemiology , human immune system complex networks , computer network security , invasive software , protocols DOI  
Abstract: The demand for anytime, anywhere, anyhow communications in future generation networks necessitates a paradigm shift from independent network services into a more harmonized system. This vision can be accomplished by integrating the existing and emerging access networks via a common Internet Protocol (IP) based platform. Nevertheless, owing to the inter-worked infrastructure, a malicious security threat in such a heterogeneous network is no longer confined to its originating network domain, but can easily be propagated to other access networks. To address these security concerns, this paper proposes a biologically inspired security framework that governs the cooperation among network entities to identify security attacks, to perform security updates, and to inhibit attack propagation in the heterogeneous network. The proposed framework incorporates two principal security components, in the form of an anomaly detection framework and a security control framework. Several plausible principles from two fields of biology, in particular the human immune system (HIS) and epidemiology, have been adopted into the proposed security framework. Performance evaluation demonstrates the efficiency of the proposed biologically inspired security framework in detecting malicious anomalies such as denial-of-service (DoS), distributed DoS (DDoS), and worms, as well as restricting their propagation in the heterogeneous network.
BibTeX:

@article{5668982,

  author = {Hashim, F. and Munasinghe, K.S. and Jamalipour, A.},

  title = {Biologically Inspired Anomaly Detection and Security Control Frameworks for Complex Heterogeneous Networks},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2010},

  volume = {7},

  number = {4},

  pages = {268-281},

  doi = {http://dx.doi.org/10.1109/TNSM.2010.1012.0360}

}

Hellerstein, J. and Singhal, S. and Qian Wang Research challenges in control engineering of computing systems 2009 Network and Service Management, IEEE Transactions on
Vol. 6(4), pp. 206-211 
control theory , computing systems closed loop systems , control engineering , control theory , software engineering DOI  
Abstract: A wide variety of software systems employ closed loops (feedback) to achieve service level objectives and to optimize resource usage. Control theory provides a systematic approach to constructing closed loop systems, and is widely used in disciplines such as mechanical and electrical engineering. This paper describes recent advances in applying control theory to computing systems, and identifies research challenges to address so that control engineering can be widely used by software practitioners.
BibTeX:

@article{5374029,

  author = {Hellerstein, J. and Singhal, S. and Qian Wang},

  title = {Research challenges in control engineering of computing systems},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2009},

  volume = {6},

  number = {4},

  pages = {206-211},

  doi = {http://dx.doi.org/10.1109/TNSM.2009.04.090401}

}

Kin-Hon Ho and Pavlou, G. and Ning Wang and Howarth, M. Joint optimization of intra- and inter-autonomous system traffic engineering 2009 Network and Service Management, IEEE Transactions on
Vol. 6(2), pp. 64-79 
joint optimization, intra-as traffic engineering, inter-as traffic engineering optimisation , traffic engineering computing DOI  
Abstract: Traffic Engineering (TE) involves network configuration in order to achieve optimal IP network performance. The existing literature considers intra- and inter-AS (Autonomous System) TE independently. However, if these two aspects are considered separately, the overall network performance may not be truly optimized. This is due to the interaction between intra- and inter-AS TE, where a good solution for inter-AS TE may not be good for intra-AS TE. To remedy this situation, we propose a joint optimization of intra- and inter-AS TE in order to improve the overall network performance by simultaneously finding the best egress points for inter-AS traffic and the best routing scheme for intra-AS traffic. Three strategies are presented to attack the problem: sequential, nested, and integrated optimization. Our evaluation shows that, in comparison to sequential and nested optimization, integrated optimization can significantly improve overall network performance by being able to accommodate approximately 30%-60% more traffic demand.
BibTeX:

@article{5374828,

  author = {Kin-Hon Ho and Pavlou, G. and Ning Wang and Howarth, M.},

  title = {Joint optimization of intra- and inter-autonomous system traffic engineering},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2009},

  volume = {6},

  number = {2},

  pages = {64-79},

  doi = {http://dx.doi.org/10.1109/TNSM.2009.090601}

}

Hu, C. and Chen, K. and Chen, Y. and Liu, B. and Vasilakos, A. A Measurement Study on Potential Inter-Domain Routing Diversity 2012 Network and Service Management, IEEE Transactions on
Vol. 9(3), pp. 268-278 
internet reliability , failure recovery , routing DOI  
Abstract: In response to Internet emergencies, Internet resiliency is investigated directly through an autonomous system (AS) level graph inferred from policy-compliant BGP paths and/or traceroute paths. Due to policy-driven inter-domain routing, physical connectivity does not necessarily imply network reachability in the AS-level graph, i.e., many physical paths are not visible to the inter-domain routing protocol for connectivity recovery during Internet outages. We refer to such invisible connectivity at the routing layer, which can be quickly restored through simple configurations to recover from routing failures, as potential routing diversity. In this paper, we evaluate two kinds of potential routing diversity, namely Internet eXchange Point (IXP) participant reconnection and peering policy relaxation. Using the most complete dataset of the AS-level map and IXP participants that we could obtain, we evaluate the ability of potential routing diversity to recover routing during different kinds of Internet emergencies. Encouragingly, our experimental results show that 40% to 80% of the interrupted network pairs can be recovered on average beyond policy-compliant paths, with rich path diversity and little traffic shift. This suggests that potential routing diversity is a promising avenue for addressing Internet failures.
BibTeX:

@article{6233057,

  author = {Hu, C. and Chen, K. and Chen, Y. and Liu, B. and Vasilakos, A.},

  title = {A Measurement Study on Potential Inter-Domain Routing Diversity},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2012},

  volume = {9},

  number = {3},

  pages = {268-278},

  doi = {http://dx.doi.org/10.1109/TNSM.2012.070512.110191}

}

Anpeng Huang and Siyu Liu and Linzhen Xie and Zhangyuan Chen and Mukherjee, B. Self-Healing Optical Access Networks (SHOAN) Operated by Optical Switching Technologies 2011 Network and Service Management, IEEE Transactions on
Vol. 8(3), pp. 234-244 
optical access network , low cost , reliable services , self-healing , survivability broadband networks , light transmission , optical fibre subscriber loops , optical switches DOI  
Abstract: An optical access network should offer low-cost reliable services to its end users. To address this problem, an optimal solution is needed which can turn an optical access architecture into a self-healing system. Hence, we propose the Self-Healing Optical Access Network (SHOAN), in which two or more optical access architectures are partners of each other, and they are interconnected by elementary optical crossbar switches into a simple mesh network. In SHOAN, the crossbar switches can keep each access architecture as an independent and closed system for only serving its own end users in normal state. But the crossbars become open in fault scenarios. Whenever a failure occurs in the network, the fault can be monitored and affected services can be recovered by the partner of the access architecture that is affected. Such an interconnected optical access network can withstand failures in its transmission paths, and recover network services in a self-healing way. Compared to existing solutions (e.g., dual-home architecture), illustrative examples demonstrate that SHOAN has many desirable properties: (1) it is robust because risks are disjointed, (2) it is reliable because service recovery is given top priority, and (3) it has low cost because redundant backup components are not necessary since the partner's resources act as backup resources. Analysis results show that SHOAN can minimize disruption duration and network cost for broadband access services.
BibTeX:

@article{6009143,

  author = {Anpeng Huang and Siyu Liu and Linzhen Xie and Zhangyuan Chen and Mukherjee, B.},

  title = {Self-Healing Optical Access Networks (SHOAN) Operated by Optical Switching Technologies},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2011},

  volume = {8},

  number = {3},

  pages = {234-244},

  doi = {http://dx.doi.org/10.1109/TNSM.2011.080311.100065}

}

Huang, G. and Chang, C. and Chuah, C. and Lin, B. Measurement-Aware Monitor Placement and Routing: A Joint Optimization Approach for Network-Wide Measurements 2012 Network and Service Management, IEEE Transactions on
Vol. 9(1), pp. 48-59 
traffic measurement , routing , traffic engineering DOI  
Abstract: Network-wide traffic measurement is important for various network management tasks, ranging from traffic accounting, traffic engineering, and network troubleshooting to security. Previous research in this area has focused on either deriving better monitor placement strategies for fixed routing, or strategically routing traffic sub-populations over existing deployed monitors to maximize the measurement gain. However, neither alone suffices in real scenarios, since not only is the number of deployed monitors limited, but the traffic characteristics and measurement objectives are also constantly changing. This paper presents an MMPR (Measurement-aware Monitor Placement and Routing) framework that jointly optimizes monitor placement and dynamic routing strategy to achieve maximum measurement utility. The main challenge in solving MMPR is to decouple the relevant decision variables and adhere to the intra-domain traffic engineering constraints. We formulate it as an MILP (Mixed Integer Linear Programming) problem and propose several heuristic algorithms to approximate the optimal solution and reduce the computation complexity. Through experiments using real traces and topologies (Abilene, AS6461, and GEANT), we show that our heuristic solutions can achieve measurement gains that are quite close to the optimal solutions, while reducing the computation times by a factor of 23X in Abilene (small), 246X in AS6461 (medium), and 233X in GEANT (large), respectively.
BibTeX:

@article{6128762,

  author = {Huang, G. and Chang, C. and Chuah, C. and Lin, B.},

  title = {Measurement-Aware Monitor Placement and Routing: A Joint Optimization Approach for Network-Wide Measurements},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2012},

  volume = {9},

  number = {1},

  pages = {48-59},

  doi = {http://dx.doi.org/10.1109/TNSM.2012.010912.110128}

}

Hwang, J. and Ramakrishnan, K.K. and Wood, T. NetVM: High Performance and Flexible Networking Using Virtualization on Commodity Platforms 2015 Network and Service Management, IEEE Transactions on
Vol. 12(1), pp. 34-47 
hardware;sockets;software;switches;throughput;virtual machine monitors;virtualization;cloud computing;network function virtualization;software defined network DOI  
Abstract: NetVM brings virtualization to the Network by enabling high bandwidth network functions to operate at near line speed, while taking advantage of the flexibility and customization of low cost commodity servers. NetVM allows customizable data plane processing capabilities such as firewalls, proxies, and routers to be embedded within virtual machines, complementing the control plane capabilities of Software Defined Networking. NetVM makes it easy to dynamically scale, deploy, and reprogram network functions. This provides far greater flexibility than existing purpose-built, sometimes proprietary hardware, while still allowing complex policies and full packet inspection to determine subsequent processing. It does so with dramatically higher throughput than existing software router platforms. NetVM is built on top of the KVM platform and Intel DPDK library. We detail many of the challenges we have solved such as adding support for high-speed inter-VM communication through shared huge pages and enhancing the CPU scheduler to prevent overheads caused by inter-core communication and context switching. NetVM allows true zero-copy delivery of data to VMs both for packet processing and messaging among VMs within a trust boundary. Our evaluation shows how NetVM can compose complex network functionality from multiple pipelined VMs and still obtain throughputs up to 10 Gbps, an improvement of more than 250% compared to existing techniques that use SR-IOV for virtualized networking.
BibTeX:

@article{7036139,

  author = {Hwang, J. and Ramakrishnan, K.K. and Wood, T.},

  title = {NetVM: High Performance and Flexible Networking Using Virtualization on Commodity Platforms},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2015},

  volume = {12},

  number = {1},

  pages = {34-47},

  doi = {http://dx.doi.org/10.1109/TNSM.2015.2401568}

}

Hwang, JeeHyun and Xie, Tao and Chen, Fei and Liu, Alex X. Systematic Structural Testing of Firewall Policies 2012 Network and Service Management, IEEE Transactions on
Vol. 9(1), pp. 1-11 
firewall policy , fault detection , structural coverage , test packet generation , validation DOI  
Abstract: Firewalls are the mainstay of enterprise security and the most widely adopted technology for protecting private networks. As the quality of protection provided by a firewall directly depends on the quality of its policy (i.e., configuration), ensuring the correctness of firewall policies is important and yet difficult. To help ensure the correctness, we propose a systematic structural testing approach for firewall policies. We define structural coverage (based on coverage criteria of rules, predicates, and clauses) on the firewall policy under test. To achieve high structural coverage effectively, we have developed four automated packet generation techniques: the random packet generation, the one based on local constraint solving (considering individual rules locally in a policy), the one based on global constraint solving (considering multiple rules globally in a policy), and the one based on boundary values. We have conducted an experiment on a set of real policies and a set of faulty policies to detect faults with generated packet sets. Generally, our experimental results show that a packet set with higher structural coverage has higher fault-detection capability (i.e., detecting more injected faults). Our experimental results show that a reduced packet set (maintaining the same level of structural coverage with the corresponding original packet set) maintains similar fault-detection capability with the original set.
BibTeX:

@article{6138839,

  author = {Hwang, JeeHyun and Xie, Tao and Chen, Fei and Liu, Alex X.},

  title = {Systematic Structural Testing of Firewall Policies},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2012},

  volume = {9},

  number = {1},

  pages = {1-11},

  doi = {http://dx.doi.org/10.1109/TNSM.2012.012012.100092}

}

Illiano, V.P. and Lupu, E.C. Detecting Malicious Data Injections in Event Detection Wireless Sensor Networks 2015 Network and Service Management, IEEE Transactions on
Vol. 12(3), pp. 496-510 
accuracy;correlation;data models;estimation;event detection;noise;ad-hoc and sensor networks;mining and statistical methods;security management DOI  
Abstract: Wireless sensor networks (WSNs) are vulnerable and can be maliciously compromised, either physically or remotely, with potentially devastating effects. When sensor networks are used to detect the occurrence of events such as fires, intruders, or heart attacks, malicious data can be injected to create fake events, and thus trigger an undesired response, or to mask the occurrence of actual events. We propose a novel algorithm to identify malicious data injections and build measurement estimates that are resistant to several compromised sensors even when they collude in the attack. We also propose a methodology to apply this algorithm in different application contexts and evaluate its results on three different datasets drawn from distinct WSN deployments. This leads us to identify different tradeoffs in the design of such algorithms and how they are influenced by the application context.
BibTeX:

@article{7131545,

  author = {Illiano, V.P. and Lupu, E.C.},

  title = {Detecting Malicious Data Injections in Event Detection Wireless Sensor Networks},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2015},

  volume = {12},

  number = {3},

  pages = {496-510},

  doi = {http://dx.doi.org/10.1109/TNSM.2015.2448656}

}

Jaiswal, Vimmi and Sen, Aritra and Verma, Akshat Integrated Resiliency Planning in Storage Clouds 2014 Network and Service Management, IEEE Transactions on
Vol. 11(1), pp. 3-14 
approximation algorithms;bandwidth;containers;cost function;planning;throughput;cost minimization;data replication;disaster recovery planning;storage clouds DOI  
Abstract: Storage clouds use economies of scale to host data for diverse enterprises. However, enterprises differ in the requirements for their data. In this work, we investigate the problem of resiliency or disaster recovery (DR) planning in a storage cloud. The resiliency requirements vary greatly between different enterprises and also between different datasets for the same enterprise. We present in this paper Resilient Storage Cloud Map (RSCMap), a generic cost-minimizing optimization framework for disaster recovery planning, where the cost function may be tailored to meet diverse objectives. We present fast algorithms that come up with a minimum cost DR plan, while meeting all the DR requirements associated with all the datasets hosted on the storage cloud. Our algorithms have strong theoretical properties: 2 factor approximation for bandwidth minimization and fixed parameter constant approximation for the general cost minimization problem. We perform a comprehensive experimental evaluation of RSCMap using models for a wide variety of replication solutions and show that RSCMap outperforms existing resiliency planning approaches.
BibTeX:

@article{Jaiswal2014,

  author = {Jaiswal, Vimmi and Sen, Aritra and Verma, Akshat},

  title = {Integrated Resiliency Planning in Storage Clouds},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2014},

  volume = {11},

  number = {1},

  pages = {3-14},

  doi = {http://dx.doi.org/10.1109/TNSM.2013.120713.120349}

}

Jesi, G.P. and Montresor, A. and Babaoglu, O. Proximity-Aware Superpeer Overlay Topologies 2007 Network and Service Management, IEEE Transactions on
Vol. 4(2), pp. 74-83 
communication system control, delay, insects, network topology, peer to peer computing, physics computing, protocols, telecommunication traffic, telephony, traffic control; peer-to-peer computing, protocols, telecommunication network topology DOI  
Abstract: The concept of superpeer has been introduced to improve the performance of popular P2P applications. A superpeer is a "powerful" node that acts as a server for a set of clients, and as an equal with respect to other superpeers. By exploiting heterogeneity, the superpeer paradigm can lead to improved efficiency, without compromising the decentralized nature of P2P networks. The main issues in constructing superpeer-based overlays are the selection of superpeers and the association between superpeers and clients. Generally, superpeers are either run voluntarily (without an explicit selection process), or chosen among the "best" nodes in the network, for example those with the most abundant resources, such as bandwidth or storage. In several contexts, however, shared resources are not the only factor; latency between clients and superpeers may play an important role, for example in online games and IP-Telephony applications. This paper presents SG-2, a novel protocol for building and maintaining proximity-aware superpeer topologies. SG-2 uses a gossip-based protocol to spread messages to nearby nodes and a biology-inspired task allocation mechanism to promote the "best" nodes to superpeer status. The paper includes extensive simulation experiments to prove the efficiency, scalability and robustness of SG-2.
BibTeX:

@article{4383309,

  author = {Jesi, G.P. and Montresor, A. and Babaoglu, O.},

  title = {Proximity-Aware Superpeer Overlay Topologies},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2007},

  volume = {4},

  number = {2},

  pages = {74-83},

  doi = {http://dx.doi.org/10.1109/TNSM.2007.070904}

}

Jiang, Frank and Dong, Daoyi and Cao, Longbing and Frater, Michael R. Agent-Based Self-Adaptable Context-Aware Network Vulnerability Assessment 2013 Network and Service Management, IEEE Transactions on
Vol. 10(3), pp. 255-270 
vulnerability assessment;agent-based system;intrusion detection system (ids);management information base (mib);threats awareness analysis DOI  
Abstract: Immunology-inspired computer security has attracted enormous attention because of its potential impact on next-generation service-oriented network operating systems. In this paper, we propose a new agent-based threat awareness assessment strategy inspired by the human immune system to dynamically adapt against attacks. Specifically, this approach is based on the dynamic reconfiguration of file access rights for system calls or logs (e.g., file rewritability) with balanced adaptability and vulnerability. Based on an information-theoretic analysis of the coherent associations among adaptability, autonomy, and vulnerability, a generic solution is suggested to break down their coherent links. The principle is to maximize the adaptability of context- and situation-aware systems while simultaneously reducing their vulnerability. Experimental results show the efficiency of the proposed biological-behaviour-inspired vulnerability awareness system.
BibTeX:

@article{6599023,

  author = {Jiang, Frank and Dong, Daoyi and Cao, Longbing and Frater, Michael R.},

  title = {Agent-Based Self-Adaptable Context-Aware Network Vulnerability Assessment},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2013},

  volume = {10},

  number = {3},

  pages = {255-270},

  doi = {http://dx.doi.org/10.1109/TNSM.2013.090313.120388}

}

Jiang, Miao and Munawar, Mohammad A. and Reidemeister, Thomas and Ward, Paul A.S. System Monitoring with Metric-Correlation Models 2011 Network and Service Management, IEEE Transactions on
Vol. 8(4), pp. 348-360 
system monitoring, fault detection, heteroscedasticity, metric-correlation models, multi-variable correlations, recursive least squares DOI  
Abstract: Modern software systems expose management metrics to help track their health. Recently, it was demonstrated that correlations among these metrics allow errors to be detected and their causes localized. Prior research shows that linear models can capture many of these correlations. However, our research shows that several factors may prevent linear models from accurately describing correlations, even if the underlying relationship is linear. Common phenomena we have observed include relationships that evolve, relationships with missing variables, and heterogeneous residual variance of the correlated metrics. Usually these phenomena can be discovered by testing for heteroscedasticity of the underlying linear models. Such behaviour violates the assumptions of simple linear regression, which thus fail to describe system dynamics correctly. In this paper we address the above challenges by employing efficient variants of Ordinary Least Squares regression models. In addition, we automate the process of error detection by introducing the Wilcoxon Rank-Sum test after proper correlations modeling. We validate our models using a realistic Java-Enterprise-Edition application. Using fault-injection experiments we show that our improved models capture system behavior accurately.
BibTeX:

@article{6102277,

  author = {Jiang, Miao and Munawar, Mohammad A. and Reidemeister, Thomas and Ward, Paul A.S.},

  title = {System Monitoring with Metric-Correlation Models},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2011},

  volume = {8},

  number = {4},

  pages = {348-360},

  doi = {http://dx.doi.org/10.1109/TNSM.2011.120811.100033}

}

Jiang, Yexi and Perng, Chang-Shing and Li, Tao and Chang, Rong N. Cloud Analytics for Capacity Planning and Instant VM Provisioning 2013 Network and Service Management, IEEE Transactions on
Vol. 10(3), pp. 312-325 
cloud computing;capacity planning;cloud analytics;data mining;instant provisioning DOI  
Abstract: The popularity of cloud service spurs the increasing demands of virtual resources to the service vendors. Along with the promising business opportunities, it also brings new technique challenges such as effective capacity planning and instant cloud resource provisioning. In this paper, we describe our research efforts on improving the service quality for the capacity planning and instant cloud resource provisioning problem. We first formulate both of the two problems as a generic cost-sensitive prediction problem. Then, considering the highly dynamic environment of cloud, we propose an asymmetric and heterogeneous measure to quantify the prediction error. Finally, we design an ensemble prediction mechanism by combining the prediction power of a set of prediction techniques based on the proposed measure. To evaluate the effectiveness of our proposed solution, we design and implement an integrated prototype system to help improve the service quality of the cloud. Our system considers many practical situations of the cloud system, and is able to dynamically adapt to the changing environment. A series of experiments on the IBM Smart Cloud Enterprise (SCE) trace data demonstrate that our method can significantly improve the service quality by reducing the resource provisioning time while maintaining a low cloud overhead.
BibTeX:

@article{6517993,

  author = {Jiang, Yexi and Perng, Chang-Shing and Li, Tao and Chang, Rong N.},

  title = {Cloud Analytics for Capacity Planning and Instant VM Provisioning},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2013},

  volume = {10},

  number = {3},

  pages = {312-325},

  doi = {http://dx.doi.org/10.1109/TNSM.2013.051913.120278}

}

Jordan, S. Traffic Management and Net Neutrality in Wireless Networks 2011 Network and Service Management, IEEE Transactions on
Vol. 8(4), pp. 297-309 
open wireless architecture, computer network management, law, public policy, telecommunication services DOI  
Abstract: Many wireless ISPs limit the applications that may be used on wireless devices. In the United States, Congress is debating whether wireless network subscribers should have the right to use applications of their choice. We examine whether wireless ISPs should be able to limit applications. We address how wired and wireless networks differ with respect to traffic management, and conclude that wireless networks often require stronger traffic management than wired networks at and below the network layer. We propose dual goals of providing a level playing field between applications offered by ISPs and those offered by competing application providers and guaranteeing wireless ISPs the ability to reasonably manage wireless network resources. We consider three scenarios for how applications may be restricted on wireless networks, and find that none achieves both goals. We review United States communications law, and conclude that ISPs should be prohibited from giving themselves an unfair competitive edge by blocking applications or by denying QoS to competing application providers. We propose a set of regulations based on network architecture and communication law that limits an ISP's ability to restrict applications by requiring an open interface between network and transport layers. We illustrate how ISPs may deploy QoS within such a regulatory framework, and how this proposed policy can achieve our goals.
BibTeX:

@article{6070517,

  author = {Jordan, S.},

  title = {Traffic Management and Net Neutrality in Wireless Networks},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2011},

  volume = {8},

  number = {4},

  pages = {297-309},

  doi = {http://dx.doi.org/10.1109/TNSM.2011.110311.100093}

}

Jurca, D. and Stadler, R. H-GAP: estimating histograms of local variables with accuracy objectives for distributed real-time monitoring 2010 Network and Service Management, IEEE Transactions on
Vol. 7(2), pp. 83-95 
real-time monitoring, distributed aggregation, adaptive protocols; computerised monitoring, optimisation, protocols, trees (mathematics) DOI  
Abstract: We present H-GAP, a protocol for continuous monitoring, which provides a management station with the value distribution of local variables across the network. The protocol estimates the histogram of local state variables for a given accuracy and with minimal overhead. H-GAP is decentralized and asynchronous to achieve robustness and scalability, and it executes on an overlay interconnecting management processes in network devices. On this overlay, the protocol maintains a spanning tree and updates the histogram through incremental aggregation. The protocol is tunable in the sense that it allows controlling, at runtime, the trade-off between protocol overhead and an accuracy objective. This functionality is realized through dynamic configuration of local filters that control the flow of updates towards the management station. The paper includes an analysis of the problem of histogram aggregation over aggregation trees, a formulation of the global optimization problem, and a distributed solution containing heuristic, tree-based algorithms. Using SUM as an example, we show how general aggregation functions over local variables can be efficiently computed with H-GAP. We evaluate our protocol through simulation using real traces. The results demonstrate the controllability of H-GAP in a selection of scenarios and its efficiency in large-scale networks.
BibTeX:

@article{5471039,

  author = {Jurca, D. and Stadler, R.},

  title = {H-GAP: estimating histograms of local variables with accuracy objectives for distributed real-time monitoring},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2010},

  volume = {7},

  number = {2},

  pages = {83-95},

  doi = {http://dx.doi.org/10.1109/TNSM.2010.06.I8P0292}

}

Kamiyama, N. Analyzing Impact of Introducing CCN on Profit of ISPs 2015 Network and Service Management, IEEE Transactions on
Vol. 12(2), pp. 176-187 
cache memory;internet;numerical models;peer-to-peer computing;routing;servers;topology;ccn;isp;economics DOI  
Abstract: Content centric networking (CCN) has attracted a great deal of attention as a network that can efficiently deliver content. In CCN, content is delivered using the content name, instead of the host IP address, from cache memory implemented at routers. The nodes sending content are not explicitly indicated, and content is delivered from routers that have copies of content on the routes where the Interest packets are transmitted. Therefore, as a result of introducing CCN in ISP networks, the pattern of traffic exchanges among ISPs will change considerably. Customer ISPs normally pay a transit fee to transit ISPs based on the traffic volume transmitted on the transit links. Therefore, the introduction of CCN by ISPs will affect the profit of ISPs. CCN is introduced and operated by ISPs based on their business judgment, so it is important to estimate how CCN affects ISP profit to investigate the likelihood of CCN spreading among many ISPs. In this paper, we formalize the profit of ISPs when implementing CCN, assuming a hierarchical topology of ISPs in three levels and show that introducing CCN increases the profit of layer 2 and 3 ISPs, whereas introducing CCN decreases the profit of layer 1 ISPs. We also clarify that the effect of introducing CCN for the profit of ISPs is more remarkable as the cache capacity or the bias of content popularity increases.
BibTeX:

@article{7105935,

  author = {Kamiyama, N.},

  title = {Analyzing Impact of Introducing CCN on Profit of ISPs},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2015},

  volume = {12},

  number = {2},

  pages = {176-187},

  doi = {http://dx.doi.org/10.1109/TNSM.2015.2432011}

}

Kamiyama, Noriaki Efficient Network Modification to Improve QoS Stability at Failures 2011 Network and Service Management, IEEE Transactions on
Vol. 8(2), pp. 153-164 
qos stability, link capacity, network topology, single link failure; internet, investment, quality of service, stability, telecommunication links, telecommunication network planning, telecommunication traffic DOI  
Abstract: When a link or node fails, flows are detoured around the failed portion, so the hop count of flows and the link load could change dramatically as a result of the failure. As real-time traffic such as video or voice increases on the Internet, ISPs are required to provide stable quality as well as connectivity at failures. For ISPs, how to effectively improve the stability of these qualities at failures with the minimum investment cost is an important issue, and they need to effectively select a limited number of locations to add link facilities. In this paper, efficient design algorithms to select the locations for adding link facilities are proposed and their effectiveness is evaluated using the actual backbone networks of 36 commercial ISPs.
BibTeX:

@article{5871355,

  author = {Kamiyama, Noriaki},

  title = {Efficient Network Modification to Improve QoS Stability at Failures},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2011},

  volume = {8},

  number = {2},

  pages = {153-164},

  doi = {http://dx.doi.org/10.1109/TNSM.2011.020411.00017}

}

Kamoun, F. RFID system management: state-of-the art and open research issues 2009 Network and Service Management, IEEE Transactions on
Vol. 6(3), pp. 190-205 
radio frequency identification, rfid network management, reader management; radiofrequency identification, telecommunication network management DOI  
Abstract: Radiofrequency identification (RFID) is an enabling technology that can provide organizations with unprecedented improved visibility and traceability of items throughout their journey in the value chain. As RFID deployments are scaling up from pilot projects and proof-of-concept trials towards fully-fledged enterprise applications, RFID system management challenges will escalate. This paper takes an in-depth look at the management aspects of RFID systems. The current state of the art of RFID system management is presented, whereby various approaches are discussed under five functional areas, namely configuration, fault, performance, accounting and security management. The paper also highlights some future trends and open research areas that can potentially trigger further interest and investigation in this important topic.
BibTeX:

@article{5374839,

  author = {Kamoun, F.},

  title = {RFID system management: state-of-the art and open research issues},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2009},

  volume = {6},

  number = {3},

  pages = {190-205},

  doi = {http://dx.doi.org/10.1109/TNSM.2009.03.090305}

}

Karpilovsky, E. and Caesar, M. and Rexford, J. and Shaikh, A. and van der Merwe, J. Practical Network-Wide Compression of IP Routing Tables 2012 Network and Service Management, IEEE Transactions on
Vol. 9(4), pp. 446-458 
network architecture and design, network management, network protocols DOI  
Abstract: The memory Internet routers use to store paths to destinations is expensive, and must be continually upgraded in the face of steadily increasing routing table size. Unfortunately, routing protocols are not designed to gracefully handle cases where memory becomes full, which arises increasingly often due to misconfigurations and routing table growth. Hence router memory must typically be heavily overprovisioned by network operators, inflating operating costs and administrative effort. The research community has primarily focused on clean-slate solutions that cannot interoperate with the deployed base of protocols. This paper presents an incrementally-deployable Memory Management System (MMS) that reduces associated router state by up to 70%. The MMS coalesces prefixes to reduce memory consumption and can be deployed locally on each router or centrally on a route server. The system can operate transparently, without requiring changes in other ASes. Our memory manager can extend router lifetimes up to seven years, given current prefix growth trends.
BibTeX:

@article{6265424,

  author = {Karpilovsky, E. and Caesar, M. and Rexford, J. and Shaikh, A. and van der Merwe, J.},

  title = {Practical Network-Wide Compression of IP Routing Tables},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2012},

  volume = {9},

  number = {4},

  pages = {446-458},

  doi = {http://dx.doi.org/10.1109/TNSM.2012.081012.120246}

}

Katib, I. and Medhi, D. IP/MPLS-over-OTN-over-DWDM Multilayer Networks: An Integrated Three-Layer Capacity Optimization Model, a Heuristic, and a Study 2012 Network and Service Management, IEEE Transactions on
Vol. 9(3), pp. 240-253 
dwdm, ip/mpls, otn, multilayer network, network planning and optimization, routing DOI  
Abstract: Multilayer network design has received significant attention in the current literature. Despite this, the explicit modeling of IP/MPLS over OTN over DWDM, in which the OTN layer is specifically considered, has not been addressed before. This architecture has been identified as a promising one that bridges integration and interaction between the IP and optical layers. In this paper, we present an integrated capacity optimization model for network planning of such multilayer networks that considers the OTN layer as a distinct layer with its unique technological sublayer constraints. We develop a heuristic algorithm to solve this model for large networks. Finally, we provide a detailed numerical study that considers various cost parameter values for each layer in the network. We analyze the impact of each layer's cost parameter values on neighboring layers and on overall network cost.
BibTeX:

@article{6192354,

  author = {Katib, I. and Medhi, D.},

  title = {IP/MPLS-over-OTN-over-DWDM Multilayer Networks: An Integrated Three-Layer Capacity Optimization Model, a Heuristic, and a Study},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2012},

  volume = {9},

  number = {3},

  pages = {240-253},

  doi = {http://dx.doi.org/10.1109/TNSM.2012.12.110124}

}

Katsalis, K. and Paschos, G.S. and Viniotis, Y. and Tassiulas, L. CPU Provisioning Algorithms for Service Differentiation in Cloud-Based Environments 2015 Network and Service Management, IEEE Transactions on
Vol. 12(1), pp. 61-74 
convergence;heuristic algorithms;measurement;mobile communication;round robin;servers;vectors;cpu scheduling;closed loop systems;service differentiation;stochastic control DOI  
Abstract: This work focuses on the design, analysis and evaluation of Dynamic Weighted Round Robin (DWRR) algorithms that can guarantee CPU service shares in clusters of servers. Our motivation comes from the need to provision multiple server CPUs in cloud-based data center environments. Using stochastic control theory we show that a class of DWRR policies provide the service differentiation objectives, without requiring any knowledge about the arrival and the service process statistics. The member policies provide the data center administrator with trade-off options, so that the communication and computation overhead of the policy can be adjusted. We further evaluate the proposed policies via simulations, using both synthetic and real traces obtained from a medium scale mobile computing application.
BibTeX:

@article{7024161,

  author = {Katsalis, K. and Paschos, G.S. and Viniotis, Y. and Tassiulas, L.},

  title = {CPU Provisioning Algorithms for Service Differentiation in Cloud-Based Environments},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2015},

  volume = {12},

  number = {1},

  pages = {61-74},

  doi = {http://dx.doi.org/10.1109/TNSM.2015.2397345}

}

Keller, Alexander and Benke, Oliver and Debusmann, Markus and Koppel, Andreas and Kreger, Heather and Maier, Andreas and Schopmeyer, Karl The CIM Metrics Model: Introducing flexible data collection and aggregation for performance management in CIM 2004 Network and Service Management, IEEE Transactions on
Vol. 1(2), pp. 59-71 
computer architecture, computer integrated manufacturing, computer network management, counting circuits, distributed computing, environmental management, measurement, remote monitoring, resource management, runtime DOI  
Abstract: We describe new extensions to the CIM Metrics Model, termed BaseMetrics Submodel, whose scope is to define schema extensions capable of specifying and subsequently instantiating new performance measurement data at the runtime of a system. The model has been developed by the Metric Extensions Working Group of the Distributed Management Task Force (DMTF) in which the authors actively participate. The BaseMetrics submodel has been adopted by the CIM Technical Committee and is part of the CIM schema. In addition, we present an extension to the BaseMetrics submodel that allows the definition and aggregation of arbitrary performance data at runtime to address the requirements of service level agreements and workload management systems. Two examples illustrate the applicability of the model to real-life data collection and aggregation scenarios in distributed computing environments.
BibTeX:

@article{4798291,

  author = {Keller, Alexander and Benke, Oliver and Debusmann, Markus and Koppel, Andreas and Kreger, Heather and Maier, Andreas and Schopmeyer, Karl},

  title = {The CIM Metrics Model: Introducing flexible data collection and aggregation for performance management in CIM},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2004},

  volume = {1},

  number = {2},

  pages = {59-71},

  doi = {http://dx.doi.org/10.1109/TNSM.2004.4798291}

}

Keller, A. and Brown, A.B. and Hellerstein, J.L. A Configuration Complexity Model and Its Application to a Change Management System 2007 Network and Service Management, IEEE Transactions on
Vol. 4(1), pp. 13-27 
algorithm design and analysis, application software, automation, automobile manufacture, computer network management, costs, databases, humans, middleware, web server; java, management of change, middleware DOI  
Abstract: The complexity of configuring computing systems is a major impediment to the adoption of new information technology (IT) products and greatly increases the cost of IT services. This paper develops a model of configuration complexity and demonstrates its value for a change management system. The model represents systems as a set of nested containers with configuration controls. From this representation, we derive various metrics that indicate configuration complexity, including execution complexity, parameter complexity, and memory complexity. We apply this model to a J2EE-based enterprise application and its associated middleware stack to assess the complexity of the manual configuration process for this application. We then show how an automated change management system can greatly reduce configuration complexity.
BibTeX:

@article{4275031,

  author = {Keller, A. and Brown, A.B. and Hellerstein, J.L.},

  title = {A Configuration Complexity Model and Its Application to a Change Management System},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2007},

  volume = {4},

  number = {1},

  pages = {13-27},

  doi = {http://dx.doi.org/10.1109/TNSM.2007.030102}

}

Keller, A. and Diao, Y. and Eskesen, F. and Froehlich, S. and Hellerstein, J.L. and Surendra, M. and Spainhower, L.F. Generic On-Line Discovery of Quantitative Models 2004 Network and Service Management, IEEE Transactions on
Vol. 1(1), pp. 39-48 
computer integrated manufacturing, databases, delay, knowledge management, measurement, monitoring, neural networks, predictive models, prototypes, technology management DOI  
Abstract: Quantitative models are needed for a variety of management tasks, including identification of critical variables to use for health monitoring, anticipating service-level violations by using predictive models, and ongoing optimization of configurations. Unfortunately, constructing quantitative models requires specialized skills that are in short supply. Even worse, rapid changes in provider configurations and the evolution of business demands mean that quantitative models must be updated on an ongoing basis. This paper describes an architecture and algorithms for online discovery of quantitative models without prior knowledge of the managed elements. The architecture makes use of an element schema that describes managed elements using the Common Information Model (CIM). Algorithms are presented for selecting a subset of the element metrics to use as explanatory variables in a quantitative model and for constructing the quantitative model itself. We further describe a prototype system based on this architecture that incorporates these algorithms. We apply the prototype to online estimation of response times for DB2 Universal Database under a TPC-W workload. Of the approximately 500 metrics available from the DB2 performance monitor, our system chooses three to construct a model that explains 72 percent of the variability of response time.
BibTeX:

@article{4623693,

  author = {Keller, A. and Diao, Y. and Eskesen, F. and Froehlich, S. and Hellerstein, J.L. and Surendra, M. and Spainhower, L.F.},

  title = {Generic On-Line Discovery of Quantitative Models},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2004},

  volume = {1},

  number = {1},

  pages = {39-48},

  doi = {http://dx.doi.org/10.1109/TNSM.2004.4623693}

}

Kettig, Oliver and Kolbe, Hans-Joerg Monitoring the Impact of P2P Users on a Broadband Operator's Network over Time 2011 Network and Service Management, IEEE Transactions on
Vol. 8(2), pp. 116 -127 
communication systems , broadband , networks , peer-to-peer (p2p) broadband networks , data privacy , peer-to-peer computing , telecommunication traffic DOI  
Abstract: Since their emergence, peer-to-peer (P2P) applications have been generating a considerable fraction of the overall transferred bandwidth in broadband networks. Residential broadband service has been moving from one geared towards technology enthusiasts and early adopters to a commodity for a large fraction of households. Thus, the question whether P2P is still the dominant application in terms of bandwidth usage becomes highly relevant for broadband operators. In this work we present an adaptation of a previously published method for classifying broadband users into a P2P and a non-P2P group based on the number of communication partners ("peers") they have in a dedicated timeframe. Based on this classification, we derive their impact on network characteristics like the number of active users and their aggregate bandwidth. Privacy is assured by anonymization of the data and by not taking the packet payloads into account. We apply our method to real operational data collected in 2007 and 2010, respectively, from a major German DSL provider's access link which transported all traffic each user generated and received. In 2010 the fraction of P2P users clearly decreased compared to previous years. Nevertheless, we find that P2P users are still large contributors to the total amount of traffic seen, especially in the upstream direction. However, in 2010 the impact of P2P on the bandwidth peaks in the busy hours clearly decreased, while other applications have a growing impact, leading to an increased bandwidth usage per subscriber in the peak hours. Further analysis also reveals that the P2P users' traffic still does not exhibit strong locality. We compare our findings to those available in the literature and propose areas for future work on network monitoring, P2P applications, and network design.
BibTeX:

@article{5871353,

  author = {Kettig, Oliver and Kolbe, Hans-Joerg},

  title = {Monitoring the Impact of P2P Users on a Broadband Operator's Network over Time},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2011},

  volume = {8},

  number = {2},

  pages = {116 -127},

  doi = {http://dx.doi.org/10.1109/TNSM.2011.100056}

}

Kind, A. and Stoecklin, M.P. and Dimitropoulos, X. Histogram-based traffic anomaly detection 2009 Network and Service Management, IEEE Transactions on
Vol. 6(2), pp. 110 -121 
computer network security, monitoring, clustering methods computer network security , pattern clustering , probability , telecommunication traffic DOI  
Abstract: Identifying network anomalies is essential in enterprise and provider networks for diagnosing events, like attacks or failures, that severely impact performance, security, and Service Level Agreements (SLAs). Feature-based anomaly detection models (ab)normal network traffic behavior by analyzing different packet header features, like IP addresses and port numbers. In this work, we describe a new approach to feature-based anomaly detection that constructs histograms of different traffic features, models histogram patterns, and identifies deviations from the created models. We assess the strengths and weaknesses of many design options, like the utility of different features, the construction of feature histograms, the modeling and clustering algorithms, and the detection of deviations. Compared to previous feature-based anomaly detection approaches, our work differs by constructing detailed histogram models, rather than using coarse entropy-based distribution approximations. We evaluate histogram-based anomaly detection and compare it to previous approaches using collected network traffic traces. Our results demonstrate the effectiveness of our technique in identifying a wide range of anomalies. The assessed technical details are generic and, therefore, we expect that the derived insights will be useful for similar future research efforts.
BibTeX:

@article{5374831,

  author = {Kind, A. and Stoecklin, M.P. and Dimitropoulos, X.},

  title = {Histogram-based traffic anomaly detection},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2009},

  volume = {6},

  number = {2},

  pages = {110 -121},

  doi = {http://dx.doi.org/10.1109/TNSM.2009.090604}

}
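The histogram-based detection idea above can be sketched compactly. The paper models clustered histogram patterns; the simplified stand-in below (my own construction, not the authors' method) keeps a single baseline histogram per feature and flags time windows whose smoothed Kullback-Leibler divergence from the baseline exceeds a threshold.

```python
# Simplified histogram-based anomaly detection: one baseline histogram,
# KL divergence as the deviation measure (illustrative sketch only).
import math
from collections import Counter

def histogram(values, bins):
    h = Counter(values)
    total = sum(h.values())
    # Laplace smoothing keeps the KL divergence finite on unseen bins.
    return [(h.get(b, 0) + 1) / (total + len(bins)) for b in bins]

def kl(p, q):
    # Kullback-Leibler divergence between two smoothed histograms.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

def detect(windows, bins, threshold=0.2):
    # Baseline = mean histogram over all windows; flag strong deviations.
    hists = [histogram(w, bins) for w in windows]
    baseline = [sum(col) / len(hists) for col in zip(*hists)]
    return [i for i, h in enumerate(hists) if kl(h, baseline) > threshold]
```

With destination ports as the feature, a port-scan window spreads its probability mass over many bins and stands out sharply from HTTP-dominated traffic; the threshold value here is an arbitrary illustration, not a tuned choice.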

Kjaer, M.A. and Kihl, M. and Robertsson, A. Resource allocation and disturbance rejection in web servers using SLAs and virtualized servers 2009 Network and Service Management, IEEE Transactions on
Vol. 6(4), pp. 226 -239 
web server , disturbance rejection , feed-forward , online estimation , prediction , resource management , response-time control , virtualization discrete event simulation , feedforward , file servers , parameter estimation , resource allocation , telecommunication control DOI  
Abstract: Resource management in IT enterprises gains more and more attention due to high operation costs. For instance, web sites are subject to highly variable traffic loads over the year, over the day, or even over the minute. Online adaptation to the changing environment is one way to reduce losses in operation. Control systems based on feedback provide methods for such adaptation, but are by nature slow, since changes in the environment have to propagate through the system before being compensated. Therefore, feed-forward systems can be introduced, which have been shown to improve transient performance. However, earlier proposed feed-forward systems have been based on off-line estimation. In this article we show that off-line estimation can be problematic in online applications. Therefore, we propose a method where parameters are estimated online, and thus also adapt to the changing environment. We compare our solution to two other control strategies proposed in the literature, which are based on off-line estimation of certain parameters. We evaluate the controllers with both discrete-event simulations and experiments in our testbed. The investigations show the strength of our proposed control system.
BibTeX:

@article{5374031,

  author = {Kjaer, M.A. and Kihl, M. and Robertsson, A.},

  title = {Resource allocation and disturbance rejection in web servers using SLAs and virtualized servers},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2009},

  volume = {6},

  number = {4},

  pages = {226 -239},

  doi = {http://dx.doi.org/10.1109/TNSM.2009.04.090403}

}

Kristiansson, Johan and Parnes, Peter An application-layer approach to seamless mobile multimedia communication 2006 Network and Service Management, IEEE Transactions on
Vol. 3(1), pp. 33 -42 
mobility management , soft-handover , ubiquitous multimedia DOI  
Abstract: Providing seamless IP mobility support is one of the most challenging problems on the path towards a world of mobile and ubiquitous multimedia communication.
BibTeX:

@article{4798305,

  author = {Kristiansson, Johan and Parnes, Peter},

  title = {An application-layer approach to seamless mobile multimedia communication},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2006},

  volume = {3},

  number = {1},

  pages = {33 -42},

  doi = {http://dx.doi.org/10.1109/TNSM.2006.4798305}

}

Kulkarni, P. G. and McClean, S. I. and Parr, G. P. and Black, M. M. Lightweight proactive queue management 2006 Network and Service Management, IEEE Transactions on
Vol. 3(2), pp. 1 -11 
proactive management , adaptive learning , performance management , recursive least squares , self management DOI  
Abstract: The quest for better resource control has been the driving force behind Active Queue Management (AQM) research. Random Early Detection (RED), the de facto standard, and its variants have been proposed as simple solutions to the AQM problem. These approaches, however, are known to suffer from problems like parameter sensitivity and inability to capture input traffic load fluctuations accurately, thereby resulting in instability. This paper presents a proactive queue management algorithm called PAQMAN that captures input traffic load fluctuations accurately and regulates the queue size around the desirable level. PAQMAN draws from the predictability in the underlying traffic by employing the Recursive Least Squares (RLS) algorithm to forecast the average queue size over the next prediction interval using the average queue size information of the past intervals. The packet drop probability is then computed as a function of this predicted average queue size. The performance of PAQMAN has been evaluated and compared against existing AQM schemes through ns-2 simulations that encompass varying network conditions for networks comprising single as well as multiple bottleneck links. Simulation results demonstrate that PAQMAN maintains a relatively low queue size, while at the same time achieving high link utilization and low packet loss. Moreover, the computational overhead of PAQMAN is negligible (lightweight), which further justifies its use.
BibTeX:

@article{4798310,

  author = {Kulkarni, P. G. and McClean, S. I. and Parr, G. P. and Black, M. M.},

  title = {Lightweight proactive queue management},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2006},

  volume = {3},

  number = {2},

  pages = {1 -11},

  doi = {http://dx.doi.org/10.1109/TNSM.2006.4798310}

}
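PAQMAN's core loop, RLS prediction of the next average queue size followed by a drop probability derived from the prediction, can be sketched as below. The predictor is a textbook RLS filter; the linear drop-probability rule and all parameter values are my own stand-ins, not PAQMAN's.

```python
# RLS prediction of the next average queue size plus a derived drop
# probability (illustrative sketch; parameters are not PAQMAN's).

class RLSPredictor:
    def __init__(self, order=2, lam=0.98, delta=100.0):
        self.order, self.lam = order, lam
        self.w = [0.0] * order                    # regression coefficients
        # Inverse correlation matrix, initialised to delta * I.
        self.P = [[delta if i == j else 0.0 for j in range(order)]
                  for i in range(order)]

    def update(self, history, target):
        x = history[-self.order:]                 # regressor vector
        Px = [sum(self.P[i][j] * x[j] for j in range(self.order))
              for i in range(self.order)]
        denom = self.lam + sum(xi * pi for xi, pi in zip(x, Px))
        k = [pi / denom for pi in Px]             # gain vector
        err = target - sum(wi * xi for wi, xi in zip(self.w, x))
        self.w = [wi + ki * err for wi, ki in zip(self.w, k)]
        self.P = [[(self.P[i][j] - k[i] * Px[j]) / self.lam
                   for j in range(self.order)] for i in range(self.order)]

    def predict(self, history):
        x = history[-self.order:]
        return sum(wi * xi for wi, xi in zip(self.w, x))

def drop_probability(predicted_q, q_ref=50.0, max_p=0.1):
    # Drop probability grows linearly with the predicted excess over the
    # target queue size (a made-up mapping for illustration).
    return min(max_p, max(0.0, (predicted_q - q_ref) / q_ref * max_p))
```

Each interval, the measured average queue size is fed to `update` and the forecast from `predict` drives the drop decision for the next interval.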

Kusic, D. and Kandasamy, N. and Guofei Jiang Combined Power and Performance Management of Virtualized Computing Environments Serving Session-Based Workloads 2011 Network and Service Management, IEEE Transactions on
Vol. 8(3), pp. 245 -258 
power management , predictive control , resource provisioning , virtualization technologies internet , approximation theory , large-scale systems , predictive control , virtual machines , virtualisation DOI  
Abstract: This paper develops an online resource provisioning framework for combined power and performance management in a virtualized computing environment serving session-based workloads. We pose this management problem as one of sequential optimization under uncertainty and solve it using limited lookahead control (LLC), a form of model-predictive control. The approach accounts for the switching costs incurred when provisioning virtual machines and explicitly encodes the risk of provisioning resources in an uncertain and dynamic operating environment. We experimentally validate the control framework on a server cluster supporting three online services. When managed using LLC, our cluster setup saves, on average, 41% in power-consumption costs over a twenty-four hour period when compared to a system operating without dynamic control. Finally, we use trace-based simulations to analyze LLC performance on server clusters larger than our testbed and show how concepts from approximation theory can be used to further reduce the computational burden of controlling large systems.
BibTeX:

@article{5970245,

  author = {Kusic, D. and Kandasamy, N. and Guofei Jiang},

  title = {Combined Power and Performance Management of Virtualized Computing Environments Serving Session-Based Workloads},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2011},

  volume = {8},

  number = {3},

  pages = {245 -258},

  doi = {http://dx.doi.org/10.1109/TNSM.2011.0726.100045}

}
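The limited lookahead control (LLC) idea above can be illustrated with a toy provisioning loop: enumerate every VM-count sequence over a short horizon against a demand forecast, score each with power, SLA-penalty, and switching-cost terms, and apply only the first action of the cheapest sequence. All cost constants here are invented for illustration and are not the paper's model.

```python
# Toy limited-lookahead controller for VM provisioning (illustrative sketch).
from itertools import product

POWER_PER_VM = 1.0   # power cost per provisioned VM per step (assumed)
SLA_PENALTY = 5.0    # penalty per unit of unserved demand (assumed)
SWITCH_COST = 0.5    # cost per VM started or stopped (assumed)
CAPACITY = 10        # requests one VM serves per step (assumed)

def step_cost(vms, prev_vms, demand):
    unserved = max(0, demand - vms * CAPACITY)
    return (vms * POWER_PER_VM + unserved * SLA_PENALTY
            + abs(vms - prev_vms) * SWITCH_COST)

def llc_action(current_vms, forecast, max_vms=5):
    # Exhaustively score every provisioning sequence over the horizon and
    # return only the first action of the cheapest one (receding horizon).
    best = None
    for seq in product(range(max_vms + 1), repeat=len(forecast)):
        prev, cost = current_vms, 0.0
        for vms, demand in zip(seq, forecast):
            cost += step_cost(vms, prev, demand)
            prev = vms
        if best is None or cost < best[0]:
            best = (cost, seq[0])
    return best[1]
```

The exhaustive enumeration grows exponentially in the horizon, which is exactly why the paper turns to approximation for large clusters; switching costs are what keep the controller from thrashing VMs on and off.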

Kin-Wah Kwong and Guérin, R. and Shaikh, A. and Shu Tao Balancing performance, robustness and flexibility in routing systems 2010 Network and Service Management, IEEE Transactions on
Vol. 7(3), pp. 186 -199 
routing, optimization, robustness, multi-topology telecommunication network routing , telecommunication network topology , telecommunication traffic DOI  
Abstract: Modern networks face the challenging task of handling increasingly diverse traffic that is displaying a growing intolerance to disruptions. This has given rise to many initiatives, and in this paper we focus on multiple topology routing as the primary vehicle for meeting those demands. Specifically, we seek routing solutions capable of not just accommodating different performance goals, but also preserving them in the presence of disruptions. The main challenge is computational, i.e., to identify among the enormous number of possible routing solutions the one that yields the best compromise between performance and robustness. This is where our principal contribution lies, as we expand the definition of critical links - a key concept in improving the efficiency of routing computation - and develop a precise methodology to efficiently converge on those solutions. Using this new methodology, we demonstrate that one can compute routing solutions that are both flexible in accommodating different performance requirements and robust in maintaining them in the presence of failures and traffic fluctuations.
BibTeX:

@article{5560573,

  author = {Kin-Wah Kwong and Guérin, R. and Shaikh, A. and Shu Tao},

  title = {Balancing performance, robustness and flexibility in routing systems},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2010},

  volume = {7},

  number = {3},

  pages = {186 -199},

  doi = {http://dx.doi.org/10.1109/TNSM.2010.1009.I9P0355}

}

Lahmadi, A. and Festor, O. A Framework for Automated Exploit Prevention from Known Vulnerabilities in Voice over IP Services 2012 Network and Service Management, IEEE Transactions on
Vol. 9(2), pp. 114-127 
exploit prevention systems , security , session initiation protocol , voice over ip , vulnerability management DOI  
Abstract: We propose a prevention system for SIP-based networks which adopts a rule-based approach to build prevention specifications on SIP protocol activities that stop attacks exploiting an existing vulnerability before they reach their targets. Our approach innovates over existing solutions by making use of the contextual information of a vulnerability targeted by an attack to apply the prevention specification. Manually coding these prevention specifications is tedious and error-prone. Our method automatically infers prevention specifications by analyzing captured SIP exploit traffic. The detection engine uses an efficient method based on event graphs to match protocol activities against available prevention specifications. We describe the different components of our approach and show, through an extended performance study of the implemented system, its applicability to enterprise-level VoIP protection.
BibTeX:

@article{6138261,

  author = {Lahmadi, A. and Festor, O.},

  title = {A Framework for Automated Exploit Prevention from Known Vulnerabilities in Voice over IP Services},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2012},

  volume = {9},

  number = {2},

  pages = {114-127},

  doi = {http://dx.doi.org/10.1109/TNSM.2012.011812.110125}

}

Lange, S. and Gebert, S. and Zinner, T. and Tran-Gia, P. and Hock, D. and Jarschel, M. and Hoffmann, M. Heuristic Approaches to the Controller Placement Problem in Large Scale SDN Networks 2015 Network and Service Management, IEEE Transactions on
Vol. 12(1), pp. 4-17 
context;equations;graphical user interfaces;mathematical model;measurement;optimization;resilience;controller placement;nfv;openflow;poco;sdn;controller placement;failure tolerance;latency;multiobjective optimization;resilience;simulated annealing DOI  
Abstract: Software Defined Networking (SDN) marks a paradigm shift towards an externalized and logically centralized network control plane. A particularly important task in SDN architectures is that of controller placement, i.e., the positioning of a limited number of resources within a network to meet various requirements. These requirements range from latency constraints to failure tolerance and load balancing. In most scenarios, at least some of these objectives are competing, thus no single best placement is available and decision makers need to find a balanced trade-off. This work presents POCO, a framework for Pareto-based Optimal COntroller placement that provides operators with Pareto optimal placements with respect to different performance metrics. In its default configuration, POCO performs an exhaustive evaluation of all possible placements. While this is practically feasible for small and medium sized networks, realistic time and resource constraints call for an alternative in the context of large scale networks or dynamic networks whose properties change over time. For these scenarios, the POCO toolset is extended by a heuristic approach that is less accurate, but yields faster computation times. An evaluation of this heuristic is performed on a collection of real world network topologies from the Internet Topology Zoo. Utilizing a measure for quantifying the error introduced by the heuristic approach allows an analysis of the resulting trade-off between time and accuracy. Additionally, the proposed methods can be extended to solve similar virtual functions placement problems which appear in the context of Network Functions Virtualization (NFV).
BibTeX:

@article{7038177,

  author = {Lange, S. and Gebert, S. and Zinner, T. and Tran-Gia, P. and Hock, D. and Jarschel, M. and Hoffmann, M.},

  title = {Heuristic Approaches to the Controller Placement Problem in Large Scale SDN Networks},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2015},

  volume = {12},

  number = {1},

  pages = {4-17},

  doi = {http://dx.doi.org/10.1109/TNSM.2015.2402432}

}
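POCO's default exhaustive mode can be sketched on a hop-count toy: enumerate all k-controller placements, score each on failure-free worst-case latency and worst-case latency with one controller down, and keep the Pareto front. The topology and the two metrics below are simplified stand-ins for the framework's richer objective set.

```python
# Exhaustive Pareto evaluation of controller placements on a hop-count
# graph (illustrative sketch of the POCO idea, not the POCO toolset).
from itertools import combinations
from collections import deque

def hop_distances(graph, src):
    # BFS hop distances from src to every node.
    dist, q = {src: 0}, deque([src])
    while q:
        u = q.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def metrics(graph, placement, dist):
    # (worst-case latency to nearest controller,
    #  same metric under the worst single-controller failure)
    normal = max(min(dist[c][v] for c in placement) for v in graph)
    if len(placement) == 1:
        return normal, float("inf")
    failure = max(
        max(min(dist[c][v] for c in placement if c != dead) for v in graph)
        for dead in placement)
    return normal, failure

def pareto_placements(graph, k):
    dist = {c: hop_distances(graph, c) for c in graph}
    scored = [(metrics(graph, p, dist), p)
              for p in combinations(sorted(graph), k)]
    # Keep placements not dominated in both objectives by a different score.
    return [(m, p) for m, p in scored
            if not any(o[0] <= m[0] and o[1] <= m[1] and o != m
                       for o, _ in scored)]
```

On a five-node path, placing the two controllers at the inner nodes 1 and 3 is the unique Pareto-optimal choice under these two metrics, which matches the intuition that resilience pushes controllers away from both each other and the graph's edge.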

Lau, W. and Jha, S. Failure-Oriented Path Restoration Algorithm for Survivable Networks 2004 Network and Service Management, IEEE Transactions on
Vol. 1(1), pp. 11 -20 
approximation algorithms , availability , bandwidth , heuristic algorithms , multiprotocol label switching , polynomials , protection , routing , telecommunication traffic , virtual private networks DOI  
Abstract: In this article, a new polynomial-time approximation algorithm called Service Path Local Optimization (SPLO) is proposed for the online restoration problem. SPLO is shown to perform competitively with existing offline heuristic algorithms in terms of spare capacity. SPLO is designed for online computation where only one request is computed at any one time, and the decision making does not depend on future requests. The polynomial-time and online nature of the algorithm makes SPLO suitable for use in real-time on-demand path request applications. SPLO can be combined with a non-polynomial post-processing component that re-optimizes the backup paths. Significant reductions in spare capacity requirements are achievable at the expense of higher computation time. Further, the potential for SPLO as an algorithm in traffic engineering applications is investigated by looking at the performance impact when source-destination-based traffic aggregation is applied. We also introduce a new concept called path intermix where the service path's allocated bandwidth can be used by the backup paths protecting that particular service path.
BibTeX:

@article{4623690,

  author = {Lau, W. and Jha, S.},

  title = {Failure-Oriented Path Restoration Algorithm for Survivable Networks},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2004},

  volume = {1},

  number = {1},

  pages = {11 -20},

  doi = {http://dx.doi.org/10.1109/TNSM.2004.4623690}

}

Sihyung Lee and Hyong Kim Correlation, visualization, and usability analysis of routing policy configurations 2010 Network and Service Management, IEEE Transactions on
Vol. 7(1), pp. 28 -41 
network management, network configuration modeling, usability analysis, correlation, visualization computer networks , correlation methods , telecommunication network routing DOI  
Abstract: Network configurations implement a set of policies that control a network's behavior. Therefore, correct understanding of the configurations is vital to ensure that the network operates according to the intended policies. However, the current practice of manually reading a large number of configuration commands, which are written in low-level languages and distributed in multiple devices, is inefficient and significantly increases management costs and operator errors. We propose a system that helps decode network configurations by interpreting low-level fragmented configurations and then presenting their high-level intended policies. In particular, the proposed system is applicable to inter-domain routing policies, one of the most complex aspects of network configurations. We implement our system and evaluate its effectiveness through a set of user studies involving 44 participants. These studies examine the participants' comprehension of routing policies presented with our system as compared to those presented with existing configuration languages. The studies show that our system improves both accuracy, from 70% to nearly 100%, and time-to-task-completion, from 30 minutes to 10 minutes. We believe that our system provides a basis for a clean separation of policy intent from its implementation so that policies can be better designed and understood. We also discuss the weaknesses in usability of current network configurations and argue that all aspects of future management systems need to be designed to address these usability issues.
BibTeX:

@article{5412871,

  author = {Sihyung Lee and Hyong Kim},

  title = {Correlation, visualization, and usability analysis of routing policy configurations},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2010},

  volume = {7},

  number = {1},

  pages = {28 -41},

  doi = {http://dx.doi.org/10.1109/TNSM.2010.I9P0315}

}

Li, Bixin and Ji, Shunhui and Qiu, Dong and Leung, Hareton and Zhang, Gongyuan Verifying the Concurrent Properties in BPEL Based Web Service Composition Process 2013 Network and Service Management, IEEE Transactions on
Vol. 10(4), pp. 410-424 
analytical models;bills of materials;concurrent computing;dsl;process control;synchronization;web services;bpel;formal verification;web service composition;xcfg DOI  
BibTeX:

@article{6662355,

  author = {Li, Bixin and Ji, Shunhui and Qiu, Dong and Leung, Hareton and Zhang, Gongyuan},

  title = {Verifying the Concurrent Properties in BPEL Based Web Service Composition Process},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2013},

  volume = {10},

  number = {4},

  pages = {410-424},

  doi = {http://dx.doi.org/10.1109/TNSM.2013.111113.120379}

}

Li, B. and Liao, L. and Leung, H. and Song, R. PHAT: A Preference and Honesty Aware Trust Model for Web Services 2014 Network and Service Management, IEEE Transactions on
Vol. 11(3), pp. 363-375 
business;computational modeling;monitoring;quality of service;real-time systems;service-oriented architecture;trust model;web services;honesty;preference;web services DOI  
Abstract: Trust is one of the most critical factors for a service requestor when selecting a service from a large pool of candidate services. However, existing web service trust models either do not consider users' preferences for different quality of service (QoS) attributes, or ignore the impact of vicious ratings on trust evaluation. To address these gaps, PHAT, a dynamic trust evaluation model with dual consideration of users' preferences and false ratings, is proposed in this paper. The model introduces an approach to automatically mine users' preferences from their requirements. The preferences are then used to determine the weight of each QoS attribute when integrating trust into multi-dimensional QoS attributes. The "local" trust on a service is derived by combining the trust on QoS attributes and the user's subjective ratings. Then, the users are divided into different groups according to their QoS preferences, and the honesty of each group is assessed by filtering out dishonest users based on a hybrid approach combining rating consistency clustering with an average method. Finally, the weight on ratings is dynamically adjusted according to the results of honesty assessment when calculating the global trustworthiness, namely, the reputation of a service. The proposed model is evaluated with real-world QoS data, and the results indicate that PHAT works well on personalized evaluation of trust, and can effectively dilute the influence of malicious ratings.
BibTeX:

@article{6868989,

  author = {Li, B. and Liao, L. and Leung, H. and Song, R.},

  title = {PHAT: A Preference and Honesty Aware Trust Model for Web Services},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2014},

  volume = {11},

  number = {3},

  pages = {363-375},

  doi = {http://dx.doi.org/10.1109/TNSM.2014.2325771}

}
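Two ingredients of the PHAT model, preference-weighted aggregation of per-attribute QoS trust and discounting of dishonest ratings, can be sketched minimally. The median-deviation filter below is my own simplification of the paper's clustering-based honesty assessment, and the threshold is an arbitrary choice.

```python
# Minimal sketch of preference-weighted trust plus a crude honesty filter
# (my simplification, not PHAT's actual algorithms).
import statistics

def preference_weighted_trust(qos_scores, preferences):
    # qos_scores and preferences are dicts keyed by QoS attribute name;
    # preference weights are normalised so the result stays in [0, 1].
    total = sum(preferences.values())
    return sum(qos_scores[a] * w / total for a, w in preferences.items())

def honest_mean(ratings, max_dev=0.25):
    # Treat ratings far from the median as dishonest and drop them before
    # averaging into a reputation score.
    med = statistics.median(ratings)
    honest = [r for r in ratings if abs(r - med) <= max_dev]
    return sum(honest) / len(honest)
```

A latency-sensitive user weighting latency three times as heavily as availability gets a different trust score for the same service than an availability-sensitive user, which is the personalization effect the paper is after.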

Hong Li and Mason, L. and Rabbat, M. Distributed adaptive diverse routing for voice-over-IP in service overlay networks 2009 Network and Service Management, IEEE Transactions on
Vol. 6(3), pp. 175 -189 
diverse routing, reinforcement learning, voice-over-ip, overlay networks. internet telephony , adaptive systems , failure analysis , telecommunication links , telecommunication network routing DOI  
Abstract: This paper proposes a novel mechanism to discover delay-optimal diverse paths using distributed learning automata for Voice-over-IP (VoIP) routing in service overlay networks. In addition, a novel link failure detection method is proposed for detecting and recovering from link failures to reduce the number of dropped voice sessions. The main contributions of this paper are a decentralized, scalable method for minimizing delay on both a primary and secondary path between all pairs of overlay nodes, while at the same time maintaining the link disjointness between the primary and the secondary optimal paths. Simulations of a 50-node model of AT&T's backbone network show that the proposed method improves the quality of voice calls from unsatisfactory to satisfactory, as measured by the R-factor. With the proposed link failure detection mechanism, the time to recover from a link failure is considerably reduced.
BibTeX:

@article{5374838,

  author = {Hong Li and Mason, L. and Rabbat, M.},

  title = {Distributed adaptive diverse routing for voice-over-IP in service overlay networks},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2009},

  volume = {6},

  number = {3},

  pages = {175 -189},

  doi = {http://dx.doi.org/10.1109/TNSM.2009.03.090304}

}
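The learning-automaton mechanism above can be sketched with a linear reward-inaction (L_R-I) update: the probability of the path just chosen grows when its observed delay is acceptable, and nothing changes otherwise. The reward rule, delay threshold, and learning rate below are my own choices, not the paper's.

```python
# Linear reward-inaction automaton for overlay path selection
# (illustrative sketch; parameters are not the paper's).
import random

class PathAutomaton:
    def __init__(self, n_paths, lr=0.1):
        self.p = [1.0 / n_paths] * n_paths   # action probabilities
        self.lr = lr

    def choose(self, rng):
        # Sample a path index according to the current probabilities.
        r, acc = rng.random(), 0.0
        for i, pi in enumerate(self.p):
            acc += pi
            if r < acc:
                return i
        return len(self.p) - 1

    def reward(self, chosen):
        # L_R-I update: shift probability mass toward the rewarded path;
        # penalties trigger no update at all.
        self.p = [pi + self.lr * (1.0 - pi) if i == chosen
                  else pi * (1.0 - self.lr)
                  for i, pi in enumerate(self.p)]
```

Because only good outcomes move the probabilities, the automaton converges on the low-delay path without any node needing global knowledge, which is what makes the scheme decentralized and scalable.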

Li, J. and Singhal, S. and Swaminathan, R. and Karp, A. H. Managing Data Retention Policies at Scale 2012 Network and Service Management, IEEE Transactions on
Vol. 9(4), pp. 393 -406 
large-scale policy management , cloud service , data retention , encryption , regulatory compliance DOI  
Abstract: Regulatory policies such as EU privacy, HIPAA, and PCI-DSS place requirements on availability, integrity, migration, retention, and access of data, and compliance with such policies on stored data remains a key hurdle to cloud computing. This paper proposes a policy management service that offers scalable management of data retention policies attached to data objects stored in a cloud environment. An important aspect of any data retention service is permanent deletion of data. We achieve secure data deletion by encrypting the data when stored, and then deleting the encryption key at a specified retention time. Thus, we effectively delete the data object and its copies stored in online and offline environments. Our data retention service includes a highly scalable and secure encryption key store to manage encryption keys on-line. A prototype deployed on a 16-machine Linux cluster currently supports 56 MB/sec for encryption, 76 MB/sec for decryption, 31,000 retention policies/sec read and 15,000 retention policies/sec write.
BibTeX:

@article{6335436,

  author = {Li, J. and Singhal, S. and Swaminathan, R. and Karp, A. H.},

  title = {Managing Data Retention Policies at Scale},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2012},

  volume = {9},

  number = {4},

  pages = {393 -406},

  doi = {http://dx.doi.org/10.1109/TNSM.2012.101612.110203}

}
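The encrypt-then-delete-the-key scheme is easy to sketch. Below, a SHA-256 counter keystream stands in for a real cipher; it is a stdlib-only toy for illustration, not production cryptography and not the authors' implementation. Once `expire` destroys a key, every stored or copied ciphertext of that object becomes unrecoverable.

```python
# Retention-by-key-deletion sketch: data is stored only encrypted, and
# "deleting" it means destroying the per-object key (toy cipher, not secure).
import hashlib
import os

def keystream(key, length):
    # SHA-256 in counter mode as an illustrative keystream generator.
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor(data, key):
    # XOR with the keystream; applying it twice recovers the plaintext.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

class RetentionStore:
    def __init__(self):
        self.ciphertexts, self.keys = {}, {}

    def put(self, name, data, expires_at):
        key = os.urandom(32)
        self.ciphertexts[name] = xor(data, key)
        self.keys[name] = (key, expires_at)

    def get(self, name):
        key, _ = self.keys[name]      # raises KeyError once the key is gone
        return xor(self.ciphertexts[name], key)

    def expire(self, now):
        # Destroying the key effectively deletes the object and all its
        # online and offline ciphertext copies at once.
        for name, (_key, t) in list(self.keys.items()):
            if t <= now:
                del self.keys[name]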

Li, Q. and Xu, M. and Wu, J. and Lee, P. and Shi, X. and Chiu, D. and Yang, Y. A Unified Approach to Routing Protection in IP Networks 2012 Network and Service Management, IEEE Transactions on
Vol. 9(3), pp. 306-319 
ip networks , resilience , routing, routing protection DOI  
Abstract: Routing failures are common on the Internet and routing protocols cannot always react fast enough to recover from them, which usually causes packet delivery failures. To address the problem, fast reroute solutions have been proposed to guarantee reroute path availability and to avoid high packet loss after network failures. However, existing solutions are often specific to a single type of routing protocol. It is hard to deploy these solutions together to protect Internet routing, including both intra- and inter-domain routing protocols, because of their individual computational and storage complexity. Moreover, most of them cannot provide effective protection for traffic over failed links, especially for bi-directional traffic. In this paper, we propose a unified fast reroute solution for routing protection under network failures. Our solution leverages identifier-based direct forwarding to guarantee the effectiveness of routing protection and supports incremental deployment. In particular, the enhanced protection cycle (e-cycle) is proposed to construct rerouting paths and to provide node and link protection for both intra- and inter-domain routing protocols. We evaluate our solution by simulations, and the results show that the solution provides 100% failure coverage for all end-to-end routing paths with approximately two extra Forwarding Information Base (FIB) entries. Furthermore, we report an experimental evaluation of the proposed solution in operational networks. Our results show that the proposed solution effectively provides failure recovery and does not introduce processing overhead to packet forwarding.
BibTeX:

@article{6233058,

  author = {Li, Q. and Xu, M. and Wu, J. and Lee, P. and Shi, X. and Chiu, D. and Yang, Y.},

  title = {A Unified Approach to Routing Protection in IP Networks},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2012},

  volume = {9},

  number = {3},

  pages = {306-319},

  doi = {http://dx.doi.org/10.1109/TNSM.2012.070512.110138}

}

Li, Y. and Chen, I. Mobility Management in Wireless Mesh Networks Utilizing Location Routing and Pointer Forwarding 2012 Network and Service Management, IEEE Transactions on
Vol. 9(3), pp. 226-239 
location management , performance analysis , pointer forwarding , routing-based location update , wireless mesh networks DOI  
Abstract: We propose and analyze LMMesh: a routing-based location management scheme with pointer forwarding for wireless mesh networks. LMMesh integrates routing-based location update and pointer forwarding by exploiting the advantages of both methods, while avoiding their drawbacks. It considers the effect of the integration on the overall network cost incurred by location management and packet delivery. By exploring the tradeoff between the service cost for packet delivery and the signaling cost for location management, LMMesh identifies the optimal protocol setting that minimizes the overall network cost on a per-user basis for each individual mesh client, when given a set of parameter values characterizing the specific mobility and service characteristics of the mesh client. We develop an analytical model based on stochastic Petri net techniques for analyzing the performance of LMMesh and a computational procedure for calculating the overall network cost. Through a comparative performance study, we show that LMMesh outperforms both pure routing-based location management schemes and pure pointer forwarding schemes, as well as traditional tunnel-based location management schemes.
BibTeX:

@article{6205097,

  author = {Li, Y. and Chen, I.},

  title = {Mobility Management in Wireless Mesh Networks Utilizing Location Routing and Pointer Forwarding},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2012},

  volume = {9},

  number = {3},

  pages = {226-239},

  doi = {http://dx.doi.org/10.1109/TNSM.2012.051712.100101}

}

Li, Y. and Xie, H. and Wen, Y. and Chow, C. and Zhang, Z. How Much to Coordinate? Optimizing In-Network Caching in Content-Centric Networks 2015 Network and Service Management, IEEE Transactions on
Vol. 12(3), pp. 420-434 
algorithm design and analysis;approximation methods;delays;internet;routing;servers;in-network caching;content-centric networks;coordinated caching;in-network caching DOI  
Abstract: In content-centric networks, it is challenging to optimally provision in-network storage to cache contents, balancing the tradeoffs between the network performance and the provisioning cost. To address this problem, we first propose a holistic model for intradomain networks to characterize the network performance of routing contents to clients and the network cost incurred by globally coordinating the in-network storage capability. We then derive the optimal strategy for provisioning the storage capability that optimizes the overall network performance and cost, and analyze the performance gains via numerical evaluations on real network topologies. Our results reveal interesting phenomena; for instance, different ranges of the Zipf exponent can lead to opposite optimal strategies, and the tradeoffs between the network performance and the provisioning cost have great impacts on the stability of the optimal strategy. We also demonstrate that the optimal strategy can achieve significant gain on both the load reduction at origin servers and the improvement on the routing performance. Moreover, given an optimal coordination level $\ell^\ast$, we design a routing-aware content placement (RACP) algorithm that runs on a centralized server. The algorithm computes and assigns contents to each CCN router to store, which can minimize the overall routing cost, e.g., transmission delay or hop counts, to deliver contents to clients. By conducting extensive simulations using a large-scale trace dataset collected from a commercial 3G network in China, our results demonstrate that our caching scheme can achieve 4% to 22% latency reduction on average over the state-of-the-art caching mechanisms.
BibTeX:

@article{7161334,

  author = {Li, Y. and Xie, H. and Wen, Y. and Chow, C. and Zhang, Z.},

  title = {How Much to Coordinate? Optimizing In-Network Caching in Content-Centric Networks},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2015},

  volume = {12},

  number = {3},

  pages = {420-434},

  doi = {http://dx.doi.org/10.1109/TNSM.2015.2458271}

}
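The abstract's observation that different ranges of the Zipf exponent can flip the optimal caching strategy can be made concrete with a toy calculation. The sketch below is my own illustration, not code from the paper: it computes the fraction of requests a cache holding the k most popular of n contents would serve under Zipf popularity (all function names and parameter values are hypothetical).

```python
# Toy illustration (not from the paper): effect of the Zipf exponent on
# the payoff of caching the k most popular of n contents.
def zipf_weights(n, alpha):
    """Unnormalized Zipf popularity weights for ranks 1..n."""
    return [1.0 / (r ** alpha) for r in range(1, n + 1)]

def top_k_hit_ratio(n, k, alpha):
    """Fraction of requests served by a cache holding the k most popular items."""
    w = zipf_weights(n, alpha)
    return sum(w[:k]) / sum(w)

# A steeper exponent concentrates popularity, so the same small cache
# captures a much larger share of the traffic.
low = top_k_hit_ratio(10000, 100, 0.6)   # flat popularity
high = top_k_hit_ratio(10000, 100, 1.2)  # skewed popularity
```

Under a flat distribution most requests miss the cache, which is one intuition behind why the optimal degree of coordination shifts with the exponent.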

Li, Z. and Lin, J. and Salamatian, K. and Xie, G. Social Connections in User-Generated Content Video Systems: Analysis and Recommendation 2013 Network and Service Management, IEEE Transactions on
Vol. 10(1), pp. 70-83 
ugc systems , friend recommendation , social connection , tag augmentation , user interest DOI  
Abstract: User-generated content (UGC) video systems by definition depend heavily on the input of their community of users and their social interactions for video diffusion and opinion sharing. Nevertheless, we show in this paper, through measurement and analysis of YouKu, the most popular UGC video system in China, that the social connectivity of its users is very low. These observations are consistent with what was reported about YouTube in previous works. As a UGC system can achieve a larger audience through improved connectivity, our findings motivate us to propose a means to enhance users' connectivity by taking advantage of friend recommendation. To this end, we assess two similarity metrics based on users' interests, derived from their uploads and favorites tagging of videos, to evaluate the interest similarity between friends. The results consistently show that friends share common interests to a great extent. Two friend recommendation algorithms are then proposed. The algorithms use public information provided by users to suggest potential friends with similar interests as measured by the similarity metrics. Experiments on our gathered YouKu dataset demonstrate that social connectivity can be greatly enhanced by our friend proposition set and that users can access a larger set of interesting videos through the recommendations.
BibTeX:

@article{6328214,

  author = {Li, Z. and Lin, J. and Salamatian, K. and Xie, G.},

  title = {Social Connections in User-Generated Content Video Systems: Analysis and Recommendation},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2013},

  volume = {10},

  number = {1},

  pages = {70-83},

  doi = {http://dx.doi.org/10.1109/TNSM.2012.100512.120233}

}

Li, Zhe and Simon, Gwendal In a Telco-CDN, Pushing Content Makes Sense 2013 Network and Service Management, IEEE Transactions on
Vol. 10(3), pp. 300-311 
cdn;isp;in-network caching;optimal content placement DOI  
Abstract: The exploding HD video streaming traffic calls for deploying content servers deeper inside network operators' infrastructures. Telco-CDNs are new content distribution services that are managed by Internet Service Providers (ISPs). Since the network operator controls both the infrastructure and the content delivery overlay, it is in a position to engineer a telco-CDN so that networking resources are optimally utilized. In this paper, we show the following two findings: 1. it is possible to implement an efficient algorithm for the placement of video chunks into a telco-CDN; we present an algorithm based on a genetic algorithm implemented on the MapReduce framework, and show that, for a national VoD service, computing a quasi-optimal placement is possible. 2. such a push strategy makes sense because it allows fine-grain traffic management strategies on the underlying infrastructure to be taken into account. Our proposal re-opens the debate about the relevance of such a "push" approach (where the manager of the telco-CDN proactively pushes video content into servers) versus the traditional caching approach (where content is pulled to the servers by requests from clients). Our proposal of a quasi-optimal tracker enables fair comparisons between both approaches for most traffic engineering policies. We illustrate the interest of our proposal in the context of a major European telco-CDN with real traces from a popular Video-on-Demand (VoD) service. Our experimental results show that, given a perfect algorithm for predicting user preferences, our placement algorithm keeps pace with LRU caching in terms of the traditional hit ratio, while the workload on some troubled links (e.g., over-used links) is significantly alleviated under the push-based strategy.
BibTeX:

@article{Li2013,

  author = {Li, Zhe and Simon, Gwendal},

  title = {In a Telco-CDN, Pushing Content Makes Sense},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2013},

  volume = {10},

  number = {3},

  pages = {300-311},

  doi = {http://dx.doi.org/10.1109/TNSM.2013.043013.130474}

}

Lima, A. and Sauve, J. and Souza, N. Capturing the Quality and Business Value of IT Services Using a Business-Driven Model 2012 Network and Service Management, IEEE Transactions on
Vol. 9(4), pp. 421 -432 
it service management , itil , quality and business value of it services , continual service improvement , fuzzy models , relationship between service quality and business impact , root cause identification of service faults DOI  
Abstract: In an IT Service Management setting, current approaches to support Continual Service Improvement (CSI) suffer from many deficiencies, including lack of or weak visibility of business impacts, no modeling of uncertainty, difficulty in combining heterogeneous metrics, among others. The result is that CSI activities typically rely on unstructured procedures based on weak data. We offer a model to capture service quality and the business value delivered by a service; the model aims to solve or diminish these deficiencies. A case study was performed in a large bank with very promising results: although model completeness and complexity need to improve, a face validity exercise was performed during the case study and found that the model is sufficiently useful, trustworthy, reliable and accurate.
BibTeX:

@article{6276353,

  author = {Lima, A. and Sauve, J. and Souza, N.},

  title = {Capturing the Quality and Business Value of IT Services Using a Business-Driven Model},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2012},

  volume = {9},

  number = {4},

  pages = {421 -432},

  doi = {http://dx.doi.org/10.1109/TNSM.2011.081512.120272}

}

Ying-Dar Lin and Po-Ching Lin and Yu-An Lin and Yuan-Cheng Lai On-the-Fly Capture and Replay Mechanisms for Multi-Port Network Devices in Operational Networks 2014 Network and Service Management, IEEE Transactions on
Vol. 11(2), pp. 158-171 
failure analysis;search problems;telecommunication switching;dut failure;openflow switch;binary search algorithm;defect identification;defect reproduction;defect traces;defect-triggering traces;device under test;downsizing ratios;multiport network devices;multiport replay;network connectivity;network device testing;on-the-fly capture;operational networks;partial payloads;payload anomalies;replay mechanisms;generators;payloads;ports (computers);protocols;real-time systems;switches;testing;network devices;openflow switch;downsizing;failover;multi-port replay DOI  
Abstract: Testing network devices in a live environment is desirable because it reflects real operating conditions. However, the defects are not reproducible, and network connectivity will be broken if the device is down. For effective defect reproduction from real traffic, we design a new mechanism that allows the device under test (DUT) to be taken online/offline automatically, and supports multi-port replay for multi-port network devices with an OpenFlow switch. The defect traces are captured when the DUT is online. When a DUT failure is detected, the DUT is taken offline, and the defect-triggering traces are replayed to identify the defect. For efficient replay, we keep only partial payloads in a reduced number of packets in the defect traces that are sufficient to trigger the defects. For defect identification, reduction based on a binary search algorithm is presented to deal with the defects caused by payload anomalies and by overloading. The downsizing ratios in the cases of payload anomalies and overloading are up to 98.8% and 96%, respectively. The minimum outage time of the failover during the DUT failure is obtained when the check interval is 1 second and the number of tolerable consecutive failures is 2.
BibTeX:

@article{6750689,

  author = {Ying-Dar Lin and Po-Ching Lin and Yu-An Lin and Yuan-Cheng Lai},

  title = {On-the-Fly Capture and Replay Mechanisms for Multi-Port Network Devices in Operational Networks},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2014},

  volume = {11},

  number = {2},

  pages = {158-171},

  doi = {http://dx.doi.org/10.1109/TNSM.2014.021714.130528}

}

Ma Lingjun and Pui-Sze Tsang and King-Shan Lui Improving file distribution performance by grouping in peer-to-peer networks 2009 Network and Service Management, IEEE Transactions on
Vol. 6(3), pp. 149 -162 
peer-to-peer, grouping, file distribution. client-server systems , file organisation , peer-to-peer computing , protocols , scheduling DOI  
Abstract: It has been shown that the peer-to-peer paradigm is more efficient than the traditional client-server model for file sharing among a large number of users. Given a group of leechers who want to download a single file and a group of seeds who possess the whole file, the minimum time needed for distributing the file to all users can be calculated based on their bandwidth availabilities. A scheduling algorithm has been developed so that every leecher can obtain the file within this minimum time. Unfortunately, this mechanism is not optimal with regard to the average download time among the peers. In this paper, we study from a theoretical perspective how to reduce the average download time without prolonging the time needed for all leechers to obtain the file. Based on the bandwidth capacities, the seeds and leechers are divided into different groups. We identify the necessary conditions for grouping to bring about benefits. We also study the impact on performance when leechers leave the system before the downloading process is complete. To evaluate our mechanism, we conduct extensive simulations and compare the performance with a BitTorrent-like file sharing algorithm. The results show that our grouping protocol successfully reduces the average download time over a wide range of system configurations.
BibTeX:

@article{5374836,

  author = {Ma Lingjun and Pui-Sze Tsang and King-Shan Lui},

  title = {Improving file distribution performance by grouping in peer-to-peer networks},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2009},

  volume = {6},

  number = {3},

  pages = {149 -162},

  doi = {http://dx.doi.org/10.1109/TNSM.2009.03.090302}

}

Liu, Jungang and Yang, Oliver W.W. Using Fuzzy Logic Control to Provide Intelligent Traffic Management Service for High-Speed Networks 2013 Network and Service Management, IEEE Transactions on
Vol. 10(2), pp. 148-161 
congestion control;fuzzy logic control;max-min fairness;quality of service;robustness;traffic management DOI  
Abstract: In view of the fast-growing Internet traffic, this paper proposes a distributed traffic management framework in which routers are deployed with intelligent data rate controllers to tackle the growing traffic load. Unlike other explicit traffic control protocols that have to estimate network parameters (e.g., link latency, bottleneck bandwidth, packet loss rate, or the number of flows) in order to compute the allowed source sending rate, our fuzzy-logic-based controller can measure the router queue size directly; hence it avoids various potential performance problems arising from parameter estimation while greatly reducing the consumption of computation and memory resources in routers. As a network parameter, the queue size can be accurately monitored and used to proactively decide whether action should be taken to regulate the source sending rate, thus increasing the resilience of the network to traffic congestion. Communication QoS (Quality of Service) is assured by the good performance of our scheme, such as max-min fairness, low queueing delay, and good robustness to network dynamics. Simulation results and comparisons have verified the effectiveness of the scheme and shown that it can achieve better performance than existing protocols that rely on the estimation of network parameters.
BibTeX:

@article{6514996,

  author = {Liu, Jungang and Yang, Oliver W.W.},

  title = {Using Fuzzy Logic Control to Provide Intelligent Traffic Management Service for High-Speed Networks},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2013},

  volume = {10},

  number = {2},

  pages = {148-161},

  doi = {http://dx.doi.org/10.1109/TNSM.2013.043013.120264}

}

Liu, Qin and Wang, Guojun and Wu, Jie Consistency as a Service: Auditing Cloud Consistency 2014 Network and Service Management, IEEE Transactions on
Vol. 11(1), pp. 25-35 
availability;clocks;cloud computing;data models;servers;synchronization;vectors;cloud storage;consistency as a service (caas);heuristic auditing strategy (has);two-level auditing DOI  
Abstract: Cloud storage services have become commercially popular due to their overwhelming advantages. To provide ubiquitous always-on access, a cloud service provider (CSP) maintains multiple replicas for each piece of data on geographically distributed servers. A key problem of using the replication technique in clouds is that it is very expensive to achieve strong consistency on a worldwide scale. In this paper, we first present a novel consistency as a service (CaaS) model, which consists of a large data cloud and multiple small audit clouds. In the CaaS model, a data cloud is maintained by a CSP, and a group of users that constitute an audit cloud can verify whether the data cloud provides the promised level of consistency or not. We propose a two-level auditing architecture, which only requires a loosely synchronized clock in the audit cloud. Then, we design algorithms to quantify the severity of violations with two metrics: the commonality of violations, and the staleness of the value of a read. Finally, we devise a heuristic auditing strategy (HAS) to reveal as many violations as possible. Extensive experiments were performed using a combination of simulations and real cloud deployments to validate HAS.
BibTeX:

@article{6708155,

  author = {Liu, Qin and Wang, Guojun and Wu, Jie},

  title = {Consistency as a Service: Auditing Cloud Consistency},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2014},

  volume = {11},

  number = {1},

  pages = {25-35},

  doi = {http://dx.doi.org/10.1109/TNSM.2013.122613.130411}

}

Xue Liu and Jin Heo and Lui Sha and Xiaoyun Zhu Queueing-Model-Based Adaptive Control of Multi-Tiered Web Applications 2008 Network and Service Management, IEEE Transactions on
Vol. 5(3), pp. 157 -167 
e-commerce, web applications, dynamic resource allocations internet , adaptive control , electronic commerce , online front-ends , queueing theory DOI  
Abstract: Web applications have been increasingly deployed on the Internet. How to effectively allocate system resources to meet the Service Level Objectives (SLOs) is a challenging problem for Web application providers. In this article, we propose a scheme for automated performance control of Web applications via dynamic resource allocations. The scheme uses a queueing model predictor and an online adaptive feedback loop that enforces admission control of the incoming requests to ensure the desired response time target is met. The proposed Queueing-Model-Based Adaptive Control approach combines both the modeling power of queueing theory and the self-tuning power of adaptive control. Therefore, it can handle both modeling inaccuracies and load disturbances in a better way. To evaluate the proposed approach, we built a multi-tiered Web application testbed with open-source components widely adopted in industry. Experimental studies conducted on the testbed demonstrated the effectiveness of the proposed scheme.
BibTeX:

@article{4805132,

  author = {Xue Liu and Jin Heo and Lui Sha and Xiaoyun Zhu},

  title = {Queueing-Model-Based Adaptive Control of Multi-Tiered Web Applications},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2008},

  volume = {5},

  number = {3},

  pages = {157 -167},

  doi = {http://dx.doi.org/10.1109/TNSM.2009.031103}

}
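The queueing-model predictor idea in the abstract above can be illustrated with the simplest case. The sketch below is a hedged toy example of my own, not the authors' controller: it uses the textbook M/M/1 mean response time, R = 1/(mu - lambda), to derive the highest admissible arrival rate that keeps mean response time at a target (the function names and the mu/r_target values are hypothetical).

```python
# Toy M/M/1 sketch (not the paper's model): derive an admission-control
# rate limit from a response-time target.
def mm1_response_time(lam, mu):
    """Mean response time of an M/M/1 queue with arrival rate lam, service rate mu."""
    assert lam < mu, "system must be stable (lam < mu)"
    return 1.0 / (mu - lam)

def admissible_rate(mu, r_target):
    """Highest arrival rate keeping mean M/M/1 response time <= r_target."""
    return max(0.0, mu - 1.0 / r_target)

# With mu = 100 req/s and a 50 ms target, admit at most 80 req/s.
lam = admissible_rate(mu=100.0, r_target=0.05)
```

An adaptive controller in the spirit of the paper would then correct this model-based rate online from measured response times, compensating for modeling error.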

Luo, H. and Zhang, H. and Qiao, C. Optimal Cache Timeout for Identifier-to-Locator Mappings with Handovers 2013 Network and Service Management, IEEE Transactions on
Vol. 10(2), pp. 204-217 
delay;manganese;routing;servers;tunneling;routing architecture;cache timeout;handover process;identifier/locator separation DOI  
Abstract: The locator/ID separation protocol (LISP) proposed for addressing the scalability issue of the current Internet has gained much interest. LISP separates the identifier and locator roles of IP addresses by end point identifiers (EIDs) and locators, respectively. In particular, while EIDs are used in the application and transport layers for identifying nodes, locators are used in the network layer for locating nodes in the network topology. In LISP, packets are tunneled from ingress tunnel routers (ITRs) to egress tunnel routers in a map-and-encapsulation manner. For this purpose, an ITR caches on demand some mappings between EIDs and locators. Since hosts roam from place to place, however, their EID-to-locator mappings change accordingly. Thus, an ITR cannot store a mapping permanently but maintains for every mapping a timer whose default value is set to a given cache timeout. If the cache timeout for a mapping is too short, an ITR frequently queries the mapping system (control plane), resulting in a high traffic load on the control plane. On the other hand, if the cache timeout for a mapping is too long, the mapping could be outdated, resulting in packet loss and associated overheads. Therefore, it is desirable to set appropriate cache timeout for mapping items. In this paper, we analytically determine the optimal cache timeout for EID-to-locator mappings cached at ITRs to minimize the control plane load while remaining efficient for mobility. The results presented here provide valuable insights and guidelines for deploying LISP.
BibTeX:

@article{6400364,

  author = {Luo, H. and Zhang, H. and Qiao, C.},

  title = {Optimal Cache Timeout for Identifier-to-Locator Mappings with Handovers},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2013},

  volume = {10},

  number = {2},

  pages = {204-217},

  doi = {http://dx.doi.org/10.1109/TNSM.2012.122612.110221}

}

Hongbin Luo and Hongke Zhang and Yajuan Qin and Leung, V.C.M. An Approach for Building Scalable Proxy Mobile IPv6 Domains 2011 Network and Service Management, IEEE Transactions on
Vol. 8(3), pp. 176 -189 
mobility management , proxy mobile ipv6 (pmipv6) , distributed hash table , robustness , scalability ip networks , delays , mobility management (mobile radio) DOI  
Abstract: As a promising network-based mobility management method that does not require active participation of mobile nodes (MNs), Proxy Mobile IPv6 (PMIPv6) is attracting considerable attention among the telecommunication and Internet communities. It remains an open issue how to build a scalable PMIPv6 domain that is able to support a large number of MNs while keeping handover delays low. In this paper, we propose an approach for building Scalable And Robust PMIPv6 (SARP) domains. We propose that every mobility access gateway (MAG) in a SARP domain also functions as a local mobility anchor (LMA), and is organized into a virtual ring with all other MAGs. Consistent hashing is used to efficiently distribute the mapping between each MN and its LMA to all MAGs. A MAG finds an MN's LMA by sending a query message to the virtual ring. Our analysis verifies the robustness and scalability of SARP. We also propose two handover procedures for SARP and show that they achieve low handover delays.
BibTeX:

@article{5962387,

  author = {Hongbin Luo and Hongke Zhang and Yajuan Qin and Leung, V.C.M.},

  title = {An Approach for Building Scalable Proxy Mobile IPv6 Domains},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2011},

  volume = {8},

  number = {3},

  pages = {176 -189},

  doi = {http://dx.doi.org/10.1109/TNSM.2011.071511.20100063}

}
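The SARP abstract above relies on consistent hashing over a virtual ring of MAGs to place each mobile node's LMA mapping. As a hedged illustration of that general technique (my own sketch, not the SARP implementation; all names are hypothetical), a minimal ring lookup can be written as:

```python
# Toy consistent-hashing ring (illustrative, not SARP's): each mobile
# node (MN) maps to the gateway whose hash point follows the MN's hash
# on the ring, so adding/removing one gateway relocates few mappings.
import bisect
import hashlib

def ring_hash(key):
    """Stable integer hash of a string key."""
    return int(hashlib.sha256(key.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, gateways):
        # Sorted (hash, gateway) points forming the virtual ring.
        self.points = sorted((ring_hash(g), g) for g in gateways)

    def lookup(self, mn_id):
        """Return the gateway acting as LMA for this mobile node."""
        keys = [p for p, _ in self.points]
        i = bisect.bisect(keys, ring_hash(mn_id)) % len(self.points)
        return self.points[i][1]

ring = Ring(["mag1", "mag2", "mag3", "mag4"])
lma = ring.lookup("mn-42")  # deterministic owner of this MN's mapping
```

Any MAG can perform the same lookup locally, which is what lets SARP avoid a central mapping server.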

Jing Luo and Ying Li and Pershing, J. and Lei Xie and Ying Chen A methodology for analyzing availability weak points in SOA deployment frameworks 2009 Network and Service Management, IEEE Transactions on
Vol. 6(1), pp. 31 -44 
availability, soa, workflow analysis, deployment optimization. business data processing , software architecture , workflow management software DOI  
Abstract: The fundamental characteristics of SOA, loose coupling and on-demand integration, enable organizations to seek more flexibility and responsiveness from their business IT systems. However, this brings challenges to assure QoS, especially availability, which should be considered in an integrated way in an SOA environment. Traditionally, availability is measured for each IT resource, but within SOA environments, rather than being considered individually, availability should be analyzed from an end-to-end view from both business and IT perspectives. In this paper, to address the availability problem of SOA, we propose a methodology that analyzes availability weak points in SOA deployment frameworks, leveraging workflow definitions that specify availability requirements at business level. This methodology includes an effective way to calculate high availability enhancement recommendations for a given SOA deployment topology with near-minimum cost, while meeting the business-level availability requirements. A prototype has been implemented as an extension to IBM's SOA deployment framework. Its efficiency and performance are analyzed here.
BibTeX:

@article{5331279,

  author = {Jing Luo and Ying Li and Pershing, J. and Lei Xie and Ying Chen},

  title = {A methodology for analyzing availability weak points in SOA deployment frameworks},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2009},

  volume = {6},

  number = {1},

  pages = {31 -44},

  doi = {http://dx.doi.org/10.1109/TNSM.2009.090303}

}

Macedo, D. and Dos Santos, A. and Nogueira, J.M.S. and Pujolle, G. A distributed information repository for autonomic context-aware MANETs 2009 Network and Service Management, IEEE Transactions on
Vol. 6(1), pp. 45 -55 
ad hoc networks, autonomic management, context awareness, cross-layering ad hoc networks , middleware , mobile computing , peer-to-peer computing , protocols DOI  
Abstract: Due to the emergence of multimedia context-rich applications and services over wireless networks, networking protocols and services are becoming more and more integrated, thus relying on context and application information to support their operation. Further, wireless protocols and services now employ information from several network layers and the environment, breaking the layering paradigm. In order to cope with this increasing reliance on information, we have proposed MANIP, a middleware for MANETs that instantiates a new networking plane. The Information Plane (InP) is a distributed entity to store and disseminate information concerning the network, its services and the environment, orchestrating the collaboration among cross-layer protocols, autonomic management solutions and context-aware services. We use MANIP to support the autonomic reconfiguration of a P2P network over MANETs. Simulation results show that the MANIP-enabled solutions reduce the response time and increase the number of solved P2P queries when compared to classic, cross-layer implementations of the same protocols.
BibTeX:

@article{5331280,

  author = {Macedo, D. and Dos Santos, A. and Nogueira, J.M.S. and Pujolle, G.},

  title = {A distributed information repository for autonomic context-aware MANETs},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2009},

  volume = {6},

  number = {1},

  pages = {45 -55},

  doi = {http://dx.doi.org/10.1109/TNSM.2009.090304}

}

Malatras, A. and Pavlou, G. and Sivavakeesar, S. A Programmable Framework for the Deployment of Services and Protocols in Mobile Ad Hoc Networks 2007 Network and Service Management, IEEE Transactions on
Vol. 4(3), pp. 12 -24 
ad hoc networks , analytical models , context-aware services , mobile ad hoc networks , network topology , quality of service , routing protocols , software safety , wireless application protocol , wireless networks ad hoc networks , mobile radio , protocols DOI  
Abstract: Mobile ad hoc networks (MANETs) are characterized by their heterogeneity and the diverse capabilities of their nodes given that almost any device with a wireless network interface can join such a network. In such an environment it is difficult to dynamically deploy services and protocols without a common understanding among the participating nodes and their capabilities. A deployment/provisioning framework must cope with the high-level of device heterogeneity, degree of mobility, and should also take into account the potentially limited device resources. This paper presents a context-based programmable framework for dynamic service/protocol deployment that allows the nodes of a mobile ad hoc network to download and safely activate required service/protocol software dynamically. Downloading and activation can be triggered through preconditions evaluated according to available contextual information. This strategy leads to the alignment of the nodes' capabilities so that common services and protocols can be deployed even if they are not available at every node. In addition, dynamic context-driven deployment may lead to a degree of network self-optimization. We present the programmable framework and functionality and evaluate its various aspects through testbed experimentation, simulation and analytical modeling. The results demonstrate good performance with respect to the supported functionality.
BibTeX:

@article{4489644,

  author = {Malatras, A. and Pavlou, G. and Sivavakeesar, S.},

  title = {A Programmable Framework for the Deployment of Services and Protocols in Mobile Ad Hoc Networks},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2007},

  volume = {4},

  number = {3},

  pages = {12 -24},

  doi = {http://dx.doi.org/10.1109/tnsm.2007.021108}

}

Marchal, S. and Francois, J. and State, R. and Engel, T. PhishStorm: Detecting Phishing With Streaming Analytics 2014 Network and Service Management, IEEE Transactions on
Vol. 11(4), pp. 458-471 
feature extraction;google;internet;market research;search engines;uniform resource locators;big data;machine learning;mining and statistical methods;phishing detection;storm;search engine query data;security management;url rating;word relatedness DOI  
Abstract: Despite the growth of prevention techniques, phishing remains an important threat since the principal countermeasures in use are still based on reactive URL blacklisting. This technique is inefficient due to the short lifetime of phishing Web sites, making recent approaches relying on real-time or proactive phishing URL detection techniques more appropriate. In this paper, we introduce PhishStorm, an automated phishing detection system that can analyze in real time any URL in order to identify potential phishing sites. PhishStorm can interface with any email server or HTTP proxy. We argue that phishing URLs usually have few relationships between the part of the URL that must be registered (low-level domain) and the remaining part of the URL (upper-level domain, path, query). We show in this paper that experimental evidence supports this observation and can be used to detect phishing sites. For this purpose, we define the new concept of intra-URL relatedness and evaluate it using features extracted from words that compose a URL based on query data from Google and Yahoo search engines. These features are then used in machine-learning-based classification to detect phishing URLs from a real dataset. Our technique is assessed on 96,018 phishing and legitimate URLs, resulting in a correct classification rate of 94.91% with only 1.44% false positives. An extension for a URL phishingness rating system exhibiting a high confidence rate (> 99%) is proposed. We discuss in this paper efficient implementation patterns that allow real-time analytics using Big Data architectures such as STORM and advanced data structures based on the Bloom filter.
BibTeX:

@article{6975177,

  author = {Marchal, S. and Francois, J. and State, R. and Engel, T.},

  title = {PhishStorm: Detecting Phishing With Streaming Analytics},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2014},

  volume = {11},

  number = {4},

  pages = {458-471},

  doi = {http://dx.doi.org/10.1109/TNSM.2014.2377295}

}

Marchal, S. and Mehta, A. and Gurbani, V.K. and State, R. and Kam-Ho, T. and Sancier-Barbosa, F. Mitigating Mimicry Attacks Against the Session Initiation Protocol 2015 Network and Service Management, IEEE Transactions on
Vol. 12(3), pp. 467-482 
computer crime;degradation;electronic mail;grammar;internet;protocols;servers;sip;anomaly;classification;machine learning;mimicry attacks;multiple classifier systems DOI  
Abstract: The U.S. National Academies of Science's Board on Science, Technology and Economic Policy estimates that the Internet and voice-over-IP (VoIP) communications infrastructure generates 10% of U.S. economic growth. As market forces move increasingly towards Internet and VoIP communications, there is a proportional increase in telephony denial of service (TDoS) attacks. Like denial of service (DoS) attacks, TDoS attacks seek to disrupt business and commerce by directing a flood of anomalous traffic towards key communication servers. In this work, we focus on a new class of anomalous traffic that exhibits a mimicry TDoS attack. Such an attack can be launched by crafting malformed messages with small changes from normal ones. We show that such malicious messages easily bypass intrusion detection systems (IDS) and degrade the goodput of the server drastically by forcing it to parse the message looking for the needed token. Our approach is not to parse at all; instead, we use multiple classifier systems (MCS) to exploit the strength of multiple learners to predict the true class of a message with high probability (98.50% ≤ p ≤ 99.12%). We proceed systematically by first formulating an optimization problem of picking the minimum number of classifiers such that their combination yields the optimal classification performance. Next, we analytically bound the maximum performance of such a system and empirically demonstrate that it is possible to attain close to the maximum theoretical performance across varied datasets. Finally, guided by our analysis, we construct an MCS appliance that demonstrates superior classification accuracy with O(1) runtime complexity across varied datasets.
BibTeX:

@article{7163619,

  author = {Marchal, S. and Mehta, A. and Gurbani, V.K. and State, R. and Kam-Ho, T. and Sancier-Barbosa, F.},

  title = {Mitigating Mimicry Attacks Against the Session Initiation Protocol},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2015},

  volume = {12},

  number = {3},

  pages = {467-482},

  doi = {http://dx.doi.org/10.1109/TNSM.2015.2459603}

}

Marchetto, G. and Ciminiera, L. and Manzillo, M.P. and Risso, F. and Torrero, L. Locating Equivalent Servants over P2P Networks 2011 Network and Service Management, IEEE Transactions on
Vol. 8(1), pp. 65-78 
distributed services , equivalent servants , peer-to-peer overlays , scale-free topology internet , peer-to-peer computing , telecommunication network topology DOI  
Abstract: While peer-to-peer networks are mainly used to locate unique resources across the Internet, new interesting deployment scenarios are emerging. Particularly, some applications (e.g., VoIP) are proposing the creation of overlays for the localization of services based on equivalent servants (e.g., voice relays). This paper explores the possible overlay architectures that can be adopted to provide such services, showing how an unstructured solution based on a scale-free overlay topology is an effective option to deploy in this context. Consequently, we propose EQUATOR (EQUivalent servAnt locaTOR), an unstructured overlay implementing the above mentioned operating principles, based on an overlay construction algorithm that well approximates an ideal scale-free construction model. We present both analytical and simulation results which support our overlay topology selection and validate the proposed architecture.
BibTeX:

@article{5702354,

  author = {Marchetto, G. and Ciminiera, L. and Manzillo, M.P. and Risso, F. and Torrero, L.},

  title = {Locating Equivalent Servants over P2P Networks},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2011},

  volume = {8},

  number = {1},

  pages = {65-78},

  doi = {http://dx.doi.org/10.1109/TNSM.2011.012111.00013}

}

Melo, Marcio and Sargento, Susana and Killat, Ulrich and Timm-Giel, Andreas and Carapinha, Jorge Optimal Virtual Network Embedding: Node-Link Formulation 2013 Network and Service Management, IEEE Transactions on
Vol. 10(4), pp. 356-368 
delays;linear programming;load management;mathematical model;optimization;virtual networks;ilp model;np-hard;virtual networks;embedding;heuristics;mapping;optimization DOI  
BibTeX:

@article{6616685,

  author = {Melo, Marcio and Sargento, Susana and Killat, Ulrich and Timm-Giel, Andreas and Carapinha, Jorge},

  title = {Optimal Virtual Network Embedding: Node-Link Formulation},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2013},

  volume = {10},

  number = {4},

  pages = {356-368},

  doi = {http://dx.doi.org/10.1109/TNSM.2013.092813.130397}

}

Mi, N. and Casale, G. and Smirni, E. ASIdE: Using Autocorrelation-Based Size Estimation for Scheduling Bursty Workloads 2012 Network and Service Management, IEEE Transactions on
Vol. 9(2), pp. 198-212 
fcfs scheduling , sjf scheduling , temporal dependence , delay-based scheduling , no-knowledge scheduling DOI  
Abstract: Temporal dependence in workloads creates peak congestion that can make service unavailable and reduce system performance. To improve system performability under conditions of temporal dependence, a server should quickly process bursts of requests that may need large service demands. In this paper, we propose and evaluate ASIdE, an Autocorrelation-based SIze Estimation, that selectively delays requests which contribute to the workload temporal dependence. ASIdE implicitly approximates the shortest job first (SJF) scheduling policy but without any prior knowledge of job service times. Extensive experiments show that (1) ASIdE achieves good service time estimates from the temporal dependence structure of the workload to implicitly approximate the behavior of SJF; and (2) ASIdE successfully counteracts peak congestion in the workload and improves system performability under a wide variety of settings. Specifically, we show that system capacity under ASIdE is largely increased compared to the first-come first-served (FCFS) scheduling policy and is highly-competitive with SJF.
BibTeX:

@article{6189000,

  author = {Mi, N. and Casale, G. and Smirni, E.},

  title = {ASIdE: Using Autocorrelation-Based Size Estimation for Scheduling Bursty Workloads},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2012},

  volume = {9},

  number = {2},

  pages = {198-212},

  doi = {http://dx.doi.org/10.1109/TNSM.2012.041712.100073}

}

Mijumbi, R. and Serrat, J. and Gorricho, J. and Boutaba, R. A Path Generation Approach to Embedding of Virtual Networks 2015 Network and Service Management, IEEE Transactions on
Vol. 12(3), pp. 334-348 
joining processes;mathematical programming;proposals;resource management;substrates;tin;virtualization;network virtualization;column generation;optimization;resource allocation;virtual network embedding DOI  
Abstract: As the virtualization of networks continues to attract attention from both industry and academia, the virtual network embedding (VNE) problem remains a focus of researchers. This paper proposes a one-shot, unsplittable flow VNE solution based on column generation. We start by formulating the problem as a path-based mathematical program called the primal, for which we derive the corresponding dual problem. We then propose an initial solution which is used, first, by the dual problem and then by the primal problem to obtain a final solution. Unlike most approaches, our focus is not only on embedding accuracy but also on the scalability of the solution. In particular, the one-shot nature of our formulation ensures embedding accuracy, while the use of column generation is aimed at enhancing the computation time to make the approach more scalable. In order to assess the performance of the proposed solution, we compare it against four state-of-the-art approaches as well as the optimal link-based formulation of the one-shot embedding problem. Experiments on a large mix of virtual network (VN) requests show that our solution is near optimal (achieving about 95% of the acceptance ratio of the optimal solution), with a clear improvement over existing approaches in terms of VN acceptance ratio and average substrate network (SN) resource utilization, and a considerable improvement (92% for a SN of 50 nodes) in time complexity compared to the optimal solution.
BibTeX:

@article{7163610,

  author = {Mijumbi, R. and Serrat, J. and Gorricho, J. and Boutaba, R.},

  title = {A Path Generation Approach to Embedding of Virtual Networks},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2015},

  volume = {12},

  number = {3},

  pages = {334-348},

  doi = {http://dx.doi.org/10.1109/TNSM.2015.2459073}

}

Misherghi, G. and Lihua Yuan and Zhendong Su and Chen-Nee Chuah and Hao Chen A general framework for benchmarking firewall optimization techniques 2008 Network and Service Management, IEEE Transactions on
Vol. 5(4), pp. 227-238 
firewall optimization, acl optimization, firewall management, acl partitioning authorisation , benchmark testing , computer networks , integer programming , telecommunication security , ubiquitous computing DOI  
Abstract: Firewalls are among the most pervasive network security mechanisms, deployed extensively from the borders of networks to end systems. The complexity of modern firewall policies has raised the computational requirements for firewall implementations, potentially limiting the throughput of networks. Administrators currently rely on ad hoc solutions to firewall optimization. To address this problem, a few automatic firewall optimization techniques have been proposed, but there has been no general approach to evaluate the optimality of these techniques. In this paper we present a general framework for rule-based firewall optimization. We give a precise formulation of firewall optimization as an integer programming problem and show that our framework produces optimal reordered rule sets that are semantically equivalent to the original rule set. Our framework considers the complex interactions among the rules in firewall configurations and relies on a novel partitioning of the packet space defined by the rules themselves. For validation, we employ this framework on real firewall rule sets for a quantitative evaluation of existing heuristic approaches. Our results indicate that the framework is general and faithfully captures performance benefits of firewall optimization heuristics.
BibTeX:

@article{5010446,

  author = {Misherghi, G. and Lihua Yuan and Zhendong Su and Chen-Nee Chuah and Hao Chen},

  title = {A general framework for benchmarking firewall optimization techniques},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2008},

  volume = {5},

  number = {4},

  pages = {227-238},

  doi = {http://dx.doi.org/10.1109/TNSM.2009.041104}

}

Misra, Sudip and Krishna, P.Venkata and Kalaiselvan, K. and Saritha, V. and Obaidat, Mohammad S. Learning Automata-Based QoS Framework for Cloud IaaS 2014 Network and Service Management, IEEE Transactions on
Vol. 11(1), pp. 15-24 
automata;cloud computing;learning automata;monitoring;quality of service;time factors;virtual machining;qos;cloud computing;infrastructure as a service (iaas);learning automata (la);service level agreement (sla) DOI  
Abstract: This paper presents a Learning Automata (LA)-based QoS (LAQ) framework capable of addressing some of the challenges and demands of various cloud applications. The proposed LAQ framework ensures that the computing resources are used in an efficient manner and are not over- or under-utilized by the consumer applications. Service provisioning can only be guaranteed by continuously monitoring the resource and quantifying various QoS metrics, so that services can be delivered in an on-demand basis with certain levels of guarantee. The proposed framework helps in ensuring guarantees with these metrics in order to provide QoS-enabled cloud services. The performance of the proposed system is evaluated with and without LA, and it is shown that the LA-based solution improves the performance of the system in terms of response time and speed up.
BibTeX:

@article{6750688,

  author = {Misra, Sudip and Krishna, P.Venkata and Kalaiselvan, K. and Saritha, V. and Obaidat, Mohammad S.},

  title = {Learning Automata-Based QoS Framework for Cloud IaaS},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2014},

  volume = {11},

  number = {1},

  pages = {15-24},

  doi = {http://dx.doi.org/10.1109/TNSM.2014.011614.130429}

}

Misra, S. and Rohith Mohan, S.V. and Choudhuri, R. A probabilistic approach to minimize the conjunctive costs of node replacement and performance loss in the management of wireless sensor networks 2010 Network and Service Management, IEEE Transactions on
Vol. 7(2), pp. 107-117 
wireless sensor networks, markov decision processes, maintenance. markov processes , telecommunication network management , wireless sensor networks DOI  
Abstract: In this paper, we consider a sensor network with either node replacement or battery replacement as the maintenance operation. We address the problem of how the failed nodes are to be replaced, in order to obtain a desirable tradeoff between maintenance cost and network performance in the management of the network. Since node replacement and battery replacement are analytically identical, we solve this problem only for the network where the maintenance operation is node replacement. We do this by converting performance loss into cost terms and minimizing the sum of node replacement costs and performance loss costs. We use Markov decision processes (MDP) to develop a probabilistic approach in order to estimate the long-run cost of the network. For this we use statistical data based on the past behaviour of the network. We also propose an algorithm to determine the optimal node-replacement policy. The long-run node replacement cost and the long-run performance loss cost of the simulated network are found to be theoretically consistent.
BibTeX:

@article{5471041,

  author = {Misra, S. and Rohith Mohan, S.V. and Choudhuri, R.},

  title = {A probabilistic approach to minimize the conjunctive costs of node replacement and performance loss in the management of wireless sensor networks},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2010},

  volume = {7},

  number = {2},

  pages = {107-117},

  doi = {http://dx.doi.org/10.1109/TNSM.2010.06.I9P0319}

}

Morales, R. and Monnet, S. and Gupta, I. and Antoniu, G. MOve: Design and Evaluation of a Malleable Overlay for Group-Based Applications 2007 Network and Service Management, IEEE Transactions on
Vol. 4(2), pp. 107-116 
collaboration , collaborative work , communication system control , engineering profession , fault tolerance , internet , large-scale systems , peer to peer computing , resource management , scalability fault tolerant computing , groupware , peer-to-peer computing , resource allocation DOI  
Abstract: While peer-to-peer overlays allow distributed applications to scale and tolerate failures, most structured and unstructured overlays in the literature today are inflexible from the application viewpoint. The application thus has no first-class control on the overlay structure. This paper proposes the concept of an application-malleable overlay, and the design of the first malleable overlay, which we call MOve. MOve is targeted at group-based applications, e.g., collaborative applications. In MOve, the communication characteristics of the distributed application using the overlay can influence the overlay's structure itself, with the twin goals of (1) optimizing the application performance by adapting the overlay, while also (2) retaining the large scale and fault tolerance of the overlay approach. Besides neighbor list membership management, MOve also contains algorithms for resource discovery, update propagation, and churn-resistance. The emergent behavior of the implicit mechanisms used in MOve manifests as follows: when application communication is low, most overlay links keep their default configuration; however, as application communication characteristics become more evident, the overlay gracefully adapts itself to the application. We validate MOve using simulations with group sizes that are fixed, uniform, exponential and PlanetLab-based (slices), as well as churn traces and two sample management-based applications.
BibTeX:

@article{4383312,

  author = {Morales, R. and Monnet, S. and Gupta, I. and Antoniu, G.},

  title = {MOve: Design and Evaluation of a Malleable Overlay for Group-Based Applications},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2007},

  volume = {4},

  number = {2},

  pages = {107-116},

  doi = {http://dx.doi.org/10.1109/TNSM.2007.070903}

}

Mukherjee, J. and Krishnamurthy, D. and Rolia, J. Resource Contention Detection in Virtualized Environments 2015 Network and Service Management, IEEE Transactions on
Vol. 12(2), pp. 217-231 
computer architecture;degradation;measurement;probes;servers;sockets;time factors;cloud computing;data center management;software performance engineering;virtualization;data center management;software performance engineering;virtualization DOI  
Abstract: Public and private cloud computing environments employ virtualization methods to consolidate application workloads onto shared servers. Modern servers typically have one or more sockets each with one or more computing cores, a multi-level caching hierarchy, a memory subsystem, and an interconnect to the memory of other sockets. While resource management methods may manage application performance by controlling the sharing of processing time and input-output rates, there is generally no management of contention for virtualization kernel resources or for the memory hierarchy and subsystems. Yet such contention can have a significant impact on application performance. Hardware platform specific counters have been proposed for detecting such contention. We show that such counters alone are not always sufficient for detecting contention. We propose a software probe based approach for detecting contention for shared platform resources and demonstrate its effectiveness. We show that the probe imposes low overhead and is remarkably effective at detecting performance degradations due to inter-VM interference over a wide variety of workload scenarios and on two different server architectures. The probe successfully detected virtualization-induced software bottleneck and memory contention on both server architectures. Our approach supports the management of workload placement on shared servers and pools of shared servers.
BibTeX:

@article{7047843,

  author = {Mukherjee, J. and Krishnamurthy, D. and Rolia, J.},

  title = {Resource Contention Detection in Virtualized Environments},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2015},

  volume = {12},

  number = {2},

  pages = {217-231},

  doi = {http://dx.doi.org/10.1109/TNSM.2015.2407273}

}

Muntean, Gabriel-Miro and Perry, Philip and Murphy, Liam Objective and subjective evaluation of QOAS video streaming over broadband networks 2005 Network and Service Management, IEEE Transactions on
Vol. 2(1), pp. 19-28 
bandwidth , broadband communication , engineering management , ip networks , multimedia systems , multiprotocol label switching , quality of service , streaming media , telecommunication traffic , testing DOI  
Abstract: This article presents objective and subjective testing results that assess the performance of the Quality-Oriented Adaptation Scheme (QOAS) when used for high quality multimedia streaming over local broadband IP networks. Results of objective tests using a QOAS simulation model show very efficient adaptation in terms of end-user perceived quality, loss rate, and bandwidth utilization, compared to existing adaptive streaming schemes such as LDA+ and TFRCP. Subjective tests confirm these results by showing high end-user perceived quality of the QOAS under various network conditions.
BibTeX:

@article{4798298,

  author = {Muntean, Gabriel-Miro and Perry, Philip and Murphy, Liam},

  title = {Objective and subjective evaluation of QOAS video streaming over broadband networks},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2005},

  volume = {2},

  number = {1},

  pages = {19-28},

  doi = {http://dx.doi.org/10.1109/TNSM.2005.4798298}

}

Musau, Felix and Wang, Guojun and Yu, Shui and Abdullahi, Muhammad Bashir Securing Recommendations in Grouped P2P E-Commerce Trust Model 2012 Network and Service Management, IEEE Transactions on
Vol. 9(4), pp. 407-420 
peer to peer (p2p) , key generation , key management , security , trust DOI  
Abstract: In dynamic peer to peer (P2P) e-commerce, it is an important and difficult problem to promote online businesses without sacrificing the desired trust to secure transactions. In this paper, we address malicious threats in order to guarantee the secrecy and integrity of recommendations exchanged among peers in P2P e-commerce. In addition to trust, secret keys are required to be established between each peer and its neighbors. Further, we propose a key management approach, gkeying, to generate six types of keys. Our work mainly focuses on key generation for securing recommendations and ensuring the integrity of recommendations. The proposed approach, presented with a security and performance analysis, is more secure and more efficient in terms of communication cost, computation cost, storage cost, and feasibility.
BibTeX:

@article{6314477,

  author = {Musau, Felix and Wang, Guojun and Yu, Shui and Abdullahi, Muhammad Bashir},

  title = {Securing Recommendations in Grouped P2P E-Commerce Trust Model},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2012},

  volume = {9},

  number = {4},

  pages = {407-420},

  doi = {http://dx.doi.org/10.1109/TNSM.2012.091712.120269}

}

Nakauchi, K. and Shoji, Y. WiFi Network Virtualization to Control the Connectivity of a Target Service 2015 Network and Service Management, IEEE Transactions on
Vol. 12(2), pp. 308-319 
authentication;handover;ieee 802.11 standards;probes;switches;virtualization;mobile sdn;virtual wifi;wireless network virtualization;wireless network virtualization;managed wifi;mobile sdn;virtual wifi;virtual base station DOI  
Abstract: This paper proposes a WiFi network virtualization technique to control the connectivity of a target service. The packet-level delay violation ratio can be reduced even in a congested situation by provisioning dedicated base station (BS) resources (a set of dedicated BSs) to the target service and allowing only the corresponding terminals to associate with the BSs. The proposed technique is novel in that BSs are specially configured to use the same MAC address, and thus all the decisions on BS selection and handover are separated from those BSs and terminals and are put together into a centralized controller, while consistent layer-2 data paths in a backhaul network are also cooperatively configured. Simulation results show that the proposed technique can control the delay violation ratio of a target VoIP service and keep the ratio extremely low and comparable to that under IEEE 802.11e. A proof-of-concept prototype including two multi-channel virtualization-capable WiFi BSs and a BS switch is developed using off-the-shelf WiFi modules and a commercial OpenFlow switch. Experimental results show that the terminals can make a handover to a dedicated BS in less than 65 ms without any packet drop and association break, and confirm that the effect of the managed handover is limited even in a VoIP application.
BibTeX:

@article{7042282,

  author = {Nakauchi, K. and Shoji, Y.},

  title = {WiFi Network Virtualization to Control the Connectivity of a Target Service},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2015},

  volume = {12},

  number = {2},

  pages = {308-319},

  doi = {http://dx.doi.org/10.1109/TNSM.2015.2403956}

}

Nayak, T. and Neogi, A. and Kothari, R. Visualization and Analysis of System Monitoring Data using Multi-resolution Context Information 2008 Network and Service Management, IEEE Transactions on
Vol. 5(3), pp. 168 -177 
systems management, self-organizing feature map, visualization data analysis , data visualisation DOI  
Abstract: Projection of high dimensional data into a lower dimensional subspace is required for human understanding of the health of an IT infrastructure. Over the past several years, a large number of dimensionality reduction techniques have been proposed. Their direct application for visualizing system monitoring data is challenged by two factors. First, system monitoring data does not lie in a metric space. Second, system monitoring data is intrinsically "multi-resolution" in that an event may lead to cascaded events (a server going down impacts one or more applications running on the server; a network outage may impact several network dependent components). Lower dimensional representations which do not take into account the intrinsic multi-resolution nature of the monitoring data are thus limited in their utility and challenge human comprehension. In this paper, we exploit the multi-resolution nature of the monitoring data and a 1-of-n representation of event data to construct navigable multi-resolution topology preserving views for visualizing system monitoring data. We also demonstrate the efficacy of the proposed approach using data from a real data center.
BibTeX:

@article{4805133,

  author = {Nayak, T. and Neogi, A. and Kothari, R.},

  title = {Visualization and Analysis of System Monitoring Data using Multi-resolution Context Information},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2008},

  volume = {5},

  number = {3},

  pages = {168 -177},

  doi = {http://dx.doi.org/10.1109/TNSM.2009.031104}

}

Nogueira, M. and Silva, H. and Santos, A. and Pujolle, G. A Security Management Architecture for Supporting Routing Services on WANETs 2012 Network and Service Management, IEEE Transactions on
Vol. 9(2), pp. 156-168 
security management , routing , survivability , wireless ad hoc networks DOI  
Abstract: Due to the rising dependence of people on critical applications and wireless networks, a high level of reliability, security and availability is required to assure secure and reliable service operation. Wireless ad hoc networks (WANETs) experience serious security issues even when solutions employ preventive or reactive security mechanisms. In order to support both the network operations and security requirements of critical applications, we present SAMNAR, a Survivable Ad hoc and Mesh Network ARchitecture. Its goal lies in adaptively managing preventive, reactive and tolerant security mechanisms to provide essential services even under attacks, intrusions or failures. We use SAMNAR to design a path selection scheme for WANET routing. The evaluation of this path selection scheme considers scenarios using urban mesh network mobility with urban propagation models, and also random waypoint mobility with two-ray ground propagation models. Results show the survivability achieved on the routing service under different conditions and attacks.
BibTeX:

@article{6138265,

  author = {Nogueira, M. and Silva, H. and Santos, A. and Pujolle, G.},

  title = {A Security Management Architecture for Supporting Routing Services on WANETs},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2012},

  volume = {9},

  number = {2},

  pages = {156-168},

  doi = {http://dx.doi.org/10.1109/TNSM.2012.011812.100071}

}

Novotny, P. and Ko, B.J. and Wolf, A.L. On-Demand Discovery of Software Service Dependencies in MANETs 2015 Network and Service Management, IEEE Transactions on
Vol. 12(2), pp. 278-292 
accuracy;ad hoc networks;aggregates;data mining;mobile computing;monitoring;software;network and service management;enabling technologies for management;software services;wireless and mobile networks DOI  
Abstract: The dependencies among the components of service-oriented software applications hosted in a mobile ad hoc network (MANET) are difficult to determine due to the inherent loose coupling of the services and the transient communication topologies of the network. Yet understanding these dependencies is critical to making good management decisions, since dependence data underlie important analyses such as fault localization and impact analysis. Current methods for discovering dependencies, developed primarily for fixed networks, assume that dependencies change only slowly and require relatively long monitoring periods as well as substantial memory and communication resources, all of which are impractical in the MANET environment. We describe a new dynamic dependence discovery method designed specifically for this environment, yielding dynamic snapshots of dependence relationships discovered through observations of service interactions. We evaluate the performance of our method in terms of the accuracy of the discovered dependencies, and draw insights on the selection of critical parameters under various operational conditions. Although operated under more stringent conditions, our method is shown to provide results comparable to or better than existing methods.
BibTeX:

@article{7055349,

  author = {Novotny, P. and Ko, B.J. and Wolf, A.L.},

  title = {On-Demand Discovery of Software Service Dependencies in MANETs},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2015},

  volume = {12},

  number = {2},

  pages = {278-292},

  doi = {http://dx.doi.org/10.1109/TNSM.2015.2410693}

}

Oida, K. Detecting suspended video streams through variance-time analysis 2009 Network and Service Management, IEEE Transactions on
Vol. 6(1), pp. 56 -63 
suspension detection, video stream, variance-time plot, internet traffic analysis internet , telecommunication traffic , time series , video streaming DOI  
Abstract: To detect suspended video streams, an algorithm is proposed which analyzes traffic data to sense changes in variances. The objective of the algorithm is to help restart suspended streams as quickly as possible in cooperation with network administrators, mechanisms to change routes, etc. The algorithm handles traffic data of a number of streams all together to detect one or more suspended streams without using the information on the number of streams and on each stream's data rate. It works with the same parameter values for both LAN and WAN traffic streams, even if their packet interarrival-time distributions are different. Experiments with 41 live TV streams showed that suspension is detectable when the traffic rate decrease caused by the suspension is 1.6%.
BibTeX:

@article{5331281,

  author = {Oida, K.},

  title = {Detecting suspended video streams through variance-time analysis},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2009},

  volume = {6},

  number = {1},

  pages = {56 -63},

  doi = {http://dx.doi.org/10.1109/TNSM.2009.090305}

}

Paganelli, F. and Parlanti, D. A Dynamic Composition and Stubless Invocation Approach for Information-Providing Services 2013 Network and Service Management, IEEE Transactions on
Vol. 10(2), pp. 218-230 
web services;xml;planning;semantic web;service brokering;service composition;service invocation;service-oriented architecture DOI  
Abstract: The automated specification and execution of composite services are important capabilities of service-oriented systems. In practice, service invocation is performed by client components (stubs) that are generated from service descriptions at design time. Several researchers have proposed mechanisms for late binding. They all require an object representation (e.g., Java classes) of the XML data types specified in service descriptions to be generated and meaningfully integrated in the client code at design time. However, the potential of dynamic composition can only be fully exploited if supported in the invocation phase by the capability of dynamically binding to services with previously unknown interfaces. In this work, we address this limitation by proposing a way of specifying and executing composite services, without resorting to previously compiled classes that represent XML data types. Semantic and structural properties encoded in service descriptions are exploited to implement a mechanism, based on the Graphplan algorithm, for the run-time specification of composite service plans. Composite services are then executed through the stubless invocation of constituent services. Stubless invocation is achieved by exploiting structural properties of service descriptions for the run-time generation of messages.
BibTeX:

@article{6470579,

  author = {Paganelli, F. and Parlanti, D.},

  title = {A Dynamic Composition and Stubless Invocation Approach for Information-Providing Services},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2013},

  volume = {10},

  number = {2},

  pages = {218-230},

  doi = {http://dx.doi.org/10.1109/TNSM.2013.022213.120229}

}

Papapanagiotou, I. and Falkner, M. and Devetsikiotis, M. Optimal Functionality Placement for Multiplay Service Provider Architectures 2012 Network and Service Management, IEEE Transactions on
Vol. 9(3), pp. 359-372 
aggregation networks , ethernet based dsl , edge systems , metro ethernet , triple play DOI  
Abstract: The proliferation of multiplay services is creating design dilemmas for service providers, related to where certain key networking functionality should be placed. For example, service providers need to know whether to distribute more network intelligence closer to the subscriber or cluster it in a central location. In view of this, we quantify the cost differences among service provider architectures, identified based on the functionality distribution (centralized vs. distributed, clustered vs. unclustered and single vs. multi edge). For this purpose, we formulate a modular mixed-integer programming model based on a set of close-to-real-case scenarios. Given the complexity of such problems, we propose methodologies that can reduce the number of locations. Our results indicate that distributing the IP intelligence and the video replication is preferable. Moreover, deploying edge systems with faster backplane has little benefit in the aggregation network, and providers should rather invest in faster interfaces.
BibTeX:

@article{6220828,

  author = {Papapanagiotou, I. and Falkner, M. and Devetsikiotis, M.},

  title = {Optimal Functionality Placement for Multiplay Service Provider Architectures},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2012},

  volume = {9},

  number = {3},

  pages = {359-372},

  doi = {http://dx.doi.org/10.1109/TNSM.2012.061212.110032}

}

Patikirikorala, T. and Wang, L. and Colman, A. and Han, J. Differentiated Performance Management in Virtualized Environments Using Nonlinear Control 2015 Network and Service Management, IEEE Transactions on
Vol. 12(1), pp. 101-113 
control systems;feedback control;resource management;sensors;servers;time factors;virtual machine monitors;nonlinear control;performance management;virtual machine DOI  
Abstract: The efficient management of shared resources in virtualized environments has become an important issue with the advent of cloud computing. This is a challenging management task because the resources of a single physical server may have to be shared between multiple virtual machines (VMs) running applications with different performance objectives, under unpredictable and erratic workloads. A number of existing works have developed performance differentiation and resource management techniques for shared resource environments by using linear feedback control approaches. However, the dominant nonlinearities of performance differentiation schemes and virtualized environments mean that linear control techniques do not provide effective control under a wide range of operating conditions. Instead of using linear control techniques, this paper presents a new nonlinear control approach that effectively achieves differentiated performance requirements in virtualized environments through the automated provisioning of resources. By using a nonlinear block control structure called the Hammerstein and Wiener model, a nonlinear feedback control system is integrated into the physical server (hypervisor) to efficiently achieve the performance differentiation objectives. The novelty of this approach is the inclusion of a compensation framework, which reduces the impact of nonlinearities on the management system. The experiments conducted in a virtual machine environment have shown significant improvements in performance differentiation and system stability of the proposed nonlinear control approach compared to a linear control system. In addition, the simulation results demonstrate the scalability of this nonlinear approach, providing stable performance differentiation between 10 applications/VMs.
BibTeX:

@article{7021951,

  author = {Patikirikorala, T. and Wang, L. and Colman, A. and Han, J.},

  title = {Differentiated Performance Management in Virtualized Environments Using Nonlinear Control},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2015},

  volume = {12},

  number = {1},

  pages = {101-113},

  doi = {http://dx.doi.org/10.1109/TNSM.2015.2394472}

}

Perng, C. and Li, T. and Chang, R. Cloud Analytics for Capacity Planning and Instant VM Provisioning 2013 Network and Service Management, IEEE Transactions on
Vol. 10(3), pp. 312 - 325 
cloud computing;capacity planning;cloud analytics;data mining;instant provisioning DOI  
Abstract: The popularity of cloud service spurs the increasing demands of virtual resources to the service vendors. Along with the promising business opportunities, it also brings new technique challenges such as effective capacity planning and instant cloud resource provisioning. In this paper, we describe our research efforts on improving the service quality for the capacity planning and instant cloud resource provisioning problem. We first formulate both of the two problems as a generic cost-sensitive prediction problem. Then, considering the highly dynamic environment of cloud, we propose an asymmetric and heterogeneous measure to quantify the prediction error. Finally, we design an ensemble prediction mechanism by combining the prediction power of a set of prediction techniques based on the proposed measure. To evaluate the effectiveness of our proposed solution, we design and implement an integrated prototype system to help improve the service quality of the cloud. Our system considers many practical situations of the cloud system, and is able to dynamically adapt to the changing environment. A series of experiments on the IBM Smart Cloud Enterprise (SCE) trace data demonstrate that our method can significantly improve the service quality by reducing the resource provisioning time while maintaining a low cloud overhead.
BibTeX:

@article{Perng2013,

  author = {Perng, C. and Li, T. and Chang, R.},

  title = {Cloud Analytics for Capacity Planning and Instant VM Provisioning},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2013},

  volume = {10},

  number = {3},

  pages = {312 - 325},

  doi = {http://dx.doi.org/10.1109/TNSM.2013.051913.120278}

}

Pezaros, D.P. and Hoerdt, M. and Hutchison, D. Low-Overhead End-to-End Performance Measurement for Next Generation Networks 2011 Network and Service Management, IEEE Transactions on
Vol. 8(1), pp. 1 -14 
computer instrumentation , computer networks , computer performance , network measurement , next generation networking ip networks , internet , computer network performance evaluation , telecommunication traffic DOI  
Abstract: Internet performance measurement is commonly perceived as a high-cost control-plane activity and until now it has tended to be implemented on top of the network's forwarding operation. Consequently, measurement mechanisms have often had to trade relevance and accuracy over non-intrusiveness and cost effectiveness. In this paper, we present the software implementation of an in-line measurement mechanism that uses native structures of the Internet Protocol version 6 (IPv6) stack to piggyback measurement information on data-carrying traffic as this is routed between two points in the network. We carefully examine the overhead associated with both the measurement process and the measurement data, and we demonstrate that direct two-point measurement has minimal impact on throughput and on system processing load. The results of this paper show that adequately engineered measurement mechanisms that exploit selective processing do not compromise the network's forwarding efficiency, and can be deployed in an always-on manner to reveal the true performance of network traffic over small timescales.
BibTeX:

@article{5741009,

  author = {Pezaros, D.P. and Hoerdt, M. and Hutchison, D.},

  title = {Low-Overhead End-to-End Performance Measurement for Next Generation Networks},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2011},

  volume = {8},

  number = {1},

  pages = {1 -14},

  doi = {http://dx.doi.org/10.1109/TNSM.2011.032311.090369}

}

Polito, S.G. and Zaghloul, S. and Chamania, M. and Jukan, A. Inter-Domain Path Provisioning with Security Features: Architecture and Signaling Performance 2011 Network and Service Management, IEEE Transactions on
Vol. 8(3), pp. 219 -233 
aaa , diameter , pce , rsvp , connection-oriented networks , inter-domain routing , peering agreements authorisation , computer network security DOI  
Abstract: Significant research and standardization efforts are underway to enable automated computation and reservation of connection-oriented paths (circuits) across multiple domains. In the absence of a secure authentication and authorization mechanism, however, carriers continue to provision connections manually, which leads to large setup delays and increases possibility of configuration errors. Carriers also lack mechanisms to meter connection quality during the service lifetime and typically do not exchange accounting information for established connections for auditing and billing purposes. In this paper, we address the challenge for automatic multi-domain path provisioning with authentication, authorization and accounting (AAA) capabilities in carrier-grade transport networks. The designed solution secures computation and reservation for path provisioning and also leverages a standard accounting model which incorporates the accounting signaling for an inter-domain connection. In order to evaluate the impact of the proposed framework on signaling performance, we also provide an analytical framework scalable to large inter-domain network scenarios. We verify the analysis using event-driven simulations and then use this analytical model to quantify the feasibility of our model in terms of signaling load and signaling delay for a wide range of network scenarios.
BibTeX:

@article{6009142,

  author = {Polito, S.G. and Zaghloul, S. and Chamania, M. and Jukan, A.},

  title = {Inter-Domain Path Provisioning with Security Features: Architecture and Signaling Performance},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2011},

  volume = {8},

  number = {3},

  pages = {219 -233},

  doi = {http://dx.doi.org/10.1109/TCOMM.2011.072611.100047}

}

Polo, J. and Becerra, Y. and Carrera, D. and Steinder, M. and Whalley, I. and Torres, J. and Ayguade, E. Deadline-Based MapReduce Workload Management 2013 Network and Service Management, IEEE Transactions on
Vol. 10(2), pp. 231-244 
mapreduce;performance management;task scheduling DOI  
Abstract: This paper presents a scheduling technique for multi-job MapReduce workloads that is able to dynamically build performance models of the executing workloads, and then use these models for scheduling purposes. This ability is leveraged to adaptively manage workload performance while observing and taking advantage of the particulars of the execution environment of modern data analytics applications, such as hardware heterogeneity and distributed storage. The technique targets a highly dynamic environment in which new jobs can be submitted at any time, and in which MapReduce workloads share physical resources with other workloads. Thus the actual amount of resources available for applications can vary over time. Beyond the formulation of the problem and the description of the algorithm and technique, a working prototype (called Adaptive Scheduler) has been implemented. Using the prototype and medium-sized clusters (of the order of tens of nodes), the following aspects have been studied separately: the scheduler's ability to meet high-level performance goals guided only by user-defined completion time goals; the scheduler's ability to favor data-locality in the scheduling algorithm; and the scheduler's ability to deal with hardware heterogeneity, which introduces hardware affinity and relative performance characterization for those applications that can benefit from executing on specialized processors.
BibTeX:

@article{6407138,

  author = {Polo, J. and Becerra, Y. and Carrera, D. and Steinder, M. and Whalley, I. and Torres, J. and Ayguade, E.},

  title = {Deadline-Based MapReduce Workload Management},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2013},

  volume = {10},

  number = {2},

  pages = {231-244},

  doi = {http://dx.doi.org/10.1109/TNSM.2012.122112.110163}

}

Pongthawornkamol, T. and Gupta, I. AVCast: New Approaches for Implementing Generic Availability-Dependent Reliability Predicates for Multicast Receivers 2007 Network and Service Management, IEEE Transactions on
Vol. 4(2), pp. 117 -126 
algorithm design and analysis , availability , large-scale systems , monitoring , multicast algorithms , multicast protocols , peer to peer computing , publish-subscribe , streaming media , switches message passing , multicast communication , protocols , radio receivers , reliability DOI  
Abstract: Today's large-scale distributed systems consist of collections of nodes, each of which has its own availability characteristics - a phenomenon sometimes called churn. This availability variation across nodes is often a hindrance to achieving reliability and performance for distributed applications such as multicast. This paper looks into utilizing and leveraging availability information in order to implement arbitrary predicates that specify availability-dependent message reliability for multicast receivers. An application (e.g., a publish-subscribe system) may want to scale the multicast message reliability at each receiver according to that receiver's availability (in terms of the fraction of time that receiver is online) - different options are that the reliability is independent of the availability, proportional to it, or an arbitrary function of it, etc. We propose several gossip- based algorithms to support an arbitrary class of such predicates. These techniques rely on each node's availability being monitored in a distributed manner by a small group of other nodes in such a way that the monitoring load is evenly distributed in the system. Our techniques are light-weight, scalable, and are space- and time- efficient. We analyze our algorithms and evaluate them experimentally by injecting availability traces collected from real peer-to-peer systems.
BibTeX:

@article{4383313,

  author = {Pongthawornkamol, T. and Gupta, I.},

  title = {AVCast: New Approaches for Implementing Generic Availability-Dependent Reliability Predicates for Multicast Receivers},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2007},

  volume = {4},

  number = {2},

  pages = {117 -126},

  doi = {http://dx.doi.org/10.1109/TNSM.2007.070902}

}

Pras, Aiko and Drevers, Thomas and van de Meent, Remco and Quartel, Dick Comparing the performance of SNMP and Web services-based management 2004 Network and Service Management, IEEE Transactions on
Vol. 1(2), pp. 72 -82 
ber , cpu time , snmp , web services , xml , bandwidth usage , compression , iftable , memory consumption , performance , round trip delay DOI  
Abstract: This paper compares the performance of Web services based network monitoring to traditional, SNMP based, monitoring. The study focuses on the ifTable, and investigates performance as a function of the number of retrieved objects. The following aspects are examined: bandwidth usage, CPU time, memory consumption and round trip delay. For our study several prototypes of Web services based agents were implemented; these prototypes can retrieve single ifTable elements, ifTable rows, ifTable columns or the entire ifTable. This paper presents a generic formula to calculate SNMP's bandwidth requirements; the bandwidth consumption of our prototypes was compared to that formula. The CPU time, memory consumption and round trip delay of our prototypes were compared to Net-SNMP, as well as several other SNMP agents. Our measurements show that SNMP is more efficient in cases where only a single object is retrieved; for larger numbers of objects Web services may be more efficient. Our study also shows that, if performance is the issue, the choice between BER (SNMP) and XML (Web services) encoding is generally not the determining factor; other choices can have a stronger impact on performance.
BibTeX:

@article{4798292,

  author = {Pras, Aiko and Drevers, Thomas and van de Meent, Remco and Quartel, Dick},

  title = {Comparing the performance of SNMP and Web services-based management},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2004},

  volume = {1},

  number = {2},

  pages = {72 -82},

  doi = {http://dx.doi.org/10.1109/TNSM.2004.4798292}

}

Prieto, A.G. and Stadler, R. A-GAP: An Adaptive Protocol for Continuous Network Monitoring with Accuracy Objectives 2007 Network and Service Management, IEEE Transactions on
Vol. 4(1), pp. 2 -12 
computer networks , counting circuits , estimation error , filters , monitoring , network topology , protocols , robustness , scalability , stochastic processes error statistics , estimation theory , protocols , stochastic processes , telecommunication network management , telecommunication network reliability , trees (mathematics) DOI  
Abstract: We present A-GAP, a novel protocol for continuous monitoring of network state variables, which aims at achieving a given monitoring accuracy with minimal overhead. Network state variables are computed from device counters using aggregation functions, such as SUM, AVERAGE and MAX. The accuracy objective is expressed as the average estimation error. A-GAP is decentralized and asynchronous to achieve robustness and scalability. It executes on an overlay that interconnects management processes on the devices. On this overlay, the protocol maintains a spanning tree and updates the network state variables through incremental aggregation. Based on a stochastic model, it dynamically configures local filters that control whether an update is sent towards the root of the tree. We evaluate A-GAP through simulation using real traces and two different types of topologies of up to 650 nodes. The results show that we can effectively control the trade-off between accuracy and protocol overhead, and that the overhead can be reduced by almost two orders of magnitude when allowing for small errors. The protocol quickly adapts to a node failure and exhibits short spikes in the estimation error. Lastly, it can provide an accurate estimate of the error distribution in real-time.
BibTeX:

@article{4275030,

  author = {Prieto, A.G. and Stadler, R.},

  title = {A-GAP: An Adaptive Protocol for Continuous Network Monitoring with Accuracy Objectives},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2007},

  volume = {4},

  number = {1},

  pages = {2 -12},

  doi = {http://dx.doi.org/10.1109/TNSM.2007.030101}

}

Qi, Zhengwei and Dong, Haoliang and Sun, Wei and Dong, Yaozu and Guan, Haibing Multi-Granularity Memory Mirroring via Binary Translation in Cloud Environments 2014 Network and Service Management, IEEE Transactions on
Vol. 11(1), pp. 36-45 
application level memory mirroring;multi-granularity high availability;virtualization DOI  
Abstract: As the size of DRAM memory grows in clusters, memory errors are common. Current memory availability strategies mostly focus on memory backup and error recovery. Hardware solutions like mirror memory need costly peripheral equipment, while existing software approaches reduce the expense but are limited by high overhead in practical usage. Moreover, in cloud environments, containers such as LXC can now be used as process and application-level virtualization to run multiple isolated systems on a single host. In this paper, we present a novel system called Memvisor to provide high availability memory mirroring. It is a software approach achieving flexible multi-granularity memory mirroring based on virtualization and binary translation. We can flexibly set memory areas to be mirrored or not, from process level to the whole of user mode applications. Then, all memory write instructions are duplicated. Data written to memory are synchronized to backup space at the instruction level. If memory failures happen, Memvisor will recover the data from the backup space. Compared with traditional software approaches, the instruction level synchronization lowers the probability of data loss and reduces the backup overhead. The results show that Memvisor outperforms the state-of-the-art software approaches even in the worst case.
BibTeX:

@article{6805343,

  author = {Qi, Zhengwei and Dong, Haoliang and Sun, Wei and Dong, Yaozu and Guan, Haibing},

  title = {Multi-Granularity Memory Mirroring via Binary Translation in Cloud Environments},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2014},

  volume = {11},

  number = {1},

  pages = {36-45},

  doi = {http://dx.doi.org/10.1109/TNSM.2014.031714.130415}

}

Haiyang Qian and Dispensa, S. and Medhi, D. Balancing Request Denial Probability and Latency in an Agent-Based VPN Architecture 2010 Network and Service Management, IEEE Transactions on
Vol. 7(4), pp. 282 -295 
poisson and non-poisson arrival , request denial probability , agent-based vpn , finite and infinite population models , latency bandwidth allocation , internetworking , multi-agent systems , network servers , probability , telecommunication traffic , virtual private networks DOI  
Abstract: Agent-based virtual private network architecture (ABVA) refers to the environment where a third-party provider runs and administers remote access virtual private network (VPN) service for organizations that do not want to maintain their own in-house VPN servers. In this paper, we consider the problem of optimally connecting users of an organization to VPN server locations in an ABVA environment so that request denial probability and latency are balanced. A user request needs a certain bandwidth between the user and the VPN server. The VPN server may deny requests when the bandwidth is insufficient (capacity limitation). At the same time, the latency perceived by a user from its current location to a VPN server is an important consideration. We present a number of schemes regarding how VPN servers are to be selected and the number of servers to be tried so that request denial probability is minimized without unduly affecting latency. These schemes are studied on a number of different topologies. For our study, we consider Poisson and non-Poisson arrival of requests under both finite and infinite population models to understand the impact on the entire system. We found that the arrival processes have a significant and consistent impact on the request denial probability, and that the impact on the latency is dependent on the traffic load in the infinite model. In the finite model, arrival processes have an inconsistent impact on the request denial probability. As to the latency in the finite model, arrivals with a squared coefficient of variation less than one consistently yield the largest latency, followed by the Poisson case, then the case where the squared coefficient of variation is more than one. Finally, a strength of this work is the comparison of infinite and finite models; we found that a mismatch between the infinite and the finite model is dependent both on the number of users in the system and on the load.
BibTeX:

@article{5668983,

  author = {Haiyang Qian and Dispensa, S. and Medhi, D.},

  title = {Balancing Request Denial Probability and Latency in an Agent-Based VPN Architecture},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2010},

  volume = {7},

  number = {4},

  pages = {282 -295},

  doi = {http://dx.doi.org/10.1109/TNSM.2010.1012.100103}

}

Qian, H. and Li, F. and Ravindran, R. and Medhi, D. Optimal Resource Provisioning and the Impact of Energy-Aware Load Aggregation for Dynamic Temporal Workloads in Data Centers 2014 Network and Service Management, IEEE Transactions on
Vol. 11(4), pp. 486-503 
energy consumption;indexes;linear programming;planning;power demand;servers;switches;data center;energy-aware;server cost optimization;multi-period planning model;workload aggregation DOI  
Abstract: An important goal of data center providers is to minimize their operational cost, which is reflected through the wear-and-tear cost and the energy consumption cost. In this paper, we present optimization formulations to minimize the cost of ownership in terms of server energy consumption and server wear-and-tear cost under three different data center server setups (homogeneous, heterogeneous, and hybrid hetero-homogeneous clusters) for dynamic temporal workloads. Our studies show that the homogeneous model takes significantly less computational time than the heterogeneous model (by an order of magnitude). To compute optimal configurations in near real time for large-scale data centers, we propose two modes for using our models: aggregation by maximum (preserves workload deadline) and aggregation by mean (relaxes workload deadline). In addition, we propose two aggregation methods for use in each of the two modes: static (periodic) aggregation and dynamic (aperiodic) aggregation. We found that in the aggregation by maximum mode, dynamic aggregation resulted in cost savings of up to approximately 18% over static aggregation. In the aggregation by mean mode, dynamic aggregation reduced workload rearrangement by up to approximately 50% compared with static aggregation.
BibTeX:

@article{6975205,

  author = {Qian, H. and Li, F. and Ravindran, R. and Medhi, D.},

  title = {Optimal Resource Provisioning and the Impact of Energy-Aware Load Aggregation for Dynamic Temporal Workloads in Data Centers},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2014},

  volume = {11},

  number = {4},

  pages = {486-503},

  doi = {http://dx.doi.org/10.1109/TNSM.2014.2378515}

}

Raad, P. and Secci, S. and Phung, D. and Cianfrani, A. and Gallard, P. and Pujolle, G. Achieving Sub-Second Downtimes in Large-Scale Virtual Machine Migrations with LISP 2014 Network and Service Management, IEEE Transactions on
Vol. 11(2), pp. 133-143 
encapsulation;ip networks;internet;protocols;routing;servers;virtual machine monitors;locator/identifier separation protocol (lisp);virtual machine mobility;cloud networking DOI  
Abstract: Nowadays, the rapid growth of Cloud computing services is stressing the network communication infrastructure in terms of resiliency and programmability. This evolution reveals missing blocks of the current Internet Protocol architecture, in particular in terms of virtual machine mobility management for addressing and locator-identifier mapping. In this paper, we propose some changes to the Locator/Identifier Separation Protocol (LISP) to cope with this gap. We define novel control-plane functions and evaluate them exhaustively in the worldwide public LISP testbed, involving five LISP sites separated by distances from a few hundred to many thousands of kilometers. Our results show that we can guarantee service downtime upon live virtual machine migration of less than a second across American, Asian and European LISP sites, and down to 300 ms within Europe, outperforming standard LISP and legacy triangular routing approaches in terms of service downtime, as a function of datacenter-datacenter and client-datacenter distances.
BibTeX:

@article{6784163,

  author = {Raad, P. and Secci, S. and Phung, D. and Cianfrani, A. and Gallard, P. and Pujolle, G.},

  title = {Achieving Sub-Second Downtimes in Large-Scale Virtual Machine Migrations with LISP},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2014},

  volume = {11},

  number = {2},

  pages = {133-143},

  doi = {http://dx.doi.org/10.1109/TNSM.2014.012114.130517}

}

Racz, P. and Stiller, B. IP flow accounting application for diameter 2008 Network and Service Management, IEEE Transactions on
Vol. 5(4), pp. 239 -246 
flow accounting, diameter, ipfix ip networks , authorisation , message authentication , open systems , protocols DOI  
Abstract: Flow accounting in IP networks is used by network operators for various purposes, such as network management, traffic management, or traffic analysis. In order to integrate flow accounting into an Authentication, Authorization, and Accounting (AAA) infrastructure, this work designs and evaluates an accounting extension to the Diameter protocol - termed Diameter IP Flow Accounting (IPFA) application - in support of the efficient transfer of IP flow records. The new Diameter IPFA application has been implemented as a prototype and its evaluation shows that it achieves a better performance for the transfer of IP flow records than the traditional Diameter accounting approach.
BibTeX:

@article{5010447,

  author = {Racz, P. and Stiller, B.},

  title = {IP flow accounting application for diameter},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2008},

  volume = {5},

  number = {4},

  pages = {239 -246},

  doi = {http://dx.doi.org/10.1109/TNSM.2009.041105}

}

Rahman, M. and Boutaba, R. SVNE: Survivable Virtual Network Embedding Algorithms for Network Virtualization 2013 Network and Service Management, IEEE Transactions on
Vol. 10(2), pp. 105-118 
network virtualization;network survivability and resilience;virtual network embedding DOI  
Abstract: Network virtualization can offer more flexibility and better manageability for the future Internet by allowing multiple heterogeneous virtual networks (VN) to coexist on a shared infrastructure provider (InP) network. A major challenge in this respect is the VN embedding problem that deals with the efficient mapping of virtual resources on InP network resources. Previous research focused on heuristic algorithms for the VN embedding problem assuming that the InP network remains operational at all times. In this paper, we remove this assumption by formulating the survivable virtual network embedding (SVNE) problem. We then develop a proactive and a hybrid policy heuristic to solve it, and a baseline policy heuristic for comparison. The hybrid policy is based on a fast re-routing strategy and utilizes a pre-reserved quota for backup on each physical link. Our evaluation results show that our proposed heuristics for SVNE outperform the baseline heuristic in terms of long term business profit for the InP, acceptance ratio, bandwidth efficiency, and response time.
BibTeX:

@article{6449268,

  author = {Rahman, M. and Boutaba, R.},

  title = {SVNE: Survivable Virtual Network Embedding Algorithms for Network Virtualization},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2013},

  volume = {10},

  number = {2},

  pages = {105-118},

  doi = {http://dx.doi.org/10.1109/TNSM.2013.013013.110202}

}

Rao, Jia and Wei, Yudi and Gong, Jiayu and Xu, Cheng-Zhong QoS Guarantees and Service Differentiation for Dynamic Cloud Applications 2013 Network and Service Management, IEEE Transactions on
Vol. 10(1), pp. 43-55 
fuzzy control , cloud computing , quality-of-service , resource management DOI  
Abstract: Cloud elasticity allows dynamic resource provisioning in concert with actual application demands. Feedback control approaches have been applied with success to resource allocation in physical servers. However, cloud dynamics make the design of an accurate and stable resource controller challenging, especially when application-level performance is considered as the measured output. Application-level performance is highly dependent on the characteristics of the workload and sensitive to cloud dynamics. To address these challenges, we extend a self-tuning fuzzy control (STFC) approach, originally developed for response time assurance in web servers, to resource allocation in virtualized environments. We introduce mechanisms for adaptive output amplification and flexible rule selection in the STFC approach for better adaptability and stability. Based on the STFC, we further design a two-layer QoS provisioning framework, DynaQoS, that supports adaptive multi-objective resource allocation and service differentiation. We implement a prototype of DynaQoS on a Xen-based cloud testbed. Experimental results on representative server workloads show that STFC outperforms popular controllers such as Kalman filter, ARMA, and Adaptive PI in the control of CPU, memory, and disk bandwidth resources under both static and dynamic workloads. Further results with multiple control objectives and service classes demonstrate the effectiveness of DynaQoS in performance-power control and service differentiation.
BibTeX:

@article{6298750,

  author = {Rao, Jia and Wei, Yudi and Gong, Jiayu and Xu, Cheng-Zhong},

  title = {QoS Guarantees and Service Differentiation for Dynamic Cloud Applications},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2013},

  volume = {10},

  number = {1},

  pages = {43-55},

  doi = {http://dx.doi.org/10.1109/TNSM.2012.091012.120238}

}

Rao, Sudarshan Operational Fault Detection in cellular wireless base-stations 2006 Network and Service Management, IEEE Transactions on
Vol. 3(2), pp. 1-11 
statistical fault detection , base-stations , cellular , learning , static and adaptive thresholds , training , wireless DOI  
Abstract: The goal of this work is to improve availability of operational base-stations in a wireless mobile network through non-intrusive fault detection methods. Since revenue is generated only when actual customer calls are processed, we develop a scheme to minimize revenue loss by monitoring real-time mobile user call processing activity. The mobile user call load profile experienced by a base-station displays a highly non-stationary temporal behavior with time-of-day, day-of-the-week and time-of-year variations. In addition, the geographic location also impacts the traffic profile, making each base-station have its own unique traffic patterns. A hierarchical base-station fault monitoring and detection scheme has been implemented in an IS-95 CDMA cellular network that can detect faults at the base-station, sector, carrier, and channel levels. A statistical hypothesis test framework, based on a combination of parametric, semi-parametric and non-parametric test statistics, is defined for determining faults. The fault or alarm thresholds are determined by learning expected deviations during a training phase. Additionally, fault thresholds have to adapt to spatial and temporal mobile traffic patterns that slowly change with seasonal traffic drifts over time and increasing penetration of mobile user density. Feedback mechanisms are provided for threshold adaptation and self-management, which includes automatic recovery actions and software reconfiguration. We call this method Operational Fault Detection (OFD). We describe the operation of a few select features from a large family of OFD features in base stations, summarize the algorithms and their performance, and comment on future work.
BibTeX:

@article{4798311,

  author = {Rao, Sudarshan},

  title = {Operational Fault Detection in cellular wireless base-stations},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2006},

  volume = {3},

  number = {2},

  pages = {1-11},

  doi = {http://dx.doi.org/10.1109/TNSM.2006.4798311}

}

Reali, G. and Monacelli, L. Definition and performance evaluation of a fault localization technique for an NGN IMS network 2009 Network and Service Management, IEEE Transactions on
Vol. 6(2), pp. 122-136 
codebook, fault localization, network management, performance analysis internet telephony , computer network management , fault location , multimedia communication , optimisation DOI  
Abstract: Fault Localization (FL) is a critical task for operators in the context of the e-TOM (enhanced Telecom Operations Map) assurance process, in order to reduce network maintenance costs and improve availability, reliability, and performance of network services. This paper investigates, from a practical perspective, the use of a well-known FL technique, named the codebook technique, for the IMS control layer of a real Next Generation Network, deploying wireline VoIP and advanced communication services. Moreover, we propose some heuristics to generate optimal codebooks, i.e., to find the minimum set of symptoms (alarms) to be monitored in order to obtain the desired level of robustness to spurious or missing alarms and modelling errors in the root cause detection, and we evaluate their performance through extensive simulations. Finally, we provide a list of some practical Key Performance Indicators, the value of which is compared against specific thresholds. When a threshold is exceeded, an alarm is generated and used by the FL processing.
BibTeX:

@article{5374832,

  author = {Reali, G. and Monacelli, L.},

  title = {Definition and performance evaluation of a fault localization technique for an NGN IMS network},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2009},

  volume = {6},

  number = {2},

  pages = {122-136},

  doi = {http://dx.doi.org/10.1109/TNSM.2009.090605}

}

Reynolds, M. Brent and Hulce, Don R. and Hopkinson, Kenneth M. and Oxley, Mark E. and Mullins, Barry E. A Bin Packing Heuristic for On-Line Service Placement and Performance Control 2013 Network and Service Management, IEEE Transactions on
Vol. 10(3), pp. 326-339 
configuration control;control theory;service systems optimization;web services management DOI  
Abstract: The ever-increasing size and complexity of cloud computing, data centers, virtualization, web services, and other forms of distributed computing make automated and effective service management increasingly important. This article treats the service placement problem as a novel generalization of the on-line vector packing problem. This generalization of the service placement problem does not require a priori knowledge of the service resource profiles, allows for resource profiles to change over time, and allows services to be moved once placed on a server. An on-line self-organizing model profiles resource supplies and demands, arranging services in a placement based on their resulting quality rating. A policy-driven asymmetric matrix norm quantifies the quality of the placement, allowing for administrative preferences regarding service performance versus service inclusion. Variations in service resource usage profiles cause changes in their assigned placement quality, forcing new, better server placements to be found. Because some placements perform better, a proportional-integral-derivative controller for performance feedback adjusts each service's actual profile according to its individual response times. This large-scale system autonomically organizes placement of services in response to changes in demand and network disruptions. This article presents theorems that demonstrate the theoretical basis for the model. The article includes empirical results from the implementation of this model in a self-organizing testbed of web servers and services.
BibTeX:

@article{6599024,

  author = {Reynolds, M. Brent and Hulce, Don R. and Hopkinson, Kenneth M. and Oxley, Mark E. and Mullins, Barry E.},

  title = {A Bin Packing Heuristic for On-Line Service Placement and Performance Control},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2013},

  volume = {10},

  number = {3},

  pages = {326-339},

  doi = {http://dx.doi.org/10.1109/TNSM.2013.13.120334}

}
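The on-line vector packing view above can be illustrated with a plain first-fit heuristic: each server tracks per-resource headroom, and a service is placed on the first server whose headroom covers every component of its demand vector. This is a minimal sketch of the general technique, not the authors' quality-rated placement model; the server capacities and the two-resource (CPU, RAM) demand vectors are invented for illustration.

```python
def first_fit_place(servers, demand):
    """Place a service with multi-resource demand vector `demand` on the
    first server whose remaining capacity covers every component
    (first-fit heuristic for on-line vector packing)."""
    for idx, free in enumerate(servers):
        if all(f >= d for f, d in zip(free, demand)):
            for i, d in enumerate(demand):
                free[i] -= d        # reserve the resources on that server
            return idx
    return None                      # no server can hold the demand vector

# two servers with (CPU, RAM) headroom; demands are hypothetical
servers = [[4.0, 8.0], [8.0, 16.0]]
print(first_fit_place(servers, [2.0, 4.0]))  # 0: fits on the first server
print(first_fit_place(servers, [3.0, 4.0]))  # 1: first server now too small
```

The paper goes further by letting profiles drift over time and re-scoring placements with a quality norm, which first-fit alone does not capture.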

Ricciato, F. and Hasenleithner, E. and Romirer-Maierhofer, P. Traffic analysis at short time-scales: an empirical case study from a 3G cellular network 2008 Network and Service Management, IEEE Transactions on
Vol. 5(1), pp. 11-21 
aggregates , availability , cellular networks , delay effects , ip networks , land mobile radio cellular systems , microscopy , pattern analysis , statistics , telecommunication traffic 3g mobile communication , cellular radio , statistical analysis , synchronisation , telecommunication network reliability , telecommunication traffic DOI  
Abstract: The availability of synchronized packet-level traces captured at different links allows the extraction of one-way delays for the network section in between. Delay statistics can be used as quality indicators to validate the health of the network and to detect global performance drifts and/or localized problems. Since packet delays depend not only on the network status but also on the arriving traffic rate, the delay analysis must be coupled with the analysis of the traffic patterns at short time scales. In this work we report on the traffic and delay patterns observed at short timescales in a 3G cellular mobile network. We show that the aggregate traffic rate exhibits large impulses and investigate their causes. Specifically, we find that high-rate sequential scanners represent a common source of traffic impulses, and identify the potential consequences of such traffic for the underlying network. This case study demonstrates that the microscopic analysis of delay and traffic patterns at short time-scales can contribute effectively to the task of troubleshooting IP networks. This is particularly important in the context of 3G cellular networks given their complexity and relatively recent deployment.
BibTeX:

@article{4570772,

  author = {Ricciato, F. and Hasenleithner, E. and Romirer-Maierhofer, P.},

  title = {Traffic analysis at short time-scales: an empirical case study from a 3G cellular network},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2008},

  volume = {5},

  number = {1},

  pages = {11-21},

  doi = {http://dx.doi.org/10.1109/TNSM.2008.080102}

}

Riggio, R. and Marina, M.K. and Schulz-Zander, J. and Kuklinski, S. and Rasheed, T. Programming Abstractions for Software-Defined Wireless Networks 2015 Network and Service Management, IEEE Transactions on
Vol. 12(2), pp. 146-162 
ieee 802.11 standards;interference;ports (computers);programming;resource management;wireless networks;network management;programming abstractions;wlans;software-defined wireless networks DOI  
Abstract: Software-Defined Networking (SDN) has received, in recent years, significant interest from the academic and industrial communities alike. The decoupled control and data planes found in an SDN allow for logically centralized intelligence in the control plane and generalized network hardware in the data plane. Although the current SDN ecosystem provides rich support for wired packet-switched networks, the same cannot be said for wireless networks, where specific radio data-plane abstractions, controllers, and programming primitives have yet to be established. In this work, we present a set of programming abstractions modeling the fundamental aspects of a wireless network, namely state management, resource provisioning, network monitoring, and network reconfiguration. The proposed abstractions hide the implementation details of the underlying wireless technology, providing programmers with expressive tools to control the state of the network. We also present a Software-Defined Radio Access Network Controller for Enterprise WLANs and a Python-based Software Development Kit implementing the proposed abstractions. Finally, we experimentally evaluate the usefulness, efficiency, and flexibility of the platform over a real 802.11-based WLAN.
BibTeX:

@article{7072550,

  author = {Riggio, R. and Marina, M.K. and Schulz-Zander, J. and Kuklinski, S. and Rasheed, T.},

  title = {Programming Abstractions for Software-Defined Wireless Networks},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2015},

  volume = {12},

  number = {2},

  pages = {146-162},

  doi = {http://dx.doi.org/10.1109/TNSM.2015.2417772}

}

Rizk, A. and Fidler, M. On Multiplexing Models for Independent Traffic Flows in Single- and Multi-Node Networks 2013 Network and Service Management, IEEE Transactions on
Vol. 10(1), pp. 15-28 
ebb , statistical network calculus , effective bandwidth , statistical multiplexing DOI  
Abstract: In packet switched networks, statistical multiplexing of independent variable bit rate flows achieves significant resource savings, i.e., N flows require considerably less than N times the resources needed for one flow. In this work, we explore statistical multiplexing using methods from the current stochastic network calculus, where we compare the accuracy of different analytical approaches. While these approaches are known to provide identical results for a single flow, we find significant differences if several independent flows are multiplexed. Recent results on the concatenation of nodes along a network path allow us to investigate both single- as well as multi-node networks with cross traffic. The analysis enables us to distinguish different independence assumptions between traffic flows at a single node as well as between cross traffic flows at consecutive nodes of a network path. We contribute insights into the scaling of end-to-end delay bounds in the number of nodes n of a network path under statistical independence. Our work is complemented by numerical applications, e.g., on access multiplexer dimensioning and traffic trunk management.
BibTeX:

@article{6298751,

  author = {Rizk, A. and Fidler, M.},

  title = {On Multiplexing Models for Independent Traffic Flows in Single- and Multi-Node Networks},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2013},

  volume = {10},

  number = {1},

  pages = {15-28},

  doi = {http://dx.doi.org/10.1109/TNSM.2012.091012.120234}

}

Robertson, G. and Nelakuditi, S. Handling Multiple Failures in IP Networks through Localized On-Demand Link State Routing 2012 Network and Service Management, IEEE Transactions on
Vol. 9(3), pp. 293-305 
fast reroute , failure resilience , local rerouting DOI  
Abstract: It has been observed that transient failures are fairly common in IP backbone networks and there have been several proposals based on local rerouting to provide high network availability despite failures. While most of these proposals are effective in handling single failures, they either cause loops or drop packets in the case of multiple independent failures. To ensure forwarding continuity even with multiple failures, we propose Localized On-demand Link State (LOLS) routing. Under LOLS, each packet carries a blacklist, which is a minimal set of failed links encountered along its path, and the next hop is determined by excluding the blacklisted links. We show that the blacklist can be reset when the packet makes forward progress towards the destination and hence can be encoded in a few bits. Furthermore, blacklist-based forwarding entries at a router can be precomputed for a given set of failures requiring protection. While the LOLS approach is generic, this paper describes how it can be applied to ensure forwarding to all reachable destinations in case of any two link or node failures. Our evaluation of this failure scenario based on various real network topologies reveals that LOLS needs 6 bits in the worst case to convey the blacklist information. We argue that this overhead is acceptable considering that LOLS routing deviates from the optimal path by a small stretch only while routing around failures.
BibTeX:

@article{6208786,

  author = {Robertson, G. and Nelakuditi, S.},

  title = {Handling Multiple Failures in IP Networks through Localized On-Demand Link State Routing},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2012},

  volume = {9},

  number = {3},

  pages = {293-305},

  doi = {http://dx.doi.org/10.1109/TNSM.2012.12.110172}

}
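The blacklist idea in the LOLS abstract above can be sketched as hop-by-hop shortest-path forwarding that excludes the failed links recorded in the packet header. The sketch below is a simplification under assumed hop-count metrics and an invented four-node topology; it omits the paper's blacklist reset on forward progress and its compact few-bit encoding.

```python
import heapq

def dijkstra(adj, dst, excluded=frozenset()):
    """Hop-count distance from every node to dst, ignoring excluded links."""
    dist = {dst: 0}
    pq = [(0, dst)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v in adj[u]:
            if frozenset((u, v)) in excluded:
                continue
            if d + 1 < dist.get(v, float("inf")):
                dist[v] = d + 1
                heapq.heappush(pq, (d + 1, v))
    return dist

def lols_forward(adj, src, dst, failed, max_hops=32):
    """Deliver a packet hop by hop; the packet carries a blacklist of the
    failed links met on its way (LOLS additionally resets the blacklist
    on forward progress to keep it small -- omitted here for brevity)."""
    node, blacklist, path = src, set(), [src]
    for _ in range(max_hops):
        if node == dst:
            return path
        # a router knows only its own incident links that are down
        blacklist |= {frozenset((node, v)) for v in adj[node]
                      if frozenset((node, v)) in failed}
        dist = dijkstra(adj, dst, frozenset(blacklist))
        usable = [v for v in adj[node]
                  if frozenset((node, v)) not in blacklist]
        if not usable:
            return None  # destination unreachable from this router
        node = min(usable, key=lambda v: dist.get(v, float("inf")))
        path.append(node)
    return None

adj = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
failed = {frozenset(("A", "B"))}
print(lols_forward(adj, "A", "D", failed))  # ['A', 'C', 'D']
```

With link A-B down, the packet blacklists it at A and detours via C, which is the local-rerouting behavior the paper generalizes to multiple failures.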

Rohmer, T. and Nakib, A. and Nafaa, A. A Priori Knowledge Guided Approach for Optimal Peer Selection in P2P VoD Systems 2014 Network and Service Management, IEEE Transactions on
Vol. 11(3), pp. 350-362 
entropy;market research;measurement;peer-to-peer computing;resource management;streaming media;uplink;bayes;learning;optimal selection;peer-to-peer;resource allocation;video-on-demand DOI  
Abstract: With the rise of Video-on-Demand (VoD) systems as a preferred way to distribute video content over IP networks, many research works and innovations have focused on improving the scalability of streaming systems by looking at distributed approaches such as peer-to-peer (P2P). One of the most critical aspects in P2P-assisted streaming systems is the real-time resource allocation, which drives the performance of the system in terms of capacity utilization and VoD request rejection rates. In this paper, we specifically focus on the problem of maximizing the P2P streaming system utilization by effectively alternating between different resource allocation strategies. Switching between different resource allocation strategies is guided by a run-time statistical analysis of performances against predicted content popularity patterns. A key contribution of this paper resides in effectively combining different, and potentially conflicting, performance objectives when deciding on which resource allocation strategy to use. Indeed, we use a Bayesian Fusion to select the most appropriate resource allocation strategy to deal with future content demand. With our P2P resource allocation framework, a VoD service operator can combine any number of resource allocation strategies and formulate different performance objectives that meet the requirements of its network and the content consumption behavior of its users.
BibTeX:

@article{6881717,

  author = {Rohmer, T. and Nakib, A. and Nafaa, A.},

  title = {A Priori Knowledge Guided Approach for Optimal Peer Selection in P2P VoD Systems},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2014},

  volume = {11},

  number = {3},

  pages = {350-362},

  doi = {http://dx.doi.org/10.1109/TNSM.2014.2346076}

}

Rooney, Sean and Bauer, Daniel and Scotton, Paolo Techniques for integrating sensors into the enterprise network 2006 Network and Service Management, IEEE Transactions on
Vol. 3(1), pp. 43-52 
asynchronous messaging , power efficient protocols , scalability , sensor systems DOI  
Abstract: Cheap programmable sensor devices are becoming commercially available. They offer the possibility of transforming existing enterprise applications and enabling entirely new ones. The merging of sensor networks into the enterprise network poses some distinct problems. In particular, information from these devices must be obtained in a way which minimizes their energy use, and must be aggregated and filtered before being sent to the application server to prevent it from being overwhelmed. We describe a range of complementary techniques for integrating sensors into an enterprise network. These comprise new architectural entities within the enterprise network (edge servers), new means of sharing information within the enterprise network (message binning), and new protocols for extracting information from the sensor network (Messo).
BibTeX:

@article{4798306,

  author = {Rooney, Sean and Bauer, Daniel and Scotton, Paolo},

  title = {Techniques for integrating sensors into the enterprise network},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2006},

  volume = {3},

  number = {1},

  pages = {43-52},

  doi = {http://dx.doi.org/10.1109/TNSM.2006.4798306}

}

Roychoudhuri, Lopamudra and Al-Shaer, Ehab S. Real-time packet loss prediction based on end-to-end delay variation 2005 Network and Service Management, IEEE Transactions on
Vol. 2(1), pp. 29-38 
delay-loss correlation , loss prediction DOI  
Abstract: The effect of packet loss on the quality of real-time audio is significant. Nevertheless, Internet measurement experiments continue to show a considerable variation of packet loss, which makes audio error recovery and concealment challenging. We propose a novel framework to predict packet loss and congestion, based on measurements of end-to-end delay variation and trend, enabling proactive error recovery and congestion avoidance. Our preliminary simulation and experimentation results with various sites on the Internet show the effectiveness and the accuracy of the Loss Predictor technique.
BibTeX:

@article{4798299,

  author = {Roychoudhuri, Lopamudra and Al-Shaer, Ehab S.},

  title = {Real-time packet loss prediction based on end-to-end delay variation},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2005},

  volume = {2},

  number = {1},

  pages = {29-38},

  doi = {http://dx.doi.org/10.1109/TNSM.2005.4798299}

}
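The intuition behind the entry above, that rising one-way delay signals queue build-up and hence imminent loss, can be sketched with a generic smoothed-trend estimator (Holt's linear method). This is an illustrative stand-in, not the authors' Loss Predictor; the smoothing constants and delay series are invented.

```python
def delay_trend(samples, alpha=0.2, beta=0.2):
    """Holt's linear smoothing over one-way delay samples: returns the
    smoothed delay level and its trend. A sustained positive trend
    suggests queue build-up and therefore elevated loss risk."""
    level, trend = samples[0], 0.0
    for d in samples[1:]:
        prev = level
        level = alpha * d + (1 - alpha) * (level + trend)  # smoothed delay
        trend = beta * (level - prev) + (1 - beta) * trend  # smoothed slope
    return level, trend

rising = [10.0 + 0.5 * i for i in range(50)]  # delay ramping up (ms)
flat = [10.0] * 50                            # stable path
print(delay_trend(rising)[1] > delay_trend(flat)[1])  # True
```

A proactive sender could compare the trend against a threshold to trigger error recovery or rate reduction before loss actually occurs, which is the spirit of the framework described in the abstract.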

Saad, M. and Leon-Garcia, A. and Wei Yu Optimal Network Rate Allocation under End-to-End Quality-of-Service Requirements 2007 Network and Service Management, IEEE Transactions on
Vol. 4(3), pp. 40-49 
asynchronous transfer mode , bandwidth , communication system traffic control , delay , partitioning algorithms , processor scheduling , quality of service , streaming media , traffic control , utility programs computer networks , quality of service , radio networks DOI  
Abstract: We address the problem of allocating transmission rates to a set of network sessions with end-to-end bandwidth and delay requirements. We give a unified convex programming formulation that captures both average and probabilistic delay requirements. Moreover, we present a distributed algorithm and establish its convergence to the global optimum of the overall rate allocation problem. In our algorithm, session sources selfishly update their rates as to maximize their individual benefit (utility minus bandwidth cost), the network partitions end-to-end delay requirements into local per-link delays, and the links adjust their prices to coordinate the sources' and network's decisions, respectively. This algorithm relies on a network utility maximization (NUM) approach, and can be viewed as a generalization of TCP and active queue management (AQM) algorithms to handle end-to-end QoS. We extend our results to deterministic delay requirements when nodes employ Packet-level Generalized Processor Sharing (PGPS) schedulers.
BibTeX:

@article{4489645,

  author = {Saad, M. and Leon-Garcia, A. and Wei Yu},

  title = {Optimal Network Rate Allocation under End-to-End Quality-of-Service Requirements},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2007},

  volume = {4},

  number = {3},

  pages = {40-49},

  doi = {http://dx.doi.org/10.1109/TNSM.2007.021101}

}
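The pricing interpretation in the abstract above follows the standard network utility maximization (NUM) decomposition: sources pick rates from the link prices along their route, and links adjust prices toward capacity. A minimal sketch with log utilities and a single shared link (omitting the paper's per-link delay partitioning and QoS constraints; the topology and step size are invented) looks like:

```python
def num_rates(routes, capacity, steps=2000, gamma=0.01):
    """Dual-decomposition sketch of NUM with log utilities: each source
    sets x_s = 1 / (sum of link prices on its route); each link nudges
    its price up when offered load exceeds capacity (the TCP/AQM
    reading mentioned in the abstract)."""
    prices = {l: 1.0 for l in capacity}
    x = {}
    for _ in range(steps):
        # source update: maximize log(x_s) - x_s * route price
        x = {s: 1.0 / sum(prices[l] for l in ls) for s, ls in routes.items()}
        # link update: subgradient step on the dual (price) variables
        for l, c in capacity.items():
            load = sum(x[s] for s, ls in routes.items() if l in ls)
            prices[l] = max(1e-6, prices[l] + gamma * (load - c))
    return x

# two sources sharing one unit-capacity link: the proportional-fair
# allocation is 0.5 each, and the iteration converges near it
rates = num_rates({"A": ["L1"], "B": ["L1"]}, {"L1": 1.0})
```

The paper extends this basic scheme so that the network also partitions end-to-end delay requirements into per-link budgets, which the plain sketch above does not model.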

Salah, K. and Elbadawi, K. and Boutaba, R. Performance Modeling and Analysis of Network Firewalls 2012 Network and Service Management, IEEE Transactions on
Vol. 9(1), pp. 12-21 
network firewalls , performance analysis , performance modeling , queueing systems DOI  
Abstract: Network firewalls act as the first line of defense against unwanted and malicious traffic targeting Internet servers. Predicting the overall firewall performance is crucial to network security engineers and designers in assessing the effectiveness and resiliency of network firewalls against DDoS (Distributed Denial of Service) attacks as those commonly launched by today's Botnets. In this paper, we present an analytical queueing model based on the embedded Markov chain to study and analyze the performance of rule-based firewalls when subjected to normal traffic flows as well as DoS attack flows targeting different rule positions. We derive equations for key features and performance measures of engineering and design significance. These features and measures include throughput, packet loss, packet delay, and firewall's CPU utilization. In addition, we verify and validate our analytical model using simulation and real experimental measurements.
BibTeX:

@article{6112159,

  author = {Salah, K. and Elbadawi, K. and Boutaba, R.},

  title = {Performance Modeling and Analysis of Network Firewalls},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2012},

  volume = {9},

  number = {1},

  pages = {12-21},

  doi = {http://dx.doi.org/10.1109/TNSM.2011.122011.110151}

}
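For intuition, the kind of closed-form measures the firewall paper above derives (loss, throughput, delay) can be illustrated with a textbook M/M/1/K queue; the paper's embedded-Markov-chain model of rule-based matching is more general, and the arrival rate, service rate, and buffer size below are invented.

```python
def mm1k(lam, mu, K):
    """Steady-state loss probability, throughput and mean delay of an
    M/M/1/K queue (arrival rate lam, service rate mu, K buffer slots)."""
    rho = lam / mu
    if rho == 1.0:
        p_loss = 1.0 / (K + 1)                  # uniform state distribution
        L = K / 2.0
    else:
        p_loss = (1 - rho) * rho ** K / (1 - rho ** (K + 1))
        L = rho / (1 - rho) - (K + 1) * rho ** (K + 1) / (1 - rho ** (K + 1))
    throughput = lam * (1 - p_loss)             # accepted packets per second
    return p_loss, throughput, L / throughput   # Little's law: W = L / X

# hypothetical firewall: 900 pps offered, 1000 pps service, 50-slot queue
p_loss, throughput, delay = mm1k(900.0, 1000.0, 50)
print(f"loss={p_loss:.5f} throughput={throughput:.1f} pps delay={delay * 1e3:.2f} ms")
```

A DoS flow targeting deep rule positions effectively lowers mu, which this model shows drives both loss and delay up sharply as rho approaches one.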

Saleh, Maen and Dong, Liang Real-Time Scheduling with Security Enhancement for Packet Switched Networks 2013 Network and Service Management, IEEE Transactions on
Vol. 10(3), pp. 271-285 
authentication;protocols;quality of service;real-time systems;scheduling;servers;multi-agent systems;network security;quality of service (qos);real-time scheduling;resource estimation DOI  
Abstract: Real-time network applications depend on schedulers to guarantee the quality of service (QoS). Conventional real-time schedulers focus on the timing constraints but are much less effective in satisfying the security requirements. In this paper, we propose an adaptive security-aware scheduling system for packet switched networks using a real-time multi-agent design model. The proposed system combines real-time scheduling with security service enhancement. The scheduling unit uses the differentiated-earliest-deadline-first (Diff-EDF) scheduler and the security enhancement scheme adopts a congestion control mechanism. The required QoS is guaranteed for different types (audio and video) of real-time data flows, while the packet security levels are adaptively enhanced according to the feedbacks from the congestion control module. Compared with the IPsec protocol, the proposed scheme reduces the number of pending packets at the destinations. In implementation, the proposed scheme can overload the priority code point and the virtual-LAN identifier fields of the IEEE 802.1Q frame format, hence eliminating the overhead of the security associations performed by the IPsec protocol.
BibTeX:

@article{6565569,

  author = {Saleh, Maen and Dong, Liang},

  title = {Real-Time Scheduling with Security Enhancement for Packet Switched Networks},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2013},

  volume = {10},

  number = {3},

  pages = {271-285},

  doi = {http://dx.doi.org/10.1109/TNSM.2013.071813.120299}

}

Samaan, N. and Karmouch, A. Network anomaly diagnosis via statistical analysis and evidential reasoning 2008 Network and Service Management, IEEE Transactions on
Vol. 5(2), pp. 65-77 
anomaly detection , dempster-shafer theory , network management computer network management , inference mechanisms , security of data , statistical analysis , telecommunication security DOI  
Abstract: This paper investigates the efficiency of diagnosing network anomalies using concepts of statistical analysis and evidential reasoning. A bi-cycle of auto-regression is first applied to model increments in the values of network monitoring variables to accurately detect network anomalies. To classify the root cause of the detected anomalies, concepts of evidential reasoning from Dempster-Shafer theory are employed; the root cause of a network failure is inferred by gathering pieces of evidence concerning different groups of candidate failures obtained from a training set of detected anomalies and their corresponding root causes. These groups are then refined to infer the exact cause of failure when evidence accumulates using the Dempster rule of combination. To handle cases of imbalanced training sets, two new approaches for assigning belief values to different anomaly classes are also proposed. Performance analysis and results demonstrate the accuracy of the proposed scheme in detecting anomalies using real data.
BibTeX:

@article{4694132,

  author = {Samaan, N. and Karmouch, A.},

  title = {Network anomaly diagnosis via statistical analysis and evidential reasoning},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2008},

  volume = {5},

  number = {2},

  pages = {65-77},

  doi = {http://dx.doi.org/10.1109/TNSM.2008.021103}

}

Samak, T. and Al-Shaer, E. Fuzzy Conflict Analysis for QoS Policy Parameters in DiffServ Networks 2012 Network and Service Management, IEEE Transactions on
Vol. 9(4), pp. 459-472 
qos policy modeling , conflict analysis DOI  
Abstract: Policy-based network management is a necessity for managing large-scale environments. It provides the means for separating high-level system requirements from the actual implementation. As the network size increases, the need for automated tools to perform management becomes more apparent. But configuring routers and network devices to achieve QoS goals is a challenging task. Using Differentiated Services to dynamically perform this configuration involves defining policies on different network nodes in multiple domains. Policy aggregation across domains requires a unified policy model that can overcome the challenge of conflict detection and resolution. In this work, we propose a unified model to represent and encode QoS policies. This model enables efficient and flexible conflict analysis. The representation utilizes a bottom-up approach, from the base policy parameters to the aggregation of policies across domains with respect to traffic classes. We also present a classification of these conflicts and a measure of conflicts to assess the severity of any misconfiguration. The model and the conflict measure are evaluated with large networks and different topologies.
BibTeX:

@article{6228473,

  author = {Samak, T. and Al-Shaer, E.},

  title = {Fuzzy Conflict Analysis for QoS Policy Parameters in DiffServ Networks},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2012},

  volume = {9},

  number = {4},

  pages = {459-472},

  doi = {http://dx.doi.org/10.1109/TNSM.2012.062512.120308}

}

Samimi, F.A. and McKinley, P.K. and Sadjadi, S.M. and Chiping Tang and Shapiro, J.K. and Zhinan Zhou Service Clouds: Distributed Infrastructure for Adaptive Communication Services 2007 Network and Service Management, IEEE Transactions on
Vol. 4(2), pp. 84-95 
adaptive systems , cloud computing , computer architecture , computer networks , middleware , mobile computing , prototypes , quality of service , testing , web and internet services computer architecture , middleware , mobile computing , telecommunication services DOI  
Abstract: This paper describes service clouds, a distributed infrastructure designed to facilitate rapid prototyping and deployment of adaptive communication services. The infrastructure combines adaptive middleware functionality with an overlay network substrate in order to support dynamic instantiation and reconfiguration of services. The service clouds architecture includes a collection of low-level facilities that can be invoked directly by applications or used to compose more complex services. After describing the service clouds architecture, we present results of experimental case studies conducted on the PlanetLab Internet testbed alone and a mobile computing testbed.
BibTeX:

@article{4383310,

  author = {Samimi, F.A. and McKinley, P.K. and Sadjadi, S.M. and Chiping Tang and Shapiro, J.K. and Zhinan Zhou},

  title = {Service Clouds: Distributed Infrastructure for Adaptive Communication Services},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2007},

  volume = {4},

  number = {2},

  pages = {84-95},

  doi = {http://dx.doi.org/10.1109/TNSM.2007.070901}

}

Santos, F. and da Costa Cordeiro, W. and Gaspary, L. and Barcellos, M. Funnel: Choking Polluters in BitTorrent File Sharing Communities 2011 Network and Service Management, IEEE Transactions on
Vol. 8(4), pp. 310-321 
bittorrent , peer-to-peer , experimental evaluation , pollution DOI  
Abstract: BitTorrent-based file sharing communities are very popular nowadays. Anecdotal evidence hints that such communities are exposed to content pollution attacks (i.e., publication of "false" files, viruses, or other malware), requiring a moderation effort from their administrators. This cumbersome task grows with the content publishing rate. To tackle this problem, we propose a generic pollution control strategy and instantiate it as a mechanism for BitTorrent communities. The strategy follows a conservative approach: it regards newly published content as polluted, and allows the dissemination rate to increase according to the proportion of positive feedback issued about the content. In contrast to related approaches, the strategy and mechanism avoid the problem of pollution dissemination at the initial stages of a swarm, when insufficient feedback is available to form a reputation about the content. To evaluate the proposed solution, we conducted a set of experiments using a popular BitTorrent agent and an implementation of our mechanism. Results indicate that the proposed approach mitigates the dissemination of polluted content in BitTorrent while imposing low overhead on the distribution of non-polluted content.
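The conservative, feedback-driven rate control described in the abstract can be illustrated with a minimal Python sketch; the linear ramp, function name, and slot counts are illustrative, not the paper's exact mechanism:

```python
def allowed_upload_slots(max_slots, positive, total, min_slots=1):
    """Conservative dissemination control: content with no feedback is
    treated as potentially polluted and gets a minimal allowance;
    bandwidth grows with the proportion of positive feedback."""
    if total == 0:
        return min_slots  # no feedback yet: assume polluted
    ratio = positive / total
    return max(min_slots, round(max_slots * ratio))
```

With no feedback the content stays choked at `min_slots`; only accumulating positive feedback unlocks the full dissemination rate, which is the property that blocks pollution in a swarm's early stages.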
BibTeX:

@article{6070519,

  author = {Santos, F. and da Costa Cordeiro, W. and Gaspary, L. and Barcellos, M.},

  title = {Funnel: Choking Polluters in BitTorrent File Sharing Communities},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2011},

  volume = {8},

  number = {4},

  pages = {310-321},

  doi = {http://dx.doi.org/10.1109/TNSM.2011.110311.110104}

}

Sarma, A. and Chakraborty, S. and Nandi, S. Context Aware Handover Management: Sustaining QoS and QoE in a Public IEEE 802.11e Hotspot 2014 Network and Service Management, IEEE Transactions on
Vol. 11(4), pp. 530-543 
DOI  
Abstract: IEEE 802.11 community wireless hotspots are widely used nowadays to provide ubiquitous Internet connections to end-users in public areas such as airports and restaurants. Recent analysis of traffic patterns in wireless hotspots shows that more APs are deployed in a public area than required, although users visit only a few of them. As a consequence, severe load imbalance is observed in a hotspot local area network (LAN), which results in performance degradation in terms of quality of service (QoS) for the network and quality of experience (QoE) for the end-users. To address this problem, this paper proposes a set of class-aware bandwidth management policies, whose effectiveness is analyzed theoretically. Building on this theoretical foundation, a context-aware handover management scheme for proper load distribution in a public IEEE 802.11 network, supporting the class-aware bandwidth management policies, is designed. The performance of the proposed scheme is evaluated using an IEEE 802.11g+e wireless LAN testbed and compared with other schemes proposed in the literature.
BibTeX:

@article{6884842,

  author = {Sarma, A. and Chakraborty, S. and Nandi, S.},

  title = {Context Aware Handover Management: Sustaining QoS and QoE in a Public IEEE 802.11e Hotspot},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2014},

  volume = {11},

  number = {4},

  pages = {530-543},

  doi = {http://dx.doi.org/10.1109/TNSM.2014.2352811}

}

Sauve, J. and Queiroz, M. and Moura, A. and Bartolini, C. and Hickey, M. Prioritizing Information Technology Service Investments under Uncertainty 2011 Network and Service Management, IEEE Transactions on
Vol. 8(3), pp. 259-273 
investments in it services , aleatory uncertainty , decision making under uncertainty , epistemic uncertainty , risk management , service portfolio management information technology , investment DOI  
Abstract: We explore the challenge of selecting the best among a set of alternative IT investments. Solving this problem is important since the difference between alternative investment options may be drastic in terms of business results, both positive and negative. The resulting model takes as input a set of investment alternatives and a parameterized description of IT services, and provides as output a Preference Index for each alternative. The solution takes into account such characteristics as epistemic and aleatory uncertainty as well as the decision maker's attitude toward risk. Through a case study and a sensitivity analysis, we conclude that the model is useful in practice and robust; we also describe its domain of validity.
BibTeX:

@article{5970246,

  author = {Sauve, J. and Queiroz, M. and Moura, A. and Bartolini, C. and Hickey, M.},

  title = {Prioritizing Information Technology Service Investments under Uncertainty},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2011},

  volume = {8},

  number = {3},

  pages = {259-273},

  doi = {http://dx.doi.org/10.1109/TNSM.2011.072611.100077}

}

Sauve, J. and Santos, R. and Reboucas, R. and Moura, A. and Bartolini, C. Change Priority Determination in IT Service Management Based on Risk Exposure 2008 Network and Service Management, IEEE Transactions on
Vol. 5(3), pp. 178-187 
change management, risk, change prioritization, it service management, business-driven it management business data processing , management of change , risk analysis DOI  
Abstract: In the Change Management process within IT Service Management, some activities need to evaluate the risk exposure associated with changes to be made to the infrastructure and services. The paper presents a method to evaluate risk exposure associated with a change. Further, we show how to use the risk exposure metric to automatically assign priorities to changes. The formal model developed for this purpose captures the business perspective by using financial metrics in the evaluation of risk. Thus the method is an example of Business-Driven IT Management. A case study, performed in conjunction with a large IT service provider, is reported and provides good results when compared to decisions made by human managers.
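The core arithmetic behind the abstract, risk exposure as probability times financial impact mapped to a change priority, can be sketched as follows; the band thresholds and names are illustrative, not taken from the paper:

```python
def change_priority(p_failure, financial_impact, bands=(1000.0, 10000.0)):
    """Risk exposure = probability of change failure x financial impact;
    priority is a banded mapping from exposure to low/medium/high."""
    exposure = p_failure * financial_impact
    if exposure < bands[0]:
        return "low"
    return "medium" if exposure < bands[1] else "high"
```

The paper's contribution is in deriving the probability and impact terms from a formal, business-driven model rather than from manager intuition; this sketch only shows the final prioritization step.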
BibTeX:

@article{4805134,

  author = {Sauve, J. and Santos, R. and Reboucas, R. and Moura, A. and Bartolini, C.},

  title = {Change Priority Determination in IT Service Management Based on Risk Exposure},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2008},

  volume = {5},

  number = {3},

  pages = {178-187},

  doi = {http://dx.doi.org/10.1109/TNSM.2009.031105}

}

Schembra, G. and Incarbone, G. A Business Model for Multimedia Streaming in Mobile Clouds 2014 Network and Service Management, IEEE Transactions on
Vol. 11(3), pp. 376-389 
business;mobile computing;mobile nodes;multimedia communication;streaming media;markov models;mobile clouds;business model;multimedia applications;quality of service DOI  
Abstract: In the last few years, the proliferation of mobile devices, coupled with the ever-increasing popularity of multimedia applications, has stimulated great interest from new kinds of service providers. Their main objective is to offer mobile users located in limited areas broadband multimedia applications enriched with services and functionalities specific to mobile scenarios. This is the case, for example, of passengers waiting to board in an airport, visitors to a museum, or spectators at a football stadium. Since cellular networks usually cannot deliver bitrates suitable for such applications to every single user, the most appealing solution for providing mobile users with multimedia applications is to organize users into mobile clouds. In this context, this paper considers delay-constrained multimedia streaming applications over mobile clouds and defines a business model for managing these kinds of services. Users are divided into two classes according to the way they intend to pay for the service: in bandwidth and energy for traffic relaying (cheap tariff) or with money (full tariff). An analytical model is proposed for designing the main system parameters and deciding on the tariffs of the business model. Finally, the business model is applied to a case study.
BibTeX:

@article{6873253,

  author = {Schembra, G. and Incarbone, G.},

  title = {A Business Model for Multimedia Streaming in Mobile Clouds},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2014},

  volume = {11},

  number = {3},

  pages = {376-389},

  doi = {http://dx.doi.org/10.1109/TNSM.2014.2346084}

}

Schmidt, R.O. and Sadre, R. and Sperotto, A. and Berg, H. and Pras, A. Impact of Packet Sampling on Link Dimensioning 2015 Network and Service Management, IEEE Transactions on
Vol. 12(3), pp. 392-405 
aggregates;bandwidth;estimation;loss measurement;monitoring;quality of service;radiation detectors DOI  
Abstract: Link dimensioning is used by network operators to properly provision the capacity of their network links. Proposed methods for link dimensioning often require statistics, such as traffic variance, that need to be calculated from packet-level measurements. In practice, due to increasing traffic volume, operators deploy packet sampling techniques aiming to reduce the burden of traffic monitoring, but little is known about how link dimensioning is affected by such measurements. In this paper we make use of a previously proposed and validated dimensioning formula that requires traffic variance to estimate required link capacity. We assess the impact of three packet sampling techniques on link dimensioning, namely Bernoulli, n-in-N and sFlow sampling. To account for the additional variance introduced by the sampling algorithms, we propose approaches to better estimate traffic variance from sampled data according to the employed technique. Results show that, depending on sampling rate and link load, packet sampling does not negatively impact link dimensioning accuracy even at very short timescales such as 10 ms. Moreover, we also show that the loss of inter-arrival times of sampled packets due to the exporting process in sFlow does not harm the estimations, provided that an appropriate sampling rate is used. Our study is validated using a large dataset consisting of packet traces captured at several locations around the globe.
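The variance-based dimensioning formula this line of work builds on has the form C(T, ε) = ρ + (1/T)·√(−2 ln(ε) · v(T)), where ρ is the mean traffic rate, v(T) the variance of traffic per interval of length T, and ε the allowed probability that demand exceeds capacity. A minimal sketch, assuming per-interval byte counts are already available and omitting the paper's sampling corrections:

```python
import math

def required_capacity(byte_counts, T, eps=0.01):
    """Required link capacity (bytes/s) from per-interval byte counts:
    mean rate plus a safety margin driven by the traffic variance."""
    n = len(byte_counts)
    mean = sum(byte_counts) / n
    var = sum((b - mean) ** 2 for b in byte_counts) / (n - 1)  # sample variance
    rho = mean / T  # average rate over the measurement period
    return rho + math.sqrt(-2.0 * math.log(eps) * var) / T
```

Constant traffic needs exactly the mean rate, while burstier traffic (larger v(T)) inflates the required capacity; this is why mis-estimating variance from sampled data, the paper's subject, directly translates into over- or under-provisioning.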
BibTeX:

@article{7117394,

  author = {Schmidt, R.O. and Sadre, R. and Sperotto, A. and Berg, H. and Pras, A.},

  title = {Impact of Packet Sampling on Link Dimensioning},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2015},

  volume = {12},

  number = {3},

  pages = {392-405},

  doi = {http://dx.doi.org/10.1109/TNSM.2015.2436365}

}

Schonwalder, J. and Marinov, V. On the Impact of Security Protocols on the Performance of SNMP 2011 Network and Service Management, IEEE Transactions on
Vol. 8(1), pp. 52-64 
snmp , network management , security protocol computer network management , computer network security , network interfaces , transport protocols , user interfaces DOI  
Abstract: Since the early 1990s, there have been several attempts to secure the Simple Network Management Protocol (SNMP). The third version of the protocol, published as full standard in 2002, introduced the User-based Security Model (USM), which comes with its own user and key-management infrastructure. Since then, network operators have reported that deploying another user and key management infrastructure to secure SNMP is expensive and a reason to not deploy SNMPv3. This paper describes how existing security protocols operating above the transport layer and below application protocols can be used to secure SNMP. These protocols can take advantage of already deployed key management infrastructures that are used for other network management interfaces and hence their use can reduce the operational costs associated with securing SNMP. Our main contribution is a detailed performance analysis of a prototype implementation, comparing the performance of SNMPv3 over SSH, TLS, and DTLS with other versions of SNMP. We also discuss the differences between the various options to secure SNMP and provide guidelines for choosing solutions to implement or deploy.
BibTeX:

@article{5702353,

  author = {Schonwalder, J. and Marinov, V.},

  title = {On the Impact of Security Protocols on the Performance of SNMP},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2011},

  volume = {8},

  number = {1},

  pages = {52-64},

  doi = {http://dx.doi.org/10.1109/TNSM.2011.012111.00011}

}

Schuetz, S. and Zimmermann, K. and Nunzi, G. and Schmid, S. and Brunner, M. Autonomic and Decentralized Management of Wireless Access Networks 2007 Network and Service Management, IEEE Transactions on
Vol. 4(2), pp. 96-106 
base stations , centralized control , communication system control , computer network management , computer networks , control systems , environmental management , radio spectrum management , wireless lan , wireless networks radio access networks , telecommunication network management DOI  
Abstract: In this article, we apply autonomic and distributed management principles to wireless access networks. Most interesting is the application of autonomic properties and behaviors including adaptive, aware, and automatic operation in a decentralized setting. In particular, we present a generic and autonomic management architecture for decentralized management of wireless access networks, such as GERAN/UTRAN, E-UTRAN, WiMAX or WLAN. For evaluation purposes, we apply this architecture to the management of a Wireless LAN network, and we evaluate the architecture and some of the autonomic management functions through simulations, a prototype implementation and the setup of a real-world testbed for experimentation with the proposed management approach.
BibTeX:

@article{4383311,

  author = {Schuetz, S. and Zimmermann, K. and Nunzi, G. and Schmid, S. and Brunner, M.},

  title = {Autonomic and Decentralized Management of Wireless Access Networks},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2007},

  volume = {4},

  number = {2},

  pages = {96-106},

  doi = {http://dx.doi.org/10.1109/TNSM.2007.070905}

}

Secci, S. and Ma, H. and Helvik, B. and Rougier, J. Resilient Inter-Carrier Traffic Engineering for Internet Peering Interconnections 2011 Network and Service Management, IEEE Transactions on
Vol. 8(4), pp. 274-284 
bgp , internet reliability , routing resiliency , game theory , inter-domain routing , multipath routing , peering management DOI  
Abstract: We present a novel resilient routing policy for controlling the routing across peering links between Internet carriers. Our policy is aimed at offering more dependability and better performance to the routing decision with respect to the current practice (e.g., hot-potato routing). Our work relies on a non-cooperative game framework, called Peering Equilibrium MultiPath (PEMP), that has been recently proposed. PEMP allows two carrier providers to coordinate a multipath route selection for critical flows across peering links, while preserving their respective interests and independence. In this paper, we propose a resilient PEMP execution policy accounting for potential impairments (traffic matrix variations, intra-AS and peering link failures) that may occur in both peering networks. We mathematically define how to produce robust equilibrium sets and describe how to appropriately react to unexpected network impairments that might take place. The results from extensive simulations show that, under a realistic failure scenario, our policy adaptively prevents peering link congestion and excessive route deviations after failures.
BibTeX:

@article{6070522,

  author = {Secci, S. and Ma, H. and Helvik, B. and Rougier, J.},

  title = {Resilient Inter-Carrier Traffic Engineering for Internet Peering Interconnections},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2011},

  volume = {8},

  number = {4},

  pages = {274-284},

  doi = {http://dx.doi.org/10.1109/TNSM.2011.110311.100064}

}

Secci, S. and Pujolle, G. and Thi Mai Trang Nguyen and Sinh Chung Nguyen Performance-Cost Trade-Off Strategic Evaluation of Multipath TCP Communications 2014 Network and Service Management, IEEE Transactions on
Vol. 11(2), pp. 250-263 
mobile communication;transport protocols;access network interfaces;multihomed end-to-end communications;multipath tcp communications;multiple access paths;non-cooperative game;performance-cost trade-off strategic evaluation;strategic load-balancing distribution;delays;games;internet;load modeling;mobile communication;protocols;throughput;mp-tcp;load-balancing;multihoming;network coordination;routing games DOI  
Abstract: Today's mobile terminals have several access network interfaces. New protocols have been proposed during the last few years to enable the concurrent use of multiple access paths for data transmission. In practice, the use of different access technologies is subject to different interconnection costs, and mobile users have preferences on interfaces jointly depending on performance and cost factors. There is therefore an interest in defining "light" multipath communication policies that are less expensive than greedy unconstrained ones, such as basic multipath TCP (MP-TCP), and that are strategically acceptable assuming a selfish endpoint behavior. With this goal, we analyze the performance-cost trade-off of multi-homed end-to-end communications from a strategic standpoint. We model the communication between multi-homed terminals as a specific non-cooperative game to achieve performance-cost decision frontiers. The resulting potential game always allows selecting multiple equilibria, leading to a strategic load-balancing distribution over the available interfaces, possibly constraining their use with respect to basic MP-TCP. By simulating a realistic three-interface scenario, we show how the achievable performance is bounded by the interconnection cost; we show that we can halve the interconnection cost with respect to basic (greedy) MP-TCP while offering double the throughput with respect to single-path TCP. Moreover, we evaluate the compromise between keeping or relaxing strategic constraints in a coordinated MP-TCP context.
BibTeX:

@article{6811210,

  author = {Secci, S. and Pujolle, G. and Thi Mai Trang Nguyen and Sinh Chung Nguyen},

  title = {Performance-Cost Trade-Off Strategic Evaluation of Multipath TCP Communications},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2014},

  volume = {11},

  number = {2},

  pages = {250-263},

  doi = {http://dx.doi.org/10.1109/TNSM.2014.2321838}

}

Setzer, T. and Bhattacharya, K. and Ludwig, H. Change scheduling based on business impact analysis of change-related risk 2010 Network and Service Management, IEEE Transactions on
Vol. 7(1), pp. 58-71 
business impact analysis, change management, change scheduling, risk management, service networks, service transition management business data processing , financial management , management of change , probability , risk management , scheduling DOI  
Abstract: In today's enterprises, the alignment of IT service infrastructures to continuously changing business requirements is a key cost driver, all the more so as most severe service disruptions can be attributed to the introduction of changes into the IT service infrastructure. Change management is a disciplined process for introducing required changes with minimum business impact. Considering the number of business processes in an enterprise and the complexity of the dependency network of processes to invoked services, one of the most pressing problems in change management is the risk-aware prioritization and scheduling of vast numbers of service changes. In this paper we introduce a model for estimating the business impact of operational risk resulting from changes. We determine the business impact based on the number and types of business process instances affected by a change-related outage and quantify the business impact in terms of financial loss. The model takes into account the network of dependencies between processes and services, probabilistic change-related downtime, uncertainty in business process demand, and various infrastructural characteristics. Based on the analytical model, we derive decision models aimed at scheduling single or multiple correlated changes with the lowest expected business impact. The models are evaluated using simulations based on industry data.
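In its simplest form, the scheduling decision described above reduces to picking the change window with the lowest expected business impact; a toy sketch with illustrative names (the paper's model additionally handles dependency networks, demand uncertainty, and correlated changes):

```python
def best_change_window(demand, outage_prob, loss_per_instance):
    """Expected business impact of a change in window w: the chance of a
    change-related outage times the process instances active in w times
    the loss per disrupted instance. Returns the cheapest window index."""
    impact = [outage_prob * d * loss_per_instance for d in demand]
    return min(range(len(demand)), key=lambda w: impact[w])
```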
BibTeX:

@article{5412873,

  author = {Setzer, T. and Bhattacharya, K. and Ludwig, H.},

  title = {Change scheduling based on business impact analysis of change-related risk},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2010},

  volume = {7},

  number = {1},

  pages = {58-71},

  doi = {http://dx.doi.org/10.1109/TNSM.2010.I9P0305}

}

Setzer, Thomas and Markl, Christian Autonomic Prioritization of Enterprise Transactions Based on Bid-Price Controls 2013 Network and Service Management, IEEE Transactions on
Vol. 10(4), pp. 398-409 
business;mathematical model;process control;resource management;strategic planning;transaction processing;transaction processing;bid-price controls;business-driven prioritization;demand management DOI  
BibTeX:

@article{6623066,

  author = {Setzer, Thomas and Markl, Christian},

  title = {Autonomic Prioritization of Enterprise Transactions Based on Bid-Price Controls},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2013},

  volume = {10},

  number = {4},

  pages = {398-409},

  doi = {http://dx.doi.org/10.1109/TNSM.2013.100413.120316}

}

Sha, Mo and Hackmann, Gregory and Lu, Chenyang Real-World Empirical Studies on Multi-Channel Reliability and Spectrum Usage for Home-Area Sensor Networks 2013 Network and Service Management, IEEE Transactions on
Vol. 10(1), pp. 56-69 
empirical study , home-area sensor networks , multi-channel , spectrum DOI  
Abstract: Home area networks (HANs) consisting of wireless sensors have emerged as the enabling technology for important applications such as smart energy. These applications impose unique network management constraints, requiring low data rates but high network reliability in the face of unpredictable wireless environments. This paper presents two in-depth empirical studies on wireless channels in real homes, providing key design guidelines for meeting the network management constraints of HAN applications. The spectrum study analyzes spectrum usage in the 2.4 GHz band where HANs based on the IEEE 802.15.4 standard must coexist with existing wireless devices. We characterize the ambient wireless environment in six apartments through passive spectrum analysis across the entire 2.4 GHz band over seven days in each apartment. We find that the wireless conditions in these residential environments are much more complex and varied than in a typical office environment. Moreover, while 802.11 signals play a significant role in spectrum usage, there also exists non-negligible noise from non-802.11 devices. The multi-channel link study measures the reliability of different 802.15.4 channels through active probing with motes in ten apartments. We find that there is not always a persistently reliable channel over 24 hours, and that link reliability does not exhibit cyclic behavior at daily or weekly timescales. Nevertheless, reliability can be maintained through infrequent channel hopping, suggesting dynamic channel hopping as a key tool for meeting the network management requirements of HAN applications. Our empirical studies provide important guidelines and insights in designing HANs for residential environments.
BibTeX:

@article{6307795,

  author = {Sha, Mo and Hackmann, Gregory and Lu, Chenyang},

  title = {Real-World Empirical Studies on Multi-Channel Reliability and Spectrum Usage for Home-Area Sensor Networks},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2013},

  volume = {10},

  number = {1},

  pages = {56-69},

  doi = {http://dx.doi.org/10.1109/TNSM.2012.091312.120237}

}

Shayani, D. and Machuca, C.M. and Jager, M. A techno-economic approach to telecommunications: the case of service migration 2010 Network and Service Management, IEEE Transactions on
Vol. 7(2), pp. 96-106 
business intelligence; employee allocation; next generation networking; operational expenditures; service migration; techno-economic analysis. competitive intelligence , economics , telecommunication network management , telecommunication services DOI  
Abstract: The evolution of telecommunications technology has always challenged the industry to cope with the rapid pace of innovation and demand. The current landscape shows network operators constantly pushed to minimize their expenses while remaining profitable. As a result, keen interest has emerged in applying business intelligence methods through an approach that encompasses the particularities of the technology, hence named techno-economic analysis. Based on this framework, this work proposes a novel approach to modeling and studying a particular process: the migration of services between platforms. The need to migrate telecommunication platforms stems from constant technological progress and has become a focus of savings and new business opportunities. The proposed service migration model uses a hill-climbing heuristic to find the best scenario for a given number of involved employees. The paper presents a collection of case studies which cover the most important aspects of the migration process and suggest how network operators can increase their benefits. Additionally, network dismantling, the very end of a network's lifecycle, is also analyzed and simulated. Our approach takes a deeper look into the operation of a telecommunications company, identifies the main cost drivers, and extracts recommendations for improved network management.
BibTeX:

@article{5471040,

  author = {Shayani, D. and Machuca, C.M. and Jager, M.},

  title = {A techno-economic approach to telecommunications: the case of service migration},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2010},

  volume = {7},

  number = {2},

  pages = {96-106},

  doi = {http://dx.doi.org/10.1109/TNSM.2010.06.I8P0297}

}

Si, W. and Hashemi, M. and Xin, L. and Starobinski, D. and Trachtenberg, A. TeaCP: A Toolkit for Evaluation and Analysis of Collection Protocols in Wireless Sensor Networks 2015 Network and Service Management, IEEE Transactions on
Vol. 12(2), pp. 293-307 
data collection;data visualization;delays;protocols;reliability;testing;wireless sensor networks;network and systems monitoring;collection protocol;open-source toolkit;opensource toolkit;performance evaluation and visualization;wireless sensor networks DOI  
Abstract: We present TeaCP, a prototype toolkit for the evaluation and analysis of collection protocols in both simulation and experimental environments running on TinyOS. Our toolkit consists of a testing system, which runs a collection protocol of choice, and an optional SD card-based logging system, which stores the logs generated by the testing system. The SD card datalogger allows a wireless sensor network (WSN) to be deployed flexibly in various environments, especially where wired transfer of data is difficult. Using the saved logs, TeaCP evaluates a wide range of performance metrics, such as reliability, throughput, and delay. TeaCP further allows visualization of packet routes and the topology evolution of the network, under both static and dynamic conditions, even in the face of transient disconnections. Through simulation of an intra-car WSN and real lab experiments, we demonstrate the functionality of TeaCP for comparing the performance of two prominent collection protocols, the Collection Tree Protocol (CTP) and the Backpressure Collection Protocol (BCP). We also present the usage of TeaCP as a high level diagnosis tool, through which an inconsistency of the BCP implementation for the CC2420 radio chips is identified and resolved.
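The kind of per-metric evaluation TeaCP performs on collection logs can be sketched as follows; the tuple-based log format here is hypothetical, not TeaCP's actual format:

```python
def collection_metrics(events):
    """events: one (seq, send_time, recv_time_or_None) tuple per packet.
    Returns (delivery reliability, mean end-to-end delay of delivered
    packets), two of the metrics the abstract mentions."""
    delivered = [(s, r) for _, s, r in events if r is not None]
    reliability = len(delivered) / len(events)
    mean_delay = sum(r - s for s, r in delivered) / len(delivered)
    return reliability, mean_delay
```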
BibTeX:

@article{7051281,

  author = {Si, W. and Hashemi, M. and Xin, L. and Starobinski, D. and Trachtenberg, A.},

  title = {TeaCP: A Toolkit for Evaluation and Analysis of Collection Protocols in Wireless Sensor Networks},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2015},

  volume = {12},

  number = {2},

  pages = {293-307},

  doi = {http://dx.doi.org/10.1109/TNSM.2015.2407882}

}

Song, Y. and Liu, L. and Ma, H. and Vasilakos, A.V. A Biology-Based Algorithm to Minimal Exposure Problem of Wireless Sensor Networks 2014 Network and Service Management, IEEE Transactions on
Vol. 11(3), pp. 417-430 
approximation algorithms;approximation methods;biological system modeling;computational modeling;junctions;optimization;sensors;bio-inspired computing;minimal exposure problem;physarum optimization;steiner problem;wireless sensor networks DOI  
Abstract: The Minimal Exposure Problem (MEP), which corresponds to the quality of coverage, is a fundamental problem in wireless sensor networks. This paper exploits a biological model of physarum to design a novel biology-inspired optimization algorithm for MEP. We first formulate MEP and the related models, and then convert MEP into the Steiner problem by discretizing the monitoring field to a large-scale weighted grid. Inspired by the path-finding capability of physarum, we develop a biological optimization solution to find the minimal exposure road-network among multiple points of interest, and present a Physarum Optimization Algorithm (POA). Furthermore, POA can be used for solving the general Steiner problem. Extensive simulations demonstrate that our proposed models and algorithm are effective for finding the road-network with minimal exposure and feasible for the Steiner problem.
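The discretization step described above turns the monitored field into a weighted grid on which a minimal-exposure route between two points is an ordinary shortest path. A plain Dijkstra sketch (cell weight = exposure incurred when crossing that cell); the paper's physarum-inspired POA addresses the harder multi-point Steiner case:

```python
import heapq

def min_exposure_path(grid, src, dst):
    """Minimal total exposure to travel from src to dst on a weighted
    grid, counting the weight of every visited cell including src."""
    rows, cols = len(grid), len(grid[0])
    dist = {src: grid[src[0]][src[1]]}
    pq = [(dist[src], src)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == dst:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + grid[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return float("inf")
```

On such a grid a low-exposure path naturally skirts heavily sensed cells; connecting many points of interest at once is the Steiner generalization the paper targets.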
BibTeX:

@article{6873305,

  author = {Song, Y. and Liu, L. and Ma, H. and Vasilakos, A.V.},

  title = {A Biology-Based Algorithm to Minimal Exposure Problem of Wireless Sensor Networks},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2014},

  volume = {11},

  number = {3},

  pages = {417-430},

  doi = {http://dx.doi.org/10.1109/TNSM.2014.2346080}

}

Sourlas, Vasilis and Gkatzikis, Lazaros and Flegkas, Paris and Tassiulas, Leandros Distributed Cache Management in Information-Centric Networks 2013 Network and Service Management, IEEE Transactions on
Vol. 10(3), pp. 286-299 
autonomic cache management;information-centric networks;distributed optimization;performance bounds DOI  
Abstract: The main promise of current research efforts in the area of Information-Centric Networking (ICN) architectures is to optimize the dissemination of information within transient communication relationships of endpoints. Efficient caching of information is key to delivering on this promise. In this paper, we look into achieving this promise from the angle of managed replication of information. Management decisions are made in order to efficiently place replicas of information in dedicated storage devices attached to nodes of the network. In contrast to traditional off-line external management systems we adopt a distributed autonomic management architecture where management intelligence is placed inside the network. Particularly, we present an autonomic cache management approach for ICNs, where distributed managers residing in cache-enabled nodes decide on which information items to cache. We propose four on-line intra-domain cache management algorithms with different level of autonomicity and compare them with respect to performance, complexity, execution time and message exchange overhead. Additionally, we derive a lower bound of the overall network traffic cost for a certain category of network topologies. Our extensive simulations, using realistic network topologies and synthetic workload generators, signify the importance of network wide knowledge and cooperation.
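A local caching decision of the kind the distributed managers make can be caricatured as a greedy gain-density choice; the item fields and the gain model here are illustrative, not the paper's four algorithms:

```python
def greedy_cache(items, capacity):
    """items: (name, popularity, traffic_cost_saved_per_hit, size) tuples.
    Greedily cache items by gain per unit of storage until the node's
    capacity is exhausted; returns the names of the cached items."""
    ranked = sorted(items, key=lambda it: it[1] * it[2] / it[3], reverse=True)
    chosen, used = [], 0
    for name, pop, save, size in ranked:
        if used + size <= capacity:
            chosen.append(name)
            used += size
    return chosen
```

Such purely local greed ignores what neighboring caches hold; the cooperation and network-wide knowledge studied in the paper are exactly what closes that gap.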
BibTeX:

@article{Sourlas2013,

  author = {Sourlas, Vasilis and Gkatzikis, Lazaros and Flegkas, Paris and Tassiulas, Leandros},

  title = {Distributed Cache Management in Information-Centric Networks},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2013},

  volume = {10},

  number = {3},

  pages = {286-299},

  doi = {http://dx.doi.org/10.1109/TNSM.2013.052113.120382}

}

Sperotto, A. and Mandjes, M. and Sadre, R. and de Boer, P. and Pras, A. Autonomic Parameter Tuning of Anomaly-Based IDSs: an SSH Case Study 2012 Network and Service Management, IEEE Transactions on
Vol. 9(2), pp. 128-141 
autonomic , anomalies , intrusion detection , network management , parameter optimization DOI  
Abstract: Anomaly-based intrusion detection systems classify network traffic instances by comparing them with a model of the normal network behavior. To be effective, such systems are expected to precisely detect intrusions (high true positive rate) while limiting the number of false alarms (low false positive rate). However, there exists a natural trade-off between detecting all anomalies (at the expense of raising alarms too often), and missing anomalies (but not issuing any false alarms). The parameters of a detection system play a central role in this trade-off, since they determine how responsive the system is to an intrusion attempt. Despite the importance of properly tuning the system parameters, the literature has put little emphasis on the topic, and the task of adjusting such parameters is usually left to the expertise of the system manager or expert IT personnel. In this paper, we present an autonomic approach for tuning the parameters of anomaly-based intrusion detection systems in case of SSH traffic. We propose a procedure that aims to automatically tune the system parameters and, by doing so, to optimize the system performance. We validate our approach by testing it on a flow-based probabilistic detection system for the detection of SSH attacks.
BibTeX:

@article{6172597,

  author = {Sperotto, A. and Mandjes, M. and Sadre, R. and de Boer, P. and Pras, A.},

  title = {Autonomic Parameter Tuning of Anomaly-Based IDSs: an SSH Case Study},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2012},

  volume = {9},

  number = {2},

  pages = {128-141},

  doi = {http://dx.doi.org/10.1109/TNSM.2012.031512.110146}

}

Srivastava, Shekhar and Agrawal, Gaurav and Pioro, Michal and Medhi, Deep Determining link weight system under various objectives for OSPF networks using a Lagrangian relaxation-based approach 2005 Network and Service Management, IEEE Transactions on
Vol. 2(1), pp. 9-18 
ospf networks , link weight system , optimal routing , traffic engineering DOI  
Abstract: An important traffic engineering problem for OSPF networks is the determination of optimal link weights. Certainly, this depends on the traffic engineering objective. Regardless, often a variety of performance measures may be of interest to a network provider due to their impact on the network. In this paper, we consider different objectives and discuss how they impact the determination of the link weights and different performance measures. In particular, we propose a composite objective function; furthermore, we present a Lagrangian relaxation-based dual approach to determine the link weight system. We then consider different performance measures and discuss the effectiveness of different objectives through computational studies of a variety of network topologies. We find that our proposed composite objective function with Lagrangian relaxation-based dual approach is very effective in meeting different performance measures and is computationally very fast.
BibTeX:

@article{4798297,

  author = {Srivastava, Shekhar and Agrawal, Gaurav and Pioro, Michal and Medhi, Deep},

  title = {Determining link weight system under various objectives for OSPF networks using a Lagrangian relaxation-based approach},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2005},

  volume = {2},

  number = {1},

  pages = {9-18},

  doi = {http://dx.doi.org/10.1109/TNSM.2005.4798297}

}

Stanic, S. and Subramaniam, S. and Sahin, G. and Choi, H. and Choi, H.-A. Active monitoring and alarm management for fault localization in transparent all-optical networks 2010 Network and Service Management, IEEE Transactions on
Vol. 7(2), pp. 118-131 
transparent optical networks, fault detection, fault localization, monitoring, alarm processing. alarm systems , fault location , integer programming , linear programming , monitoring DOI  
Abstract: Achieving accurate and efficient fault localization in large transparent all-optical networks (TONs) is an important and challenging problem due to unique fault-propagation, time constraints, and scalability requirements. In this paper, we introduce a novel technique for optimizing the speed of fault-localization through the selection of an active set of monitors for centralized and hierarchically-distributed management. The proposed technique is capable of providing multiple levels of fault-localization-granularity, from individual discrete optical components to entire monitoring domains. We formulate and prove the NP-completeness of the optimal monitor activation problem and present its Integer Linear Program (ILP) formulation. Furthermore, we propose a novel heuristic whose solution quality is verified by comparing it with an ILP. Extensive simulation results provide supporting analysis and comparisons of achievable alarm-vector reduction, localization coverage, and time complexity, for flat and hierarchically distributed monitoring approaches. The impact of network connectivity on fault localization complexity in randomly generated topologies is also studied. Results demonstrate the effectiveness of the proposed technique in efficient and scalable monitoring of transparent optical networks.
BibTeX:

@article{5471042,

  author = {Stanic, S. and Subramaniam, S. and Sahin, G. and Choi, H. and Choi, H.-A.},

  title = {Active monitoring and alarm management for fault localization in transparent all-optical networks},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2010},

  volume = {7},

  number = {2},

  pages = {118-131},

  doi = {http://dx.doi.org/10.1109/TNSM.2010.06.I9P0343}

}

Sun, X.S. and Agarwal, A. and Ng, T.S.E. Controlling Race Conditions in OpenFlow to Accelerate Application Verification and Packet Forwarding 2015 Network and Service Management, IEEE Transactions on
Vol. 12(2), pp. 263-277 
delays;model checking;process control;protocols;software;switches;openflow;forwarding delay;model checking;race condition;software-defined network;verification DOI  
Abstract: OpenFlow is a Software Defined Networking (SDN) protocol that is being deployed in many network systems. SDN application verification takes an important role in guaranteeing the correctness of the application. Through our investigation, we discover that application verification can be very inefficient under the OpenFlow protocol since there are many race conditions between the data packets and control plane messages. Furthermore, these race conditions also increase the control plane workload and packet forwarding delay. We propose Attendre, an OpenFlow extension, to mitigate the ill effects of the race conditions in OpenFlow networks. We have implemented Attendre in NICE (a model checking verifier), Open vSwitch (a software virtual switch), and NOX (an OpenFlow controller). Experiments show that Attendre can reduce verification time by several orders of magnitude, and significantly reduce TCP connection setup time.
BibTeX:

@article{7080852,

  author = {Sun, X.S. and Agarwal, A. and Ng, T.S.E.},

  title = {Controlling Race Conditions in OpenFlow to Accelerate Application Verification and Packet Forwarding},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2015},

  volume = {12},

  number = {2},

  pages = {263-277},

  doi = {http://dx.doi.org/10.1109/TNSM.2015.2419975}

}

Szilagyi, P. and Novaczki, S. An Automatic Detection and Diagnosis Framework for Mobile Communication Systems 2012 Network and Service Management, IEEE Transactions on
Vol. 9(2), pp. 184-197 
fault management , key performance indicator , network management automation , root cause analysis , self-healing DOI  
Abstract: As the complexity of commercial cellular networks grows, there is an increasing need for automated methods detecting and diagnosing cells not only in complete outage but with degraded performance as well. Root cause analysis of the detected anomalies can be tedious and currently carried out mostly manually if at all; in most practical cases, operators simply reset problematic cells. In this paper, a novel integrated detection and diagnosis framework is presented that can identify anomalies and find the most probable root cause of not only severe problems but even smaller degradations as well. Detecting an anomaly is based on monitoring radio measurements and other performance indicators and comparing them to their usual behavior captured by profiles, which are also automatically built without the need for thresholding or manual calibration. Diagnosis is based on reports of previous fault cases by identifying and learning their characteristic impact on different performance indicators. The designed framework has been evaluated with proof-of-concept simulations including artificial faults in an LTE system. Results show the feasibility of the framework for providing the correct root cause of anomalies and possibly ranking the problems by their severity.
BibTeX:

@article{6174486,

  author = {Szilagyi, P. and Novaczki, S.},

  title = {An Automatic Detection and Diagnosis Framework for Mobile Communication Systems},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2012},

  volume = {9},

  number = {2},

  pages = {184-197},

  doi = {http://dx.doi.org/10.1109/TNSM.2012.031912.110155}

}

Yongning Tang and Al-Shaer, E. and Boutaba, R. Efficient fault diagnosis using incremental alarm correlation and active investigation for internet and overlay networks 2008 Network and Service Management, IEEE Transactions on
Vol. 5(1), pp. 36-49 
computer science , computerized monitoring , condition monitoring , degradation , fault diagnosis , ip networks , information technology , management information systems , scalability , web and internet services internet , computer network management , computer network reliability , fault diagnosis DOI  
Abstract: Fault localization is the core element in fault management. Symptom-fault map is commonly used to describe the symptom-fault causality in fault reasoning. For Internet service networks, a well-designed monitoring system can effectively correlate the observable symptoms (i.e., alarms) with the critical network faults (e.g., link failure). However, the lost and spurious symptoms can significantly degrade the performance and accuracy of a passive fault localization system. For overlay networks, due to limited underlying network accessibility, as well as the overlay scalability and dynamics, it is impractical to build a static overlay symptom-fault map. In this paper, we firstly propose a novel active integrated fault reasoning (AIR) framework to incrementally incorporate active investigation actions into the passive fault reasoning process based on an extended symptom-fault-action (SFA) model. Secondly, we propose an overlay network profile (ONP) to facilitate the dynamic creation of an overlay symptom-fault-action (called O-SFA) model, such that the AIR framework can be applied seamlessly to overlay networks (called O-AIR). As a result, the corresponding fault reasoning and action selection algorithms are elaborated. Extensive simulations and Internet experiments show that AIR and O-AIR can significantly improve both accuracy and performance in the fault reasoning for Internet and overlay service networks, especially when the ratio of the lost and spurious symptoms is high.
BibTeX:

@article{4570776,

  author = {Yongning Tang and Al-Shaer, E. and Boutaba, R.},

  title = {Efficient fault diagnosis using incremental alarm correlation and active investigation for internet and overlay networks},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2008},

  volume = {5},

  number = {1},

  pages = {36-49},

  doi = {http://dx.doi.org/10.1109/TNSM.2008.080104}

}

Tang, Y. and Al-Shaer, E. and Joshi, K. Reasoning under Uncertainty for Overlay Fault Diagnosis 2012 Network and Service Management, IEEE Transactions on
Vol. 9(1), pp. 34-47 
overlay networks , dependable networks , fault diagnosis , uncertainty reasoning DOI  
Abstract: The performance and reliability of overlay services rely on the underlying overlay network's ability to effectively diagnose and recover from faults such as link failures and overlay node outages. However, overlay networks bring to fault diagnosis new challenges such as large-scale deployment, inaccessible underlay network information, dynamic symptom-fault causality relationship, and multi-layer complexity. In this paper, we develop an evidential overlay fault diagnosis framework called DigOver to tackle these challenges. Firstly, DigOver identifies a set of potential faulty components based on shared end-user observed negative symptoms. Then, each potential faulty component is evaluated to quantify its fault likelihood and the corresponding evaluation uncertainty. Finally, DigOver dynamically constructs a plausible fault graph to locate the root causes of end-user observed negative symptoms. Both simulation and Internet experiments demonstrate that DigOver can effectively and accurately diagnose overlay faults based on end-user observed negative symptoms.
BibTeX:

@article{6122518,

  author = {Tang, Y. and Al-Shaer, E. and Joshi, K.},

  title = {Reasoning under Uncertainty for Overlay Fault Diagnosis},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2012},

  volume = {9},

  number = {1},

  pages = {34-47},

  doi = {http://dx.doi.org/10.1109/TNSM.2011.010312.110126}

}

Thing, V.L.L. and Sloman, M. and Dulay, N. Locating network domain entry and exit point/path for DDoS attack traffic 2009 Network and Service Management, IEEE Transactions on
Vol. 6(3), pp. 163-174 
distributed denial of service, ip traceback. ip networks , security of data DOI  
Abstract: A method to determine entry and exit points or paths of DDoS attack traffic flows into and out of network domains is proposed. We observe valid source addresses seen by routers from sampled traffic under non-attack conditions. Under attack conditions, we detect route anomalies by determining which routers have been used for unknown source addresses, to construct the attack paths. We consider deployment issues and show results from simulations to prove the feasibility of our scheme. We then implement our Traceback mechanism in C++ and more realistic experiments are conducted. The experiments show that accurate results, with high traceback speed of a few seconds, are achieved. Compared to existing techniques, our approach is non-intrusive, not requiring any changes to the Internet routers and data packets. Precise information regarding the attack is not required allowing a wide variety of DDoS attack detection techniques to be used. The victim is also relieved from the traceback task during an attack. The scheme is simple and efficient, allowing for a fast traceback, and scalable due to the distribution of processing workload.
BibTeX:

@article{5374837,

  author = {Thing, V.L.L. and Sloman, M. and Dulay, N.},

  title = {Locating network domain entry and exit point/path for DDoS attack traffic},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2009},

  volume = {6},

  number = {3},

  pages = {163-174},

  doi = {http://dx.doi.org/10.1109/TNSM.2009.03.090303}

}

Tian, Chen and Jiang, Hongbo and Iyengar, Arun and Liu, Xue and Wu, Zuodong and Chen, Jinhua and Liu, Wenyu and Wang, Chonggang Improving Application Placement for Cluster-Based Web Applications 2011 Network and Service Management, IEEE Transactions on
Vol. 8(2), pp. 104-115 
load balancing , algorithm design , application placement , class constrained multiple-knapsack problem , cluster-based service internet , web sites , minimax techniques , resource allocation DOI  
Abstract: Dynamic application placement for clustered web applications heavily influences system performance and quality of user experience. Existing approaches claim that they strive to maximize the throughput, keep resource utilization balanced across servers, and minimize the start/stop cost of application instances. However, they fail to minimize the worst case of server utilization; the load balancing performance is not optimal. What's more, some applications need to communicate with each other, which we call dependent applications; their network cost should also be taken into consideration. In this paper, we investigate how to minimize the resource utilization of servers in the worst case, aiming at improving load balancing among clustered servers. Our contribution is two-fold. First we propose and define a new optimization objective: limiting the worst case of each individual server's utilization, formulated as a min-max problem. A novel framework based on binary search is proposed to detect an optimal load balancing solution. Second, we define system cost as the weighted combination of both placement change and inter-application communication cost. By maximizing the number of instances of dependent applications that reside in the same set of servers, the basic load-shifting and placement-change procedures are enhanced to minimize whole-system cost. Extensive experiments have been conducted and effectively demonstrate that: 1) the proposed framework achieves a good allocation for clustered web applications. In other words, requests are evenly allocated among servers, and throughput is still maximized; 2) the total system cost remains at a low level; 3) our algorithm has the capacity of approximating an optimal solution within polynomial time and is promising for practical implementation in real deployments.
BibTeX:

@article{5871352,

  author = {Tian, Chen and Jiang, Hongbo and Iyengar, Arun and Liu, Xue and Wu, Zuodong and Chen, Jinhua and Liu, Wenyu and Wang, Chonggang},

  title = {Improving Application Placement for Cluster-Based Web Applications},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2011},

  volume = {8},

  number = {2},

  pages = {104-115},

  doi = {http://dx.doi.org/10.1109/TNSM.2011.050311.100040}

}

Tran, Con and Dziong, Zbigniew Traffic Trend Estimation for Profit Oriented Capacity Adaptation in Service Overlay Networks 2011 Network and Service Management, IEEE Transactions on
Vol. 8(4), pp. 285-296 
kalman filter , traffic estimation , adaptive exponential smoothing , capacity adaptation , grade of service DOI  
Abstract: Service Overlay Networks (SON) can offer end to end Quality of Service by leasing bandwidth from Internet Autonomous Systems. To maximize profit, the SON can continually adapt its leased bandwidth to traffic demand dynamics based on online traffic trend estimation. In this paper, we propose novel approaches for online traffic trend estimation that fit the SON capacity adaptation. In the first approach, the smoothing parameter of the exponential smoothing (ES) model is adapted to traffic trend. Here, the trend is estimated using measured connection arrival rate autocorrelation or cumulative distribution functions. The second approach applies a Kalman filter whose model is built from historical traffic data. In this case, availability of the estimation error distribution allows for better control of the network Grade of Service. Numerical study shows that the proposed autocorrelation based ES approach gives the best combined estimation response-stability performance when compared to known ES methods. The proposed Kalman filter based approach further improves the capacity adaptation performance by limiting the increase of connection blocking when the traffic level is increasing.
BibTeX:

@article{6102276,

  author = {Tran, Con and Dziong, Zbigniew},

  title = {Traffic Trend Estimation for Profit Oriented Capacity Adaptation in Service Overlay Networks},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2011},

  volume = {8},

  number = {4},

  pages = {285-296},

  doi = {http://dx.doi.org/10.1109/TNSM.2011.110911.110116}

}

Con Tran and Dziong, Z. Service overlay network capacity adaptation for profit maximization 2010 Network and Service Management, IEEE Transactions on
Vol. 7(2), pp. 72-82 
service overlay network, economics, grade of service, resource adaptation, markov decision process internet , markov processes , optimisation , quality of service , telecommunication network routing , telecommunication traffic DOI  
Abstract: The considered Service Overlay Networks (SON) lease bandwidth with Quality of Service (QoS) guarantees from a multitude of Internet Autonomous Systems, through service level agreements (SLA) with Internet Service Providers (ISP). This bandwidth is used to establish SON links and deliver end-to-end QoS for real time service connections. The leased bandwidth amount influences both the admitted traffic and network cost, affecting the network profit. This gives the network operator the opportunity to optimize the profit by adapting the network resources to changing traffic and SLA cost conditions. We propose a novel approach that maximizes the network profit based on traffic measurements and SLA cost changes. The approach uses an economic model that integrates the network routing policy with the adaptation of the SON link capacities. While performing the adaptation of leased bandwidth, the connection blocking constraints are also maintained. The proposed adaptive optimization approach is based on a reward maximizing routing policy derived from Markov Decision Process theory, although it can be applied to other routing policies. Analytical models as well as simulation of a measurement based implementation of the proposed models are used to evaluate the performance of the proposed approach.
BibTeX:

@article{5471038,

  author = {Con Tran and Dziong, Z.},

  title = {Service overlay network capacity adaptation for profit maximization},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2010},

  volume = {7},

  number = {2},

  pages = {72-82},

  doi = {http://dx.doi.org/10.1109/TNSM.2010.06.I8P0287}

}

Tuncer, D. and Charalambides, M. and Clayman, S. and Pavlou, G. Adaptive Resource Management and Control in Software Defined Networks 2015 Network and Service Management, IEEE Transactions on
Vol. 12(1), pp. 18-33 
computer architecture;control systems;network topology;resource management;routing;substrates;topology;adaptive resource management;decentralized network configuration;software defined networking DOI  
Abstract: The heterogeneous nature of the applications, technologies and equipment that today's networks have to support has made the management of such infrastructures a complex task. The Software-Defined Networking (SDN) paradigm has emerged as a promising solution to reduce this complexity through the creation of a unified control plane independent of specific vendor equipment. However, designing an SDN-based solution for network resource management raises several challenges as it should exhibit flexibility, scalability and adaptability. In this paper, we present a new SDN-based management and control framework for fixed backbone networks, which provides support for both static and dynamic resource management applications. The framework consists of three layers which interact with each other through a set of interfaces. We develop a placement algorithm to determine the allocation of managers and controllers in the proposed distributed management and control layer. We then show how this layer can satisfy the requirements of two specific applications for adaptive load-balancing and energy management purposes.
BibTeX:

@article{7039203,

  author = {Tuncer, D. and Charalambides, M. and Clayman, S. and Pavlou, G.},

  title = {Adaptive Resource Management and Control in Software Defined Networks},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2015},

  volume = {12},

  number = {1},

  pages = {18-33},

  doi = {http://dx.doi.org/10.1109/TNSM.2015.2402752}

}

Varadharajan, Vijay and Tupakula, Udaya Security as a Service Model for Cloud Environment 2014 Network and Service Management, IEEE Transactions on
Vol. 11(1), pp. 60-75 
cloud security;security and privacy;security architecture DOI  
Abstract: Cloud computing is becoming increasingly important for provision of services and storage of data in the Internet. However, there are several significant challenges in securing cloud infrastructures from different types of attacks. The focus of this paper is on the security services that a cloud provider can offer as part of its infrastructure to its customers (tenants) to counteract these attacks. Our main contribution is a security architecture that provides a flexible security as a service model that a cloud provider can offer to its tenants and customers of its tenants. Our security as a service model, while offering baseline security to the provider to protect its own cloud infrastructure, also provides flexibility to tenants to have additional security functionalities that suit their security requirements. The paper describes the design of the security architecture and discusses how different types of attacks are counteracted by the proposed architecture. We have implemented the security architecture and the paper discusses analysis and performance evaluation results.
BibTeX:

@article{6805344,

  author = {Varadharajan, Vijay and Tupakula, Udaya},

  title = {Security as a Service Model for Cloud Environment},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2014},

  volume = {11},

  number = {1},

  pages = {60-75},

  doi = {http://dx.doi.org/10.1109/TNSM.2014.041614.120394}

}

Varga, P. and Moldovan, I. Integration of Service-Level Monitoring with Fault Management for End-to-End Multi-Provider Ethernet Services 2007 Network and Service Management, IEEE Transactions on
Vol. 4(1), pp. 28-38 
environmental management , ethernet networks , monitoring , neodymium , petri nets , prototypes , quality management , quality of service , standardization , virtual private networks fault diagnosis , local area networks , monitoring , quality of service , telecommunication network management , telecommunication network reliability DOI  
Abstract: Assuring end-to-end service quality in a multi-provider Ethernet environment is a challenging task. Operation and maintenance issues have become more and more complex due to the gradual extension of the Ethernet technology from local- to wide-area networks and the increasingly frequent use of layer-2 virtual private networks. End-to-end Ethernet network management is currently under standardization, with a focus on connectivity fault management and performance management. However, none of the tools and research prototypes available to date integrate service-level monitoring with fault management functions such as event correlation or root cause analysis for interconnected Ethernet networks. In this paper, we address the issue by proposing an integrated service-level monitoring and fault management framework. Our event processing module can handle various events generated by network nodes or pollers. We also describe service-level monitoring and fault management methods that are fine-tuned for managing end-to-end multi-provider Ethernet services.
BibTeX:

@article{4275032,

  author = {Varga, P. and Moldovan, I.},

  title = {Integration of Service-Level Monitoring with Fault Management for End-to-End Multi-Provider Ethernet Services},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2007},

  volume = {4},

  number = {1},

  pages = {28-38},

  doi = {http://dx.doi.org/10.1109/TNSM.2007.030103}

}

Velloso, P.B. and Laufer, R.P. and de O Cunha, D. and Duarte, O.C.M.B. and Pujolle, G. Trust management in mobile ad hoc networks using a scalable maturity-based model 2010 Network and Service Management, IEEE Transactions on
Vol. 7(3), pp. 172-185 
trust, ad hoc networks, security ad hoc networks , mobility management (mobile radio) , protocols , telecommunication security DOI  
Abstract: In this paper, we propose a human-based model which builds a trust relationship between nodes in an ad hoc network. The trust is based on previous individual experiences and on the recommendations of others. We present the Recommendation Exchange Protocol (REP) which allows nodes to exchange recommendations about their neighbors. Our proposal does not require disseminating the trust information over the entire network. Instead, nodes only need to keep and exchange trust information about nodes within the radio range. Without the need for a global trust knowledge, our proposal scales well for large networks while still reducing the number of exchanged messages and therefore the energy consumption. In addition, we mitigate the effect of colluding attacks composed of liars in the network. A key concept we introduce is the relationship maturity, which allows nodes to improve the efficiency of the proposed model for mobile scenarios. We show the correctness of our model in a single-hop network through simulations. We also extend the analysis to mobile multihop networks, showing the benefits of the maturity relationship concept. We evaluate the impact of malicious nodes that send false recommendations to degrade the efficiency of the trust model. At last, we analyze the performance of the REP protocol and show its scalability. We show that our implementation of REP can significantly reduce the number of messages.
BibTeX:

@article{5560572,

  author = {Velloso, P.B. and Laufer, R.P. and de O Cunha, D. and Duarte, O.C.M.B. and Pujolle, G.},

  title = {Trust management in mobile ad hoc networks using a scalable maturity-based model},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2010},

  volume = {7},

  number = {3},

  pages = {172-185},

  doi = {http://dx.doi.org/10.1109/TNSM.2010.1009.I9P0339}

}

Verma, A. and Sharma, U. and Jain, R. and Dasgupta, K. Compass: optimizing the migration cost vs. application performance tradeoff 2008 Network and Service Management, IEEE Transactions on
Vol. 5(2), pp. 118-131 
migration , profit maximization , resource allocation , store placement middleware , resource allocation , virtual storage DOI  
Abstract: We investigate methodologies for placement and migration of logical data stores in virtualized storage systems leading to optimum system configuration in a dynamic workload scenario. The aim is to optimize the tradeoff between the performance or operational cost improvement resulting from changes in store placement, and the cost imposed by the involved data migration step. We propose a unified economic utility based framework in which the tradeoff can be formulated as a utility maximization problem where the utility of a configuration is defined as the difference between the benefit of a configuration and the cost of moving to the configuration. We present a storage management middleware framework and architecture Compass that allows systems designers to plug-in different placement as well as migration techniques for estimation of utilities associated with different configurations. The biggest obstacle in optimizing the placement benefit and migration cost tradeoff is the exponential number of possible configurations that one may have to evaluate. We present algorithms that explore the configuration space efficiently and compute a candidate set of configurations that optimize this cost-benefit tradeoff. Our algorithms have many desirable properties including local optimality. Comprehensive experimental studies demonstrate the efficacy of the proposed framework and exploration algorithms, as our algorithms outperform migration cost-oblivious placement strategies by up to 40% on real OLTP traces for many settings.
BibTeX:

@article{4694136,

  author = {Verma, A. and Sharma, U. and Jain, R. and Dasgupta, K.},

  title = {Compass: optimizing the migration cost vs. application performance tradeoff},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2008},

  volume = {5},

  number = {2},

  pages = {118 -131},

  doi = {http://dx.doi.org/10.1109/TNSM.2008.021105}

}

Wang, H. and Wang, F. and Liu, J. and Xu, K. and Wu, D. Torrents on Twitter: Explore Long-Term Social Relationships in Peer-to-Peer Systems 2013 Network and Service Management, IEEE Transactions on
Vol. 10(1), pp. 95-104 
bittorrent , long-term relationship , self-similar , social networks DOI  
Abstract: Peer-to-peer file sharing systems, most notably BitTorrent (BT), have achieved tremendous success among Internet users. Recent studies suggest that the long-term relationships among BT peers can be explored to enhance the downloading performance; for example, for re-sharing previously downloaded contents or for effectively collaborating among the peers. However, whether such relationships exist in the real world remains unclear. In this paper, we take a first step towards the real-world applicability of peers' long-term relationships through a measurement-based study. We find that 95% of peers cannot even meet each other again in the BT networks; therefore, most peers can hardly be organized for further cooperation. This result contradicts the conventional understanding based on the observed daily arrival pattern in peer-to-peer networks. To better understand this, we revisit the arrival of BT peers as well as their long-range dependence. We find that the peers' arrival patterns are highly diverse; only a limited number of stable peers have clear self-similar and periodic daily arrival patterns. The arrivals of most peers are, however, quite random with little evidence of long-range dependence. To better utilize these stable peers, we explore peers' long-term relationships in specific swarms instead of conventional BT networks. Fortunately, we find that the peers in Twitter-initialized torrents have stronger temporal locality, thus offering a great opportunity for improving their degree of sharing. Our PlanetLab experiments further indicate that the incorporation of social relations remarkably accelerates the download completion time. The improvement remains noticeable even in a hybrid system with only a small set of social friends.
BibTeX:

@article{6313582,

  author = {Wang, H. and Wang, F. and Liu, J. and Xu, K. and Wu, D.},

  title = {Torrents on Twitter: Explore Long-Term Social Relationships in Peer-to-Peer Systems},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2013},

  volume = {10},

  number = {1},

  pages = {95-104},

  doi = {http://dx.doi.org/10.1109/TNSM.2012.091912.120243}

}

Jessie Hui Wang and Dah Ming Chiu and Lui, J.C.S. and Chang, R.K.C. Inter-AS Inbound Traffic Engineering via ASPP 2007 Network and Service Management, IEEE Transactions on
Vol. 4(1), pp. 62 -70 
communication system traffic control , databases , guidelines , ip networks , internet , joining processes , peer to peer computing , protocols , routing , telecommunication traffic internet , routing protocols , telecommunication traffic DOI  
Abstract: AS Path Prepending (ASPP) is a popular method for inter-AS inbound traffic engineering, which is known to be more difficult than outbound traffic engineering. Although the ASPP approach has been extensively practised by many ASes, it is surprising that a systematic study of this approach and a basic understanding of its effectiveness are still lacking. In this paper, we introduce the concept, applicability, and potential instability problem of the ASPP approach. Some guidelines are given as a first step towards avoiding the instability problem. Finally, we study the dynamic prepending behavior of ISPs and show a real-world pathological case of prepending instability based on our measurement study of RouteViews data.
BibTeX:

@article{4275035,

  author = {Jessie Hui Wang and Dah Ming Chiu and Lui, J.C.S. and Chang, R.K.C.},

  title = {Inter-AS Inbound Traffic Engineering via ASPP},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2007},

  volume = {4},

  number = {1},

  pages = {62 -70},

  doi = {http://dx.doi.org/10.1109/TNSM.2007.030106}

}

Wang, Y. and Vasilakos, A.V. and Ma, J. VPEF: A Simple and Effective Incentive Mechanism in Community-Based Autonomous Networks 2015 Network and Service Management, IEEE Transactions on
Vol. 12(1), pp. 75-86 
collaboration;communities;games;ieee 802.11 standards;peer-to-peer computing;sociology;statistics;community-based autonomous networks;entry fee;evolutionary game theory;incentive mechanism;voluntary principle;entry fee;evolutionary game theory;incentive mechanism;voluntary principle DOI  
Abstract: This paper focuses on incentivizing cooperative behavior in community-based autonomous networking environments (such as mobile social networks), in which, by dynamically forming virtual and/or physical communities, users voluntarily participate in and contribute resources (or provide services) to the community while consuming. Specifically, we propose a simple but effective EGT (Evolutionary Game Theory)-based mechanism, VPEF (Voluntary Principle and round-based Entry Fee), to drive the networking environment toward cooperation. VPEF builds its incentive mechanism from two simple system rules: the first is VP, meaning that all behaviors are voluntarily conducted by users: users voluntarily participate (after paying a round-based entry fee), voluntarily contribute resources, and voluntarily punish defectors (incurring extra cost to those so-called punishers); the second is EF, meaning that an arbitrarily small round-based entry fee is set for each user who wants to participate in the community. We present a generic analytical framework of evolutionary dynamics to model the VPEF scheme, and theoretically prove that the VPEF scheme's efficiency loss, defined as the ratio of system time in which no user provides resources, is 4/(8+M), where M is the number of users in the community-based collaborative system. Finally, simulated results using content availability as an example verify our theoretical analysis.
BibTeX:

@article{7029119,

  author = {Wang, Y. and Vasilakos, A.V. and Ma, J.},

  title = {VPEF: A Simple and Effective Incentive Mechanism in Community-Based Autonomous Networks},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2015},

  volume = {12},

  number = {1},

  pages = {75-86},

  doi = {http://dx.doi.org/10.1109/TNSM.2015.2397883}

}

Wang, Y. and Wang, H. and Wang, C. Graph-Based Authentication Design for Color-Depth-Based 3D Video Transmission over Wireless Networks 2013 Network and Service Management, IEEE Transactions on
Vol. 10(3), pp. 245-254 
authentication;color-depth-based 3d video;distortion reduction;wireless networks DOI  
Abstract: 3D video applications such as 3D-TV and 3D games have become more and more popular in recent years. These applications have raised significant challenges in media security, processing, and transmission. In particular, when 3D videos are delivered over wireless networks, the video streaming suffers from potential malicious attacks. One of the most challenging security issues is how to guarantee the integrity of media content over error-prone wireless networks. To address this challenge, in this paper we propose, for the first time, an authentication approach for 3D video transmission over wireless networks, which can improve the reconstructed media quality in error-prone wireless environments with lower authentication overheads and energy consumption. The proposed method is based on a color-depth 3D video coding approach, which can save bandwidth, tolerate packet losses, and thus satisfy users' Quality of Experience (QoE) requirements. Our major contributions in this paper include: (1) designing a joint source-channel-authentication coding framework for color-depth-based 3D video transmission; (2) proposing a media quality prediction model for color-depth-based 3D video transmission; (3) developing an optimization for graph-based authentication in 3D video transmission to improve reconstructed media quality and reduce authentication overheads and energy consumption. Experimental results demonstrate the effectiveness of our proposed solutions.
BibTeX:

@article{6517360,

  author = {Wang, Y. and Wang, H. and Wang, C.},

  title = {Graph-Based Authentication Design for Color-Depth-Based 3D Video Transmission over Wireless Networks},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2013},

  volume = {10},

  number = {3},

  pages = {245-254},

  doi = {http://dx.doi.org/10.1109/TNSM.2013.051313.120343}

}

Zhikui Wang and Yuan Chen and Gmach, D. and Singhal, S. and Watson, B.J. and Rivera, W. and Xiaoyun Zhu and Hyser, C.D. AppRAISE: application-level performance management in virtualized server environments 2009 Network and Service Management, IEEE Transactions on
Vol. 6(4), pp. 240 -254 
performance control , performance model , resource allocation , virtualization , workload consolidation feedback , feedforward , queueing theory , resource allocation , virtual machines DOI  
Abstract: Managing application-level performance for multitier applications in virtualized server environments is challenging because the applications are distributed across multiple virtual machines, and workloads are dynamic in their intensity and transaction mix resulting in time-varying resource demands. In this paper, we present AppRAISE, a system that manages performance of multi-tier applications by dynamically resizing the virtual machines hosting the applications. We extend a traditional queuing model to represent application performance in virtualized server environments, where virtual machine capacity is dynamically tuned. Using this performance model, AppRAISE predicts the performance of the applications due to workload changes, and proactively resizes the virtual machines hosting the applications to meet performance thresholds. By integrating feedforward prediction and feedback reactive control, AppRAISE provides a robust and efficient performance management solution. We tested AppRAISE using Xen virtual machines and the RUBiS benchmark application. Our empirical results show that AppRAISE can effectively allocate CPU resources to application components of multiple applications to meet end-to-end mean response time targets in the presence of variable workloads, while maintaining reasonable trade-offs between application performance, resource efficiency, and transient behavior.
BibTeX:

@article{5374032,

  author = {Zhikui Wang and Yuan Chen and Gmach, D. and Singhal, S. and Watson, B.J. and Rivera, W. and Xiaoyun Zhu and Hyser, C.D.},

  title = {AppRAISE: application-level performance management in virtualized server environments},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2009},

  volume = {6},

  number = {4},

  pages = {240 -254},

  doi = {http://dx.doi.org/10.1109/TNSM.2009.04.090404}

}

Wang, Z. and Wu, C. and Sun, L. and Yang, S. Peer-Assisted Social Media Streaming with Social Reciprocity 2013 Network and Service Management, IEEE Transactions on
Vol. 10(1), pp. 84-94 
social media streaming , peer incentive , resource allocation , social reciprocity DOI  
Abstract: Online video sharing and social networking are cross-pollinating rapidly in today's Internet: Online social network users are sharing more and more media contents among each other, while online video sharing sites are leveraging social connections among users to promote their videos. An intriguing development as it is, the operational challenge in previous video sharing systems persists, i.e., the large server cost demanded for scaling of the systems. Peer-to-peer video sharing could be a rescue, only if the video viewers' mutual resource contribution has been fully incentivized and efficiently scheduled. Exploring the unique advantages of a social network based video sharing system, we advocate to utilize social reciprocities among peers with social relationships for efficient contribution incentivization and scheduling, so as to enable high-quality video streaming with low server cost. We exploit social reciprocity with two give-and-take ratios at each peer: (1) peer contribution ratio (PCR), which evaluates the reciprocity level between a pair of social friends, and (2) system contribution ratio (SCR), which records the give-and-take level of the user to and from the entire system. We design efficient peer-to-peer mechanisms for video streaming using the two ratios, where each user optimally decides which other users to seek relay help from and help in relaying video streams, respectively, based on combined evaluations of their social relationship and historical reciprocity levels. Our design achieves effective incentives for resource contribution, load balancing among relay peers, as well as efficient social-aware resource scheduling. We also discuss practical implementation and implement our design in a prototype social media sharing system. Our extensive evaluations based on PlanetLab experiments verify that high-quality large-scale social media sharing can be achieved with conservative server costs.
BibTeX:

@article{6317102,

  author = {Wang, Z. and Wu, C. and Sun, L. and Yang, S.},

  title = {Peer-Assisted Social Media Streaming with Social Reciprocity},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2013},

  volume = {10},

  number = {1},

  pages = {84-94},

  doi = {http://dx.doi.org/10.1109/TNSM.2012.12.120244}

}

Watkins, L. and Robinson, W.H. and Beyah, R. A Passive Solution to the Memory Resource Discovery Problem in Computational Clusters 2010 Network and Service Management, IEEE Transactions on
Vol. 7(4), pp. 218 -230 
cluster computing , clustering applications , memory-intensive applications , passive resource discovery distributed processing , resource allocation DOI  
Abstract: Resource discovery is an important problem in distributed computing, because the throughput of the system is directly linked to its ability to quickly locate available resources. Current solutions are undesirable for discovering resources in large computational clusters because they are intrusive, chatty (i.e., have per-node overhead), or maintenance-intensive. In this paper, we present a novel method that offers the ability to non-intrusively identify resources that have available memory; this is critical for memory-intensive cluster applications such as weather forecasting and computational chemistry. The prime benefits are fourfold: (1) low message complexity, (2) scalability, (3) load balancing, and (4) low maintainability. We demonstrate the feasibility of our method with experiments using a 50-node test-bed (DETERlab). Our technique allows us to establish a correlation between memory load and the timely response of network traffic from a node. Results show that our method can accurately (92%-100%) identify nodes with available memory through analysis of existing network traffic, including network traffic that has passed through a switch (non-congested).
BibTeX:

@article{5668978,

  author = {Watkins, L. and Robinson, W.H. and Beyah, R.},

  title = {A Passive Solution to the Memory Resource Discovery Problem in Computational Clusters},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2010},

  volume = {7},

  number = {4},

  pages = {218 -230},

  doi = {http://dx.doi.org/10.1109/TNSM.2010.1012.0326}

}

Wichtlhuber, M. and Reinecke, R. and Hausheer, D. An SDN-Based CDN/ISP Collaboration Architecture for Managing High-Volume Flows 2015 Network and Service Management, IEEE Transactions on
Vol. 12(1), pp. 48-60 
collaboration;ip networks;multiprotocol label switching;network topology;servers;system analysis and design;topology;software defined networking;content distribution networks DOI  
Abstract: The collaboration of Internet service providers (ISPs) and content distribution network (CDN) providers was shown to be beneficial for both parties in a number of recent works. Influencing CDN edge server (surrogate) selection allows the ISP to manage the rising amount of traffic emanating from CDNs to reduce the operational expenditures (OPEX) of its infrastructure, e.g., by preventing peered traffic. At the same time, including the ISP's hidden network knowledge in the surrogate selection process positively influences the quality of service a CDN provider can deliver. As a large amount of CDN traffic is video-on-demand traffic, this paper investigates the topic of CDN/ISP collaboration from the perspective of high-volume, long-living flows. These types of flows are hardly manageable with state-of-the-art Domain Name System (DNS)-based redirection, as a reassignment of flows during the session is difficult to achieve. Consequently, varying load on surrogates caused by flash crowds and congestion events in the ISP's network is hard to compensate. This paper presents a novel approach promoting ISP and CDN collaboration based on a minimal deployment of software-defined networking switches in the ISP's network. The approach complements standard DNS-based redirection by allowing for a migration of high-volume flows between surrogates in the backend even if the communication has state information, such as Hyper Text Transfer Protocol sessions. In addition to a proof-of-concept, the evaluation identifies factors influencing performance and shows large performance increases when compared to standard DNS-based redirection.
BibTeX:

@article{7044567,

  author = {Wichtlhuber, M. and Reinecke, R. and Hausheer, D.},

  title = {An SDN-Based CDN/ISP Collaboration Architecture for Managing High-Volume Flows},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2015},

  volume = {12},

  number = {1},

  pages = {48-60},

  doi = {http://dx.doi.org/10.1109/TNSM.2015.2404792}

}

Wuhib, F. and Dam, M. and Stadler, R. A gossiping protocol for detecting global threshold crossings 2010 Network and Service Management, IEEE Transactions on
Vol. 7(1), pp. 42 -57 
distributed monitoring, threshold detection, gossip protocol computerised monitoring , protocols DOI  
Abstract: We investigate the use of gossip protocols for the detection of network-wide threshold crossings. Our design goals are low protocol overhead, small detection delay, low probability of false positives and negatives, scalability, robustness to node failures and controllability of the trade-off between overhead and detection delay. Based on push-synopses, a gossip protocol introduced by Kempe et al., we present a protocol that indicates whether a global aggregate of static local values is above or below a given threshold. For this protocol, we prove correctness and show that it converges to a state with no overhead when the aggregate is sufficiently far from the threshold. Then, we introduce an extension we call TG-GAP, a protocol that (1) executes in a dynamic network environment where local values change and (2) implements hysteresis behavior with upper and lower thresholds. Key elements of its design are the construction of snapshots of the global aggregate for threshold detection and a mechanism for synchronizing local states, both of which are realized through the underlying gossip protocol. Simulation studies suggest that TG-GAP is efficient in that the protocol overhead is minimal when the aggregate is sufficiently far from the threshold, that its overhead and the detection delay are largely independent of the system size, and that the tradeoff between overhead and detection quality can be effectively controlled. Lastly, we perform a comparative evaluation of TG-GAP against a tree-based protocol. We conclude that, for detecting global threshold crossings in the type of scenarios investigated, the tree-based protocol incurs a significantly lower overhead and a smaller detection delay than a gossip protocol such as TG-GAP.
BibTeX:

@article{5412872,

  author = {Wuhib, F. and Dam, M. and Stadler, R.},

  title = {A gossiping protocol for detecting global threshold crossings},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2010},

  volume = {7},

  number = {1},

  pages = {42 -57},

  doi = {http://dx.doi.org/10.1109/TNSM.2010.I9P0329}

}

Wuhib, F. and Dam, M. and Stadler, R. and Clem, A. Robust monitoring of network-wide aggregates through gossiping 2009 Network and Service Management, IEEE Transactions on
Vol. 6(2), pp. 95 -109 
gossip protocol, epidemic protocol, aggregation, real-time monitoring distributed algorithms , monitoring , protocols DOI  
Abstract: We investigate the use of gossip protocols for continuous monitoring of network-wide aggregates under crash failures. Aggregates are computed from local management variables using functions such as SUM, MAX, or AVERAGE. For this type of aggregation, crash failures offer a particular challenge due to the problem of mass loss, namely, how to correctly account for contributions from nodes that have failed. In this paper we give a partial solution. We present G-GAP, a gossip protocol for continuous monitoring of aggregates, which is robust against failures that are discontiguous in the sense that neighboring nodes do not fail within a short period of each other. We give formal proofs of correctness and convergence, and we evaluate the protocol through simulation using real traces. The simulation results suggest that the design goals for this protocol have been met. For instance, the tradeoff between estimation accuracy and protocol overhead can be controlled, and a high estimation accuracy (below some 5% error in our measurements) is achieved by the protocol, even for large networks and frequent node failures. Further, we perform a comparative assessment of G-GAP against a tree-based aggregation protocol using simulation. Surprisingly, we find that the tree-based aggregation protocol consistently outperforms the gossip protocol for comparative overhead, both in terms of accuracy and robustness.
BibTeX:

@article{5374830,

  author = {Wuhib, F. and Dam, M. and Stadler, R. and Clem, A.},

  title = {Robust monitoring of network-wide aggregates through gossiping},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2009},

  volume = {6},

  number = {2},

  pages = {95 -109},

  doi = {http://dx.doi.org/10.1109/TNSM.2009.090603}

}

Wuhib, F. and Stadler, R. and Spreitzer, M. A Gossip Protocol for Dynamic Resource Management in Large Cloud Environments 2012 Network and Service Management, IEEE Transactions on
Vol. 9(2), pp. 213-225 
cloud computing , distributed management , gossip protocols , resource allocations DOI  
Abstract: We address the problem of dynamic resource management for a large-scale cloud environment. Our contribution includes outlining a distributed middleware architecture and presenting one of its key elements: a gossip protocol that (1) ensures fair resource allocation among sites/applications, (2) dynamically adapts the allocation to load changes and (3) scales both in the number of physical machines and sites/applications. We formalize the resource allocation problem as that of dynamically maximizing the cloud utility under CPU and memory constraints. We first present a protocol that computes an optimal solution without considering memory constraints and prove correctness and convergence properties. Then, we extend that protocol to provide an efficient heuristic solution for the complete problem, which includes minimizing the cost for adapting an allocation. The protocol continuously executes on dynamic, local input and does not require global synchronization, as other proposed gossip protocols do. We evaluate the heuristic protocol through simulation and find its performance to be well-aligned with our design goals.
BibTeX:

@article{6172596,

  author = {Wuhib, F. and Stadler, R. and Spreitzer, M.},

  title = {A Gossip Protocol for Dynamic Resource Management in Large Cloud Environments},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2012},

  volume = {9},

  number = {2},

  pages = {213-225},

  doi = {http://dx.doi.org/10.1109/TNSM.2012.031512.110176}

}

Wysocki, T. and Jamalipour, A. An Economic Welfare Preserving Framework for Spot Pricing and Hedging of Spectrum Rights for Cognitive Radio 2012 Network and Service Management, IEEE Transactions on
Vol. 9(1), pp. 87-99 
cognitive radio , sharpe ratio , economic welfare , hedging , spectrum pricing DOI  
Abstract: In a Cognitive Radio (CR) enabled network, Secondary Users (SUs) have an impact on the Quality-of-Service (QoS), and ultimately the revenue, of the Primary User (PU) license holders. An important metric of an investor's economic welfare is the ratio of mean returns to the volatility of returns (Sharpe ratio). Numerous CR access schemes have been proposed that fail to account for the economic welfare of PUs when SUs access their spectrum. In this paper we consider the PU's spectrum license as an investment and analyze the impact of CR activity on the PU's economic welfare, in order to derive a cost of production of spectrum access rights such that the PU's economic welfare is not degraded. This creates an incentive for PUs to permit CR access in their spectrum bands, by ensuring that the characteristics of returns on the substantial investment made in spectrum licenses are preserved. However, under any instantaneous or spot pricing scheme, the risk of price volatility is introduced. We also propose a framework to alleviate the risk to SUs from a volatile CR rights spot price, by introducing the concept of forward pricing and hedging of CR rights. This reduces the variability of cash-flow associated with the purchase of CR rights, ensuring that the QoS provided by an SU network remains unaffected by a high CR access price. We also illustrate the leverage effect, where a PU will achieve a higher mean return-on-investment with SU operation, albeit with a higher variance of returns.
BibTeX:

@article{6093760,

  author = {Wysocki, T. and Jamalipour, A.},

  title = {An Economic Welfare Preserving Framework for Spot Pricing and Hedging of Spectrum Rights for Cognitive Radio},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2012},

  volume = {9},

  number = {1},

  pages = {87-99},

  doi = {http://dx.doi.org/10.1109/TNSM.2011.110911.110053}

}

Xu, K. and Wang, H. and Liu, J. and Lin, S. and Xu, L. Pushing Server Bandwidth Consumption to the Limit: Modeling and Analysis of Peer-Assisted VoD 2014 Network and Service Management, IEEE Transactions on
Vol. 11(4), pp. 472-485 
algorithm design and analysis;analytical models;bandwidth;heuristic algorithms;optimization;prefetching;servers;video-on-demand (vod);video-on-demand (vod);peer-assisted systems;scheduling DOI  
Abstract: Recent years have witnessed video-on-demand (VoD) as an efficient means for providing reliable streaming service for Internet users. It is known that peer-assisted VoD systems, such as NetFlix and PPlive, generally incur a lower deployment cost in terms of server bandwidth consumption. However, some fundamental issues still need to be further clarified, particularly for VoD service providers: in particular, how far can we push peer-assisted VoD forward, and, at the scale of VoD systems, what is the maximum reduction of server bandwidth consumption that can be achieved with peer-assisted approaches? In this paper, we provide extensive model analysis to understand the minimum server bandwidth consumption for peer-assisted VoD systems. We first propose a basic model that can optimally schedule user demands at given snapshots. Our model analysis reveals the optimal performance bound and shows that the existing peer-assisted protocols are still far from being optimal. How to push the server bandwidth consumption to the limit remains a big challenge in VoD system design. To approach the optimal bandwidth consumption in real deployment, we further extend our model to a realistic case to capture the peer dynamics across continuous time-slots. The simulation result indicates that the optimal load scheduling problem is still achievable through a dynamic programming algorithm. Its design principle further motivates a fast priority-based algorithm that achieves near-optimal performance. These proposed algorithms can significantly reduce the bandwidth consumption of dedicated VoD servers.
BibTeX:

@article{6912984,

  author = {Xu, K. and Wang, H. and Liu, J. and Lin, S. and Xu, L.},

  title = {Pushing Server Bandwidth Consumption to the Limit: Modeling and Analysis of Peer-Assisted VoD},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2014},

  volume = {11},

  number = {4},

  pages = {472-485},

  doi = {http://dx.doi.org/10.1109/TNSM.2014.2360772}

}

Yahaya, A. and Harks, T. and Suda, T. iREX: efficient automation architecture for the deployment of inter-domain QoS policy 2008 Network and Service Management, IEEE Transactions on
Vol. 5(1), pp. 50 -64 
algorithm design and analysis , analytical models , automation , communication system traffic control , ip networks , identity management systems , quality of service , resource management , service oriented architecture , web and internet services internet , numerical analysis , quality of service DOI  
Abstract: The inter-domain resource exchange (iREX) architecture uses economic market mechanisms to automate the ad-hoc negotiation and deployment of end-to-end inter-domain quality of service policy between resource consumers and resource providers. In this paper, we explore iREX's network load distribution by comparing its performance to a lower bound for network congestion in two ways. We first present an analytical model of iREX in terms of an online algorithm and analyze its efficiency via competitive analysis. Our main result shows that the efficiency loss of iREX with respect to monetary cost is upper-bounded by a factor of 8K/(2K+1), where K is the number of deployments, provided affine linear price functions are used. When the price functions are used to model congestion in the network, this result implies upper bounds on the efficiency loss of iREX with respect to network congestion. We then complement the analytical model with a numerical study using simulations, comparing against optimal solutions derived from unsplittable and splittable multi-commodity flow optimization models. Our numerical results show that for nominal to high traffic loads of 40% or more, iREX deviates a maximum of about 20% from the lower bound, while the current method deviates a maximum of 300%.
BibTeX:

@article{4570774,

  author = {Yahaya, A. and Harks, T. and Suda, T.},

  title = {iREX: efficient automation architecture for the deployment of inter-domain QoS policy},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2008},

  volume = {5},

  number = {1},

  pages = {50 -64},

  doi = {http://dx.doi.org/10.1109/TNSM.2008.080105}

}

Yoshizawa, M. and Sato, T. and Naono, K. Integrated Monitoring Software for Application Service Managers 2014 Network and Service Management, IEEE Transactions on
Vol. 11(3), pp. 321-332 
batch production systems;databases;monitoring;servers;software as a service;switches;saas;service monitoring;anomaly detection;impact analysis;monitoring software;performance prediction;root cause analysis DOI  
Abstract: In today's data centers, many application services share the same physical/virtual devices and affect each other. Therefore, application service managers need to spend a lot of time monitoring the application services by investigating a wide range of historical data about the shared devices. In this paper, integrated monitoring software for a wide range of historical data, which shortens the transition time (i.e., the time to switch from one historical data to another) by collecting and processing the historical data, is proposed and evaluated. Five basic historical data formats for automatically creating both well-organized historical data and relation data for the historical data are also proposed. The formats help application service managers by eliminating the need for the additional software development, which was otherwise required for each application service. Surveys on application service managers in a SaaS provider for over 50,000 companies show that about 98.6% of monitoring tasks can be covered by the five basic data formats and that the integrated monitoring software reduces the transition times for switching between historical data by about 54.7% compared to those with conventional monitoring software. These results suggest that the proposed integrated monitoring software is effective not only for reducing the time required for monitoring application services, but also for enhancing the overall service availability of SaaS providers' systems.
BibTeX:

@article{6873567,

  author = {Yoshizawa, M. and Sato, T. and Naono, K.},

  title = {Integrated Monitoring Software for Application Service Managers},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2014},

  volume = {11},

  number = {3},

  pages = {321-332},

  doi = {http://dx.doi.org/10.1109/TNSM.2014.2346073}

}

Younis, M. and Farrag, O. and Althouse, B. TAM: A Tiered Authentication of Multicast Protocol for Ad-Hoc Networks 2012 Network and Service Management, IEEE Transactions on
Vol. 9(1), pp. 100-113 
multicast communications, ad-hoc networks, message authentication DOI  
Abstract: Ad-hoc networks are becoming an effective tool for many mission critical applications such as troop coordination in a combat field, situational awareness, etc. These applications are characterized by the hostile environment that they serve in and by the multicast-style of communication traffic. Therefore, authenticating the source and ensuring the integrity of the message traffic become a fundamental requirement for the operation and management of the network. However, the limited computation and communication resources, the large scale deployment and the unguaranteed connectivity to trusted authorities make known solutions for wired and single-hop wireless networks inappropriate. This paper presents a new Tiered Authentication scheme for Multicast traffic (TAM) for large scale dense ad-hoc networks. TAM combines the advantages of the time asymmetry and the secret information asymmetry paradigms and exploits network clustering to reduce overhead and ensure scalability. Multicast traffic within a cluster employs a one-way hash function chain in order to authenticate the message source. Cross-cluster multicast traffic includes message authentication codes (MACs) that are based on a set of keys. Each cluster uses a unique subset of keys to look for its distinct combination of valid MACs in the message in order to authenticate the source. The simulation and analytical results demonstrate the performance advantage of TAM in terms of bandwidth overhead and delivery delay.
BibTeX:

@article{6094287,

  author = {Younis, M. and Farrag, O. and Althouse, B.},

  title = {TAM: A Tiered Authentication of Multicast Protocol for Ad-Hoc Networks},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2012},

  volume = {9},

  number = {1},

  pages = {100-113},

  doi = {http://dx.doi.org/10.1109/TNSM.2011.113011.100139}

}

Yu, F.R. and Tang, H. and Mason, P.C. and Fei Wang A Hierarchical Identity Based Key Management Scheme in Tactical Mobile Ad Hoc Networks 2010 Network and Service Management, IEEE Transactions on
Vol. 7(4), pp. 258-267 
hierarchical id-based encryption, compromising probability, network lifetime, private key generator military communication, mobile ad hoc networks, private key cryptography, stochastic processes, telecommunication network management, telecommunication security DOI  
Abstract: Hierarchical key management schemes would serve well for military applications where the organization of the network is already hierarchical in nature. Most of the existing key management schemes concentrate only on network structures and key allocation algorithms, ignoring attributes of the nodes themselves. Due to the distributed and dynamic nature of MANETs, it is possible to show that there is a security benefit to be attained when the node states are considered in the process of constructing a private key generator (PKG). In this paper, we propose a distributed hierarchical key management scheme in which nodes can get their keys updated either from their parent nodes or a threshold of sibling nodes. The dynamic node selection process is formulated as a stochastic problem and the proposed scheme can select the best nodes to be used as PKGs from all available ones considering their security conditions and energy states. Simulation results show that the proposed scheme can decrease network compromising probability and increase network lifetime in tactical MANETs.
BibTeX:

@article{5668981,

  author = {Yu, F.R. and Tang, H. and Mason, P.C. and Fei Wang},

  title = {A Hierarchical Identity Based Key Management Scheme in Tactical Mobile Ad Hoc Networks},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2010},

  volume = {7},

  number = {4},

  pages = {258-267},

  doi = {http://dx.doi.org/10.1109/TNSM.2010.1012.0362}

}

Yu, H. and Qiao, C. and Wang, J. and Wu, B. and Li, L. A Virtualization Layer Approach to Survivability 2014 Network and Service Management, IEEE Transactions on
Vol. 11(4), pp. 504-515 
bandwidth;indium phosphide;physical layer;reliability;resource management;substrates;virtualization;network virtualization;node failure;sharing;survivability;virtual infrastructure DOI  
Abstract: Network virtualization facilitates sharing and efficient utilization of computing and bandwidth resources of an underlying substrate network. As network virtualization becomes popular, it is important to efficiently map a virtual infrastructure (VI) onto a substrate network, such that the survivability of the former can be guaranteed against failures in the latter. In this paper, we study a virtualization layer approach to survivability, whereby the virtualization layer customizes a VI request with redundant nodes and links according to its reliability requirements and then passes limited information about the augmented VI to the physical layer, where the mapping of the augmented VI takes place. More specifically, we develop a flexible scheme to enhance the original VI graph with $K$ redundant nodes, in order to fight against an arbitrary substrate node failure. In addition, a scenario-based component group (SBCG) concept is proposed to describe resource sharing of enhanced VI requests at the physical layer. We also develop an efficient heuristic that takes advantage of the limited information on SBCG to reduce costs when mapping the enhanced VI to the substrate network. The efficiency of the proposed solution is compared using extensive simulation under various performance metrics. It is shown that the $K$-redundant-node scheme with SBCG information is more cost efficient than the existing 1-redundant-node solution.
BibTeX:

@article{6977980,

  author = {Yu, H. and Qiao, C. and Wang, J. and Wu, B. and Li, L.},

  title = {A Virtualization Layer Approach to Survivability},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2014},

  volume = {11},

  number = {4},

  pages = {504-515},

  doi = {http://dx.doi.org/10.1109/TNSM.2014.2377520}

}

Yuan, Zhenhui and Muntean, Gabriel-Miro A Prioritized Adaptive Scheme for Multimedia Services over IEEE 802.11 WLANs 2013 Network and Service Management, IEEE Transactions on
Vol. 10(4), pp. 340-355 
bandwidth;ieee 802.11 standards;multimedia communication;protocols;quality of service;streaming media;telecommunication traffic;ieee 802.11;ieee 802.21;multimedia delivery;qos differentiation DOI  
BibTeX:

@article{6657882,

  author = {Yuan, Zhenhui and Muntean, Gabriel-Miro},

  title = {A Prioritized Adaptive Scheme for Multimedia Services over IEEE 802.11 WLANs},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2013},

  volume = {10},

  number = {4},

  pages = {340-355},

  doi = {http://dx.doi.org/10.1109/TNSM.2013.110513.130490}

}

Zachariadis, G. and Barria, J.A. Dynamic pricing and resource allocation using revenue management for multiservice networks 2008 Network and Service Management, IEEE Transactions on
Vol. 5(4), pp. 215-226 
pricing, qos, resource allocation, revenue management, bandwidth on demand dynamic programming, packet radio networks, quality of service, resource allocation, telecommunication network management DOI  
Abstract: In this paper we develop a novel multiple-classes-of service framework where offered prices and QoS are allowed to be actively modified by the provider, depending on the demand and the congestion of the system. We obtain a solution to the problem by using dynamic programming. These results are then extended to a network environment using a decomposition approach. The decomposition approach makes our solution scalable, since single-link solutions are used and minimal amount of information is explicitly exchanged. Assessments carried out for small networks show that the obtained income is improved between 2%-20% when compared to a static approach and to other approaches where only price or quality are allowed to be adaptive.
BibTeX:

@article{5010445,

  author = {Zachariadis, G. and Barria, J.A.},

  title = {Dynamic pricing and resource allocation using revenue management for multiservice networks},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2008},

  volume = {5},

  number = {4},

  pages = {215-226},

  doi = {http://dx.doi.org/10.1109/TNSM.2009.041103}

}

Zeng, L. and Wang, Y. and Feng, D. and Kent, K.B. XCollOpts: A Novel Improvement of Network Virtualizations in Xen for I/O-Latency Sensitive Applications on Multicores 2015 Network and Service Management, IEEE Transactions on
Vol. 12(2), pp. 163-175 
dispatching;load management;multicore processing;optimization;time factors;virtual machine monitors;virtualization;hypervisor;i/o virtualization;load balancing;multicore;scheduling DOI  
Abstract: It has long been recognized that the Credit scheduler selectively favors CPU-bound applications whereas for I/O-latency sensitive workloads, such as those related to stream-based audio/video services, it only exhibits tolerable, or even worse, unacceptable performance. The reasons behind this phenomenon are the poor understanding (to some degree) of the virtual machine scheduling as well as the network I/O virtualizations. In order to address these problems and make the system more responsive to the I/O-latency sensitive applications, in this paper, we present XCollOpts which performs a collection of novel optimizations to improve the Credit scheduler and the underlying I/O virtualizations in multicore environments, each from two perspectives. To optimize the schedule, in XCollOpts, we first pinpoint the Imbalanced Multi-Boosting problem among the cores thereby minimizing the system response time by load balancing the BOOST VCPUs. Then, we describe the Premature Preemption problem and address it by monitoring the received network packets in the driver domain and deliberately preventing it from being prematurely preempted during the packet delivery. However, these optimizations on the scheduling strategies cannot be fully exploited if the performance issues of the underlying supportive communication mechanisms are not considered. To this end, we make two further optimizations for the network I/O virtualizations, namely, Multi-Tasklet Pairs and Optimized Small Data Packet. Our empirical studies show that with XCollOpts, we can significantly improve the performance of the latency-sensitive applications at a cost of relatively small system overhead.
BibTeX:

@article{7105937,

  author = {Zeng, L. and Wang, Y. and Feng, D. and Kent, K.B.},

  title = {XCollOpts: A Novel Improvement of Network Virtualizations in Xen for I/O-Latency Sensitive Applications on Multicores},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2015},

  volume = {12},

  number = {2},

  pages = {163-175},

  doi = {http://dx.doi.org/10.1109/TNSM.2015.2432066}

}

Bo Zhang and Guohui Wang and Zhu, A.Y. and Ng, T.S.E. Router group monitoring: making traffic trajectory error detection more efficient 2010 Network and Service Management, IEEE Transactions on
Vol. 7(3), pp. 158-171 
traffic trajectory error, monitoring, sampling, detection, router group error detection, monitoring, peripheral interfaces, telecommunication network routing, telecommunication network topology, telecommunication traffic DOI  
Abstract: Detecting errors in traffic trajectories (i.e., packet forwarding paths) is important to operational networks. Several different traffic monitoring algorithms such as Trajectory Sampling, PSAMP, and Fatih can be used for traffic trajectory error detection. However, a straight-forward application of these algorithms will incur the overhead of simultaneously monitoring all network interfaces in a network for the packets of interest. In this paper, we propose a novel technique called router group monitoring to improve the efficiency of trajectory error detection by only monitoring the periphery interfaces of a set of selected router groups. We analyze a large number of real network topologies and show that effective router groups with high trajectory error detection rates exist in all cases. However, for router group monitoring to be practical, those effective router groups must be identified efficiently. To this end, we develop an analytical model for quickly and accurately estimating the detection rates of different router groups. Based on this model, we propose an algorithm to select a set of router groups that can achieve complete error detection and low monitoring overhead. Finally, we show that the router group monitoring technique can significantly improve the efficiency of trajectory error detection based on Trajectory Sampling or Fatih.
BibTeX:

@article{5560571,

  author = {Bo Zhang and Guohui Wang and Zhu, A.Y. and Ng, T.S.E.},

  title = {Router group monitoring: making traffic trajectory error detection more efficient},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2010},

  volume = {7},

  number = {3},

  pages = {158-171},

  doi = {http://dx.doi.org/10.1109/TNSM.2010.1009.I9P03322}

}

Zhang, Hui and Jiang, Guofei and Yoshihira, Kenji and Chen, Haifeng Proactive Workload Management in Hybrid Cloud Computing 2014 Network and Service Management, IEEE Transactions on
Vol. 11(1), pp. 90-100 
diversity methods;fading;interference;relays;signal to noise ratio;wireless networks;cloud computing;algorithms;hybrid cloud;load balancing;workload management DOI  
Abstract: The hindrances to the adoption of public cloud computing services include service reliability, data security and privacy, regulation compliant requirements, and so on. To address those concerns, we propose a hybrid cloud computing model which users may adopt as a viable and cost-saving methodology to make the best use of public cloud services along with their privately-owned (legacy) data centers. As the core of this hybrid cloud computing model, an intelligent workload factoring service is designed for proactive workload management. It enables federation between on- and off-premise infrastructures for hosting Internet-based applications, and the intelligence lies in the explicit segregation of base workload and flash crowd workload, the two naturally different components composing the application workload. The core technology of the intelligent workload factoring service is a fast frequent data item detection algorithm, which enables factoring incoming requests not only on volume but also on data content, upon a changing application data popularity. Through analysis and extensive evaluation with real-trace driven simulations and experiments on a hybrid testbed consisting of local computing platform and Amazon Cloud service platform, we showed that the proactive workload management technology can enable reliable workload prediction in the base workload zone (with simple statistical methods), achieve resource efficiency (e.g., 78% higher server capacity than that in base workload zone) and reduce data cache/replication overhead (up to two orders of magnitude) in the flash crowd workload zone, and react fast (with an X^2 speed-up factor) to the changing application data popularity upon the arrival of load spikes.
BibTeX:

@article{6701292,

  author = {Zhang, Hui and Jiang, Guofei and Yoshihira, Kenji and Chen, Haifeng},

  title = {Proactive Workload Management in Hybrid Cloud Computing},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2014},

  volume = {11},

  number = {1},

  pages = {90-100},

  doi = {http://dx.doi.org/10.1109/TNSM.2013.122313.130448}

}

Zhang, J. and Chen, C. and Xiang, Y. and Zhou, W. and Vasilakos, A. An Effective Network Traffic Classification Method with Unknown Flow Detection 2013 Network and Service Management, IEEE Transactions on
Vol. 10(2), pp. 133-147 
traffic classification;compound classification;network security;unknown flow detection DOI  
Abstract: Traffic classification technique is an essential tool for network and system security in the complex environments such as cloud computing based environment. The state-of-the-art traffic classification methods aim to take the advantages of flow statistical features and machine learning techniques, however the classification performance is severely affected by limited supervised information and unknown applications. To achieve effective network traffic classification, we propose a new method to tackle the problem of unknown applications in the crucial situation of a small supervised training set. The proposed method possesses the superior capability of detecting unknown flows generated by unknown applications and utilizing the correlation information among real-world network traffic to boost the classification performance. A theoretical analysis is provided to confirm performance benefit of the proposed method. Moreover, the comprehensive performance evaluation conducted on two real-world network traffic datasets shows that the proposed scheme outperforms the existing methods in the critical network environment.
BibTeX:

@article{6476080,

  author = {Zhang, J. and Chen, C. and Xiang, Y. and Zhou, W. and Vasilakos, A.},

  title = {An Effective Network Traffic Classification Method with Unknown Flow Detection},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2013},

  volume = {10},

  number = {2},

  pages = {133-147},

  doi = {http://dx.doi.org/10.1109/TNSM.2013.022713.120250}

}

Zhang, S. and Zhang, Q. and Bannazadeh, H. and Leon-Garcia, A. Routing algorithms for network function virtualization enabled multicast topology on SDN 2015 Network and Service Management, IEEE Transactions on
Vol. Early Access 
approximation algorithms;approximation methods;bandwidth;heuristic algorithms;network topology;routing;topology;network function virtualization;software defined networking;traffic engineering DOI  
Abstract: Many multicast services such as live multimedia distribution and real-time event monitoring require multicast mechanisms that involve network functions (e.g. firewall, video transcoding). Network Function Virtualization (NFV) is a concept that proposes using virtualization to implement network functions on infrastructure building block (such as high volume servers, virtual machines), where software provides the functionality of existing purpose-built network equipment. We present an approach for building the multicast mechanism whereby multicast flows are processed by NFV before reaching their end users. We propose a routing algorithm and a method for building an appropriate multicast topology.
BibTeX:

@article{7181717,

  author = {Zhang, S. and Zhang, Q. and Bannazadeh, H. and Leon-Garcia, A.},

  title = {Routing algorithms for network function virtualization enabled multicast topology on SDN},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2015},

  volume = {Early Access},

  doi = {http://dx.doi.org/10.1109/TNSM.2015.2465371}

}

Zhao, Y. and Cao, Y. and Chen, Y. and Zhang, M. and Goyal, A. Rake: Semantics Assisted Network-Based Tracing Framework 2013 Network and Service Management, IEEE Transactions on
Vol. 10(1), pp. 3-14 
rake, tracing framework DOI  
Abstract: The ability to trace request execution paths is critical for diagnosing performance faults in large-scale distributed systems. Previous black-box and white-box approaches are either inaccurate or invasive. We present a novel semantics-assisted gray-box tracing approach, called Rake, which can accurately trace individual request by observing network traffic. Rake infers the causality between messages by identifying polymorphic IDs in messages according to application semantics. To make Rake universally applicable, we design a Rake language so that users can easily describe necessary semantics of their applications while reusing the core Rake component. We evaluate Rake using a few popular distributed applications, including web search, distributed computing cluster, content provider network, and online chatting. Our results demonstrate Rake is much more accurate than the black-box approaches while requiring no modification to OS/applications. In the CoralCDN (a content distributed network) experiments, Rake links messages with much higher accuracy than WAP5, a state-of-the-art black-box approach. In the Hadoop (a distributed computing cluster platform) experiments, Rake helps reveal several previously unknown issues that may lead to performance degradation, including a RPC (Remote Procedure Call) abusing problem.
BibTeX:

@article{6313581,

  author = {Zhao, Y. and Cao, Y. and Chen, Y. and Zhang, M. and Goyal, A.},

  title = {Rake: Semantics Assisted Network-Based Tracing Framework},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2013},

  volume = {10},

  number = {1},

  pages = {3-14},

  doi = {http://dx.doi.org/10.1109/TNSM.2012.091912.120224}

}

Zheng, Jie and Ng, T.S.Eugene and Sripanidkulchai, Kunwadee and Liu, Zhaolei Pacer: A Progress Management System for Live Virtual Machine Migration in Cloud Computing 2013 Network and Service Management, IEEE Transactions on
Vol. 10(4), pp. 369-382 
cloud computing;prediction algorithms;virtual machine monitors;virtual machining;virtualization;web servers;live migration;cloud computing;datacenter;progress management DOI  
BibTeX:

@article{6662353,

  author = {Zheng, Jie and Ng, T.S.Eugene and Sripanidkulchai, Kunwadee and Liu, Zhaolei},

  title = {Pacer: A Progress Management System for Live Virtual Machine Migration in Cloud Computing},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2013},

  volume = {10},

  number = {4},

  pages = {369-382},

  doi = {http://dx.doi.org/10.1109/TNSM.2013.111013.130522}

}

Zhou, F. and Liu, J. and Simon, G. and Boutaba, R. Joint Optimization for the Delivery of Multiple Video Channels in Telco-CDNs 2015 Network and Service Management, IEEE Transactions on
Vol. 12(1), pp. 87-100 
bandwidth;delays;joints;optimization;servers;streaming media;vegetation;content delivery networks (cdns);joint optimization;mixed integer linear programming (milp);heuristic algorithms;video delivery DOI  
Abstract: The delivery of live video channels for services such as twitch.tv leverages the so-called Telco-CDN, a Content Delivery Network (CDN) deployed within the Internet Service Provider (ISP) domain. A Telco-CDN can be regarded as an intra-domain overlay network with tight resources and critical deployment constraints. This paper addresses two problems in this context: (1) the construction of the overlays used to deliver the video channels from the entrypoints of the Telco-CDN to the appropriate edge servers; and (2) the allocation of the required resources to these overlays. Since bandwidth is critical for entrypoints and edge servers, our ultimate goal is to deliver as many video channels as possible while minimizing the total bandwidth consumption. To achieve this goal, we propose two approaches: a two-step optimization where the optimal overlays are firstly computed, then an optimal resource allocation based on these pre-computed overlays is performed; and a joint optimization where both optimization problems are simultaneously solved. We also devise fast heuristic algorithms for each of these approaches. The conducted evaluations of these two approaches and algorithms provide useful insights into the management of critical Telco-CDN infrastructures.
BibTeX:

@article{7036096,

  author = {Zhou, F. and Liu, J. and Simon, G. and Boutaba, R.},

  title = {Joint Optimization for the Delivery of Multiple Video Channels in Telco-CDNs},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2015},

  volume = {12},

  number = {1},

  pages = {87-100},

  doi = {http://dx.doi.org/10.1109/TNSM.2015.2400915}

}

Zhou, Yuezhi and Zhang, Yaoxue and Xie, Yinglian and Zhang, Hui and Yang, Laurence T. and Min, Geyong TransCom: A Virtual Disk-Based Cloud Computing Platform for Heterogeneous Services 2014 Network and Service Management, IEEE Transactions on
Vol. 11(1), pp. 46-59 
cloud computing;hard disks;ip networks;kernel;linux;servers;centralized management;distributed platforms;heterogeneous services;virtual disks DOI  
Abstract: This paper presents the design, implementation, and evaluation of TransCom, a virtual disk (Vdisk) based cloud computing platform that supports heterogeneous services of operating systems (OSes) and their applications in enterprise environments. In TransCom, clients store all data and software, including OS and application software, on Vdisks that correspond to disk images located on centralized servers, while computing tasks are carried out by the clients. Users can choose to boot any client for using the desired OS, including Windows, and access software and data services from Vdisks as usual without consideration of any other tasks, such as installation, maintenance, and management. By centralizing storage yet distributing computing tasks, TransCom can greatly reduce the potential system maintenance and management costs. We have implemented a multi-platform TransCom prototype that supports both Windows and Linux services. The extensive evaluation based on both test-bed experiments and real-usage experiments has demonstrated that TransCom is a feasible, scalable, and efficient solution for successful real-world use.
BibTeX:

@article{6708154,

  author = {Zhou, Yuezhi and Zhang, Yaoxue and Xie, Yinglian and Zhang, Hui and Yang, Laurence T. and Min, Geyong},

  title = {TransCom: A Virtual Disk-Based Cloud Computing Platform for Heterogeneous Services},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2014},

  volume = {11},

  number = {1},

  pages = {46-59},

  doi = {http://dx.doi.org/10.1109/TNSM.2013.122613.120358}

}

Zhu, Y. and Helsley, B. and Rexford, J. and Siganporia, A. and Srinivasan, S. LatLong: Diagnosing Wide-Area Latency Changes for CDNs 2012 Network and Service Management, IEEE Transactions on
Vol. 9(3), pp. 333-345 
network diagnosis, content distribution networks (cdns), latency increases DOI  
Abstract: Minimizing user-perceived latency is crucial for Content Distribution Networks (CDNs) hosting interactive services. Latency may increase for many reasons, such as interdomain routing changes and the CDN's own load-balancing policies. CDNs need greater visibility into the causes of latency increases, so they can adapt by directing traffic to different servers or paths. In this paper, we propose a tool for CDNs to diagnose large latency increases, based on passive measurements of performance, traffic, and routing. Separating the many causes from the effects is challenging. We propose a decision tree for classifying latency changes, and determine how to distinguish traffic shifts from increases in latency for existing servers, routers, and paths. Another challenge is that network operators group related clients to reduce measurement and control overhead, but the clients in a region may use multiple servers and paths during a measurement interval. We propose metrics that quantify the latency contributions across sets of servers and routers. Based on the design, we implement the LatLong tool for diagnosing large latency increases for CDN. We use LatLong to analyze a month of data from Google's CDN, and find that nearly 1% of the daily latency changes increase delay by more than 100 msec. Note that the latency increase of 100 msec is significant, since these are daily averages over groups of clients, and we only focus on latency-sensitive traffic for our study. More than 40% of these increases coincide with interdomain routing changes, and more than one-third involve a shift in traffic to different servers. This is the first work to diagnose latency problems in a large, operational CDN from purely passive measurements. Through case studies of individual events, we identify research challenges for managing wide-area latency for CDNs.
BibTeX:

@article{6233056,

  author = {Zhu, Y. and Helsley, B. and Rexford, J. and Siganporia, A. and Srinivasan, S.},

  title = {LatLong: Diagnosing Wide-Area Latency Changes for CDNs},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2012},

  volume = {9},

  number = {3},

  pages = {333-345},

  doi = {http://dx.doi.org/10.1109/TNSM.2012.070412.110180}

}

Yanmin Zhu and Sye Loong Keoh and Sloman, M. and Lupu, E.C. A lightweight policy system for body sensor networks 2009 Network and Service Management, IEEE Transactions on
Vol. 6(3), pp. 137-148 
policy-driven management, policy system, body sensor networks, adaptation, authorization, access control. body sensor networks, health care, operating systems (computers), telecommunication computing, telecommunication security, user interfaces DOI  
Abstract: Body sensor networks (BSNs) for healthcare have more stringent security and context adaptation requirements than required in large-scale sensor networks for environment monitoring. Policy-based management enables flexible adaptive behavior by supporting dynamic loading, enabling and disabling of policies without shutting down nodes. This overcomes many of the limitations of sensor operating systems, such as TinyOS, which do not support dynamic modification of code. Alternative schemes for adaptation, such as network programming, have a high communication cost and suffer from operational interruption. In addition, a policy-driven approach enables fine-grained access control through specifying authorization policies. This paper presents the design, implementation and evaluation of an efficient policy system called Finger which enables policy interpretation and enforcement on distributed sensors to support sensor level adaptation and fine-grained access control. It features support for dynamic management of policies, minimization of resources usage, high responsiveness and node autonomy. The policy system is integrated as a TinyOS component, exposing simple, well-defined interfaces which can easily be used by application developers. The system performance in terms of processing latency and resource usage is evaluated.
BibTeX:

@article{5374835,

  author = {Yanmin Zhu and Sye Loong Keoh and Sloman, M. and Lupu, E.C.},

  title = {A lightweight policy system for body sensor networks},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2009},

  volume = {6},

  number = {3},

  pages = {137-148},

  doi = {http://dx.doi.org/10.1109/TNSM.2009.03.090301}

}

Zhu, Y. and Lu, H. and Leung, V. Access Point Buffer Management for Power Saving in IEEE 802.11 WLANs 2012 Network and Service Management, IEEE Transactions on
Vol. 9(4), pp. 473-486 
ieee 802.11, power management, wlan, power saving DOI  
Abstract: It is crucial to save power and prolong the runtime of mobile stations (STAs) in wireless local area networks (WLANs). In an infrastructure WLAN, a STA cannot be connected until it is associated with an access point (AP), which is responsible for buffering frames for all the associated STAs operating in the power saving mode. Hence, efficient memory utilization is critical for an AP to accommodate as many power-saving STAs as possible. The basic power management (BPM) scheme introduced in the IEEE 802.11 standard achieves power saving by allowing STAs not engaging in data delivery to operate in doze mode, but it does not consider the efficient use of the memory in the AP. To tradeoff power consumption for memory usage, we present an AP-priority timer-based power management (APP-TPM) scheme and develop a novel model for stochastic analysis of the proposed scheme. Based on this model, the probability distributions of the numbers of frames buffered at the AP and the average numbers of frames buffered at the AP are obtained. Moreover, a power-aware buffer management scheme (PBMS), which is based on the derived statistics, is proposed to accommodate as many STAs as possible given a fixed amount of memory in the AP while maintaining low power consumption. Simulation results show that the proposed scheme performs better than BPM in terms of memory usage in the AP.
BibTeX:

@article{6228474,

  author = {Zhu, Y. and Lu, H. and Leung, V.},

  title = {Access Point Buffer Management for Power Saving in IEEE 802.11 WLANs},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2012},

  volume = {9},

  number = {4},

  pages = {473-486},

  doi = {http://dx.doi.org/10.1109/TNSM.2012.062512.110188}

}
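The buffering behavior the abstract above attributes to basic power management (BPM) can be illustrated with a toy model. This is a hypothetical sketch for illustration only, not the paper's APP-TPM or PBMS algorithm: the `AccessPoint` class, its method names, and the frame representation are all invented here. It shows the core mechanism the paper builds on: the AP holds frames for dozing STAs in memory and releases them when the STA wakes and polls.

```python
from collections import deque

class AccessPoint:
    """Toy model of IEEE 802.11 basic power management (BPM):
    the AP buffers frames destined for dozing stations and delivers
    them only when the station wakes and polls."""

    def __init__(self):
        self.buffers = {}   # sta_id -> deque of buffered frames
        self.awake = {}     # sta_id -> True if STA is awake

    def associate(self, sta_id):
        self.buffers[sta_id] = deque()
        self.awake[sta_id] = True

    def set_doze(self, sta_id, dozing):
        self.awake[sta_id] = not dozing

    def deliver(self, sta_id, frame):
        # Frames for a dozing STA are held in AP memory -- the
        # memory cost that BPM ignores and APP-TPM tries to bound.
        if self.awake[sta_id]:
            return [frame]              # delivered immediately
        self.buffers[sta_id].append(frame)
        return []

    def ps_poll(self, sta_id):
        """STA wakes (e.g. after seeing its TIM bit in a beacon)
        and retrieves all frames buffered on its behalf."""
        self.awake[sta_id] = True
        drained = list(self.buffers[sta_id])
        self.buffers[sta_id].clear()
        return drained

    def buffered(self):
        """Total frames held across all STAs -- the memory-usage
        metric the paper's analysis derives distributions for."""
        return sum(len(q) for q in self.buffers.values())
```

With this model, two frames sent to a dozing STA raise `buffered()` to 2 and are drained together by `ps_poll`, which is exactly the AP-side memory pressure that motivates the paper's stochastic analysis.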

Yanfeng Zhu and Qian Ma and Bisdikian, C. and Chun Ying User-Centric Management of Wireless LANs 2011 Network and Service Management, IEEE Transactions on
Vol. 8(3), pp. 165-175 
802.11, wireless lans, access points, network management, performance modeling telecommunication network management, wireless lan DOI  
Abstract: With the ever-increasing deployment density of Wireless Local Area Networks (WLANs), more and more access points (APs) are deployed within users' vicinity. Effectively managing these APs to optimize users' throughput becomes an important challenge in high-density deployment environments. In this paper, we propose a user-centric network management framework that optimizes users' throughput by taking into consideration both the network conditions sensed by users and their access priorities. The proposed framework is built around an information pipeline that facilitates sharing of the information needed for optimal management of communication resources. Theoretical analysis and extensive simulations covering two major management activities, AP association and channel selection, demonstrate that the proposed user-centric framework significantly outperforms the traditional network management framework in high-density deployment environments.
BibTeX:

@article{5970249,

  author = {Yanfeng Zhu and Qian Ma and Bisdikian, C. and Chun Ying},

  title = {User-Centric Management of Wireless LANs},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2011},

  volume = {8},

  number = {3},

  pages = {165 -175},
  pages = {165-175},

  doi = {http://dx.doi.org/10.1109/TNSM.2011.072611.100031}

}
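The user-centric AP association that the abstract above describes can be sketched as a simple scoring rule. This is a hypothetical illustration, not the paper's actual framework: the function `choose_ap`, its throughput model (PHY rate shared equally among clients, scaled by an access priority), and the input format are all assumptions made here to show the flavor of user-side, condition-aware association.

```python
def choose_ap(aps, sta_priority=1.0):
    """Hypothetical user-centric AP association rule: score each
    candidate AP by the throughput the joining station could expect,
    modeled as the AP's PHY rate shared equally among its current
    clients plus the newcomer, scaled by the station's access priority.

    `aps` maps an AP name to a (phy_rate_mbps, n_clients) pair,
    standing in for the network conditions the station senses.
    """
    def expected_tput(rate, clients):
        # Equal-share throughput estimate for the joining station.
        return sta_priority * rate / (clients + 1)

    return max(aps, key=lambda name: expected_tput(*aps[name]))
```

For example, a lightly loaded 24 Mbps AP (one client) beats a heavily loaded 54 Mbps AP (five clients), since 24/2 = 12 exceeds 54/6 = 9; a purely signal-strength-based policy would pick the opposite, which is the kind of mismatch a user-centric framework aims to avoid.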

Zuo, L. and Zhu, M.M. Concurrent Bandwidth Reservation Strategies for Big Data Transfers in High-Performance Networks 2015 Network and Service Management, IEEE Transactions on
Vol. 12(2), pp. 232-247 
algorithm design and analysis; bandwidth; complexity theory; data transfer; heuristic algorithms; scheduling; scheduling algorithms; bandwidth reservation; qos; big data; dynamic provisioning; high-performance networks DOI  
Abstract: Because of the deployment of large-scale experimental and computational scientific applications, big data is being generated on a daily basis. Such large volumes of data usually need to be transferred in a timely manner from the data-generating center to remotely located scientific sites for collaborative analysis. Bandwidth reservation along paths provisioned by dedicated high-performance networks (HPNs) has proved to be a fast, reliable, and predictable way to satisfy the transfer requirements of massive time-sensitive data. In this paper, we study the problem of scheduling multiple bandwidth reservation requests (BRRs) concurrently within an HPN while achieving their best average transfer performance. Two common data transfer performance parameters are considered: the Earliest Completion Time (ECT) and the Shortest Duration (SD). Since all BRRs in a batch often cannot be scheduled successfully, the problems of scheduling every BRR in a batch while achieving the best average ECT and SD are converted into the problems of scheduling as many BRRs as possible while optimizing the average ECT and SD of the scheduled BRRs, respectively. Both problems are proved to be NP-complete. Two fast and efficient heuristic algorithms with polynomial-time complexity are proposed, and extensive simulation experiments compare their performance with two naive baseline algorithms under various performance metrics, verifying the superiority of the two heuristics.
BibTeX:

@article{7103032,

  author = {Zuo, L. and Zhu, M.M.},

  title = {Concurrent Bandwidth Reservation Strategies for Big Data Transfers in High-Performance Networks},

  journal = {Network and Service Management, IEEE Transactions on},

  year = {2015},

  volume = {12},

  number = {2},

  pages = {232-247},

  doi = {http://dx.doi.org/10.1109/TNSM.2015.2430358}

}
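The Earliest Completion Time (ECT) objective in the abstract above can be illustrated on a single reserved link. This is a toy sketch under stated assumptions, not the paper's HPN path model or either of its heuristics: the function `earliest_completion`, the slot-based residual-bandwidth representation, and the single-link simplification are all inventions for illustration.

```python
def earliest_completion(slots, demand):
    """Earliest Completion Time (ECT) for one transfer on a single
    dedicated link, a toy stand-in for the paper's HPN path model.

    `slots` is a chronological list of (duration, available_bandwidth)
    pairs describing the link's residual capacity over time; `demand`
    is the data volume to transfer. Returns the finish time, or None
    if the demand cannot be satisfied within the given slots.
    """
    t = 0.0
    remaining = demand
    for duration, bw in slots:
        capacity = duration * bw
        if capacity >= remaining:
            return t + remaining / bw   # transfer finishes mid-slot
        remaining -= capacity           # consume the whole slot
        t += duration
    return None                         # residual capacity exhausted
```

A batch scheduler in the spirit of the paper would repeatedly apply such an ECT computation to candidate BRRs, admitting as many as possible while keeping the average completion time of the admitted set low.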