DEEPSE Cloud Initiative

Cloud computing represents a new way to deliver and use services on a shared IT infrastructure. Traditionally, IT hardware and software were acquired and provisioned on business premises. Software applications were built, possibly integrating off-the-shelf components, deployed, and run on these privately owned resources. With service-oriented computing, applications are offered by service providers to clients, who can simply invoke them through the network. The offer specifies both the functionality and the Quality of Service (QoS). Providers are responsible for deploying and running services on their own resources. Cloud computing moves one step further: computing facilities can also be delivered on demand in the form of services over a network.

The aim of the DEEPSE Cloud Initiative is to study advanced solutions that make the software infrastructure of modern Cloud computing systems able to adapt dynamically. These adaptation capabilities are becoming increasingly important since cloud infrastructures live in an open world, characterized by continuous changes in the environment and in the requirements they have to meet; moreover, these changes may be frequent, difficult to predict, and out of the control of the owner of the cloud computing infrastructure.

The current research activities within the DEEPSE Cloud Initiative span several closely related directions. The Cloud SelfLet is a software toolkit that aims to provide a middleware on which self-adaptive cloud applications can be developed and deployed. The Load-balancing in the Cloud activity focuses on finding efficient, innovative ways to distribute the workload in a Cloud computing system with the support of the Cloud SelfLet middleware. The Bio-inspired Self-organization activity investigates decentralized techniques for solving several types of optimization problems, taking inspiration from self-organization phenomena observed in the natural world. Finally, the research activity on Resource Allocation devises decentralized resource allocation policies for virtualized cloud environments that identify performance and energy trade-offs while providing a priori availability guarantees to end users.

Further details on the current research activities are given in the following sections. Links to published research works based on the activities of the DEEPSE Cloud Initiative can be found in the References and Related Theses sections.


Research Activities

Cloud SelfLet

Modern Cloud systems offer the possibility to rent virtual machine instances belonging to different data centers, in real time and at low cost. The availability of computing infrastructure with these characteristics is fostering the diffusion of systems composed of a large number of heterogeneous nodes that are geographically distributed and highly dynamic. These systems pose several new challenges, such as the need for nodes to autonomously and dynamically manage themselves in order to achieve a common goal despite the continuous evolution of the surrounding environment. Due to these characteristics, such systems need to support a decentralized communication paradigm in which actions taken at each local node typically have an impact on the overall system state.

The objective of this research activity is to augment an existing software toolkit, the SelfLet [1], with generic cloud operations able to seamlessly support the development of self-managing systems within cloud infrastructures. By incorporating each cloud operation into atomic services offered by each SelfLet, it is possible to exploit the advantages of a fully decentralized, self-managing autonomic middleware within a cloud infrastructure by reasoning at a higher level of abstraction.

In this regard, an application using the SelfLet Cloud middleware has been implemented for the case of dynamic resource provisioning. In particular, the analyzed case study assumes a system allocated partly on cloud and partly on non-cloud machines and subject to environment changes (e.g., workload, faults, costs). By exploiting cloud elasticity and specific management policies, the system is able to dynamically allocate more resources on the cloud as the environment state changes. A more detailed description of the SelfLet Cloud middleware can be found in [2].
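As an illustration only, the kind of node-local elasticity policy described above can be sketched as follows. The thresholds, action names, and function signature are hypothetical, not those of the actual SelfLet Cloud implementation.

```python
# Illustrative sketch of a node-local provisioning policy that reacts to
# environment changes by acquiring or releasing cloud VMs. Thresholds and
# action names are hypothetical, not taken from the SelfLet Cloud code.

SCALE_UP_UTIL = 0.80    # assumed upper utilization bound
SCALE_DOWN_UTIL = 0.30  # assumed lower utilization bound

def provisioning_action(cpu_utilization: float, cloud_vms: int, max_vms: int) -> str:
    """Return the adaptation action for the current environment state."""
    if cpu_utilization > SCALE_UP_UTIL and cloud_vms < max_vms:
        return "acquire_cloud_vm"    # exploit cloud elasticity under high load
    if cpu_utilization < SCALE_DOWN_UTIL and cloud_vms > 0:
        return "release_cloud_vm"    # shrink to cut infrastructure cost
    return "no_action"
```

A real policy would of course also weigh faults and costs, as the case study above does; this sketch only captures the utilization-driven part of the decision.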

Load-balancing in the Cloud

An important research area for the systems described in the previous activity is the identification of proper load balancing mechanisms that, depending on the current utilization of resources at a node and on its knowledge of the neighborhood, aim to optimize the global system state at runtime.

In the context of this activity, we study the load balancing problem by relying on the SelfLet environment: a general framework that provides an architectural model and a runtime infrastructure supporting the development of distributed autonomic systems. To achieve load balancing, each SelfLet is provided with a performance model able to characterize the current resource utilization (e.g., CPU utilization) and to evaluate the instantaneous revenue generated by each satisfied service request that respects certain QoS parameters.

In order to improve system performance, a SelfLet can actuate different optimization actions, for example: change a service implementation, redirect service requests, teach a service, or learn a service. Depending on the current workload and revenue of each SelfLet and on the state of its neighbors, an optimization policy decides which of the possible actions is the best to actuate. Moreover, by taking advantage of prediction techniques [3] and probabilistic choices, the actuated actions can alleviate the problems of system instability and locally optimal states. The work is at an advanced stage, and preliminary experimental results [4] have shown that the developed optimization policy improves overall system revenue while keeping resource utilization balanced among the system nodes.
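The role of probabilistic choice mentioned above can be illustrated with a small sketch: instead of all nodes deterministically picking the action with the highest estimated gain (which can cause synchronized oscillations), each node picks an action with probability proportional to its estimated revenue gain. Action names and gain values are hypothetical.

```python
import random

# Illustrative sketch: probabilistic selection among optimization actions
# (e.g., redirect requests, teach a service) in proportion to each action's
# estimated revenue gain. This is a generic roulette-wheel choice, not the
# actual SelfLet policy; action names and gains are hypothetical.

def choose_action(estimated_gains: dict, rng=random.random) -> str:
    """Pick an action with probability proportional to its estimated gain."""
    total = sum(estimated_gains.values())
    if total <= 0:
        return "no_action"          # nothing is expected to pay off
    r = rng() * total               # a point on the cumulative gain line
    cumulative = 0.0
    for action, gain in estimated_gains.items():
        cumulative += gain
        if r <= cumulative:
            return action
    return action                   # guard against rounding at the boundary
```

Randomizing the choice keeps neighboring nodes from reacting identically to the same observations, which is one simple way to dampen the instability discussed above.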

Bio-inspired Self-organizing Techniques for the Cloud

This research activity focuses on adaptation problems in the context of Cloud Computing environments. A typical Cloud Computing setting can be seen as a complex distributed system composed of many nodes, with resources scattered among them in the form of running Virtual Machines and their data.

Traditionally, adaptation in this kind of system is achieved using two different approaches. The first is top-down self-adaptation, in which the system is conceptually separated into a monitor and a resource: the monitor analyzes the state of the resource using sensors and, according to a policy that depends on the application, performs corrective actions using an actuator. The second approach is bottom-up self-adaptation, often called self-organization, in which the system elements are not divided into monitors and resources, but cooperate at the same level using only local information and inter-element communication. The particularity of this approach with respect to other optimization heuristics is that the solution usually emerges from local, apparently unrelated interactions, and, similarly to what happens in the natural world, this process tends to be resistant to individual failures and to selfish behaviors among the system components.

In this activity we focus on using a bio-inspired, bottom-up approach to improve the reactivity of Cloud Computing environments in contexts characterized by high dynamism, a large number of resources, and a large number of Service Level Agreement constraints that may rapidly change over time.

Current results include a set of fully decentralized algorithms able to optimize several properties of this type of system. Examples of algorithms that have been studied include algorithms for maintaining the topology and balancing the load among different nodes [5][6], and for reducing the number of active nodes by migrating Virtual Machines from one node to another [7]. Moreover, a software engineering methodology for supporting bio-inspired self-organizing applications is under investigation [8].
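To give a flavor of what "fully decentralized" means here, the sketch below shows one synchronous round of a simple load-diffusion rule: each node moves a fraction of its load difference toward less-loaded neighbors using only local information. The topology and diffusion rate are hypothetical; the algorithms studied in [5][6] are considerably richer.

```python
# Illustrative sketch of a decentralized load-balancing step: every node
# pushes a fraction of the load difference toward each less-loaded neighbor,
# using only local (neighborhood) information. Not the algorithm of [5][6];
# the graph, loads, and diffusion rate below are hypothetical.

def diffusion_step(load, neighbors, rate=0.25):
    """One synchronous round of load diffusion on an undirected graph."""
    new_load = dict(load)
    for node, nbrs in neighbors.items():
        for nbr in nbrs:
            delta = rate * (load[node] - load[nbr])  # based on old values
            if delta > 0:            # only the heavier endpoint pushes
                new_load[node] -= delta
                new_load[nbr] += delta
    return new_load
```

Repeated rounds drive the loads toward the neighborhood average while conserving the total load, and no node ever needs a global view of the system, which is exactly the property that makes such schemes robust to individual failures.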

Resource Allocation in the Cloud

One of the benefits of the Cloud computing paradigm is the possibility to support on-demand provisioning of flexible and scalable services accessible through the Internet. This research activity considers the Software as a Service (SaaS) paradigm, in which basic software components may be deployed in the cloud to offer basic services. These basic services may then be composed to create virtual components that provide more advanced and complex functionality. This complexity requires efficient resource allocation mechanisms in order to give guarantees to the final users of such services.

The final aim of this activity is to support SaaS providers in deploying services that comply with the Quality of Service requirements specified in Service Level Agreement (SLA) contracts. At the same time, providers also need to make efficient use of their resources, reducing the use (and therefore the cost) of their infrastructure without breaking the SLA.

Within this activity the allocation problem has been thoroughly analyzed by considering several infrastructural parameters, such as the number of servers in the cloud, the number of request types, the total number of requests at a given time, multi-tier classes, and the cost of reallocating resources in the infrastructure. The adopted approach formalizes the problem as a non-linear programming optimization problem and provides a decentralized, heuristic-based solution. The innovation with respect to existing studies in the area lies in the cloud-specific formulation, the use of a broader set of parameters, and the fact that the solutions are decentralized. A more detailed description of the approach, together with experimental results showing that the obtained performance is close to the optimal one, has been published in [9] and [10].
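The cost/SLA tension described above can be made concrete with a toy heuristic, shown below only as an illustration: request classes are packed onto as few servers as possible while each server's utilization stays under a bound that stands in for the SLA constraint. The parameters and the first-fit-decreasing strategy are assumptions for this sketch, not the formulation of [9][10].

```python
# Illustrative sketch (hypothetical parameters, not the formulation in
# [9][10]): greedily pack per-class CPU demands onto servers while keeping
# each server's utilization below a bound, approximating the trade-off
# between infrastructure cost (servers used) and SLA compliance.

def allocate(demands, capacity, max_util=0.6):
    """Return the number of servers used by a first-fit-decreasing packing."""
    budget = capacity * max_util   # usable capacity per server under the bound
    servers = []                   # residual usable capacity of each open server
    for d in sorted(demands, reverse=True):   # largest demands first
        for i, free in enumerate(servers):
            if d <= free:
                servers[i] = free - d         # fits on an already-open server
                break
        else:
            if d > budget:
                raise ValueError("demand exceeds a single server's usable capacity")
            servers.append(budget - d)        # open (and pay for) a new server
    return len(servers)
```

The actual work replaces this greedy packing with a non-linear programming formulation and a decentralized heuristic, and accounts for many more parameters (request types, multi-tier classes, reallocation costs); the sketch only conveys why keeping utilization bounded forces a cost/performance trade-off.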

Research Projects


MODAClouds

The main goal of MODAClouds is to provide methods, a decision support system, an open source IDE and run-time environment for the high-level design, early prototyping, semi-automatic code generation, and automatic deployment of applications on multi-Clouds with guaranteed QoS. Model-driven development combined with novel model-driven risk analysis and quality prediction will enable developers to specify Cloud-provider independent models enriched with quality parameters, implement these, perform quality prediction, monitor applications at run-time and optimize them based on the feedback, thus filling the gap between design and run-time. Additionally, MODAClouds provides techniques for data mapping and synchronization among multiple Clouds.


Industrial Sponsors

Amazon grant

The DEEPSE Cloud Initiative has been awarded a $7,500 research grant by Amazon Web Services for the year 2011. The grant will be used to support all the activities involved in the project.

The Amazon grant has been used for a total of 19,690 EC2 hours; the distribution of the virtual machines used is reported in the figure below.


Flexiant

Flexiant is a leading provider of cloud orchestration software for on-demand, fully automated provisioning of cloud services. Flexiant's software gives cloud service providers business agility as well as the freedom and flexibility to scale, deploy and configure servers, simply and cost-effectively. Vendor agnostic and supporting multiple hypervisors, Flexiant's proven cloud orchestration solution is a full business process automation suite from provisioning through to granular metering and billing of resources. Flexiant Cloud Orchestrator is a mature, reliable, and feature-rich software solution for cloud management. The software was developed to launch the first public cloud service platform in Europe in 2007 and was first made commercially available to the market in 2010. Since then, it has been enhanced many times and is recognised for leading the market in innovation.

One of the projects carried out in collaboration with Flexiant concerns the auto scaling features found in common cloud infrastructures. In particular, in [5] the Flexiant public cloud was extended in order to compare its scaling mechanisms with those offered by the Amazon cloud. The analysis aims at identifying useful patterns for the execution of Web applications in the cloud and at highlighting the critical factors that affect the performance of the two providers. A large set of experiments was performed, demonstrating the importance of correctly tuning the auto scaling parameters.
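The kind of tunable rule those experiments exercise can be sketched as follows: scale out when CPU stays above a threshold for a sustained window, scale in when it stays well below it, and do nothing during a cooldown. The threshold, window length, and cooldown values below are hypothetical, not those of Flexiant or Amazon.

```python
# Illustrative sketch of a threshold-based auto scaling rule with the three
# parameters the experiments in [5] tune: a utilization threshold, a breach
# window, and a cooldown. Values are hypothetical, not provider defaults.

def autoscale_decision(samples, threshold=0.7, breach_len=3, cooldown_left=0):
    """Return +1 (scale out), -1 (scale in), or 0 given recent CPU samples."""
    if cooldown_left > 0 or len(samples) < breach_len:
        return 0                                  # still settling, or too few samples
    recent = samples[-breach_len:]
    if all(s > threshold for s in recent):
        return +1                                 # sustained overload: add capacity
    if all(s < threshold / 2 for s in recent):
        return -1                                 # sustained idleness: remove capacity
    return 0
```

Choosing these parameters badly either makes the system oscillate (window too short, no cooldown) or react too slowly to load spikes (window too long), which is precisely the tuning trade-off the experiments highlight.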

Publications and References


  1. D. Ardagna, S. Casolari, B. Panicucci. Flexible Distributed Capacity Allocation and Load Redirect Algorithms for Cloud Systems. IEEE Cloud 2011 Proceedings, 163-170, Washington DC, USA.
  2. D. Ardagna, S. Casolari, M. Colajanni, and B. Panicucci. Multi time scale distributed capacity allocation and load redirect algorithms for cloud systems. Politecnico di Milano, Tech. Report 2011.23
  3. D. Ardagna, B. Panicucci, M. Passacantando. Generalized Nash Equilibria for the Service Provisioning Problem in Cloud Systems. Politecnico di Milano, Tech. Report 2011.27
  4. N. M. Calcavecchia, D. Ardagna, E. Di Nitto, and A. Gandini. Developing Applications in the Cloud through the SelfLet Framework. Politecnico di Milano, Internal report.
  5. F. L. Ferraris, D. Franceschelli, M. Pio Gioiosa, D. Lucia, D. Ardagna, E. Di Nitto and T. Sharif. Evaluating the Auto Scaling Performance of Flexiscale and Amazon EC2 Clouds. MICAS'12 Workshop.

Related Theses

  1. A. Gandini. An extension of the SelfLet autonomic framework to support Amazon Cloud Computing Services. Master Thesis, 2010, Politecnico di Milano. Link
  2. M. Casiero, S. Vettor. Multi-time Scale Distributed Capacity Allocation and Load Redirect Algorithms for Cloud Systems. Master Thesis, 2010, Politecnico di Milano. Link
  3. F. Nigro. An On-Line Aspect-Oriented Monitoring Tool for the SelfLet Framework. Master Thesis, 2011, Politecnico di Milano. Link
  4. A. Gallo. Hierarchical Resource Allocation Controller for Very Large Scale Data Centers. Master Thesis, 2011, Politecnico di Milano.
  5. M. Basilico. A Self-organizing Approach for Auto-scaling Services in the Cloud. Master Thesis, 2011, Politecnico di Milano.


References

  1. The SelfLet software toolkit.
  2. A. Gandini. An extension of the SelfLet autonomic framework to support Amazon Cloud Computing Services. Master Thesis, 2010, Politecnico di Milano.
  3. N. M. Calcavecchia and E. Di Nitto. Incorporating prediction models in the SelfLet framework: a plugin approach. Proceedings of the Fourth International ICST Conference on Performance Evaluation Methodologies and Tools (VALUETOOLS 2009).
  4. N. M. Calcavecchia, D. Ardagna, E. Di Nitto. The emergence of load balancing in distributed systems: the SelfLet approach. In Run-time Models for Self managing Systems and Applications. Birkhauser Springer, Autonomic Systems series 2010.
  5. E. Di Nitto, D. J. Dubois, R. Mirandola, F. Saffre, R. Tateson. Applying self-aggregation to load balancing: Experimental results. In Bionetics, 2008.
  6. B. A. Caprarescu, N. M. Calcavecchia, E. Di Nitto, D. J. Dubois. SOS Cloud: Self-Organizing Services in the Cloud (Work in progress Paper). Bionetics, 2010.
  7. D. Barbagallo, E. Di Nitto, D. J. Dubois, R. Mirandola. A Bio-Inspired Algorithm for Energy Optimization in a Self-organizing Data Center. Self-Organizing Architectures, Springer, 2010.
  8. E. Di Nitto, D. J. Dubois and R. Mirandola. On Exploiting Decentralized Bio-inspired Self-organization Algorithms to Develop Real Systems. Proceedings of Software Engineering for Adaptive and Self-Managing Systems (SEAMS), 2009.
  9. B. Addis, D. Ardagna, B. Panicucci, L. Zhang. Autonomic Management of Cloud Service Centers with Availability Guarantees. International Conference on Cloud Computing (CLOUD), 2010.
  10. D. Ardagna, C. Ghezzi, B. Panicucci, M. Trubian. Service Provisioning on the Cloud: Distributed Algorithms for Joint Capacity Allocation and Admission Control. ServiceWave, 2010.
  11. B. Addis, D. Ardagna, B. Panicucci, M. Squillante, L. Zhang. A Hierarchical Approach for the Resource Management of Very Large Cloud Platforms. IEEE Transactions on Dependable and Secure Computing. To Appear.
  12. D. Ardagna, B. Panicucci, M. Passacantando. Generalized Nash Equilibria for the Service Provisioning Problem in Cloud Systems. IEEE Transactions on Services Computing. To Appear.
  13. F. Giove, D. Longoni, M. Shokrolahi Yancheshmeh, D. Ardagna, E. Di Nitto. An approach for the Development of Portable Applications on PaaS Clouds. Closer 2013 Proceedings. To Appear.
  14. D. Franceschelli, D. Ardagna, M. Ciavotta, E. Di Nitto. SPACE4CLOUD: A Tool for System PerformAnce and Cost Evaluation of CLOUD Systems. Multi-Cloud 2013 Workshop Proceedings. To Appear.
  15. S. Benefico, E. Gjeci, R. Gonzalez Gomarasca, E. Lever, S. Lombardo, D. Ardagna, E. Di Nitto. The evaluation of CAP Properties on Amazon SimpleDB and Windows Azure Table Storage. MICAS 2012 Workshops Proceedings. To Appear.
  16. F. L. Ferraris, D. Franceschelli, M. P. Gioiosa, D. Lucia, D. Ardagna, E. Di Nitto, T. Sharif. Evaluating the Auto Scaling Performance of Flexiscale and Amazon EC2 Clouds. MICAS 2012 Workshop Proceedings. To Appear.
  17. S. Lombardo, E. Di Nitto, D. Ardagna. Issues in handling complex data structures with NoSQL databases. MICAS 2012 Workshop Proceedings. To Appear.
  18. M. Miglierina, G. Gibilisco, D. Ardagna, E. Di Nitto. Model Based Control for Multi-cloud Applications. Modeling in Software Engineering (MISE), ICSE 2013 Workshop, 18-19 May 2013.