We constantly offer a range of open master's thesis topics. By writing your master's thesis with us, you will learn many practically relevant software engineering concepts and skills and contribute to our cutting-edge research in this field. Our thesis topics usually strike a good balance between theory and practice.

If you don't find a topic that suits you, do not hesitate to make an appointment with us. We will then advise you, and it may be possible to define a suitable thesis topic for you.

We are a strong research group with an outstanding national and international reputation — and we are looking forward to having you on board!

Group presentation slides

Available Theses

Title: A Spark framework for parallel optimisation algorithms
Recent years have witnessed the rise of Big Data technologies and platforms as a means to manage, analyze, and interpret huge data sets. These approaches are based on the MapReduce paradigm, presented by Google in 2004, in which the dataset is spread among several commodity computers (nodes) and a function (referred to as Map) is executed in parallel on chunks of data. The result of this phase is a new set of data, which is generally aggregated and then manipulated by the execution of another function called Reduce. The most famous open-source implementation of MapReduce is Apache Hadoop: a mature, distributed, fault-tolerant Big Data platform adopted worldwide. Despite its undeniable success, over the years Hadoop has also shown some important drawbacks, mainly due to the rigidity of its base paradigm.
For this reason, more flexible and faster approaches have been devised and implemented.
Spark can be considered the successor of Hadoop. It is based on a shared-memory structure and makes it possible to set up and execute not only Map and Reduce steps but also loops of instructions. Thanks to these features, Spark is currently in the spotlight of both industry and academia.
Moreover, its flexible structure, together with the automatic parallelization of computational tasks and fault tolerance, makes Spark a good candidate not only for data-intensive applications but also for CPU-intensive ones.
Driven by this idea, we are studying how Spark, designed to tackle Big Data problems, fits the execution of optimization algorithms, exploiting its intrinsic parallelism. Concepts such as local search, intensification, and diversification must be explored, redefined, and implemented using the Spark abstractions.
The goal of this thesis is to develop a general meta-heuristic optimization framework in the Scala programming language to prove the applicability and suitability of this approach.
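As a rough illustration of the paradigm the framework will build on, the following pure-Python sketch mimics the Map and Reduce phases on a word count. It does not use the Spark API; the chunking and all names are illustrative only:

```python
from collections import defaultdict
from itertools import chain

def map_phase(chunk):
    # Map: emit a (word, 1) pair for every word in this chunk of text.
    return [(word, 1) for word in chunk.split()]

def reduce_phase(pairs):
    # Reduce: aggregate the emitted pairs by key.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

# Each chunk could be mapped on a different node; only the reduce
# needs to see the union of the mapped pairs.
chunks = ["spark runs map tasks", "map tasks run in parallel"]
mapped = chain.from_iterable(map_phase(c) for c in chunks)
word_counts = reduce_phase(mapped)
```

In Spark the same shape appears as transformations over distributed collections, with the scheduler, not the programmer, deciding where each Map runs.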

Title: Petri Net Models to Estimate Hadoop Performance
Nowadays more and more companies deal with large amounts of raw, unstructured data during business operations. This trend is fostering the emergence of the Big Data market, which is growing at a fast 27% worldwide compound annual growth rate through 2017 and, in Europe, at a 31.96% compound annual growth rate through 2016. Moreover, nearly 40% of Big Data worldwide will likely be hosted on public Clouds by 2020, while Hadoop is expected to touch half of the world's data in the same period.
Apache Hadoop, the open-source implementation of the MapReduce framework, holds a central role in the Big Data paradigm and is widely adopted in industry to process huge datasets. Alongside Hadoop, whose I/O-bound workflow is mainly targeted at batch processing, in-memory frameworks such as Spark are now available, enabling faster elaboration of iterative algorithms, e.g., regression, classification, and other machine learning applications. Cost-effectiveness considerations encourage sharing computational clusters among heterogeneous classes of workloads, but this practice makes performance prediction difficult. Furthermore, real-world applications are usually bound to meet Service Level Agreements providing, e.g., an upper bound for query execution time, thus requiring careful resource allocation.
Among other approaches, it is possible to study multi-class systems by adopting Petri Nets. At the expense of significant computational complexity, these tools allow for great accuracy in performance prediction. In addition, Petri Nets appear to be good abstractions for data-intensive applications: a token circulating in the model naturally represents a request being processed, and atomic fork/join operations and colors can be profitably exploited to express, at the same time, the memory, disk read/write operations, network/stream traffic, and other concurrent operations that a single request implies on the available computational resources.
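To make the modelling idea concrete, here is a minimal, hypothetical place/transition net in Python: tokens represent requests competing for a CPU resource, in the request/resource spirit described above. The model and all names are illustrative, not part of the thesis material:

```python
class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)   # tokens currently in each place
        self.transitions = {}          # name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        # A transition is enabled when every input place holds enough tokens.
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= n for p, n in inputs.items())

    def fire(self, name):
        inputs, outputs = self.transitions[name]
        if not self.enabled(name):
            raise ValueError(f"{name} is not enabled")
        for p, n in inputs.items():
            self.marking[p] -= n
        for p, n in outputs.items():
            self.marking[p] = self.marking.get(p, 0) + n

# Toy model of a map task: a queued request grabs the CPU, runs, completes.
net = PetriNet({"queued": 2, "cpu": 1})
net.add_transition("start", {"queued": 1, "cpu": 1}, {"running": 1})
net.add_transition("finish", {"running": 1}, {"done": 1, "cpu": 1})
net.fire("start")
net.fire("finish")
```

Real performance models would add timing and colors to the transitions; the point here is only the token-game semantics.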

Download complete description
Title: Context-Oriented Programming Languages
Contact(s): Carlo Ghezzi, Matteo Pradella
Context-Oriented Programming is a novel paradigm, born to easily manage adaptability in applications. Various context-oriented extensions of programming languages have been proposed in the literature (e.g., based on Java, Python, Ruby, Lisp, ...). We plan to apply and extend two of these implementations, originally proposed by us: ContextErlang, for concurrent/distributed systems based on Erlang/OTP, and JavaCtx, a lightweight Java extension that is compatible with many existing IDE tools and tool-chains. We also intend to consider different constructs and languages, and to try them out in the context of the SMSCom project.
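The core idea of context-oriented programming, behaviour that varies with dynamically activated layers, can be sketched in a few lines of Python. This is a toy illustration, not the ContextErlang or JavaCtx API:

```python
import contextlib

class Layers:
    """Dynamically activated layers, the core abstraction of COP."""
    def __init__(self):
        self.active = []

    @contextlib.contextmanager
    def activate(self, name):
        # Activate a layer for the dynamic extent of a with-block.
        self.active.append(name)
        try:
            yield
        finally:
            self.active.pop()

layers = Layers()

def greet():
    # Base behaviour, refined when the (made-up) 'formal' layer is active.
    if "formal" in layers.active:
        return "Good evening"
    return "Hi"

plain = greet()
with layers.activate("formal"):
    adapted = greet()
```

Real COP languages lift this pattern into the language itself, with layered method definitions and well-defined activation scoping rather than explicit checks.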

Title: Floyd languages for parallel and incremental parsing
Contact(s): Dino Mandrioli, Matteo Pradella
Parsing algorithms are essential for browsers of semi-structured data, for Natural Language Processing, and, of course, for compilers. Without parallel algorithms, browsing on future many-core hand-held devices will be too slow and too power-greedy to be practical. Classical deterministic algorithms unfortunately do not speed up on multicore architectures, because the pushdown machine model of Context-Free grammars is inherently sequential; general tabular algorithms (Earley, CKY) are also difficult to parallelize. Our recent advances in the theory of Operator-Precedence grammars make them attractive for parallel and incremental parsing: a long text can be arbitrarily split into chunks, which are parsed in parallel and combined into the final tree by means of precisely defined light transformations. Nothing like that is possible with general CF grammars or even deterministic LR(k) grammars. The project will start from a prototype generator of parallel parsers, called PAPAGENE, and will engineer it to produce efficient multicore implementations. We also plan to analyze its performance on various practical languages, such as JavaScript and HTML5.
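For readers unfamiliar with precedence-driven parsing, the following Python sketch shows the sequential baseline: a precedence-climbing parser for arithmetic expressions. It is a toy; PAPAGENE's parallel, chunk-based algorithm is considerably more involved:

```python
import re

PREC = {"+": 1, "-": 1, "*": 2, "/": 2}  # operator precedence levels

def tokenize(text):
    return re.findall(r"\d+|[+\-*/()]", text)

def parse(tokens):
    # Precedence climbing: binding decisions follow the precedence relations.
    def parse_expr(min_prec):
        node = parse_atom()
        while tokens and tokens[0] in PREC and PREC[tokens[0]] >= min_prec:
            op = tokens.pop(0)
            rhs = parse_expr(PREC[op] + 1)   # left-associative operators
            node = (op, node, rhs)
        return node

    def parse_atom():
        tok = tokens.pop(0)
        if tok == "(":
            node = parse_expr(1)
            tokens.pop(0)  # consume the matching ')'
            return node
        return int(tok)

    return parse_expr(1)

tree = parse(tokenize("2+3*4"))
```

The operator-precedence property exploited by the thesis is exactly what makes such decisions local, and therefore makes chunk-wise parallel parsing possible.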

Title: Monitoring and Supervision of Complex Banking Systems
Contact(s): Alessandro Margara, Giordano Tamburrelli
Banking systems are among the most complex distributed and heterogeneous systems. Managing such infrastructures requires tools and algorithms to detect, recognize, and signal anomalies or unpredicted behaviors that may jeopardize the correct functioning of the entire system.

The thesis is a collaboration with one of the largest banks in Europe (UniCredit) and aims at conceiving and developing novel and advanced monitoring techniques to ease the supervision of its infrastructure.

The thesis provides a mix of academic research and industrial application, which is a good opportunity to strengthen the resume of the participant.
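As a very simple example of the kind of monitoring technique involved, the sketch below flags latency samples that deviate strongly from the mean (a plain z-score test; the threshold and data are made up for illustration):

```python
import statistics

def detect_anomalies(samples, threshold=3.0):
    """Return the indices of samples whose z-score exceeds the threshold."""
    mean = statistics.mean(samples)
    stdev = statistics.pstdev(samples)
    if stdev == 0:
        return []  # constant signal: nothing can be an outlier
    return [i for i, x in enumerate(samples)
            if abs(x - mean) / stdev > threshold]

latencies = [12, 11, 13, 12, 11, 12, 95, 12, 13]  # ms; one obvious spike
anomalies = detect_anomalies(latencies, threshold=2.0)
```

Production-grade supervision would of course use streaming statistics and multivariate models rather than a single global z-score.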

Title: Tools and techniques for the development of (safety-)critical systems
Contact(s): Pierluigi San Pietro, Dino Mandrioli, Angelo Morzenti, Matteo Pradella, Matteo Rossi
In software-intensive systems software parts interact with a variety of physical phenomena and devices to reach global goals. Software-intensive systems are typically embedded, often have dependability requirements, and might be safety-critical with real-time constraints.
A (non-exhaustive) list of (safety-)critical systems includes the flight management systems of aircraft, the onboard systems of high-speed trains, avionics systems such as radars, and flexible manufacturing systems.
The design of dependable (and safety-critical) systems, in particular, requires great care and precision, as errors in their development can lead to potentially dire consequences.
It can greatly benefit from the use of formal methods, whose sound mathematical foundations make it possible, at least in principle, to employ formal verification techniques to analyze the properties (e.g., correctness) of the system under development.

The development of formal modeling and verification techniques for (safety-)critical systems faces a number of key challenges.
As far as modeling notations are concerned, some of the desired features include:
- a good level of expressiveness
- user-friendliness
- decidability

As far as verification techniques are concerned, the key challenges are mainly related to making the verification process as efficient as possible.

Some of these issues have been tackled and implemented through the development of plugins of the Zot bounded model/satisfiability checker (http://zot.googlecode.com).

Theses in this line of research can tackle one (or more) of the challenges listed above, either from a purely theoretical point of view, or from a practical one (which can lead to, for example, the application of the techniques mentioned above to real-life case studies, or the extension of the existing Zot formal verification tool).
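To give a flavour of bounded verification, the toy Python sketch below explores a transition system breadth-first up to a given bound, reporting the depth at which a "bad" state is first reached. It is a drastically simplified, explicit-state stand-in for what tools like Zot do symbolically; the example system is invented:

```python
def bounded_check(init, step, bad, bound):
    """Breadth-first search up to `bound` steps; returns the depth of the
    first 'bad' state found, or None if none is reachable within the bound."""
    frontier = {init}
    seen = set(frontier)
    for depth in range(bound + 1):
        if any(bad(s) for s in frontier):
            return depth
        frontier = {t for s in frontier for t in step(s)} - seen
        seen |= frontier
    return None

# Toy transition system: a counter that may increment (saturating at 10) or reset.
step = lambda s: {min(s + 1, 10), 0}
bad = lambda s: s == 4   # the property violation we are checking for

within_3 = bounded_check(0, step, bad, bound=3)   # not reachable in 3 steps
within_5 = bounded_check(0, step, bad, bound=5)   # reachable at depth 4
```

Bounded model/satisfiability checkers encode the same unrolling as a logical formula and hand it to a solver, which scales far beyond explicit enumeration.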

Title: Energy optimization under timing constraints
Contact(s): Danilo Ardagna, Matteo Rossi
The increasing complexity and size of Information and Communication Technology (ICT) systems has led to a significant increase in the energy that such systems consume.
Recent studies show that ICT accounts for 2-4% of global CO2 emissions, a share projected to reach up to 10% in 5-10 years.

Energy management and consumption is increasingly important also in the operation of industrial systems, including those where timing constraints are crucial, such as real-time and safety-critical ones. Very frequently, performance constraints and energy consumption reduction are conflicting goals; hence, the optimal trade-off between performance and energy consumption needs to be determined.

While the techniques to formally model and analyze timed systems have reached a certain level of maturity, thanks to the considerable advances achieved in the last few years in the so-called model checking field, very few, if any, of these techniques are tailored to take energy issues into account, in addition to timing ones.

The goal of the thesis is to devise a technique that combines
formal verification mechanisms and optimization ones, in order to
facilitate the design of energy-constrained systems that also
include timing properties (possibly real-time ones).
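A toy version of the performance/energy trade-off can be stated as follows: pick a frequency level per task so that total energy is minimal while the deadline is met. The brute-force Python sketch below illustrates this; the task sizes, frequency levels, and power figures are invented for illustration:

```python
from itertools import product

# Invented workload: per-task work in megacycles; (frequency MHz, power W) levels.
TASKS = [200, 300, 100]
LEVELS = [(500, 1.0), (1000, 2.5), (2000, 7.0)]
DEADLINE = 1.0  # seconds for the whole (sequential) task set

def cost(assignment):
    # Time of a task = work / frequency; energy = time * power at that level.
    time = sum(work / freq for work, (freq, _p) in zip(TASKS, assignment))
    energy = sum((work / freq) * power
                 for work, (freq, power) in zip(TASKS, assignment))
    return energy, time

# Enumerate every frequency assignment, keep the feasible ones, take the cheapest.
feasible = [(cost(a), a) for a in product(LEVELS, repeat=len(TASKS))
            if cost(a)[1] <= DEADLINE + 1e-9]
(best_energy, best_time), best_assignment = min(feasible)
```

A real technique would replace the enumeration with an optimization model and couple the deadline constraint to a formal timed model of the system, which is precisely the combination the thesis targets.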

Contact(s): Elisabetta Di Nitto, Santo Lombardo
The world of distributed computing is changing. Cloud computing is becoming very popular, and interesting distributed computation and data storage paradigms are being presented as solutions to problems concerning the management of large quantities of information. A notable example is Google's MapReduce approach to indexing web pages.
In this context, “noSQL” databases are seen as a valid alternative to RDBMSs for storing large quantities of simply-structured data. That is because “noSQL” data structures are naturally parallelizable, horizontally partitionable, and allow a schema-free data representation.
Of course, “noSQL” databases give up features that RDBMSs preserve, e.g., transactional operations, query optimization, and atomic writes.
This proposal aims to identify new solutions for supporting the execution of complex queries in “noSQL” databases while keeping their scalability characteristics (please contact prof. Di Nitto and Santo Lombardo for details).
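The horizontal-partitioning idea, and why complex queries are hard, can be seen in this toy Python key-value store: point lookups touch one shard, while a predicate query must scatter to all shards and gather the results. Names and API are illustrative:

```python
class ShardedStore:
    """Toy horizontally partitioned key-value store (the core 'noSQL' idea)."""
    def __init__(self, n_shards=4):
        self.shards = [dict() for _ in range(n_shards)]

    def _shard(self, key):
        # Hash partitioning: each key deterministically maps to one shard.
        return self.shards[hash(key) % len(self.shards)]

    def put(self, key, value):
        self._shard(key)[key] = value

    def get(self, key):
        # A point lookup touches exactly one shard: this is what scales.
        return self._shard(key).get(key)

    def scan(self, predicate):
        # A 'complex query' must be scattered to every shard and gathered.
        return [v for shard in self.shards
                for v in shard.values() if predicate(v)]

store = ShardedStore()
for i in range(10):
    store.put(f"user:{i}", {"id": i, "score": i * 10})
high = store.scan(lambda v: v["score"] >= 70)
```

Supporting complex queries efficiently means avoiding the full scatter, e.g., via secondary indexes or query-aware partitioning, while keeping the single-shard fast path intact.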

Contact(s): Elisabetta Di Nitto
Modern software is often built as an assembly of parts of various sizes, running on different computational resources. The various elements composing a software system are often owned by different parties or require different organizations and kinds of expertise to be built. Examples of systems fulfilling this definition are Future Internet Applications (FIAs). These are systems of systems, merging together input and output from people, software and things alike, and can vary from an online trading application for mobile phones, to the system for traffic management in a city, to the flight control system of an aircraft. These systems have different levels of complexity and criticality, but all involve the integration of parts typically developed by different organizations or sub-organizations. Given the modern globalization of business, these organizations tend to be distributed worldwide; for example, the online trading application has to interact with a service providing trading information, with an authentication system able to check the identity of the trader, and with the trader’s banking service to perform capital movements. In this Global Software Engineering (GSE) context it is clear that we need to face problems such as the difficulty for different individuals and organizations to cooperate, to identify problems quickly, to coordinate their efforts, to establish mutual trust, to be aware of what the others are doing in the global team, and, in general, to fulfill the business imperatives of keeping cost and risk under control and of increasing efficiency.
The objective of this work is to build a tool that, based on the information available on a forge and concerning existing projects and participation of individuals, supports the creation of virtual software development networks focusing on specific project ideas (please contact prof. Di Nitto for details).

Contact(s): Elisabetta Di Nitto
Cloud infrastructures live in an open world characterized by continuous changes in the environment and in the requirements they have to meet. These changes occur unpredictably and are outside the control of the cloud provider. Nevertheless, cloud-based services must be provided with different Service Level Agreements (SLAs) in terms of reliability, security and performance. Thus, there is a need to build a proper framework to enable the identification, selection, and execution of run-time self-adaptation strategies. These strategies can be identified by exploiting various approaches, including optimization techniques and game theory. The objective of this work is not to focus on how to derive the specific strategy, but on how to create a middleware layer that supports the activation of the strategy that best suits the current situation, and on a programming model for defining the strategies so that they are suitable for execution (please contact prof. Di Nitto for details).
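A minimal sketch of such a middleware layer, in Python and with invented strategy names, might register (guard, action) pairs and activate the first strategy whose guard matches the observed metrics:

```python
class AdaptationMiddleware:
    """Registers adaptation strategies and activates the best-suited one."""
    def __init__(self):
        self.strategies = []  # ordered list of (guard, action) pairs

    def register(self, guard, action):
        self.strategies.append((guard, action))

    def adapt(self, metrics):
        # Activate the first strategy whose guard matches the current situation.
        for guard, action in self.strategies:
            if guard(metrics):
                return action(metrics)
        return "no-op"

mw = AdaptationMiddleware()
mw.register(lambda m: m["latency_ms"] > 200, lambda m: "scale-out")
mw.register(lambda m: m["error_rate"] > 0.05, lambda m: "restart-replica")

action = mw.adapt({"latency_ms": 350, "error_rate": 0.01})
```

The thesis's programming model would replace the hard-coded guard order with a principled selection mechanism (e.g., optimization or game-theoretic ranking) and make the strategies themselves first-class, executable artifacts.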

Title: Green Move: developing third-generation vehicle sharing systems
Contact(s): Gianpaolo Cugola, Angelo Morzenti, Matteo Rossi
Green Move (http://www.greenmove.polimi.it/) is a project funded by Regione Lombardia, which aims at developing an open, electric vehicle sharing solution for public/private partners.
It is open, as different partners may contribute with cars/charging stations/additional services.
It is highly innovative: no keys, no personnel, everything happens through software, using an Android smartphone as the main interface to the system.
It targets a large portfolio of vehicles, not only cars: scooters, small cars, etc. The project has already acquired three vehicles for practical experimentation, and more are coming.
Developing such a complex and multi-faceted system poses many challenges in terms of the software infrastructure that allows all the entities to communicate, the design of the protocols on which the system is based, the mobile applications through which the system is managed, the overall management of the fleet.
Theses in the scope of the Green Move project can tackle these and other aspects.

Short demo video: http://www.youtube.com/watch?v=SRbP1lRrp1k

Contact(s): Danilo Ardagna
The reduction of carbon dioxide emissions targeted for the next years is fostering an increased utilization of renewable energy sources (green energies) and, more in general, a decreased impact on the environment (carbon footprint) of human activities. ICT plays a key role in this greening process, as ICT solutions can greatly improve the environmental performance of other sectors of the world economy. However, the potential impact of the carbon emissions of the ICT sector itself also has to be carefully considered. Recent studies show that service centers account for 2-4% of global CO2 emissions, a figure projected to reach up to 10% in 5-10 years, fuelled by the expected massive adoption of Cloud services. Nowadays, service centers consume as much power as medium-size cities, and Cloud providers are among the largest customers of electricity providers. So, one of the main challenges for the adoption of Cloud services is to reduce their energy consumption and carbon emissions, while keeping up with the high growth rate of the associated data storage, server and communication infrastructures.

The thesis aims at defining a unifying energy load management framework that takes into account both the economic perspective of Cloud providers (performance levels, energy costs, etc.) and the overall efficiency of the energy distribution system (load/production balancing, load forecasting and guarantees). The goal is to minimize the usage of brown energy sources and reduce the environmental footprint of Cloud systems, enabling cooperation among Cloud service providers, communication networks, and the electrical grid in multiple scenarios and with different cooperation levels.

Prerequisites: Optimization models (linear and non-linear programming, AMPL, etc.)

Contact(s): Elisabetta Di Nitto, Danilo Ardagna
Cloud infrastructures live in an open world characterized by continuous changes in the environment and in the requirements they have to meet. These changes occur unpredictably and are outside the control of the cloud provider. Nevertheless, cloud-based services must be provided with different Service Level Agreements (SLAs) in terms of reliability, security and performance.
The aim of this thesis is to develop solutions for performance guarantee that are able to dynamically adapt the resources of the cloud infrastructure in order to satisfy SLAs and to minimize costs. Capacity allocation and load balancing techniques able to coordinate multiple distributed resource controllers working in geographically distributed cloud sites will be developed.

Prerequisites: Optimization models (linear and non-linear programming, AMPL, etc.)
Performance Models (queueing networks)
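As a taste of the queueing-network prerequisite, the sketch below sizes a cluster using the M/M/1 mean response-time formula 1/(mu - lambda): it finds the smallest number of replicas, with load split evenly, that meets a response-time SLA. All figures and the function name are invented for illustration:

```python
import math

def min_servers_for_sla(arrival_rate, service_rate, sla_response_time):
    """Smallest number of independent M/M/1 servers (load split evenly)
    whose mean response time 1/(mu - lambda/n) meets the SLA."""
    # Start from the minimum count needed just for stability (lambda/n < mu).
    n = max(1, math.ceil(arrival_rate / service_rate))
    while True:
        lam = arrival_rate / n  # per-server arrival rate
        if lam < service_rate and 1.0 / (service_rate - lam) <= sla_response_time:
            return n
        n += 1

# 90 req/s arriving, each server handles 10 req/s, SLA: mean response <= 0.5 s.
servers = min_servers_for_sla(arrival_rate=90.0, service_rate=10.0,
                              sla_response_time=0.5)
```

The thesis would replace this single-queue toy with full queueing-network models and couple the sizing decision to cost minimization across geographically distributed sites.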

Contact(s): Elisabetta Di Nitto, Danilo Ardagna
Models play a central role in software engineering. They may be used to reason about requirements, to identify possible missing parts or conflicts. They may be used at design time to analyze the effects and trade-offs of different architectural choices before starting an implementation, anticipating the discovery of possible defects that might be uncovered at later stages, when they might be difficult or very expensive to remove. They may also be used at run time to support continuous monitoring of compliance of the running system with respect to the desired model.

However, models are abstractions of real systems; hence, design-time predictions need to be validated once the system is deployed in a real environment. For example, in cloud environments, the mix of requests assumed at design time might differ from the one observed in the production system, depending on customer preferences. Similarly, the performance or reliability profile of certain Cloud resources in practice may differ from the figures assumed at design time.
The aim of this thesis is to define a feedback loop between the operational systems deployed in the cloud and software design. Quality and cost models used at design time will be kept alive at run time and refined by exploiting the information gathered by monitoring the underlying cloud system. The feedback loop will integrate run-time data into design-time models for fine tuning and will provide recommendations to the software designer to improve the design-time QoS and cost estimates.

Title: A survey of simulators for large distributed systems
Contact(s): Elisabetta Di Nitto, Nicolò Maria Calcavecchia
The development of large decentralized distributed systems (e.g., peer-to-peer systems) is challenging. In order to speed up the development process, researchers typically implement a prototype of the system using ad-hoc simulators. Different research groups have proposed simulators with various characteristics. The work proposed in this thesis aims to investigate the conceptual and practical differences among these simulators.

Title: Extension of a development environment for the SelfLet autonomic framework
Contact(s): Elisabetta Di Nitto, Nicolò Maria Calcavecchia
A SelfLet is a self-sufficient piece of software which is situated in some kind of logical or physical network, where it can interact and communicate with other SelfLets (http://selflet.sourceforge.net/).
The development of a SelfLet-based system involves different aspects such as services, their implementation, autonomic policies, etc. In this work we propose to extend an existing IDE for developing SelfLet-based systems. The IDE is built as an Eclipse plugin.

Title: Version consistent dynamic reconfiguration of SelfLet services
Contact(s): Elisabetta Di Nitto, Nicolò Maria Calcavecchia
A SelfLet is a self-sufficient piece of software which is situated in some kind of logical or physical network, where it can interact and communicate with other SelfLets (http://selflet.sourceforge.net/). SelfLets make it easy to change the implementation of a service at runtime. However, updates of services must ensure the consistency of the application state. This work proposes the integration of component dynamic reconfiguration techniques within the SelfLet framework.

Title: Development of a web-based experiment configuration and monitoring system for large experiments
Contact(s): Elisabetta Di Nitto, Nicolò Maria Calcavecchia
Large distributed systems (i.e., composed of thousands of nodes) can be difficult to deploy and monitor. With this work we aim at developing a web-based system that helps developers (i) easily define the deployment configuration of the system and (ii) specify the major properties to be monitored and view them at runtime. The system will offer an easily accessible web-based interface allowing the monitoring of multiple systems at runtime.

Title: Formal Analysis of Distributed Self-organization Algorithms
Contact(s): Elisabetta Di Nitto, Daniel J. Dubois
Distributed self-organization algorithms are a class of algorithms that, using simple interaction rules, are able to give a high-level property to a system without using any centralized coordination mechanism. Some well-known examples of these algorithms are used in peer-to-peer file sharing applications. Their advantage is the capability of scaling up to thousands of nodes and a high resistance to failures. However, they usually have a slower convergence rate than their centralized counterparts. The purpose of this thesis is to analyze some existing self-organization algorithms and to study properties such as their convergence rate and their level of fault tolerance using formal methods and probabilistic analysis.
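A classic example amenable to such analysis is gossip-based averaging: each node repeatedly averages its value with a randomly chosen peer, and the values converge to the global mean without any coordinator. A small seeded Python simulation (parameters invented for illustration):

```python
import random

def gossip_round(values, rng):
    # One round: every node averages its value with a random peer;
    # each pairwise exchange preserves the global sum.
    values = list(values)
    for i in range(len(values)):
        j = rng.randrange(len(values))
        avg = (values[i] + values[j]) / 2
        values[i] = values[j] = avg
    return values

rng = random.Random(42)  # seeded for reproducibility
values = [float(v) for v in range(10)]  # initial values 0..9, true mean 4.5
for _ in range(50):
    values = gossip_round(values, rng)
spread = max(values) - min(values)
```

The formal-methods part of the thesis would replace such simulation with, e.g., probabilistic model checking, proving convergence-rate and fault-tolerance bounds instead of observing them.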

Title: Optimization of Resources Replication in a Cloud Computing System
Contact(s): Danilo Ardagna, Daniel J. Dubois
Cloud computing is a new paradigm in which the execution of an application and/or the storage of its data are managed by an external provider. The introduction of cloud computing is allowing companies and individuals to externalize their private computing infrastructure and to flexibly adapt it to their usage needs over time. This flexibility requires, on the cloud provider's side, the capability to quickly allocate and deallocate computational resources to satisfy these needs. This thesis will focus on the problem of improving the speed of resource allocation in a cloud computing infrastructure.

Title: Analysis and Exploitation of Gamification Models to Improve the Quality of User Applications
Contact(s): Elisabetta Di Nitto, Daniel J. Dubois
The coordination of large-scale distributed software architectures may benefit from self-organizing approaches inspired by the biological world, such as the emergence of a collective behavior in large colonies. The purpose of this thesis is to analyze and categorize existing self-organizing approaches and find common models or patterns for using them in a software development process.

Title: Self-organizing Design Patterns for Distributed Software Architectures
Contact(s): Elisabetta Di Nitto, Daniel J. Dubois
The coordination of large-scale distributed software architectures may benefit from self-organizing approaches inspired by the biological world, such as the emergence of a collective behavior in large colonies. The purpose of this thesis is to analyze and categorize existing self-organizing approaches and find common models or patterns for using them in a software development process.

Title: Programming languages for Self-adaptive Service Compositions
Contact(s): Leandro Sales Pinto, Gianpaolo Cugola
To increase development efficiency and shorten time-to-market, modern systems are typically developed by composing externally available services provided by third-party partners. In this setting, applications are built as heterogeneous compositions, which mainly depend on the components and services they integrate. To cope with unpredicted changes and failures, such systems need to be adaptive. We believe, however, that the currently available languages make it quite hard to develop such self-adaptive compositions, especially because adaptation strategies are defined imperatively and intertwined with the application logic, yielding applications that are difficult to maintain and evolve.

In this research field, and in the context of the <a href="http://www.dsol-lang.net">DSOL</a> project, we are interested in minimizing the effort required to implement such self-adaptive service compositions, which are also able to cope with unexpected situations, by proposing a new declarative approach together with an innovative run-time environment. If you are interested and want to find out about the specific theses available today, please contact us by email.

Title: High-performance algorithms on parallel hardware
Contact(s): Gianpaolo Cugola, Alessandro Margara
Today's hardware offers massively parallel computational resources at low cost. CPUs include an increasing number of processing cores, and general-purpose programming APIs are now available for graphics cards (GPUs), which often embed hundreds or thousands of cores.

Exploiting these resources in an efficient way is both challenging and
exciting. Algorithms often need to be completely re-designed to run on
parallel hardware. In this research field, we are interested both in solving
specific application problems, by implementing and evaluating parallel
algorithms, and in extracting general design guidelines and methodologies to
reduce the effort and maximize the outcome when writing parallel algorithms.
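A minimal data-parallel pattern of the kind discussed here is: split the input, reduce each chunk in a worker, and combine the partial results. A Python sketch (threads stand in for cores; a real CPU-bound kernel would use processes or a GPU, and all names are illustrative):

```python
from concurrent.futures import ThreadPoolExecutor

def chunked(data, n):
    # Split the input into n roughly equal chunks, one per worker.
    size = (len(data) + n - 1) // n
    return [data[i:i + size] for i in range(0, len(data), size)]

def parallel_sum_of_squares(data, workers=4):
    # Map: each worker reduces its own chunk independently.
    # Reduce: combine the partial results on the caller's side.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(lambda chunk: sum(x * x for x in chunk),
                            chunked(data, workers))
    return sum(partials)

total = parallel_sum_of_squares(list(range(1000)))
```

The interesting design questions, chunk granularity, memory layout, and avoiding synchronization on the combine step, are exactly where the general guidelines the research targets come in.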

For the specific theses available today, please contact us by email.

Title: Social Mashlight
Contact(s): Luciano Baresi, Sam Jesus Alejandro Guinea Montalvo
Mashlight is a functional mashup infrastructure developed at Politecnico. It allows one to compose chunks of processes, along with their interfaces, and define user-centered applications. The framework provides very simple means to control the flow and supports very diverse pieces and widgets as primitive bricks. The work proposed here originates from this experience and aims to extend it by addressing the “social” dimension of the Web. The extension foreseen here is to distinguish between the integration of functionality and the integration of user interfaces, which might be customized according to different needs, and also to identify suitable building blocks that exploit the social features of the Web and allow one to create his/her own specialized application. As a side effect, this could also be seen as a first step towards composing social networks and trying to standardize their interfaces.

Title: Integrated verification of UML models
Contact(s): Luciano Baresi, Matteo Rossi
This proposal starts from an ongoing EU project, called MADES, where we have already defined a possible formalization of some UML diagrams through temporal logic. Currently, we can “only” verify models with particular characteristics, and results are provided in an awkward textual format. The idea now is to extend the work in several directions: (a) formalize and integrate other UML diagrams, (b) work on an integrated tool-suite based on EMF and Eclipse, (c) stress the customizability of the defined semantics and provide means to ease the process, (d) define a suitable way to specify the properties of interest, and (e) address the problem of visualizing the results of verification through proper annotations of UML diagrams.

Title: Abstractions, processing algorithms, and communication protocols for Event-Based Middleware
Contact(s): Gianpaolo Cugola, Alessandro Margara
Several complex systems operate by observing the primitive events that happen
in the external environment, interpreting and combining them to identify
higher level composite events or situations. The task of identifying composite
events from primitive ones is performed by an Event Processing middleware,
which processes incoming events based on a set of user-defined rules.

We are active in this research area at various levels: we are investigating
new languages and abstractions to specify complex situations in an easy and
intuitive way. We are designing new algorithms to increase the performance of
the system, and new communication protocols to distribute the processing
effort over multiple machines in large-scale scenarios.
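As a minimal illustration of composite-event detection, the Python sketch below derives a "fire" event when "smoke" is followed by "high-temp" within a time window. The rule, events, and API are invented for illustration; real event-processing middleware compiles such rules into efficient matching automata:

```python
from collections import deque

class EventProcessor:
    """Toy CEP engine for one rule: 'smoke' then 'high-temp' within a window."""
    def __init__(self, window=5.0):
        self.window = window
        self.history = deque()   # recent primitive events: (type, timestamp)
        self.detected = []       # derived composite events

    def publish(self, event_type, timestamp):
        self.history.append((event_type, timestamp))
        # Drop primitive events that fell out of the time window.
        while self.history and timestamp - self.history[0][1] > self.window:
            self.history.popleft()
        # Rule: a 'high-temp' with a recent 'smoke' yields a 'fire' event.
        if event_type == "high-temp":
            if any(e == "smoke" for e, _t in self.history):
                self.detected.append(("fire", timestamp))

ep = EventProcessor(window=5.0)
ep.publish("smoke", 1.0)
ep.publish("high-temp", 3.0)    # composite 'fire' derived at t=3.0
ep.publish("high-temp", 20.0)   # the smoke has expired: nothing derived
```

The research questions above map directly onto this toy: richer rule languages replace the hard-coded condition, better algorithms replace the linear history scan, and distribution protocols decide on which machine each rule fragment runs.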

For the specific theses available today, please contact us by email.

Title: Cross-layer adaptations in cloud platforms
Contact(s): Luciano Baresi, Sam Jesus Alejandro Guinea Montalvo
This is the era of cloud computing, where applications run within virtual machines hosted by remote infrastructures. We are already used to considering all these elements as services (SaaS, PaaS, and IaaS), but now it is time we start governing the whole solution in a more integrated way. Oftentimes, problems at the application level can be better solved by changing the virtual machine or even by deploying a new/different one. This hypothesis requires a governance infrastructure that is able to probe and monitor the different elements (services), correlate the retrieved information, identify solutions at the proper levels, and react accordingly and in an integrated way. The aim of this work is to continue the work we are already carrying out, complete the theoretical framework behind this cross-layer governance, release a suitable implementation of the framework, and evaluate it on significant case studies.

Title: Smart requirements for dynamic software product lines
Contact(s): Luciano Baresi
Requirements elicitation should be a key activity in any development process. This is also the case when one starts thinking of adaptive systems or even families of dynamically changing products. Besides the “usual” functional requirements and foreseen qualities of service, these products also require specific features to elicit the adaptation capabilities and variability that the system-to-be should embed. Besides identifying precise notations to render the requirements, the work also aims to study innovative analysis capabilities for the identification of possible problems as soon as they manifest in the development process. The work builds on FLAGS, which is a requirements elicitation approach developed at Politecnico, and would like to emphasize the idea of product lines, the dynamism they embed, and also the capability of analyzing produced models.

Title: HTML 5: comprehensive testing approach
Contact(s): Luciano Baresi, Sam Jesus Alejandro Guinea Montalvo
Show details
HTML 5 is emerging as a very promising solution for implementing cross-platform applications, that is, applications that can run seamlessly on different devices. The role and importance of these applications are increasing, and thus their “correctness” must be assessed more concretely. This is why this work proposes the development of an innovative and comprehensive solution for testing HTML 5 applications. The idea is to cover the functional aspects, but also possible problems with the graphical layout and the interactions with external components/services.

Title: HTML 5: model-based development
Contact(s): Luciano Baresi, Sam Jesus Alejandro Guinea Montalvo
Show details
HTML 5 is emerging as “the solution” for the development of applications that run on multiple platforms (Apple and Amazon’s Kindle reader are prominent examples). To this end, this work aims to develop an innovative approach to the development of cross-platform mobile applications. These applications are not only properly linked Web pages: they also comprise advanced input capabilities and interactions with external elements. Well-known design models are the starting point, while HTML 5, together with (ideally limited) parts written in proprietary languages, constitutes the artifacts that should be produced automatically through suitable transformations.

Title: Scalable solutions for ubiquitous massive systems
Contact(s): Luciano Baresi, Sam Jesus Alejandro Guinea Montalvo
Show details
More and more applications are highly distributed, ubiquitous, and massive: they integrate a large number of different parties (nodes) in a very distributed environment. Examples include environmental monitoring, wildlife tracking, intelligent agriculture, home automation, building monitoring and control, smart transportation, and emergency management. These applications require suitable middleware infrastructures that are able to coordinate the large number of participants, but also to provide reliable and robust communication means. The work starts from an existing solution, called A-3, and aims to extend it and migrate its key features onto devices such as smartphones. Suitable demonstrators will help assess the work done.

Title: Middleware abstractions and applications for Wireless Sensor and Actuator Networks
Contact(s): Gianpaolo Cugola, Alessandro Sivieri
Show details
We explore the Wireless Sensor and Actuator Networks (WSAN) research area, from programming languages to middleware abstractions, to applications in specific domains (e.g., building monitoring), and, when necessary, the electronics used for sensing data from the external world. We focus on building systems, so we own and use various WSAN devices to test applications not only in simulation but also in the real world. For the specific theses available today, please contact us by email.

Title: RESTful Design Patterns
Contact(s): Carlo Ghezzi, Mauro Caporuscio
Show details
In software engineering, a design pattern is a general reusable solution to a commonly occurring problem within a given context in software design. It is a description specifying how to solve a problem that can occur in many different situations. The goal of this thesis is twofold: (1) survey the state of the art in RESTful Services development, trying to filter out common design choices, and (2) formalize such commonalities into a set of design patterns adhering to REST principles.

Download complete description
Title: Formalization and Verification of RESTfulness
Contact(s): Carlo Ghezzi, Mauro Caporuscio
Show details
REST and RESTful Web services have recently emerged as a promising alternative approach to simplify the plumbing required to build service-oriented architectures. The goal of this thesis is to formalize the semantics prescribed by the REST architectural style – e.g., the REST principles of Addressability, Statelessness, Connectedness, and Uniformity – and to develop a tool to check the RESTfulness of a given Web service implementation.

Download complete description
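
To make the flavor of such a RESTfulness checker concrete, here is a minimal sketch that checks two of the principles named above (Uniformity and Statelessness) over a recorded trace of HTTP interactions. The trace format, field names, and checks are illustrative assumptions, not the tool proposed by the thesis.

```python
# Hypothetical sketch: checking two RESTfulness properties over a recorded
# trace of HTTP interactions. The trace format and all field names are
# illustrative assumptions.

STANDARD_METHODS = {"GET", "PUT", "POST", "DELETE", "HEAD", "OPTIONS", "PATCH"}
SAFE_METHODS = {"GET", "HEAD", "OPTIONS"}  # safe methods must not change server state

def check_uniformity(trace):
    """Uniformity: only standard HTTP methods, used with their semantics."""
    violations = []
    for req in trace:
        if req["method"] not in STANDARD_METHODS:
            violations.append(f"non-standard method {req['method']} on {req['url']}")
        if req["method"] in SAFE_METHODS and req.get("state_changed"):
            violations.append(f"safe method {req['method']} changed state at {req['url']}")
    return violations

def check_statelessness(trace):
    """Statelessness: no request may rely on server-side session context."""
    return [f"session cookie used at {req['url']}"
            for req in trace if "session_id" in req.get("cookies", {})]

trace = [
    {"method": "GET",   "url": "/orders/7", "cookies": {}, "state_changed": False},
    {"method": "FETCH", "url": "/orders",   "cookies": {}, "state_changed": False},
    {"method": "GET",   "url": "/cart",
     "cookies": {"session_id": "abc"}, "state_changed": True},
]

for v in check_uniformity(trace) + check_statelessness(trace):
    print(v)
```

A real tool would of course work against a formal semantics of the style rather than ad hoc predicates, and would also need to cover Addressability and Connectedness; the sketch only suggests how violations can be reported per principle.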
Title: Architecture-Based Quality Evaluation under Uncertainty
Contact(s): Raffaela Mirandola
Show details
The accuracy of architecture-based quality evaluations depends on a number of parameters that need to be estimated, such as environmental factors or system usage. Researchers have tackled this problem by including uncertainties in architecture evaluation models and solving them analytically and with simulations. We wish to investigate novel approaches for dealing with this uncertainty and for precisely characterizing specific quality attributes (please contact prof. Mirandola for details).

Title: Quality Driven Exploration of Model Transformation Spaces
Contact(s): Carlo Ghezzi, Raffaela Mirandola
Show details
The goal of this research is to develop an integrated environment where it is possible to verify that a software system has certain non-functional properties. Although several model-driven approaches exist to predict quality attributes from system models, they still lack the proper level of automation, and when a problem arises, the identification of a solution is still entirely up to the engineer and his/her experience. (Please contact prof. Ghezzi and prof. Mirandola for further details.)

Title: Adaptive software solutions for energy grids
Contact(s): Carlo Ghezzi
Show details
We wish to explore scenarios in which people (at home, outdoor, while moving) interact with the environment in an energy-aware manner. We wish to investigate how the techniques developed to support runtime software adaptation can be applied in this context to solve innovative problems. In particular, we wish to explore the development of mobile applications that can improve user awareness.
(Please contact prof. Carlo GHEZZI for additional details)

Title: Making software self-healing via evolutionary computation methods
Contact(s): Carlo Ghezzi
Show details
We wish to explore what has been done so far in the field of evolutionary/genetic computing to support program repair.
We wish to assess how these results can be applied at the architecture level to support self-adaptive software architectures.
(Please contact prof. Carlo GHEZZI for additional details)

Title: Incremental syntax-driven runtime verification
Contact(s): Carlo Ghezzi
Show details
The goal is to make verification efficient at run time by reusing verification results across different versions of the artifact being verified. This avoids redoing unnecessary verification steps (i.e., steps whose results are unaffected by the change) as we progress from one version of the artifact to the next. The proposed work will follow a syntax-driven approach to detect the minimum portion of the artifacts that must be analyzed from scratch.

(Please contact prof. Carlo GHEZZI for additional details)
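
The reuse idea can be illustrated with a toy example: cache verification results per syntax subtree, keyed by a structural hash, so that after an edit only the subtrees whose hash changed are re-verified. The tree shape, the trivial "safety" property, and the hashing scheme below are illustrative assumptions, not the approach the thesis will develop.

```python
# Hypothetical sketch of syntax-driven incremental verification: results are
# cached per subtree, so after an edit only subtrees whose structural hash
# changed are re-verified. Property and tree shape are illustrative.

import hashlib

cache = {}       # subtree hash -> verification result
work_done = []   # records which subtrees were actually (re)verified

def tree_hash(node):
    """Structural hash of a (label, children) syntax subtree."""
    label, children = node
    h = hashlib.sha256(label.encode())
    for child in children:
        h.update(tree_hash(child).encode())
    return h.hexdigest()

def verify(node):
    """Toy property: a subtree is safe iff no node is labeled 'unsafe'."""
    h = tree_hash(node)
    if h in cache:                   # subtree unchanged: reuse cached result
        return cache[h]
    label, children = node
    work_done.append(label)          # this subtree must be analyzed from scratch
    result = label != "unsafe" and all(verify(c) for c in children)
    cache[h] = result
    return result

v1 = ("root", [("a", []), ("b", [("c", [])])])
print(verify(v1), sorted(work_done))   # first run: every subtree is verified

work_done.clear()
v2 = ("root", [("a", []), ("b", [("unsafe", [])])])  # edit one leaf under "b"
print(verify(v2), sorted(work_done))   # only the changed path is re-verified
```

Note how, on the second run, the untouched subtree rooted at "a" is never re-analyzed: only the edited leaf and its ancestors are verified from scratch, which is precisely the saving the syntax-driven approach aims for.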