Docker

A study on performance measures for auto-scaling CPU-intensive containerized applications

Auto-scaling of containers can leverage performance measures from different layers of the computational stack. This paper investigates the problem of selecting the most appropriate performance measure to trigger auto-scaling actions aimed at guaranteeing QoS constraints. First, we analyze, under different workload scenarios, the correlation between absolute and relative usage measures and how they can influence resource allocation decisions. Absolute and relative measures can assume quite different values for the same workload.
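
To make the distinction concrete, the following is a minimal sketch (not taken from the paper) of how an absolute CPU measure (cores actually consumed) and a relative one (fraction of the container's CPU allocation) can diverge for the same workload; the function names and the 2-core/8-core quotas are illustrative assumptions.

```python
def absolute_cpu_usage(cpu_time_delta_s: float, interval_s: float) -> float:
    """CPU cores consumed over the interval, independent of the allocation."""
    return cpu_time_delta_s / interval_s

def relative_cpu_usage(cpu_time_delta_s: float, interval_s: float,
                       allocated_cores: float) -> float:
    """Fraction of the allocated CPU quota actually used over the interval."""
    return cpu_time_delta_s / (interval_s * allocated_cores)

# The same 1.5 s of CPU time consumed during a 1 s interval:
print(absolute_cpu_usage(1.5, 1.0))        # 1.5 cores, whatever the quota
print(relative_cpu_usage(1.5, 1.0, 2.0))   # 0.75 of a 2-core quota
print(relative_cpu_usage(1.5, 1.0, 8.0))   # ~0.19 of an 8-core quota
```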

Autonomic orchestration of containers: Problem definition and research challenges

Today, a new technology is changing the way cloud platforms are designed and managed: the container. A container is a software environment in which an application or application component is installed together with its library dependencies, binaries, and the basic configuration needed to run it. Container technology promises to solve many cloud application issues, for example the application portability problem and the virtual machine performance overhead problem.

Measuring Docker performance: what a mess!!!

Today, a new technology is changing the way platforms for the internet of services are designed and managed: the container (e.g., Docker and LXC). The internet of services industry is adopting container technology both for internal use and as a commercial offering. The use of containers as the base technology for large-scale systems opens many challenges in the area of run-time resource management, for example auto-scaling, optimal deployment, and monitoring.
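
As a small monitoring sketch (an assumption of this listing, not part of the paper), per-container usage can be sampled from the Docker CLI; it relies on `docker stats --format "{{json .}}"` emitting one JSON object per container, and field names such as "Name" and "CPUPerc" may vary across Docker versions.

```python
import json
import subprocess

def sample_container_stats() -> list:
    """Take one snapshot of per-container resource usage via the Docker CLI."""
    out = subprocess.run(
        ["docker", "stats", "--no-stream", "--format", "{{json .}}"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [json.loads(line) for line in out.splitlines() if line.strip()]

for stats in sample_container_stats():
    print(stats.get("Name"), stats.get("CPUPerc"))
```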

Auto-scaling of containers: the impact of relative and absolute metrics

Today, the cloud industry is adopting container technology both for internal use and as a commercial offering. The use of containers as the base technology for large-scale systems opens many challenges in the area of run-time resource management. This paper addresses the problem of selecting the most appropriate performance metrics to trigger auto-scaling actions. Specifically, we investigate the use of relative and absolute metrics. Results demonstrate that, for CPU-intensive workloads, the use of absolute metrics enables more accurate scaling decisions.
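
The sketch below contrasts two hypothetical threshold-based scaling rules (not the authors' controller): an absolute rule sized on observed CPU demand in cores, and a Kubernetes-HPA-style relative rule, replicas × utilisation / target. With the relative rule, the same demand yields different replica counts when the CPU limits change; the targets and limits in the example are assumed values.

```python
import math

def replicas_for_demand(cores_used: float, target_cores_per_replica: float) -> int:
    """Absolute rule: provision enough replicas for the observed CPU demand."""
    return max(1, math.ceil(cores_used / target_cores_per_replica))

def replicas_for_utilisation(current_replicas: int, utilisation: float,
                             target_utilisation: float) -> int:
    """Relative rule (HPA-style): current_replicas * utilisation / target."""
    return max(1, math.ceil(current_replicas * utilisation / target_utilisation))

# Same workload (4.5 cores of demand on 3 replicas) under two CPU limits:
# 2-core limits give utilisation 0.75, 4-core limits give 0.375, so the
# relative rule reacts differently even though the demand is unchanged.
print(replicas_for_demand(4.5, target_cores_per_replica=1.5))                  # 3
print(replicas_for_utilisation(3, utilisation=0.75, target_utilisation=0.5))   # 5
print(replicas_for_utilisation(3, utilisation=0.375, target_utilisation=0.5))  # 3
```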
