The Fourth International Workshop on Large-Scale Testing (LT 2015)

Feb. 1, 2015
Austin, TX, USA
Co-located with ICPE 2015
The 6th International Conference on Performance Engineering
Important Dates
Research papers: Nov. 16, 2014
Paper notifications: Nov. 30, 2014
Camera ready: Dec. 10, 2014
Industry talks: Jan. 9, 2015
Talk notifications: Jan. 14, 2015
Workshop date: Feb. 1, 2015
Past LT Workshops
LT 2012
LT 2013
LT 2014

Load Testing Elasticity and Performance Isolation in Shared Execution Environments

Samuel Kounev, University of Würzburg

Talk Abstract:
The inability to provide performance guarantees is a major obstacle to the widespread adoption of shared execution environments based on paradigms such as virtualization and cloud computing. Performance is a major distinguishing factor between different service offerings. To make such offerings comparable, novel metrics and techniques are needed that can measure and quantify the performance of shared execution environments under load, e.g., public cloud platforms or general virtualized service infrastructures. In this talk, we first discuss the inherent challenges of providing performance guarantees in the presence of highly variable workloads and load spikes. We then present novel metrics and techniques for shared execution environments that specifically take into account the dynamics of modern service infrastructures. We consider both environments where virtualization is used as the basis for resource sharing, e.g., Infrastructure-as-a-Service (IaaS) offerings, and multi-tenant Software-as-a-Service (SaaS) applications, where the whole hardware and software stack is shared among different customers. We focus on evaluating two aspects: i) the ability of the system to provision resources in an elastic manner, i.e., system elasticity, and ii) the ability of the system to isolate the different applications and customers sharing the physical infrastructure in terms of the performance they observe, i.e., performance isolation. We discuss the challenges in measuring and quantifying these two properties and present existing approaches to tackling them. Finally, we discuss open issues and emerging directions for future work.
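To illustrate the first aspect, system elasticity can be quantified by comparing the resource demand induced by the workload with the supply actually provisioned over time. The sketch below is only an illustration of this idea, not the metrics presented in the talk; the function name and the numbers are hypothetical.

```python
# Hypothetical sketch: quantifying elasticity by comparing resource demand
# with provisioned supply over time. Illustrative only, not the talk's metrics.

def provisioning_accuracy(demand, supply):
    """Return (under, over): average resource units missing while
    under-provisioned and average surplus units while over-provisioned,
    per time step."""
    n = len(demand)
    under = sum(max(d - s, 0) for d, s in zip(demand, supply)) / n
    over = sum(max(s - d, 0) for d, s in zip(demand, supply)) / n
    return under, over

# Example: demand spikes at step 2; the platform scales up one step late,
# then scales back down one step late after the spike ends.
demand = [2, 2, 6, 6, 3, 3]
supply = [2, 2, 2, 6, 6, 3]
under, over = provisioning_accuracy(demand, supply)
# A perfectly elastic system would yield under == over == 0.
```

A lower value on both components means the system tracks its demand more closely, which is exactly the behavior a load test for elasticity tries to expose.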

Short Biography of the Speaker:
Samuel Kounev is a Professor and Chair of Computer Science at the Department of Computer Science, University of Würzburg, Germany. His research focuses on developing methods, techniques, and tools for the engineering of dependable and efficient software systems. Relevant research areas include: software design, modeling, and architecture-based analysis; systems benchmarking, monitoring, and experimental analysis; and autonomic and self-aware systems management. He received his PhD in computer science from Technische Universität Darmstadt (2005). From February 2006 to May 2008, he was a research fellow at Cambridge University. In April 2009, he received the Emmy Noether Career Award (1 million EUR) for excellent young scientists from the German Research Foundation (DFG). He currently serves as elected Chair of the Research Group of the Standard Performance Evaluation Corporation (SPEC), which he co-founded in 2010 to provide a platform for collaborative research efforts between academia and industry in the area of quantitative system evaluation. He also serves as Co-Chair of the Steering Committee of the ACM/SPEC International Conference on Performance Engineering (ICPE), which he co-founded in 2010 as the first joint event between ACM and SPEC. He is a member of the ACM, the IEEE, and the German Computer Science Society, and a recipient of several honors, including the SPEC 2014 Presidential Award for "Excellence in Research", recognizing lasting contributions to the field of performance evaluation and benchmarking of computing systems.

Challenges, Benefits and Best Practices of Performance Focused DevOps

Wolfgang Gottesheim, Compuware

Talk Abstract:
Did you know that just a handful of root causes are responsible for the majority of application issues such as crashes, slow performance, or incorrect application behavior? Non-optimized database access, deployment mistakes, memory leaks, and inefficient coding are just some examples. Companies that expect Continuous Delivery and DevOps to solve all their problems typically fail because they just run into these problems faster. In this session, we take a closer look at the most common problems, how to detect them, and how to make performance part of your DevOps culture by detecting these top problems automatically.
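One of the root causes the abstract mentions, non-optimized database access, often shows up as the classic N+1 query pattern: one query for a list, then one additional query per item. A simple per-request query counter is enough to flag it automatically in a delivery pipeline. The sketch below is a hypothetical illustration; the class, threshold, and SQL statements are invented for this example and are not part of the talk.

```python
# Hypothetical sketch: automatically flagging non-optimized database access
# (the N+1 query pattern) with a per-request query counter. Illustrative only.

class QueryCounter:
    def __init__(self, threshold=10):
        self.threshold = threshold  # max queries tolerated per request
        self.queries = []

    def record(self, sql):
        """Record one executed SQL statement for the current request."""
        self.queries.append(sql)

    def report(self):
        """Flag requests that issue suspiciously many queries."""
        if len(self.queries) > self.threshold:
            return f"WARN: {len(self.queries)} queries in one request (possible N+1)"
        return "OK"

counter = QueryCounter(threshold=10)
# An N+1 access pattern: one query for the order list, one per order fetched.
counter.record("SELECT id FROM orders")
for order_id in range(20):
    counter.record(f"SELECT * FROM order_items WHERE order_id = {order_id}")
# counter.report() now flags the request, since 21 queries exceed the threshold.
```

Wiring such a check into an automated test run is one way to catch this class of problem before it reaches production, which is the spirit of the performance-focused DevOps practices the talk discusses.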

Short Biography of the Speaker:
Wolfgang Gottesheim has several years of experience as a software engineer and research assistant in the Java enterprise space. Currently, he contributes to the strategic development of dynaTrace as a Technology Strategist, focusing on how to make APM part of DevOps by monitoring and optimizing applications along the Continuous Delivery pipeline.