Reproducibility of CAE-computations through Immutable Application Containers

Christian Kniep
QNIB Solutions
Ensuring the reproducibility of computations across compute node generations and over a long period of time is vital for many reasons. An automotive vendor, for instance, is forced to keep the results of CAE workloads due to legal requirements. Instead of keeping the outcome of the computation, a stable reproduction would be preferable. Unfortunately this is unlikely in current environments, due to the rather short life-cycle of HPC infrastructure.
This conservative 'never touch a running system' approach conflicts with the agile way of maintaining IT infrastructure, which fosters fast update cycles to prevent software bugs (security or functional) from impacting operations.
This paper aims to bring these two approaches together by showing that an 'Immutable Application Container' (using Docker) is capable of reproducing the results, even as the ecosystem moves forward.
I. INTRODUCTION
This research is motivated by the previous paper 'Containerized MPI workloads' [1], which showed that the performance impact introduced by containerization is negligible. Projecting this conclusion onto the automotive industry, in which the author gained the foundation of his System Operations experience, performance was never the primary driver of decisions. Stability, and more precisely reproducibility, comes first.
Once a workflow is implemented and put into production, the 'System Operations' (SysOp) personnel are forced to prevent changes within the infrastructure from impacting the ability to compute. The mantra 'never change a winning team' therefore suppresses the agility that is needed to keep up with ever shortening technology cycles and security fixes. While the SysOp team is eager to apply the latest updates as soon as possible, the engineering team aims to have its computational needs met as efficiently as possible with all resources at its disposal. This marks a conflict of interest which is hard to overcome.
Within the engineering process of cars a lot of data is generated. Body parts in different iterations are assembled to form a complete car body. A dedicated 'Life-cycle Management' keeps track of all the different parts and enables the checkout of a specific snapshot at any point of the process. These snapshots are used in multiple disciplines within 'Computer Aided Engineering' (CAE) (crash tests, airflow simulations, ...). If development leads to an error that has to be corrected using old computational data, a stable outcome for those computations is essential.
The ITIL [2] framework commoditized IT operation by formalizing its processes (incident, problem, change, ...).
[1] http://doc.qnib.org/2014-11-05_Whitepaper_Docker-MPI-workload.pdf
[2] http://en.wikipedia.org/wiki/Information_Technology_Infrastructure_Library
On the other hand, the formalization of processes hampers agility and the adoption of new technologies. Fitting virtual machines such as VMware clusters into the paper trail caused some friction (since they are disposable within a short period of time, yet still servers with assigned resources), let alone containers (disposable within seconds, merely a group of processes).
Even though containers might be hard to fit into the existing model, they potentially provide a way to abstract the workload dramatically. Without a special layer in between, an engineer's workload might start at the resource closest to the engineer: the workstation, where an idea is formed. Afterwards the workload is shifted to an office server to prove the concept. Once the proof is successful, a department cluster takes over to fit the work within a down-scaled context, and finally a complete computation is initiated on a big HPC system. It might even spill over into a public cloud like Amazon EC2, with the certainty that each iteration provides comparable results that depend on the job configuration and not the platform.
This work aims to examine whether such workloads provide consistent outcomes on changing platforms.
II. METHOD
To determine whether a workflow produces consistent results, the solver 'Open Source Field Operation and Manipulation' (OpenFOAM) was chosen. It provides numerical solvers for a broad range of continuum mechanics problems. As an open-source project it has a broad community providing tutorials and documentation; moreover, this study can be recreated without any license.
A. Design
This study uses the cavity [3] scenario from the OpenFOAM tutorials. The use-case is computed on each platform using a set of Docker images. After each run a check is performed to determine whether the results are consistent.
B. Platforms
• boot2docker [4]: Boot2docker is a lightweight Linux distribution (based on Tiny Core Linux [5]), trimmed to run the Docker server daemon. It packages everything necessary to run Docker on top of an Apple Macintosh. Since Docker is (as of today) limited to the Linux kernel, the server is not able to run on MacOS. By default VirtualBox [6] is used as a provider. This approach introduces a performance penalty, since a hypervisor has to be used, but provides a portable way of running Docker on non-Linux systems. It is very convenient and borrows from Vagrant [7], which abstracts the setup of virtual machines behind a single command (a sketch of this workflow follows the list). Boot2docker was used on top of a MacBook Pro Retina (Mid 2014) housing an Intel Core i7 at 3GHz, 16GB of RAM and a 512GB SSD.
• Workstation: A desktop PC was used to provide the equivalent of a low-end workstation running a Linux derivative. The workstation comprises an AMD Phenom II quad-core, 4GB of RAM and a 64GB consumer-grade SSD. It was bootstrapped via PXE boot and customized using Ansible. A set of Linux derivatives with multiple kernel versions was installed.
• HPC-Cluster: The HPC cluster 'Venus' [8], which was used in the previous study about containerized MPI workloads, was also used in this study. It comprises eight SUN FIRE X2250 nodes. Each node housed two quad-core Intel XEON X5472 and 32GB of memory. All nodes were connected via QDR InfiniBand using Mellanox ConnectX-2 cards.
• Public Cloud: An AWS c3 compute instance was used to run the workload within Amazon's EC2 public cloud. The c3.xlarge instance uses an Intel XEON E5-2680 Ivy Bridge processor and 7.5GB of RAM, along with SSD storage. For the HPC Advisory Council Swiss Workshop the computation was also performed on a c4.4xlarge instance, which uses 16 vCores of an Intel XEON E5-2666 v3 and 30GB of RAM.
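As a sketch of the boot2docker workflow mentioned in the first bullet above (command names as of boot2docker 1.4/1.5; shellinit may not exist in older releases):

$ boot2docker init                 # create the VirtualBox VM
$ boot2docker up                   # boot the VM that hosts the Docker daemon
$ eval "$(boot2docker shellinit)"  # point the local docker client at the VM
$ docker info                      # the client on MacOS now talks to the VM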
C. OpenFOAM
OpenFOAM provides a broad range of tools to tackle problems within the area of continuum mechanics, most prominently 'Computational Fluid Dynamics' (CFD) solvers. Within this work the cavity [9] case is used, which simulates an isothermal, incompressible flow as shown in Figure 1.
FIG. 1. Velocities in the cavity case (Source: http://www.openfoam.org/docs/user/cavity.php).
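Within an OpenFOAM shell, the tutorial case can be copied and prepared for modification; this is a sketch assuming the standard $FOAM_TUTORIALS environment variable and the v2.x tutorial layout (the target directory name cavityFine follows the run names used later):

$ # copy the cavity tutorial into a working directory
$ cp -r ${FOAM_TUTORIALS}/incompressible/icoFoam/cavity cavityFine
$ cd cavityFine
$ blockMesh    # generate the mesh before running the solver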
1. Program Versions
The versions used within this study were installed by following the OpenFOAM installation process [10] for the Ubuntu Linux distribution. The following container images were created. The image name reads qnib/openfoam [11], with different tags denoting the different setups:

• u1204of222: OpenFOAM v2.2.2 on Ubuntu 12.04
• u1204of230: OpenFOAM v2.3.0 on Ubuntu 12.04
• u1410of231: OpenFOAM v2.3.1 on Ubuntu 14.10
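Assuming the tags above are published on the Docker Hub, a specific setup can be fetched and entered as follows (a minimal sketch):

$ # fetch the OpenFOAM v2.2.2 / Ubuntu 12.04 image
$ docker pull qnib/openfoam:u1204of222
$ # open an interactive shell inside a container based on it
$ docker run -ti qnib/openfoam:u1204of222 /bin/bash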
[3] http://openfoam.org/docs/user/cavity.php (tutorial)
[4] http://boot2docker.io/
[5] http://tinycorelinux.net/
[6] https://www.virtualbox.org/
[7] https://www.vagrantup.com/
[8] http://www.hpcadvisorycouncil.com/cluster_center.php
[9] http://www.openfoam.org/docs/user/cavity.php
[10] http://www.openfoam.org/download/ubuntu.php
[11] https://registry.hub.docker.com/u/qnib/openfoam/
2. Computational Domain Changes
The dimensions of the computational domain were increased in an effort to increase the computation time. The problem was decomposed into sixteen zones in order to use sixteen-way parallelism. The differences from the original case are as follows (see the sketch after this list):

• constant/polyMesh/blockMeshDict: The mesh size was increased from 20x20x1 to 1000x1000x1000 and the vertices were stretched by a factor of 10.

• system/controlDict: The computation started at t_0 = 0 and iterated 50 times until it reached t_50 = 0.005. Results were written every 25 iterations.
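The corresponding controlDict entries might read as follows. This is a sketch using OpenFOAM's standard controlDict keywords; deltaT is inferred from the 50 iterations between t_0 = 0 and t_50 = 0.005:

application     icoFoam;
startTime       0;
endTime         0.005;
deltaT          0.0001;    // 50 iterations from t_0 to t_50
writeControl    timeStep;
writeInterval   25;        // write results every 25 iterations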
3. Measurement of Equality
To measure the consistency of the results, each iteration of the icoFoam calculation was extended with a function to compute the average cell pressure.
pAverage
{
    functionObjectLibs ("libutilityFunctionObjects.so");
    type            coded;
    redirectType    average;
    outputControl   timeStep;
    code
    #{
        const volScalarField& p =
            mesh().lookupObject<volScalarField>("p");
        Info << "p avg: " << average(p) << endl;
    #};
}
Listing 1. Function to compute the average pressure within each iteration.

After a run finished, the log file was post-processed to sum up each iteration's average pressure:
$ for lfile in $(find . -name log); do
      str=$(grep "^p" ${lfile} | awk '{print $10}' | xargs)
      sum=$(echo ${str} | sed -e 's/ /+/g' | bc)
      echo "${sum} # ${lfile}"
  done
Listing 2. Bash command to calculate the pressure sum.
D. Docker

Docker uses kernel namespaces to isolate groups of processes. Initially Docker leveraged 'LinuX Containers' (LXC); as of today a distinct library ('libcontainer') is used. In addition, cgroups constrain the resources consumed by those process groups. Unlike traditional virtualization, containers (whether LXC, Docker or other container techniques) do not introduce much additional overhead: both the isolation and the resource restriction are handled within the Linux kernel.

Before Docker stepped up, Linux containers were only a niche within the virtualization ecosystem. Container techniques were either not available for the main distributions (like Jails on BSD or Zones on Solaris) or, like LXC, too complex to use and not portable. Docker took the LXC principles and bundled them with a simple (yet powerful) way to package, distribute and execute containers. Along with an easy-to-understand command-line interface, this gained it a lot of momentum.
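As an illustrative sketch of this mechanism (the resource values are arbitrary and the flag names refer to Docker 1.5+; older releases used --cpuset), a container can be started with kernel-enforced limits:

$ # constrain the container to four cores and 2GB of RAM;
$ # the limits are enforced via cgroups inside the kernel
$ docker run -ti --cpuset-cpus=0-3 -m 2g qnib/openfoam:u1204of222 /bin/bash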
E. Workflow

The OpenFOAM images presented in section II C 1 were executed on top of each platform using Open MPI. The case is decomposed into sixteen subdomains and icoFoam is run in parallel (a containerized invocation is sketched below):

$ decomposePar
$ mpirun -np 16 icoFoam -parallel > log 2>&1

Listing 3. Command to run icoFoam with 16-way parallelism.
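How Listing 3 might be wrapped into a container is sketched below; the /data mount point and the single-shot shell wrapper are illustrative assumptions, as the paper does not show the exact invocation:

$ docker run -v ${PWD}/cavityFine:/data -w /data qnib/openfoam:u1204of222 \
      /bin/bash -c 'decomposePar && mpirun -np 16 icoFoam -parallel > log 2>&1'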
III. RESULTS

The OpenFOAM computation was performed on all setups using multiple operating systems and kernel versions, where available.

A. MacBookPro (boot2docker)

The MacBook Pro, as an example of a user's laptop, was only tested with one version of boot2docker (1.4.1).
host  system       #n  kernel  image  of     wclock
MBP   boot2docker  1   3.16.7  u12    of222  5213s
MBP   boot2docker  1   3.16.7  u12    of230  5789s
MBP   core618      1   3.19    u12    of222  4205s
MBP   core618      1   3.19    u12    of230  4639s
FIG. 2. Results Laptop
B. Workstation

Due to an automated provisioning process, multiple operating systems were used on top of the workstation.
C. AWS EC2 Cloud

A c3.xlarge and a c4.4xlarge compute instance were used to run the workload within a cloud environment.
host  system  #n  kernel  image       of     wclock
x4    u12     1   3.13.0  bare-metal  of230  4266s
x4    u12     1   3.13.0  u12         of222  4243s
x4    u12     1   3.13.0  u12         of230  4171s
x4    u12     1   3.13.0  u14         of231  4164s
x4    u12     1   3.17.7  u12         of222  4164s
x4    u12     1   3.17.7  u12         of230  4169s
x4    cos6.6  1   2.6.32  u12         of222  4195s
x4    cos6.6  1   2.6.32  u12         of230  4190s
x4    u14     1   3.13.0  u12         of222  4236s
x4    u14     1   3.13.0  u12         of230  4226s
x4    u14     1   3.18.1  u12         of222  4185s
x4    u14     1   3.18.1  u12         of230  4193s

FIG. 3. Results Workstation
host        system   #n  kernel  image  of     wclock
c3.xlarge   u14      1   3.13.0  u12    of222  1268s
c3.xlarge   u14      1   3.13.0  u12    of230  1288s
c3.xlarge   core494  1   3.17.2  u12    of222  1285s
c3.xlarge   core494  1   3.17.2  u12    of230  1288s
c4.4xlarge  u14      1   3.13.0  u12    of222  548s
c4.4xlarge  u14      1   3.13.0  u12    of230  554s
c4.4xlarge  u14      1   3.13.0  u14    of231  553s

FIG. 4. Results Amazon EC2
D. HPC Advisory Council Cluster

To run OpenFOAM in a distributed setup, the InfiniBand-connected Venus cluster of the HPC Advisory Council was used. The first runs in the following table were started on localhost only; the ones at the bottom ran distributed. Due to better InfiniBand support, the Ubuntu 14 image was used in the distributed case.
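How such a distributed run might be launched with Open MPI is sketched below; the hostnames and slot counts are illustrative, as the paper does not spell out the exact launcher configuration on Venus:

$ cat hosts
node01 slots=8
node02 slots=8
$ # 16-way run spread across two nodes
$ mpirun -np 16 --hostfile hosts icoFoam -parallel > log 2>&1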
host   system  #n  kernel  image  of     wclock
Venus  cos7    1   3.10.0  u12    of222  2429s
Venus  cos7    1   3.10.0  u12    of230  2444s
Venus  cos7    1   3.10.0  u14    of231  2485s
Venus  cos7    2   3.10.0  u14    of231  892s
Venus  cos7    4   3.10.0  u14    of231  490s
Venus  cos7    8   3.10.0  u14    of231  431s
FIG. 5. Results HPCAC Cluster ’Venus’
The logfiles written during the runs are available on GitHub [12].

[12] https://github.com/qnib/hpcac2015-results

Within minor releases the average pressure sums were equal across all test setups, regardless of the underlying hardware, operating system and Linux kernel version:

• 2.2.2: 8.6402816 p
• 2.3.0/2.3.1: 8.6402463 p
$ cd hpcac2015-results
$ for lfile in $(find . -name log); do
      prefix=$(echo ${lfile} | awk -F/ '{print $2}')
      str=$(grep "^p" ${lfile} | awk '{print $10}' | xargs)
      sum=$(echo ${str} | sed -e 's/ /+/g' | bc)
      echo "${sum}|${prefix}"
  done
8.6402816|b2d_u12_of222
8.6402463|b2d_u12_of230
8.6402816|c3.xlarge-core494_docker_u12-of222
8.6402463|c3.xlarge-core494_docker_u12-of230
8.6402816|c3.xlarge-u14_docker_u12-of222
8.6402463|c3.xlarge-u14_docker_u12-of230
8.6402816|c4.4xlarge-u14_docker_u12-of222
8.6402463|c4.4xlarge-u14_docker_u12-of230
8.6402463|c4.4xlarge-u14_docker_u14-of231
8.6402816|cavityFine_bare-x4_u12-3.17.7_of222
8.6402463|cavityFine_bare-x4_u12-3.17.7_of230
8.6402463|cavityFine_bare-x4_u12_of230
8.6402816|cavityFine_docker-x4_u12_of222_u12
8.6402463|cavityFine_docker-x4_u12_of230_u12
8.6402463|cavityFine_docker-x4_u12_of231_u1410
8.6402816|cavityFine_x4-cos6.6_docker_u12-of222
8.6402463|cavityFine_x4-cos6.6_docker_u12-of230
8.6402816|cavityFine_x4-u14-3.18_docker_u12-of222
8.6402463|cavityFine_x4-u14-3.18_docker_u12-of230
8.6402816|cavityFine_x4-u14_docker_u12-of222
8.6402463|cavityFine_x4-u14_docker_u12-of230
8.6402816|coreos618.u12of222
8.6402463|coreos618.u12of230
8.6402816|venus_cos70_docker_u12-of222
8.6402463|venus_cos70_docker_u12-of230
8.6402463|venus_cos70_N1_docker_u12-of231
8.6402463|venus_cos70_N2_docker_u12-of231
8.6402463|venus_cos70_N4_docker_u12-of231
8.6402463|venus_cos70_N8_docker_u12-of231
Listing 4. Bash command to calculate the pressure sums, and its output for all runs.
The computation time varied significantly. The eight-node HPC system finished the computation in about 7 minutes. The AWS instance with the newest XEON CPUs in the field (Haswell) needed only about 9 minutes on a single node. On top of the MacBook Pro and the workstation the computation took over an hour.
IV. FUTURE WORK
The cavity case is laminar and hence tends towards a deterministic result, which is approximated by the simulation. To push this study further, a more chaotic use-case could be employed, one that is driven by random events, such as the Kármán vortex street shown in Figure 6. This would explore how stably the user-land within a container behaves.

FIG. 6. Kármán Vortex Street case [a], screenshot [b].
[a] http://www.beilke-cfd.de/Karmann_OpenFoam.tar.gz
[b] https://www.youtube.com/watch?v=hZm7lc4sC2o
V. CONCLUSION
This study indicates that 'Immutable Application Containers' are a realistic scenario: applications might be encapsulated and thus abstracted from the underlying platform. Such a system could tie together an input deck and a specific Image ID (sha256 hash) of a Docker image and produce equal results even on fast-moving infrastructure.
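A sketch of such pinning, assuming a Docker version and registry that support content-addressable digest references (the digest value is a placeholder):

$ # reproduce a run by pinning the exact image, not a mutable tag
$ docker pull qnib/openfoam@sha256:<digest>
$ docker run -ti qnib/openfoam@sha256:<digest> /bin/bash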
Furthermore, a consistent deployment of applications (possibly certified by the distribution and the software vendor) could enhance the stability of application stacks. The barrier to upgrading a version vanishes, since a rollback is easy to achieve.
ACKNOWLEDGMENTS
Special thanks to Martin Becker from DHCAE-Tools for his advice on OpenFOAM in general, and particularly for providing the average pressure function used to compare results. Furthermore, many thanks to the 'HPC Advisory Council' (HPCAC) for providing the HPC cluster resource Venus, and to Nicholas McCreath for proofreading.