How to Achieve Workflow Interoperability
ER-flow & SHIWA project
Gabor Terstyanszky,
University of Westminster, UK
18 January 2013
ER-flow is supported by the FP7 Capacities Programme under contract No. RI-261585
WF Ecosystem
What does a WF developer need?
•  access to a large set of ready-to-run scientific WF applications (WF App. Repository)
•  a portal/desktop to parameterize and run these applications, and to further develop them
•  access to a large set of various DCIs to run these WF applications: cluster-based service grids (SGs) such as EGEE and OSG, supercomputer-based SGs such as DEISA and TeraGrid, desktop grids (DGs) such as BOINC and Condor, local clusters, clouds and supercomputers
[Figure: the e-science infrastructure — portal and WF application repository connected to grid systems, local clusters, clouds and supercomputers]
SHIWA: Enabling Workflow Interoperability
•  WF applications are published in the SHIWA App. Repository so that other application developers can continue them
•  application developers use the SSP Portal/desktop to develop complex applications (executable on various DCIs) for various end-user communities
[Figure: application developers access the SSP Portal and SHIWA App. Repository, targeting grid systems (cluster-based SGs such as EGEE and OSG, supercomputer-based SGs such as DEISA and TeraGrid, desktop grids such as BOINC and Condor), local clusters, clouds and supercomputers]
Coarse-Grained Interoperability:
submitting non-native workflow
•  Workflow Engine A passes a workflow of Workflow Engine B to a WF submission client, which calls the Submission Service; the Submission Service invokes Workflow Engine B to run the workflow on the DCI
•  non-native workflows are black boxes which are managed as legacy code applications
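The submission path above can be sketched as follows. This is a minimal illustration only: the class and method names are hypothetical and do not correspond to the real SHIWA Submission Service interface; the point is that the service never interprets a non-native workflow, it only routes the black box to the engine that understands its format.

```python
"""Sketch of coarse-grained interoperability: a non-native workflow is
handled as an opaque black box, the way legacy code applications are.
All names here are hypothetical illustrations."""
from dataclasses import dataclass, field


@dataclass
class NonNativeWorkflow:
    """A workflow of a foreign engine: only its engine name, opaque
    definition and input values are known to the caller."""
    engine: str                           # engine that can run it, e.g. "Taverna"
    definition: str                       # opaque workflow description (black box)
    inputs: dict = field(default_factory=dict)


class SubmissionService:
    """Routes black-box workflows to a matching pre-deployed engine."""

    def __init__(self):
        self._engines = {}                # engine name -> callable runner

    def register_engine(self, name, runner):
        self._engines[name] = runner

    def submit(self, wf: NonNativeWorkflow):
        # The service never parses the definition; it only dispatches it
        # to the engine registered for that workflow language.
        runner = self._engines[wf.engine]
        return runner(wf.definition, wf.inputs)


# Usage: engine A's submission client hands a Taverna workflow to the service.
service = SubmissionService()
service.register_engine("Taverna", lambda defn, inp: f"ran {defn} with {inp}")
result = service.submit(NonNativeWorkflow("Taverna", "blast.t2flow", {"seq": "ACGT"}))
```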
Coarse-Grained Interoperability:
meta-workflow submission
•  a meta-workflow combines workflows of several engines: Workflow Engine A executes the native parts itself and forwards the non-native parts through the submission client and Submission Service to Workflow Engine B, which runs them on the DCI
•  native workflows: J1, J3 and WF2
•  non-native workflows: WF4
   - black boxes which are managed as legacy code applications
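The meta-workflow on this slide can be sketched as a per-node dispatch: nodes of the host engine run natively, while foreign nodes go through the submission path. The function names and the tagging of nodes are hypothetical illustrations, not the real engine interface.

```python
"""Sketch of meta-workflow execution under CGI (hypothetical names)."""

def run_native(node):
    # Native nodes (J1, WF2, J3) are executed by workflow engine A itself.
    return f"engine A ran {node}"

def run_via_submission_service(node, engine):
    # Non-native nodes (WF4) are black boxes: engine A only forwards them,
    # and the Submission Service invokes the foreign engine on the DCI.
    return f"{engine} ran {node} on DCI"

# The meta-workflow of the slide: three native nodes and one non-native one.
meta_workflow = [
    ("J1", None),          # native job
    ("WF2", None),         # native sub-workflow
    ("J3", None),          # native job
    ("WF4", "engine B"),   # non-native sub-workflow of engine B
]

results = [
    run_native(node) if engine is None
    else run_via_submission_service(node, engine)
    for node, engine in meta_workflow
]
```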
SSP: CGI Infrastructure
•  SHIWA Science Gateway
   -  SHIWA Portal: WS-PGRADE v3.5 with workflow editor and workflow engine, GEMLCA + SHIWA repository submitter, proxy management
   -  repositories: SHIWA Repository (WF1 … WFn) and GEMLCA Repository (WF1 … WFm, WE1 … WEp)
   -  GEMLCA Service: GEMLCA with GIB
   -  SHIWA Proxy Server
•  Workflow Engines: ASKALON, Galaxy, GWES, Kepler, MOTEUR, Pegasus, PGRADE, ProActive, Taverna and Triana WEs
•  Resources
   -  local resources: invocation of locally deployed WEs, WE submission to the local cluster
   -  remote resources: through remotely pre-deployed WEs to the ARC, gLite, Globus and Unicore DCIs
SHIWA Repository: Browse View [screenshot]
SHIWA Repository: Table View [screenshot]
SHIWA Portal: Workflow Editor [screenshot]
SHIWA Portal: Configuring Workflow [screenshot]
SHIWA Portal: Executing Workflow [screenshot]
CGI Developer Scenario:
Specifying Workflow Engine
•  step 1: the workflow engine developer specifies the WE data through the SHIWA Portal
•  step 2: the developer uploads the WE binary and its dependencies to the GEMLCA Repository
•  step 3: the GEMLCA admin deploys the WE via the GEMLCA Service (GEMLCA with GIB)
[Figure: SHIWA Science Gateway with SHIWA Portal (WS-PGRADE workflow editor and engine), SHIWA Repository (WF1 … WFn), GEMLCA Repository (WF1 … WFm, WE1 … WEp), GEMLCA Service and SHIWA Proxy Server]
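The three registration steps can be sketched as operations on a repository object. This is a hypothetical stand-in for the GEMLCA Repository, not its real interface; it only shows the ordering constraint that the binary must be uploaded before the admin can deploy the engine.

```python
"""Sketch of three-step workflow-engine registration (hypothetical API)."""

class GemlcaRepository:
    """Stand-in for the GEMLCA Repository: holds WE descriptions,
    binaries, dependencies and deployment state."""

    def __init__(self):
        self.engines = {}

    # step 1: the WE developer specifies the engine's descriptive data
    def specify_engine(self, name, data):
        self.engines[name] = {"data": data, "binary": None, "deployed": False}

    # step 2: the developer uploads the WE binary and its dependencies
    def upload_binary(self, name, binary, dependencies):
        self.engines[name].update(binary=binary, dependencies=dependencies)

    # step 3: the GEMLCA admin deploys the engine (GEMLCA with GIB)
    def deploy(self, name):
        if self.engines[name]["binary"] is None:
            raise RuntimeError("upload the binary before deploying")
        self.engines[name]["deployed"] = True


# Usage: registering a (hypothetical) Taverna engine entry.
repo = GemlcaRepository()
repo.specify_engine("Taverna", {"version": "2.4", "interpreter": "java"})
repo.upload_binary("Taverna", b"...", ["jre"])
repo.deploy("Taverna")
```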
CGI Developer Scenario:
Specifying Workflows
•  step 1: the workflow developer specifies the WF data through the SHIWA Portal
•  step 2: the developer uploads the WF to the repository
•  step 3: the developer deploys the WF, making it available for execution
[Figure: SHIWA Science Gateway with SHIWA Portal (WS-PGRADE workflow editor and engine), SHIWA Repository (WF1 … WFn), GEMLCA Repository (WF1 … WFm, WE1 … WEp), GEMLCA Service and SHIWA Proxy Server]
CGI User Scenario:
PGRADE as Native WE
•  step 1: the e-scientist searches for a WF in the SHIWA Repository and receives a WF list
•  step 2: the e-scientist edits the WF in the WS-PGRADE Workflow editor of the SHIWA Portal
•  step 3: the portal retrieves the WF data from the repository
•  step 4: the WF is submitted to the WS-PGRADE Workflow engine
•  step 5: the GEMLCA Service retrieves the WF (with the WE, for non-native parts) from the GEMLCA Repository
•  step 6: the GEMLCA Service retrieves a proxy from the SHIWA Proxy Server
•  step 7: the WF is run on the target DCI (ARC, gLite, Globus or Unicore) through the pre-deployed workflow engines (ASKALON, Galaxy, GWES, Kepler, MOTEUR, Pegasus, PGRADE, ProActive, Taverna, Triana)
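The seven-step user scenario can be laid out linearly as a sketch. The `step` helper and the textual details are illustrative only; they paraphrase the slide's arrows rather than reproduce any real SHIWA call sequence.

```python
"""Linear sketch of the seven-step CGI user scenario (hypothetical names)."""

log = []

def step(n, action, detail):
    # record each stage of the scenario in order
    entry = f"step {n}: {action} ({detail})"
    log.append(entry)
    return entry

# the steps as shown on the slide, in order
step(1, "search WF", "e-scientist browses the SHIWA Repository")
step(2, "edit WF", "WS-PGRADE workflow editor in the SHIWA Portal")
step(3, "retrieve WF data", "portal pulls WF metadata from the repository")
step(4, "submit WF", "WF goes to the WS-PGRADE workflow engine")
step(5, "retrieve WF", "GEMLCA Service fetches the WF (and WE) to execute")
step(6, "retrieve proxy", "credentials come from the SHIWA Proxy Server")
step(7, "run WF", "pre-deployed WE executes the WF on the target DCI")
```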
ER-flow Project
Partners:
•  University of Westminster (UoW), United Kingdom
•  Magyar Tudomanyos Akademia Szamitastechnikai es Automatizalasi Kutato Intezete (MTA-SZTAKI), Hungary
•  Centre National de la Recherche Scientifique (CNRS), France
•  Stichting European Grid Initiative (EGI.eu), The Netherlands
•  Academic Medical Center of the University of Amsterdam (AMC), The Netherlands
•  Technische Universität Dresden (TUD), Germany
•  Ludwig-Maximilians-Universität München (LMU), Germany
•  University College London (UCL), United Kingdom
•  Trinity College Dublin (TCD), Ireland
•  Istituto Nazionale di Astrofisica (INAF), Italy
Technology providers: CNRS, EGI.eu, MTA-SZTAKI, UoW
Research Communities:
•  Astro-Physics (INAF)
•  Computational Chemistry (LMU + TUD)
•  Helio-Physics (TCD + UCL)
•  Life Science (AMC)
Duration: September 2012 – August 2014
Project Aim and Services
Aim:
•  to provide a simulation platform for research communities to enable seamless execution of workflows of different workflow systems through workflow interoperability
•  to investigate data interoperability issues in the workflow domain and propose solutions
Services:
•  to support the whole workflow lifecycle: editing, uploading, browsing, downloading and executing workflows
•  to provide a coarse-grained workflow interoperability solution
•  to provide GUIs to manage workflows
Key actors:
•  researchers
•  workflow engine developers
•  workflow developers
User Communities and
Simulation Platform
Research communities supported by the project
•  Astro-Physics
•  Computational Chemistry
•  Helio-Physics
•  Life Science
Further research communities
•  at least four further research communities will be supported
•  candidate communities:
   §  Hydrometeorology
   §  Seismology
   §  further communities under consideration