EGI - WeNMR demonstration
=========================

GOCDB
-----
- An operational tool for the federation, not an end-user tool (top BDII)
- It answers two fundamental questions:
-- Which services are offered by my RP of choice?
-- Which RPs provide the services of my choice?

MONITORING
----------
- Remember, we are monitoring a testbed!
- Nagios represents the back-office UI
- A front-office portal will be available once we go public
- RPs are monitored for 4 aspects:
-- Accounting integration (with APEL SSM)
-- Cloud service reachability (via TCP ping)
-- Can VMs be instantiated? (using a crafted OPS VO VM)
-- Information discovery integration (via local BDII)

VM MARKETPLACE
--------------
- A platform to share VM image metadata and location information
-- Users upload initial metadata
-- RPs decide whether or not to support the user/community
-- Images are then synchronized into locally endorsed replicas
-- RPs publish metadata of local replicas in the Marketplace
- Users can thus discover which RPs support their scientific research
- Image distribution is done behind the scenes using vmcatcher
-- Comes from the HEPiX virtualisation group

Setting up OCCI client
----------------------
- Command-line client written in Ruby
- Authentication and authorisation are done using standard X.509v3
-- VO support via vanilla VOMS (not shown here)
- Federated Cloud endpoints are simply HTTP(S) URLs
- Querying a compute instance just requires changing the endpoint
-- Thanks to the OCCI standard and its enforcement in the federation

Setting up workload queue on ToPoS server
-----------------------------------------
- ToPoS is a generic workload server hosted by SURFsara
-- ToPoS = Token Pool Server
- Tokens are opaque to the server; the pool clients parse the tokens
- Token pools are protected from each other

Starting VMs
------------
- Consistent across all federated resource providers
-- Thanks to OCCI!
-- Only change the endpoint and the local VM instance identifier

WeNMR in Action
---------------
- The tokens get leased by VM instances
-- Actual workload is fetched from the central NMR database
-- Results are uploaded to the central Protein server
- More than one VM may lease a token
-- The first VM to complete uploads the results
--> Fail-over functionality (implemented in user space)
- The more VMs, the faster the workload completes
--> Elastic Cloud computing

Assessing results
-----------------
- The result is a validated computer model of a bio-active protein
-- As shown on the screen
-- Portal on the Protein DB server at Utrecht University
- VMs are headless, so results and logs need checking

Stopping VMs
------------
- If the workload is empty, VMs remain active (like a normal server)
- To conserve resources, we will stop them
- Again the same OCCI client is used
-- Note the different parameter to stop instead of create instances
- Instances can be stopped individually
-- Or altogether per RP
-- Or in groups (not shown)

Accounting
----------
- Accounting is based on the OGF Usage Record (UR) standard
- Some modifications for Clouds are captured in a profile
- Once stabilised, this will be fed back to OGF
- The shown page is live, but preloaded
-- Shows records of the last hour only (!)
- Search for the compute instance names used when instantiating
- Check the accounting data:
-- State (started or completed)
-- Start time
-- Completion time (empty for still-running virtual servers)
-- RAM allocation (8 GB at GWDG, 1.7 GB at CESNET)
- Note the Image ID is the same as seen in the VM Marketplace!
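The non-exclusive token leasing and user-space fail-over described under "WeNMR in Action" can be sketched with a minimal in-memory stand-in for a ToPoS-style pool. This is an illustrative assumption, not the real ToPoS HTTP API: the class, method names, and token strings below are invented for the sketch; the actual server is an HTTP service at SURFsara and tokens are opaque strings.

```python
class TokenPool:
    """In-memory stand-in for a ToPoS-style token pool (illustrative only).

    Leases are NOT exclusive: several VMs may work on the same token
    at once. The first worker to complete wins; later duplicate
    uploads are simply discarded (fail-over in user space).
    """

    def __init__(self, tokens):
        # token -> result; None means the token is still open
        self._tokens = dict.fromkeys(tokens)

    def lease(self):
        # Hand out any still-open token; the same token may be
        # leased again by another VM.
        for tok, result in self._tokens.items():
            if result is None:
                return tok
        return None  # pool drained, VMs idle until stopped

    def complete(self, token, result):
        # First completing worker stores the result and wins.
        if self._tokens.get(token) is None:
            self._tokens[token] = result
            return True
        return False  # duplicate upload from a slower VM, ignored

    def open_tokens(self):
        return [t for t, r in self._tokens.items() if r is None]


pool = TokenPool(["job-1", "job-2"])
a = pool.lease()                 # VM A leases job-1
b = pool.lease()                 # VM B may lease the very same token
assert a == b == "job-1"
assert pool.complete(a, "model-A")        # VM A finishes first: accepted
assert not pool.complete(b, "model-B")    # VM B's duplicate upload: discarded
```

Because the lease is non-exclusive, a crashed VM never blocks a token, and adding VMs only shortens the time to drain the pool, which is the elasticity shown in the demo.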