D6.4 OpenAIRE Maintenance Report (v2, 28th of June 2018)

Overview

This document contains information about the status of OpenAIRE2020 workflows, services and content. It provides details about the deployment and the history of major modifications of the system hosted at ICM, Poland, and of the Zenodo repository hosted at CERN, Switzerland. The official maintenance of the OpenAIRE2020 services began on January 1st, 2015, when the project started.

The Information Space Data section focuses on the OpenAIRE data model, including statistics for the main entities, inferred relations, funders, available data sources and records stored in the Zenodo repository.

The OpenAIRE workflows section describes in depth all subsequent phases of data processing:
  • aggregation
  • Information Space population
  • deduplication
  • inference generation
  • Information Space monitoring and publishing

The last section gives insight into the software life-cycle of all major services involved in the project, the whole infrastructure and the most important architectural changes.

Information Space Data

The OpenAIRE Core Data Model comprises the following interlinked entities:
  • results (in the form of publications, datasets and patents)
  • persons (removed in January 2018)
  • organisations
  • funders
  • funding streams
  • projects
  • data sources (in form of institutional, thematic, and data repositories, Current Research Information Systems (CRIS), thematic and national aggregators, publication catalogues and entity registries)
  • software

Content Status

The first table presents counts for the main Information Space entities and full texts:

data type | count
publication metadata | 24629933
dataset metadata | 7389872
projects | 2539451
organizations | 126455
EuropePMC XML fulltext | 2071118
PDF fulltext | 3919705

The second table provides counts of inferences generated by the Information Inference Service (IIS):

inference type | count
dataset references | 1697855
project references | 1066503
software URL references | 40451
protein DB references | 266683
research initiative references | 42709
documents classified | 3165468
similar documents found | 233087235
citations matched by reference text | 16365015
citations matched by ID | 6259352

Both tables are based on an IIS report generated on May 9, 2016 for the OpenAIRE production infrastructure (the citation matching counts come from a report generated on November 19, 2017; the software URL references count was taken from a report generated on March 21, 2018).

Funders

The table below presents the number of publications funded by each funder:

funder/funding | count
arc | 20165
ec | 150642
fct | 40966
fwf | 17881
hrzz | 18504
mestd | 2987
mzos | 2590
nhmrc | 20103
nih | 377242
nsf | 249153
nwo | 18379
rcuk | 59482
sfi | 3257
snsf | 12776
tubitak | 1851
wt | 70525

The presented numbers are based on the production infrastructure state as of May 9, 2018.

Data Sources

Data sources in OpenAIRE are divided into:
  • data providers (~1078), from which OpenAIRE continuously collects information and which comply with the OpenAIRE Guidelines
  • data sources (~11785), whose metadata are collected from compatible data providers (usually aggregators) and which are represented in the OpenAIRE portal as links to the publications and datasets (downloadFrom).

The table below presents the number of data sources grouped by data source type and OpenAIRE compatibility level:

data source type | compatibility | count
institutional repository | openaire-3.0 | 243
institutional repository | openaire-2.0+ | 74
institutional repository | openaire-2.0 | 11
institutional repository | openaire-basic | 246
thematic repository | openaire-3.0 | 7
thematic repository | openaire-2.0+ | 9
thematic repository | openaire-basic | 34
other publication repository | openaire-3.0 | 3
other publication repository | openaire-2.0+ | 0
other publication repository | openaire-2.0 | 1
other publication repository | openaire-basic | 30
data repository | openaire-data-2.0 | 21
publication repository aggregator | openaire-3.0 | 35
publication repository aggregator | openaire-2.0 | 1
publication repository aggregator | openaire-basic | 14
data repository aggregator | openaire-data | 2
journals | openaire-3.0 | 103
journals | openaire-2.0+ | 13
journals | openaire-2.0 | 6
journals | openaire-basic | 129
journal aggregator / publisher | openaire-3.0 | 11
journal aggregator / publisher | openaire-basic | 71
journal aggregator / publisher | proprietary | 1
publication catalogue | proprietary | 1
publication catalogue | openaire-3.0 | 1
publication catalogue | openaire-2.0 | 11
repository registries | proprietary | 2
funder databases | proprietary | 16

The data source numbers are based on the OpenAIRE index from June 28, 2018.

Zenodo Content Status

data type | count or size
records | 378k
managed files | 1.2M
files total size | 38TB

The presented numbers are based on the Zenodo repository status as of March 2018.

OpenAIRE workflows

The OpenAIRE aggregation system is based on the D-NET software toolkit. D-NET is a service-oriented framework specifically designed to support developers in constructing custom aggregative infrastructures in a cost-effective way. D-NET offers data management services capable of providing access to different kinds of external data sources, storing and processing information objects of any data model, converting them into common formats, and exposing information objects to third-party applications through a number of standard access APIs. Most importantly, D-NET offers enabling services that facilitate the construction of domain-specific aggregative infrastructures by selecting and configuring the needed services and easily combining them to form autonomic data processing workflows.
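
The workflow concept can be pictured with a short, purely illustrative Python sketch: a few processing steps (harvest, transform) chained into one pipeline. None of the names below belong to the D-NET API; they are hypothetical placeholders for the kind of steps described above.

    from typing import Callable, Iterable, List

    Record = dict
    Step = Callable[[Iterable[Record]], Iterable[Record]]

    def run_workflow(records: Iterable[Record], steps: List[Step]) -> List[Record]:
        """Pass records through each processing step in order."""
        for step in steps:
            records = step(records)
        return list(records)

    def harvest(records: Iterable[Record]) -> Iterable[Record]:
        # placeholder for collecting metadata records from an external data source
        return records

    def to_common_format(records: Iterable[Record]) -> Iterable[Record]:
        # placeholder for mapping source-specific metadata to a common format
        for r in records:
            yield {"id": r.get("id"), "title": r.get("title", "").strip()}

    if __name__ == "__main__":
        sample = [{"id": "rec-1", "title": " Example publication "}]
        print(run_workflow(sample, [harvest, to_common_format]))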

The Enabling Layer contains the Services supporting the application framework. These provide functionalities such as Service registration, discovery, subscription and notification, as well as data transfer mechanisms through ResultSet Services. Most importantly, these Services are configured to orchestrate the Services of other layers to fulfil the OpenAIRE-specific requirements and implement the OpenAIRE workflows.

Services

Software life-cycle

D-NET services

The D-NET services are shipped as web applications and deployed on the Tomcat application server (v7.0.52) on three distinct systems: dev, beta and production. To support the deployment process, the whole software codebase is versioned on SVN and the project is managed with Maven; artifacts are automatically built on a continuous integration system (Jenkins) and hosted on a dedicated Maven repository (Nexus), while webapp builds are made available via an HTTP server. The mentioned tools supporting the software life-cycle are maintained by CNR.

The D-NET services deployment is performed in subsequent stages:
  • The development infrastructure plays the role of a test bench for the integration of the software developed by the different institutions. It is maintained by CNR, runs mostly unreleased code and contains mock data or subsets of the data available on the production system.
  • The beta infrastructure runs only released code. It is maintained by ICM and is the final integration stage, where all the system workflows are tested on real data (not necessarily the same data as on the production system) before they are made available to the production system. Although the software running on the beta system is not yet production-ready, its portal is publicly accessible in order to showcase new features and data.
  • The production infrastructure is maintained by ICM and runs only code that was tested on the beta system.

D-NET backend services are packaged in four different web applications, each of them running on a dedicated Tomcat instance.

Information Inference Service

The Information Inference Service (IIS) is an Oozie application providing a set of inference algorithms deployed on a Hadoop cluster. Each new IIS release is deployed into a unique, dedicated HDFS location. This way, different IIS versions can be used by different D-NET environments: beta and production.
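
A minimal sketch of this per-release deployment idea, assuming the standard hdfs and oozie command-line tools are available; the HDFS layout, property file and URL below are hypothetical and are not the actual IIS deployment scripts.

    import subprocess

    def deploy_iis_release(version: str, local_app_dir: str) -> str:
        """Copy an IIS Oozie application into a version-specific HDFS location."""
        hdfs_app_path = f"/user/iis/releases/{version}"   # hypothetical layout
        subprocess.run(["hdfs", "dfs", "-mkdir", "-p", hdfs_app_path], check=True)
        subprocess.run(["hdfs", "dfs", "-put", "-f", local_app_dir, hdfs_app_path], check=True)
        return hdfs_app_path

    def submit_iis_job(oozie_url: str, properties_file: str) -> None:
        """Submit the workflow to Oozie; the properties file points at the chosen release path."""
        subprocess.run(["oozie", "job", "-oozie", oozie_url,
                        "-config", properties_file, "-run"], check=True)

    if __name__ == "__main__":
        path = deploy_iis_release("1.0.0", "./iis-app")
        submit_iis_job("http://oozie.example.org:11000/oozie", "job.properties")

Because each release lives under its own HDFS path, the beta and production environments can simply point their job properties at different versions.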

Formerly, IIS was deployed on a CDH4 cluster. On October 1st, 2015 a dedicated cdh5 branch was created in which new Spark modules were introduced and existing modules were optimized. On November 20, 2016, for the first time, all inferences in the production infrastructure were generated by IIS deployed on the new CDH5 OCEAN cluster. Both improved stability and a major performance increase were observed.

Currently, IIS depends heavily on the Hadoop (CDH5) ecosystem, in particular Oozie, Spark and HDFS.

A Continuous Integration system hosted at ICM allows both building IIS artifacts and running the full suite of integration tests, which are triggered nightly.

More details on service versioning and deployment can be found on the IIS versioning and deployment wiki page.

Portal

The OpenAIRE portal is hosted at ICM. It uses Joomla! 3.6.2, a free dynamic portal engine and content management system (CMS).

Joomla! depends on the following upstream applications:

  • Apache 2.4.7
  • PHP 5.5.9
  • MySQL 5.5.53
  • OpenLDAP 2.4.31

Zenodo

The Zenodo repository runs an instance of the Invenio software, developed by CERN.

The repository is deployed as a production system (https://zenodo.org) and a QA system (https://sandbox.zenodo.org). In total, the two systems run on 30 VMs hosted in CERN's OpenStack infrastructure. All machines are configured using Puppet and run on top of CERN CentOS 7.2.

Zenodo/Invenio depends on the following applications (a minimal sketch of how some of them fit together follows the list):
- HAProxy for load balancing
- Nginx for serving static content and proxying request to application server
- UWSGI as application server for Zenodo/Invenio application
- Redis for memory cache
- RabbitMQ as message broker
- Celery as distributed task queue
- PostgreSQL as database
- Elasticsearch as search engine
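
As a rough illustration of how parts of this stack fit together, the following Python sketch defines an asynchronous task with Celery, using RabbitMQ as the broker and Redis as the result backend. The task, module name and connection URLs are hypothetical and do not come from the Zenodo codebase.

    from celery import Celery

    app = Celery(
        "zenodo_like_tasks",
        broker="amqp://guest:guest@localhost:5672//",  # RabbitMQ message broker
        backend="redis://localhost:6379/0",            # Redis stores task results
    )

    @app.task
    def index_record(record_id: str) -> str:
        """Placeholder for an asynchronous job, e.g. (re)indexing a record in Elasticsearch."""
        return f"indexed {record_id}"

    # A web request enqueues the task and returns immediately; a Celery worker
    # started with `celery -A zenodo_like_tasks worker` picks it up from RabbitMQ:
    # index_record.delay("zenodo-123456")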

The deployment process is described at http://zenodo.readthedocs.io/projectlifecycle.html#release-process

See https://github.com/zenodo/zenodo/commits/production for changes to Zenodo production system (does not include changes to Invenio modules).

The Zenodo repository was relaunched on Invenio v3 alpha on September 12, 2016.

Infrastructure services

The OpenAIRE2020 services are a continuation and incremental extension of the services that resulted from the OpenAIRE+ project, so they are still hosted on the same machines. More details are available in the OpenAIRE+ WP5 Maintenance Report.

Hadoop clusters

DM Hadoop cluster

CDH version: cdh4.3.1

IIS Hadoop cluster

There were two IIS clusters deployed:
  • the old CDH4 IIS cluster, version cdh4.3.1, in operation until December 9, 2016
  • the new CDH5 IIS cluster, deployed on March 22, 2016 in the OCEAN infrastructure, which supports MRv2 on YARN and Spark
CDH5 cluster version history:
  • 5.5.2 deployment on March 22, 2016
  • 5.5.2 -> 5.7.5 upgrade on November 30, 2016
  • 5.7.5 -> 5.9.0 upgrade on December 8, 2016

Databases

database type | usage | version
PostgreSQL | statistics | 9.1.23
PostgreSQL | D-NET services | 9.3.14
MongoDB | D-NET services | 3.2.6
Virtuoso | LOD | 7.2

Piwik analytics platform

Currently deployed Piwik version: 2.17.1, since December 6, 2016.

ownCloud filesync platform

Deployed at https://box.openaire.eu. Current version: 8.2.7, since October 19, 2016.

Services monitoring

The following technologies are in use at ICM for monitoring purposes:
  • Zabbix - enterprise-class open source monitoring solution for networks, servers and applications
  • Monit - an open source tool for monitoring and managing processes, files, directories and file systems
  • Pingdom - website performance and availability monitoring
  • Uptime - website availability monitoring from worldwide locations
Their main purpose is to cover the following groups of monitoring activities:
  • Availability
    • Zabbix - monitoring local network health
    • Pingdom, Uptime - services probing
  • Watchdogs
    • Monit
    • Zabbix
      Both are used for a large variety of monitoring aspects such as:
    • disk space
    • cpu usage
    • network bandwidth usage
    • services monitoring
    • automatic services restarting
      among many others.
  • Alerts
    • Monit
    • Zabbix
      Both have appropriate thresholds set and send e-mails when those thresholds are exceeded.

All these monitoring technologies are used for monitoring all production machines and services. The most crucial machines and services of the BETA infrastructure are also monitored.
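
The threshold-and-alert pattern these tools implement can be illustrated with a small Python sketch; it is not the actual Monit or Zabbix configuration, and the threshold and e-mail addresses are made up.

    import shutil
    import smtplib
    from email.message import EmailMessage

    DISK_USAGE_ALERT_THRESHOLD = 0.90  # alert when the filesystem is more than 90% full

    def disk_usage_ratio(path: str = "/") -> float:
        """Return the fraction of used space on the filesystem containing `path`."""
        usage = shutil.disk_usage(path)
        return usage.used / usage.total

    def send_alert(subject: str, body: str) -> None:
        """Send a plain-text alert e-mail through a local SMTP relay (hypothetical setup)."""
        msg = EmailMessage()
        msg["From"] = "monitoring@example.org"
        msg["To"] = "admins@example.org"
        msg["Subject"] = subject
        msg.set_content(body)
        with smtplib.SMTP("localhost") as smtp:
            smtp.send_message(msg)

    if __name__ == "__main__":
        ratio = disk_usage_ratio("/")
        if ratio > DISK_USAGE_ALERT_THRESHOLD:
            send_alert("Disk space alert", f"Root filesystem is {ratio:.0%} full")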

Hadoop cluster monitoring

The CDH5 IIS Hadoop cluster was configured and set up using a Cloudera Manager instance deployed at the ICM OCEAN data center. Cloudera Manager provides many features for monitoring the health and performance of the cluster components (hosts and service daemons) as well as the performance and resource demands of the jobs running on the cluster. Apart from regular monitoring of various characteristics of the whole system (e.g. CPU utilization, network/disk/HDFS I/O), it is also capable of sending alerts whenever any host in the cluster goes down or when resources are severely exhausted.

Architectural changes

A detailed list of all administrative operations undertaken at ICM is available on the changelog for server administration operations wiki page.

Introducing the CDH5 IIS cluster hosted in the ICM OCEAN infrastructure

Slave node specification:
  • Huawei RH1288 V3
  • CPU: 2x Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz (24 cores, 48 threads)
  • RAM: 128GB
  • HDD: 4x SATA 6TB 7.2K RPM (HDFS)
Cluster summary (16 slaves):
  • CPU: 384 cores, 768 threads
  • RAM: 2048GB
  • HDD: 384TB (HDFS)
YARN available resources:
  • vcores: 640
  • memory: 1.44TB
  • HDFS: 344TB
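
The cluster summary above follows directly from the per-node specification; a small Python sanity check of that arithmetic (the YARN figures are separately configured limits, not derived from it):

    # Sanity check of the cluster totals listed above (16 slave nodes).
    SLAVES = 16
    CORES_PER_NODE, THREADS_PER_NODE = 24, 48
    RAM_GB_PER_NODE = 128
    HDFS_TB_PER_NODE = 4 * 6  # four 6 TB SATA drives per node

    print("cores:  ", SLAVES * CORES_PER_NODE)    # 384
    print("threads:", SLAVES * THREADS_PER_NODE)  # 768
    print("RAM GB: ", SLAVES * RAM_GB_PER_NODE)   # 2048
    print("HDFS TB:", SLAVES * HDFS_TB_PER_NODE)  # 384 raw (344 TB available to HDFS)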

Incorporating resources from the old CDH4 IIS cluster into the existing DM CDH4 cluster

This task became possible after the old IIS CDH4 cluster was shut down on December 9, 2016.

Deploying D-NET PostgreSQL and MongoDB databases on separate machines

Separating the openaire-services database instances into dedicated ones (since June 27, 2016):
  • openaire-services-postgresql
  • openaire-services-mongodb

Updating Zenodo repository infrastructure at CERN

Several architectural changes were introduced in CERN's infrastructure:
  • changed storage backend from OpenAFS to CERN EOS (18 PB disk cluster) for better scalability
  • changed from a self-managed MySQL database to a PostgreSQL database managed by the CERN Database Team
  • deployment of Elasticsearch clusters (6 VMs)
  • migration from SLC to CentOS 7 on all 30 VMs

System downtimes

@ICM

  • [unplanned] November 13, 2017, 8 hours. The main openaire.eu site became unavailable after two hard drives failed one after another within an 8-hour period overnight (#3284).
  • [planned] November 14, 2016, 2 hours. Dealing with the Linux Dirty COW vulnerability: kernel upgrade and OpenAIRE services restart (#2423).

@CERN

  • [unplanned] September 16, 2016, 3 hours. Preparation of a new load balancer caused an automatic update of the CERN outer-perimeter firewall, which automatically closed access to the operational load balancers.
  • [planned] September 12, 2016, 8 hours. Complete migration from the old infrastructure to the new infrastructure.
  • minor incidents until September 12, 2016 due to overload of the legacy system
