
Version 78 (Marek Horst, 28/06/2018 12:08) → Version 79/83 (Marek Horst, 28/06/2018 12:20)

h1. D6.4 OpenAIRE Maintenance Report (v2, 28th of June 2018)


h2. Overview

This document reports on the status of OpenAIRE2020 workflows, services and content. It provides details about the deployment and the history of major modifications of the system hosted at ICM, Poland, and of the Zenodo repository hosted at CERN, Switzerland. The official maintenance of the OpenAIRE2020 services began on January 1st, 2015, when the project started.

The _Information Space Data_ section focuses on the OpenAIRE data model, including statistics for the main entities, inferred relations, funders, available data sources and the records stored in the Zenodo repository.

The _OpenAIRE workflows_ section describes in depth all the subsequent phases of data processing:
* aggregation
* Information Space population
* deduplication
* inference generation
* Information Space monitoring and publishing

The last section gives insight into the software life-cycle of all major services involved in the project, the whole infrastructure and the most important architectural changes.

h2. Information Space Data

The OpenAIRE [[Core Data Model]] comprises the following interlinked entities:
* results (in the form of publications, datasets and patents)
* -persons- (entity removed in January 2018)
* organisations
* funders
* funding streams
* projects
* data sources (in form of institutional, thematic, and data repositories, Current Research Information Systems (CRIS), thematic and national aggregators, publication catalogues and entity registries)

h3. Content Status

The first table presents counts for the main Information Space entities and fulltexts:

|*data type*|*count*|
|publication metadata|17460368|
|dataset metadata|3226586|
|EuropePMC XML fulltext|1574358|
|PDF fulltext|2227458|

The second table provides counts of the inferences generated by IIS:

|*inference type*|*count*|
|dataset references|88610|
|project references|351302|
|software urls references|21481|
|protein db references|196462|
|research initiatives references|7294|
|documents classified|2405869|
|similar documents found|164602477|
|citations matched by reference text|11390293|
|citations matched by id|3929053|

Both tables are based on an IIS report generated on November 20, 2016 for the OpenAIRE production infrastructure.

h3. Funders

The table below presents the number of publications funded by different funders:

|*funder*|*publication count*|
|EC - FP7|195151|
|EC - H2020|1782|
|NSF (US)|119550|
|NHMRC (AU)|5785|
|ARC (AU)|8053|

The presented numbers reflect the production infrastructure state as of December 14, 2016.

h3. Data Sources

Data sources in OpenAIRE are divided into:
* data providers (~805) from which OpenAIRE continuously collects information; they need to comply with the "OpenAIRE Guidelines":https://guidelines.openaire.eu
* data sources (~6562) whose metadata are collected from compatible data providers (usually aggregators) and represented in the OpenAIRE portal as links to the publications and datasets (_downloadFrom_).

The table below presents the number of data sources grouped by OpenAIRE compatibility level and data source type:

|*data source type*|*compatibility*|*count*|
|institutional repository |openaire-3.0 |122 |
| |openaire-2.0+ |103 |
| |openaire-2.0 |10 |
| |openaire-basic |252 |
|thematic repository |openaire-3.0 |5 |
| |openaire-2.0+ |9 |
| |openaire-basic |22 |
|other publication repository |openaire-3.0 |8 |
| |openaire-2.0+ |1 |
| |openaire-2.0 |2 |
| |openaire-basic |34 |
|data repository |openaire-data-2.0 |11 |
|publication repository aggregator |openaire-3.0 |5 |
| |openaire-2.0 |1 |
| |openaire-basic |7 |
|data repository aggregator |openaire-data |1 |
|journals |openaire-3.0 |71 |
| |openaire-2.0 |2 |
| |openaire-basic |77 |
|journal aggregator / publisher |openaire-3.0 |4 |
| |openaire-basic |26 |
| |proprietary |1 |
|publication catalogue |proprietary |12 |
|repository registries|proprietary|2 |
|funder databases |proprietary |8 |

The number of data sources is based on the OpenAIRE index from December 15, 2016.
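As an illustration, the per-type totals implied by the table can be recomputed with a short Python sketch (the tuples below reproduce a subset of the rows above):

```python
from collections import defaultdict

# (data source type, compatibility level, count) -- rows copied from the table above
ROWS = [
    ("institutional repository", "openaire-3.0", 122),
    ("institutional repository", "openaire-2.0+", 103),
    ("institutional repository", "openaire-2.0", 10),
    ("institutional repository", "openaire-basic", 252),
    ("journals", "openaire-3.0", 71),
    ("journals", "openaire-2.0", 2),
    ("journals", "openaire-basic", 77),
]

def totals_by_type(rows):
    """Aggregate data source counts per data source type."""
    acc = defaultdict(int)
    for source_type, _compatibility, count in rows:
        acc[source_type] += count
    return dict(acc)

print(totals_by_type(ROWS))
# institutional repositories: 122 + 103 + 10 + 252 = 487
```

The same aggregation over the full table yields the per-type totals for all data source types.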

h3. Zenodo Content Status

|*data type*|*count or size*|
|managed files|1.2M|
|files total size|38TB|

The presented numbers reflect the Zenodo repository status as of March 2018.

h2. [[OpenAIRE workflows]]

The OpenAIRE aggregation system is based on the "D-NET software toolkit":http://www.d-net.research-infrastructures.eu/. D-NET is a service-oriented framework specifically designed to support developers in constructing custom aggregative infrastructures in a cost-effective way. D-NET offers data management services capable of providing access to different kinds of external data sources, storing and processing information objects of any data model, converting them into common formats, and exposing information objects to third-party applications through a number of standard access APIs. Most importantly, D-NET offers infrastructure-enabling services that facilitate the construction of domain-specific aggregative infrastructures by selecting and configuring the needed services and easily combining them to form autonomic data processing workflows.

The Enabling Layer contains the services supporting the application framework. These provide functionalities such as service registration, discovery, subscription and notification, as well as data transfer mechanisms through ResultSet Services. Most importantly, these services are configured to orchestrate services of other layers to fulfil the OpenAIRE-specific requirements and implement the *[[OpenAIRE workflows]]*.

h2. Services

h3. Software life-cycle

h4. D-NET services

The D-NET services are shipped as web applications and deployed on the Tomcat application server (v7.0.52) on three distinct systems: dev, beta and production. To support the deployment process, the whole codebase is versioned on "SVN":https://svn-public.driver.research-infrastructures.eu/driver/dnet45 and the project is managed with Maven, whose artifacts are automatically built on a continuous integration system ("Jenkins":https://jenkins-dnet.d4science.org) and hosted on a dedicated Maven repository ("Nexus":http://maven.research-infrastructures.eu/nexus), while webapp builds are made available via an "HTTP server":http://ppa.research-infrastructures.eu/ci_upload. The mentioned tools supporting the software life-cycle are maintained by CNR.

The D-NET services deployment is performed in subsequent stages:
* The development infrastructure plays the role of a test bench for the integration of the software developed by different institutions. It is maintained by CNR, runs mostly unreleased code and contains mock data or subsets of the data available on the production system.
* The beta infrastructure runs only released code. It is maintained by ICM and consists of the final integration stage where all the system workflows are tested on the real data (not necessarily the same data as the production system) before making them available to the production system. Although the software running on the beta system is not yet production ready, its portal is publicly accessible in order to showcase new features and data.
* The production infrastructure is maintained by ICM and runs only code that was tested on the beta system.

D-NET backend services are packed into four different web applications, each of them running on a dedicated Tomcat instance.

h4. Information Inference Service

The "Information Inference Service":https://github.com/openaire/iis (IIS) is an Oozie application providing a set of inference algorithms deployed on a Hadoop cluster. Each new IIS release is deployed into a unique, dedicated HDFS location. This way different IIS versions can be utilized by the different D-NET environments: beta and production.

Formerly IIS was deployed on a CDH4 cluster. On October 1st, 2015 a dedicated "cdh5 branch":https://github.com/openaire/iis/commits/cdh5 was created, in which new Spark modules were introduced and existing modules were optimized. On November 20, 2016, for the first time, all inferences in the production infrastructure were generated by IIS deployed on the new CDH5 OCEAN cluster, with noticeably better stability and a major performance increase.
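The per-release deployment scheme described above can be sketched as follows; the base path and the version strings are illustrative assumptions, not the actual HDFS layout:

```python
# Sketch of deploying each IIS release into a unique, dedicated HDFS location.
# The base path and naming convention are hypothetical illustrations.
def iis_deployment_path(version, base="/iis/releases"):
    """Build a unique HDFS location for a given IIS release."""
    return f"{base}/iis-{version}"

# Each D-NET environment simply points at the release it should run
# (version numbers below are made up for the example):
environments = {
    "beta": iis_deployment_path("1.2.0-beta"),
    "production": iis_deployment_path("1.1.4"),
}
print(environments["production"])
```

Because each release lives under its own path, the beta and production environments can run different IIS versions side by side without interfering with each other.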

Currently IIS heavily depends on the following technologies:
* "Java Platform Standard Edition (Java SE) 8":http://docs.oracle.com/javase/8/docs/
* "Apache Avro":https://avro.apache.org/ - HDFS data serialization
* "Apache Hadoop":http://hadoop.apache.org/ - framework for distributed computing
* "Apache Oozie":http://oozie.apache.org/ - workflow definitions
* "Apache Pig":https://pig.apache.org - transformers, document similarity
* "Apache Spark":http://spark.apache.org - affiliation and citation matching algorithms
* "Kryo":https://github.com/EsotericSoftware/kryo - serialization framework
* "Cermine":https://github.com/CeON/CERMINE - metadata and text extraction from PDF documents
* "Madis":https://github.com/madgik/madis - document classification and reference extraction algorithms

The "Continuous Integration System":http://ci.ceon.pl/view/IIS/ hosted at ICM allows both building IIS artifacts and running the full suite of integration tests, which are triggered nightly.

More details on service versioning and deployment can be found on "IIS versioning and deployment":https://issue.openaire.research-infrastructures.eu/projects/openaire/wiki/IIS_versioning_and_deployment wiki page.

h4. Portal

The OpenAIRE portal is hosted at ICM. It uses Joomla! 3.6.2, a free dynamic portal engine and content management system (CMS).

Joomla! depends on the following upstream applications:

* Apache 2.4.7
* PHP 5.5.9
* MySQL 5.5.53
* OpenLDAP 2.4.31

h4. Zenodo

The Zenodo repository runs an instance of the Invenio software, developed by CERN.

The repository is deployed as a production system (https://zenodo.org) and a QA system (https://sandbox.zenodo.org). In total the two systems run on 30 VMs hosted in CERN's OpenStack infrastructure. All machines are configured using Puppet and run on top of CERN CentOS 7.2.

Zenodo/Invenio depends on the following applications:
* HAProxy for load balancing
* Nginx for serving static content and proxying requests to the application server
* uWSGI as the application server for the Zenodo/Invenio application
* Redis as an in-memory cache
* RabbitMQ as the message broker
* Celery as a distributed task queue
* PostgreSQL as the database
* Elasticsearch as the search engine
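A minimal availability probe for some of the components listed above could look like the sketch below; the port numbers are the upstream defaults, and the actual Zenodo deployment may bind its services differently:

```python
import socket

# Default TCP ports of several stack components (upstream defaults, assumed;
# the real deployment may use other ports or hosts).
DEFAULT_PORTS = {
    "redis": 6379,
    "rabbitmq": 5672,
    "postgresql": 5432,
    "elasticsearch": 9200,
}

def is_reachable(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example usage: probe every component on a hypothetical host.
for name, port in DEFAULT_PORTS.items():
    status = "up" if is_reachable("127.0.0.1", port, timeout=0.5) else "down"
    print(f"{name}: {status}")
```

This is the same style of service probing that the monitoring tools described later in this report perform continuously.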

The deployment process is described at http://zenodo.readthedocs.io/projectlifecycle.html#release-process

See https://github.com/zenodo/zenodo/commits/production for changes to Zenodo production system (does not include changes to Invenio modules).

Zenodo repository was relaunched on Invenio v3 alpha on September 12, 2016.

h3. Infrastructure services

The OpenAIRE2020 services are a continuation and incremental extension of the services that resulted from the OpenAIRE+ project, so they are still hosted on the same machines. More details are available in the "OpenAIRE+ WP5 Maintenance Report":http://wiki.openaire.eu/xwiki/bin/view/OpenAIREplus%20Specific/WP5%20Maintenance%20Report.

h4. Hadoop clusters

h5. DM hadoop cluster

CDH version: @cdh4.3.1@

h5. IIS hadoop cluster

There were two IIS clusters deployed:
* old CDH4 IIS cluster, version @cdh4.3.1@, in operation until December 9, 2016
* new CDH5 IIS cluster deployed on March 22, 2016 in the OCEAN infrastructure; it supports MRv2 on YARN and Spark

CDH5 cluster version history:
* @5.5.2@ deployed on March 22, 2016
* @5.5.2 -> 5.7.5@ upgrade on November 30, 2016
* @5.7.5 -> 5.9.0@ upgrade on December 8, 2016

h4. Databases

|_database type_|_usage_|_version_|
| PostgreSQL | statistics | @9.1.23@ |
| PostgreSQL | D-NET services | @9.3.14@ |
| MongoDB | D-NET services | @3.2.6@ |
| Virtuoso | LOD | @7.2@ |

h4. Piwik analytics platform

Currently deployed Piwik version: @2.17.1@, since December 6, 2016.

h4. ownCloud filesync platform

Deployed at https://box.openaire.eu. Current version: @8.2.7@, since October 19, 2016.

h3. Services monitoring

The following technologies are in use at ICM for monitoring purposes:
* *Zabbix* - enterprise-class open source monitoring solution for networks, servers and applications
* *Monit* - open source tool for monitoring and managing processes, files, directories and file systems
* *Pingdom* - website performance and availability monitoring
* *Uptime* - website availability monitoring from worldwide locations

Their main purpose is to cover the following groups of monitoring activities:
* Availability
** Zabbix - monitoring local network health
** Pingdom, Uptime - services probing
* Watchdogs
** Monit and Zabbix, both used for a large variety of monitoring aspects such as disk space, CPU usage, network bandwidth usage, services monitoring and automatic service restarting, among many others
* Alerts
** Monit and Zabbix, both having appropriate thresholds set and sending e-mails when those thresholds are exceeded
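The threshold-based alerting performed by Monit and Zabbix can be sketched as follows; the metric names and threshold values are illustrative, not the production configuration:

```python
# Minimal sketch of threshold-based alerting: each metric has a configured
# threshold, and crossing it triggers an alert (in production, an e-mail).
# Thresholds below are hypothetical examples.
THRESHOLDS = {
    "disk_usage_percent": 90,
    "cpu_usage_percent": 95,
    "network_bandwidth_percent": 80,
}

def check_thresholds(metrics, thresholds=THRESHOLDS):
    """Return the names of metrics whose current value exceeds the threshold."""
    return [name for name, value in metrics.items()
            if name in thresholds and value > thresholds[name]]

alerts = check_thresholds({"disk_usage_percent": 93, "cpu_usage_percent": 40})
print(alerts)  # ['disk_usage_percent']
```

In the real tools each exceeded threshold would additionally trigger a notification or an automatic service restart, as described above.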

All these monitoring technologies are used to monitor all production machines and services. The most crucial machines and services of the BETA infrastructure are also monitored.

h4. Hadoop cluster monitoring

The CDH5 IIS Hadoop cluster was configured and set up using a Cloudera Manager instance deployed in the ICM OCEAN data center. CM provides many features for monitoring the health and performance of the cluster components (hosts and service daemons) as well as the performance and resource demands of the jobs running on the cluster. Apart from regular monitoring of various characteristics of the whole system (e.g. CPU utilization, network/disk/HDFS IO), it is also capable of sending alerts whenever any host in the cluster goes down or when resources are severely exhausted.

h3. Architectural changes

A detailed list of all administrative operations undertaken @ICM is available on the [[D64_Servers_Administration_Operations_Changelog|changelog for servers administration operations]] wiki page.

h4. Introducing CDH5 IIS cluster hosted in ICM OCEAN infrastructure

Slave node specification:
* Huawei RH1288 V3
* CPU: 2x Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz (24 cores, 48 threads)
* RAM: 128GB
* HDD: 4x SATA 6TB 7.2K RPM (HDFS)

Cluster summary (16 slaves):
* CPU: 384 cores, 768 threads
* RAM: 2048GB
* HDD: 384TB (HDFS)

YARN available resources:
* vcores: 640
* memory: 1.44TB
* HDFS: 344TB
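The cluster summary above follows directly from the per-node specification; a quick arithmetic check:

```python
# Recomputing the cluster summary from the single slave-node specification
# given above (16 slave nodes in total).
NODES = 16
CORES_PER_NODE = 24        # 2x Intel Xeon E5-2680 v3, 12 cores each
THREADS_PER_NODE = 48
RAM_GB_PER_NODE = 128
HDFS_TB_PER_NODE = 4 * 6   # 4x 6TB SATA drives per node

assert NODES * CORES_PER_NODE == 384      # CPU cores
assert NODES * THREADS_PER_NODE == 768    # CPU threads
assert NODES * RAM_GB_PER_NODE == 2048    # RAM in GB
assert NODES * HDFS_TB_PER_NODE == 384    # raw HDFS capacity in TB
```

Note that the YARN-available resources (640 vcores, 1.44TB memory, 344TB HDFS) are lower than these raw totals because part of each node's capacity is reserved for the operating system and cluster daemons.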

h4. Incorporating resources from old CDH4 IIS cluster into existing DM CDH4 cluster

This task became possible after the old IIS CDH4 cluster was shut down on December 9, 2016.

h4. Deploying D-NET PostgreSQL and MongoDB databases on separate machines

Separating @openaire-services@ database instances into dedicated ones (since June 27, 2016):
* @openaire-services-postgresql@
* @openaire-services-mongodb@

h4. Updating Zenodo repository infrastructure at CERN

Several architectural changes were introduced in CERN's infrastructure:
* changed the storage backend from OpenAFS to CERN EOS (18 PB disk cluster) for better scalability
* moved from a self-managed MySQL database to a PostgreSQL database managed by the CERN Database Team
* deployed Elasticsearch clusters (6 VMs)
* migrated from SLC to CentOS 7 on all 30 VMs

h3. System downtimes

h4. @ICM

* [unplanned] November 13, 2017, 8 hours. #3284 The main openaire.eu site became unavailable after two hard drives failed one after another within an 8-hour period overnight.
* [planned] November 14, 2016, 2 hours. #2423 Dealing with the Linux Dirty COW vulnerability: kernel upgrade and OpenAIRE services restart.

h4. @CERN

* [unplanned] September 16, 2016, 3 hours. Preparation of a new load balancer caused an automatic update of the CERN outer perimeter firewall, which automatically closed access to the operational load balancers.
* [planned] September 12, 2016, 8 hours. Complete migration from the old infrastructure to the new infrastructure.
* Minor incidents until September 12, 2016 due to overload of the legacy system.