D6.4 OpenAIRE Maintenance Report (v1, 9th of December 2016)¶
Overview¶
This document contains information about the status of OpenAIRE2020 workflows, services and content. It provides details about the deployment and the history of major modifications of the system hosted at ICM, Poland, and of the Zenodo repository hosted at CERN, Switzerland. The official maintenance of the OpenAIRE2020 services began on January 1st, 2015, when the project started.
The Information Space Data section focuses on the OpenAIRE data model, including statistics for the main entities, inferred relations, funders, available data sources, and records stored in the Zenodo repository.
The OpenAIRE workflows section describes in depth all subsequent phases of data processing:
- aggregation
- Information Space population
- deduplication
- inference generation
- Information Space monitoring and publishing
The last section gives insight into the software life-cycle of all major services involved in the project, the whole infrastructure, and the most important architectural changes.
Information Space Data¶
The OpenAIRE Core Data Model comprises the following interlinked entities:
- results (in the form of publications, datasets and patents)
- persons
- organisations
- funders
- funding streams
- projects
- data sources (in the form of institutional, thematic, and data repositories, Current Research Information Systems (CRIS), thematic and national aggregators, publication catalogues and entity registries)
Content Status¶
The first table presents the numbers of the main Information Space entities and full texts:
data type | count |
publication metadata | 17460368 |
dataset metadata | 3226586 |
projects | 653268 |
organizations | 64591 |
authors | 16188328 |
EuropePMC XML fulltext | 1574358 |
PDF fulltext | 2227458 |
The second table provides numbers related to the inferences generated by IIS (the Information Inference Service):
inference type | count |
dataset references | 88610 |
project references | 351302 |
software URL references | 21481 |
protein DB references | 196462 |
research initiatives references | 7294 |
documents classified | 2405869 |
similar documents found | 164602477 |
citations matched by reference text | 11390293 |
citations matched by id | 3929053 |
Both tables are based on the IIS report generated on November 20, 2016 for the OpenAIRE production infrastructure.
Funders¶
The table below presents the number of publications funded by different funders:
funder/funding | count |
EC - FP7 | 195151 |
EC - H2020 | 1782 |
FCT | 23849 |
WT | 16108 |
NSF (US) | 119550 |
NHMRC (AU) | 5785 |
ARC (AU) | 8053 |
The presented numbers are based on the production infrastructure state as of December 14, 2016.
Data Sources¶
Data sources in OpenAIRE are divided into:
- data providers (~805), from which OpenAIRE continuously collects information; these need to comply with the OpenAIRE Guidelines
- data sources (~6562), whose metadata are collected from compatible data providers (usually aggregators) and represented in the OpenAIRE portal as links to the publications and datasets (downloadFrom)
The table below presents the number of data sources grouped by OpenAIRE compatibility level and data source type:
data source type | compatibility | count |
institutional repository | openaire-3.0 | 122 |
institutional repository | openaire-2.0+ | 103 |
institutional repository | openaire-2.0 | 10 |
institutional repository | openaire-basic | 252 |
thematic repository | openaire-3.0 | 5 |
thematic repository | openaire-2.0+ | 9 |
thematic repository | openaire-basic | 22 |
other publication repository | openaire-3.0 | 8 |
other publication repository | openaire-2.0+ | 1 |
other publication repository | openaire-2.0 | 2 |
other publication repository | openaire-basic | 34 |
data repository | openaire-data-2.0 | 11 |
publication repository aggregator | openaire-3.0 | 5 |
publication repository aggregator | openaire-2.0 | 1 |
publication repository aggregator | openaire-basic | 7 |
data repository aggregator | openaire-data | 1 |
journals | openaire-3.0 | 71 |
journals | openaire-2.0 | 2 |
journals | openaire-basic | 77 |
journal aggregator / publisher | openaire-3.0 | 4 |
journal aggregator / publisher | openaire-basic | 26 |
journal aggregator / publisher | proprietary | 1 |
publication catalogue | proprietary | 12 |
repository registries | proprietary | 2 |
funder databases | proprietary | 8 |
The numbers of data sources are based on the OpenAIRE index from December 15, 2016.
Zenodo Content Status¶
data type | count or size |
records | 96771 |
managed files | 181778 |
total size of files | 8 TB |
The presented numbers are based on the Zenodo repository status on December 14, 2016.
OpenAIRE workflows¶
The OpenAIRE aggregation system is based on the D-NET software toolkit. D-NET is a service-oriented framework specifically designed to support developers in constructing custom aggregative infrastructures in a cost-effective way. D-NET offers data management services capable of providing access to different kinds of external data sources, storing and processing information objects of any data model, converting them into common formats, and exposing information objects to third-party applications through a number of standard access APIs. Most importantly, D-NET offers infrastructure enabling services that facilitate the construction of domain-specific aggregative infrastructures by selecting and configuring the needed services and easily combining them to form autonomic data processing workflows.
The Enabling Layer contains the Services supporting the application framework. These provide functionalities such as Service registration, discovery, subscription and notification, as well as data transfer mechanisms through ResultSet Services. Most importantly, these Services are configured to orchestrate Services of the other layers to fulfil the OpenAIRE-specific requirements and implement the OpenAIRE workflows.
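To make the aggregation pattern concrete, below is a minimal, hypothetical sketch of a collect-transform-store chain of the kind such workflows orchestrate; all names and the record format are illustrative assumptions and do not reflect D-NET's actual service interfaces.

```python
# Hypothetical sketch of an aggregation workflow step chain: it mirrors the
# collect -> transform -> store pattern described above, not D-NET's real API.
from typing import Iterable


def collect(endpoint: str) -> Iterable[dict]:
    """Stand-in harvester: yields raw metadata records from a data source."""
    yield {"source": endpoint, "title": " An example record ", "format": "oai_dc"}


def transform(record: dict) -> dict:
    """Converts a harvested record into a common internal format."""
    return {"title": record["title"].strip(), "provenance": record["source"]}


def store(records: Iterable[dict]) -> int:
    """Persists transformed records; here it only counts them."""
    return sum(1 for _ in records)


# A workflow is an ordered composition of such steps over many data sources.
stored = store(transform(r) for r in collect("https://example.org/oai"))
print(stored)  # 1
```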
Services¶
Software life-cycle¶
D-NET services¶
The D-NET services are shipped as web applications and deployed on the Tomcat application server (v7.0.52) on three distinct systems: dev, beta, and production. To support the deployment process, all software artifacts are automatically built on a continuous integration system (Jenkins) and hosted on a dedicated Maven repository (Nexus), while webapp builds are made available via an HTTP server. The mentioned tools supporting the software life-cycle are maintained by CNR.
The D-NET services deployment is performed in subsequent stages:
- The development infrastructure plays the role of a test bench for the integration of the software developed by the different institutions. It is maintained by CNR, runs mostly unreleased code, and contains mock data or subsets of the data available on the production system.
- The beta infrastructure runs only released code. It is maintained by ICM and constitutes the final integration stage, where all system workflows are tested on real data (not necessarily the same data as on the production system) before making them available to the production system. Although the software running on the beta system is not yet production ready, its portal is publicly accessible in order to showcase new features and data.
- The production infrastructure is maintained by ICM and runs only code that was tested on the beta system.
The D-NET backend services are packaged as four different web applications, each of them running on a dedicated Tomcat instance.
Information Inference Service¶
The Information Inference Service (IIS) is an Oozie application providing a set of inference algorithms deployed on a Hadoop cluster. Each new IIS release is deployed into a unique, dedicated HDFS location. This way different IIS versions can be used by different D-NET environments: beta and production.
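As a rough illustration of this per-release layout, the snippet below builds version-specific HDFS paths; the base directory and the version strings are hypothetical and not the actual production locations.

```python
# Hypothetical illustration of keeping each IIS release under its own HDFS
# directory, so beta and production can point at different versions.
def iis_app_path(version: str, base: str = "/lib/iis") -> str:
    """Returns the (made-up) HDFS location of a given IIS release."""
    return f"{base}/{version}/oozie_app"


beta_app = iis_app_path("1.1.0-SNAPSHOT")  # beta tries a newer release
prod_app = iis_app_path("1.0.0")           # production stays on the tested one
print(beta_app, prod_app)
```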
Formerly, IIS was deployed on a CDH4 cluster. On October 1st, 2015 a dedicated cdh5 branch was created, in which new Spark modules were introduced and existing modules were optimized. On November 20, 2016, for the first time, all inferences in the production infrastructure were generated by IIS deployed on the new CDH5 OCEAN cluster. Both improved stability and a major performance increase were observed.
Currently IIS heavily depends on the following technologies:
- Java Platform Standard Edition (Java SE) 8
- Apache Avro: HDFS data serialization
- Apache Hadoop: framework for distributed computing
- Apache Oozie: workflow definitions
- Apache Pig: transformers, document similarity
- Apache Spark: affiliation and citation matching algorithms (a simplified sketch follows this list)
- CERMINE: metadata and text extraction from PDF documents
- Madis: document classification and reference extraction algorithms
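As a rough illustration of the kind of processing performed by the Spark-based modules, the snippet below shows a heavily simplified citation-matching join in PySpark; the schema, the sample data and the DOI-extraction rule are assumptions made for the example and are not the actual IIS algorithms.

```python
# Heavily simplified citation-matching sketch in PySpark: extract a DOI-like
# token from each reference string and join it against known publication DOIs.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("citation-matching-sketch").getOrCreate()

# Toy inputs: publications with known DOIs and raw reference strings.
publications = spark.createDataFrame(
    [("pub1", "10.1000/xyz123"), ("pub2", "10.1000/abc456")],
    ["pub_id", "doi"])
references = spark.createDataFrame(
    [("pub2", "See Smith et al., doi:10.1000/xyz123 for details.")],
    ["citing_pub_id", "reference_text"])

# Pull out the first DOI-looking token and normalize the case on both sides.
doi_pattern = r"(10\.\d{4,9}/[^\s,;]+)"
refs = references.withColumn(
    "cited_doi", F.lower(F.regexp_extract("reference_text", doi_pattern, 1)))
pubs = publications.withColumn("doi", F.lower(F.col("doi")))

# Each match becomes a citation link between the citing and the cited record.
citations = refs.join(pubs, refs.cited_doi == pubs.doi, "inner") \
    .select("citing_pub_id", F.col("pub_id").alias("cited_pub_id"))
citations.show()
```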
The Continuous Integration system hosted at ICM allows both building IIS artifacts and running the full suite of integration tests, which are triggered nightly.
More details on service versioning and deployment can be found on the IIS versioning and deployment wiki page.
Portal¶
The OpenAIRE portal is hosted at ICM. It uses Joomla! 3.6.2, a free dynamic portal engine and content management system (CMS).
Joomla! depends on the following upstream applications:
- Apache 2.4.7
- PHP 5.5.9
- MySQL 5.5.53
- OpenLDAP 2.4.31
Zenodo¶
The Zenodo repository runs an instance of the Invenio software, developed by CERN.
The repository is deployed as a production system (https://zenodo.org) and a QA system (https://sandbox.zenodo.org). In total, the two systems run on 30 VMs hosted in CERN's OpenStack infrastructure. All machines are configured using Puppet and run on top of CERN CentOS 7.2.
Zenodo/Invenio depends on the following applications:
- HAProxy for load balancing
- Nginx for serving static content and proxying requests to the application server
- uWSGI as the application server for the Zenodo/Invenio application
- Redis for in-memory caching
- RabbitMQ as message broker
- Celery as distributed task queue (see the minimal sketch after this list)
- PostgreSQL as database
- Elasticsearch as search engine
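For illustration, the minimal Celery sketch below shows how the RabbitMQ broker and the Redis cache listed above fit together in the distributed task queue pattern; the task, the module name and the connection URLs are hypothetical and not taken from Zenodo's code base.

```python
# Minimal sketch of the task-queue pattern: RabbitMQ delivers tasks to Celery
# workers, Redis stores the results. Names and URLs are made up for the example.
from celery import Celery

app = Celery(
    "zenodo_sketch",
    broker="amqp://guest@localhost//",   # RabbitMQ as message broker
    backend="redis://localhost:6379/0",  # Redis as result/cache store
)


@app.task
def reindex_record(record_id: str) -> str:
    """Placeholder background job, e.g. pushing a record to a search index."""
    return f"indexed {record_id}"


# A web request can enqueue the job and return immediately; a worker started
# with `celery -A zenodo_sketch worker` picks it up asynchronously:
# reindex_record.delay("12345")
```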
The deployment process is described at http://zenodo.readthedocs.io/projectlifecycle.html#release-process
See https://github.com/zenodo/zenodo/commits/production for changes to the Zenodo production system (this does not include changes to Invenio modules).
The Zenodo repository was relaunched on Invenio v3 alpha on September 12, 2016.
Infrastructure services¶
Because the OpenAIRE2020 services are a continuation and incremental extension of the services that resulted from the OpenAIRE+ project, they are still hosted on the same machines. More details are available in the OpenAIRE+ WP5 Maintenance Report.
Hadoop clusters¶
DM hadoop cluster¶
CDH version: cdh4.3.1
IIS hadoop cluster¶
There were two IIS clusters deployed:
- the old CDH4 IIS cluster, version cdh4.3.1, in operation until December 9, 2016
- the new CDH5 IIS cluster, deployed on March 22, 2016 in the OCEAN infrastructure, supporting MRv2 on YARN and Spark
  - 5.5.2 deployment on March 22, 2016
  - 5.5.2 -> 5.7.5 upgrade on November 30, 2016
  - 5.7.5 -> 5.9.0 upgrade on December 8, 2016
Databases¶
database type | usage | version |
PostgreSQL | statistics | 9.1.23 |
PostgreSQL | D-NET services | 9.3.14 |
MongoDB | D-NET services | 3.2.6 |
Virtuoso | LOD | 7.2 |
Piwik analytics platform¶
Currently deployed Piwik version: 2.17.1, since December 6, 2016.
ownCloud filesync platform¶
Deployed at https://box.openaire.eu. Current version: 8.2.7, since October 19, 2016.
Architectural changes¶
A detailed list of all administrative operations undertaken at ICM is available on the changelog for server administration operations wiki page.
Introducing CDH5 IIS cluster hosted in ICM OCEAN infrastructure¶
Slave node specification (Huawei RH1288 V3):
- CPU: 2x Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz (24 cores, 48 threads)
- RAM: 128GB
- HDD: 4x SATA 6TB 7.2K RPM (HDFS)
Whole cluster raw resources:
- CPU: 384 cores, 768 threads
- RAM: 2048GB
- HDD: 384TB (HDFS)
Resources available to YARN and HDFS:
- vcores: 640
- memory: 1.44TB
- HDFS: 344TB
Incorporating resources from old CDH4 IIS cluster into existing DM CDH4 cluster¶
This task became possible after the old IIS CDH4 cluster was shut down, which happened on December 9, 2016.
Deploying D-NET PostgreSQL and MongoDB databases on separate machines¶
Separating the openaire-services database instances into dedicated ones (since June 27, 2016):
- openaire-services-postgresql
- openaire-services-mongodb
Updating Zenodo repository infrastructure at CERN¶
Several architectural changes were introduced in CERN's infrastructure:
- changed storage backend from OpenAFS to CERN EOS (18 PB disk cluster) for better scalability
- changed from a self-managed MySQL database to a PostgreSQL database managed by the CERN Database Team
- deployment of Elasticsearch clusters (6 VMs)
- migration from SLC to CentOS 7 on all 30 VMs
System downtimes¶
@ICM¶
- [planned] November 14, 2016, 2 hours. #2423: dealing with the Linux Dirty COW vulnerability; kernel upgrade and OpenAIRE services restart.
@CERN¶
- [unplanned] September 16, 2016, 3 hours. Preparation of a new load balancer caused an automatic update of CERN's outer-perimeter firewall, which automatically closed access to the operational load balancers.
- [planned] September 12, 2016, 8 hours. Complete migration from the old infrastructure to the new infrastructure.
- minor incidents until September 12, 2016 due to overload of the legacy system